Renato Golin via llvm-dev | 9 Oct 04:28 2015

LNT bug?

Anyone seeing this, too?

File "/home/renato.golin/buildslave/clang-cmake-aarch64-quick/test/lnt/lnt/server/ui/", line 18, in <module> from loremipsum import get_sentences
ImportError: No module named loremipsum


llvm::cl::parser subclasses final in 3.7?


I'm upgrading some code that uses LLVM from 3.6 to 3.7. It looks like the llvm::cl::parser subclasses are now final?

We had been doing:

struct MaxThreadsParser : public llvm::cl::parser<unsigned> {
  bool parse(llvm::cl::Option &O, llvm::StringRef ArgName, llvm::StringRef Arg,
             unsigned &Val);
};

But that's now causing:

In file included from /home/lak/my_svn/llvm-carte/llvm-3.7.0/tools/carte++/tools/ir2v/ThreadSupport.cpp:1:
/home/lak/my_svn/llvm-carte/llvm-3.7.0/tools/carte++/tools/ir2v/ThreadSupport.h:12:34: error:
      base 'parser' is marked 'final'
struct MaxThreadsParser : public llvm::cl::parser<unsigned> {

What's the new way to do this? It looks like the approach described in the documentation no longer works.
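
For reference, the parser is hooked up the way the custom-parser example in the documentation shows, roughly like this (a sketch; the option name and defaults are ours):

// Sketch of how the custom parser is attached to an option (our names):
static llvm::cl::opt<unsigned, false, MaxThreadsParser>
    MaxThreads("max-threads",
               llvm::cl::desc("Maximum number of threads to use"),
               llvm::cl::init(1));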


beta: cloud compiler bisection tool

I am happy to announce we are contributing one of our favorite internal tools: llvm bisect!  

First: I want to thank Daniel Dunbar for writing all the code for this tool, and Google for providing cloud
storage and bandwidth to host it!

We keep the compilers we build in the Green Dragon CI cluster, and now upload them to Google Cloud
Storage.  The llvmlab bisect tool takes those compilers and allows you to quickly bisect on a predicate.
Instead of building those compilers again, the tool downloads them from the cloud.  The tool allows you to
download and run compilers, or bisect over a range of commits with a predicate.  For example, to find where a
crash started:

$ llvmlab bisect bash -c "%(path)s/bin/clang -c crashy.c"

FAIL: clang-r219899-t2014-10-15_21-42-48-b809
FAIL: clang-r219778-t2014-10-15_06-18-17-b787
FAIL: clang-r219773-t2014-10-14_21-18-07-b779
FAIL: clang-r219746-t2014-10-14_17-01-07-b775
FAIL: clang-r219739-t2014-10-14_16-09-35-b771
FAIL: clang-r219729-t2014-10-14_15-21-29-b757
clang-r219719-t2014-10-14_14-46-50-b756: first working build
clang-r219729-t2014-10-14_15-21-29-b757: next failing build

Grab the most recent compiler from a build:

$ llvmlab fetch "clang-stage1-configure-RA_build"
downloaded root: clang-r249752-b13228.tar.gz
extracted path : clang-r249752-b13228

For this initial release, I am uploading the ~18000 compilers produced on Green Dragon, which span the last
8 months of builds. In addition, all new builds on Green Dragon upload their compilers right away.  These are
Darwin compilers, in release+asserts and release+LTO configurations.  More configurations will be made
available soon, including ASAN-ified compilers and release-branch compilers.

The tool is in the zorg repo, in the llvmbisect directory. You install it like a regular python program. 
There are extensive docs with many examples included.

I invite other bot owners to upload their compilers as well.  You will need the Google Storage tool
installed on your bot, as well as an account.  Contact me if you are interested, and I can help you get set up.

Happy bisecting!

Using AAResultsWrapperPass in ModulePass

Has anyone faced any issues with the recent changes made to AliasAnalysis?
The AliasAnalysis interface changed to AAResultsWrapperPass.
Previously I was able to use AliasAnalysis in a module pass; now I'm facing an error while using AAResultsWrapperPass:
ld: /home/ashutosh/I2DPromo/llvm/include/llvm/PassAnalysisSupport.h:230: AnalysisType& llvm::Pass::getAnalysisID(llvm::AnalysisID) const [with AnalysisType = llvm::AAResultsWrapperPass; llvm::AnalysisID = const void*]: Assertion `ResultPass && "getAnalysis*() called on an analysis that was not " "'required' by pass!"' failed.
clang: error: unable to execute command: Aborted (core dumped)
AAResultsWrapperPass is a FunctionPass. Is that the reason it can’t be used in a ModulePass?
It looks like I’m missing something; can anyone help me get AliasAnalysis in a ModulePass?
class myclass : public ModulePass {
  void getAnalysisUsage(AnalysisUsage &AU) const override {
    AU.addRequired<AAResultsWrapperPass>();
  }
  // ...
};

bool myclass::runOnModule(Module &M) {
  AA = &getAnalysis<AAResultsWrapperPass>().getAAResults();
  // ...
}
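
For what it's worth, the shape I'd expect to need instead is something like this (just a sketch, not verified): since AAResultsWrapperPass is a FunctionPass, query it per function from the module pass.

// Sketch (untested): still requires AU.addRequired<AAResultsWrapperPass>()
// in getAnalysisUsage(); uses the legacy pass manager's per-function
// getAnalysis overload so the function-level analysis runs on demand.
bool myclass::runOnModule(Module &M) {
  for (Function &F : M) {
    if (F.isDeclaration())
      continue;
    AAResults &AA = getAnalysis<AAResultsWrapperPass>(F).getAAResults();
    // ... use AA while processing F ...
  }
  return false;
}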

Adding a function attribute with an argument



I'm trying to add a function attribute to clang/llvm.

I need an attribute that carries an additional piece of information: a variable name.

e.g. I would like to add support for something like:


void foo() __attribute__((xyz(v))) {…}


such that at the IR stage I can tell that the function foo has the attribute xyz with an argument v that marks variable v (which is a variable inside foo) for my needs.


I was able to add a function attribute without an argument, but I don't understand how to parse/save/analyze the argument.
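
On the IR side, what I'm hoping to end up with is something like this (just a sketch -- I'm assuming the attribute would be lowered to a string function attribute of the form "xyz"="v", which may or may not be the right approach):

// Hypothetical IR-level query, assuming the clang attribute is lowered to a
// string function attribute "xyz"="v" on the llvm::Function F:
if (F.hasFnAttribute("xyz")) {
  llvm::StringRef Var = F.getFnAttribute("xyz").getValueAsString();
  // Var would name the variable inside foo that xyz marks.
}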


I would appreciate any ideas/examples/references.


Thanks a lot,




carr27 via llvm-dev | 8 Oct 03:44 2015

crash with vector store


I'm working on a pass where I would like to do something to the pointer 
operand of certain store instructions.  (The exact details are not 
relevant to my current problem).

My pass works fine without optimizations but my compiled programs crash 
with -O2.  I've noticed the difference is that with -O2 the bitcode 
contains vector stores.  The resulting -O2 optimized program crashes 
with a SIGSEV invalid address when I run it -- even when I do something 
that I don't think should change the store.

For example, I have two bitcode files: works [1] and broken [2].

The only differences in broken are:
1) the pointer operand of the vector store is bitcast to i8*
2) the bitcast value is passed to a function that returns the same pointer
3) the return value is bitcast back to the original type
4) the store's pointer operand is replaced with this final bitcast value 
(which shouldn't have changed the pointer at all) -- roughly as in the 
sketch below
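
In pass code, the rewrite looks roughly like this (my reconstruction from the steps above; IdentityFn stands for the pass-through function and SI for the vector StoreInst, so the names are illustrative):

// Sketch of the rewrite in "broken": IdentityFn is the i8* -> i8* function
// that returns its argument unchanged, SI is the vector StoreInst.
IRBuilder<> B(SI);
Value *Ptr  = SI->getPointerOperand();
Value *AsI8 = B.CreateBitCast(Ptr, B.getInt8PtrTy());      // 1) cast to i8*
Value *Ret  = B.CreateCall(IdentityFn, {AsI8});             // 2) pass through
Value *Back = B.CreateBitCast(Ret, Ptr->getType());         // 3) cast back
SI->setOperand(StoreInst::getPointerOperandIndex(), Back);  // 4) replace operand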

Can anyone give me a pointer as to what might be going on?

The core of my problem is that I want to be able to manipulate the 
pointer operand of the vector store.  But if I just pass it through a 
function that returns the same pointer, my program crashes.


[3] test.c:

ilist/iplist are broken (maybe I'll fix them?)

I've been digging into some undefined behaviour stemming from how ilist
is typically configured.  r247937, r247944, and r247978 caused a UBSan
failure to start firing on our Green Dragon bots, and after an IRC
conversation between David and Nick and Mehdi, we added a blacklist:
$echo "src:$WORKSPACE/llvm/include/llvm/CodeGen/MachineFunction.h" >> sanitize.blacklist

ilist/iplist is a pretty strange list, and the more I dig into it (to
fix the UB) the more broken I think it is.

I want to change a few things about it, but it'll be somewhat
intrusive (pun not really intended), so I want to get some buy-in before
really diving in.  I've CC'ed the people in the IRC conversation and a
couple of others that seem to care about ADT and/or UB.

"Normal" intrusive lists

First, here's a simple ("normal") intrusive doubly-linked list:

    struct ListNodeBase {
      ListNodeBase *next = nullptr;
      ListNodeBase *prev = nullptr;
    };

    struct ListBase {
      struct iterator {
        ListNodeBase *I = nullptr;
        iterator &operator++() { I = I->next; return *this; }
        iterator &operator--() { I = I->prev; return *this; }
        ListNodeBase &operator*() const { return *I; }
        ListNodeBase *operator->() const { return I; }
        bool operator==(const iterator &RHS) const { return I == RHS.I; }
        bool operator!=(const iterator &RHS) const { return I != RHS.I; }
      };

      ListNodeBase L;
      ListBase() { clear(); }
      void clear() { L.next = L.prev = &L; }
      iterator begin() { return iterator{L.next}; }
      iterator end() { return iterator{&L}; }
      bool empty() { return begin() == end(); }
      void insert(iterator P, ListNodeBase &N) {
        assert(!N.next && !N.prev);
        N.next = &*P;
        N.prev = P->prev;
        N.next->prev = &N;
        N.prev->next = &N;
      }
      void erase(iterator N) {
        assert(N != end());
        N->next->prev = N->prev;
        N->prev->next = N->next;
        N->next = N->prev = nullptr;
      }
    };

    template <class T> class List : ListBase {
    public:
      class iterator : ListBase::iterator {
        friend class List;
        typedef ListBase::iterator Super;
        iterator(Super S) : Super(S) {}

      public:
        iterator &operator++() { Super::operator++(); return *this; }
        iterator &operator--() { Super::operator--(); return *this; }
        T &operator*() const { return static_cast<T &>(Super::operator*()); }
      };

      using ListBase::clear;
      using ListBase::empty;
      iterator begin() { return iterator(ListBase::begin()); }
      iterator end() { return iterator(ListBase::end()); }
      void insert(iterator P, ListNodeBase &N) { ListBase::insert(P, N); }
      void erase(iterator N) { ListBase::erase(N); }
    };

In case it's not clear, to use `List<SomeType>`, `SomeType` has to
inherit from `ListNodeBase`.

There are a few nice properties here.

 1. Traversal logic requires no knowledge of the downcast nodes.
 2. `List<T>` owns none of the nodes (clear ownership semantics).
 3. There are no branches (outside of asserts).
 4. We never touch the heap.

As a result it's fairly simple.

It's also fairly easy to wrap it to provide extra features if desired
(e.g., adding ownership semantics, or the 32 variations of `insert()`
that we seem to like having).
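
For example, ownership is easy to bolt on without touching the traversal
logic (a quick sketch against the `List` above; the helper name is made up):

    // An "owning" erase built on top of the plain list: the list itself still
    // owns nothing, and this wrapper decides when to delete the node.
    template <class T>
    void eraseAndDispose(List<T> &L, typename List<T>::iterator I) {
      T *N = &*I;
      L.erase(I);
      delete N;
    }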

Our ilist/iplist

Our ilist/iplist implementation is far more complicated, has none of the
above properties (except sometimes #4, and only with extra configuration
that exposes UB), and provides a few extra features that I don't think
are really worth paying for.

The default configuration effectively does the following:

    template <class T> struct ListNode { T *prev, *next; };

    template <class T> struct List {
      T *Head = nullptr;
      void ensureHead() {
        if (!Head) {
          Head = new T();
          Head->next = nullptr;
          Head->prev = Head;
        }
      }
      // complex insertion/removal logic.
    };

Every list operation (even begin()/end()) starts by somehow calling
`ensureHead()`.
The key structural differences here:

  - Instead of the list containing the sentinel, it contains a pointer
    to the head of the list, and the sentinel is created on demand.
  - The sentinel is the full `T` (rather than ListNodeBase).
  - While "prev" pointers are circular, the sentinel's "next" pointer
    points at `nullptr` (it's "snipped off").
  - All pointers are to the downcast type, `T`.

(If you look at the code, you'll see it's a little more complicated.
There are also a number of arbitrary "hooks" for managing external
state.  Yay?)


What do we get in return?

  - Nodes "know" if they are the sentinel, since `next == nullptr`.
    This lets you do cute things like `NodeTy *getNextNode()` and
    `NodeTy *getPrevNode()`, which do the obvious, returning nullptr
    instead of the sentinel.  A naive list would need to compare against
    `end()`.
  - Iterating forward will eventually dereference `nullptr` (can't
    infinite loop, even in release builds).
  - A whole lot of branches :/.

UB from sentinel hack

The memory management is particularly wasteful, so it's not surprising
that almost every user overrides it.  They effectively do this:

    template <class T> struct ListNode { T *prev, *next; };

    template <class T> struct List {
      ListNode<T> Sentinel;
      T *Head = static_cast<T *>(&Sentinel);
      void ensureHead() {}
    };

Because iterators have pointers to `T` instead of `ListNodeBase`, this
exposes UB.  It's *probably* harmless?

More UB, broken code from half-node hack

It gets worse though :(.

This still wastes a pointer compared to the naive list.  Many uses of
iplist/ilist make a further optimization to win back that pointer,
splitting `ListNode<T>` into two, and using only half the node for the
sentinel:

    template <class T> struct ListHalfNode { T *prev; };
    template <class T> struct ListNode : ListHalfNode<T> { T *next; };

    template <class T> struct List {
      ListHalfNode<T> Sentinel;
      T *Head = static_cast<T *>(&Sentinel);
      void ensureHead() {}
    };

Besides exposing the same UB as above, this drops the "next" pointer
from the sentinel.  This is cute, but it's horribly wrong.

In particular, the getPrevNode/getNextNode() methods, which use
`N->next == nullptr` to check for sentinels, are instead poking into
"Head" in the list itself (which is never null).  For these lists,
`getNextNode()` will never return `nullptr`, because the "snipped" head
has been "reattached".

Ironically, this means we now have a "naive" doubly-linked circular list
in memory, but we have a whack-ton of logic that has no idea.  And we
have broken API.

What's broken?

Unless I'm misreading the code horribly,
BasicBlock/Instruction/Argument::getPrevNode/getNextNode() will *never*
return `nullptr`.  Instead they'll just return the sentinel.  Which has
been `static_cast`ed to a completely different type.  And if someone
dereferences it, *bang*.  There are other cases, too, but these are the
obviously scary ones.
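
To make that concrete, a loop like the following (hypothetical user code,
but a natural thing to write against the documented contract) never sees a
null `BB`; it walks past the last block, gets back the static_cast'ed
sentinel, and dereferences it:

    // With the half-node sentinel, getNextNode() never returns nullptr, so
    // this walks off the end of the function and dereferences the sentinel.
    for (BasicBlock *BB = &F.getEntryBlock(); BB; BB = BB->getNextNode())
      BB->dump();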

Why am I writing this email instead of just fixing it?

I don't feel totally comfortable just going ahead and fixing this
without buy-in, since it'll touch some things.  Here's my basic plan:

  - Fix `getNextNode()` and `getPrevNode()` by taking in the parent list
    and checking against `end()` instead of checking for `nullptr`.
  - Change the lists to be normal circular linked lists (possibly I'll
    stick an "is-sentinel" bit in the prev pointer in asserts builds so
    we can have assertions instead of infinite loops).  I'll layer our
    weird partial ownership semantics on top to keep this part NFC.
  - Re-layer the external state hooks so that they're on top of the
    basic list instead of inside of it (so that someone in the future
    can understand the code with far less studying).

Anyone want to back me on this before I dive in?  Even better, did I
miss something fundamental?

trouble understanding tablegen

I'm somewhat new to LLVM and I'm trying to learn TableGen. I found a tutorial on how to create a backend for a given target, but I can't figure out how TableGen does pattern matching.

Here is the specific link to the tutorial. 

The image describes how instruction selection occurs but I can't make heads or tails of it. I'd really appreciate any help on this.


[buildbot] clang-cmake-mips



I've just restarted this machine after finding several hundred lit instances running on it. This was causing timeouts.


Galina: I don't seem to be receiving the email notifications I thought I'd configured in r249293. Do you know if there's something I missed?


Daniel Sanders

Leading Software Design Engineer, MIPS Processor IP

Imagination Technologies Limited



LLVM IR from static/dynamic libraries

Hi All,

Is it possible to generate LLVM IR from any static/dynamic link library?
I have some static and dynamic link libraries and I want to generate LLVM IR from them, as I want to obfuscate these libraries.
Is there any way to do this?

Please give me some pointers on this.


Thomas Jablin via llvm-dev | 7 Oct 02:58 2015

RFC: Convert Returned Constant i1 Values to i32 on PPC64

Presently, on PPC64 there is some silliness regarding how constant
boolean values are allocated to registers. Specifically, when crbits
are enabled, these values tend to be allocated to cr registers even
though the calling convention stipulates that they be returned in gp
registers. Additionally, constant booleans tracking control flow
through functions tend to be live for the entire function and
consequently are allocated to non-volatile registers, forcing a
non-volatile cr register to be saved and restored.

I have created a patch to address this issue and would
appreciate some feedback. The pass I implemented is quite simple and
only handles the very specific situation of constant boolean values
passing through PHINode to RetInsts. There is somewhat similar logic
already present in PPCISelLowering.cpp (DAGCombineExtBoolTrunc and
DAGCombineTruncBoolExt). Originally, I hoped to either add logic to
PPCISelLowering to handle this case or write a new pass encompassing
the PPCISelLowering logic. However, neither design proved viable. The
existing PPCISelLowering functionality is based on SelectionDAGs and
consequently operates at BasicBlock scope, rather than the Function-level
scope needed for this pass. The existing PPCISelLowering logic could
not be incorporated into the new LLVM IR level pass, since lowering to
MachineInsts creates new Trunc and Ext operations that are not present
at the LLVM IR level. I feel a little guilty proposing something that
is fairly special purpose, but I don't see a way to generalize the
logic in this pass in a useful way.
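
For concreteness, the candidate pattern is roughly the following (my
paraphrase of the description above as a sketch, not code from the actual
patch):

if (auto *Ret = dyn_cast<ReturnInst>(BB.getTerminator())) {
  auto *PN = dyn_cast_or_null<PHINode>(Ret->getReturnValue());
  if (PN && PN->getType()->isIntegerTy(1)) {
    bool AllConstant = true;
    for (unsigned i = 0, e = PN->getNumIncomingValues(); i != e; ++i)
      AllConstant &= isa<ConstantInt>(PN->getIncomingValue(i));
    if (AllConstant) {
      // Candidate: widen the incoming constants to i32, keep the PHI in i32,
      // and truncate back to i1 just before the return, so the value lives
      // in a GPR rather than a CR field.
    }
  }
}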
