Michael Lewis via llvm-dev | 25 Jul 03:18 2016

Gauging interest in generating PDBs from LLVM-backed languages

Hi all,

I've recently been doing some work on a novel language that uses LLVM for optimization and machine codegen. The language is self-hosted, I'm building my third iteration of its garbage collector, and also writing a thin IDE to stretch the language a bit. Needless to say, debugging is a major concern for me.

My primary experience (and primary development focus) is Windows-centric, so my go-to debuggers are Visual Studio and WinDbg. I know of the lldb/VS integration efforts, and of course with appropriate setup I could also leverage gdb (IIRC). But these aren't quite what I was looking for, personally.

Long story short, I set out to build a PDB emitter that could generate debug information for my language based on the existing CodeView emission support as of LLVM 3.8. I'm happy to report success as of this afternoon. More details about the effort and its status can be found at [0].

The high-level overview of my strategy is to crack the CodeView blob from LLVM (the .debug$S COFF section) and reassemble it, plus some augmentation, then feed that to the API exposed by MSPDB140.dll (Visual Studio 2015's version). This works, and I can debug programs in both VS and WinDbg, assuming the front-end supplies sane metadata to the LLVM layer.

My question to the list - is this work valuable for anyone else? Would there be general interest in documentation or even example code that assembles what I've learned throughout this effort?


 - Mike

LLVM Developers mailing list
llvm-dev <at> lists.llvm.org
Sedat Dilek via llvm-dev | 24 Jul 17:58 2016

[llvm-3.8.1] /usr/bin/objcopy: unrecognized option '--extract-dwo'


I am still struggling with my optimized/speedup build of llvm-toolchain v3.8.1.
Here: Enable LTO, PGO, optimized-TableGen, split-DWARF and build with
GNU/gold and LLVMgold-plugin.

The objcopy (binutils v2.22) here on Ubuntu/precise AMD64 does not
support '--extract-dwo'.

My build fails with... /usr/bin/objcopy: unrecognized option '--extract-dwo'.

Now, I have done a full build of binutils v2.26.1 and am using its binaries.

Is it possible to embed a configure-time test of whether objcopy supports '--extract-dwo'?
( I cannot say which of the speedup options require this option. )
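A configure-time probe along these lines might work; this is an untested sketch (the variable names are illustrative, not taken from LLVM's actual cmake files):

```cmake
# Sketch: fail at configure time if objcopy lacks --extract-dwo,
# instead of failing mid-build.
find_program(OBJCOPY_EXECUTABLE objcopy)
if(OBJCOPY_EXECUTABLE)
  execute_process(COMMAND ${OBJCOPY_EXECUTABLE} --help
                  OUTPUT_VARIABLE OBJCOPY_HELP
                  ERROR_QUIET)
  string(FIND "${OBJCOPY_HELP}" "--extract-dwo" EXTRACT_DWO_POS)
  if(EXTRACT_DWO_POS EQUAL -1)
    message(FATAL_ERROR "objcopy does not support --extract-dwo; "
                        "a newer binutils is required for split-DWARF builds")
  endif()
endif()
```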


- Sedat -
Attachment (build_llvm-toolchain.sh): application/x-sh, 9 KiB
koffie drinker via llvm-dev | 24 Jul 17:50 2016

Memory usage with MCJit

Hi all,

I'm building a runtime that can JIT and execute code. I've followed the Kaleidoscope tutorial and had a couple of questions. Basically, I have a pre-compiler that compiles the code to cached objects. These cached objects are then persisted and used to reduce JIT compile time.

1. I took the approach of one execution engine with multiple modules (I'm not removing modules once they have been added). During profiling, I noticed that memory usage is high with a lot of code. How can I reduce the memory usage? Is one execution engine per module preferred? I would imagine that this would take up more memory.

2. When I add a module and jit it, can I invoke a remove module and still be able to execute the compiled function?

3. Is it better to have one runtime mem manager that will be associated with multiple execution engines (one engine per module)? Using this approach I could throw away the execution engines once they have jitted the code. I probably need to take ownership of the jitted code somehow. Are there any examples available on how this could be achieved?


Sedat Dilek via llvm-dev | 24 Jul 15:45 2016

[PGO] cmake: Check for opagent.h when building with -DLLVM_USE_OPROFILE=ON


I am building an llvm-toolchain v3.8.1 and use...


...in my build-script (see file-attachments).

My build stopped saying... <opagent.h> not found.

[1] says...

00024 #include <opagent.h>

So, it should be possible to check for the existence of <opagent.h> in the
cmake/configure stage *before* starting the build.
It is not very pleasant to see the build break.
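For what it's worth, the standard CMake module CheckIncludeFile could express such a guard; a minimal, untested sketch:

```cmake
# Sketch: abort at configure time when LLVM_USE_OPROFILE is requested
# but the oprofile development header is missing.
include(CheckIncludeFile)
if(LLVM_USE_OPROFILE)
  check_include_file(opagent.h HAVE_OPAGENT_H)
  if(NOT HAVE_OPAGENT_H)
    message(FATAL_ERROR "LLVM_USE_OPROFILE=ON but <opagent.h> was not found; "
                        "install the oprofile development headers or disable the option")
  endif()
endif()
```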

NOTE: Ubuntu/precise has no official oprofile package available which
ships "/usr/include/opagent.h".

I did not check if upstream has a fix or workaround for this.


- Sedat -

[1] http://llvm.org/docs/doxygen/html/OProfileWrapper_8h_source.html

P.S.: Note2myself: Backporting oprofile v0.9.9

For backporting oprofile v0.9.9 from Ubuntu/trusty I needed
libiberty-dev from kubuntu-ppa/backports PPA software repository.
Furthermore, I had to hack around to disable the build of oprofile-gui and the
java-related libs (see attached diff-file).

[2] http://packages.ubuntu.com/search?keywords=libiberty
[3] http://www.ubuntuupdates.org/package/kubuntu-ppa_backports/precise/main/base/libiberty-dev
--- oprofile-0.9.9.orig/debian/changelog	2014-04-05 01:20:04.000000000 +0200
+++ oprofile-0.9.9/debian/changelog	2016-07-24 15:22:51.270272652 +0200
@@ -1,3 +1,19 @@
+oprofile (0.9.9-0ubuntu8~precise+dileks1) precise; urgency=low
+  * Suppress building of oprofile-gui and libjvmti-oprofile0 packages.
+  * debian/control
+    + Downgrade to automake in Build-Depends
+    + Remove Build-Depends for oprofile-gui and libjvmti-oprofile0
+    + Remove Packages sections of oprofile-gui and libjvmti-oprofile0
+  * debian/rules:
+    + Remove aclocal options from autogen line
+    + Drop unavailable options from configure line
+    + Remove lines referring to oprofile-gui and libjvmti-oprofile0
+  * Build against libiberty-dev (20131116-1~ubuntu12.04~ppa1) from kubuntu-ppa/backports.
+  * Rebuild on Ubuntu/precise. 
+ -- Sedat Dilek <sedat.dilek <at> gmail.com>  Sun, 24 Jul 2016 15:17:10 +0200
 oprofile (0.9.9-0ubuntu8) trusty; urgency=medium

   * debian/patches/Fix-up-event-codes-for-marked-architected-events.patch:
--- oprofile-0.9.9.orig/debian/control	2014-03-10 18:14:55.000000000 +0100
+++ oprofile-0.9.9/debian/control	2016-07-24 15:22:51.270272652 +0200
@@ -6,7 +6,7 @@ XSBC-Original-Maintainer: LIU Qi <liuqi8
 Uploaders: Roberto C. Sanchez <roberto <at> connexer.com>
 Standards-Version: 3.9.3
- automake1.10,
+ automake,
  libpfm4-dev [ppc64 ppc64el],
@@ -15,14 +15,11 @@ Build-Depends:
  g++ (>>3.3.1),
- libqt4-dev,
- qt4-dev-tools,
- zlib1g-dev,
- default-jdk
+ zlib1g-dev
 Homepage: http://oprofile.sourceforge.net
 Vcs-Browser: http://git.printk.org/?p=liuqi/debian/oprofile.git;a=summary
 Vcs-git: git://git.printk.org/liuqi/debian/oprofile.git
@@ -45,7 +42,6 @@ Depends:
 Replaces: oprofile-common
 Recommends: binutils
-Suggests: oprofile-gui
 Description: system-wide profiler for Linux systems
  OProfile is a performance profiling tool for Linux systems, capable
  of profiling all running code at low overhead.  It consists of a
@@ -74,24 +70,3 @@ Description: system-wide profiler for Li
  for turning data into information.
  This package contains the opagent runtime library.
-Package: libjvmti-oprofile0
-Architecture: linux-any
-Depends: ${misc:Depends}, ${shlibs:Depends}
-Description: system-wide profiler for Linux systems (Java runtime library)
- OProfile is a performance profiling tool for Linux systems, capable
- of profiling all running code at low overhead.  It consists of a
- daemon for collecting sample data, plus several post-profiling tools
- for turning data into information.
- .
- This package contains the jvmti_oprofile runtime library for Java support.
-Package: oprofile-gui
-Architecture: linux-any
-Replaces: oprofile
-Depends: debconf | debconf-2.0, oprofile, ${misc:Depends}, ${shlibs:Depends}
-Recommends: binutils
-Description: system-wide profiler for Linux systems (GUI components)
- This package contains only the GUI components of the oprofile package.
- This allows oprofile to be used on machines that require a much
- smaller footprint, or that do not wish to use an X Windows interface.
--- oprofile-0.9.9.orig/debian/rules	2014-03-10 18:14:55.000000000 +0100
+++ oprofile-0.9.9/debian/rules	2016-07-24 15:40:39.682239142 +0200
@@ -18,9 +18,9 @@ endif

 configure: config-stamp 
 config-stamp: $(QUILT_STAMPFN)
-	cd $(CURDIR) && ACLOCAL=aclocal-1.10 AUTOMAKE=automake-1.10 ./autogen.sh
+	cd $(CURDIR) && AUTOMAKE=automake ./autogen.sh
 #	cd $(CURDIR) && autoreconf -vfi
-	cd $(CURDIR) && ./configure --host=$(DEB_HOST_GNU_TYPE) --prefix=/usr --mandir=/usr/share/man
--infodir=/usr/share/info --with-qt-includes=/usr/include/qt4 --with-kernel-support
--disable-werror --enable-gui=qt4 --with-java=/usr/lib/jvm/default-java
+	cd $(CURDIR) && ./configure --host=$(DEB_HOST_GNU_TYPE) --prefix=/usr --mandir=/usr/share/man
--infodir=/usr/share/info --disable-werror --enable-gui=no --with-java=no
 	touch config-stamp

 build-arch: configure build-arch-stamp
@@ -54,24 +54,17 @@ install: build
 	cd $(CURDIR) && DESTDIR=$(CURDIR)/debian/oprofile $(MAKE) install 

 	# Move some files to their proper location
-	mv debian/oprofile/usr/bin/oprof_start \
-	   debian/oprofile-gui/usr/bin
-	cp debian/oprofile/usr/share/man/man1/oprofile.1 \
-	   debian/oprofile-gui/usr/share/man/man1/oprof_start.1
+	# oprofile-gui: Remove man-page
+	rm -f debian/oprofile/usr/share/man/man1/oprof_start.1

 	mkdir -p debian/libopagent1/usr/lib
-	mkdir -p debian/libjvmti-oprofile0/usr/lib
 	mv debian/oprofile/usr/lib/oprofile/libopagent.so.* \
-	mv debian/oprofile/usr/lib/oprofile/libjvmti_oprofile.so.* \
-	  debian/libjvmti-oprofile0/usr/lib/
-	for i in debian/libopagent1/usr/lib/*.so.* \
-	         debian/libjvmti-oprofile0/usr/lib/*.so.*; do \
+	for i in debian/libopagent1/usr/lib/*.so.*; do \
 	  b=$$(basename $$i); \
 	  ln -sf ../$$b debian/oprofile/usr/lib/oprofile/$$b; \
 	ln -sf libopagent.so.1 debian/oprofile/usr/lib/libopagent.so
-	ln -sf libjvmti_oprofile.so.0 debian/oprofile/usr/lib/libjvmti_oprofile.so

 	# Fixup non-empty-dependency_libs-in-la-file lintian error
 	sed -i "/dependency_libs/ s/'.*'/''/" `find . -name '*.la'`
Attachment (build_llvm-toolchain.sh): application/x-sh, 8 KiB
凌英剑 via llvm-dev | 24 Jul 08:43 2016

Getting bc files of some multithreading workloads

I am trying to generate bc files from some workloads like redis, nginx and mysql. So far, I have got some of them by changing the makefiles of redis and nginx. However, when I use the same method to compile MySQL, I find its makefile can only be generated by cmake. The problem is that the makefile generated by cmake is too complicated to understand.
1.Has anyone tried to get bc files by changing makefiles? 
2.Are there any other ways to generate these files?
Thanks in advance.
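One way to avoid editing generated makefiles at all is to point cmake itself at clang and ask for bitcode. A hedged sketch of a cache preload file (the file name and flag choice are just one possibility; I have not tried it on MySQL specifically):

```cmake
# bitcode.cmake -- preload with: cmake -C bitcode.cmake <path-to-source>
# With clang, -flto makes every object file LLVM bitcode, which can then
# be inspected with llvm-dis or linked with llvm-link.
set(CMAKE_C_COMPILER   "clang"   CACHE STRING "")
set(CMAKE_CXX_COMPILER "clang++" CACHE STRING "")
set(CMAKE_C_FLAGS   "-flto" CACHE STRING "")
set(CMAKE_CXX_FLAGS "-flto" CACHE STRING "")
```

Third-party tools like wllvm (whole-program-llvm) take a similar approach by wrapping the compiler, and provide an extract-bc step to pull a single whole-program .bc out of the final binary.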

Paweł Bylica via llvm-dev | 23 Jul 23:42 2016

Improving deb packages


I have complained about the deb packages a couple of times previously, and even fixed some issues in packaging. I'm mostly interested in having reliable shared cmake files available in the llvm-dev packages. Version 3.7 was fine, but 3.8+ has regressions.

I'm not here to blame anybody. I would like to identify the issues and discuss long term solutions.

I started by building a very simple test framework that checks different Ubuntu/Debian versions and currently supported LLVM versions. The first and only test just checks the find_package(LLVM CONFIG) cmake call.

I've tested {3.8, 3.9, 4.0} x {xenial, jessie} using docker images and Travis CI.
As you can see, only the 3.8 on Jessie passed the test.

Issues I've identified:
  1. Packaging adds a version suffix to binaries, directories, etc. -- e.g. llc is renamed to llc-3.8. I'm not sure how it is done, but maybe we should add support for such a feature to LLVM's cmake?
  2. The default install location for cmake shared files is <prefix>/lib/llvm/share/llvm/cmake. find_package() is not able to find them there, as it is a non-standard path. find_package() needs a hint, like setting the LLVM_DIR variable as in https://github.com/chfast/llvm-apt-tests/blob/master/configure.sh#L6
    If we moved the cmake shared files to <prefix>/lib/llvm/cmake, no hint would be needed.
  3. On Ubuntu, the packaging script moves the cmake shared files to <prefix>/share/llvm-X.Y/cmake, I'm guessing to solve issue (2). find_package() is able to find the LLVMConfig.cmake file without any hint, but this file contains absolute paths referencing the previous locations of the other files. You usually get issues like this one:

    CMake Error at /usr/share/llvm-3.8/cmake/LLVMConfig.cmake:181 (include):
    include could not find load file:

    Maybe it is a good idea to include the other cmake files assuming they are located next to the main file, instead of relying on absolute paths.
  4. It's a bit weird that Debian and Ubuntu packages have different layouts of installed shared files.
  5. Packages for 3.9+ do not have any cmake shared files, just empty dirs where those files are supposed to be. That might be a bug in the latest packaging script.
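For reference, a minimal consumer CMakeLists.txt illustrating the LLVM_DIR hint from (2); the path shown matches the Ubuntu 3.8 layout from (3) and will differ per distro and version:

```cmake
cmake_minimum_required(VERSION 2.8.12)
project(llvm-consumer)

# Hint for find_package(); without it, the non-standard install path
# from (2) is not searched. The path is distro/version specific.
set(LLVM_DIR "/usr/share/llvm-3.8/cmake" CACHE PATH "Dir containing LLVMConfig.cmake")

find_package(LLVM REQUIRED CONFIG)
message(STATUS "Found LLVM ${LLVM_PACKAGE_VERSION}")
include_directories(${LLVM_INCLUDE_DIRS})
add_definitions(${LLVM_DEFINITIONS})
```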
- Paweł
Renato Golin via llvm-dev | 23 Jul 21:44 2016

Docs bots email

Galina, folks,

It seems the document buildbots are not sending emails, and they keep
going back to red.


Given that the server is set up in the same way (-Werror), whenever the
bot is red it means the website is *NOT* updated. It stands to reason
that we can't let that bot go red.

Did we disable them on purpose? Can we go back to sending emails?

Elias Pipping via llvm-dev | 23 Jul 19:12 2016

Delta reduction of front-end bugs: New tool

Dear list,



It continues to be recommended that input which leads to crashes or assertion failures in clang be reduced
using e.g. the delta tool[1]. Even though there are by now far more elaborate tools for such tasks, I’m
still using delta, and I assume so are many others (I should probably be able to justify this better, but I think
it’s mostly down to my inability to get C-Reduce to work in the first place). Now, delta hasn’t been
updated since 2006 as far as I can see. That’s probably in part because it does a pretty decent job (it’s
never crashed on me or produced incorrect results in any way). It could be improved upon in rather
straightforward ways, though, e.g. by allowing multicore machines to utilise more than one core. The
potential for time savings is rather large here, and since delta reductions can take up quite a lot of time,
that’s true both in relative and absolute terms.

For no good reason, I reimplemented delta (which is implemented in Perl) in Common Lisp a few years
ago. The other day, I cleaned it up a bit (I’m quite happy with it now), added asynchronous IO, and all
of a sudden the tool became actually useful -- it now does something the original delta didn’t do, after
all. So I’d like to advertise it here a bit (the tool is open source/free software(*), so I hope this isn’t
frowned upon): It’s up at


It has at the time of this writing almost surely exactly one user (me). So maybe it’s broken on a platform
that I don’t have access to. Or maybe I made some obvious common lisp packaging mistakes (probably,
judging by the warnings about redefinitions). And maybe the entire interface is a disaster and highly
unintuitive. So I’m hoping for bug reports and pull requests. While no guarantee that looking into this
will be worth your time, an indication that it might be is e.g. these runtimes that I got for a simple test
case on macOS: perl (1 process): 76.24s, lisp (1 process): 57.57s, lisp (4 processes): 29.90s(**). On
linux, I even saw a speedup of 2.6 when going from one process to four for this test (since it’s rather
contrived and silly it lives in the subdirectory test-silly; the README has the details on how to run it).

Thanks for reading.

Elias Pipping

(*) I haven’t spent a lot of thought on the particular license so if you wish it were a different one, let me know.
(**) The fact that the single-process lisp implementation is faster than perl is down to a different order in
which options are checked, I assume. The perl implementation calls the test script 570 times, the lisp
implementation just 485 times for this test case. I haven’t looked into this in detail, so this might also
swing the other way. The speedup from asynchronous IO is the more interesting number here, I think.
Alex Susu via llvm-dev | 23 Jul 13:33 2016

Loop optimizations implemented in the front end (source-to-source transformation)

     I'd like to implement loop transformations that are currently implemented in LLVM 
(loop distribution/fission, strip-mining, coalescing, etc.) in the front end, as a 
source-to-source transformation, normally using Clang with libtooling.

     Has anybody tried this already, or is considering it? I know currently of the Scout 

which does something similar with Clang, without using libtooling, making it harder to use 
and, I guess, less standard.

     Another feasible option I'm considering is using CIL (for the C language; available 
at https://github.com/cil-project/cil) - well, a more complete list of similar tools is 
available at https://en.wikipedia.org/wiki/List_of_program_transformation_systems .

   Thank you,
LLVM Developers mailing list
llvm-dev <at> lists.llvm.org

Re: Creating llvm/DebugInfo/Msf folder

FWIW, I much prefer that initialisms not use the rolling-caps convention.

On Fri, Jul 22, 2016 at 1:02 PM Zachary Turner via llvm-dev <llvm-dev <at> lists.llvm.org> wrote:
I have about 5 patches in the pipeline which are all using Camel case.  If I fix this one up, it's going to make the rest very difficult.  To keep things simple I will probably check it in as rolling case, and after the rest of these patches go in, if people feel strongly I can do a single pass to change everything to all caps at once.

On Fri, Jul 22, 2016 at 10:56 AM Zachary Turner <zturner <at> google.com> wrote:
I actually regret doing all caps for PDB, and there seems to be mixed use of all caps / rolling caps even with dwarf (some places say DWARF, others say Dwarf).  I don't feel too strongly aside from the minor inconvenience of having to change thousands of occurrences where I already used rolling caps though :-/

On Fri, Jul 22, 2016 at 10:52 AM Reid Kleckner <rnk <at> google.com> wrote:
While I personally prefer the rolling caps convention for initialisms, all caps is the more widely used convention across LLVM. In fact, lib/DebugInfo/PDB is right next to it, so I'd go with that.

Other than that, yeah, sounds good. :)

On Fri, Jul 22, 2016 at 12:07 PM, Zachary Turner via llvm-dev <llvm-dev <at> lists.llvm.org> wrote:
Hi all,

If you don't care about Debug Info or PDB files you can stop reading now.

Just wanted to give a heads up that I'm planning to add a new folder under DebugInfo called Msf.

MSF stands for Multi-Stream File and is the container format used by PDB debug info files.  However, MSF by itself is generic enough that it need not contain PDB data, and in fact I can think of at least one other case of MSF files being used to store non-PDB data.  

Currently, we have llvm/DebugInfo/PDB which contains both our knowledge of the PDB format as well as our knowledge of the MSF format.  And worse, some of this knowledge of MSF files is in DebugInfo/CodeView.  And in some cases we are saying PDB when we really mean MSF, and in some cases we are saying MSF when we really mean PDB.

To make the distinction clearer, and to provide a theoretical means by which someone could use MSF to store non-PDB data, I have a patch to move all of our MSF knowledge into a separate library.

I'm planning to commit this later today, and mostly just wanted to give a heads up in case people are surprised when they see a new directory pop up under DebugInfo.

Allen Lorenz via llvm-dev | 23 Jul 06:12 2016

What is the update on the AVR backend integration for the 3.9 release?

Checking out https://reviews.llvm.org/ for the present status of AVR integration, the latest commit for review is dated Jun 29, so it seems there is still integration happening. However, I don't see any mention of this progress in the draft release notes. The progress should get at least 3-4 sentences in the release notes!
  What is the present status / percentage integrated? And will there be experimental testing with the 3.9 version, or at least from the bleeding-edge git before 4.0? Thanks Dylan and the llvm team / reviewers for the hard work.