Philippe Proulx | 1 Apr 02:10 2015

[PATCH lttng-tools] Tests: Python agent: update after Python agent refactoring

This patch makes the tests follow the recent refactoring
of the LTTng-UST Python agent.

You need both python2 and python3 in your $PATH to run this
test now (since the agent is compatible with both versions
of Python).

Signed-off-by: Philippe Proulx <eeppeliteloop <at>>
 tests/regression/ust/python-logging/   | 76 ----------------------
 tests/regression/ust/python-logging/        | 71 ++++++++++++++++++++
 .../ust/python-logging/test_python_logging         | 76 ++++++++++++----------
 3 files changed, 112 insertions(+), 111 deletions(-)
 delete mode 100644 tests/regression/ust/python-logging/
 create mode 100644 tests/regression/ust/python-logging/

diff --git a/tests/regression/ust/python-logging/ b/tests/regression/ust/python-logging/
deleted file mode 100644
index a3ed8f9..0000000
--- a/tests/regression/ust/python-logging/
+++ /dev/null
 <at>  <at>  -1,76 +0,0  <at>  <at> 
-#!/usr/bin/env python
-# Copyright (C) 2014 - David Goulet <dgoulet <at>>
-# This library is free software; you can redistribute it and/or modify it under
-# the terms of the GNU Lesser General Public License as published by the Free
-# Software Foundation; version 2.1 of the License.

Philippe Proulx | 1 Apr 02:09 2015

[PATCH lttng-ust] Refactor Python agent

This patch refactors the whole LTTng-UST Python agent.
Notable changes are:

  * Python module "lttng_agent" moved to Python package
    "lttngust". This removes "agent" from the name, which
    really is an implementation detail. "lttngust" is used
    because "lttng" would clash with the LTTng-tools Python
    bindings.
  * Python package instead of a simple module. Splitting the
    code into various modules will make future development
    easier.
  * Use daemon threads to make sure logging with tracing
    support is available as long as the regular threads live,
    while making sure that the application exits instantly when
    its regular threads die.
  * Create client threads and register with session daemons
    at import time. This allows the package to be usable just
    by importing it (no need to instantiate any specific class
    or call any specific function).
  * Do not use a semaphore + sleep to synchronize client threads
    with the importing thread: use a blocking synchronized
    queue with appropriate timeouts.
  * Add debug statements at strategic locations, enabled by
    setting the $LTTNG_UST_PYTHON_DEBUG environment variable
    to 1 before importing the package.
  * Override the default session daemon registration timeout
  * Override the default session daemon registration retry
  * Honor $LTTNG_HOME (to retrieve session daemon TCP ports).

Wolfgang Rostek | 31 Mar 21:46 2015

major variations in performance figures

Hi all,

I did a first performance test extending the demo example
by tracing 30 simple integers in a loop.

For a similar CPU (i5 quad-core, 2.8 GHz) I saw values
of around 250 ns mentioned in the forum.

I've tried the test several times and could get down
to about 350 ns. The machine was almost idle for all runs.
What makes me wonder is the large variation, with frequent
runs giving me 750-800 ns.

It's not the absolute time but the variation that isn't
clear to me. From my understanding, the call path is more
or less straight to shared memory. Why do different
runs show more than double the time for the traces?

Wolfgang R.
Chidhu R | 31 Mar 18:39 2015

Printing complex data types with LTTng


I wanted to know if there is a way to print complex data types, given a pointer to one.

For example:

class Msg {
    int a;
    char *str;
};

Create an object of Msg, pass the address of the object to LTTng, and have LTTng print the values of a and str. I do not want to dereference the pointer and send the individual values. Alternatively, if there is a function that can be written to dereference the values, and this function could be called each time I pass the address of the object, that is also fine.

Is there a way to achieve this with LTTng?

lttng-dev mailing list
lttng-dev <at>
Sébastien Lorrain | 31 Mar 17:50 2015

Added information about PID namespace to tracepoints

Hello fellow LTTng devs,

We are students from Polytechnique Montreal and we are currently working on a TraceCompass analysis module for Linux containers (LXC/Docker/etc.). The information we track is mostly CPU usage by PID namespace, which would allow the identification of CPU-utilization-related bottlenecks on a Linux container host.

We tried to come up with targeted information to recreate the container/PID namespace tree of a Linux host, and we have modified lttng-modules to be able to do so:

In our analysis, we try to reuse as much information as is already available in the LTTng kernel tracer. We build our container/namespace tree using the tasks and their parents recursively (using only the pid/vppid/ppid). However, we were unable to get a reliable model without some light modifications to some tracepoints.

The modifications to lttng-modules were the following:

We added the PID namespace inode (from /proc/PID/ns/pid) to the LTTng statedump tracepoint.

Also, to track new tasks/containers that spawn during the tracing session, we added multiple fields to the sched_process_fork event:
  • Added a VTID field for the child task. This is mandatory in our analysis, as we keep track of the VTID/TID association.
  • Added parent_ns_inum and child_ns_inum fields, which represent the PID namespace inodes of the parent and child task respectively.
    • The parent_ns_inum is "not mandatory" in our analysis, but it keeps things simple, as we don't have to track TIDs from parent containers, and it keeps the code relatively independent of whether statedump is enabled or not.
    • The child_ns_inum IS mandatory because, even if we keep track of the PID/VPID/PPID/VPPID that have spawned, it is possible to "inject" a task into an already existing namespace without re-parenting it to the child reaper of the container (this means the task is sent into a namespace, but it is not part of the process tree of the container of that namespace).

We hope to integrate our analysis into TraceCompass soon, but without the modifications to the LTTng tracer approved, we would be unable to proceed through code review.
We would be really grateful to the community for feedback, and we will make every modification possible to get our analysis up and working!

The code is supposed to work on kernel version 3.8 through 3.19.
It was tested on 3.18 and 3.19, and I am going to test it on 3.8 today.

Sebastien & Francis.

Anand Neeli | 31 Mar 15:44 2015

Creating session on 64-bit system using a 32-bit wrapper

I have a 32-bit wrapper application, similar to the lttng binary, which links to liblttng-ctl to create LTTng sessions. However, session creation is stuck, with the logs below.

To reproduce this, I have done the following:
1) On a 64-bit system, replace /usr/bin/lttng with a 32-bit version of the lttng binary
2) Spawn lttng-sessiond (sessiond is 64-bit; we have not changed it)
3) Create an lttng session using the 32-bit lttng binary with the command "lttng -vvv --no-sessiond create mysession --live 2000000 -U net://localhost"

After step 3, the console is stuck and doesn't return.

Can anyone please tell me why we can't use a 32-bit lttng binary (or any other wrapper) that links to liblttng-ctl to launch sessions? Is there any limitation?

Anand Neeli

root <at> fpc0:/usr/bin# lttng  -vvv --no-sessiond create mysession --live 2000000 -U net://
DEBUG1 - 13:16:49.240956 [840/840]: Session live timer interval set to 2000000 (in cmd_create() at commands/create.c:575)
DEBUG3 - 13:16:49.241199 [840/840]: URI string: net:// (in uri_parse() at uri.c:293)
DEBUG2 - 13:16:49.241228 [840/840]: IP address resolved to (in set_ip_address() at uri.c:134)
DEBUG3 - 13:16:49.241235 [840/840]: URI dtype: 1, proto: 1, host:, subdir: , ctrl: 0, data: 0 (in uri_parse() at uri.c:507)
DEBUG1 - 13:16:49.241380 [840/840]: LSM cmd type : 30 (in send_session_msg() at lttng-ctl.c:133)
DEBUG1 - 13:16:49.241452 [830/833]: Wait for client response (in thread_manage_clients() at main.c:4090)
DEBUG1 - 13:16:49.241498 [830/833]: Receiving data from client ... (in thread_manage_clients() at main.c:4135)
DEBUG1 - 13:16:49.241753 [830/833]: Processing client command 30 (in process_client_msg() at main.c:2813)
DEBUG1 - 13:16:49.241775 [830/833]: Waiting for 2 URIs from client ... (in process_client_msg() at main.c:3740)

Michael Haberler | 30 Mar 12:48 2015

LTTng + Xenomai - status?


Xenomai is the primary RT kernel we use, in particular on embedded ARM boards, so I want to be sure this works.

I cannot fully decode this thread:

so I thought I'd ask here - am I in safe territory?

thanks in advance,

Michael Haberler | 29 Mar 17:17 2015

Re: introduction & usage of LTTNG in machinekit


> Am 29.03.2015 um 16:31 schrieb Michel Dagenais <michel.dagenais <at>>:
>> - is there a complete example out-of-tree kernel module instrumented for
>> LTTng? I worked through Steve Rostedt's sillymod
>> ( ff) but am fuzzy on "Adding the LTTng
>> adaptation layer" - is the example from the manual available in toto
>> somewhere?
> A few people in my group have done so. I will check on Monday to get sample code for you.

on re-reading, I am almost there, but I'd appreciate an example anyway

>> - much of the machinekit RT code (HAL library and components) can be compiled
>> as userland shared objects (POSIX, RT-PREEMPT, Xenomai threads) or kernel
>> modules (RTAI, and the deprecated Xenomai kernel API), sitting on top of an
>> abstracted realtime API ("RTAPI"). Ideally the tracepoints would work
>> unchanged except for the different context. The manual naturally assumes an
>> either-or context. Am I out on a limb here ;-? How would I bring in the
>> tracepoint definitions in a kernel context - as a separate module maybe or
>> linked into each using module?
> I don't believe that we have examples of code that compiles in both kernel and UST contexts. This can
> certainly be handled with conditional compilation.

right, I need to wrap my head around the maze of #defines and #includes ;) but it looks possible

>> - we cannot assume that lttng is available when building machinekit packages
>> at least for now, meaning we likely need shim macro definitions and possibly
>> fake header files to adjust for the lacking lttng files. Any examples I can
>> follow?
> MariaDB and others can target different tracers (LTTng, DTrace, none)... Again, I will try to get you good
> sample code early next week.

I thought I might not be the first one to do this

Semi-related, because I was bitten by it: the Xenomai configs referenced here: turn CONFIG_FTRACE off,
citing performance reasons. I think we'll turn it back on in our kernels; it can't be that much extra overhead.

thanks in advance!

- Michael
Mathieu Desnoyers | 27 Mar 20:57 2015

Re: Reply: clock_gettime vdso issue

The feature is really just that: a way to let users override the clock
source. Depending on the clock provided by the end user, it may or
may not hurt resolution.

The feature branches are here:




  Are there any documents about what the clock override feature is? It seems that the feature will hurt time resolution, doesn't it?
In fact, I will try to get the symbol address of __vdso_clock_gettime the same way glibc does with __vdso_gettimeofday.
Is there any other cheaper way to call __vdso_clock_gettime?


From: Mathieu Desnoyers <mathieu.desnoyers <at>>
Sent: Friday, March 27, 2015, 00:33
To: Jesper Derehag <jderehag <at>>
Cc: < <at>>, lttng-dev <lttng-dev <at>>
Subject: Re: [lttng-dev] clock_gettime vdso issue

Yes, the clock override feature, as well as the getcpu override,
should be merged within lttng-ust and lttng-tools master branches
this week. Stay tuned!



----- Original Message -----
> There has been discussions regarding adding capability to override the clock
> implementation.
> If that support is arriving, you could always write your own (say a separate
> thread reading clock periodically and storing in a cache, then your own
> clock implementation could get that cached timestamp).
> Obviously you would need to think very carefully about clock resolution and
> frequency for your clock thread, but from a performance point of view it
> should be a big improvement.
> Maybe Mathieu(?) could shed some light on the clock override?
> /Jesper
> ________________________________
> > Date: Thu, 26 Mar 2015 15:25:49 +0800
> > From: <at>
> > To: lttng-dev <at>
> > Subject: [lttng-dev] clock_gettime vdso issue
> >
> >
> > Hi,Team
> >
> > You may not realize how painful it is to work with a GLIBC so old that it
> > does NOT support vdso clock_gettime. Especially in the UST case,
> > trace_clock_read will call clock_gettime to record the time of every
> > event.
> > Does anyone know how to avoid the clock_gettime system call with this old
> > GLIBC?
> >
> > Thanks in advance
> >
> >
> >

Mathieu Desnoyers
EfficiOS Inc.

Mathieu Desnoyers | 27 Mar 20:52 2015

[PATCH lttng-tools] Fix: leak on error in lttng-crash

Found by Coverity:
** CID 1291945:  Resource leaks  (RESOURCE_LEAK)
/src/bin/lttng-crash/lttng-crash.c: 769 in copy_crash_data()

Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers <at>>
 src/bin/lttng-crash/lttng-crash.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/src/bin/lttng-crash/lttng-crash.c b/src/bin/lttng-crash/lttng-crash.c
index 793877d..4f1efe9 100644
--- a/src/bin/lttng-crash/lttng-crash.c
+++ b/src/bin/lttng-crash/lttng-crash.c
 <at>  <at>  -766,7 +766,8  <at>  <at>  int copy_crash_data(const struct lttng_crash_layout *layout, int fd_dest,
 	readlen = lttng_read(fd_src, buf, src_file_len);
 	if (readlen < 0) {
 		PERROR("Error reading input file");
-		return -1;
+		ret = -1;
+		goto end;
 	}
 
 	prod_offset = crash_get_field(layout, buf, prod_offset);

Michael Haberler | 27 Mar 19:46 2015

basic questions on LTTng


I've perused the docs and some of the code, and have the following questions:

- is it possible to use LTTng from Xenomai RT threads? (old mails on the Xenomai list suggest so, but it is
unclear if special precautions/incantations/patches are needed)

- do tracepoints use ANY linux system services (system calls in the UST context, and kernel API in the kernel
context)? (background - if that is the case with Xenomai, an RT thread could be relaxed; with RTAI things
could go really haywire as RT threads are running in something similar to an IRQ context)

- are there any precautions to take when using an RT-PREEMPT kernel?

- is there a ballpark figure for the cost (roundabout ns on typical hardware) for a dormant and a fired tracepoint?

- related - does it make sense to conditionally compile in tracepoints, or are they so cheap they could just
as well stay in production code?

- in our scenario, we'd like to find sources of delay which could vary according to arguments (e.g. the math
library function I mentioned, which runs exceedingly long for certain argument values). Is there a way to
trigger on the time delta between tracepoints, e.g. as a filter? Would the Python API help me here?

Our usage scenario will be mostly UST and maybe some kernel tracing.
Right now I'm using self-built 2.6.0 on x86 and 2.5.1 on arm7hf from the jessie stream. Xenomai is 2.6.3 and
2.6.4 on kernels at and beyond 3.8.13 at the moment. RTAI is also 3.8.13, but not the platform of choice.

A note on daemons and tracing: one daemon we use does the classic double fork to go into the background, and in
that case the "" support seems not to work. Not an issue; I patched it so
it can stay in the foreground, which takes care of the problem. It's this piece I disabled with an option:

thanks in advance,