Element Green | 30 Nov 03:10 2015

[LAU] Changing default Jack output ports to be an application's ports

Hello Linux Audio Users,

I'm working on a program to drive LED lights based on music playback on a Linux system.  My application creates Jack input ports for frequency analysis, which I'm currently connecting to the Jack monitor ports; the frequency analysis info is then used to control some RGB LED lights.  The application I'm currently using for music playback is Clementine, which has a Jack sink option.

All this works fine.  However, the frequency analysis of the input audio incurs unwanted latency, so there is a slight delay between changes in the music and the lighting changes.  What I want to do is delay the audio playback by the full system latency, so that the lighting updates track the music as close to real time as possible.

My current thinking is to route the incoming Jack audio in my application back to the Jack playback ports, with a fixed delay buffer.
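To make the fixed-delay idea concrete, the buffer itself is tiny. Here is a sketch in plain Python (the names are mine, not from any Jack binding; a real version would process whole buffers inside the Jack process callback rather than one sample at a time):

```python
from collections import deque

class DelayLine:
    """Fixed delay: each sample pushed in comes back out exactly
    `delay_samples` pushes later (silence is emitted until then)."""
    def __init__(self, delay_samples):
        # Pre-fill with zeros so the output lags the input by the delay.
        self.buf = deque([0.0] * delay_samples)

    def process(self, sample):
        self.buf.append(sample)
        return self.buf.popleft()

# Toy check with a 3-sample delay; in practice the delay would be the
# measured analysis latency, e.g. 50 ms at 48 kHz -> 2400 samples.
line = DelayLine(3)
out = [line.process(s) for s in [1.0, 2.0, 3.0, 4.0, 5.0]]
# out is [0.0, 0.0, 0.0, 1.0, 2.0]
```

The same structure works per-channel for stereo; the only tuning parameter is the delay length, which should match the end-to-end latency of the analysis and light-update path.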

The problem is getting applications, such as Clementine, to connect to my application's ports instead of the physical output ports.  My understanding is that Clementine uses gstreamer, and thus the jackaudiosink plugin.  This plugin appears to have some properties for changing the default ports it connects to ("connect" and "port-pattern").  Clementine does not seem to offer a way to specify which ports it connects to, though.  What complicates things further is that the ports are disconnected and recreated for each song.
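For reference, those jackaudiosink properties can be exercised outside Clementine with a gst-launch pipeline, to confirm the routing behaviour first. The client name "ledapp" and port prefix below are hypothetical stand-ins for whatever my application registers with Jack:

```shell
# Hypothetical names: "ledapp:in_" stands in for my application's
# input ports.  connect=auto plus port-pattern should make the sink
# connect to ports matching the pattern instead of the physical ones.
gst-launch-1.0 audiotestsrc ! audioconvert ! \
    jackaudiosink connect=auto port-pattern="ledapp:in_"
```

Whether Clementine's internal pipeline can be given these properties without a code change is exactly the open question.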

I suppose I could look into modifying the Clementine code to allow for a value for the "port-pattern" property to be specified, but I thought someone on this list might have some better ideas.

Some other thoughts I had:

Is it possible to modify the default system-wide Jack playback ports to be an application's ports?  From what I have seen of code that auto-connects to the playback ports, it looks for the first input port flagged as Physical.

Is it possible to change the default system-wide "port-pattern" setting for jackaudiosink?
Is it possible to modify jackaudiosink settings on a per-application basis (say, with an environment variable or config file) without having to modify the client program's source?

Thanks in advance for any help on this and cheers!

Element Green

Linux-audio-user mailing list
james | 29 Nov 11:56 2015

[LAU] Hint for people with no HDMI sound.

I have a PC with an AMD APU chip, and I'm trying to send sound to my 
Audiolab 8200AP processor over HDMI.

The system is 'Ubuntu Studio 15.10'.

I believe the hardware works - I can boot into Windows 10 and send audio 
from jriver.

With Linux, however:
  - aplay -L does list the hdmi:Generic device, and pulse is using it
  - Pulse Audio volume control does detect whether the HDMI cable is 
plugged in at both ends
  - speaker-test -c 8 -D pulse loops continuously and thinks it's playing
  - Pulse Audio volume control shows the ALSA plug-in [speaker-test] stream

However, the Audiolab stubbornly says the channel is 'silent' on its display.

Then I used 'Settings/Display' to select the 'IAG 6"' display, enable 
'use this display', and mirror the displays - the computer is actually 
displaying on an attached VGA monitor.

And that made things start to work.

Ideally I would only access the system over xrdp - but you can't enable 
the extra monitor without being on the console.
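(If the console X session is at least running, it may be possible to script the mirror step remotely, e.g. from ssh, by pointing at the console display. The output names below are hypothetical; list the real ones with `xrandr -q` first:)

```shell
# Hypothetical output names - check `xrandr -q` for yours.
# Enables the HDMI output and mirrors it onto the VGA monitor,
# addressed at the console X server from a remote shell.
DISPLAY=:0 xrandr --output HDMI-0 --auto --same-as VGA-0
```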

Quite possibly I would not have had the issue if either the display was 
running through HDMI or the VGA monitor was not plugged in - but it is 
worth bearing in mind if you are having issues.

I guess it would be handy if the display on the volume control could 
indicate 'plugged in but output is inactive' - or (better still) if the 
HDMI output were active whenever either the display or the sound system 
wants to use it, with the display just an unchanged or black background.
Joël Krähemann | 28 Nov 23:30 2015

[LAU] Minor bug-fixes and accessibility improvements of GSequencer 0.6.23

Hi all

Advanced Gtk+ Sequencer release 0.6.23 is definitely not to be
missed. Fixes done so far:

        * fixed allocation of AgsDial
        * fixed focus in ags_dial.c
        * fixed _File mnemonic in menubar
        * fixed recovery of a GSequencer project while doing properties
        * fixed SIGINT while reading XML files including AgsRecallLadspa

See the ChangeLog for an in-depth view. Also available on http://gsequencer.org is an
empty project with one drum and two synths connected by a mixer with an
output panel. So you just have to fill in the gaps.

Additionally, the mixer uses LADSPA CAPS plugins:

        * drum -> 10 band equaliser
        * matrix0 -> Mono Phaser
        * matrix1 -> Noise Gate

Would be great to hear from you ...

$ wget -c http://gsequencer.org/ags_drum_and_synth.xml
$ gsequencer --filename ags_drum_and_synth.xml

Benoît Rouits | 28 Nov 22:56 2015

[LAU] music written with Linux (musescore)

Hello dear list,
Here is a little tune written and rendered with MuseScore for the piano.
It is called Novembre (i.e. November in English) and has a simple 
crescendo progression with a few harmonic changes. Enjoy at:


have a good and peaceful week-end,
- Benoît
Will Godfrey | 28 Nov 15:14 2015

[LAU] Yoshimi

A minor bugfix on a low profile release.

Yoshimi 1.3.7 is another consolidating release. Nothing dramatic, but general
usability enhancements. Probably the most significant one is that command line
access has moved from being a proof of concept to a real usable feature. It
still has a long way to go though.

A nice addition to the gui is that significant window locations are mostly
remembered now which makes organising your desktop just that little bit easier.

Yoshimi source code is available from either:


Will J Godfrey
Say you have a poem and I have a tune.
Exchange them and we can both have a poem, a tune, and a song.

[LAU] Recorded in Linux: restored from archive: KillVideGill


This one dates from 2006.  The date is easy to remember since it is in
Wikipedia.  September 13th 2006.  The Dawson College shooting in
Montreal.  It could have been any other at any other place.  That
evening I made this and almost uploaded it on the killer's web page
which was still active at that time, to share the feelings to anyone
touched by this. It is about sadness, expressed against a busy
backdrop of events.

This is entirely done using the ZynAddSubFX synthesizer.  The sequences
are made of 1 short clip each copy/pasted for the duration of the 1:48
piece.  The resonance line and chords were played over.  What was done
this month was to restore the sequence of clips from an old Ardour
session that would not load as it was, by copy/pasting and aligning
each clip carefully one after the other.  Then Robin Gareus' 4-band EQ
was applied to all tracks including master.  A touch of reverb was
added, as well as echo in one place.  The original piece ran on
unchanged from the start.  This new version has a break near the
end that surfaces what would be the expression of sad emptiness before
the 'business' restarts.  Automation was used on the EQ and echo, as
well as here and there for slight volume control.

Raffaele Morelli | 26 Nov 13:56 2015

[LAU] Ardroid

Hi there,

does Ardroid 1.0 work with Ardour 4 as well?



«My mama said to get things done
You'd better not mess with Major Tom»
public | 25 Nov 14:43 2015

[LAU] [CyberSoul] - pinGnu

Hi list,

Made this little jam about a little Pingu, riding on a Gnu, 
contemplating the world from the freedomgalaxy while singing:

I'm a little Pingu, riding on a GNU.
Looking at the world, looking at you.
Free at the core, i wonder why don't you,
Look at me the way i look at you.

It would be so nice, if you could take me in.
Your computer and i could change the world we're in.
Your freedom is being diluted.
And your code is being polluted.

I'm a big GNU with my little Pingu,
Looking at the world, looking at you.
Free at the core, i wonder why don't you,
Look at me the way i look at you.

It would be so nice, if you could compute.
Your freedom is not to dilute,
Someone has taken your tool into dispute,
This is what together we should refute.


Hope someone finds it soothing.

Set Sakrecoer
F. Silvain | 24 Nov 23:57 2015

[LAU] Aligning audio without graphics (for phase inversion cancellation)

Hey hey,
it's easier to describe the problem I think.

I have two pieces: one is the song with vocals, the other the exact same song, but 
only the instrumental. Now I'd love to isolate the voice by aligning them and 
mixing them with one having inverted phase. Since I can't see, and apparently 
instrumentals often don't start at exactly the same time, I need a way to 
synchronise those tracks. Time-shifting by hand is very tedious and hasn't yet 
yielded a perfect result. There are other issues which can cause "confusion", 
so I'd like to get this one out of the way.

Does anyone know of a way to do it from the command line? Sometimes aligning 
the beginning of a piece (the onset of sound) might do, but sometimes it's a 
little more complicated. Anything from SoX to Csound will do. :)
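One script-friendly idea is to estimate the offset by brute-force cross-correlation of the two tracks, then trim with sox. A sketch in plain Python on toy data (real files would first need decoding to raw samples, e.g. via sox; the function name is mine):

```python
def best_lag(a, b, max_lag):
    """Return the shift of b (in samples) that best matches a, found
    by brute-force cross-correlation over lags in [-max_lag, max_lag].
    A positive result means b starts later than a by that many samples."""
    best, best_score = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        # Dot product of a against b shifted by `lag`, clipped to bounds.
        score = sum(a[i] * b[i + lag]
                    for i in range(len(a))
                    if 0 <= i + lag < len(b))
        if score > best_score:
            best, best_score = lag, score
    return best

# Toy check: b is a copy of a delayed by 2 samples.
a = [0.0, 1.0, 0.5, -1.0, 0.25, 0.0]
b = [0.0, 0.0] + a
lag = best_lag(a, b, 4)
# lag is 2: trim 2 samples off the front of b to align the tracks
```

With the offset known (in seconds: samples divided by the sample rate), something like `sox instrumental.wav trimmed.wav trim 0.05`, then `sox trimmed.wav inv.wav vol -1`, then `sox -m vocal.wav inv.wav out.wav` performs the cancellation. Note that cancellation needs sample-exact alignment, and lossy-encoded files rarely cancel perfectly.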

I appreciate any hint in the right direction! Thank you!

* Homepage: https://freeshell.de/~silvain
* Twitter:  http://twitter.com/ffanci_silvain
* GitHub:   https://github.com/fsilvain

[LAU] Audio seeping into another track

This might be not so easy to prove, although a number of screenshots
should help. I have a guitar track that has a pre-fader send to another
track that adds delay.  Then there is an audio drum track that records
what is played on a MIDI track connected to a Korg
Microstation.  There are no connections between the drum track and the
guitar and delay tracks; I have double-checked in qjackctl.  All of
those tracks go to the master bus.  When the drum is played, there is
echo added to it from the echo plugin of the other track.  Taking out
the echo plugin results in no more echo on the drum track.  I will
continue to see what is going on.  Meanwhile, has anyone experienced
such behaviour in Ardour (4.4.0)?

(Ardour 440, ah! :)


[LAU] Mono tracks to mono or stereo bus ? And sampling rate

Hello all,

When having multiple mono guitar tracks in Ardour and wanting to
regroup them into a single bus, would that bus preferably be mono or
stereo?  How does this relate to eventually placing the guitar bus
in a stereo field?

Also, is it possible in Ardour to raise the sampling rate of single
tracks?  I just followed the part of the Brian Bollman music production
(free) course about sampling rates.  The higher the rate, the more
fidelity to the original signal, and also the more material to work with
when bringing it down to CD or ogg/mp3 level.  I would like to try this
with the guitar and bass tracks, as I don't care so much about the
synths for now.  Is it possible to tell Ardour to use a higher sampling
rate for a track?  I don't seem to have found the option.