Balazs Scheidler | 1 Aug 12:39 2011

Re: 3.3.0beta1 leaking memory (Re: syslog-ng 3.3.0beta & ESX crashes)


Thanks for the info, I'll try to look at this today.

On Fri, 2011-07-22 at 14:04 +0200, Gergely Nagy wrote:
> Gergely Nagy <algernon <at>> writes:
> > Hendrik Visage <hvjunk <at>> writes:
> >
> >>  I've mailed Gergely the Valgrind output to analyze 3.3.0beta1 memory
> >> leak(s), so if anybody else are interested in it, please contact me
> >> for them.
> >
> > I had a little time to look into the memory leak issue, and I can
> > confirm: the leak happens all the time with latest git.
> After a bit of bisecting, this is what I found to be the cause of the
> massive leak:
> b8cc9fe3bb41d862918d9e39b0ded812dd756ef5 is the first bad commit
> commit b8cc9fe3bb41d862918d9e39b0ded812dd756ef5
> Author: Balazs Scheidler <bazsi <at>>
> Date:   Thu Jul 14 12:26:01 2011 +0200
>     delegate flow-control early-ack decision to destination drivers
>     Sources always expect an acknowledgement for each message they produce.
>     This acknowledgement is usually generated when all the destinations have
>     finished their processing of the given message, unless flow-control
>     is disabled.
(Continue reading)

Balazs Scheidler | 1 Aug 20:15 2011

Re: 3.3.0beta1 leaking memory (Re: syslog-ng 3.3.0beta & ESX crashes)

On Mon, 2011-08-01 at 12:39 +0200, Balazs Scheidler wrote:
> Hi,
> Thanks for the info, I'll try to look at this today.
> On Fri, 2011-07-22 at 14:04 +0200, Gergely Nagy wrote:
> > Gergely Nagy <algernon <at>> writes:
> > 
> > > Hendrik Visage <hvjunk <at>> writes:

The patch wasn't the root cause of the leak; rather, the problem was in
the ref counter cache used to spare some atomic operations in the log
processing fast path.

This patch fixes that (I've pushed it just now):

commit 13a87ed5548cb7b152b708d24bb5b3684771d7ca
Author: Balazs Scheidler <bazsi <at>>
Date:   Mon Aug 1 20:13:45 2011 +0200

    refcache: process the ref count differences in two steps

    The refcache may have caused a leak, when the ack callback performed an
    additional unref operation (which it does), because folding in the ref
    difference counter didn't take the additional unref into account.

    Thus the ref difference needs to be folded back into the atomic counter
    in two steps:

     1) we add in the difference that was present when entering the
(Continue reading)

Balazs Scheidler | 1 Aug 20:17 2011

Re: Inability to filter/log hostnames

On Wed, 2011-07-20 at 12:39 -0400, Norman Elton wrote:
> I'm running syslog-ng 3.2.4 from RedHat's RPM. Unfortunately, I can't
> seem to log the hostname as specified in the incoming UDP packet. We
> don't do DNS resolution; rather, just want to log what the sending
> host is passing along. No relays in the mix, but we have
> keep_hostname() enabled. My global options:
>         flush_lines(10);
>         flush_timeout(750);
>         time_reopen (10);
>         log_fifo_size (1000);
>         keep_hostname (yes);
> When I log $HOSTNAME or $HOST, I just get the sender's IP address.
> Similarly, filters based on these macros don't work properly. This all
> seemed to work on prior versions of syslog-ng (2.something).

Sorry for the long delay, summer holidays and such. The issue you are
seeing seems to indicate that syslog-ng failed to recognize the hostname
in the packet for some reason. Can you please produce a dump of the
incoming frame as it was received on the network?

The UDP payload should be enough.
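For reference, here is a minimal sketch (an illustration, not syslog-ng's actual parser) of how an RFC 3164 payload carries the hostname; when the field is missing or malformed, syslog-ng falls back to the sender's address, which would explain $HOST showing the IP:

```python
import re

# Illustrative RFC 3164 layout: <PRI>TIMESTAMP HOSTNAME TAG: MSG
# This is a simplified sketch, not syslog-ng's actual parsing logic.
BSD_SYSLOG = re.compile(
    r'^<(\d{1,3})>'                              # PRI
    r'([A-Z][a-z]{2} [ 0-9]\d \d\d:\d\d:\d\d) '  # TIMESTAMP, e.g. "Aug  1 12:00:00"
    r'(\S+) '                                    # HOSTNAME
    r'(.*)$'                                     # TAG and MSG
)

def payload_hostname(payload):
    """Return the hostname field if the payload carries one, else None."""
    m = BSD_SYSLOG.match(payload)
    return m.group(3) if m else None

print(payload_hostname('<134>Aug  1 12:00:00 webserver01 httpd: GET /'))
```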



Member info:
(Continue reading)

Balazs Scheidler | 1 Aug 20:19 2011

Re: Check connection to server

On Thu, 2011-07-21 at 13:58 +0200, Josu Lazkano wrote:
> Hello again.
> I had my syslog server configured and running. All servers has public
> IP but they are on the same LAN. Now I want to get all logs from a
> remote server.
> I configured the remote server the same as the LAN servers, but there
> are no logs from the WAN.
> How could I check if it is sending data to my syslog server?

Well, the easiest would be to confirm that there are indeed network
packets coming from the WAN to your server. You could do that with
tcpdump, Wireshark, or something similar.

> Maybe I must configure filters to do that?

No, filters are meant to sort messages into different destinations once
they have arrived. It seems you have a network connection issue.
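To generate a test message from the remote side, a quick sketch (assuming the server accepts syslog on UDP port 514; the address below is a placeholder to replace with your server's):

```python
import socket

# Send a single syslog-formatted datagram. With UDP the send succeeds
# even if nothing is listening, so pair this with tcpdump on the server
# to confirm the packet actually arrives.
SERVER = ('127.0.0.1', 514)  # replace with your syslog server's address
msg = b'<134>Aug  2 20:19:00 testhost logtest: connectivity check'

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sent = sock.sendto(msg, SERVER)
sock.close()
print(sent)  # number of bytes handed to the network stack
```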


(Continue reading)

Balazs Scheidler | 1 Aug 20:21 2011

Re: TCP / max-connections & flow-control

On Sun, 2011-07-24 at 21:48 +0200, Fekete Róbert wrote:
> Hi, 
> Yes, the connections are reused unless the client sends no new logs for a while (60 seconds by default).
> This can be set on the server side using the time_reopen option:

time_reopen() applies when the connection breaks; otherwise the
connection is kept open indefinitely.
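For reference, a minimal options sketch (the value shown is the default):

```
options {
    # Wait this many seconds before trying to re-establish a broken
    # connection; 60 is the default.
    time_reopen(60);
};
```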




Balazs Scheidler | 1 Aug 20:23 2011

Re: Parsing Question

On Fri, 2011-07-29 at 19:22 +0200, Jakub Jankowski wrote:
> On 2011-07-29, Brandon Phelps wrote:
> > Could anyone explain how I would parse a message that looks like this:
> > Jul 29 08:58:38 id=firewall sn=0017C5158708 time="2011-07-29
> > 08:58:38" fw= pri=6 c=262144 m=98 msg="Connection Opened" n=0
> > src= dst= proto=udp/ntp
> >
> > I am logging to mysql and would like to extract the 'src' and 'dst'
> > fields from the above message so that I can insert them into indexed
> > fields in my database.
> [...]
> > Is my only option in this case to write a perl script or something that
> > watches a named pipe and have syslog-ng log to the named pipe instead,
> > while my perl script does the actual parsing?  Or can I do what I want
> > with syslog-ng alone?
> You seriously need to look at patterndb functionality.

patterndb would work if the order of the fields is fixed; if it is not,
it's going to be ugly. I was pondering writing a WELF parser (which is
what the above format is) that could be used to preprocess logs before
they go to db-parser(), but that's something you either have to wait
for, implement yourself, or wait for someone with the same itch to do it
for you. :)
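Until such a parser exists, a rough sketch of splitting WELF-style key=value pairs outside syslog-ng (a hypothetical helper, not part of the project):

```python
import re

# WELF fields are space-separated key=value pairs; values that contain
# spaces are double-quoted.
WELF_PAIR = re.compile(r'(\w+)=("[^"]*"|\S*)')

def parse_welf(line):
    """Return a dict of WELF key=value pairs, with surrounding quotes stripped."""
    return {key: value.strip('"') for key, value in WELF_PAIR.findall(line)}

rec = parse_welf('id=firewall sn=0017C5158708 time="2011-07-29 08:58:38" '
                 'pri=6 m=98 msg="Connection Opened" proto=udp/ntp')
print(rec['msg'], rec['proto'])
```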


(Continue reading)

Balazs Scheidler | 1 Aug 20:25 2011

Re: [Bug 128] template() broken in latest 3.3 git

On Fri, 2011-07-22 at 13:18 +0100, Nix wrote:
> On 22 Jul 2011, Balazs Scheidler verbalised:
> > The error messages indicate that your test suite couldn't load the
> > "basicfuncs" plugin that contains an implementation for $(if) $(grep)
> > and $(echo)
> Indeed it can't, because it's looking for them in the install location,
> and I'm running tests before installation (which seems only sane with
> something as system-critical as a logging daemon, especially when it's
> a beta version and my last three installations were failures and led to
> its being backed out ;) ).
> access("/usr/lib/syslog-ng/", F_OK) = -1 ENOENT (No such file or directory)
> access("/usr/lib/syslog-ng/", F_OK) = -1 ENOENT (No such file or directory)
> access("/usr/lib/syslog-ng/", F_OK) = -1 ENOENT (No such file or directory)
> access("/usr/lib/syslog-ng/", F_OK) = -1 ENOENT (No such file or directory)
> The test_template_LDADD line has led to the right directories for these
> plugins being added to LD_LIBRARY_PATH, but dlopen() of course does not
> follow LD_LIBRARY_PATH, it's following the module-path, which is unset
> because all we've called is cfg_new().
> This could all be fixed by setting module_path, but unfortunately the
> module-path variable consulted by the loader (as opposed to the global
> which is its ultimate source) is not that easy to set: you need to run
> the lexer, which means you need a configuration file, and none of the
> tests in tests/unit have one of those. Perhaps it would be best to move
> the lexer->args out of the lexer, or just provide an outside-lexer way
> to initialize it? (Or simply provide a trivial configuration file and
> parse it in the unit tests... that's probably least invasive.)
(Continue reading)

Brandon Phelps | 1 Aug 20:52 2011

Re: Parsing Question

Thanks Martin,

However using the below configuration I get the following in the output 
of 'syslog-ng -d':

Running application hooks; hook='1'
Running application hooks; hook='3'
Unknown parser type specified; type='id='
Log pattern database reloaded; file='/etc/syslog-ng/sonicwall.xml', 
version='3', pub_date='2011-08-01'
syslog-ng starting up; version='3.1.3'
Incoming log entry; line='<134>id=firewall sn=0017C5158708 
time="2011-08-01 14:34:51" fw= pri=6 c=1024 m=537 msg="Connection 
Closed" n=0 src= dst= proto=tcp/smtp 
sent=460 rcvd=748  '
Running SQL query; query='SELECT * FROM test_table WHERE 0=1'
Running SQL query; query='INSERT INTO test_table (when, src_ip, dst_ip) 
VALUES (\'2011-08-01 14:34:51\', \'\', \'\')'
Error running SQL query; type='mysql', host='localhost', port='', 
user='myuser', database='syslog', error='1064: You have an error in your 
SQL syntax; check the manual that corresponds to your MySQL server 
version for the right syntax to use near \'when, src_ip, dst_ip) VALUES 
(\'2011-08-01 14:34:51\', \'\', \'\')\' at line 1', query='INSERT INTO 
test_table (when, src_ip, dst_ip) VALUES (\'2011-08-01 14:34:51\', \'\', 

So it would appear that A) $src and $dst are not being set properly 
(they are empty) and B) some stuff is getting escaped that shouldn't be, 
namely all of those "'" marks.
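On point A, note the earlier debug line "Unknown parser type specified; type='id='", which suggests the pattern itself isn't being parsed, so $src and $dst never get set. On point B, the escaping problem is compounded by the column name: WHEN is a reserved word in MySQL, so the generated INSERT fails regardless of quoting elsewhere. A sketch of the two usual fixes (table and column names mirror the quoted log; the replacement name is illustrative):

```sql
-- Option 1: backtick-quote the reserved column name
INSERT INTO test_table (`when`, src_ip, dst_ip)
VALUES ('2011-08-01 14:34:51', '', '');

-- Option 2 (usually cleaner): rename the column so no quoting is needed
ALTER TABLE test_table CHANGE `when` logged_at DATETIME;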

(Continue reading)

Javi Polo | 2 Aug 20:55 2011

Problem with program_override in upgrade from 3.0.8 to 3.2.4

Hello there

I've been using syslog-ng for a long time, no problems so far, till 
today ... :p

I'm using Open Source Edition, upgrading from 3.0.8 to 3.2.4, installing 
from the .run file to /opt

Today I wanted to update our syslog-ng's to the latest version and found 
that for some reason, when I override a program via program_override, 
the PROGRAM macro is empty when I send it to another loghost.
program_override seems to be working, as the locally written files show.

I upgraded syslog-ng on both the client and the logserver.
When I switched back to the old version, everything began working again.

Here's the conflicting config in the client:
source s_apache_access { file("/var/log/apache2/access.log" 
program_override ("apache_access")); };

destination d_logserver01 { tcp("logserver01"); };
destination d_tmp { file("/var/log/tmp.log" template("$HOST $PROGRAM 
$MESSAGE\n")); };

log {   source(s_apache_error);
         flags(final); };
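One thing worth testing when $PROGRAM goes missing on the wire: set an explicit template on the network destination instead of relying on the default on-wire format (a sketch, not a confirmed fix; the template shown is illustrative):

```
destination d_logserver01 {
    tcp("logserver01"
        template("<$PRI>$DATE $HOST $PROGRAM: $MESSAGE\n"));
};
```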
(Continue reading)

Viacheslav Biriukov | 2 Aug 23:09 2011

MongoDB and syslog-ng 3.3 on Centos 5.5


Can you help me with syslog-ng 3.3 on CentOS 5.5 with MongoDB 1.6?
I installed a syslog-ng 3.3 build with mongodb support, but when I add a mongodb destination to the config, I get an error:

Starting syslog-ng: Error parsing destination, destination plugin mongodb not found in /etc/syslog-ng/syslog-ng.conf at line 70, column 5:


@version: 3.3
# configuration file for syslog-ng, customized for remote logging
source s_internal { internal(); };
destination d_syslognglog { file("/var/log/syslog-ng.log"); };
log { source(s_internal); destination(d_syslognglog); };
# Local sources, filters and destinations are commented out
# If you want to replace sysklogd simply uncomment the following
# parts and disable sysklogd
# Local sources
source s_local {
file("/proc/kmsg" program_override("kernel"));
# Local filters
filter f_messages { level(info..emerg); };
filter f_secure { facility(authpriv); };
filter f_mail { facility(mail); };
filter f_cron { facility(cron); };
filter f_emerg { level(emerg); };
filter f_spooler { level(crit..emerg) and facility(uucp, news); };
filter f_local7 { facility(local7); };
# Local destinations
destination d_messages { file("/var/log/messages"); };
destination d_secure { file("/var/log/secure"); };
destination d_maillog { file("/var/log/maillog"); };
destination d_cron { file("/var/log/cron"); };
destination d_console { usertty("root"); };
destination d_spooler { file("/var/log/spooler"); };
destination d_bootlog { file("/var/log/boot.log"); };
# Local logs - order DOES matter !
log { source(s_local); filter(f_emerg); destination(d_console); };
log { source(s_local); filter(f_secure); destination(d_secure); flags(final); };
log { source(s_local); filter(f_mail); destination(d_maillog); flags(final); };
log { source(s_local); filter(f_cron); destination(d_cron); flags(final); };
log { source(s_local); filter(f_spooler); destination(d_spooler); };
log { source(s_local); filter(f_local7); destination(d_bootlog); };
log { source(s_local); filter(f_messages); destination(d_messages); };

# Remote logging
source s_remote {
tcp(ip( port(514));
udp(ip( port(514));
destination d_separatedbyhosts {
file("/var/log/syslog-ng/$HOST/messages" owner("root") group("root") perm(0640) dir_perm(0750) create_dirs(yes));
log { source(s_remote); destination(d_separatedbyhosts); };

 destination d_mongodb {
      keys("date", "facility", "level", "host", "program", "pid", "message")
log { source(s_local); destination(d_mongodb); };

In modules.conf I tried to add @afmongodb, but this didn't change the situation.
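For what it's worth, the 3.3 directive for loading a plugin explicitly is @module. A sketch of a complete destination (assuming the afmongodb module was actually built and installed; the keys() list mirrors the quoted config, while host, port, and collection values are placeholders):

```
@version: 3.3
@module afmongodb

destination d_mongodb {
    mongodb(
        host("127.0.0.1")
        port(27017)
        database("syslog")
        collection("messages")
        keys("date", "facility", "level", "host", "program", "pid", "message")
    );
};
```

If the error persists, it usually means the module binary itself is missing from the module directory, i.e. the build didn't actually include MongoDB support.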

What's wrong?
Viacheslav Biriukov
