Ray Tayek | 19 May 21:55 2016

chainsaw: xml config for rolling file appender

i got chainsaw to work (please see attached).

so chainsaw received lots of records from 7 hosts and ran for many hours and then hung (i was running on a laptop).

the records from each host were in a separate tab.

but the sample.log file had 0 bytes.

i am new to log4j and chainsaw, so please forgive the dumb questions.

i would like to see everything, so should i set <param name="Threshold" value="ALL" /> on the appender and then <priority value="all"/> on the root?

is there a way to get the records from each host into a different logfile?

is there a way to make the appender roll over more quickly, or roll over on a reconnection from the host?

ideally, it would roll over every day as well as on every reconnect or when the log file got too big, and retain these files for a week or so.
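for reference, if moving to log4j 2 is an option, a combined daily + size rollover with about a week of retention can be sketched like this (file names, the size limit, and paths are illustrative; the Delete action needs log4j 2.5 or later):

```xml
<RollingFile name="rolling" fileName="logs/sample.log"
             filePattern="logs/sample-%d{yyyy-MM-dd}-%i.log">
    <PatternLayout pattern="%d %-5p [%t] %c - %m%n"/>
    <Policies>
        <TimeBasedTriggeringPolicy/>              <!-- roll at each day boundary -->
        <SizeBasedTriggeringPolicy size="50 MB"/> <!-- or when the file grows too big -->
    </Policies>
    <DefaultRolloverStrategy max="7">
        <Delete basePath="logs" maxDepth="1">
            <IfFileName glob="sample-*.log"/>     <!-- only touch rolled files -->
            <IfLastModified age="7d"/>            <!-- keep about a week -->
        </Delete>
    </DefaultRolloverStrategy>
</RollingFile>
```

rolling over on every reconnect from a host isn't a built-in trigger as far as i know, so that part would likely need a custom triggering policy.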


-- Honesty is a very expensive gift. So, don't expect it from cheap people - Warren Buffett http://tayek.com/
Attachment (chainsawconfig.xml): text/xml, 810 bytes

Laurent Hasson | 19 May 05:18 2016

Questions about RollingFile


I have the following XML configuration file for my webapp (under Tomcat 9).

<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="info">

    <Properties>
        <Property name="now">${sys:startup}</Property>
    </Properties>

    <Appenders>
        <RollingFile name="FILES" fileName="${log-path}/capsico.log"
                     filePattern="${log-path}/capsico.${sys:startup}.%i.log">
            <PatternLayout>
                <pattern>%d{MMdd.HHmmss.SSS}#%-3t %level{length=1} %15.15c{1}|  %m%ex{20}%n</pattern>
            </PatternLayout>
            <Policies>
                <SizeBasedTriggeringPolicy size="100 MB" />
            </Policies>
            <DefaultRolloverStrategy max="99999" />
        </RollingFile>

        <Async name="ASYNC">
            <AppenderRef ref="FILES" />
        </Async>
    </Appenders>

    <Loggers>
        <Root level="debug">
            <AppenderRef ref="ASYNC" />
        </Root>
    </Loggers>

</Configuration>


I have done one thing that I haven't seen in any example, which is that all
the logs are tagged with the timestamp of when the system started (using
${sys:startup}), rather than using an inline timestamp format. This allows
me to group all log files per "server startup". This works well. 

Now, I would like to do the same thing for the initial/starting file, and I
have tried fileName="${log-path}/capsico.${sys:startup}.log" but with no
success. I have also tried:

- fileName="${log-path}/capsico.$${sys:startup}.log"

- fileName="${log-path}/capsico.${now}.log"

- fileName="${log-path}/capsico.$S{now}.log"

I find this easier to manage in development or when troubleshooting
deployment issues where the server may be restarted multiple times: instead
of having a single "start" file with all the logging appended across
multiple server restarts, I would like to always get a new, clean file.

Can this be done? I am sure there is a way by writing some Java code and
all, but trying to figure out a config-level way first if available.
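For what it's worth, the ${sys:startup} lookup only resolves if the property is set before Log4j reads the configuration. A minimal sketch of that step (the property name matches the config above; the timestamp format is illustrative):

```java
import java.text.SimpleDateFormat;
import java.util.Date;

public class StartupStamp {
    public static void main(String[] args) {
        // Capture one timestamp at JVM start; every ${sys:startup} lookup
        // then resolves to the same value for the life of the process.
        String stamp = new SimpleDateFormat("yyyyMMdd.HHmmss").format(new Date());
        System.setProperty("startup", stamp);
        // Both lookups agree, which is what groups files per server start.
        System.out.println(System.getProperty("startup").equals(stamp)); // prints "true"
    }
}
```

In a webapp this would typically run in a ServletContextListener or some other early hook, before the first logger is touched.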

Thank you,

Laurent Hasson
Co-Founder and CTO

CapsicoHealth Inc.

Ray Tayek | 16 May 23:13 2016

chainsaw - socket handler - jdk util logging

hi, please excuse me if this is the wrong place to post, but i am new
to log4j and chainsaw.

i posted this question

but have had no responses yet.

chainsaw seems kinda dead; is this what people still use to monitor logs 
from socket handlers, or should i look at something more recent?

any pointers will be appreciated.



Honesty is a very expensive gift. So, don't expect it from cheap people - Warren Buffett

Brad Medinger | 12 May 20:43 2016

Rfc5424Layout Structured Data Parameter Order

I'm trying to use Log4j 2 to construct and send RFC 5424 compliant syslog messages to a syslog receiver.  I've
run into an issue with how the Rfc5424Layout orders the SD-PARAM key/value pairs.  The
StructuredDataMessage constructor accepts a SortedMap that could be using a Comparator to manage the
order of the SD-PARAMs.  Unfortunately in the Rfc5424Layout.appendMap() method, a new TreeMap is
constructed with the already sorted map from the StructuredDataMessage as input.  This causes the Map to
be sorted by the 'natural' order (alphabetically), which is not the order that I need for the SD-PARAMs.

According to RFC-5424 Section 8.3, the more important SD-PARAMs should be earlier in the message to avoid truncation:
        Important information should be placed as early in the message as
        possible because information at the beginning of the message is less
        likely to be discarded by a size-limited transport receiver.

To me, this seems like a bug in the Rfc5424Layout. Also, I don't see the need to force a SortedMap
implementation in the StructuredDataMessage either: wouldn't it be better to let the caller use
whatever type of Map they want, and iterate over that map's entrySet in the appendMap method? That
would allow the use of a LinkedHashMap, which preserves the order in which each Entry was inserted
into the map, instead of requiring a Comparator to sort the map.
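The re-sorting described above is easy to reproduce with plain JDK collections (the key names below are illustrative, not real SD-PARAM names):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.TreeMap;

public class ParamOrderDemo {
    public static void main(String[] args) {
        Map<String, String> params = new LinkedHashMap<>();
        params.put("priority", "high");  // intended to appear first in the message
        params.put("appName", "demo");

        // Copying into a TreeMap, as appendMap() does, discards insertion
        // order in favor of natural (alphabetical) key order.
        Map<String, String> resorted = new TreeMap<>(params);

        System.out.println(params.keySet());   // prints [priority, appName]
        System.out.println(resorted.keySet()); // prints [appName, priority]
    }
}
```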

If I'm misunderstanding something and this is not a bug, I would greatly appreciate an explanation of how I
can define the order of the SD-PARAMs within an RFC 5424 Syslog message.

Brad Medinger
Senior Software Engineer
Office: 1-800-949-4696
Outside US: +1-402-944-4242
bmedinger <at> linoma.com<mailto:bmedinger <at> linoma.com>
LINOMA SOFTWARE | LinomaSoftware.com<http://www.linomasoftware.com/> | GoAnywhere.com<http://www.goanywhere.com/>
Van Jaarsveldt, Charl | 9 May 14:10 2016

time based file roller anomaly

We have a cluster of 16 machines, all running the same code with the same log4j2 config. It has been logging
fine for months, when suddenly, on May 1st, the original log file stopped resetting on ONE of the
machines, so since then the main log file has been growing every day with the first log entry from 5/1.
Everything else is still logging fine. I checked the config, and it did not change on this one machine.
Additionally, I restarted the service on that machine on Friday, thinking it had something to do with the
internal state of that service, but the problem did not resolve.

We're using log4j 2.2. The next release will have it upgraded to the latest version, but I'm hoping to
identify the cause before then. Has anybody else seen anything like this?

Here is the relevant part of the config:

<RollingFile name="mainLog" fileName="logs/logfile.log"
             filePattern="logs/logfile-%d{yyyy-MM-dd}_%i.log" append="true">
    <PatternLayout pattern="%date{yyyy-MM-dd HH:mm:ss.SSS}, [%thread], %-5level, %logger{1}, - %message%n" />
    <Policies>
        <TimeBasedTriggeringPolicy />
        <SizeBasedTriggeringPolicy size="200 MB"/>
    </Policies>
    <DefaultRolloverStrategy max="15" />
</RollingFile>

Thank you.

Julian Keppel | 4 May 21:43 2016

Flume appender: Dependency problem when building with maven assembly plugin?

Hi everyone,

I tested the flume appender for log4j2. My configuration XML looks like

<Configuration status="ERROR" name="some_name">

    <Appenders>
        <Flume name="FLUME" compress="true">
            <Agent host="${FLUME_HOST}" port="${FLUME_PORT}"/>
            <PatternLayout pattern="${LOG_PATTERN}"/>
            <ThresholdFilter level="INFO" onMatch="ACCEPT" onMismatch="DENY"/>
        </Flume>
    </Appenders>

    <Loggers>
        <Root level="DEBUG">
            <AppenderRef ref="FLUME"/>
        </Root>
    </Loggers>

</Configuration>

And I build my application with the Maven Assembly Plugin to get an uber jar
which contains all dependencies (I want to ship a single jar file to all
the destination runtime environments).

When I start the application I get the following error:
ERROR Error processing element Flume ([Appenders: null]): CLASS_NOT_FOUND

So it looks like there is some dependency missing. But in the official doc
I read that for remote mode of the flume appender I only need the following
dependency (including the necessary log4j dependencies):




Another hint: When I start the application from eclipse, it seems to work
totally fine (at least I don't get the error from above).

So what am I doing wrong here? Does anyone have some advice for me? Thanks in advance.

v yang | 3 May 18:29 2016

uncaught exception with log4j2

Hello list,

I'm looking for a way to log any uncaught exceptions to a file. I'm currently running a Wildfly 9.0.2 server,
and all uncaught exceptions are printed to the console. I'd like to use log4j2 to log those exceptions. Can
anyone point me in the right direction?
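One common JVM-level approach, not specific to WildFly, is to install a default uncaught-exception handler that forwards to a logger. A minimal sketch (System.out stands in here for a Log4j logger.error call):

```java
public class UncaughtToLog {
    public static void main(String[] args) throws InterruptedException {
        // Any thread that dies with an uncaught exception is routed here,
        // unless it has its own handler or its ThreadGroup overrides this.
        Thread.setDefaultUncaughtExceptionHandler((thread, ex) ->
            // In a real setup: logger.error("Uncaught in {}", thread.getName(), ex);
            System.out.println("Uncaught in " + thread.getName() + ": " + ex.getMessage()));

        Thread t = new Thread(() -> { throw new RuntimeException("boom"); }, "worker");
        t.start();
        t.join(); // prints "Uncaught in worker: boom"
    }
}
```

Note that a container like WildFly manages its own threads and logging subsystem, so where this hook can be installed (and whether the container intercepts first) would need to be checked for your deployment.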


Dehan de Croos | 3 May 07:03 2016

RE: JDK 1.4 Support

Thanks for the update, Ralph. I have a requirement to work with JDK 1.4, and that's why I needed this clarified:
I was unable to find any compatibility info in the release notes.
Going forward with 1.2.17, is there anything I should be mindful of when working with it, or any fatal issues
I should be aware of? 

-----Original Message-----
From: Ralph Goers [mailto:ralph.goers <at> dslextreme.com] 
Sent: Friday, April 29, 2016 6:57 PM
To: Log4J Users List <log4j-user <at> logging.apache.org>
Subject: Re: JDK 1.4 Support

You would have to use Log4j 1.2.17.   No Log4j 2 version has ever supported 1.4.


> On Apr 29, 2016, at 3:26 AM, Dehan de Croos <ddcroos <at> virtusapolaris.com> wrote:
> Hi User Mailing List,
> What is the final supported release of Log4j with JDK 1.4 support?
> Thanks & Regards,
> Dehan de Croos,
> Virtusa (Pvt) Ltd.
> 752, Dr. Danister De Silva Mawatha
> Colombo 9
> Sri Lanka
> Phone:  +114605500
Matt Sicker | 27 Apr 17:54 2016

What pattern or feature would you use to pass along the MDC between worker threads?

Is there any way to do this without manually passing along the context map?
These threads may be in a thread pool and execute asynchronous
callbacks, so I don't think using inheritable thread locals will help.
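A common workaround is to capture the map on the submitting thread and re-install it inside the task. Sketched here with a plain ThreadLocal standing in for Log4j's ThreadContext map:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ContextPropagation {
    // Stand-in for ThreadContext's per-thread map.
    static final ThreadLocal<Map<String, String>> CTX =
            ThreadLocal.withInitial(HashMap::new);

    static Runnable wrap(Runnable task) {
        Map<String, String> captured = new HashMap<>(CTX.get()); // snapshot at submit time
        return () -> {
            CTX.set(new HashMap<>(captured)); // install in the worker thread
            try {
                task.run();
            } finally {
                CTX.remove(); // avoid leaking context into reused pool threads
            }
        };
    }

    public static void main(String[] args) throws Exception {
        CTX.get().put("requestId", "42");
        ExecutorService pool = Executors.newSingleThreadExecutor();
        pool.submit(wrap(() ->
                System.out.println("requestId=" + CTX.get().get("requestId")))).get();
        pool.shutdown();
    }
}
```

Wrapping every Runnable/Callable at the executor boundary (e.g. in a decorating ExecutorService) keeps the capture in one place instead of at each call site.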

I can provide more details on framework usage and whatnot if it helps.


Matt Sicker <boards <at> gmail.com>
Benjamin Jaton | 27 Apr 01:58 2016

Log4j2 ThreadContext for child threads

Hi all,

I am using the ThreadContext a lot, but I am sometimes in a situation where
I would need to set some variable for a task that runs in a thread that
might spawn children threads. I need that logging variable to be also
available to those child threads.

Unless I create the sub-threads manually and pass the variable myself, it's
not possible for a thread to know where it comes from and grab the variable
from its ancestors, right?
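For threads the parent spawns directly, an InheritableThreadLocal does get copied from parent to child at thread-creation time; a minimal illustration (variable name is illustrative):

```java
public class InheritDemo {
    static final InheritableThreadLocal<String> VAR = new InheritableThreadLocal<>();

    public static void main(String[] args) throws InterruptedException {
        VAR.set("task-7");
        // The child copies the parent's value when it is constructed.
        Thread child = new Thread(() ->
                System.out.println("child sees " + VAR.get()));
        child.start();
        child.join(); // prints "child sees task-7"
    }
}
```

This only covers threads the parent creates itself; pooled worker threads are created once and reused, so they won't inherit values set later by submitting threads.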

Just asking because maybe someone has run into the same problem.

Jochen Wiedmann | 15 Apr 11:02 2016

Dynamically creating loggers


I've got an application, where I would like to obtain loggers on the
fly, because the logger name isn't known in advance. (Think of it as a
logging server, which will be used by remote clients.)

Now, creating a Logger might be an expensive operation. Thus, my question:

- Would you recommend always invoking LogManager.getLogger(String)
and using the result?
- Or would it be better to maintain a Map<String,Logger> with the
logger name as key?
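The second option can be sketched as a small memoizing wrapper. The String factory below stands in for LogManager::getLogger; note that in Log4j 2 the LogManager already caches loggers by name, so a local map mainly saves repeated lookup overhead on a hot path:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Function;

public class LoggerCache<L> {
    private final Map<String, L> cache = new ConcurrentHashMap<>();
    private final Function<String, L> factory; // e.g. LogManager::getLogger

    public LoggerCache(Function<String, L> factory) {
        this.factory = factory;
    }

    public L get(String name) {
        // Creates the value at most once per name, thread-safely.
        return cache.computeIfAbsent(name, factory);
    }

    public static void main(String[] args) {
        AtomicInteger calls = new AtomicInteger();
        LoggerCache<String> loggers = new LoggerCache<>(name -> {
            calls.incrementAndGet();     // count factory invocations
            return "logger:" + name;     // stand-in for a real Logger
        });
        loggers.get("client-1");
        loggers.get("client-1");         // second lookup is served from the map
        System.out.println(calls.get()); // prints 1
    }
}
```

For a logging-server scenario with unbounded client names, an eviction policy (or a bounded cache) would also be worth considering so the map doesn't grow forever.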




The next time you hear: "Don't reinvent the wheel!"