Jeff Shaw | 16 Apr 21:55 2014

How to make a custom connection source available in log4j 2 config?

Hello,
I made a custom connection source that I want to use. (Source follows
this message.) However, when I attempt to use my BoneCP connection
source in my config, I get the error, "ERROR JDBC contains an invalid
element or attribute "BoneCP"". What else do I need to do to make my
custom connection source available in the configuration?

I'm hoping the answer will also apply to a custom appender and manager
I've written, neither of which works because they also cannot be
instantiated from the configuration; in their case, however, the error
is a class-not-found error.

Thanks,
Jeff

Source:

/**
 * Copyright (c) Bit Gladiator on 2014.
 */

@Plugin(name = "BoneCP", category = "Core", elementType = "connectionSource", printObject = true)
public class BoneCPConnectionSource implements ConnectionSource {
  private static final Logger LOGGER = StatusLogger.getLogger();

  private final BoneCP pool;

  private BoneCPConnectionSource(final BoneCP pool) {
    this.pool = pool;
(Continue reading)
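[Editorial note: log4j 2 only discovers plugins in packages it has been told to scan. One common fix is to list the plugin's package in the configuration's packages attribute — a minimal sketch, where com.example.logging stands in for whatever package BoneCPConnectionSource actually lives in (a hypothetical name, not from the message above):

```xml
<!-- packages tells log4j 2 where to scan for custom @Plugin classes -->
<Configuration status="WARN" packages="com.example.logging">
  <Appenders>
    <JDBC name="db" tableName="logs">
      <!-- with the package registered, the BoneCP element should resolve -->
      <BoneCP />
    </JDBC>
  </Appenders>
</Configuration>
```

The same packages attribute would also cover the custom appender and manager mentioned above, since plugin discovery is shared.]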

David KOCH | 15 Apr 15:10 2014

Issue with log4j and Glassfish

Hello,

Glassfish 3.1.2 does not seem to find (some) log4j2 classes when the log4j2
dependencies are not directly included in the web application's pom.xml. In
my case, I have a separate artifact which contains a custom log appender
and all of the log4j2 stuff.

I already followed the advice in
LPS-21525<https://issues.liferay.com/browse/LPS-21525> to
help avoid some errors but I still get the following messages when trying
to deploy my application,

[#|2014-04-15T14:24:15.933+0200|WARNING|glassfish3.1.2|org.glassfish.weld.BeanDeploymentArchiveImpl|_ThreadID=21;_ThreadName=Thread-3;|Error
while trying to load Bean Class
org.apache.logging.log4j.core.async.RingBufferLogEvent$Factory :
java.lang.NoClassDefFoundError: com/lmax/disruptor/EventFactory|#]

See here <http://pastebin.com/RWRH2uxm> for a verbose list.

The classes are present in the web application's jar, albeit in:
WEB-INF/lib/<my_jar_with_log4j2_and_customer_appender>/org/..../*<log4j2_related>.class.

How can I fix this? Any help is appreciated,

Regards,

/David
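[Editorial note: the NoClassDefFoundError above points at the optional LMAX Disruptor jar, which log4j 2's async logger classes reference but which is not on the deployed application's classpath, so Weld fails while scanning the war. One possible sketch of a fix, using the Disruptor version that appears elsewhere in this digest (3.2.1) — version and scope are assumptions to verify against your build:

```xml
<!-- Hypothetical pom.xml fragment: make the Disruptor available so the
     container can load log4j 2's async logger classes during bean scanning. -->
<dependency>
  <groupId>com.lmax</groupId>
  <artifactId>disruptor</artifactId>
  <version>3.2.1</version>
</dependency>
```

Alternatively, configuring Weld to skip scanning the logging jar avoids pulling in the optional dependency at all.]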
Mahesh Dilhan | 10 Apr 04:24 2014

Catalina.out trace : memory leak

Hi,

I get the following Catalina console output continuously when I try to
stop the web application.

Brief configuration details:
*version: rc1*

*Log4j2.xml*

<Configuration status="OFF">
  <Appenders>
    <RollingRandomAccessFile name="RollingFile-${web:contextPath}"
        fileName="${sys:catalina.home}/logs/current/${web:contextPath}.log"
        immediateFlush="false" append="false"
        filePattern="${sys:catalina.home}/logs/archived/%d{yyyy-MM-dd}${contextName}-%d{yyyy-MM-dd}.log.gz">
      <PatternLayout>
        <Pattern>%d %p %c{1.} [%t] %m%n</Pattern>
      </PatternLayout>
      <Policies>
        <TimeBasedTriggeringPolicy />
      </Policies>
    </RollingRandomAccessFile>
  </Appenders>
  <Loggers>
    <Root level="INFO" includeLocation="false">
      <AppenderRef ref="RollingFile-${web:contextPath}"/>
    </Root>
(Continue reading)
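[Editorial note: in a servlet container, log4j 2 generally needs its web module on the classpath so the logging context is shut down together with the web application; without it, stop/undeploy can leave logging threads running and produce exactly this kind of leak warning. A sketch, assuming Maven and matching the rc1 version mentioned above:

```xml
<!-- log4j-web registers a listener/filter that cleans up the
     LoggerContext when the webapp stops, avoiding leaked threads. -->
<dependency>
  <groupId>org.apache.logging.log4j</groupId>
  <artifactId>log4j-web</artifactId>
  <version>2.0-rc1</version>
</dependency>
```

Whether this resolves the specific trace depends on the container and version, so treat it as a first thing to check rather than a definitive fix.]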

Manuel Teira | 9 Apr 09:57 2014

Compressing only old rollover files

Hello all,

I'm evaluating a switch to log4j 2, since my application is required to
roll over files by age and size (which is where composite triggering
policies come in handy). The rolled-over files should also be compressed,
but only those reaching a given age.

What would be the preferred approach to achieve that with log4j 2? Would
it be reasonable to write a custom rollover strategy, or is there another
out-of-the-box way that may work?

Thanks and best regards,

Manuel.
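[Editorial note: one out-of-band alternative while a custom rollover strategy is being weighed: leave compression off in the log4j configuration and gzip old rollover files from a small scheduled task. A minimal sketch in plain Java — the directory layout, the "*.log" pattern, and the age threshold are all assumptions to adapt:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.FileTime;
import java.time.Instant;
import java.time.temporal.ChronoUnit;
import java.util.zip.GZIPOutputStream;

/** Gzips rolled-over log files older than a cutoff, leaving recent ones alone. */
public class OldLogCompressor {

    /** Compresses every "*.log" file in dir older than the given number of days.
     *  Returns how many files were compressed. */
    public static int compressOlderThan(Path dir, int days) throws IOException {
        Instant cutoff = Instant.now().minus(days, ChronoUnit.DAYS);
        int compressed = 0;
        try (DirectoryStream<Path> files = Files.newDirectoryStream(dir, "*.log")) {
            for (Path file : files) {
                FileTime modified = Files.getLastModifiedTime(file);
                if (modified.toInstant().isBefore(cutoff)) {
                    gzip(file);
                    Files.delete(file);  // remove the uncompressed original
                    compressed++;
                }
            }
        }
        return compressed;
    }

    private static void gzip(Path file) throws IOException {
        Path target = file.resolveSibling(file.getFileName() + ".gz");
        try (InputStream in = Files.newInputStream(file);
             OutputStream out = new GZIPOutputStream(Files.newOutputStream(target))) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
        }
    }
}
```

Run from cron or a ScheduledExecutorService, this keeps the age-based compression concern entirely outside log4j, at the cost of a second moving part.]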
Matt Sicker | 6 Apr 03:10 2014

Slides for my upcoming Introduction to Log4j 2 talk at ApacheCon 2014

I submitted these a bit late due to not noticing when we were supposed to submit them, but I finished them! Attached is a PDF rendering of the slides (hopefully this works).

--
Matt Sicker <boards@gmail.com>

---------------------------------------------------------------------
To unsubscribe, e-mail: log4j-user-unsubscribe@logging.apache.org
For additional commands, e-mail: log4j-user-help@logging.apache.org
Arkin Yetis | 4 Apr 21:04 2014

Flume Appender failure due to filesystem issue

We use the Flume Appender. Our logging stopped after a certain point in
time, and we noticed the exception at the end of this message in our
application logs. It looks like there was an issue with the filesystem.
But even though the filesystem recovered, the appender (or, more likely,
the persistence mechanism it uses) remained stuck in this state, and it
took an application restart for logging to resume. There does not appear
to be a recovery mechanism, or if there is one, it failed.
Would you like me to open a log4j JIRA ticket for this? Or is this
something that can be prevented by something simple you can share over
e-mail, such as a certain configuration setting?

Thanks,
- Arkin

Exception stack is:
1. Stale NFS file handle (java.io.IOException)
  java.io.RandomAccessFile:-2 (null)
2. Environment invalid because of previous exception: (JE 5.0.73)
/app/logs/abs-workflow/flumeDir java.io.IOException: Stale NFS file handle
LOG_READ: IOException on read, log is likely invalid. Environment is
invalid and must be closed. fetchTarget of 0x542/0x4af13c parent IN=5 IN
class=com.sleepycat.je.tree.BIN lastFullVersion=0x543/0x62d6c5
lastLoggedVersion=0x543/0x62d6c5 parent.getDirty()=true state=0
(com.sleepycat.je.EnvironmentFailureException)
  com.sleepycat.je.log.FileManager:1883 (null)

********************************************************************************
Root Exception stack trace:java.io.IOException: Stale NFS file handle
    at java.io.RandomAccessFile.readBytes(Native Method)
    at java.io.RandomAccessFile.read(RandomAccessFile.java:338)
    at
com.sleepycat.je.log.FileManager.readFromFileInternal(FileManager.java:1918)
    at com.sleepycat.je.log.FileManager.readFromFile(FileManager.java:1869)
    at com.sleepycat.je.log.FileManager.readFromFile(FileManager.java:1807)
    at com.sleepycat.je.log.FileSource.getBytes(FileSource.java:56)
    at
com.sleepycat.je.log.LogManager.getLogEntryFromLogSource(LogManager.java:919)
    at com.sleepycat.je.log.LogManager.getLogEntry(LogManager.java:848)
    at
com.sleepycat.je.log.LogManager.getLogEntryAllowInvisibleAtRecovery(LogManager.java:809)
    at com.sleepycat.je.tree.IN.fetchTarget(IN.java:1412)
    at com.sleepycat.je.tree.BIN.fetchTarget(BIN.java:1251)
    at com.sleepycat.je.dbi.CursorImpl.fetchCurrent(CursorImpl.java:2261)
    at
com.sleepycat.je.dbi.CursorImpl.getCurrentAlreadyLatched(CursorImpl.java:1466)
    at com.sleepycat.je.dbi.CursorImpl.getNext(CursorImpl.java:1593)
    at
com.sleepycat.je.cleaner.UtilizationProfile.getObsoleteDetail(UtilizationProfile.java:632)
    at
com.sleepycat.je.cleaner.FileProcessor.processFile(FileProcessor.java:439)
    at
com.sleepycat.je.cleaner.FileProcessor.doClean(FileProcessor.java:289)
    at
com.sleepycat.je.cleaner.FileProcessor.onWakeup(FileProcessor.java:148)
    at com.sleepycat.je.utilint.DaemonThread.run(DaemonThread.java:163)
    at java.lang.Thread.run(Thread.java:662)
********************************************************************************
Mohit Anchlia | 2 Apr 19:00 2014

Non blocking JMS appender

I am trying to configure log4j such that the jms appender is non blocking.
Does this configuration make it non blocking?

<appender name="async" class="org.apache.log4j.AsyncAppender">
  <param name="BufferSize" value="4096"/>
  <param name="blocking" value="false"/>
</appender>

<appender name="search-indexer-async-jms"
          class="org.apache.log4j.net.JMSAppender">
  <param name="InitialContextFactoryName"
         value="org.apache.activemq.jndi.ActiveMQInitialContextFactory"/>
  <param name="ProviderURL" value="tcp://localhost:61616"/>
  <param name="TopicBindingName" value="indexTopicEndpoint"/>
  <param name="TopicConnectionFactoryBindingName" value="ConnectionFactory"/>

  <appender-ref ref="async"/>
</appender>
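[Editorial note: in log4j 1.x the AsyncAppender is the wrapper, not the wrapped: only appenders referenced from inside the AsyncAppender are fed through its buffer, and with blocking set to false, events are dropped rather than blocking the caller when the buffer fills. A sketch of the inverted wiring, reusing the names from the config above:

```xml
<appender name="search-indexer-jms" class="org.apache.log4j.net.JMSAppender">
  <!-- JMS params as in the original config -->
</appender>

<appender name="async" class="org.apache.log4j.AsyncAppender">
  <param name="BufferSize" value="4096"/>
  <param name="blocking" value="false"/>
  <!-- the async appender wraps the JMS appender, not the other way around -->
  <appender-ref ref="search-indexer-jms"/>
</appender>
```

Loggers would then reference "async", so JMS delivery happens on the AsyncAppender's dispatcher thread instead of the application thread.]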
Mohit Anchlia | 1 Apr 21:01 2014

Unit testing log4j JMS Appender

I am trying to unit test log4j with the JMS appender; however,
log4j.properties gets loaded before I can bring up the embedded JMS
broker, and the appender fails to connect to the broker. Is there a way
to reload log4j after the broker is up?
James Hutton | 30 Mar 07:42 2014

Merging log4j2 contexts

I'm looking to leverage log4j 2 in a Spring application that uses Spring
profiles heavily.  I know that in log4j 1.2 we could use the
DOMConfigurator to parse an additional XML file and merge it into the
context. Is there an equivalent way to merge configurations in log4j 2?
James
Rebecca Ahlvarsson | 29 Mar 04:03 2014

Running disruptor async performance tests

I am trying to run the async performance tests described on the link below
on my machine.

http://logging.apache.org/log4j/2.x/manual/async.html#Asynchronous_Logging_Performance

I am not an expert with log4j, so here is how far I got after building
log4j with maven:

java -cp
target/classes:target/test-classes:lib/disruptor-3.2.1.jar:../log4j-api/target/classes
org.apache.logging.log4j.core.async.perftest.PerfTest
org.apache.logging.log4j.core.async.perftest.RunLog4j2 blah blah.log 1
-verbose

Then I get this in the output:

avg=17 99%=32 99.99%=64 sampleCount=5000000
9962247 operations/second

The questions I have are:

1. It looks like the source code of IPerfTestRunner uses a much shorter
message, "Short Msg", instead of the 500-character message stated in the
link above. Is that intentional, or is it a bug? Do we want to test the
latency with the 500-character message or just a short one?

2. I notice that my logs are NOT going to any file. I am probably
misconfiguring something with log4j. How do I generate a file with the
messages from the performance test?

3. I just want to test with one asynchronous logging thread, so I am
passing threadCount 1 above. What does the second parameter 'blah' mean?

4. I am not sure why I get operations/second when I am not passing
-throughput on the command line. I just want the latency numbers for now;
after that I will worry about throughput.

So basically I just want to run the same test you run to see those great
numbers on my production machine.

Thanks for the help!

-Becky
