Mark Moales | 5 Feb 23:12 2016

Performance using FileDataStore/MySql vs. MongoDB in Oak

Hi,

We are currently using Jackrabbit 2 in a production application and are looking to move to Oak. In our Jackrabbit 2 application, we are using MySql, BundleDbPersistenceManager, and a FileDataStore. When using a similar configuration with Oak, we are noticing significant performance issues when removing nodes from the repository. To show the performance issue, I have created a simple example that creates a single parent node of nt:unstructured type and then adds 150 child nodes of the same type. Each node, including the parent, has 10 properties and the mix:referenceable mixin. Here is how I create the DocumentNodeStore:

MySql:

    static FileStore store;
    static FileDataStore fds;
    
    static void initFileStore() throws IOException {
        File testDir = new File("F:\\oak-datastore");
        FileDataStore _fds = new OakFileDataStore();
        _fds.init(testDir.getPath());
        FileStore.Builder fileStoreBuilder = FileStore.newFileStore(testDir)
            .withBlobStore(new DataStoreBlobStore(_fds))
            .withMaxFileSize(256)
            .withCacheSize(64)
            .withMemoryMapping(false);
        store = fileStoreBuilder.create();
        fds = _fds;
    }

    static DocumentNodeStore createDocumentNodeStoreMySql() {
        System.out.println("Using MySql/FileDataStore");
        String url = "jdbc:mysql://localhost:3306/oak";
        String userName = "root";
        String password = "password";
        String driver = "org.mariadb.jdbc.Driver";
        String prefix = "T" + UUID.randomUUID().toString().replace("-", "");
        DataSource ds = RDBDataSourceFactory.forJdbcUrl(url, userName, password, driver);
        RDBOptions options = new RDBOptions().tablePrefix(prefix).dropTablesOnClose(true);
        DocumentNodeStore ns = new DocumentMK.Builder().setAsyncDelay(0).setBlobStore(
            new DataStoreBlobStore(fds)).setClusterId(1).setRDBConnection(
                ds, options).getNodeStore();
        return ns;
    }

MongoDB:

    static DocumentNodeStore createDocumentNodeStoreMongo() {
        System.out.println("Using MongoDB");
        DB db = new MongoClient("127.0.0.1", 27017).getDB("oak");
        DocumentNodeStore ns = new DocumentMK.Builder().setMongoDB(db).getNodeStore();
        return ns;
    }
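
One difference worth noting: the MongoDB variant above does not set a blob
store or setAsyncDelay(0), while the MySql variant does. For a closer
apples-to-apples comparison, the MongoDB store could be configured the same
way (an untested sketch that reuses the fds instance from initFileStore()):

    static DocumentNodeStore createDocumentNodeStoreMongoWithFds() {
        System.out.println("Using MongoDB/FileDataStore");
        DB db = new MongoClient("127.0.0.1", 27017).getDB("oak");
        // Same blob store, async delay, and cluster id as the MySql variant,
        // so only the DocumentStore backend differs between the two runs.
        DocumentNodeStore ns = new DocumentMK.Builder().setAsyncDelay(0).setBlobStore(
            new DataStoreBlobStore(fds)).setClusterId(1).setMongoDB(db).getNodeStore();
        return ns;
    }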

And here is how I create the nodes:
        
    static Node createNodes(Session session) throws RepositoryException {
        Node root = session.getRootNode();
        Date start = new Date();
        Node parent = root.addNode("Test", "nt:unstructured");
        totalNodes++;
        //
        // Commenting out the following line speeds up delete.
        //
        parent.addMixin("mix:referenceable");

        for (int p = 0; p < 10; p++) {
            parent.setProperty("count" + p, p);
        }
        for (int i = 0; i < 150; i++) {
            createChild(parent, i);
        }
        session.save();
        Date end = new Date();
        double seconds = (double) (end.getTime() - start.getTime()) / 1000.00;
        System.out.println("Created " + totalNodes + " nodes in " + seconds + " seconds");
        return parent;
    }
    
    static void createChild(Node parent, int index)
        throws RepositoryException {
        Node child = parent.addNode("Test" + index, "nt:unstructured");
        totalNodes++;
        //
        // Commenting out the following line speeds up delete.
        //
        child.addMixin("mix:referenceable");
        
        for (int p = 0; p < 10; p++) {
            child.setProperty("count" + p, p);
        }
    }

Finally, here is how I remove the parent node:
    
    static void deleteNodes(Session session, Node parent) throws RepositoryException {
        Date start = new Date();
        parent.remove();
        session.save();
        Date end = new Date();
        double seconds = (double) (end.getTime() - start.getTime()) / 1000.00;
        System.out.println("Deleted " + totalNodes + " nodes in " + seconds + " seconds");
    }

When I run using MongoDB:

Using MongoDB
Created 151 nodes in 0.31 seconds
Deleted 151 nodes in 0.341 seconds

When I run using MySql/FileDataStore:

Using MySql/FileDataStore
Created 151 nodes in 0.391 seconds
Deleted 151 nodes in 10.946 seconds

As you can see, deleting the nodes is quite slow using MySql/FileDataStore. Using Jackrabbit 2, a similar type of operation using MySql and file data store takes approximately 1 second. Can anyone shed any light on why it is taking so long to remove the nodes? Is there something I should be doing differently when creating the DocumentNodeStore?
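
One more data point that might help narrow it down: instead of removing the
parent in a single save, the children could be removed in batches with a save
(and timing) per batch, to see whether the cost grows as more mix:referenceable
nodes are deleted. A rough, untested sketch (batchSize is just an illustrative
parameter):

    static void deleteChildrenInBatches(Session session, Node parent, int batchSize)
        throws RepositoryException {
        NodeIterator it = parent.getNodes();
        int inBatch = 0;
        long batchStart = System.currentTimeMillis();
        while (it.hasNext()) {
            it.nextNode().remove();
            if (++inBatch == batchSize) {
                session.save();
                System.out.println("Deleted batch of " + inBatch + " nodes in "
                    + (System.currentTimeMillis() - batchStart) + " ms");
                inBatch = 0;
                batchStart = System.currentTimeMillis();
            }
        }
        // The final save covers any remaining children plus the parent itself.
        parent.remove();
        session.save();
    }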

Thanks,

Mark

smg11 | 3 Feb 14:27 2016

Jackrabbit Oak: DocumentStoreException- blob expected: value

Hi

I am getting a DocumentStoreException very frequently and inconsistently. The
back end is Mongo/RDB. The stack trace is attached below.
How can I correct the repository? Is there any tool/steps/process available to
recover?

Awaiting an early response.

Stack trace:
org.apache.jackrabbit.oak.plugins.document.DocumentStoreException:
org.apache.jackrabbit.oak.plugins.document.DocumentStoreException:
org.apache.jackrabbit.oak.plugins.document.DocumentStoreException:
java.lang.IllegalArgumentException:
"blob",[["=","_lastRev","r0-0-1","r1529d00d8cc-0-1"]], ........expected:
value


multi | 29 Jan 09:17 2016

Rebuild only a single Workspace index

Hi guys,

we use a repository with several workspaces. Is it possible to rebuild only
the index of one of these workspaces by deleting that workspace's index
folder and restarting the system?

I know there is also a repository-wide index containing the node types,
version store, etc. under /repository/index.
Are there any risks in keeping this repository index and the other workspace
indexes?

Thanks for your advice. Have a nice day.
Greetings


hsp | 28 Jan 18:03 2016

Phrase searches not working with BrazilianAnalyzer

I am digging into this issue (that is what it is for me), and to get some
feedback from you, I have put together a little Java project with a class
FirstHops.java that has a main method to execute. Run it as an example of
what I am trying to explain: the phrase search is not working the way I think
it should. I do not know whether this is only a Lucene issue or whether it is
in the Jackrabbit code, but to summarize, the result for me is this:

I am using the BrazilianAnalyzer in my project, and whenever a phrase has
stopwords between two words, the phrase search returns no results for that
phrase.
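
To make this concrete with a made-up example: suppose one node's text contains
"relatorio anual" and another contains "relatorio de vendas" ("de" being a
Portuguese stopword). A phrase search for the first finds its node, while the
phrase containing the stopword comes back empty. The queries would look
something like this (node type and text are just placeholders):

    QueryManager qm = session.getWorkspace().getQueryManager();

    // Phrase without a stopword between the words: the node is found.
    Query ok = qm.createQuery(
        "select * from [nt:unstructured] as n where contains(n.*, '\"relatorio anual\"')",
        Query.JCR_SQL2);

    // Phrase with the stopword "de" between the words: no results,
    // even though a node containing "relatorio de vendas" exists.
    Query empty = qm.createQuery(
        "select * from [nt:unstructured] as n where contains(n.*, '\"relatorio de vendas\"')",
        Query.JCR_SQL2);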

If I am not being clear enough, let me know and I will try to give more
examples so you can see what I mean.

(sorry for my bad English...)

The output after running the project will be:


Torgeir Veimo | 28 Jan 02:36 2016

query with isdecendantnode join causes duplicates in results

I have a query where I select nodes of a type based on values in some child
nodes of those nodes, so I use isdescendantnode. This causes some entries to
be returned twice in the results when a node has multiple children, some of
which meet the first criterion and some the other.

Is there a way to avoid this by using a different query?

select asset.* from [ka:asset] as asset
inner join [ka:asset] as agreement on isdescendantnode(agreement, asset)
where agreement.[ka:assetType] = 'agreement'
and asset.[ka:assetType] = 'company'
and (agreement.[ka:owner] = '5e9d5632-c219-44bd-b6e1-f96e9f3ff37e' or
  agreement.[ka:owner] = '731f35f6-e205-4a53-8ca8-144a6f7dec26')
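
The only workaround I can think of so far is to de-duplicate on the application
side by node identifier (as far as I know JCR-SQL2 has no DISTINCT), roughly
like the sketch below (variable names are placeholders), but a pure query
solution would be nicer:

    QueryManager qm = session.getWorkspace().getQueryManager();
    Query query = qm.createQuery(statement, Query.JCR_SQL2); // statement = the join query above
    QueryResult result = query.execute();

    // Keep each asset only once, keyed by its identifier, preserving result order.
    Map<String, Node> assets = new LinkedHashMap<String, Node>();
    for (RowIterator rows = result.getRows(); rows.hasNext();) {
        Node asset = rows.nextRow().getNode("asset");
        assets.put(asset.getIdentifier(), asset);
    }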

-- 
-Tor
Amit Jain | 25 Jan 04:22 2016

[ANNOUNCE] Apache Jackrabbit Oak 1.2.10 released

The Apache Jackrabbit community is pleased to announce the release of
Apache Jackrabbit Oak 1.2.10. The release is available for download at:

    http://jackrabbit.apache.org/downloads.html

See the full release notes below for details about this release:

Release Notes -- Apache Jackrabbit Oak -- Version 1.2.10

Introduction
------------

Jackrabbit Oak is a scalable, high-performance hierarchical content
repository designed for use as the foundation of modern world-class
web sites and other demanding content applications.

Apache Jackrabbit Oak 1.2.10 is a patch release that contains fixes and
improvements over Oak 1.2. Jackrabbit Oak 1.2.x releases are considered
stable and targeted for production use.

The Oak effort is a part of the Apache Jackrabbit project.
Apache Jackrabbit is a project of the Apache Software Foundation.

Changes in Oak 1.2.10
--------------------

Technical task

    [OAK-3645] - RDBDocumentStore: server time detection for DB2 fails due
    to timezone/dst differences
    [OAK-3662] - Add bulk createOrUpdate method to the DocumentStore API
    [OAK-3729] - RDBDocumentStore: implement RDB-specific VersionGC support
    for lookup of deleted documents
    [OAK-3730] - RDBDocumentStore: implement RDB-specific VersionGC support
    for lookup of split documents
    [OAK-3739] - RDBDocumentStore: allow schema evolution part 1: check for
    required columns, log unexpected new columns
    [OAK-3764] - RDB/NodeStoreFixture fails to track DataSource instances
    [OAK-3807] - Oracle DB doesn't support lists longer than 1000
    [OAK-3816] - RDBBlobStoreTest should use named parameters
    [OAK-3851] - RDB*Store: update PostgreSQL and MySQL JDBC driver
    dependencies
    [OAK-3852] - RDBDocumentStore: batched append logic may loose property
    changes
    [OAK-3867] - RDBDocumentStore: refactor JSON support
    [OAK-3896] - RDBDocumentStore: export tool - improve handling of export
    files allowing to override column order

Bug

    [OAK-1648] - Creating multiple checkpoint on same head revision
    overwrites previous entries
    [OAK-3424] - ClusterNodeInfo does not pick an existing entry on startup
    [OAK-3733] - Sometimes hierarchy conflict between concurrent add/delete
    isn't detected
    [OAK-3763] - EmptyNodeState.equals() broken
    [OAK-3765] - Parallelized test runner does not wait for test completion
    [OAK-3769] - QueryParse exception when fulltext search performed with
    term having '/'
    [OAK-3863] - [oak-blob-cloud] Incorrect export package
    [OAK-3872] - [RDB] Updated blob still deleted even if deletion interval
    lower
    [OAK-3891] - AsyncIndexUpdateLeaseTest doesn't use the provided
    NodeStore

Improvement

    [OAK-3436] - Prevent missing checkpoint due to unstable topology from
    causing complete reindexing
    [OAK-3572] - enhance logging in TypeEditorProvider
    [OAK-3577] - NameValidator diagnostics could be more helpful
    [OAK-3830] - Provide size for properties for PropertyItearator returned
    in Node#getProperties(namePattern)
    [OAK-3831] - Allow relative property to be indexed but excluded from
    aggregation
    [OAK-3885] - enhance stability of clusterNodeInfo's machineId

Task

    [OAK-3750] - BasicDocumentStoreTest: improve robustness of
    .removeWithCondition test

Test

    [OAK-3754] - RepositoryStub does not dispose DocumentStore

In addition to the above-mentioned changes, this release contains
all changes included up to the Apache Jackrabbit Oak 1.2.9 release.

For more detailed information about all the changes in this and other
Oak releases, please see the Oak issue tracker at

  https://issues.apache.org/jira/browse/OAK

Release Contents
----------------

This release consists of a single source archive packaged as a zip file.
The archive can be unpacked with the jar tool from your JDK installation.
See the README.md file for instructions on how to build this release.

The source archive is accompanied by SHA1 and MD5 checksums and a PGP
signature that you can use to verify the authenticity of your download.
The public key used for the PGP signature can be found at
http://www.apache.org/dist/jackrabbit/KEYS.

About Apache Jackrabbit Oak
---------------------------

Jackrabbit Oak is a scalable, high-performance hierarchical content
repository designed for use as the foundation of modern world-class
web sites and other demanding content applications.

The Oak effort is a part of the Apache Jackrabbit project.
Apache Jackrabbit is a project of the Apache Software Foundation.

For more information, visit http://jackrabbit.apache.org/oak

About The Apache Software Foundation
------------------------------------

Established in 1999, The Apache Software Foundation provides organizational,
legal, and financial support for more than 140 freely-available,
collaboratively-developed Open Source projects. The pragmatic Apache License
enables individual and commercial users to easily deploy Apache software;
the Foundation's intellectual property framework limits the legal exposure
of its 3,800+ contributors.

For more information, visit http://www.apache.org/
Davide Giannella | 22 Jan 10:29 2016

[ANNOUNCE] Apache Jackrabbit Oak 1.3.14 released

The Apache Jackrabbit community is pleased to announce the release of
Apache Jackrabbit Oak 1.3.14. The release is available for download at:

    http://jackrabbit.apache.org/downloads.html

See the full release notes below for details about this release:

Technical task

    [OAK-3637] - Bulk document updates in RDBDocumentStore
    [OAK-3645] - RDBDocumentStore: server time detection for DB2 fails
    due to timezone/dst differences
    [OAK-3739] - RDBDocumentStore: allow schema evolution part 1:
    check for required columns, log unexpected new columns
    [OAK-3843] - MS SQL doesn't support more than 2100 parameters in
    one request
    [OAK-3845] - AbstractRDBConnectionTest fails to close the
    DataSource
    [OAK-3851] - RDB*Store: update PostgreSQL and MySQL JDBC driver
    dependencies
    [OAK-3852] - RDBDocumentStore: batched append logic may loose
    property changes
    [OAK-3867] - RDBDocumentStore: refactor JSON support
    [OAK-3868] - Move createSegmentWriter() from FileStore to
    SegmentTracker
    [OAK-3873] - Don't pass the compaction map to FileStore.cleanup

Bug

    [OAK-2592] - Commit does not ensure w:majority
    [OAK-3470] - Utils.estimateMemoryUsage has a NoClassDefFoundError
    when Mongo is not being used
    [OAK-3634] - RDB/MongoDocumentStore may return stale documents
    [OAK-3646] - Inconsistent read of hierarchy
    [OAK-3653] - Incorrect last revision of cached node state
    [OAK-3769] - QueryParse exception when fulltext search performed
    with term having '/'
    [OAK-3826] - Lucene index augmentation doesn't work in Osgi
    environment
    [OAK-3838] - IndexPlanner incorrectly lets all full text indices
    to participate for suggest/spellcheck queries
    [OAK-3848] - ConcurrentAddNodesClusterIT.addNodesConcurrent()
    fails occasionally
    [OAK-3849] - After partial migration versions are not restorable
    [OAK-3856] - Potential NPE in SegmentWriter
    [OAK-3859] - Suspended commit depends on non-conflicting change
    [OAK-3863] - [oak-blob-cloud] Incorrect export package
    [OAK-3864] - Filestore cleanup removes referenced segments
    [OAK-3872] - [RDB] Updated blob still deleted even if deletion
    interval lower
    [OAK-3882] - Collision may mark the wrong commit
    [OAK-3891] - AsyncIndexUpdateLeaseTest doesn't use the provided
    NodeStore

Improvement

    [OAK-2472] - Add support for atomic counters on cluster solutions
    [OAK-3577] - NameValidator diagnostics could be more helpful
    [OAK-3727] - Broadcasting cache: auto-configuration
    [OAK-3791] - Time measurements for DocumentStore methods
    [OAK-3812] - Disable compaction gain estimation if compaction is
    paused
    [OAK-3830] - Provide size for properties for PropertyItearator
    returned in Node#getProperties(namePattern)
    [OAK-3836] - Convert simple versionable nodes during upgrade
    [OAK-3841] - Change return type of Document.getModCount() to Long
    [OAK-3847] - Provide an easy way to parse/retrieve facets
    [OAK-3853] - Improve SegmentGraph resilience
    [OAK-3857] - Simplify SegmentGraphTest
    [OAK-3861] - MapRecord reduce extra loop in MapEntry creation
    [OAK-3877] - PerfLogger should use System.nanoTime instead of
    System.currentTimeMillis
    [OAK-3885] - enhance stability of clusterNodeInfo's machineId
    [OAK-3890] - Robuster test expectations for FileStoreIT

New Feature

    [OAK-3819] - Collect and expose statistics related to Segment
    FileStore operations

Task

    [OAK-3799] - Drop module oak-js
    [OAK-3803] - Clean up the fixtures code in core and jcr modules

Test

    [OAK-3874] - DocumentToExternalMigrationTest fails occasionally

Sanjeev | 21 Jan 17:03 2016

JBoss XAPool - No connection allowed for anonymous user

Hi,

We are seeing these warning messages in our logs. It looks like JBoss is trying
to create connections for the XA pool and is running into this
AnonymousConnection class. I'm not sure why it's trying to create an
AnonymousConnection. The application is working fine otherwise; user sessions
are created and saved with no issues.

Does this have to do with how Jackrabbit is configured? Below are the warning
log and configuration. I appreciate your help.

Environment --- JBoss EAP 6.1.1 (JBoss AS 7.2.1) / JackRabbit 2.8.1

11:01:02,487 WARN  [com.arjuna.ats.jta] (Periodic Recovery) ARJUNA016009:
Caught:: java.lang.UnsupportedOperationException: No connection allowed for
anonymous user.
        at
org.apache.jackrabbit.jca.AnonymousConnection.getConnection(AnonymousConnection.java:110)
        at
org.jboss.jca.core.tx.jbossts.XAResourceRecoveryImpl.openConnection(XAResourceRecoveryImpl.java:426)
        at
org.jboss.jca.core.tx.jbossts.XAResourceRecoveryImpl.getXAResources(XAResourceRecoveryImpl.java:176)
        at
com.arjuna.ats.internal.jbossatx.jta.XAResourceRecoveryHelperWrapper.getXAResources(XAResourceRecoveryHelperWrapper.java:51)
[jbossjts-integration-4.17.7.Final-redhat-4.jar:4.17.7.Final-redhat-4]
        at
com.arjuna.ats.internal.jta.recovery.arjunacore.XARecoveryModule.resourceInitiatedRecoveryForRecoveryHelpers(XARecoveryModule.java:500)
[jbossjts-jacorb-4.17.7.Final-redhat-4.jar:4.17.7.Final-redhat-4]
        at
com.arjuna.ats.internal.jta.recovery.arjunacore.XARecoveryModule.periodicWorkFirstPass(XARecoveryModule.java:158)
[jbossjts-jacorb-4.17.7.Final-redhat-4.jar:4.17.7.Final-redhat-4]
        at
com.arjuna.ats.internal.arjuna.recovery.PeriodicRecovery.doWorkInternal(PeriodicRecovery.java:743)
[jbossjts-jacorb-4.17.7.Final-redhat-4.jar:4.17.7.Final-redhat-4]
        at
com.arjuna.ats.internal.arjuna.recovery.PeriodicRecovery.run(PeriodicRecovery.java:371)
[jbossjts-jacorb-4.17.7.Final-redhat-4.jar:4.17.7.Final-redhat-4]

Standalone-full.xml

        <subsystem xmlns="urn:jboss:domain:resource-adapters:1.1">
            <resource-adapters>
                <resource-adapter id="jackrabbit-jca-2.8.1.rar">
                    <archive>
                        jackrabbit-jca-2.8.1.rar
                    </archive>
                    <transaction-support>XATransaction</transaction-support>
                    <connection-definitions>
                        <connection-definition
                                class-name="org.apache.jackrabbit.jca.JCAManagedConnectionFactory"
                                jndi-name="java:/jcr/local" enabled="true" use-java-context="true"
                                pool-name="JCRPool" use-ccm="true">
                            <config-property name="ConfigFile">
                                /opt/jackrabbit/repository.xml
                            </config-property>
                            <config-property name="HomeDir">
                                /opt/jackrabbit
                            </config-property>
                             <xa-pool>
                                <min-pool-size>3</min-pool-size>
                                <max-pool-size>40</max-pool-size>
                                <prefill>true</prefill>
                                <use-strict-min>true</use-strict-min>
                            </xa-pool>
                            <security>
                                <security-domain>Jackrabbit</security-domain>
                            </security>
                        </connection-definition>
                    </connection-definitions>
                </resource-adapter>
            </resource-adapters>
        </subsystem>

                <security-domain name="Jackrabbit" cache-type="default">
                    <authentication>
                        <login-module code="org.jboss.security.auth.spi.DatabaseServerLoginModule"
                                      flag="required">
                            <module-option name="unauthenticatedIdentity" value="ANONYMOUS"/>
                            <module-option name="dsJndiName" value="java:/jdbc/jcr"/>
                            <module-option name="principalsQuery" value="SELECT * FROM JCR_USER WHERE userid = ?"/>
                            <module-option name="rolesQuery" value="SELECT ROLE_DESC, 'Roles' FROM JCR_PROJ_ROLE WHERE USERID=?"/>
                        </login-module>
                    </authentication>
                </security-domain>

repository.xml

    <Security appName="Jackrabbit">
        <SecurityManager class="org.apache.jackrabbit.core.security.simple.SimpleSecurityManager"
                         workspaceName="security">
        </SecurityManager>

        <AccessManager class="org.apache.jackrabbit.core.security.simple.SimpleAccessManager">
        </AccessManager>
    </Security>


Pewpew2001 | 20 Jan 09:20 2016

repository.xml reset on each tomcat startup (liferay ?)

Hi, 

I've migrated my Liferay from 6.0 to 6.2, and now each time I start my
server, the repository.xml located in /data/jackrabbit is reset; all the
settings are empty!

Has anyone ever encountered this problem?

Any help will be appreciated.

Thanks


Setty, Uday | 11 Jan 15:43 2016

Jackrabbit-webapp-2.10.1 deployment error on WebLogic 12.1.3 domain

Hi All,

I am not able to deploy jackrabbit-webapp-2.10.1 to my WebLogic 12.1.3 domain; I am getting the following error
while deploying.

Questions:
Has anything changed related to jackrabbit-webapp deployment? Why did I start getting this error after 2.6.1?

Any help or hint would be greatly appreciated.

Note:

1. I am able to deploy 2.6.1 successfully.

2. The jcr-2.0.jar file is in the domain's lib directory.

3. Any version after 2.6.1 gives the following error:

####<Jan 8, 2016 2:41:43 PM EST> <Warning> <Deployer> <UISWHLPT1822040> <AdminServer> <[ACTIVE]
ExecuteThread: '5' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <>
<1452282103357> <BEA-149078> <Stack trace for message 149004
weblogic.application.ModuleException: java.lang.VerifyError: Cannot inherit from final class
                at weblogic.application.internal.ExtensibleModuleWrapper.prepare(ExtensibleModuleWrapper.java:114)
                at weblogic.application.internal.flow.ModuleListenerInvoker.prepare(ModuleListenerInvoker.java:100)
                at weblogic.application.internal.flow.ModuleStateDriver$1.next(ModuleStateDriver.java:175)
                at weblogic.application.internal.flow.ModuleStateDriver$1.next(ModuleStateDriver.java:170)
                at weblogic.application.utils.StateMachineDriver$ParallelChange.run(StateMachineDriver.java:80)
                at weblogic.work.ContextWrap.run(ContextWrap.java:40)
                at weblogic.work.SelfTuningWorkManagerImpl$WorkAdapterImpl.run(SelfTuningWorkManagerImpl.java:548)
                at weblogic.work.ExecuteThread.execute(ExecuteThread.java:311)
                at weblogic.work.ExecuteThread.run(ExecuteThread.java:263)
Caused By: java.lang.VerifyError: Cannot inherit from final class
                at java.lang.ClassLoader.defineClass1(Native Method)
                at java.lang.ClassLoader.defineClass(ClassLoader.java:760)
                at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
                at weblogic.utils.classloaders.GenericClassLoader.defineClass(GenericClassLoader.java:412)
                at weblogic.utils.classloaders.GenericClassLoader.findLocalClass(GenericClassLoader.java:366)
                at weblogic.utils.classloaders.GenericClassLoader.findClass(GenericClassLoader.java:318)
                at weblogic.utils.classloaders.ChangeAwareClassLoader.findClass(ChangeAwareClassLoader.java:80)
                at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
                at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
                at weblogic.utils.classloaders.GenericClassLoader.loadClass(GenericClassLoader.java:186)
                at weblogic.utils.classloaders.ChangeAwareClassLoader.loadClass(ChangeAwareClassLoader.java:50)
                at com.oracle.injection.integration.BeanLoaderUtils.loadBeanClassesFromJar(BeanLoaderUtils.java:54)
                at com.oracle.injection.integration.BeanLoaderUtils.loadBeanClassesFromEmbeddedJar(BeanLoaderUtils.java:34)
                at com.oracle.injection.integration.CDIModuleExtension.loadBeanClassesFromEmbeddedJar(CDIModuleExtension.java:732)
                at com.oracle.injection.integration.CDIModuleExtension.makeInjectionArchivesForResourceType(CDIModuleExtension.java:531)
                at com.oracle.injection.integration.CDIModuleExtension.createLibInjectionArchives(CDIModuleExtension.java:489)
                at com.oracle.injection.integration.CDIModuleExtension.createWebModuleInjectionArchive(CDIModuleExtension.java:196)
                at com.oracle.injection.integration.CDIModuleExtension.createInjectionArchive(CDIModuleExtension.java:182)
                at com.oracle.injection.integration.CDIModuleExtension.postPrepare(CDIModuleExtension.java:85)
                at weblogic.application.internal.ExtensibleModuleWrapper$PrepareStateChange.next(ExtensibleModuleWrapper.java:297)
                at weblogic.application.internal.ExtensibleModuleWrapper$PrepareStateChange.next(ExtensibleModuleWrapper.java:285)
                at weblogic.application.utils.StateMachineDriver.nextState(StateMachineDriver.java:42)
                at weblogic.application.internal.ExtensibleModuleWrapper.prepare(ExtensibleModuleWrapper.java:109)
                at weblogic.application.internal.flow.ModuleListenerInvoker.prepare(ModuleListenerInvoker.java:100)
                at weblogic.application.internal.flow.ModuleStateDriver$1.next(ModuleStateDriver.java:175)
                at weblogic.application.internal.flow.ModuleStateDriver$1.next(ModuleStateDriver.java:170)
                at weblogic.application.utils.StateMachineDriver$ParallelChange.run(StateMachineDriver.java:80)
                at weblogic.work.ContextWrap.run(ContextWrap.java:40)
                at weblogic.work.SelfTuningWorkManagerImpl$WorkAdapterImpl.run(SelfTuningWorkManagerImpl.java:548)
                at weblogic.work.ExecuteThread.execute(ExecuteThread.java:311)
                at weblogic.work.ExecuteThread.run(ExecuteThread.java:263)

Thanks
Uday

Davide Giannella | 7 Jan 15:21 2016

[ANNOUNCE] Apache Jackrabbit Oak 1.3.13 released

The Apache Jackrabbit community is pleased to announce the release of
Apache Jackrabbit Oak 1.3.13. The release is available for download at:

    http://jackrabbit.apache.org/downloads.html

See the full release notes below for details about this release:

Release Notes -- Apache Jackrabbit Oak -- Version 1.3.13

Introduction
------------

Jackrabbit Oak is a scalable, high-performance hierarchical content
repository designed for use as the foundation of modern world-class
web sites and other demanding content applications.

Apache Jackrabbit Oak 1.3.13 is an unstable release cut directly from
Jackrabbit Oak trunk, with a focus on new features and other
improvements. For production use we recommend the latest stable 1.2.x
release.

The Oak effort is a part of the Apache Jackrabbit project.
Apache Jackrabbit is a project of the Apache Software Foundation.

Changes in Oak 1.3.13
---------------------

Sub-task

    [OAK-2509] - Support for faceted search in query engine
    [OAK-2510] - Support for faceted search in Solr index
    [OAK-2511] - Support for faceted search in Lucene index
    [OAK-2512] - ACL filtering for faceted search

Technical task

    [OAK-3586] - ConflictException and CommitQueue should support a
    list of revisions
    [OAK-3620] - Increase lock stripes in RDBDocumentStore
    [OAK-3662] - Add bulk createOrUpdate method to the DocumentStore
    API
    [OAK-3729] - RDBDocumentStore: implement RDB-specific VersionGC
    support for lookup of deleted documents
    [OAK-3730] - RDBDocumentStore: implement RDB-specific VersionGC
    support for lookup of split documents
    [OAK-3764] - RDB/NodeStoreFixture fails to track DataSource
    instances
    [OAK-3774] - Tool for detecting references to pre compacted
    segments
    [OAK-3785] - IndexDefinition should expose underlying node state
    [OAK-3807] - Oracle DB doesn't support lists longer than 1000
    [OAK-3816] - RDBBlobStoreTest should use named parameters

Bug

    [OAK-2656] - Test failures in LDAP authentication: Failed to bind
    an LDAP service
    [OAK-2877] - Test failure: OrderableNodesTest.setPrimaryType
    [OAK-2878] - Test failure: AutoCreatedItemsTest.autoCreatedItems
    [OAK-3295] - Test failure: NodeTypeTest.trivialUpdates
    [OAK-3424] - ClusterNodeInfo does not pick an existing entry on
    startup
    [OAK-3663] - LastRevRecoveryRandomizedIT fails with seed 10848868
    [OAK-3668] - Potential test failure:
    CompactionAndCleanupIT#testMixedSegments
    [OAK-3733] - Sometimes hierarchy conflict between concurrent
    add/delete isn't detected
    [OAK-3740] - ValueImpl has references on classes internal to
    SegmentStore
    [OAK-3741] - AbstractCheckpointMBean references
    SegmentCheckpointMBean
    [OAK-3751] - Limit the unique index "authorizableId" to the
    "rep:Authorizable" nodetype
    [OAK-3756] - NodeStateUtils wrong indentation for toString method
    [OAK-3759] - UserManager.onCreate is not omitted for system users
    in case of XML import
    [OAK-3762] - StandbyServerhandler catches IllegalStateException
    instead of IllegalRepositoryStateException
    [OAK-3763] - EmptyNodeState.equals() broken
    [OAK-3765] - Parallelized test runner does not wait for test
    completion
    [OAK-3775] - Inconsistency between Node.getPrimaryType and
    Node.isNodeType
    [OAK-3792] - Provide Simple Exception Name in Credentials
    Attribute for PW Expiry
    [OAK-3793] - The Explorer should expect loops in the segment graph
    [OAK-3794] - The Cold Standby should expect loops in the segment
    graph
    [OAK-3798] - NodeDocument.getNewestRevision() incorrect when there
    are previous documents
    [OAK-3802] - SessionMBean not getting registered due to
    MalformedObjectNameException
    [OAK-3817] - Hidden properties (one prefixed with ':') in lucene's
    analyzer configuration fail to construct analyzers
    [OAK-3821] - Lucene directory: improve exception messages

Documentation

    [OAK-3736] - Document changing OOTB index definitions
    [OAK-3808] - Fix broken link on 'Backward compatibility' - 'Query'
    section

Epic

    [OAK-144] - Implement observation

Improvement

    [OAK-3436] - Prevent missing checkpoint due to unstable topology
    from causing complete reindexing
    [OAK-3519] - Some improvement to SyncMBeanImpl
    [OAK-3529] - NodeStore API should expose an Instance ID
    [OAK-3576] - Allow custom extension to augment indexed lucene
    documents
    [OAK-3649] - Extract node document cache from Mongo and RDB
    document stores
    [OAK-3703] - Improve handling of IOException
    [OAK-3707] - Register composite commit hook with whiteboard
    [OAK-3721] - Reduce code duplication in MembershipProvider
    [OAK-3728] - Document indexes in the index itself
    [OAK-3745] - Introduce an exception in the Content Repository API
    to represent an invalid state of the repository
    [OAK-3773] - Include segment information in Segment.toString
    [OAK-3795] - FileStore#compact should throw ISE instead of IAE
    when no compaction strategy is set
    [OAK-3804] - Add tarmk revision recovery listing to oak-run
    [OAK-3805] - Add support for Metrics Histogram
    [OAK-3820] - Add inc and dec by specific size support in
    CounterStats
    [OAK-3829] - Expose BlobStore cache statistics
    [OAK-3831] - Allow relative property to be indexed but excluded
    from aggregation

New Feature

    [OAK-1736] - Support for Faceted Search
    [OAK-3806] - Collect and expose statistics related to BlobStore
    operations

Task

    [OAK-3747] - VersionGarbageCollectorIT: use name annotation for
    test parameters
    [OAK-3749] - Implement tooling for tracing a node through the
    revision history
    [OAK-3750] - BasicDocumentStoreTest: improve robustness of
    .removeWithCondition test
    [OAK-3755] - Remove the special in-place upgrade handling from
    oak-upgrade
    [OAK-3768] - Remove OrderedPropertyIndex support from trunk
    [OAK-3823] - Expose the count maintained by various stats
    [OAK-3824] - StatisticsProvider should provide a way to disable
    TimeSeries for certain metrics

Test

    [OAK-3754] - RepositoryStub does not dispose DocumentStore

In addition to the above-mentioned changes, this release contains
all changes included up to the Apache Jackrabbit Oak 1.2.x release.

For more detailed information about all the changes in this and other
Oak releases, please see the Oak issue tracker at

  https://issues.apache.org/jira/browse/OAK

Release Contents
----------------

This release consists of a single source archive packaged as a zip file.
The archive can be unpacked with the jar tool from your JDK installation.
See the README.md file for instructions on how to build this release.

The source archive is accompanied by SHA1 and MD5 checksums and a PGP
signature that you can use to verify the authenticity of your download.
The public key used for the PGP signature can be found at
http://www.apache.org/dist/jackrabbit/KEYS.

About Apache Jackrabbit Oak
---------------------------

Jackrabbit Oak is a scalable, high-performance hierarchical content
repository designed for use as the foundation of modern world-class
web sites and other demanding content applications.

The Oak effort is a part of the Apache Jackrabbit project. 
Apache Jackrabbit is a project of the Apache Software Foundation.

For more information, visit http://jackrabbit.apache.org/oak

About The Apache Software Foundation
------------------------------------

Established in 1999, The Apache Software Foundation provides organizational,
legal, and financial support for more than 140 freely-available,
collaboratively-developed Open Source projects. The pragmatic Apache License
enables individual and commercial users to easily deploy Apache software;
the Foundation's intellectual property framework limits the legal exposure
of its 3,800+ contributors.

For more information, visit http://www.apache.org/

