hsp | 23 Mar 18:13 2015

File persistence with NAS (NFS)

Hi,

We are using the Jackrabbit API, version 2.8.
Recently the environment was migrated to a NAS (Isilon), so the server
(SUSE SLES) now mounts the storage via NFS. In principle this should be
transparent to Jackrabbit, so the configuration
(repository.xml/workspace.xml) was not changed.
The application works, but when Jackrabbit writes a file the iowait on the
server rises sharply, and files larger than 10 MB cannot be saved because
the session times out.
We tested copying files from the server to the NAS directly and got about
30 MB/s, so why does the throughput drop to 2-5 MB/s when Jackrabbit
writes the file to the NAS?
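
For reference, the write on our side is essentially a plain JCR binary
upload along the lines of the sketch below (class, path and node names
are illustrative, not our actual code):

import java.io.FileInputStream;
import java.io.InputStream;
import javax.jcr.Binary;
import javax.jcr.Node;
import javax.jcr.Repository;
import javax.jcr.Session;
import javax.jcr.SimpleCredentials;

public class UploadSketch {

    public static void upload(Repository repository) throws Exception {
        Session session = repository.login(
                new SimpleCredentials("admin", "admin".toCharArray()));
        try (InputStream in = new FileInputStream("/tmp/large-file.bin")) {
            // Standard nt:file / nt:resource structure for the binary.
            Node folder = session.getRootNode().addNode("files", "nt:folder");
            Node file = folder.addNode("large-file.bin", "nt:file");
            Node content = file.addNode("jcr:content", "nt:resource");
            Binary binary = session.getValueFactory().createBinary(in);
            content.setProperty("jcr:data", binary);
            // The iowait spike and the session timeout hit while this
            // save streams the binary onto the NFS mount.
            session.save();
        } finally {
            session.logout();
        }
    }
}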

Appreciate any help with this scenario. Regards!

Just for information, here is our repository.xml:


AtulMaurya | 20 Mar 20:30 2015

Repository Inconsistency: failed to build path of xxx-xxx-xxx-xxx-xxx: xxx-xxx-xxx-xxx-yyy has no child entry for xxx-xxx-xxx-xxx-xxx

Hello,

We are facing this repository inconsistency with an existing repository
(version 1.5.5). The database size is around 1 TB. We have run the
consistency check and it did not resolve the issue. From what I found on
other forums, running the consistency check alone does not fix this kind
of inconsistency; the corrupted nodes have to be removed from the
repository. Could someone please suggest the best approach to resolve
the issue? These are the options we are considering:

1. Running the consistency check (it did not resolve the issue; the
configuration we used is roughly the one sketched below).
2. A utility to identify and repair/remove the corrupted nodes. If such a
utility exists, could you please share it or any references to start with?

Or any other approach that would be more helpful. It is a very critical
issue and any help is really valuable.
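
For completeness, the consistency check in (1) was run by enabling the
check/fix parameters on the bundle persistence manager, along these lines
(the persistence manager class and connection parameters below are
illustrative, not our actual repository.xml):

<PersistenceManager class="org.apache.jackrabbit.core.persistence.bundle.BundleDbPersistenceManager">
    <!-- existing driver/url/schema parameters stay unchanged -->
    <param name="consistencyCheck" value="true"/>
    <param name="consistencyFix" value="true"/>
</PersistenceManager>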

Thanks in advance,
Atul


Marcel Reutegger | 13 Mar 22:43 2015

[ANNOUNCE] Apache Jackrabbit Oak 1.0.12 released

The Apache Jackrabbit community is pleased to announce the release of
Apache Jackrabbit Oak 1.0.12. The release is available for download at:

  http://jackrabbit.apache.org/downloads.html

See the full release notes below for details about this release.

Release Notes -- Apache Jackrabbit Oak -- Version 1.0.12

Introduction
------------

Jackrabbit Oak is a scalable, high-performance hierarchical content
repository designed for use as the foundation of modern world-class
web sites and other demanding content applications.

Apache Jackrabbit Oak 1.0.12 is a patch release that contains fixes and
improvements over Oak 1.0. Jackrabbit Oak 1.0.x releases are considered
stable and targeted for production use.

The Oak effort is a part of the Apache Jackrabbit project.
Apache Jackrabbit is a project of the Apache Software Foundation.

Changes in Oak 1.0.12
---------------------

Bug Fixes
   [OAK-1709] - Diff cache entry too large
   [OAK-2294] - Corrupt repository after concurrent version operations
   [OAK-2346] -
(Continue reading)

Davide Giannella | 9 Mar 15:05 2015

[ANNOUNCE] Apache Jackrabbit Oak 1.1.7 released

The Apache Jackrabbit community is pleased to announce the release of
Apache Jackrabbit Oak 1.1.7. The release is available for download at:

  http://jackrabbit.apache.org/downloads.html

See the full release notes below for details about this release.

Release Notes -- Apache Jackrabbit Oak -- Version 1.1.7

Introduction
------------

Jackrabbit Oak is a scalable, high-performance hierarchical content
repository designed for use as the foundation of modern world-class
web sites and other demanding content applications.

Apache Jackrabbit Oak 1.1.7 is an unstable release cut directly from
Jackrabbit Oak trunk, with a focus on new features and other improvements.
For production use we recommend the latest stable 1.0.7 release.

The Oak effort is a part of the Apache Jackrabbit project.
Apache Jackrabbit is a project of the Apache Software Foundation.

Changes in Oak 1.1.7
---------------------

Sub-task

    [OAK-2456] - Periodic update of suggestor index from the full text
    index
(Continue reading)

De Georges, Adrien | 3 Mar 10:07 2015

Java Heap Space when checking in many nodes: how to handle it?

Hi everyone,

We are encountering an OutOfMemoryError when checking in about 5000 nodes in the same transaction. Obviously,
the workaround is to allocate more heap space when starting the JVM (-Xmx option).
But we would like to handle this exception properly in any case (to display a proper message in our software).
We noticed in ObservationDispatcher (jackrabbit-core), around line 226, that any Throwable is caught but never
rethrown. That is why we cannot catch the error on our side, even though everything is frozen due to the Java heap space exhaustion.
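
For context, the operation on our side is essentially a loop of checkins
over the batch, roughly like the sketch below (names and paths are
illustrative, not our actual code):

import java.util.List;
import javax.jcr.Session;
import javax.jcr.version.VersionManager;

public class BulkCheckin {

    // Checks in roughly 5000 versionable nodes in one go; the
    // OutOfMemoryError shows up (and is swallowed) while the resulting
    // observation events are being dispatched.
    public static void checkinAll(Session session, List<String> paths)
            throws Exception {
        VersionManager vm = session.getWorkspace().getVersionManager();
        for (String path : paths) {
            vm.checkin(path);
        }
    }
}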

Has anyone encountered such an issue? Is there any solution to handle this?

Regards,
Adrien

Julien Garcia Gonzalez | 20 Feb 15:42 2015

CryptedSimpleCredentials on TransientRepository

Hello,

I would like to connect to my transient repository with encrypted
credentials; currently it works with SimpleCredentials.

But when I try CryptedSimpleCredentials I get this exception:

javax.jcr.LoginException: LoginModule ignored Credentials
    at org.apache.jackrabbit.core.RepositoryImpl.login(RepositoryImpl.java:1493)
    at org.apache.jackrabbit.core.TransientRepository.login(TransientRepository.java:381)
    at org.apache.jackrabbit.commons.AbstractRepository.login(AbstractRepository.java:123)

I'm doing this:

repository.login(new CryptedSimpleCredentials(new SimpleCredentials("admin", "admin".toCharArray())));

Here is the security configuration in the xml:

<Security appName="Jackrabbit">
    <SecurityManager class="org.apache.jackrabbit.core.DefaultSecurityManager" workspaceName="security"/>
    <AccessManager class="org.apache.jackrabbit.core.security.DefaultAccessManager"/>
    <LoginModule class="org.apache.jackrabbit.core.security.simple.SimpleLoginModule">
        <param name="anonymousId" value="anonymous"/>
        <param name="adminId" value="admin"/>
    </LoginModule>
</Security>

(Continue reading)

Dirk Högemann | 19 Feb 09:54 2015

Versioning behaviour

Hello,

I have a question regarding Jackrabbit's versioning behaviour.
I have two ways to change content in my repository (full versioning);
both are sketched in code after the list:

1. Checkout the node, change properties, checkin the node: this results in
a version history like 1.1, 1.2, 1.3 and so on.
2. session.importXML (the application logic determines the target UUID for
the imported XML): this results in versions like 2.0, 3.0.
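
In code, the two paths look roughly like this (node paths, property names
and the import source are illustrative, and the UUID collision behaviour
flag is an assumption, not necessarily what our application uses):

import java.io.FileInputStream;
import java.io.InputStream;
import javax.jcr.ImportUUIDBehavior;
import javax.jcr.Session;
import javax.jcr.version.VersionManager;

public class VersioningPaths {

    // Path 1: explicit checkout/checkin, which gives the linear
    // history 1.1, 1.2, 1.3, ...
    public static void editAndCheckin(Session session) throws Exception {
        VersionManager vm = session.getWorkspace().getVersionManager();
        vm.checkout("/content/page");
        session.getNode("/content/page").setProperty("title", "new title");
        session.save();
        vm.checkin("/content/page");
    }

    // Path 2: XML import that replaces the existing node identified by
    // its UUID, which is where the 2.0, 3.0 versions show up.
    public static void importReplace(Session session) throws Exception {
        try (InputStream in = new FileInputStream("/tmp/page.xml")) {
            session.importXML("/content", in,
                    ImportUUIDBehavior.IMPORT_UUID_COLLISION_REPLACE_EXISTING);
            session.save();
        }
    }
}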

I thought the second kind of behaviour would only occur if I used
different workspaces, which is not the case (at least not intentionally).

Anyway: What I need is a linear version history (1.1, 1.2, 1.3 OR
1.0..2.0...), not a mixed one.

Is it possible to force this behaviour? (for example by manipulating the
imported XML)

Best regards and thanks for any hints...
Dirk
Prasad Bodapati | 17 Feb 11:20 2015

Oracle cluster configuration: the tablespace variable is not resolved

Hi,

I have been trying to configure Jackrabbit with Oracle. I get an exception while the schema check is performed for the version table in the cluster configuration.

While the cluster configuration is being initialized, the init() method of the DatabaseJournal class is called. It calls checkLocalRevisionSchema(), which in turn calls CheckSchemaOperation.run() to check whether the LOCAL_REVISIONS table exists and creates it if it does not. On the first run the table does not exist, so it tries to create the table with this query:

create table ${schemaObjectPrefix}JOURNAL (REVISION_ID number(20,0) NOT NULL, JOURNAL_ID varchar(255), PRODUCER_ID varchar(255), REVISION_DATA blob) ${tablespace}

 

Before executing the query, the variable ${schemaObjectPrefix} is replaced with PBVP_Journal, but ${tablespace} is not being replaced. I am not sure what I am doing wrong.

The whole repository.xml is attached.

My file system and persistence manager configuration looks like this:

<Versioning rootPath="${rep.home}/version">
    <FileSystem class="org.apache.jackrabbit.core.fs.db.OracleFileSystem">
        <param name="driver" value="javax.naming.InitialContext"/>
        <param name="url" value="java:/jcr/repositoryDB"/>
        <param name="schemaObjectPrefix" value="PBVP_version_"/>
        <param name="schema" value="oracle"/>
        <param name="tablespace" value="default"/>
    </FileSystem>
    <PersistenceManager class="org.apache.jackrabbit.core.persistence.pool.OraclePersistenceManager">
        <param name="driver" value="javax.naming.InitialContext"/>
        <param name="url" value="java:/jcr/repositoryDB"/>
        <param name="databaseType" value="oracle"/>
        <param name="schemaObjectPrefix" value="PBVP_version_"/>
        <param name="bundleCacheSize" value="32"/>
        <param name="tableSpace" value="default"/>
    </PersistenceManager>
</Versioning>

 

My cluster configuration looks like this:

<Cluster id="node1" syncDelay="2000">
    <Journal class="org.apache.jackrabbit.core.journal.OracleDatabaseJournal">
        <param name="revision" value="${rep.home}/revision.log"/>
        <param name="driver" value="javax.naming.InitialContext"/>
        <param name="url" value="java:/jcr/repositoryDB"/>
        <param name="schemaObjectPrefix" value="PBVP_journal_"/>
        <param name="databaseType" value="oracle"/>
        <param name="schemaCheckEnabled" value="false"/>
        <param name="tablespace" value="default"/>
    </Journal>
</Cluster>

Thanks & Regards

Prasad Bodapati, Software Engineer
Pitney Bowes Software
6 Hercules Way, Leavesden Park, Watford, Herts WD25 7GS
D: +441923 279174 | M: +447543399223 | www.pb.com/software
prasad.bodapati <at> pb.com

 



Torgeir Veimo | 11 Feb 14:31 2015

finding node with parent specified as uuid (oak 1.1.6)

Is there a way in Oak, with XPath queries, to find a node with a
specific parent, given that the parent node is identified only by
its id?

In Jackrabbit, I can do something like

//element(*)[../@jcr:uuid = '7e13f9c0-8f79-418e-9f5b-312d1226ee40']

but this doesn't work with Oak. (Or maybe I'm just missing some index
configuration?)
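
For reference, I run the query through the standard QueryManager; the
JCR-SQL2 join below is my own untested sketch of what I assume is the
equivalent statement, not something generated by Oak:

import javax.jcr.Session;
import javax.jcr.query.Query;
import javax.jcr.query.QueryManager;
import javax.jcr.query.QueryResult;

public class ChildrenByParentUuid {

    public static QueryResult findChildren(Session session) throws Exception {
        QueryManager qm = session.getWorkspace().getQueryManager();
        // Select children whose parent node carries the given jcr:uuid.
        Query q = qm.createQuery(
                "SELECT child.* FROM [nt:base] AS child "
                + "INNER JOIN [nt:base] AS parent ON ISCHILDNODE(child, parent) "
                + "WHERE parent.[jcr:uuid] = '7e13f9c0-8f79-418e-9f5b-312d1226ee40'",
                Query.JCR_SQL2);
        return q.execute();
    }
}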

-- 
-Tor

Torgeir Veimo | 10 Feb 14:59 2015

no results from full-text index (oak 1.1.6)

My Lucene full-text index is configured as per the examples:

        NodeBuilder index = IndexUtils.getOrCreateOakIndex(builder);
        index.child("lucene")
                .setProperty("jcr:primaryType", "oak:QueryIndexDefinition", Type.NAME)
                .setProperty("type", "lucene")
                .setProperty("async", "async")
                .setProperty(PropertyStates.createProperty("includePropertyTypes",
                        ImmutableSet.of(PropertyType.TYPENAME_STRING,
                                PropertyType.TYPENAME_BINARY), Type.STRINGS))
                .setProperty(PropertyStates.createProperty("excludePropertyNames",
                        ImmutableSet.of("jcr:createdBy", "jcr:lastModifiedBy"), Type.STRINGS))
                .setProperty("reindex", true);

But it doesn't return any results. From my understanding, this
definition should index every STRING and BINARY property type.

Trying a query like //*[jcr:contains(., 'test')] doesn't work, even
though I've got nodes with test in string properties.
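
For completeness, the query is executed through the standard QueryManager,
roughly like this (session handling omitted):

import javax.jcr.Session;
import javax.jcr.query.Query;
import javax.jcr.query.QueryManager;
import javax.jcr.query.QueryResult;

public class FullTextCheck {

    public static QueryResult search(Session session) throws Exception {
        QueryManager qm = session.getWorkspace().getQueryManager();
        // Full-text search over all nodes; currently returns nothing even
        // though nodes with matching string properties exist.
        Query q = qm.createQuery("//*[jcr:contains(., 'test')]", Query.XPATH);
        return q.execute();
    }
}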

-- 
-Tor

Torgeir Veimo | 10 Feb 14:22 2015

xpath -> sql2 conversion fails, oak 1.1.6

The XPath query

/jcr:root/content/companies/sortlandsavisa-5449/positions/test-stilling-5619/applications/application-Y7y8[1]/element(*,ka:asset)[@ka:assetType = 'comment'] order by @jcr:score

fails to convert properly to SQL-2 (with Oak 1.1.6). Somehow "1(*)is not
null" ends up in the generated WHERE clause.

javax.jcr.query.InvalidQueryException: java.text.ParseException:

/jcr:root/content/companies/sortlandsavisa-5449/positions/test-stilling-5619/applications/application-Y7y8[1]/element(*,ka:asset)[@ka:assetType = 'comment'] order by @jcr:score converted to SQL-2 Query:

select b.[jcr:path] as [jcr:path], b.[jcr:score] as [jcr:score], b.*
from [nt:base] as a inner join [ka:asset] as b on ischildnode(b, a)
where 1(*)is not null and issamenode(a,
'/content/companies/sortlandsavisa-5449/positions/test-stilling-5619/applications/application-Y7y8')
and b.[ka:assetType] = 'comment' order by b.[jcr:score]

/* xpath: /jcr:root/content/companies/sortlandsavisa-5449/positions/test-stilling-5619/applications/application-Y7y8[1]/element(*,ka:asset)[@ka:assetType = 'comment'] order by @jcr:score */;

expected: NOT, (

Is the XPath query above wrong?

-- 
-Tor

