Zheng Lin Edwin Yeo | 6 Jul 06:17 2015

Problems with loading core when starting Solr 5.2.1


I've just migrated to Solr 5.2.1 with external ZooKeeper 3.4.6.

Whenever I try to start Solr using these commands, the Solr server gets
started, but none of the cores is actually loaded:
- bin\solr.cmd start -cloud -z localhost:2181
- bin\solr.cmd -cloud -p 8983 -s server\solr -z localhost:2181

I can only get the cores to be loaded when I use the following command:
 - bin\solr.cmd -e cloud -z localhost:2181

However, this command actually runs the step-by-step guide for creating a new
collection, and I shouldn't have to create a new collection each time I start Solr.

Is there anything wrong with my first two commands?
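For reference, my understanding of the non-interactive route (so that a plain
start later finds the collection) is roughly the following; the configset,
config name, and collection name here are my guesses:

```shell
REM upload a config set to ZooKeeper once (paths are guesses)
server\scripts\cloud-scripts\zkcli.bat -zkhost localhost:2181 -cmd upconfig ^
  -confdir server\solr\configsets\basic_configs\conf -confname myconf

REM then create the collection once via the Collections API
curl "http://localhost:8983/solr/admin/collections?action=CREATE&name=mycoll&numShards=2&collection.configName=myconf"
```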

William Bell | 6 Jul 06:11 2015

LFU vs solr.FastLRUCache

Has anyone used solr.LFUCache in production to replace this?

<filterCache class="solr.FastLRUCache"
             initialSize="4096"
             cleanupThread="true"/>
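The LFU variant I have in mind would look roughly like this (the size values
are placeholders, not from my actual config):

```xml
<filterCache class="solr.LFUCache"
             size="16384"
             initialSize="4096"
             autowarmCount="512"
             cleanupThread="true"/>
```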



Bill Bell
billnbell <at> gmail.com
cell 720-256-8076
SHANKAR REDDY | 6 Jul 05:41 2015

Display pattern of child entity records as row

I have a requirement to get the list of child-table records along with the
parent record, in the pattern below.

"display_value_txt":["Analyser : Problems/Category/Author"],
"detail_txt":[{"IOBJNM":"0RS_AUTHOR","TXTLG":"Author","IOBJTP":"CHA"},
              {"IOBJNM":"0RS_CCAT","TXTLG":"CheckCategory (inte","IOBJTP":"CHA"},
              {"IOBJNM":"0RS_PRIORTY","TXTLG":"Priority","IOBJTP":"CHA"},
              {"IOBJNM":"PRIOSEL","TXTLG":"Priority Selection","IOBJTP":"KFG"}]

Here detail_txt is the child entity (child table). With the current default
implementation, I am getting the response below.

        "display_value_txt":["Analyser : Problems/Category/Author"],

Chaushu, Shani | 5 Jul 14:17 2015

EmbeddedSolrServer No such core: collection1

I'm using EmbeddedSolrServer for testing Solr.
I followed these instructions step by step (for Solr 4).
I can see that the config is loaded, but when I try to add a document, the error I get is:
org.apache.solr.common.SolrException: No such core: collection1

I'm sure it's something in the solr.xml, but I couldn't find the issue.
Any thoughts?

in the solr.xml I have:

    <solr persistent="true">
        <str name="host">${host:}</str>
        <int name="hostPort">${jetty.port:8983}</int>
        <str name="hostContext">${hostContext:solr}</str>
        <int name="zkClientTimeout">${zkClientTimeout:30000}</int>
        <bool name="genericCoreNodeNames">${genericCoreNodeNames:true}</bool>

        <cores adminPath="collection1" defaultCoreName="collection1">
            <core name="collection1" instanceDir="collection1" />
        </cores>

        <shardHandlerFactory name="shardHandlerFactory">
            <int name="socketTimeout">${socketTimeout:0}</int>
        </shardHandlerFactory>
    </solr>

david.w.smiley@gmail.com | 3 Jul 19:19 2015

Suggester with file source with SolrCloud

The Suggester “FileDictionaryFactory” resolves the sourceLocation parameter
against the SolrResourceLoader, and in SolrCloud that’s
ZkSolrResourceLoader.  The base SolrResourceLoader supports
a solr.allow.unsafe.resourceloading=true system property, which
basically allows you to refer to any file on the file system without being
limited to the instance directory of the current core.  The
ZkSolrResourceLoader is pretty straight-forward and doesn’t have this
option.  So effectively, if you’re using SolrCloud, your suggester input
file needs to go into ZooKeeper.  AFAIK, ZooKeeper’s data is all in-memory
and, I may be mistaken, but there are limits, like 2G for the whole
ZooKeeper state?  I’d *really* like to avoid putting this data into
ZooKeeper.  Does anyone have any suggestions or further input?

I think at this point I may have to approach this by creating a special
collection that exists purely for suggest so that I could then
use DocumentDictionaryFactory.
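A rough sketch of that dedicated-collection approach, on the suggest side
(the component, field, and analyzer names here are assumptions, not from
any existing config):

```xml
<!-- solrconfig.xml of the suggest-only collection (sketch) -->
<searchComponent name="suggest" class="solr.SuggestComponent">
  <lst name="suggester">
    <str name="name">fileSuggester</str>
    <str name="lookupImpl">FuzzyLookupFactory</str>
    <str name="dictionaryImpl">DocumentDictionaryFactory</str>
    <!-- each indexed document carries one suggestion and its weight -->
    <str name="field">suggestion</str>
    <str name="weightField">weight</str>
    <str name="suggestAnalyzerFieldType">string</str>
  </lst>
</searchComponent>
```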

~ David
Erick Erickson | 3 Jul 18:57 2015

Re: autosuggest with solr.EdgeNGramFilterFactory no result found

OK, I think you took a wrong turn at the bakery....

The FST-based suggesters are intended to look at the
beginnings of fields. It is totally unnecessary to use
ngrams, the FST that gets built does that _for_ you.
Actually it builds an internal FST structure that does
this "en passant".

For getting whole fields that are anywhere in the input
field, you probably want to think about
AnalyzingInfixSuggester or FreeTextSuggester.
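For instance, an AnalyzingInfixSuggester can be wired up with no ngram
analysis at all (a sketch; the field and type names are assumptions for
illustration):

```xml
<searchComponent name="suggest" class="solr.SuggestComponent">
  <lst name="suggester">
    <str name="name">titleSuggester</str>
    <str name="lookupImpl">AnalyzingInfixLookupFactory</str>
    <str name="dictionaryImpl">DocumentDictionaryFactory</str>
    <str name="field">title</str>
    <!-- plain tokenized analysis; no EdgeNGramFilterFactory needed -->
    <str name="suggestAnalyzerFieldType">text_general</str>
  </lst>
</searchComponent>
```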

The important bit here is that you shouldn't have to do
so much work...

This might help:



On Fri, Jul 3, 2015 at 4:40 AM, Roland Szűcs
<roland.szucs <at> bookandwalk.com> wrote:
> I tried to setup an autosuggest feature with multiple dictionaries for
> title , author and publisher fields.
> I used the solr.EdgeNGramFilterFactory to optimize the performance of the
> auto suggest.

Roland Szűcs | 3 Jul 13:40 2015

autosuggest with solr.EdgeNGramFilterFactory no result found

I tried to setup an autosuggest feature with multiple dictionaries for
title , author and publisher fields.

I used the solr.EdgeNGramFilterFactory to optimize the performance of the
auto suggest.

I have a document in the index with title: Romana.

When I test the text analysis for auto suggest (on the field), the
EdgeNGram filter produces the expected tokens:

rom    [72 6f 6d]
roma   [72 6f 6d 61]
roman  [72 6f 6d 61 6e]
romana [72 6f 6d 61 6e 61]
If I try to run http://localhost:8983/solr/bandw/suggest?q=Roma, I get:
<lst name="responseHeader">
  <int name="status">0</int>
  <int name="QTime">1</int>
</lst>
<lst name="suggest">
  <lst name="suggest_publisher">
    <lst name="Roma">
      <int name="numFound">0</int>
      <arr name="suggestions"/>
    </lst>
  </lst>
  <lst name="suggest_title">
    <lst name="Roma">
      <int name="numFound">0</int>

Zheng Lin Edwin Yeo | 3 Jul 10:07 2015

Migrating from Solr 5.1 to Solr 5.2.1


I'm trying to migrate from Solr 5.1 to Solr 5.2.1. However, I faced some
problems when trying to migrate my index over and when trying to connect
the external ZooKeeper to Solr.
I'm using ZooKeeper 3.4.6.

In Solr 5.1, I used this command to start Solr for both Shard1 and Shard2:

java -D64 -Dsolr.clustering.enabled=true -Xms512M -Xmx4096M
-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/heapDumps
-DzkHost=localhost:2181,localhost:2182,localhost:2183 -jar start.jar

However, I get the following error in Solr 5.2.1:
*Error: Unable to access jarfile start.jar*

For ZooKeeper, I've been using the zkCli.bat file
under server\scripts\cloud-scripts:

zkcli.bat -zkhost localhost:2181 -cmd upconfig -confname collection1

However, I get the following error in Solr 5.2.1
*Error: Could not find or load main class org.apache.solr.cloud.ZkCLI*

Are there any changes to the code structure in Solr 5.2.1 compared to the
older versions?
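One guess (unverified) is that the directory layout changed and start.jar now
lives under server\ instead of the install root, in which case the old launch
command would have to be run from there:

```shell
cd server
java -D64 -Dsolr.clustering.enabled=true -Xms512M -Xmx4096M ^
  -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/heapDumps ^
  -DzkHost=localhost:2181,localhost:2182,localhost:2183 -jar start.jar
```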


Pritam Kute | 3 Jul 08:56 2015

Problem in facet.contains

Hello All,

I am a new user to Solr, using a Solr 5.2.0 setup. I am trying to create
multiple types of facets on the same field. I am filtering the facets using "
*facet.contains*". The following is the data in the field:

roles : {
     "0/Student Name/",
     "1/Student Name/1000/",
     "0/Center Name/",
     "1/Center Name/1000/"
}

I am trying to add facet field like following:

query.addFacetField("{!ex=role" + i + " key=role" + i
        + " facet.contains=/" + roleType + "/}roles");

where roleType iterates over values such as "Student Name" and "Center
Name", and the value of i is 1.

But I am getting error as:

org.apache.solr.search.SyntaxError: Expected identifier at pos 63
str='{!key=role1 facet.contains=/Student Name/}roles'

It works fine if there is no space in the string, i.e. if I index the doc
as "1/StudentName/1000/".
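One workaround I am considering (untested on my side): quote the local-param
value so the embedded space does not terminate the parameter, e.g.

```text
facet.field={!ex=role1 key=role1 facet.contains='/Student Name/'}roles
```

Local-param values may be wrapped in single quotes, so the space inside
'/Student Name/' should then be treated as part of the value rather than as a
delimiter.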

It would be a great help if somebody could help me out with this issue.

Ronald Wood | 2 Jul 21:36 2015

Distributed queries hang in a non-SolrCloud environment, Solr 4.10.4

We are running into an issue when doing distributed queries on Solr 4.10.4. We do not use SolrCloud but
instead keep track of shards that need to be searched based on date ranges.

We have been running distributed queries without incident for several years now, but we only recently
upgraded to 4.10.4 from 4.8.1.

The query is relatively simple and involves 4 shards, including the aggregator itself.

For a while, the server acting as the aggregator for the distributed query handles
requests fine, but after an indefinite amount of usage (in the range of 2-4 hours) it
starts hanging on all distributed queries, while still serving non-distributed versions
of the same query (no shards list included) quickly (9 ms).

CPU, Heap and System Memory Usage do not seem unusual compared to other servers.

I had initially suspected that distributed searches combined with faceting might be part of the issue, since
I had seen some long-running threads that seemed to spend a long time in the FastLRUCache when getting
facets for a single field. However, in the latest case of blocked queries, I am not seeing that.

We have two slaves that replicate from a master, and we saw the issue recur on both after a while of
client usage, ruling out a hardware issue.

Does anyone have any suggestions for potential avenues of attack for getting to the bottom of this? Or are
there any known issues that could be implicated in this?

- Ronald S. Wood
Aki Balogh | 2 Jul 17:46 2015

AND for multiple faceted queries

I'm trying to specify multiple fq and get the intersection: (lines
separated for readability)

fq=(body:"crib bedding" OR title:"crib bedding")&
fq={!frange l=0 u=0}termfreq(body,"crib bedding")&
fq={!frange l=0 u=0}termfreq(title,"crib bedding")&

This should return 0 records, but it comes back with results. It turns out it
is returning records that match ANY of the fqs, not ALL of the fqs.

How can I force Solr to return only records that match ALL?