Peter Sh | 25 Jun 23:36 2016

Streaming Expressions (/stream) StreamHandler java.lang.NullPointerException

I get the exception below when running (the command was truncated by the archive):
curl --data-urlencode
asc",qt="/export")' "http://localhost:8983/solr/EventsAndDCF/stream"
Solr response:

My collection EventsAndDCF exists, and I can successfully run GET queries like:

Solr version: 6.0.1. Single node

2016-06-25 21:15:44.147 ERROR (qtp1514322932-16) [   x:EventsAndDCF]
o.a.s.h.StreamHandler java.lang.NullPointerException
at org.apache.solr.core.SolrCore.execute(
at org.apache.solr.servlet.HttpSolrCall.execute(
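For reference, a complete /stream invocation with a search() expression looks roughly like the sketch below. Only qt="/export", the collection name, and the "asc" sort direction are taken from the original post; the q, fl, and sort values are placeholders, since the original command was truncated:

```shell
# Sketch of a full streaming-expression request; q, fl and the sort field
# are illustrative placeholders.
curl --data-urlencode 'expr=search(EventsAndDCF,
                                   q="*:*",
                                   fl="id",
                                   sort="id asc",
                                   qt="/export")' \
     "http://localhost:8983/solr/EventsAndDCF/stream"
```

Note that the /export handler requires every field in fl and sort to have docValues enabled.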

asteiner | 26 Jun 07:09 2016

limit stored field size


I have a field called content which I'm indexing and use for highlighting,
which means it has to be stored as well. 

<field name="content" type="text_general" indexed="true" stored="true"
multiValued="false" termVectors="true"/>

But this field may be too big, so I want to limit the stored size to X
characters (it is fine to highlight only the first X characters).

One solution is to create another field, content_snippet, populated from
content via a copyField with maxChars of X (10000 in my example): set
content to stored="false" and content_snippet to stored="true".
content_snippet must be indexed in order to highlight it.

<field name="content" type="text_general" indexed="true" stored="false"
multiValued="false" termVectors="true"/>
<field name="content_snippet" omitNorms="true" type="text_general"
indexed="true" stored="true" multiValued="false"/>
<copyField source="content" dest="content_snippet" maxChars="10000" />

So as a result I have two indexed fields, which is redundant. My goal is to
decrease index size. Is there a way to limit the stored size within one
field without creating copy field?
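One alternative that may be worth checking (a sketch, not a confirmed answer): Solr ships a TruncateFieldUpdateProcessorFactory that cuts field values to a maximum length before indexing. Note it truncates the value before analysis, so the indexed tokens are limited too, not just the stored value; it only fits if indexing just the first X characters is acceptable:

```xml
<!-- In solrconfig.xml: truncate "content" to 10000 chars before indexing.
     The chain name is illustrative. -->
<updateRequestProcessorChain name="truncate-content">
  <processor class="solr.TruncateFieldUpdateProcessorFactory">
    <str name="fieldName">content</str>
    <int name="maxLength">10000</int>
  </processor>
  <processor class="solr.LogUpdateProcessorFactory"/>
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>
```

With this in place a single stored+indexed field stays under the size limit, avoiding the duplicate copyField.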



Deeksha Sharma | 26 Jun 00:07 2016

SolrCloud trying to upload documents and shards do not have storage anymore


I am currently using the JSON Index Handler to upload documents to a specific collection on SolrCloud. What
I need to know is:

If I upload documents to SolrCloud collection and the machines hosting Shards for this collection have no
storage left, will Solr reject the commit request?

Roshan Kamble | 25 Jun 22:18 2016

Could not load collection for SolrCloud


I am using Solr 6.0.0 in SolrCloud mode with 3 nodes, one ZooKeeper, and 3 shards with 2 replicas per collection.

I am getting the error below for some insert/update requests when inserting documents into Solr.

It has also been observed that a few shards are in recovery or failed-recovery state (at least one shard is up).

org.apache.solr.common.SolrException: Could not load collection from ZK: MY_COLLECTION
        at org.apache.solr.client.solrj.impl.CloudSolrClient.request( ~[solr-solrj-6.0.0.jar:6.0.0 48c80f91b8e5cd9b3a9b48e6184bd53e7619e7e3 - nknize - 2016-04-01 14:41:50]
        at org.apache.solr.client.solrj.SolrRequest.process(
        at org.apache.solr.client.solrj.SolrClient.add(
        at org.apache.solr.client.solrj.SolrClient.add(
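Since some shards are reported in recovery, inspecting the live cluster state while the errors occur can help pin this down; the Collections API exposes it (host and port are placeholders):

```shell
# Show shard/replica states (active, recovering, down) for the collection.
curl "http://localhost:8983/solr/admin/collections?action=CLUSTERSTATUS&collection=MY_COLLECTION&wt=json"
```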

tkg_cangkul | 25 Jun 20:49 2016

integrate SOLR with OSM

hi, I want to integrate Solr with OpenStreetMap (OSM). The plan is to
index some coordinates (longitude & latitude) in Solr and then have OSM
display the map for each coordinate. Is there any article about that?
Please help, I'm still confused about this.

Thanks in advance.
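A minimal spatial setup for storing lat/lon points in Solr 6.x might look like the sketch below (field and type names are illustrative; the map rendering itself happens client-side, e.g. with Leaflet over OSM tiles, not inside Solr):

```xml
<!-- In the schema: an RPT spatial type shipped with Solr 6.x -->
<fieldType name="location_rpt" class="solr.SpatialRecursivePrefixTreeFieldType"
           geo="true" distErrPct="0.025" maxDistErr="0.001" distanceUnits="kilometers"/>
<field name="coords" type="location_rpt" indexed="true" stored="true"/>
```

Points are indexed as "lat,lon" strings, and nearby documents can then be fetched with a filter such as fq={!geofilt sfield=coords pt=52.52,13.40 d=5} (the point and distance here are placeholders).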

Roshan Kamble | 25 Jun 09:19 2016

SolrCloud persisting data is very slow


I am using Solr 6.0.0 in cloud mode (3 physical nodes + one ZooKeeper) and have heavy insert/update/delete traffic.

I am using CloudSolrClient and have tried every batch size from 100 to 1000.

But it has been observed that persisting at the Solr nodes is very slow: it takes around 20 seconds to store 50-100 records.

Does anyone know how to improve the speed for these operations?
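For what it's worth, the most common cause of this symptom is committing on every request. A sketch of batched indexing with a single commit at the end (the ZK address, collection name, and document fields are assumptions for illustration):

```java
import java.util.ArrayList;
import java.util.List;
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class BatchIndexer {
    public static void main(String[] args) throws Exception {
        // ZK host and collection name are placeholders.
        try (CloudSolrClient client = new CloudSolrClient("zkhost:2181")) {
            client.setDefaultCollection("MY_COLLECTION");
            List<SolrInputDocument> batch = new ArrayList<>();
            for (int i = 0; i < 10_000; i++) {
                SolrInputDocument doc = new SolrInputDocument();
                doc.addField("id", Integer.toString(i));
                batch.add(doc);
                if (batch.size() == 1000) {
                    client.add(batch);   // one network round-trip per 1000 docs
                    batch.clear();
                }
            }
            if (!batch.isEmpty()) {
                client.add(batch);
            }
            client.commit();             // commit once at the end, not per batch
        }
    }
}
```

It is also worth checking the autoCommit/autoSoftCommit intervals in solrconfig.xml; an explicit commit (or commitWithin) on every add will dominate indexing time.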

Harsha JSN | 24 Jun 07:21 2016

Using n-grams vs AnalyzingInfixLookupFactory for suggestions in solr

I have some doubts regarding the use of AnalyzingInfixLookupFactory as the
lookup implementation for suggestions.

1.) AnalyzingInfixLookupFactory constructs n-grams for the suggestion field
while building the suggestions index. If the main index used for search
already has n-grams for this field, is it still preferable to choose
AnalyzingInfixLookupFactory, or can we build suggestions directly from the
main index?

2.) Also, AnalyzingInfixLookupFactory returns duplicate records if the
suggestion field has the same value in multiple documents. If I instead
search for suggestions in the main index (n-grams), I can eliminate the
duplicates by grouping the results, but grouping can be a complex
operation. Can you suggest the correct approach here?

3.) Choosing FuzzyLookupFactory looks beneficial, but we have to filter the
results by user context, and we also need infix search capabilities for
suggestions, which it does not offer.

Can someone please help with this? Thanks in advance.
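Regarding point 3, one detail that may help: in Solr 6, AnalyzingInfixLookupFactory (unlike FuzzyLookupFactory) supports context filtering via contextField, which can cover the user-context requirement. A configuration sketch, with illustrative field names:

```xml
<!-- solrconfig.xml: infix suggester with context filtering -->
<searchComponent name="suggest" class="solr.SuggestComponent">
  <lst name="suggester">
    <str name="name">infixSuggester</str>
    <str name="lookupImpl">AnalyzingInfixLookupFactory</str>
    <str name="dictionaryImpl">DocumentDictionaryFactory</str>
    <str name="field">title</str>
    <str name="contextField">user_context</str>
    <str name="suggestAnalyzerFieldType">text_general</str>
  </lst>
</searchComponent>
```

At query time the context filter is passed as suggest.cfq, e.g. suggest.q=foo&suggest.cfq=user42 (the context value is a placeholder).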

Shankar Ramalingam | 24 Jun 15:15 2016

Internode communication failed when enable basic authentication Solr 6.1.0

Hi Team,

Basic authentication is enabled on our SolrCloud cluster: node1 runs on one
machine, node2 runs on a second machine, and ZooKeeper is installed on the
second machine. Since enabling basic auth we get unauthorized errors,
mostly when one machine tries to access Solr on machine 2; I also see the
error while starting Solr.

I would be grateful for help resolving this. I saw a JIRA ticket stating
that an internode communication issue was fixed in Solr 6, but I am using
Solr 6 and still hit the problem. Even though I am logged in as the admin
user in the Solr UI, I sometimes get Error 401 Unauthorized, mostly on
requests going from node1 to node2.

 [c:adm s:shard2 r:core_node2 x:adm_shard2_replica2  ]
Error from server at replica1:
Expected mime type application/octet-stream but got text/html. <html>
<meta http-equiv="Content-Type" content="text/html;charset=utf-8"/>
<title>Error 401 Unauthorized request, Response code: 401</title>
<body><h2>HTTP ERROR 401</h2>
<p>Problem accessing /solr/adm_shard2_replica1/select. Reason:
<pre>   * Unauthorized request, Response code: 401*</pre></p>
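For reference, internode requests in SolrCloud are authenticated by the PKI mechanism once security is enabled, and one commonly reported cause of internode 401s is a security.json with only the authentication section. A minimal complete example (this is the well-known solr/SolrRocks sample hash from the Solr Reference Guide; replace the credentials):

```json
{
  "authentication": {
    "class": "solr.BasicAuthPlugin",
    "blockUnknown": true,
    "credentials": {
      "solr": "IV0EHq1OnNrj6gvRCwvFwTrZ1+z1oBbnQdiVC3otuq0= Ndd7LKvVBAaZIF0QAVi1ekCfAJXr1GGfLtRUXhgrF8c="
    }
  },
  "authorization": {
    "class": "solr.RuleBasedAuthorizationPlugin",
    "user-role": { "solr": "admin" },
    "permissions": [ { "name": "security-edit", "role": "admin" } ]
  }
}
```

This file is uploaded to ZooKeeper (e.g. with the zkcli.sh putfile command) so every node sees the same configuration.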


John Blythe | 24 Jun 15:13 2016

frange and calculated values

hi all,

i'm querying a pricing benchmark data set with product-level detail from a
customer's recent purchases. to help refine things, i'm attempting to keep
the low benchmark price within a 3x and 1/3x range of the currently paid price.

so, for instance, if i've been buying Foo at $100, I don't want any results
less than $33 or more than $300 in the 'benchmarkLow' field.

i'm tripping over syntax, i think. i currently have:

{!frange l=div(100,3) u=prod(100,3) v='benchmarkLow'}

i've also tried it with the benchmarkLow outside the curly:

the benchmark field is a double, fwiw.

what am i missing here?
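In case it helps: as far as I know, the l and u parameters of frange are parsed as plain numeric constants, so function calls like div(100,3) are not evaluated there. One workaround is to move the arithmetic into the function query itself and range-test the ratio (the core name below is a placeholder):

```shell
# Keep benchmarkLow within [100/3, 100*3] by testing the ratio
# benchmarkLow/100 against [1/3, 3].
curl "http://localhost:8983/solr/mycore/select" \
     --data-urlencode 'q=*:*' \
     --data-urlencode 'fq={!frange l=0.3333 u=3}div(benchmarkLow,100)'
```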

thanks for any info!
tjlp | 24 Jun 11:03 2016

Reply: Re: Does Solr 6.0 support indexing and querying for HUNGARIAN, KOREAN, SLOVAK, VIETNAMESE and Traditional Chinese documents?

Hi, Alex,

The languages do appear in the list you provided, but in the Solr 6.0
source code (including the Lucene source code), I could not find such a
package defined.

Liu Peng

----- Original message -----
From: Alexandre Rafalovitch <arafalov <at>>
To: solr-user <solr-user <at>>, tjlp <at>
Subject: Re: Does Solr 6.0 support indexing and querying for HUNGARIAN, KOREAN, SLOVAK, VIETNAMESE and
Traditional Chinese documents?
Date: 2016-06-24 13:58

The full list is here: . I can see at least Hungarian.

On 23 Jun 2016 7:46 PM,  <tjlp <at>> wrote: Hi,

I am using Solr 6.0 to index documents from different countries. I have gone through the Solr 6.0
reference guide and can't find anything about HUNGARIAN, SLOVAK and VIETNAMESE language support. For
KOREAN and Traditional Chinese I can find the CJK tokenizer. Is the CJK tokenizer enough?
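For what it's worth, some of these languages can be handled with analysis components that ship with Solr even without a dedicated analyzer package; Hungarian, for example, has a Snowball stemmer. A field type sketch (the type name is illustrative):

```xml
<fieldType name="text_hu" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.SnowballPorterFilterFactory" language="Hungarian"/>
  </analyzer>
</fieldType>
```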


Henrik Brautaset Aronsen | 24 Jun 09:45 2016

How do I use Spring Boot when Solr 6.1 (and thus Jetty 9.3) is on the classpath?

I have a Spring Boot project and am trying to upgrade from Solr 5.4 to
Solr 6.1. Solr 6.1 depends on Jetty 9.3, and now Spring Boot complains
with a NoClassDefFoundError:
org/eclipse/jetty/server/handler/ContextHandler$NoContext. ContextHandler
exists in Jetty 9.3, but the inner class NoContext does not.

Is there a way of solving this?
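One approach that may be worth trying (a sketch, under the assumption that the Jetty 9.3 jars arrive transitively through a Solr artifact such as solr-core; solr-solrj alone should not pull in the Jetty server): exclude Jetty from the Solr dependency so that Spring Boot's own managed Jetty version stays on the classpath, or switch the embedded container to Tomcat.

```xml
<!-- pom.xml sketch: keep Solr's transitive Jetty off the classpath.
     Assumes the Jetty jars come in via solr-core; check the actual
     source with `mvn dependency:tree` and adjust. -->
<dependency>
  <groupId>org.apache.solr</groupId>
  <artifactId>solr-core</artifactId>
  <version>6.1.0</version>
  <exclusions>
    <exclusion>
      <groupId>org.eclipse.jetty</groupId>
      <artifactId>*</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```

The wildcard exclusion requires Maven 3.2.1 or later; on older Maven each Jetty artifact has to be excluded individually.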

java.lang.IllegalStateException: Failed to load ApplicationContext