Peter Sh | 25 Jun 23:36 2016

Streaming Expressions (/stream) StreamHandler java.lang.NullPointerException

I get the exception below when running:
curl --data-urlencode
'expr=search(EventsAndDCF,q="*:*",fl="AccessPath",sort="AccessPath
asc",qt="/export")' "http://localhost:8983/solr/EventsAndDCF/stream"
Solr response:
{"result-set":{"docs":[
{"EXCEPTION":null,"EOF":true}]}}

My collection EventsAndDCF exists, and GET queries like the following succeed:
http://localhost:8983/solr/EventsAndDCF/export?fl=AccessPath&q=*:*&sort=AccessPath desc&wt=json

Solr version: 6.0.1, single node.
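
For comparison, the same search() can also be opened directly from SolrJ with
CloudSolrStream, bypassing the /stream handler. A rough sketch; the zkHost is an
assumption (the embedded ZooKeeper of a default single-node install on
localhost:9983), so adjust it for your setup:

import java.util.HashMap;
import java.util.Map;

import org.apache.solr.client.solrj.io.Tuple;
import org.apache.solr.client.solrj.io.stream.CloudSolrStream;

public class StreamCheck {
    public static void main(String[] args) throws Exception {
        // Same parameters as the streaming expression above.
        Map<String, String> props = new HashMap<>();
        props.put("q", "*:*");
        props.put("fl", "AccessPath");
        props.put("sort", "AccessPath asc");
        props.put("qt", "/export");

        // zkHost is an assumption (default embedded ZooKeeper port).
        CloudSolrStream stream = new CloudSolrStream("localhost:9983", "EventsAndDCF", props);
        try {
            stream.open();
            while (true) {
                Tuple tuple = stream.read();
                if (tuple.EOF) {
                    break;
                }
                System.out.println(tuple.getString("AccessPath"));
            }
        } finally {
            stream.close();
        }
    }
}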

2016-06-25 21:15:44.147 ERROR (qtp1514322932-16) [   x:EventsAndDCF]
o.a.s.h.StreamHandler java.lang.NullPointerException
at
org.apache.solr.client.solrj.io.stream.expr.StreamExpressionParser.generateStreamExpression(StreamExpressionParser.java:46)
at
org.apache.solr.client.solrj.io.stream.expr.StreamExpressionParser.parse(StreamExpressionParser.java:37)
at
org.apache.solr.client.solrj.io.stream.expr.StreamFactory.constructStream(StreamFactory.java:178)
at
org.apache.solr.handler.StreamHandler.handleRequestBody(StreamHandler.java:164)
at
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:155)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:2053)
at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:652)
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:460)
at

asteiner | 26 Jun 07:09 2016

limit stored field size

Hi

I have a field called content which I index and use for highlighting,
which means it has to be stored as well.

<field name="content" type="text_general" indexed="true" stored="true"
multiValued="false" termVectors="true"/>

But this field may be too big, so I want to limit the stored size to X
characters (it is fine to highlight only the first X characters).

One solution is to create another field called content_snippet as a
copyField of the content field with maxChars of X (10000 in my example),
make content non-stored, and make content_snippet stored and indexed.
content_snippet must be indexed in order to highlight it.

<field name="content" type="text_general" indexed="true" stored="false"
multiValued="false" termVectors="true"/>
<field name="content_snippet" omitNorms="true" type="text_general"
indexed="true" stored="true" multiValued="false"/>
<copyField source="content" dest="content_snippet" maxChars="10000" />

So as a result I have two indexed fields, which is redundant. My goal is to
decrease the index size. Is there a way to limit the stored size within one
field, without creating a copy field?
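
For reference, with the two-field workaround above only content needs to be sent
at index time; the copyField rule populates content_snippet with at most the
first 10000 characters on the Solr side. A rough SolrJ sketch (the collection
URL and document values are placeholders):

import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class IndexWithSnippet {
    public static void main(String[] args) throws Exception {
        // Core/collection URL is a placeholder -- point it at the collection
        // that uses the schema above.
        HttpSolrClient client = new HttpSolrClient("http://localhost:8983/solr/mycollection");

        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("id", "doc-1");
        // Only "content" is sent; the copyField rule produces "content_snippet"
        // from its first 10000 characters on the Solr side.
        doc.addField("content", "a very long body of text ...");

        client.add(doc);
        client.commit();
        client.close();
    }
}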



Deeksha Sharma | 26 Jun 00:07 2016

SolrCloud: uploading documents when shards have no storage left

Hi,

I am currently using the JSON index handler to upload documents to a specific collection on SolrCloud. What
I need to know is:

If I upload documents to a SolrCloud collection and the machines hosting the shards for this collection have
no storage left, will Solr reject the commit request?
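
A rough SolrJ sketch of where a refusal would surface on the client side, if
updates were sent through CloudSolrClient (zkHost, collection and field names
are placeholders; whether a full disk actually triggers such an error, or the
node fails in some other way, is exactly the open question):

import java.util.Collections;

import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.common.SolrException;
import org.apache.solr.common.SolrInputDocument;

public class UploadCheck {
    public static void main(String[] args) throws Exception {
        // zkHost and collection name are placeholders.
        CloudSolrClient client = new CloudSolrClient("zkhost:2181");
        client.setDefaultCollection("mycollection");

        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("id", "doc-1");
        doc.addField("title_s", "example");

        try {
            client.add(Collections.singletonList(doc));
            client.commit();
        } catch (SolrException | SolrServerException e) {
            // If the cluster refuses or cannot complete the update, the
            // failure comes back to the client as an exception here.
            System.err.println("Update failed: " + e.getMessage());
        } finally {
            client.close();
        }
    }
}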

Roshan Kamble | 25 Jun 22:18 2016

Could not load collection for SolrCloud

Hello,

I am using Solr 6.0.0 in SolrCloud mode with 3 nodes, one ZooKeeper, and 3 shards and 2 replicas per collection.

I am getting the error below for some inserts/updates when trying to add documents to Solr.

It has also been observed that a few shards are in either recovery or failed-recovery state (at least one shard is up).
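
For reference, a rough SolrJ sketch of checking whether the client's view of the
cluster state contains the collection and which nodes are live; the classes are
the same ones that appear in the trace below (zkHost is a placeholder):

import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.common.cloud.ClusterState;

public class ClusterStateCheck {
    public static void main(String[] args) throws Exception {
        // zkHost is a placeholder for the ZooKeeper address.
        CloudSolrClient client = new CloudSolrClient("zkhost:2181");
        client.connect();

        ClusterState state = client.getZkStateReader().getClusterState();
        System.out.println("hasCollection(MY_COLLECTION) = "
            + state.hasCollection("MY_COLLECTION"));
        System.out.println("live nodes = " + state.getLiveNodes());

        client.close();
    }
}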

org.apache.solr.common.SolrException: Could not load collection from ZK: MY_COLLECTION
        at org.apache.solr.common.cloud.ZkStateReader.getCollectionLive(ZkStateReader.java:969)
~[solr-solrj-6.0.0.jar:6.0.0 48c80f91b8e5cd9b3a9b48e6184bd53e7619e7e3 - nknize - 2016-04-01 14:41:50]
        at org.apache.solr.common.cloud.ZkStateReader$LazyCollectionRef.get(ZkStateReader.java:519)
~[solr-solrj-6.0.0.jar:6.0.0 48c80f91b8e5cd9b3a9b48e6184bd53e7619e7e3 - nknize - 2016-04-01 14:41:50]
        at org.apache.solr.common.cloud.ClusterState.getCollectionOrNull(ClusterState.java:189)
~[solr-solrj-6.0.0.jar:6.0.0 48c80f91b8e5cd9b3a9b48e6184bd53e7619e7e3 - nknize - 2016-04-01 14:41:50]
        at org.apache.solr.common.cloud.ClusterState.hasCollection(ClusterState.java:119)
~[solr-solrj-6.0.0.jar:6.0.0 48c80f91b8e5cd9b3a9b48e6184bd53e7619e7e3 - nknize - 2016-04-01 14:41:50]
        at
org.apache.solr.client.solrj.impl.CloudSolrClient.getCollectionNames(CloudSolrClient.java:1111)
~[solr-solrj-6.0.0.jar:6.0.0 48c80f91b8e5cd9b3a9b48e6184bd53e7619e7e3 - nknize - 2016-04-01 14:41:50]
        at
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:833)
~[solr-solrj-6.0.0.jar:6.0.0 48c80f91b8e5cd9b3a9b48e6184bd53e7619e7e3 - nknize - 2016-04-01 14:41:50]
        at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:806)
~[solr-solrj-6.0.0.jar:6.0.0 48c80f91b8e5cd9b3a9b48e6184bd53e7619e7e3 - nknize - 2016-04-01 14:41:50]
        at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
~[solr-solrj-6.0.0.jar:6.0.0 48c80f91b8e5cd9b3a9b48e6184bd53e7619e7e3 - nknize - 2016-04-01 14:41:50]
        at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:106)
~[solr-solrj-6.0.0.jar:6.0.0 48c80f91b8e5cd9b3a9b48e6184bd53e7619e7e3 - nknize - 2016-04-01 14:41:50]
        at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:71)

tkg_cangkul | 25 Jun 20:49 2016

integrate SOLR with OSM

Hi, I want to integrate Solr with OpenStreetMap (OSM). The plan is to index
some coordinates (longitude and latitude) into Solr and then have OSM show
the map for those coordinates. Is there any article about that? Please
help; I'm still confused about this.

Thanks in advance.
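
A rough SolrJ sketch of the indexing side, assuming a schema with a field named
coordinates of Solr's location (LatLonType) type, which takes "lat,lon" strings;
the URL, field names and the sample point are placeholders:

import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class IndexCoordinates {
    public static void main(String[] args) throws Exception {
        // URL and field names are placeholders.
        HttpSolrClient client = new HttpSolrClient("http://localhost:8983/solr/places");

        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("id", "place-1");
        doc.addField("name", "Example place");
        // "location"-typed fields take "lat,lon" strings.
        doc.addField("coordinates", "-6.1754,106.8272");

        client.add(doc);
        client.commit();
        client.close();
    }
}

Solr itself does not render maps; the OSM side would be a JavaScript map library
reading the lat/lon values back from normal Solr query results.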

Roshan Kamble | 25 Jun 09:19 2016

SolrCloud persisting data is very slow

Hello,

I am using Solr 6.0.0 in cloud mode (3 physical nodes + one ZooKeeper) and have heavy insert/update/delete traffic.

I am using CloudSolrClient and have tried batch sizes from 100 to 1000.

But persisting on the Solr nodes has been observed to be very slow: it takes around 20 seconds to store 50-100 records.

Does anyone know how to improve the speed of these operations?
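
A rough sketch of one pattern worth comparing against: each batch goes out as a
single add(Collection) call with commitWithin, rather than a hard commit per
batch (zkHost, collection and document contents are placeholders):

import java.util.ArrayList;
import java.util.List;

import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class BatchIndexer {
    public static void main(String[] args) throws Exception {
        // zkHost and collection are placeholders.
        CloudSolrClient client = new CloudSolrClient("zkhost:2181");
        client.setDefaultCollection("mycollection");

        List<SolrInputDocument> batch = new ArrayList<>();
        for (int i = 0; i < 500; i++) {
            SolrInputDocument doc = new SolrInputDocument();
            doc.addField("id", "doc-" + i);
            doc.addField("title_s", "document " + i);
            batch.add(doc);
        }

        // One request for the whole batch; ask Solr to commit within 30s
        // instead of issuing a hard commit per batch.
        client.add(batch, 30000);

        client.close();
    }
}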

Regards,
Roshan
Harsha JSN | 24 Jun 07:21 2016

Using n-grams vs AnalyzingInfixLookupFactory for suggestions in solr

Hi,
   I have some doubts regarding the usage of AnalyzingInfixLookupFactory as
the lookup implementation for suggestions.

1.) AnalyzingInfixLookupFactory constructs n-grams for the suggestion field
while building the suggestion index. If the main index used for search
already has n-grams for this field, is it still preferable to choose
AnalyzingInfixLookupFactory, or can we build suggestions directly from the
main index?

2.) Also, AnalyzingInfixLookupFactory returns duplicate records if the
suggestion field has the same value in multiple documents. If instead I
query suggestions from the main index (n-grams), I can eliminate the
duplicates by grouping the results (see the sketch after this list), but
grouping can be an expensive operation. Can you advise on the correct
approach here?

3.) Choosing FuzzyLookupFactory looks beneficial, but we have to filter the
results by user context, and we also need infix search over the
suggestions, which it does not provide.
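
The grouping mentioned in point 2.) would look roughly like this from SolrJ; the
core URL and field names (title_ngram for the n-grammed field, title for the raw
value) are placeholders:

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class GroupedSuggest {
    public static void main(String[] args) throws Exception {
        // URL and field names are placeholders.
        HttpSolrClient client = new HttpSolrClient("http://localhost:8983/solr/main");

        SolrQuery query = new SolrQuery("title_ngram:ipho");
        query.set("group", true);
        query.set("group.field", "title");   // collapse duplicates on the raw value
        query.set("group.limit", 1);
        query.setFields("title");
        query.setRows(10);

        QueryResponse rsp = client.query(query);
        // Print one representative suggestion per group.
        rsp.getGroupResponse().getValues().forEach(cmd ->
            cmd.getValues().forEach(g ->
                System.out.println(g.getGroupValue())));

        client.close();
    }
}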

Can someone please help with this? Thanks in advance.

Harsha.
Shankar Ramalingam | 24 Jun 15:15 2016

Internode communication fails when basic authentication is enabled (Solr 6.1.0)

Hi Team,

Basic authentication is enabled on our SolrCloud setup: node1 runs on one
machine and node2 runs on a second machine, with ZooKeeper installed on the
second machine. Since enabling basic auth we get unauthorized errors, mostly
when one machine tries to reach Solr on the other machine, and I also see
the error while starting Solr.

It would be great if you could help me resolve the issue. I saw a JIRA
ticket stating that an internode communication issue was fixed in Solr 6,
but I am using Solr 6 and still hit it. Even though I am logged in as the
admin user in the Solr UI, I sometimes get Error 401 Unauthorized, mostly
for requests going from node1 to node2.
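
For client-initiated requests, credentials can be attached per request in SolrJ;
a minimal sketch (zkHost, user/password and the "adm" collection name are
placeholders). Internode requests between node1 and node2 are a separate path
handled by Solr itself, so this only covers the client side:

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.client.solrj.request.QueryRequest;
import org.apache.solr.client.solrj.response.QueryResponse;

public class AuthQuery {
    public static void main(String[] args) throws Exception {
        // zkHost, credentials and collection name are placeholders.
        CloudSolrClient client = new CloudSolrClient("zkhost:2181");

        QueryRequest req = new QueryRequest(new SolrQuery("*:*"));
        req.setBasicAuthCredentials("solr", "SolrRocks");

        QueryResponse rsp = req.process(client, "adm");
        System.out.println("numFound = " + rsp.getResults().getNumFound());

        client.close();
    }
}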

[c:adm s:shard2 r:core_node2 x:adm_shard2_replica2]
o.a.s.h.RequestHandlerBase
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:
Error from server at http://172.16.7.58:8983/solr/adm_shard2_replica1:
Expected mime type application/octet-stream but got text/html. <html>
<head>
<meta http-equiv="Content-Type" content="text/html;charset=utf-8"/>
<title>Error 401 Unauthorized request, Response code: 401</title>
</head>
<body><h2>HTTP ERROR 401</h2>
<p>Problem accessing /solr/adm_shard2_replica1/select. Reason:
<pre>   * Unauthorized request, Response code: 401*</pre></p>
</body>
</html>


John Blythe | 24 Jun 15:13 2016

frange and calculated values

Hi all,

I'm querying a pricing benchmark data set with product-level detail from a
customer's recent purchases. To help refine things, I'm attempting to keep
the low benchmark price within a range of 1/3x to 3x the currently paid
price.

So, for instance, if I've been buying Foo at $100, I don't want any results
below $33 or above $300 in the 'benchmarkLow' field.

I'm tripping over syntax, I think. I currently have:

{!frange l=div(100,3) u=prod(100,3) v='benchmarkLow'}

I've also tried it with benchmarkLow outside the curly braces:
{!frange}benchmarkLow

The benchmarkLow field is a double, fwiw.

What am I missing here?

Thanks for any info!
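
One way to sidestep the syntax question is to compute the two bounds on the
client and pass them to frange as plain numbers. A rough SolrJ sketch using the
numbers from the example above (the core URL is a placeholder; benchmarkLow is
the field from the question):

import java.util.Locale;

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class BenchmarkRange {
    public static void main(String[] args) throws Exception {
        // Core URL is a placeholder.
        HttpSolrClient client = new HttpSolrClient("http://localhost:8983/solr/benchmarks");

        double paid = 100.0;           // currently paid price for Foo
        double lower = paid / 3.0;     // ~33.33
        double upper = paid * 3.0;     // 300.0

        SolrQuery query = new SolrQuery("*:*");
        // frange bounds are passed as literal numbers computed client-side.
        query.addFilterQuery(String.format(Locale.ROOT,
                "{!frange l=%.2f u=%.2f}benchmarkLow", lower, upper));

        QueryResponse rsp = client.query(query);
        System.out.println("matches: " + rsp.getResults().getNumFound());

        client.close();
    }
}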
tjlp | 24 Jun 11:03 2016

Re: Does Solr 6.0 support indexing and querying for HUNGARIAN, KOREAN, SLOVAK, VIETNAMESE and Traditional Chinese documents?

Hi Alex,

In the list you provided, org.apache.lucene.analysis.hu.HungarianAnalyzer is
there, but in the source code of Solr 6.0 (including the Lucene source code)
no package org.apache.lucene.analysis.hu is defined.
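
For reference, a minimal sketch of exercising that analyzer class directly,
assuming the lucene-analyzers-common jar bundled with Solr is on the classpath
(the sample text is arbitrary):

import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.hu.HungarianAnalyzer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

public class HungarianCheck {
    public static void main(String[] args) throws Exception {
        HungarianAnalyzer analyzer = new HungarianAnalyzer();
        // Tokenize an arbitrary Hungarian word and print the resulting terms.
        TokenStream ts = analyzer.tokenStream("content", "házakban");
        CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
        ts.reset();
        while (ts.incrementToken()) {
            System.out.println(term.toString());
        }
        ts.end();
        ts.close();
        analyzer.close();
    }
}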

Thanks
Liu Peng

----- Original Message -----
From: Alexandre Rafalovitch <arafalov <at> gmail.com>
To: solr-user <solr-user <at> lucene.apache.org>, tjlp <at> sina.com
Subject: Re: Does Solr 6.0 support indexing and querying for HUNGARIAN, KOREAN, SLOVAK, VIETNAMESE and
Traditional Chinese documents?
Date: 2016-06-24 13:58

The full list is here: http://www.solr-start.com/info/analyzers . I can see at least Hungarian.
Regards,

    Alex
On 23 Jun 2016 7:46 PM,  <tjlp <at> sina.com> wrote: Hi,



I am using Solr 6.0 to index documents from different countries. I have gone through the Solr 6.0 reference
guide and can't find anything about Hungarian, Slovak, and Vietnamese language support. For Korean and
Traditional Chinese I can find the CJK tokenizer. Is the CJK tokenizer enough?




Henrik Brautaset Aronsen | 24 Jun 09:45 2016

How do I use Spring Boot when Solr 6.1 (and thus Jetty 9.3) is on the classpath?

I have a Spring Boot project, and I am trying to upgrade from Solr 5.4 to
Solr 6.1. Solr 6.1 depends on Jetty 9.3, and now Spring Boot complains
with a NoClassDefFoundError:
org/eclipse/jetty/server/handler/ContextHandler$NoContext. ContextHandler
exists in Jetty 9.3, but not the inner class NoContext.

Is there a way of solving this?

java.lang.IllegalStateException: Failed to load ApplicationContext
    at
org.springframework.test.context.cache.DefaultCacheAwareContextLoaderDelegate.loadContext(DefaultCacheAwareContextLoaderDelegate.java:124)
    at
org.springframework.test.context.support.DefaultTestContext.getApplicationContext(DefaultTestContext.java:83)
    at
org.springframework.test.context.support.DependencyInjectionTestExecutionListener.injectDependencies(DependencyInjectionTestExecutionListener.java:117)
    at
org.springframework.test.context.support.DependencyInjectionTestExecutionListener.prepareTestInstance(DependencyInjectionTestExecutionListener.java:83)
    at
org.springframework.test.context.TestContextManager.prepareTestInstance(TestContextManager.java:228)
    at
org.spockframework.spring.SpringTestContextManager.prepareTestInstance(SpringTestContextManager.java:49)
    at
org.spockframework.spring.SpringInterceptor.interceptSetupMethod(SpringInterceptor.java:42)
    at
org.spockframework.runtime.extension.AbstractMethodInterceptor.intercept(AbstractMethodInterceptor.java:28)
    at
org.spockframework.runtime.extension.MethodInvocation.proceed(MethodInvocation.java:87)
    at
org.spockframework.runtime.extension.MethodInvocation.proceed(MethodInvocation.java:88)
    at

