Felipe Vinturini | 23 Jul 00:26 2016

Solr Date Query "Intraday"

Hi all,

Is there a way to query Solr between dates and also restrict the query
"intraday", i.e. to a range of hours within those days? Something like: I want
to search field "text" for the value "test", with field "date" between 20160601
AND 20160610, but matching only the hours 1PM to 4PM on each of those days.

I know I could loop over the dates; I just would like to know if there is
another way to do it in Solr (an example of the workaround I mean is below). My Solr version is 4.10.2.
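
To make it concrete, the kind of query I'm imagining would look something like
this (hour_of_day is a hypothetical extra field I'd have to populate at index
time, which is exactly the reindexing I'm hoping to avoid):

q=text:test
&fq=date:[2016-06-01T00:00:00Z TO 2016-06-10T23:59:59Z]
&fq=hour_of_day:[13 TO 16]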

Also, is there a "name" for this kind of query?

Thanks a lot for your attention and help.

Regards,
Felipe.
tedsolr | 22 Jul 17:23 2016

Should streaming place load on the app server?

The streaming API looks like it's meant to be run from the client app server
- very similar to a standard Solr search. But when I run a basic streaming
operation, the memory consumption occurs in the app server JVM, not the Solr
server JVM - the opposite of what I was expecting.

(pseudo code)
TupleStream a = new CloudSolrStream(...);
TupleStream b = new CloudSolrStream(...);
TupleStream c = new HashJoinStream(a, b, ...);
TupleStream d = new SortStream(c, ...);
TupleStream e = new ReducerStream(d, ...);
e.open();

The SortStream is processed in memory when open() is called. Can the
processing be pushed off to the Solr cluster instead? Is that what the
ParallelStream will do, using worker collections? A sketch of what I mean is below.
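
Something like this, presumably (sketch only; the worker collection, zkHost,
worker count and sort field are placeholders, and buildReducerPipeline() is a
made-up stand-in for the a..e chain above):

(pseudo code)
// ParallelStream / FieldComparator / ComparatorOrder live in the
// org.apache.solr.client.solrj.io.stream and .comp packages.
TupleStream pipeline = buildReducerPipeline();  // the chain ending in ReducerStream
// ParallelStream ships the wrapped pipeline to N workers in "workerCollection",
// so the join/sort/reduce would happen on Solr nodes, not the app-server JVM.
TupleStream parallel = new ParallelStream("zk1:2181", "workerCollection", pipeline, 4,
        new FieldComparator("joinKey", ComparatorOrder.ASCENDING));
parallel.open();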

confused,
Ted


Alessandro Bon | 22 Jul 12:02 2016

Solr "replicateAfter optimize" is specified, but replication starts also on commits and master startup (tested on solr 5.5.2)

Hi everyone,
I am experiencing a replication issue with a master/slave configuration.

Issue: full index replications sometimes occur on master startup and after commits, even though only
the <str name="replicateAfter">optimize</str> directive is specified. Replication on commit occurs
only for sufficiently large commits. Replication then correctly runs again at the end of my indexing
job, after the optimize phase. As a result of this behaviour I get incomplete indexes on the slaves
during the indexing process.
Solr version: 5.5.2
Configuration:

<config>
    <abortOnConfigurationError>${solr.abortOnConfigurationError:true}</abortOnConfigurationError>

    <luceneMatchVersion>5.5.1</luceneMatchVersion>

    <dataDir>${solr.data.dir:}</dataDir>

    <directoryFactory name="DirectoryFactory"
                      class="${solr.directoryFactory:solr.StandardDirectoryFactory}"/>

    <indexConfig>
        <writeLockTimeout>1000</writeLockTimeout>
        <useCompoundFile>false</useCompoundFile>
        <ramBufferSizeMB>32</ramBufferSizeMB>
        <mergePolicyFactory class="org.apache.solr.index.TieredMergePolicyFactory">
           <int name="maxMergeAtOnce">10</int>
           <int name="segmentsPerTier">10</int>
        </mergePolicyFactory>
        <lockType>native</lockType>
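
For reference, the replication handler section being discussed would look roughly like this (a
sketch - my full config is cut off above, and the confFiles list is illustrative):

<requestHandler name="/replication" class="solr.ReplicationHandler">
    <lst name="master">
        <str name="replicateAfter">optimize</str>
        <str name="confFiles">schema.xml,stopwords.txt</str>
    </lst>
</requestHandler>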

Aristedes Maniatis | 22 Jul 09:22 2016

loading zookeeper data

Hi everyone

I'm not new to Solr, but I'm upgrading from Solr 4 to 5 and need to adopt the new Zookeeper configuration
requirement. It is adding a lot of extra complexity to our deployment and I want to check that we are doing it right.

1. We are using Saltstack to push files to deployment servers. That makes it easy to put files anywhere I
want, run scripts, etc. If you don't know Salt, it is a lot like Puppet or other configuration management
tools. Salt is all python.

2. We use Jenkins to build and test

3. Deployment servers are all FreeBSD.

Now, in the old days, I could just push the right core configuration files to each Solr instance (we have
three cores), make sure one is the master, and use cron to keep the master updated. The other Solr slaves
all update nicely. The problem we want to escape is that this configuration causes outages and other
random issues each time the Solr master does a full reload. It shouldn't, but it does, and hopefully the new
SolrCloud setup will be better.

Now, I can still deploy Solr and Zookeeper using Salt. All that works well and is easy. But how do I get the
configuration files from our development/test environment (built and tested with Jenkins) into
production? Obviously I want those config files in version control. And maybe Jenkins can zip up the 8
configuration files (per core) and push them to our artifact repository.

But then what? In the production cluster it seems I then need to

1. Grab the latest configuration bundle for each core and unpack them
2. Launch Java
3. Execute the Solr jars (from the production server since it must be the right version)
- with org.apache.solr.cloud.ZkCLI
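
Presumably something like this, or the bundled server/scripts/cloud-scripts/zkcli.sh wrapper (a
sketch; the classpath, ZK hosts, config directory and config name are placeholders):

java -classpath "server/solr-webapp/webapp/WEB-INF/lib/*:server/lib/ext/*" \
    org.apache.solr.cloud.ZkCLI -cmd upconfig \
    -zkhost zk1:2181,zk2:2181,zk3:2181 \
    -confdir /path/to/core-conf \
    -confname mycore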

Shyam R | 22 Jul 08:41 2016

Solr query - response status

All,

I see that Solr returns a status value of 0 for successful searches:

org.apache.solr.core.SolrCore; [users_shadow_shard1_replica1] webapp=/solr
path=/user/ping params={} status=0 QTime=0

I do see that the status comes back as 400 whenever the search is invalid
(e.g. invoking a query with parameters that reference fields not available in
the target collection).

What are the legitimate values of status, and what was the reason for choosing 0?
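
For reference, the same status appears in the response header of every reply;
illustrative success and failure examples:

<lst name="responseHeader">
    <int name="status">0</int>      <!-- success -->
    <int name="QTime">1</int>
</lst>

<lst name="responseHeader">
    <int name="status">400</int>    <!-- bad request -->
    <int name="QTime">2</int>
</lst>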

Thanks
Shyam
--
Ph: 9845704792

Reference to SolrCore from SearchComponent

Hi There,

I'm in the process of creating a custom SearchComponent. This component will have a long-running thread
performing an action to keep a list updated. As SearchComponents do not seem to have a destroy/close hook,
I was wondering if there is a way of getting a reference to the SolrCore the SearchComponent is
instantiated in and adding a CloseHook or similar. Is this possible? A sketch of what I'm imagining is below.
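
Roughly this (a sketch only - I'm assuming SolrCoreAware and CloseHook behave the way their
javadocs suggest; the class and thread names are made up):

import java.io.IOException;

import org.apache.solr.core.CloseHook;
import org.apache.solr.core.SolrCore;
import org.apache.solr.handler.component.ResponseBuilder;
import org.apache.solr.handler.component.SearchComponent;
import org.apache.solr.util.plugin.SolrCoreAware;

public class ListRefreshComponent extends SearchComponent implements SolrCoreAware {

    private volatile Thread refresher;

    // inform() is called once the core is available, so the thread can be
    // started here and a CloseHook registered to stop it on core shutdown.
    @Override
    public void inform(SolrCore core) {
        refresher = new Thread(this::refreshLoop, "list-refresher");
        refresher.setDaemon(true);
        refresher.start();
        core.addCloseHook(new CloseHook() {
            @Override public void preClose(SolrCore c) { refresher.interrupt(); }
            @Override public void postClose(SolrCore c) { }
        });
    }

    private void refreshLoop() {
        // long-running work to keep the list updated goes here
    }

    @Override public void prepare(ResponseBuilder rb) throws IOException { }
    @Override public void process(ResponseBuilder rb) throws IOException { }
    @Override public String getDescription() { return "keeps an external list refreshed"; }
    // plus whatever other abstract methods the Solr version in use requires
}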

Cheers,

Tom


Nick Vasilyev | 21 Jul 19:48 2016

Solr Rounding Issue On Float fields.

Hi, I am running into a weird rounding issue on Solr 5.2.1. I have a float field (I also tried tfloat) into which I am indexing 154035.26 (confirmed in the source data), but at query time I get back 154035.27 (.01 more). Additionally, when I query for the document with this number in the q parameter, it matches both values, .26 and .27.

I've fed the values through the analyzer and I get the same bizarre behavior (screenshot omitted). The field is a single-valued float or tfloat field.
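
A quick standalone check suggests this is ordinary 32-bit float precision (hedged - the
exact printed form depends on Java's Float.toString):

public class FloatCheck {
    public static void main(String[] args) {
        float f = 154035.26f;
        // 154035.26 has no exact 32-bit IEEE-754 representation; the nearest
        // float is 154035.265625, and Java prints the shortest decimal that
        // uniquely identifies it - hence the .27 coming back.
        System.out.println(f);                // expected: 154035.27
        System.out.println(154035.27f == f);  // expected: true - both decimals
                                              // round to the same float bits
    }
}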

Any help would be much appreciated, thanks in advance


Koorosh Vakhshoori | 21 Jul 19:35 2016

Any option to NOT return stack trace in Solr response?

Hi all,
  I have a Solr 5.2.1 installation and am getting the following error response when calling the TERMS
component. The error itself is not the point - I know what is going on in this instance. However, to
address security concerns, I am trying to have Solr truncate the stack trace in the response while
still logging the full error in its log file. Is there a flag or option I can set in solrconfig.xml,
globally or under the TERMS handler, to omit the trace or return just
'java.lang.NullPointerException'? I have looked at the source code and don't see anything relevant,
but I may have missed something. Any suggestions and pointers are appreciated.

<response>
<lst name="responseHeader">
<int name="status">500</int>
<int name="QTime">5</int>
</lst>
<lst name="error">
<str name="trace">
java.lang.NullPointerException
    at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:322)
    at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
    at org.apache.solr.core.SolrCore.execute(SolrCore.java:2067)
    at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:654)
    at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:450)
    at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:227)
    at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:196)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:239)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:239)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
    at org.apache.catalina.filters.CorsFilter.handleNonCORS(CorsFilter.java:439)
    at org.apache.catalina.filters.CorsFilter.doFilter(CorsFilter.java:178)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:239)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
    at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:219)
    at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:106)
    at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:136)
    at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:79)
    at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:610)
    at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:88)
    at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:526)
    at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1078)
    at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:655)
    at org.apache.coyote.http11.Http11NioProtocol$Http11ConnectionHandler.process(Http11NioProtocol.java:222)
    at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1566)
    at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:1523)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
    at java.lang.Thread.run(Thread.java:745)
</str>
<int name="code">500</int>
</lst>
</response>

Regards,

Koorosh
Timothy Potter | 21 Jul 18:28 2016

Streaming expression - workers is zero somehow?

I'm working with the 6.1.0 release and I have a single SolrCloud instance
with 1 shard / 1 replica. Somehow I'm triggering the error below, which from
what I can see means workers == 0 - but how? Shouldn't workers default to 1?

I should mention that my streaming expression doesn't include any
workers, i.e. it is simply:

val hashJoinExpr =
  s"""
     | hashJoin(
     |    search(${ratingsCollection},
     |           q="*:*",
     |           fl="movie_id,user_id,rating",
     |           sort="movie_id asc",
     |           qt="/export",
     |           partitionKeys="movie_id"),
     |    hashed=search(${moviesCollection},
     |                  q="*:*",
     |                  fl="movie_id,title",
     |                  sort="movie_id asc",
     |                  qt="/export",
     |                  partitionKeys="movie_id"),
     |    on="movie_id"
     |  )
   """.stripMargin

2016-07-21 10:08:44,596 [qtp2125832297-1073] ERROR RequestHandlerBase
- java.io.IOException: java.lang.RuntimeException:
java.lang.ArithmeticException: / by zero
    at org.apache.solr.search.HashQParserPlugin$HashQuery.createWeight(HashQParserPlugin.java:130)
    at org.apache.lucene.search.IndexSearcher.createWeight(IndexSearcher.java:752)
    at org.apache.lucene.search.IndexSearcher.createNormalizedWeight(IndexSearcher.java:735)
    at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:473)
    at org.apache.solr.search.DocSetUtil.createDocSetGeneric(DocSetUtil.java:102)
    at org.apache.solr.search.DocSetUtil.createDocSet(DocSetUtil.java:91)
    at org.apache.solr.search.SolrIndexSearcher.getDocSetNC(SolrIndexSearcher.java:1386)
    at org.apache.solr.search.SolrIndexSearcher.getPositiveDocSet(SolrIndexSearcher.java:1064)
    at org.apache.solr.search.SolrIndexSearcher.getProcessedFilter(SolrIndexSearcher.java:1234)
    at org.apache.solr.search.SolrIndexSearcher.getDocListNC(SolrIndexSearcher.java:1751)
    at org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1627)
    at org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:643)
    at org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:529)
    at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:293)
    at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:156)
    at org.apache.solr.core.SolrCore.execute(SolrCore.java:2036)
    at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:657)
    at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:464)
    at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:257)
    at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:208)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1676)
    at org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:109)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1676)
    at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
    at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:224)
    at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)
    at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
    at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
    at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
    at org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:399)
    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
    at org.eclipse.jetty.server.Server.handle(Server.java:518)
    at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:308)
    at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:244)
    at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
    at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
    at org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
    at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceAndRun(ExecuteProduceConsume.java:246)
    at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:156)
    at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:654)
    at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:572)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: java.lang.ArithmeticException: / by zero
    at org.apache.solr.search.HashQParserPlugin$HashQuery$SegmentPartitioner.run(HashQParserPlugin.java:215)
    at org.apache.solr.search.HashQParserPlugin$HashQuery.createWeight(HashQParserPlugin.java:127)
    ... 42 more
Caused by: java.lang.ArithmeticException: / by zero
    at org.apache.solr.search.HashQParserPlugin$HashQuery$SegmentPartitioner.run(HashQParserPlugin.java:210)
    ... 43 more

Rallavagu | 21 Jul 17:18 2016

solr.NRTCachingDirectoryFactory

Solr 5.4.1 with embedded Jetty, with cloud enabled.

We have a Solr deployment (approximately 3 million documents) with both 
write and search operations happening. We have a requirement to have 
updates available immediately (NRT). The directory factory is the default 
"solr.NRTCachingDirectoryFactory". Considering that caches are invalidated 
and rebuilt every time there is an update, I assume that 
"solr.NRTCachingDirectoryFactory" memory-maps the index files, so "reading 
from disk" is as simple and quick as reading from memory and does not incur 
any significant performance degradation. Am I right in that assumption? We 
have allocated a significant amount of RAM (48G total physical memory, 12G 
heap; total index size on disk is 15G), but I am not sure I am seeing 
optimal QTimes for searches. Any input is welcome. Thanks in advance.
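
For reference, this is the line I mean - the stock default in recent solrconfig.xml
files. (My understanding, hedged: NRTCachingDirectoryFactory wraps the underlying
directory, typically an mmap-based one on 64-bit JVMs, and caches small, newly
flushed segments on the heap.)

<directoryFactory name="DirectoryFactory"
                  class="${solr.directoryFactory:solr.NRTCachingDirectoryFactory}"/>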

Bhaumik Joshi | 21 Jul 15:56 2016

Issue in SolrInputDocument

Hi,

I am getting the error below while converting JSON back to my object. I am using Gson
(gson-2.2.4.jar) to generate JSON from an object and to rebuild the object from JSON; Gson's
fromJson() method throws the error below.
Note: this was working fine with solr-solrj-5.2.0.jar, but it causes this issue when I use
solr-solrj-6.1.0.jar. As far as I can tell, the SolrInputDocument class changed in solr-solrj-5.5.0.

java.lang.IllegalArgumentException: Can not set org.apache.solr.common.SolrInputDocument field
com.test.common.MySolrMessage.body to com.google.gson.internal.LinkedTreeMap
at sun.reflect.UnsafeFieldAccessorImpl.throwSetIllegalArgumentException(UnsafeFieldAccessorImpl.java:167)
at sun.reflect.UnsafeFieldAccessorImpl.throwSetIllegalArgumentException(UnsafeFieldAccessorImpl.java:171)
at sun.reflect.UnsafeObjectFieldAccessorImpl.set(UnsafeObjectFieldAccessorImpl.java:81)
at java.lang.reflect.Field.set(Field.java:764)
at com.google.gson.internal.bind.ReflectiveTypeAdapterFactory$1.read(ReflectiveTypeAdapterFactory.java:108)
at com.google.gson.internal.bind.ReflectiveTypeAdapterFactory$Adapter.read(ReflectiveTypeAdapterFactory.java:185)
at com.google.gson.internal.bind.TypeAdapterRuntimeTypeWrapper.read(TypeAdapterRuntimeTypeWrapper.java:40)
at com.google.gson.internal.bind.CollectionTypeAdapterFactory$Adapter.read(CollectionTypeAdapterFactory.java:81)
at com.google.gson.internal.bind.CollectionTypeAdapterFactory$Adapter.read(CollectionTypeAdapterFactory.java:1)
at com.google.gson.internal.bind.ReflectiveTypeAdapterFactory$1.read(ReflectiveTypeAdapterFactory.java:106)
at com.google.gson.internal.bind.ReflectiveTypeAdapterFactory$Adapter.read(ReflectiveTypeAdapterFactory.java:185)
at com.google.gson.Gson.fromJson(Gson.java:825)
at com.google.gson.Gson.fromJson(Gson.java:790)
at com.google.gson.Gson.fromJson(Gson.java:739)
at com.google.gson.Gson.fromJson(Gson.java:711)

public class MySolrMessage<T extends SolrInputDocument> implements IMessage
{
    private static final long serialVersionUID = 1L;
    private T body = null;
    private String collection;
    private int action;
    private int errorCode;
    private long msgId;
    // a few parameterized constructors
    // getters and setters for all of the above attributes
}

public interface IMessage extends Serializable
{
    public long getMsgId();
    public void setMsgId(long id);
    public Object getBody();
    public void setBody(Object o);
    public void setErrorCode(int ec);
    public int getErrorCode();
}

public class Request {
    LinkedList msgList = new LinkedList();

    public Request() {
    }

    public Request(LinkedList l) {
        this.msgList = l;
    }

    public LinkedList getMsgList() {
        return this.msgList;
    }
}

@JsonAutoDetect(JsonMethod.FIELD)
@JsonSerialize(include = JsonSerialize.Inclusion.NON_NULL)
public class Request2
{
    @JsonProperty
    @JsonDeserialize(as = LinkedList.class, contentAs = MySolrMessage.class)
    LinkedList<MySolrMessage<SolrInputDocument>> msgList = new LinkedList<MySolrMessage<SolrInputDocument>>();

    public Request2()
    {
    }

    public Request2(LinkedList<MySolrMessage<SolrInputDocument>> l)
    {
        this.msgList = l;
    }

    public LinkedList<MySolrMessage<SolrInputDocument>> getMsgList()
    {
        return this.msgList;
    }
}

public class Test {

    public static void main(String[] args) {
        SolrInputDocument solrDocument = new SolrInputDocument();
        solrDocument.addField("id", "1234");
        solrDocument.addField("name", "test");
        MySolrMessage<SolrInputDocument> asm =
                new MySolrMessage<>(solrDocument, "collection1", 1);
        IMessage message = asm;
        List<IMessage> msgList = new ArrayList<IMessage>();
        msgList.add(message);
        LinkedList ex = new LinkedList();
        ex.addAll(msgList);
        Request request = new Request(ex);
        try {
            Gson gson = new GsonBuilder().serializeNulls().create();
            String json = gson.toJson(request);
            Gson gson2 = new Gson();
            Request2 retObj = gson2.fromJson(json, Request2.class); // this throws the error above
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
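
One direction I'm considering (a sketch only - not verified against solr-solrj-6.1.0, and it
assumes the serialized body is a flat field-to-value JSON object, which it may not be with
default Gson serialization): register a custom Gson deserializer so the body is rebuilt as a
SolrInputDocument instead of Gson's internal LinkedTreeMap.

import java.lang.reflect.Type;
import java.util.Map;
import com.google.gson.Gson;
import com.google.gson.GsonBuilder;
import com.google.gson.JsonDeserializationContext;
import com.google.gson.JsonDeserializer;
import com.google.gson.JsonElement;
import com.google.gson.JsonParseException;
import org.apache.solr.common.SolrInputDocument;

// Hypothetical adapter: rebuilds a SolrInputDocument from a {"field": value, ...}
// JSON object instead of letting Gson fall back to LinkedTreeMap.
public class SolrInputDocumentDeserializer implements JsonDeserializer<SolrInputDocument> {
    @Override
    public SolrInputDocument deserialize(JsonElement json, Type typeOfT,
                                         JsonDeserializationContext ctx) throws JsonParseException {
        SolrInputDocument doc = new SolrInputDocument();
        for (Map.Entry<String, JsonElement> e : json.getAsJsonObject().entrySet()) {
            doc.addField(e.getKey(), ctx.deserialize(e.getValue(), Object.class));
        }
        return doc;
    }
}

// usage in Test.main, replacing "new Gson()":
// Gson gson2 = new GsonBuilder()
//         .registerTypeAdapter(SolrInputDocument.class, new SolrInputDocumentDeserializer())
//         .create();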

Any idea?

Thanks & Regards,

Bhaumik Joshi
