tedsolr | 2 Sep 23:12 2015

Merging documents from a distributed search

I've read from http://heliosearch.org/solrs-mergestrategy/ that the AnalyticsQuery
component only works for a single instance of Solr. I'm planning to
migrate to SolrCloud soon and I have a custom AnalyticsQuery module
that collapses what I consider to be duplicate documents, keeping stats like
a "count" of the dupes. For my purposes "dupes" are determined at run time
and vary by the search request. Once a collection has multiple shards I will
not be able to prevent "dupes" from appearing across those shards. A custom
merge strategy should allow me to merge my stats, but I don't see how I can
drop duplicate docs at that point.

If shard1 returns docs A & B and shard2 returns docs B & C (letters denoting
what I consider to be unique docs), can my implementation of a merge
strategy return only docs A, B, & C, rather than A, B, B, & C?

thanks! 
solr 5.2.1
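For the A/B/C example above, the intended merge semantics can be sketched in plain Python. This is not the Solr MergeStrategy API; the per-shard payloads, the dupe_key field, and the count field are hypothetical, chosen only to pin down the collapse-and-sum behavior being asked for:

```python
def merge_shard_results(shard_results):
    """Merge per-shard doc lists, collapsing docs that share a dedup key
    and summing their counts, keeping first-seen order."""
    merged = {}
    order = []
    for docs in shard_results:
        for doc in docs:
            key = doc["dupe_key"]
            if key in merged:
                merged[key]["count"] += doc["count"]
            else:
                merged[key] = dict(doc)
                order.append(key)
    return [merged[k] for k in order]

# shard1 returns A & B, shard2 returns B & C:
shard1 = [{"id": "1", "dupe_key": "A", "count": 1},
          {"id": "2", "dupe_key": "B", "count": 2}]
shard2 = [{"id": "3", "dupe_key": "B", "count": 1},
          {"id": "4", "dupe_key": "C", "count": 1}]
print(merge_shard_results([shard1, shard2]))  # A, B, C -- B's counts summed
```

Whether a MergeStrategy implementation is allowed to return fewer documents than the shards supplied is exactly the open question here; the sketch only fixes the desired result.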

--
View this message in context: http://lucene.472066.n3.nabble.com/Merging-documents-from-a-distributed-search-tp4226802.html
Sent from the Solr - User mailing list archive at Nabble.com.

Mark Fenbers | 2 Sep 23:03 2015

which solrconfig.xml

Hi,  I've been fiddling with Solr for two whole days since 
downloading/unzipping it.  I've learned a lot by reading 4 documents and 
the web site.  However, there are a dozen or so instances of 
solrconfig.xml in various $HOME/solr-5.3.0 subdirectories.  The 
documents/tutorials say to edit the solrconfig.xml file for various 
configuration details, but they never say which one of these dozen to 
edit.  Moreover, I cannot determine which copy is in use once I start 
Solr, so I don't know which instance of this file to edit/customize.

Can you help??

Thanks!
Mark
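One way to answer this from a running Solr: the CoreAdmin STATUS call reports each core's instanceDir, and the solrconfig.xml that core actually uses lives under that directory's conf/. A small sketch of reading such a response; the payload shape follows the STATUS API, but the core name and paths below are hypothetical:

```python
def config_paths(status):
    """Map each core to the solrconfig.xml it is using, given the parsed
    JSON body of /solr/admin/cores?action=STATUS&wt=json."""
    return {name: info["instanceDir"].rstrip("/") + "/conf/solrconfig.xml"
            for name, info in status["status"].items()}

# Hypothetical STATUS payload for a core created with "bin/solr create":
sample = {"status": {"mycore": {"name": "mycore",
                                "instanceDir": "/home/mark/solr-5.3.0/server/solr/mycore/"}}}
print(config_paths(sample))
```

The many other solrconfig.xml copies under the unzip directory are example configsets; only the one under the live core's instanceDir matters.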

Siamak Rowshan | 2 Sep 22:07 2015

Rules for pre-processing queries

Hi all, I need to refine my search results by adding parameters to the search query. For example, if a
user enters "ipad", I want to add a filter query such as (category:tablets) to refine the search
results. I thought a more general solution would be to define rules that examine the query parameter
values and can alter or add to the query parameters. Short of writing custom code, are there any features
within Solr or add-on tools that can do something like this?
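Short of a dedicated rules engine, the idea can be prototyped as a table of predicates that rewrite the request before it reaches Solr. A minimal sketch; the rules and the category field values are made up for illustration:

```python
RULES = [
    # (predicate on the user query, fq filter to append) -- illustrative only
    (lambda q: "ipad" in q.lower(), "category:tablets"),
    (lambda q: "macbook" in q.lower(), "category:laptops"),
]

def apply_rules(params):
    """Examine the q parameter and append fq filters for each matching rule,
    mimicking what a pre-query rewrite step could do."""
    q = params.get("q", "")
    fq = list(params.get("fq", []))
    for predicate, filt in RULES:
        if predicate(q):
            fq.append(filt)
    out = dict(params)
    out["fq"] = fq
    return out

print(apply_rules({"q": "ipad case"}))
```

Inside Solr itself this kind of logic is typically packaged as a custom SearchComponent placed before the query component; the table-driven shape above just keeps the rules data-driven rather than hard-coded.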

Regards,
Mak


Siamak Rowshan | Software Engineer
Softmart | 450 Acorn Lane Downingtown, PA 19335
P   | 888-763-8627
siamak.rowshan <at> softmart.com




Renee Sun | 2 Sep 21:30 2015

is there any way to tell delete by query actually deleted anything?

I run this curl trying to delete some messages :

curl
'http://localhost:8080/solr/mycore/update?commit=true&stream.body=<delete><id>abacd</id></delete>'
| xmllint --format -

or

curl
'http://localhost:8080/solr/mycore/update?commit=true&stream.body=<delete><query>myfield:mycriteria</query></delete>'
| xmllint --format -

the results I got is like:

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
148   148    0   148    0     0  11402      0 --:--:-- --:--:-- --:--:-- 14800
<?xml version="1.0" encoding="UTF-8"?>
<response>
  <lst name="responseHeader">
    <int name="status">0</int>
    <int name="QTime">10</int>
  </lst>
</response>

Is there an easy way for me to get the number of documents actually deleted? I
mean, if the query did not hit any documents, I want to know that nothing got
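The update response only carries status and QTime. A common workaround is to first run the same query as a select with rows=0 (e.g. /solr/mycore/select?q=myfield:mycriteria&rows=0) and read numFound, which is how many documents the delete would match. A sketch of pulling numFound out of the XML; the sample response and its count are made up:

```python
import re

def num_found(xml_text):
    """Extract numFound from a Solr select XML response; returns 0 if absent."""
    m = re.search(r'numFound="(\d+)"', xml_text)
    return int(m.group(1)) if m else 0

sample = '<response><result name="response" numFound="42" start="0"/></response>'
print(num_found(sample))  # 42
```

Comparing numDocs from the stats before and after the commit works too, but is racy if other clients are indexing at the same time.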

Shawn Heisey | 2 Sep 20:02 2015

Re: concept and choice: custom sharding or auto sharding?

On 9/2/2015 9:19 AM, scott chu wrote:
> Do you mean I only have to put 10M documents in one index and copy
> it to many slaves in a classic Solr master-slave architecture to
> provide querying service on the internet, and it won't have an obvious
> downgrade of query performance? But I did add 1M documents into
> one index on the master and provided 2 slaves to serve queries on the
> internet, and the query performance is kinda sad. Why do you say: "at 10M
> documents there's rarely a need to shard at all?" Do I provide too few
> slaves? What amount of documents is suitable for a need for shard in
> SolrCloud?

Lucene has exactly one hard and unbreakable limit, and it is the number
of documents you can have in a single index (core/shard for Solr).  That
limit is just over 2.1 billion documents.  The actual limiting factor is
the maximum value of an integer in Java.  Because deleted documents are
counted when this limit is considered, you shouldn't go over 1 billion
active documents per shard, but the *practical* recommendation for shard
size is much lower than that.

For various reasons, some of which are very technical and boring, the
general advice is to not exceed about 100 million documents per shard. 
Some setups can handle more docs per shard, some require a lot less. 
There are no quick answers or hard rules.  You may have been given this
URL before:

https://lucidworks.com/blog/sizing-hardware-in-the-abstract-why-we-dont-have-a-definitive-answer/
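Put as arithmetic, the advice above is only a starting-point estimate; the 100M figure is the rule of thumb just mentioned, not a hard limit, and real capacity must be found by testing:

```python
import math

def shards_needed(total_docs, docs_per_shard=100_000_000):
    """Back-of-the-envelope shard count from the ~100M-docs-per-shard
    rule of thumb; a planning starting point, not a sizing answer."""
    return max(1, math.ceil(total_docs / docs_per_shard))

print(shards_needed(10_000_000))     # 1 -- the 10M-doc case needs no sharding
print(shards_needed(2_500_000_000))  # 25
```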

There are sometimes reasons to shard a very small index.  This is the
correct path when the index is not very busy and you want to take

scott chu | 2 Sep 18:07 2015

Re: Re: Re: Re: concept and choice: custom sharding or auto sharding?

 
Hello solr-user,
 
Sorry, wrong again. Auto sharding is not the implicit router.
----- Original Message -----
From: scott chu
Date: 2015-09-02, 23:50:20
Subject: Re: Re: Re: concept and choice: custom sharding or auto sharding?

 
Hello solr-user,
 
Thanks! I'll go back and check my old environment; that article is really helpful.
 
BTW, I think I got compositeId wrong. In the reference guide, it says compositeId
needs numShards. That means what I described in question 5 seems wrong, because I
intended to plan one shard per whole year of news articles, and I thought SolrCloud
would create a new shard for me by itself when I add a new year's articles. But
since compositeId needs numShards specified first, there's no way I can know in
advance how many years I will put in SolrCloud. It looks like if I want to use
SolrCloud after all, I may have to use auto sharding (i.e. the implicit router).

Maulin Rathod | 2 Sep 18:05 2015

Solr Join support in Multiple Shard

As per this link (http://wiki.apache.org/solr/Join), Solr Join is supported
only for cores in a single shard. Is there any plan to support joins across
cores in multiple shards?
Erick Erickson | 2 Sep 17:30 2015

Re: Re: concept and choice: custom sharding or auto sharding?

bq: Why do you say: "at 10M documents there's rarely a need to shard at all?"

Because I routinely see 50M docs on a single node and I've seen over 300M docs
on a single node with sub-second responses. So if you're saying that you see
poor performance at 1M docs then I suspect there's something radically wrong
with your setup. Too little memory, very bad query patterns, whatever. If my
suspicion is true, then sharding will just mask the underlying problem.

You need to quantify your performance concerns. It's one thing to say
"my node satisfies 50 queries-per-second with 500ms response time" and
another to say "My queries take 5,000 ms".

In the first case, you do indeed need to add more servers to increase QPS if
you need 500 QPS. And adding more slaves is the best way to do that.
In the second, you need to understand the slowdown because sharding
will be a band-aid.

This might help:
https://wiki.apache.org/solr/SolrPerformanceProblems

Best,
Erick

On Wed, Sep 2, 2015 at 8:19 AM, scott chu <scott.chu <at> udngroup.com> wrote:
>
> Hello solr-user,
>
> Do you mean I only have to put 10M documents in one index and copy it to
> many slaves in a classic Solr master-slave architecture to provide querying
> service on the internet, and it won't have an obvious downgrade of query
> performance? But I did add 1M documents into one index on the master and
> provided 2 slaves to serve queries on the internet, and the query
> performance is kinda sad. Why do you say: "at 10M documents there's rarely a
> need to shard at all?" Do I provide too few slaves? What amount of documents
> is suitable for a need for shard in SolrCloud?

Erick Erickson | 2 Sep 17:00 2015

Re: concept and choice: custom sharding or auto sharding?

Frankly, at 10M documents there's rarely a need to shard at all.
Why do you think you need to? This seems like adding
complexity for no good reason. Sharding should only really
be used when you have too many documents to fit on a single
shard as it adds some overhead, restricts some
possibilities (cross-core join for instance, a couple of
grouping options don't work in distributed mode etc.).

You can still run SolrCloud and have it manage multiple
_replicas_ of a single shard for HA/DR.

So this seems like an XY problem, you're asking specific
questions about shard routing because you think it'll
solve some problem without telling us what the problem
is.

Best,
Erick

On Wed, Sep 2, 2015 at 7:47 AM, scott chu <scott.chu <at> udngroup.com> wrote:
> I posted a question on Stackoverflow: http://stackoverflow.com/questions/32343813/custom-sharding-or-auto-sharding-on-solrcloud
> However, since this is a mailing list, I repost the question below to ask for
> suggestions and a more subtle understanding of SolrCloud's behavior on
> document routing.
> I want to establish a SolrCloud cluster for over 10 million news articles.
> After reading this article in the Apache Solr Reference Guide, Shards and
> Indexing Data in SolrCloud, I have a plan as follows:
> - Add prefix ED2001! to the document ID, where ED means some newspaper source
>   and 2001 is the year part of the article's published date, i.e. I want to
>   put all news articles of a specific newspaper source published in a
>   specific year into one shard.
> - Create the collection with router.name set to compositeId.
> - Add documents?
> - Query the collection?
> Practically, I have some questions:
> - How do I add documents based on this plan? Do I have to specify special
>   parameters when updating the collection/core?
> - Is this called "custom sharding"? If not, what is "custom sharding"?
> - Is auto sharding a better choice for my case, since there's a
>   shard-splitting feature for auto sharding when a shard gets too big?
> - Can I query without the _route_ parameter?
> EDIT <at> 2015/9/2:
> This is how I think SolrCloud will behave: "The number of news articles from
> a specific newspaper source in a specific year tends to be around a fixed
> number, e.g. every year ED has around 80,000 articles, so each shard's size
> won't increase dramatically. For next year's news articles from ED, I only
> have to add the prefix 'ED2016!' to the document ID; SolrCloud will create a
> new shard for me (containing all ED2016 articles), and later the leader will
> spread the replica of this new shard to other nodes (one replica per node
> other than the leader?)". Am I right? If yes, it seems there's no need for
> shard-splitting.
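The compositeId behavior being asked about can be illustrated with a toy router: everything before the '!' is hashed, and the hash picks one of the shards fixed at collection creation, so ids sharing a prefix land together. Solr actually uses MurmurHash3 over per-shard hash ranges; the simple hash below is only a stand-in to show the semantics:

```python
def route(doc_id, num_shards):
    """Toy compositeId router: hash the part before '!' to pick a shard.
    Stand-in hash; Solr really uses MurmurHash3 over hash ranges."""
    prefix = doc_id.split("!", 1)[0]
    h = 0
    for ch in prefix:
        h = (h * 31 + ord(ch)) & 0xFFFFFFFF
    return h % num_shards

a = route("ED2001!article-1", 4)
b = route("ED2001!article-2", 4)
print(a == b)  # True -- same prefix, same shard
```

Note that with router.name=compositeId the shard count is fixed when the collection is created: a new prefix like ED2016! maps into one of the existing shards; it does not create a new shard.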

Zheng Lin Edwin Yeo | 2 Sep 11:53 2015

String bytes can be at most 32766 characters in length?

Hi,

I would like to check: must a string field's bytes be at most 32766 in length?

I'm trying to do a copyField of my rich-text documents' content to a field
with fieldType=string, to get distinct results for content: there are several
documents with exactly the same content, and we only want to list one of them
during searching.

However, I get the following errors on some of the documents when I try to
index them with the copyField. Some of my documents are quite large, and it
is possible they exceed 32766 bytes. Is there any other way to overcome this
problem?

org.apache.solr.common.SolrException: Exception writing document id collection1_polymer100 to the index; possible analysis error.
    at org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:167)
    at org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:69)
    at org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:51)
    at org.apache.solr.update.processor.DistributedUpdateProcessor.doLocalAdd(DistributedUpdateProcessor.java:955)
    at org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(DistributedUpdateProcessor.java:1110)
    at org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:706)
    at org.apache.solr.update.processor.LogUpdateProcessor.processAdd(LogUpdateProcessorFactory.java:104)
    at org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:51)
    at org.apache.solr.update.processor.LanguageIdentifierUpdateProcessor.processAdd(LanguageIdentifierUpdateProcessor.java:207)
    at org.apache.solr.handler.extraction.ExtractingDocumentLoader.doAdd(ExtractingDocumentLoader.java:122)
    at org.apache.solr.handler.extraction.ExtractingDocumentLoader.addDoc(ExtractingDocumentLoader.java:127)
    at org.apache.solr.handler.extraction.ExtractingDocumentLoader.load(ExtractingDocumentLoader.java:235)
    at org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:74)
    at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
    at org.apache.solr.core.SolrCore.execute(SolrCore.java:2064)
    at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:654)
    at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:450)
    at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:227)
    at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:196)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
    at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
    at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
    at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
    at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
    at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
    at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
    at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
    at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
    at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
    at org.eclipse.jetty.server.Server.handle(Server.java:497)
    at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
    at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
    at org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
    at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
    at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.IllegalArgumentException: Document contains at least one immense term in field="signature" (whose UTF8 encoding is longer than the max length 32766), all of which were skipped.  Please correct the analyzer to not produce such terms.  The prefix of the first immense term is: '[32, 60, 112, 62, 60, 98, 114, 62, 32, 32, 32, 60, 98, 114, 62, 56, 48, 56, 32, 72, 97, 110, 100, 98, 111, 111, 107, 32, 111, 102]...', original message: bytes can be at most 32766 in length; got 49960
    at org.apache.lucene.index.DefaultIndexingChain$PerField.invert(DefaultIndexingChain.java:670)
    at org.apache.lucene.index.DefaultIndexingChain.processField(DefaultIndexingChain.java:344)
    at org.apache.lucene.index.DefaultIndexingChain.processDocument(DefaultIndexingChain.java:300)
    at org.apache.lucene.index.DocumentsWriterPerThread.updateDocument(DocumentsWriterPerThread.java:232)
    at org.apache.lucene.index.DocumentsWriter.updateDocument(DocumentsWriter.java:458)
    at org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1363)
    at org.apache.solr.update.DirectUpdateHandler2.addDoc0(DirectUpdateHandler2.java:239)
    at org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:163)
    ... 38 more
Caused by: org.apache.lucene.util.BytesRefHash$MaxBytesLengthExceededException: bytes can be at most 32766 in length; got 49960
    at org.apache.lucene.util.BytesRefHash.add(BytesRefHash.java:284)
    at org.apache.lucene.index.TermsHashPerField.add(TermsHashPerField.java:154)
    at org.apache.lucene.index.DefaultIndexingChain$PerField.invert(DefaultIndexingChain.java:660)
    ... 45 more
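One workaround for the immense-term error above is to copy a fixed-length digest of the content instead of the raw text: equal content still yields equal values for dedup/grouping, and the term stays far under the 32766-byte limit. A sketch of the idea (Solr ships SignatureUpdateProcessorFactory, which implements this server-side):

```python
import hashlib

def signature(content):
    """Fixed-length stand-in for a huge string field: the MD5 hex digest is
    32 characters regardless of input size, and equal content gives equal
    signatures, so distinct/grouping on it still deduplicates."""
    return hashlib.md5(content.encode("utf-8")).hexdigest()

long_doc = "x" * 50_000  # would exceed the 32766-byte term limit as a raw value
print(len(signature(long_doc)))  # 32
```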

Regards,
Edwin
Long Yan | 2 Sep 11:17 2015

Strange behavior of solr

Hey,
I created a core with
bin\solr create -c mycore

I want to index the sample CSV files from solr-5.2.1.

If I index film.csv under solr-5.2.1\example\films\, Solr can only index the file up to the line
"2046,Wong Kar-wai,Romance Film|Fantasy|Science Fiction|Drama,,/en/2046_2004,2004-05-20"

But if I first index books.csv under solr-5.2.1\example\exampledocs and then index film.csv, Solr can
index all lines in film.csv.

Why?

Regards
Long Yan
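A plausible explanation, assuming the default data-driven (schemaless) configs: field types are guessed from the first values a field sees, so indexing order changes the guessed types, and a name like "2046" looks numeric. A toy sketch of that first-value guessing, grossly simplified to long vs string:

```python
def guess_type(value):
    """Mimic data-driven schema guessing: the first value seen for a field
    locks in its type (reduced here to just long vs string)."""
    try:
        int(value)
        return "long"
    except ValueError:
        return "string"

print(guess_type("2046"))                # long -- parses as a number
print(guess_type("Gone with the Wind"))  # string
```

If books.csv is indexed first, its values fix the guessed field types before film.csv arrives, which would explain the order dependence; declaring the fields explicitly in the schema (as the films example README advises) avoids the guessing entirely.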

