Donni Khan | 30 Mar 09:39 2015

Text clustering with SVD

Hello Mahout users,

I'm working on text clustering, and I would like to reduce the features to
improve the clustering process.
I would like to apply Singular Value Decomposition (SVD) before the
clustering step. I would be thankful if anyone who has used this before
could comment: is it a good idea for clustering?
Is there any other method in Mahout to reduce the text features before
clustering?
Does anyone have an idea of how I can apply SVD using Java code?
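As a starting point, here is a minimal in-memory sketch of what I mean,
using Mahout's math module (the matrix values are made up; a real run would
use the tf-idf vectors produced by seq2sparse):

import org.apache.mahout.math.DenseMatrix;
import org.apache.mahout.math.Matrix;
import org.apache.mahout.math.SingularValueDecomposition;

public class SvdExample {
  public static void main(String[] args) {
    // Toy document-term matrix: 4 documents x 5 terms (tf-idf weights).
    double[][] data = {
        {1.0, 0.0, 0.5, 0.0, 0.2},
        {0.0, 1.2, 0.0, 0.3, 0.0},
        {0.9, 0.0, 0.4, 0.0, 0.1},
        {0.0, 1.1, 0.0, 0.4, 0.0}
    };
    Matrix a = new DenseMatrix(data);

    // Full SVD: a = U * S * V^T
    SingularValueDecomposition svd = new SingularValueDecomposition(a);
    Matrix u = svd.getU();                    // documents x rank
    double[] sigma = svd.getSingularValues(); // descending order

    // Keep only the k strongest concepts; the rows of U scaled by the
    // singular values are the reduced document vectors to cluster.
    int k = 2;
    for (int doc = 0; doc < a.numRows(); doc++) {
      StringBuilder reduced = new StringBuilder("doc " + doc + ":");
      for (int j = 0; j < k; j++) {
        reduced.append(' ').append(u.get(doc, j) * sigma[j]);
      }
      System.out.println(reduced);
    }
  }
}

For corpora too large for memory, Mahout also ships a distributed
stochastic SVD (the ssvd job) that produces the same kind of factorization
on HDFS.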

Thanks in advance,
Donni
Hersheeta Chandankar | 30 Mar 09:18 2015

Latent Semantic Analysis for Document Categorization

Hi Ted,

Thank you for a quick reply.
It would be of great help if you could please explain what kind of 'linking
information between documents' I should look for.
Pat Ferrel | 27 Mar 14:45 2015

Need help on Windows

We are trying to put together the next release of Mahout but need a volunteer who uses it on Windows. The
script used to launch Hadoop MapReduce jobs and Spark jobs needs to be updated for the latest Windows and to
include the new CLI bits that run on Spark. The issue is about updating the mahout.cmd launcher script.

If you can help, check out this Jira: https://issues.apache.org/jira/browse/MAHOUT-1589

Thanks
Apache Mahout 
Hersheeta Chandankar | 26 Mar 13:55 2015

Latent Semantic Analysis for Document Categorization

Hi,

I'm working on a document categorization project in which I have crawled
text documents on different topics that I want to categorize into
pre-decided categories like travel, sports, education, etc.
Currently the approach I've used is to build a NaiveBayes classification
model in Mahout, which has given a good accuracy of 70%-75%. But I would
still like to improve the accuracy by capturing the semantic dependencies
between the words of the documents.
I've read about Latent Semantic Analysis (LSA), which creates a
term-document matrix and subjects it to a mathematical transformation
called Singular Value Decomposition (SVD).
My plan was to first subject the raw documents to LSA, then run k-means
clustering on the LSA output, and then give the clustered output as input
to the NaiveBayes classifier.
But on trying out LSA in Mahout, the end result was in numerical format,
and after clustering it was not accepted by the NaiveBayes classifier.
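For reference, my understanding of the projection step is something like
this minimal sketch (assuming a document-by-term matrix A = U*S*V^T from
Mahout's SingularValueDecomposition; the names are illustrative), which
folds a tf-idf document vector into the k concepts that k-means would then
cluster:

import org.apache.mahout.math.DenseVector;
import org.apache.mahout.math.Matrix;
import org.apache.mahout.math.Vector;

/** Projects a tf-idf document vector into the k-dimensional LSA concept space. */
public class LsaFoldIn {

  /**
   * @param v     term-by-rank factor V from the SVD of the doc-by-term matrix
   * @param sigma singular values in descending order
   * @param doc   tf-idf vector of one document (length = number of terms)
   * @param k     number of concepts to keep
   */
  public static Vector foldIn(Matrix v, double[] sigma, Vector doc, int k) {
    Vector concepts = new DenseVector(k);
    for (int j = 0; j < k; j++) {
      // concept coordinate j = (doc . v_j) / sigma_j
      concepts.set(j, doc.dot(v.viewColumn(j)) / sigma[j]);
    }
    return concepts;
  }
}

The output of this step is numeric by design, which seems to be where my
pipeline breaks down: Mahout's NaiveBayes trainer expects labeled vectors,
so the cluster assignments would presumably have to become labels rather
than features.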

Is my experimental approach wrong? Has anybody worked on a similar issue?
Could someone help me with the implementation of LSA or suggest another
approach for the semantic analysis of text documents?

Thanks
-Hersheeta
Ted Dunning | 26 Mar 02:03 2015

Re: Fw: Mahout dataset Vectorization

This is an old question that I just dredged up in my email.

There is still a question about your format here.  When you say "IPs" do
you mean that you have a list of IP addresses?

Or is this a server web log? Does that mean that the destination IP is
implicit? If so, you might be able to see a weak signal due to time
proximity of different IP addresses, but I can't see that you would see
much else. Time proximity might give you a hint about widespread attacks.

On Wed, Feb 18, 2015 at 6:49 AM, Raghuveer <alwaysraghu <at> yahoo.com> wrote:

>
> Hi,
>
> I was going through Mahout PPTs online and came across your email ID. I
> have a few issues when I want to analyse my dataset.
>
> I am trying to find out how I can make use of my dataset to present some
> relations. I have a dataset of the sort
>
> IPs,timestamp,bytes_transferred
>
> What are the different relationships I can derive from this set so that I
> can present some meaningful values using Mahout? Currently I am planning to
> use this set to represent which client (in the IPs column) had more traffic
> for a given time, so I will have to group the IPs together, I guess. Are
> there any better ideas, and how can I do it using Java code? It would be
> really helpful if you could show me a sample for this issue. Kindly suggest.
>
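A minimal plain-Java sketch of the grouping step you describe (the field
layout follows your IPs,timestamp,bytes_transferred line; the sample values
are invented):

import java.util.HashMap;
import java.util.Map;

/** Sums bytes transferred per client IP from lines like "10.0.0.1,1424242424,512". */
public class TrafficByIp {
  public static void main(String[] args) {
    String[] lines = {
        "10.0.0.1,1424242424,512",
        "10.0.0.2,1424242430,128",
        "10.0.0.1,1424242460,2048"
    };
    Map<String, Long> bytesByIp = new HashMap<>();
    for (String line : lines) {
      String[] fields = line.split(",");
      // Restricting to "a given time" just means filtering on fields[1] first.
      bytesByIp.merge(fields[0], Long.parseLong(fields[2]), Long::sum);
    }
    bytesByIp.forEach((ip, total) -> System.out.println(ip + " -> " + total + " bytes"));
  }
}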

Jayani Withanawasam | 24 Mar 11:12 2015

Error in TF-IDF vector creation - java.lang.IllegalStateException

Hi,

I'm trying to get text classification working in Mahout 1.0 on Hadoop in
fully distributed mode (Ubuntu 12.04 / Hadoop 2.6).

There I get the following error during TF vector creation.

Command:
mahout seq2sparse -i 20news-seq -o 20news-vectors -lnorm -nv -wt tfidf

Exception in thread "main" java.lang.IllegalStateException: Job failed!
    at org.apache.mahout.vectorizer.common.PartialVectorMerger.mergePartialVectors(PartialVectorMerger.java:131)
    at org.apache.mahout.vectorizer.DictionaryVectorizer.createTermFrequencyVectors(DictionaryVectorizer.java:206)
    at org.apache.mahout.vectorizer.SparseVectorsFromSequenceFiles.run(SparseVectorsFromSequenceFiles.java:274)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
    at org.apache.mahout.vectorizer.SparseVectorsFromSequenceFiles.main(SparseVectorsFromSequenceFiles.java:56)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
    at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)

James Parker | 19 Mar 15:11 2015

Mahout ALS RecommenderJob error

Hi,
I'm trying the ALS recommender using Java code. These are the steps I follow:

1. I run the DatasetSplitter job

2. I run the ParallelALSFactorizationJob job

3. I run the FactorizationEvaluator job

4. I run the RecommenderJob job

Everything is OK for steps 1 to 3. But when I run the RecommenderJob, I get
the following error:
-------------------------------------------------------------------------------------------------------------------------------
java.lang.Exception: java.lang.RuntimeException: java.lang.ClassCastException:
org.apache.mahout.math.map.OpenIntObjectHashMap cannot be cast to org.apache.mahout.common.Pair
    at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:403)
.
.
----------------------------------------------------------------------------------------------------------------------------

I'm surprised to see that if I run steps 1 to 3 using my Java code, and the
last step using the command line "mahout recommendfactorized ........"
(with the files generated by my Java code in steps 1 to 3), the
recommendations are correctly generated.
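For completeness, the way I understand it, the command line dispatches to
the same driver class, which can also be driven from Java; a minimal sketch
(paths and option values are placeholders):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.util.ToolRunner;
import org.apache.mahout.cf.taste.hadoop.als.RecommenderJob;

public class RunRecommender {
  public static void main(String[] args) throws Exception {
    // Same driver class the "mahout recommendfactorized" command runs.
    String[] jobArgs = {
        "--input", "als/userRatings",
        "--userFeatures", "als/out/U",
        "--itemFeatures", "als/out/M",
        "--numRecommendations", "10",
        "--maxRating", "5",
        "--output", "als/recommendations"
    };
    System.exit(ToolRunner.run(new Configuration(), new RecommenderJob(), jobArgs));
  }
}

If this variant works while my hand-built job does not, I suppose comparing
the argument lists and classpaths of the two runs is the first thing to check.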

Thank you very much


mw | 18 Mar 17:54 2015

using trainDocTopicModel to approximate p(topic|document)

Hello,

I am trying to use a topic model to approximate p(topic|document), like this:

// Load the trained model (dict is the String[] dictionary, models the model paths).
TopicModel model = new TopicModel(hadoopConf, conf.getEta(), conf.getAlpha(),
    dict, trainingThreads, modelWeight, models);

// Start from a uniform topic distribution for the document.
Vector docTopics = new DenseVector(new double[model.getNumTopics()])
    .assign(1.0 / model.getNumTopics());
Matrix docTopicModel = new SparseRowMatrix(model.getNumTopics(), document.size());

// Iterate the inference step until p(topic|document) settles.
int maxIters = 5000;
for (int i = 0; i < maxIters; i++) {
  model.trainDocTopicModel(document, docTopics, docTopicModel);
}

For some reason the values in docTopics don't change much.
Does that indicate that the current document does not fit the model?
Could there be another explanation?

Best,
Max

mw | 17 Mar 09:46 2015

Importing tfidf from training set

Hello,

I am running LDA on a training set to create a topic model.
To calculate p(topic|document) on unseen data, I need to import the inverse
document frequency from the training set.
Is there a way to do that in Mahout?
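If it matters, the training vectors came from seq2sparse, so I assume the
document-frequency information sits next to them (dictionary.file-0 mapping
terms to ids, df-count holding per-term document counts). A minimal sketch
of reading both back, under that assumption (the directory name is a
placeholder):

import java.util.HashMap;
import java.util.Map;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.mahout.common.Pair;
import org.apache.mahout.common.iterator.sequencefile.PathFilters;
import org.apache.mahout.common.iterator.sequencefile.PathType;
import org.apache.mahout.common.iterator.sequencefile.SequenceFileDirIterable;
import org.apache.mahout.common.iterator.sequencefile.SequenceFileIterable;

public class ImportDocFrequencies {
  public static void main(String[] args) {
    Configuration conf = new Configuration();

    // term -> integer id, as written by seq2sparse
    Map<String, Integer> dictionary = new HashMap<>();
    for (Pair<Text, IntWritable> rec : new SequenceFileIterable<Text, IntWritable>(
        new Path("train-vectors/dictionary.file-0"), true, conf)) {
      dictionary.put(rec.getFirst().toString(), rec.getSecond().get());
    }

    // term id -> document frequency; key -1 carries the total document count
    Map<Integer, Long> df = new HashMap<>();
    for (Pair<IntWritable, LongWritable> rec :
        new SequenceFileDirIterable<IntWritable, LongWritable>(
            new Path("train-vectors/df-count"), PathType.LIST,
            PathFilters.partFilter(), conf)) {
      df.put(rec.getFirst().get(), rec.getSecond().get());
    }
    System.out.println(dictionary.size() + " terms loaded, "
        + df.get(-1) + " training documents");
  }
}

With the dictionary and df counts in hand, the training-time idf can be
reproduced for unseen documents before handing them to the model.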

Best,
Max

Jeff Isenhart | 12 Mar 04:24 2015

demo spark-itemsimilarity; empty output

I am trying to run the example found here:
http://mahout.apache.org/users/recommender/intro-cooccurrence-spark.html

The data (demoItems.csv added to HDFS) is just copied from the example:

u1,purchase,iphone
u1,purchase,ipad
u2,purchase,nexus
...

But when I run

mahout spark-itemsimilarity -i demoItems.csv -o output2 -fc 1 -ic 2

I get empty _SUCCESS and part-00000 files in output2/indicator-matrix.

Any ideas?
Hartwig Anzt | 11 Mar 15:33 2015

error hard to digest

Dear Mahout-Users,

I run ALS on a dataset, and it works for feature space sizes 10, 20, and
30. When I try to run it with a feature space size of 50, I receive the
error below (sorry for the long mail, I wanted to include all the output).
It looks like the first iterations are fine, and then it breaks. It is not
a memory problem, I've checked that...

Many thanks for any help!

Hartwig




+++++++++++++++++++++++++++++++++
Warning: $HADOOP_HOME is deprecated.

Warning: $HADOOP_HOME is deprecated.

15/03/09 11:27:20 INFO common.AbstractJob: Command line arguments: {--alpha=[50], --endPhase=[2147483647], --implicitFeedback=[true], --input=[/mnt/sparse_matrices/mtx/recommendation_systems/rec-feedback.edges], --lambda=[0.1], --numFeatures=[50], --numIterations=[3], --numThreadsPerSolver=[16], --output=[output_16], --startPhase=[0], --tempDir=[tmp_16]}
15/03/09 11:27:21 INFO util.NativeCodeLoader: Loaded the native-hadoop library
15/03/09 11:27:21 INFO input.FileInputFormat: Total input paths to process : 1

