Lucidworks Hadoop Solr - Splitting Text into Paragraphs

I am using this project: and . I am trying to split some text into paragraphs and then search for words within them, but what I get back is the whole text instead. Is it possible to do something like that?

I am using:

hadoop jar solr-hadoop-job-2.2.5.jar com.lucidworks.hadoop.ingest.IngestJob \
  -Dlww.commit.on.close=true \
  -Dcom.lucidworks.hadoop.ingest.RegexIngestMapper.regex="(?sm)^.*?\.\s*$" \
  -Dcom.lucidworks.hadoop.ingest.RegexIngestMapper.groups_to_fields=0=match1_ss \
  -cls com.lucidworks.hadoop.ingest.RegexIngestMapper -c test2 -i /usr/local/hadoop/input \
  -s http://127.0.1.1:8983/solr -of com.lucidworks.hadoop.io.LWMapRedOutputFormat
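
For what it's worth, the (?s) flag in the regex above lets . cross newlines, so ^.*?\.\s*$ keeps matching across lines until it finds a period at a line end, which can pull in far more than one paragraph. Below is a minimal standalone sketch (plain java.util.regex, outside Hadoop) of a pattern that splits on blank lines instead; the pattern (?s)(.+?)(?:\n\s*\n|\z), the class name, and the sample input are illustrative assumptions, not something taken from the Lucidworks job itself.

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ParagraphSplitTest {
    public static void main(String[] args) {
        // Illustrative input: two paragraphs separated by a blank line.
        String text = "First paragraph. Still the first one.\n\n"
                + "Second paragraph. It should come back as a separate match.";

        // (?s) lets '.' cross newlines inside a paragraph; the lazy group
        // stops at the next blank line or at end of input (\z).
        Pattern paragraph = Pattern.compile("(?s)(.+?)(?:\\n\\s*\\n|\\z)");

        Matcher m = paragraph.matcher(text);
        int i = 0;
        while (m.find()) {
            // Group 1 holds the paragraph body, trimmed of surrounding whitespace.
            System.out.println("paragraph " + (++i) + ": " + m.group(1).trim());
        }
    }
}

If that behaves as expected locally, the same pattern could in principle be passed via -Dcom.lucidworks.hadoop.ingest.RegexIngestMapper.regex, with groups_to_fields=1=match1_ss so that capture group 1 (the paragraph body, rather than group 0, the whole match) lands in the match1_ss field; I have not verified this against version 2.2.5.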