Spark 2.1.1: How to predict topics for unseen documents with an already-trained LDA model?

Tags: apache-spark, machine-learning, pyspark, lda

I am training an LDA model with pyspark (Spark 2.1.1) on a dataset of customer reviews. Now, based on this model, I want to predict the topics of new, unseen text.

I use the following code to build the model:

from pyspark import SparkContext
from pyspark.sql import SparkSession, Row
from pyspark.ml.feature import CountVectorizer, StopWordsRemover
from pyspark.mllib.clustering import LDA
from pyspark.mllib.linalg import Vectors


sc = SparkContext("local[*]", "review")
spark = SparkSession.builder.appName('Basics').getOrCreate()
df = spark.read.csv("D:/sparkdata/customers_data.csv", header=True, inferSchema=True)

# Index each review and split it into words
data = (df.select("Reviews").rdd
          .map(lambda x: x[0])
          .zipWithIndex()
          .map(lambda words: Row(idd=words[1], words=words[0].split(" ")))
          .collect())

docDF = spark.createDataFrame(data)
remover = StopWordsRemover(inputCol="words", outputCol="stopWordsRemoved")
stopWordsRemoved_df = remover.transform(docDF).cache()

# `cv` rather than `Vector`, which shadowed the Vectors class
cv = CountVectorizer(inputCol="stopWordsRemoved", outputCol="vectors")
model = cv.fit(stopWordsRemoved_df)
result = model.transform(stopWordsRemoved_df)

# mllib's LDA expects an RDD of [id, mllib Vector] pairs
corpus = result.select("idd", "vectors").rdd \
    .map(lambda x: [x[0], Vectors.fromML(x[1])]).cache()

# Cluster the documents into topics using LDA
ldaModel = LDA.train(corpus, k=3, maxIterations=100, optimizer='online')
topics = ldaModel.topicsMatrix()
vocabArray = model.vocabulary
print(ldaModel.describeTopics())

wordNumbers = 10  # number of words per topic
topicIndices = sc.parallelize(ldaModel.describeTopics(maxTermsPerTopic=wordNumbers))

def topic_render(topic):  # map term indices back to actual words
    terms = topic[0]
    result = []
    for i in range(wordNumbers):
        term = vocabArray[terms[i]]
        result.append(term)
    return result

topics_final = topicIndices.map(lambda topic: topic_render(topic)).collect()

for topic in range(len(topics_final)):
    print("Topic" + str(topic) + ":")
    for term in topics_final[topic]:
        print(term)
    print('\n')

Now I have a dataframe with a column containing new customer reviews, and I want to predict which topic cluster they belong to. I have searched for an answer, and the approach most often recommended is the `toLocal` / `topicDistributions` snippet shown at the bottom of this post.

However, I get the following error:

'LDAModel' object has no attribute 'toLocal'. Nor does it have a topicDistributions attribute.

So, are these attributes not supported in Spark 2.1.1?


So, is there any other way to infer topics from unseen data?

You need to preprocess the new data:

# import the new dataset to pass through the pre-trained LDA
data_new = pd.read_csv('YourNew.csv', encoding="ISO-8859-1")
data_new = data_new.dropna()
data_text_new = data_new[['your_target_column']]
data_text_new['index'] = data_text_new.index
documents_new = data_text_new
# documents_new = documents.dropna(subset=['Preprocessed Document'])

# run the new dataset through the lemmatization and stopword functions
processed_docs_new = documents_new['Preprocessed Document'].map(preprocess)

# create a dictionary of individual words and filter it
dictionary_new = gensim.corpora.Dictionary(processed_docs_new[:])
dictionary_new.filter_extremes(no_below=15, no_above=0.5, keep_n=100000)

# define the bow corpus
bow_corpus_new = [dictionary_new.doc2bow(doc) for doc in processed_docs_new]
I hope all of this works more or less the same way in Spark. Right? I am mainly asking whether the topicDistributions() and toLocal attributes are available in Spark to return the topic distribution of documents, and if not, what the alternative is. Your answer is good, but I don't know whether it applies to pyspark 2.1.1.

@UsmanKhan Alternatives include the MALLET wrapper and gensim's LDA. The later part of the code uses pandas for topic assignment; Spark only trains the model, and you need to use the dataframe for the assignment.

I'm getting an "ldamodel is not subscriptable" error on Spark 2.4.4.
The snippet recommended elsewhere, referenced in the question (Scala API):

newDocuments: RDD[(Long, Vector)] = ...
topicDistributions = distLDA.toLocal.topicDistributions(newDocuments)