How do I correctly generate TF-IDF vectors for sentences in Apache Spark using Java?


I have this code:

import java.util.Arrays;
import java.util.List;

import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.mllib.feature.HashingTF;
import org.apache.spark.mllib.feature.IDF;
import org.apache.spark.mllib.feature.IDFModel;
import org.apache.spark.mllib.linalg.Vector;
import org.apache.spark.sql.SparkSession;

public class TfIdfExample {
    public static void main(String[] args) {
        JavaSparkContext sc = SparkSingleton.getContext();
        SparkSession spark = SparkSession.builder()
                .config("spark.sql.warehouse.dir", "spark-warehouse")
                .getOrCreate();
        // three tokenized documents, spread over 2 partitions
        JavaRDD<List<String>> documents = sc.parallelize(Arrays.asList(
                Arrays.asList("this is a sentence".split(" ")),
                Arrays.asList("this is another sentence".split(" ")),
                Arrays.asList("this is still a sentence".split(" "))), 2);

        HashingTF hashingTF = new HashingTF();
        documents.cache();
        JavaRDD<Vector> featurizedData = hashingTF.transform(documents);
        // alternatively, CountVectorizer can also be used to get term frequency vectors

        IDF idf = new IDF();
        IDFModel idfModel = idf.fit(featurizedData);

        featurizedData.cache();

        JavaRDD<Vector> tfidfs = idfModel.transform(featurizedData);
        System.out.println(tfidfs.collect());
        KMeansProcessor kMeansProcessor = new KMeansProcessor();
        JavaPairRDD<Vector, Integer> result = kMeansProcessor.Process(tfidfs);
        result.collect().forEach(System.out::println);
    }
}
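SparkSingleton just hands out a shared JavaSparkContext. KMeansProcessor is my own wrapper around mllib's KMeans; a minimal sketch of what it does (k = 2 and maxIterations = 20 are simplifications here, the real class is configurable):

import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.mllib.clustering.KMeans;
import org.apache.spark.mllib.clustering.KMeansModel;
import org.apache.spark.mllib.linalg.Vector;
import scala.Tuple2;

public class KMeansProcessor {
    // trains KMeans on the TF-IDF vectors and pairs each vector with its cluster id
    public JavaPairRDD<Vector, Integer> Process(JavaRDD<Vector> vectors) {
        KMeansModel model = KMeans.train(vectors.rdd(), 2, 20);
        return vectors.mapToPair(v -> new Tuple2<>(v, model.predict(v)));
    }
}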
After the KMeans step I get this:

((1048576,[489554,540177,736740,894973],[0.28768207245178085,0.0,0.0,0.0]),1)
((1048576,[489554,540177,736740,894973],[0.28768207245178085,0.0,0.0,0.0]),0)
((1048576,[489554,540177,736740,894973],[0.28768207245178085,0.0,0.0,0.0]),1)
((1048576,[455491,540177,736740,894973],[0.6931471805599453,0.0,0.0,0.0]),1)
((1048576,[489554,540177,560488,736740,894973],[0.28768207245178085,0.0,0.6931471805599453,0.0,0.0]),1)
((1048576,[455491,540177,736740,894973],[0.6931471805599453,0.0,0.0,0.0]),0)
((1048576,[455491,540177,736740,894973],[0.6931471805599453,0.0,0.0,0.0]),1)
((1048576,[489554,540177,560488,736740,894973],[0.28768207245178085,0.0,0.6931471805599453,0.0,0.0]),0)
((1048576,[489554,540177,560488,736740,894973],[0.28768207245178085,0.0,0.6931471805599453,0.0,0.0]),1)
But I don't think this is correct, because the TF-IDF vectors ought to look different. I thought mllib already had ready-made methods for this, but I tested the documentation examples and didn't get what I need, and I haven't found a custom solution for Spark. Can someone tell me what I'm doing wrong? Maybe I'm not using the mllib functions correctly?

What you get after TF-IDF is a SparseVector.

To understand the values better, let me start with the TF vectors:

(1048576,[489554,540177,736740,894973],[1.0,1.0,1.0,1.0])
(1048576,[455491,540177,736740,894973],[1.0,1.0,1.0,1.0])
(1048576,[489554,540177,560488,736740,894973],[1.0,1.0,1.0,1.0,1.0])
The TF vector corresponding to the first sentence, for example, is a 1048576-component (= 2^20) vector with 4 non-zero values at indices 489554, 540177, 736740 and 894973; all other components are zero and are therefore not stored in the sparse vector representation.
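To make the representation concrete, here is a small self-contained sketch (using mllib's Vectors factory; the size, indices and values are copied from the first TF vector above) showing that only the non-zero entries are actually stored:

import org.apache.spark.mllib.linalg.SparseVector;
import org.apache.spark.mllib.linalg.Vectors;

public class SparseVectorDemo {
    public static void main(String[] args) {
        // the first TF vector from above: 2^20 components,
        // of which only the four hashed term indices are stored
        SparseVector tf = (SparseVector) Vectors.sparse(
                1 << 20,
                new int[]{489554, 540177, 736740, 894973},
                new double[]{1.0, 1.0, 1.0, 1.0});

        System.out.println(tf.size());           // 1048576
        System.out.println(tf.indices().length); // 4 stored entries, the rest are implicit zeros
    }
}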

The dimensionality of the feature vectors equals the number of buckets the terms are hashed into: 1048576 = 2^20 buckets.

For a corpus of this size you should consider reducing the number of buckets:

HashingTF hashingTF = new HashingTF(32);
A power of 2 is recommended to keep the number of hash collisions low.
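For example (a sketch that reuses the documents RDD from your question; the concrete hash indices will of course differ from the 2^20 case):

// with 32 buckets the same corpus yields 32-dimensional TF vectors,
// at the price of more hash collisions between distinct terms
HashingTF smallHashingTF = new HashingTF(32);
JavaRDD<Vector> smallFeaturized = smallHashingTF.transform(documents);
smallFeaturized.collect().forEach(System.out::println);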

Next, the IDF weights are applied:

(1048576,[489554,540177,736740,894973],[0.28768207245178085,0.0,0.0,0.0])
(1048576,[455491,540177,736740,894973],[0.6931471805599453,0.0,0.0,0.0])
(1048576,[489554,540177,560488,736740,894973],[0.28768207245178085,0.0,0.6931471805599453,0.0,0.0])
If we look at the first sentence again, we get three zeros, which is expected: the terms "this", "is" and "sentence" appear in every document of the corpus, so by the definition of IDF their weights are equal to zero.


Why are the zero values still kept in the (sparse) vector? Because in the current implementation only the values are multiplied by the IDF; the indices are left untouched, so entries that become zero are not dropped from the sparse structure.
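You can see this by walking over the stored entries of one of the printed vectors (a sketch; tfidfVector is a placeholder for any element of tfidfs.collect()):

SparseVector v = (SparseVector) tfidfVector;
int[] indices = v.indices();
double[] values = v.values();
for (int i = 0; i < indices.length; i++) {
    // 0.0 values are still listed: IDF rescales the stored values
    // but never removes indices from the sparse structure
    System.out.println(indices[i] + " -> " + values[i]);
}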

Thanks, but do you mean the print output is truncated? No, I copy-pasted it from the console; that's why I think TF-IDF isn't returning real vectors. I tried new HashingTF(32) and the indices got smaller, but I don't understand why I get 0.0 for some of the values in the second tuple.

I ran your example, and those values really should be equal to zero. I've added more details/links to the explanation; let me know if that helps.

One question: in (1048576,[489554,540177,736740,894973],[0.28768207245178085,0.0,0.0,0.0]), is [0.28768207245178085,0.0,0.0,0.0] the vector after applying IDF to TF?

A SparseVector is given by (size, indices, values), so the actual components of the vector are the ones in the list [0.28768207245178085,0.0,0.0,0.0].

That's after IDF, yes, but then how do I quickly get vectors with the values:
(1048576,[489554,540177,736740,894973],[0.28768207245178085,0.0,0.0,0.0])
(1048576,[455491,540177,736740,894973],[0.6931471805599453,0.0,0.0,0.0])
(1048576,[489554,540177,560488,736740,894973],[0.28768207245178085,0.0,0.6931471805599453,0.0,0.0])