Python: plotting word vectors with singular value decomposition to measure similarity

This is the code I use to build a word co-occurrence matrix of neighbour counts. I found the code below online; it uses SVD.

 import numpy as np
 import matplotlib.pyplot as plt

 la = np.linalg
 words = ['I', 'like', 'enjoying', 'deep', 'learning', 'NLP', 'flying', '.']
 ### Co-occurrence matrix counting how many times the word before and after a particular word appears (e.g. 'like' appears after 'I' 2 times)
 arr = np.array([[0,2,1,0,0,0,0,0],[2,0,0,1,0,1,0,0],[1,0,0,0,0,0,1,0],[0,0,0,1,0,0,0,1],[0,1,0,0,0,0,0,1],[0,0,1,0,0,0,0,8],[0,2,1,0,0,0,0,0],[0,0,1,1,1,0,0,0]])
 u, s, v = la.svd(arr, full_matrices=False)
 for i in range(len(words)):      # xrange in the original (Python 2 only)
     plt.text(u[i,2], u[i,3], words[i])
 plt.show()                       # needed to actually display the figure

In the last line of the code, one element of each row of U is used as the x coordinate and another as the y coordinate to project the words and inspect their similarity. What is the intuition behind this approach? Why do they take two elements of each row of U (each row representing one word) as the x and y coordinates that represent that word? Please help.

By the definition of the SVD, the s returned by the la.svd method contains the singular values in descending order (NumPy returns them as a 1-D array; diag(s) is the diagonal matrix in the factorization). Selecting the first two columns of u therefore keeps the components of the original matrix associated with the two largest singular values, i.e. the most important ones. Each row of u then gives low-dimensional coordinates for the corresponding word, so words with similar co-occurrence patterns end up close together in the plot.

This process is also known as dimensionality reduction (see Section 11.3.3 of the referenced text).
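As a rough illustration of that point (this snippet is not part of the original answer), the sketch below reuses the question's co-occurrence matrix, checks that la.svd returns the singular values from largest to smallest, and builds the rank-2 approximation that keeping only two columns of u corresponds to:

import numpy as np

la = np.linalg
arr = np.array([[0,2,1,0,0,0,0,0],[2,0,0,1,0,1,0,0],[1,0,0,0,0,0,1,0],[0,0,0,1,0,0,0,1],[0,1,0,0,0,0,0,1],[0,0,1,0,0,0,0,8],[0,2,1,0,0,0,0,0],[0,0,1,1,1,0,0,0]])
u, s, v = la.svd(arr, full_matrices=False)

# NumPy sorts the singular values from largest to smallest
print(s)

# share of the squared Frobenius norm captured by the first two components
print((s[:2] ** 2).sum() / (s ** 2).sum())

# rank-2 approximation of the co-occurrence matrix; plotting only two
# columns of u is the same idea, just without re-expanding to 8 dimensions
arr_rank2 = u[:, :2] @ np.diag(s[:2]) @ v[:2, :]
print(np.round(arr_rank2, 2))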


In the plot, pan the axes to the left and you will see all the words.
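If some labels fall outside the default view (plt.text does not rescale the axes automatically), one simple way to see all the words is to set the axis limits from the coordinates being plotted. A minimal sketch, assuming the u and words from the question's code are still in scope and using the same columns it plots:

import matplotlib.pyplot as plt

xs, ys = u[:, 2], u[:, 3]          # same columns as the question's plot
for word, x, y in zip(words, xs, ys):
    plt.text(x, y, word)
pad = 0.1                          # small margin so no label sits on the border
plt.xlim(xs.min() - pad, xs.max() + pad)
plt.ylim(ys.min() - pad, ys.max() + pad)
plt.show()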

Where does the code come from? Can you post a link?
@alvas - the code was written by a friend of mine as part of his project work, but it does work. I just cannot intuitively understand how and why they chose U[row,1] and U[row,2] as the x and y coordinates.
I had a hard time understanding this too, and this helped me a lot:
import numpy as np
import matplotlib.pyplot as plt

la = np.linalg
words = ["I", "like", "enjoy", "deep", "learning", "NLP", "flying", "."]
# co-occurrence counts for the same kind of toy corpus
X = np.array([[0,2,1,0,0,0,0,0], [2,0,0,1,0,1,0,0], [1,0,0,0,0,0,1,0], [0,1,0,0,1,0,0,0], [0,0,0,1,0,0,0,1], [0,1,0,0,0,0,0,1], [0,0,1,0,0,0,0,1], [0,0,0,0,1,1,1,0]])
U, s, Vh = la.svd(X, full_matrices=False)

# plot each word at the coordinates given by the first two columns of U,
# i.e. the directions associated with the two largest singular values
for i in range(len(words)):
    plt.text(U[i,0], U[i,1], words[i])
plt.show()
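To go beyond eyeballing the plot, one option (not mentioned in the thread) is to compare the 2-D word vectors numerically, for example with cosine similarity. A minimal sketch, assuming the U, s and words from the block above:

import numpy as np

# 2-D embedding per word: the first two left singular vectors,
# scaled by their singular values
emb = U[:, :2] * s[:2]

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# words with similar co-occurrence patterns should score close to 1
print(cosine(emb[words.index("like")], emb[words.index("enjoy")]))
print(cosine(emb[words.index("like")], emb[words.index(".")]))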