
Python PySpark - how to get the top-k IDs based on the similarities represented in a CoordinateMatrix?

Tags: python, sorting, pyspark, cosine-similarity

I have a data dict whose keys represent items (1, 2, 3, … are item IDs) and whose values ('712907', '742068', …) represent users. I converted it to a DataFrame:

import pandas as pd

data_dict = {0: ['712907', '742068', '326136', '667386'],
             1: ['667386', '742068', '742068'],
             2: ['326136', '663056', '742068', '742068'],
             3: ['326136', '663056', '742068'],
             4: ['326116', '742068', '663056', '742068'],
             5: ['326136', '326136', '663056', '742068']}
df = pd.DataFrame.from_dict(data_dict, orient='index')
I group the items in the DataFrame by user ('712907', '742068', '326136', …); see the figure below:

from scipy import sparse

# Build an item x user count matrix (one column per user).
# Note: .sum(level=0) and .as_matrix() are deprecated on recent pandas;
# the modern spellings are .groupby(level=0).sum() and .to_numpy().
dframe = pd.get_dummies(df.stack()).sum(level=0)
sv = sparse.csr_matrix(dframe.as_matrix())
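For intuition, what `pd.get_dummies(df.stack()).sum(level=0)` builds is an item x user count matrix. A stdlib-only sketch of the same idea on a two-item subset (variable names here are illustrative, not part of the original code):

```python
from collections import Counter

data_dict = {0: ['712907', '742068', '326136', '667386'],
             1: ['667386', '742068', '742068']}

# One column per distinct user, one row per item; each cell counts how
# often that user appears under that item.
users = sorted({u for vals in data_dict.values() for u in vals})
matrix = {item: [Counter(vals)[u] for u in users]
          for item, vals in data_dict.items()}

print(users)      # ['326136', '667386', '712907', '742068']
print(matrix[1])  # [0, 1, 0, 2] -- user 742068 appears twice under item 1
```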

Note that the DataFrame above (dframe) is only a small example; the actual dframe has shape (309235 x 81566). That is why I want to use Spark to compute the cosine similarity between the rows (1, 2, 3, …) of sv (the sparse matrix). Here is what I have achieved so far:

import pyspark
from pyspark.sql import SQLContext
from pyspark.sql.types import Row

sc = pyspark.SparkContext(appName="cosinesim")
sqlContext = SQLContext(sc)
sv_rdd = sc.parallelize(sv.toarray())
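Because the matrix is transposed later in the pipeline, what columnSimilarities() ends up producing is the pairwise cosine between the original rows. As a plain-Python sanity check of that measure on two hypothetical count rows (the values are illustrative, not taken from the real data):

```python
import math

def cosine(a, b):
    # Standard cosine similarity: dot(a, b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(x * x for x in b)))

row_a = [0, 1, 1, 0, 0, 2]  # illustrative per-user counts for one item
row_b = [0, 1, 1, 0, 0, 1]

print(round(cosine(row_a, row_b), 4))  # 0.9428
```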
Using the approach from a linked answer, I convert the RDD to a DataFrame:

# Turn each dense row (a numpy array) into a Row whose fields are the
# stringified column positions.
def f(x):
    d = {}
    for i in range(len(x)):
        d[str(i)] = int(x[i])
    return d

dfspark = sv_rdd.map(lambda x: Row(**f(x))).toDF()
After this, I add a new "id" column:

from pyspark.sql.types import StructType, StructField, LongType

row_with_index = Row(*["id"] + dfspark.columns)

def make_row(columns):
    def _make_row(row, uid):
        row_dict = row.asDict()
        return row_with_index(*[uid] + [row_dict.get(c) for c in columns])
    return _make_row

f = make_row(dfspark.columns)

dfidx = (dfspark.rdd
    .zipWithIndex()
    .map(lambda x: f(*x))
    .toDF(StructType([StructField("id", LongType(), False)] + dfspark.schema.fields)))
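For clarity, zipWithIndex pairs each record with its position, and the helper then rebuilds each row with that position prepended as the id field. A plain-Python analogue of the reshaping (toy rows, names illustrative):

```python
rows = [{'0': 5, '1': 7}, {'0': 2, '1': 9}]  # toy row dicts

# zipWithIndex yields (element, index) pairs
indexed = list(zip(rows, range(len(rows))))

# Prepend the index as an 'id' field
with_id = [{'id': uid, **row} for row, uid in indexed]

print(with_id[1])  # {'id': 1, '0': 2, '1': 9}
```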
Finally, I compute the similarity between rows by transposing the matrix:

from pyspark.mllib.linalg.distributed import IndexedRow, IndexedRowMatrix

pred = IndexedRowMatrix(dfidx.rdd.map(lambda row: IndexedRow(row.id, row[1:])))
pred1 = pred.toBlockMatrix().transpose().toIndexedRowMatrix()
pred_sims = pred1.columnSimilarities()
How can I get the top-k IDs for each item 0, 1, 2, 3, 4 based on the cosine similarities (pred_sims)? I converted the CoordinateMatrix to a DataFrame, but I am not sure how to access the top-k items for each ID:

columns = ['from', 'to', 'sim']
# Pull the (i, j, similarity) entries out of the CoordinateMatrix
vals = pred_sims.entries.map(lambda e: (e.i, e.j, e.value)).collect()
dfsim = sqlContext.createDataFrame(vals, columns)
dfsim.show()


You can use a window function to order by similarity within each item, then keep the top rows with row_number():

from pyspark.sql.window import Window
import pyspark.sql.functions as func

window = Window.partitionBy(dfsim['from']).orderBy(dfsim['sim'].desc())

dfsim.select('*', func.row_number().over(window).alias('row_number')) \
    .filter(func.col('row_number') <= k) \
    .show()
# k = the desired number of top items per ID

Comments:

If the problem you are actually trying to solve is just picking the top-k items per ID, the first part of the question seems irrelevant; could you ask the question starting from dfsim? Or am I missing something? … The first part only shows how the large sparse matrix (csr_matrix) is generated and converted to an RDD so that the cosine similarity between its rows can be computed. "pred_sims" is the upper triangle of the CoordinateMatrix; I want to get the top-k items for each item based on their similarity.
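One subtlety when ranking: columnSimilarities() returns only the upper triangle (entries with i < j), so each pair must be mirrored before taking the per-item top-k. A stdlib-only sketch of that step on toy (i, j, sim) triples (the values are illustrative):

```python
entries = [(0, 1, 0.9), (0, 2, 0.4), (1, 2, 0.7)]  # toy upper-triangle entries

# Mirror each entry so every item sees all of its neighbours.
sims = {}
for i, j, s in entries:
    sims.setdefault(i, []).append((j, s))
    sims.setdefault(j, []).append((i, s))

# Sort each item's neighbours by similarity, descending, and keep k of them.
k = 1
top_k = {item: [nb for nb, _ in sorted(nbrs, key=lambda t: -t[1])[:k]]
         for item, nbrs in sims.items()}

print(top_k)  # {0: [1], 1: [0], 2: [1]}
```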