I'm working with a DTM and I want to do k-means, hierarchical, and k-medoids clustering. Should I normalize the DTM first?


The data show that AllBooks has 590 observations on 8266 variables. Here is my code:

library(readr)  # for read_csv

AllBooks = read_csv("AllBooks_baseline_DTM_Unlabelled.csv")
dtms = as.matrix(AllBooks)
# per-document term frequency: row sums divided by the number of terms
dtms_freq = as.matrix(rowSums(dtms) / 8266)
dtms_freq1 = dtms_freq[order(dtms_freq), ]
# renamed so they don't mask base R's sd() and mean()
freq_sd = sd(dtms_freq)
freq_mean = mean(dtms_freq)
This tells me that my mean is 0.01242767 and my standard deviation is 0.01305608.

Since my standard deviation is low, the data have little variability in document size. Does that mean I don't need to normalize the DTM? By normalization I mean R's scale() function, which subtracts the mean of the data and then divides by the standard deviation.
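For reference, R's scale() performs column-wise z-score standardization: each column has its mean subtracted and is then divided by its sample standard deviation (the n - 1 version, matching R's sd()). A minimal sketch of that operation, written in Python purely for illustration:

```python
def zscore_columns(matrix):
    """Column-wise z-score standardization, like R's scale(x)."""
    n_rows = len(matrix)
    n_cols = len(matrix[0])
    out = [row[:] for row in matrix]
    for j in range(n_cols):
        col = [matrix[i][j] for i in range(n_rows)]
        mean = sum(col) / n_rows
        # sample variance (divide by n - 1), matching R's sd()
        var = sum((v - mean) ** 2 for v in col) / (n_rows - 1)
        sd = var ** 0.5
        for i in range(n_rows):
            out[i][j] = (matrix[i][j] - mean) / sd if sd > 0 else 0.0
    return out

m = [[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]]
print(zscore_columns(m))  # each column now has mean 0 and sd 1
```

After standardization every column contributes on the same footing to a Euclidean distance, which is exactly why it can be harmful on a DTM: rare terms get inflated to the same weight as common ones.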

In other words, my big question is: when should I normalize data (specifically a document-term matrix) for clustering purposes?

Here is some of the data output:

dput(head(AllBooks,10))
budding = c(0, 
    0, 0, 0, 0, 0, 0, 0, 0, 0), enjoyer = c(0, 0, 0, 0, 0, 0, 
    0, 0, 0, 0), needs = c(0, 0, 0, 0, 0, 0, 0, 0, 0, 0), sittest = c(0, 
    0, 0, 0, 0, 0, 0, 0, 0, 0), eclipsed = c(0, 0, 0, 0, 0, 0, 
    0, 0, 0, 0), engagement = c(0, 0, 0, 0, 0, 0, 0, 0, 0, 0), 
    exuberant = c(0, 0, 0, 0, 0, 0, 0, 0, 0, 0), abandons = c(0, 
    0, 0, 0, 0, 0, 0, 0, 0, 0), well = c(0, 0, 0, 0, 0, 0, 0, 
    0, 0, 0), cheerfulness = c(0, 0, 0, 0, 0, 0, 0, 0, 0, 0), 
    hatest = c(0, 0, 0, 0, 0, 0, 0, 0, 0, 0), state = c(0, 0, 
    0, 0, 0, 0, 0, 0, 0, 0), stained = c(0, 0, 0, 0, 0, 0, 0, 
    0, 0, 0), production = c(0, 0, 0, 0, 0, 0, 0, 0, 0, 0), whitened = c(0, 
    0, 0, 0, 0, 0, 0, 0, 0, 0), revered = c(0, 0, 0, 0, 0, 0, 
    0, 0, 0, 0), developed = c(0, 0, 0, 2, 0, 0, 0, 0, 0, 0), 
    regarded = c(0, 0, 0, 0, 0, 0, 0, 0, 0, 0), enactments = c(0, 
    0, 0, 0, 0, 0, 0, 0, 0, 0), aromatical = c(0, 0, 0, 0, 0, 
    0, 0, 0, 0, 0), admireth = c(0, 0, 0, 0, 0, 0, 0, 0, 0, 0
    ), foothold = c(0, 0, 0, 0, 0, 0, 0, 0, 0, 0), shots = c(0, 
    0, 0, 0, 0, 0, 0, 0, 0, 0), turner = c(0, 0, 0, 0, 0, 0, 
    0, 0, 0, 0), inversion = c(0, 0, 0, 0, 0, 0, 0, 0, 0, 0), 
    lifeless = c(0, 0, 0, 0, 0, 0, 0, 0, 0, 0), postponement = c(0, 
    0, 0, 0, 0, 0, 0, 0, 0, 0), stout = c(0, 0, 0, 0, 0, 0, 0, 
    0, 0, 0), taketh = c(0, 0, 0, 0, 0, 0, 0, 0, 0, 0), kettle = c(0, 
    0, 0, 0, 0, 0, 0, 0, 0, 0), erred = c(0, 0, 0, 0, 0, 0, 0, 
    0, 0, 0), thinkest = c(0, 0, 0, 0, 0, 0, 0, 0, 0, 0), modern = c(0, 
    0, 0, 0, 0, 0, 0, 0, 0, 0), reigned = c(0, 0, 0, 0, 0, 0, 
    0, 0, 0, 0), sparingly = c(0, 0, 0, 0, 0, 0, 0, 0, 0, 0), 
    visual = c(0, 0, 0, 0, 0, 0, 0, 0, 0, 0), thoughts = c(0, 
    0, 0, 0, 0, 0, 0, 0, 0, 0), illumines = c(0, 0, 0, 0, 0, 
    0, 0, 0, 0, 0), attire = c(0, 0, 0, 0, 0, 0, 0, 0, 0, 0), 
    explains = c(0, 0, 0, 0, 0, 0, 0, 0, 0, 0)), class = c("tbl_df", 
"tbl", "data.frame"), row.names = c(NA, -10L))

You can view the full data at the following link:

You have a sparse dataset dominated by zeros, which is why the standard deviation is so low. Scaling would matter if some of your non-zero counts were very large, e.g. some in the hundreds while others are 1s and 2s.

Using k-means on sparse data is probably not a good idea, since you are unlikely to find meaningful centers. Some options may be available for this; there are also graph-based methods.
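To see why k-means centers are rarely meaningful on sparse term counts, consider the element-wise mean of a few sparse rows that share no terms: the centroid is a dense vector of small fractions that resembles none of the documents. A toy illustration (not tied to this dataset):

```python
# Three sparse "documents" over a 6-term vocabulary; no two share a term.
docs = [
    [1, 0, 0, 1, 0, 0],
    [0, 1, 0, 0, 1, 0],
    [0, 0, 1, 0, 0, 1],
]

# The k-means centroid is the element-wise mean of the cluster members.
centroid = [sum(col) / len(docs) for col in zip(*docs)]
print(centroid)  # every entry is 1/3: dense, and unlike any document
```

Each document is non-zero in 2 of 6 positions, but the centroid is non-zero everywhere, so Euclidean distances to it carry little information about which terms a document actually uses.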

Here is a simple way to cluster and visualize. First read the data and drop empty rows and singleton columns:

x = read.csv("AllBooks_baseline_DTM_Unlabelled.csv")
# drop all-zero rows, and columns whose term appears in only one document
x = x[rowMeans(x) > 0, colSums(x > 0) > 1]
Treat the matrix as binary and cluster hierarchically with a binary (Jaccard) distance:

hc = hclust(dist(x, method = "binary"), method = "ward.D")
clus = cutree(hc, 5)
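dist(x, method = "binary") treats any non-zero entry as presence and computes, for each pair of rows, the fraction of terms present in at least one of the two documents that are not shared by both (the Jaccard distance). A sketch of that distance for two count vectors, in Python for illustration:

```python
def binary_distance(a, b):
    """R's dist(method = "binary"): among positions where at least one
    vector is non-zero, the fraction where exactly one is non-zero."""
    present = [(x != 0, y != 0) for x, y in zip(a, b)]
    union = sum(1 for p, q in present if p or q)
    shared = sum(1 for p, q in present if p and q)
    return (union - shared) / union if union else 0.0

# 3 positions have a term in at least one vector, 1 is shared -> 2/3
print(binary_distance([0, 2, 1, 0], [0, 3, 0, 1]))
```

Because only presence/absence matters, this distance ignores the raw count magnitudes entirely, which sidesteps the normalization question for the hierarchical clustering step.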
Then compute a PCA and visualize with t-SNE:

library(Rtsne)
library(ggplot2)

pca = prcomp(x, scale = TRUE, center = TRUE)
TS = Rtsne(pca$x[, 1:30])
ggplot(data.frame(Dim1 = TS$Y[, 1], Dim2 = TS$Y[, 2], C = factor(clus)),
       aes(x = Dim1, y = Dim2, col = C)) + geom_point()

Cluster 5 appears quite distinct; the terms that set it apart are:

names(tail(sort(colMeans(x[clus==5,]) - colMeans(x[clus!=5,])),10))
 [1] "wisdom" "thee"   "lord"   "things" "god"    "hath"   "thou"   "man"   
 [9] "thy"    "shall" 


I added the full data link. But are you saying that because my matrix is very sparse, I should scale it? — I'm saying you can scale it; that's not the problem. The problem is the number of zeros.