Python TypeError: object instead of float after StandardScaler transform
My code is:
from sklearn.preprocessing import StandardScaler

a, b = train_df1.iloc[:, 1:7].values, train_df1.iloc[:, 0].values
c = test_df1.iloc[:, 0:6].values

std = StandardScaler()
a_t = std.fit_transform(a)
c_t = std.transform(c)
I have two dataframes, train_df1 and test_df1, and I created a, b, and c from them.
The problem is that a and b have dtypes float64 and int64 respectively, but c has dtype object, which is why the following line raises a TypeError.
How can I convert c to a float type so that the rest of the code runs without the TypeError?
The error message, raised by the last line of code, is:
TypeError: float() argument must be a string or a number, not 'method'
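For reference, here is how an object-dtype block is normally converted to float. This is a sketch with a made-up frame, not the questioner's actual data: if every cell is a numeric string, astype(float) or pd.to_numeric works (though it will not fix cells that hold methods, which is what the error message hints at).

```python
import pandas as pd

# Hypothetical frame standing in for test_df1: the numbers were read
# in as strings, so both columns have object dtype.
df = pd.DataFrame({"Age": ["34.5", "47.0"], "Fare": ["7.8292", "7.0"]})
assert df.dtypes.eq(object).all()

# Convert every column to a numeric dtype, then take the ndarray.
c = df.apply(pd.to_numeric).values
print(c.dtype)  # float64
```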
Edit:

train_df1.head(3)
Out[64]:
Survived Pclass Sex Age SibSp Parch Fare Embarked
0 0 3 0 22.0 1 0 7.2500 0
1 1 1 1 38.0 1 0 71.2833 1
2 1 3 1 26.0 0 0 7.9250 0
test_df1.head(3)
Out[65]:
Pclass Sex Age SibSp Parch Fare Embarked
0 3 0 34.5 0 0 7.8292 2
1 3 1 47.0 1 0 7 0
2 2 0 62.0 0 0 9.6875 2
Since you have only shown a small part of your code, I cannot debug it in an IDE. So I took one of the dataframes from your question and scaled that data.

Here is our dataframe:
Survived Pclass Sex Age SibSp Parch Fare Embarked
0 0 3 0 22.0 1 0 7.2500 0
1 1 1 1 38.0 1 0 71.2833 1
2 1 3 1 26.0 0 0 7.9250 0
Below is the code (with comments for your reference); at the end it scales the values.

Note: given how little information and code there is, I have assumed various things, but I hope this helps you solve the problem.
Comments:

Have you tried c = test_df1.iloc[:,0:6].values.astype(np.float64)?
I tried c = float(test_df1.iloc[:,0:6].values), but made no progress. Have you preprocessed the categorical data?
I tried what you suggested, roganjosh, but the same error comes up. What do you mean by preprocessing categorical data? nagasivam
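The astype suggestion in the comments would also fail here: the wording of the error, "not 'method'", means some cell in test_df1 holds a bound method object rather than a number. A common cause is forgetting the parentheses on a call such as .mean() when filling missing values. A minimal sketch with hypothetical data, not the real frame:

```python
import pandas as pd

# Hypothetical column standing in for one of test_df1's columns.
df = pd.DataFrame({"Age": [34.5, None, 62.0]})

# Missing parentheses store the bound method itself instead of the mean,
# and the whole column silently becomes object dtype.
bad = df["Age"].fillna(df["Age"].mean)  # note: .mean, not .mean()
print(bad.dtype)  # object
# float(bad.iloc[1]) now raises the question's TypeError.

# With the call actually made, the column stays numeric.
good = df["Age"].fillna(df["Age"].mean())
print(good.dtype)  # float64
```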
# SWAMI KARUPPASWAMI THUNNAI
import pandas
from sklearn.preprocessing import StandardScaler
# sklearn.cross_validation has been removed; train_test_split now lives in model_selection
from sklearn.model_selection import train_test_split

if __name__ == "__main__":
    data_set = pandas.read_csv("data.csv")
    a = data_set.iloc[:, 1:7].values  # a gets the values of the first six feature columns
    b = data_set.iloc[:, 7].values    # b gets the values of column 7
    # since the data set seems to be preprocessed (considering the small amount of data)
    # we will create a training set and a testing set
    a_train, a_test, b_train, b_test = train_test_split(a, b, test_size=0.2, random_state=0)
    # test data size = 20 % and the pseudo-random generator seed is set to 0
    scaler = StandardScaler()
    # now we are about to scale the data
    a_train = scaler.fit_transform(a_train)  # fit on and scale the training set
    # use the mean and standard deviation of the training set to scale the testing set
    a_test = scaler.transform(a_test)
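The central idea in the code above, restated as a tiny self-contained sketch with toy numbers (not the Titanic data): fit_transform learns the mean and standard deviation from the training data, and transform then applies those same statistics to new data.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

train = np.array([[1.0], [2.0], [3.0]])
test = np.array([[2.0], [4.0]])

scaler = StandardScaler()
train_scaled = scaler.fit_transform(train)  # learns mean and std from train only
test_scaled = scaler.transform(test)        # reuses those training statistics

print(scaler.mean_)    # [2.]
print(test_scaled[0])  # [0.] -- 2.0 equals the training mean
```

Calling fit_transform (or fit) again on the test set would recompute the statistics and leak test-set information into the scaling, which is why the answer only calls transform there.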