Sklearn models for animal classification in Python

Edit: So I managed to fix the earlier error with all of the suggestions. But now the model.predict part gives me this problem:

Expected 2D array, got 1D array instead:
array=[   12 15432    40    20    33 40000 12800    20 19841     0     0].
Reshape your data either using array.reshape(-1, 1) if your data has a 
single feature or array.reshape(1, -1) if it contains a single sample.
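Following that hint, the single test sample has to be passed to predict as a 2D array of shape (1, n_features) rather than a flat list. Below is a minimal sketch of that reshape, assuming the 11 numeric feature values from the error message above; the fitted model is only a placeholder here:

import numpy as np

# The 11 numeric features of the single test sample (no Name, no Class column).
sample = np.array([12, 15432, 40, 20, 33, 40000, 12800, 20, 19841, 0, 0])

# predict() expects shape (n_samples, n_features), so reshape into one row:
sample_2d = sample.reshape(1, -1)        # shape (1, 11)
# prediction = model.predict(sample_2d)  # 'model' is assumed to be already fitted

This only addresses the shape complaint; whether a single hand-typed row makes a meaningful test set is a separate question.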
Here is the new code I am using:

'''
This method handles the training and testing of the models
'''
def testTrainModel(model, xTrain, yTrain, xTest, yTest):
    print("Start Method")
    print("Traing Model")
    model.fit(xTrain, yTrain)
    print("Model Trained")
    print("testing models")
    results = model.predict(xTest)

    print(model.__class__," Prediction Report")
    print(classification_report(results,yTest))
    print("Confusion Matrix")
    print(confusion_matrix(results,yTest))
    print("Accuracy is ", accuracy_score(results, yTest)*100)
    lables =["Hunter", "Scavenger"]
    plotConfusionMatrix(confusion_matrix(results,yTest),
                    lables,
                     title='Confusion matrix')

#Data set Preprocess data
dataframe = pd.read_csv("animalData.csv", dtype = 'category')
print(dataframe.head())
dataframe = dataframe.drop(["Name"], axis = 1)
cleanup = {"Class": {"Primary Hunter" : 0, "Primary Scavenger": 1 }}
dataframe.replace(cleanup, inplace = True)
print(dataframe.head())

#array = dataframe.values
#Data split
# Separating the data into dependent and independent variables
X = dataframe.iloc[:, :-1].values
y = dataframe.iloc[:,-1].values
#Get training and testing data

#Set up the models: put model nickname and model
models = []
models.append(('LogReg', LogisticRegression()))
models.append(('LDA', LinearDiscriminantAnalysis()))
models.append(('KNN', KNeighborsClassifier()))
models.append(('DecTree', DecisionTreeClassifier()))
models.append(('NB', GaussianNB()))
models.append(('SVM', SVC()))

#Create all the models
logReg = LogisticRegression()
lda = LinearDiscriminantAnalysis()
knn = KNeighborsClassifier()
decsTree = DecisionTreeClassifier()
nb = GaussianNB()
svm = SVC()

#Test value
trex = [12,15432,40,20,33,40000,12800,20,19841,0,0,0]
testTrainModel(logReg,X, y, trex[:-1], trex[-1:])
testTrainModel(lda,X, y, trex[:-1], trex[-1:])
testTrainModel(knn,X, y, trex[:-1], trex[-1:])
testTrainModel(decsTree,X, y, trex[:-1], trex[-1:])
testTrainModel(nb,X, y, trex[:-1], trex[-1:])
testTrainModel(svm,X, y, trex[:-1], trex[-1:])
Old version: What I am trying to do here is take a list of an animal's features, such as teeth and size, and run them through some of the built-in models like SVM, KNN, etc., together with the CSV dataset I made. But it keeps saying it cannot convert a string to a float, and when I take all of the strings out of the CSV it does work, but I don't know whether that is what I want, because I want to classify each animal as a hunter or a scavenger. I really don't know what I am doing wrong here, since I am new to Python. Maybe someone can help by looking at my code and telling me what I did wrong. Also, any suggestions for improving this will be gladly accepted.

So my code looks like this:

import pandas as pd
import numpy as np
import itertools
import matplotlib.pyplot as plt
from sklearn import model_selection
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.metrics import confusion_matrix
from sklearn.metrics import classification_report


def plotConfusionMatrix(cm, classes,
                        normalize=False,
                        title='Confusion matrix',
                        cmap=plt.cm.Blues):

    plt.imshow(cm, interpolation='nearest', cmap=cmap)
    plt.title(title)
    plt.colorbar()
    tick_marks = np.arange(len(classes))
    plt.xticks(tick_marks, classes, rotation=45)
    plt.yticks(tick_marks, classes)

    fmt = '.2f' if normalize else 'd'
    thresh = cm.max() / 2.
    for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
        plt.text(j, i, format(cm[i, j], fmt),
                 horizontalalignment="center",
                 color="white" if cm[i, j] > thresh else "black")

    plt.tight_layout()
    plt.ylabel('True label')
    plt.xlabel('Predicted label')
    plt.show()

'''
This method handles the training and testing of the models
'''
def testTrainModel(model, xTrain, yTrain, xTest, yTest):
    print("Start Method")
    print("Traing Model")
    model.fit(xTrain, yTrain)
    print("Model Trained")
    print("testing models")
    results = model.predict(xTest)

    print(model.__class__," Prediction Report")
    print(classification_report(results,yTest))
    print("Confusion Matrix")
    print(confusion_matrix(results,yTest))
    print("Accuracy is ", accuracy_score(results, yTest)*100)
    lables =["Hunter", "Scavenger"]
    plotConfusionMatrix(confusion_matrix(results,yTest),
                        lables,
                         title='Confusion matrix')




#T-Rex, 12, 15432,  40, 20, 33, 40000,  12800,  20, 19841,  0,  0,


#Data set
dataframe = pd.read_csv("animalData.csv")
print(dataframe.head())
#array = dataframe.values
#Data split
# Separating the data into dependent and independent variables
X = dataframe.iloc[:, :-1].values
y = dataframe.iloc[:,-1].values
#Get training and testing data

seed = 7 #prepare configuration for cross validation test harness

#Set up the models: put model nickname and model
models = []
models.append(('LogReg', LogisticRegression()))
models.append(('LDA', LinearDiscriminantAnalysis()))
models.append(('KNN', KNeighborsClassifier()))
models.append(('DecTree', DecisionTreeClassifier()))
models.append(('NB', GaussianNB()))
models.append(('SVM', SVC()))


#store the results
results = []
names =[]
scoring = 'accuracy'
#print the results

for name, model in models:
    kfold = model_selection.KFold(n_splits=9, random_state=seed)
    cv_results = model_selection.cross_val_score(model, X, y, cv=kfold, scoring=scoring)
    results.append(cv_results)
    names.append(name)
    msg = "Model:%s:\n Cross Validation Score Mean:%f - StdDiv:(%f)" % (name, cv_results.mean(), cv_results.std())
    print(msg)

#plot the data
figure1 = plt.figure()
figure1.suptitle("Algorithm Comparison")
ax = figure1.add_subplot(111)
plt.boxplot(results)
ax.set_xticklabels(names)
plt.show()

#Create all the models
logReg = LogisticRegression()
lda = LinearDiscriminantAnalysis()
knn = KNeighborsClassifier()
decsTree = DecisionTreeClassifier()
nb = GaussianNB()
svm = SVC()

#Test value
trex = ["T-Rex",12,15432,40,20,33,40000,12800,20,19841,0,0,"Primary Hunter"]
testTrainModel(logReg,X, y, trex[:-1], trex[-1:])
testTrainModel(lda,X, y, trex[:-1], trex[-1:])
testTrainModel(knn,X, y, trex[:-1], trex[-1:])
testTrainModel(decsTree,X, y, trex[:-1], trex[-1:])
testTrainModel(nb,X, y, trex[:-1], trex[-1:])
testTrainModel(svm,X, y, trex[:-1], trex[-1:])
Now this does quite a lot, and I think I got it all right, but maybe my data is wrong.

Here is the test CSV file:

Name,TeethLength,Weight,Length,Height,Speed,CalorieIntake,BiteForce,PreySpeed,PreySize,EyeSight,Smell,Class
Crocodile,4,2400,23,1.6,8,2500,3700,30,881,0,0,Primary Hunter
Lion,2.7,416,9.8,3.9,50,7236,650,35,1300,0,0,Primary Hunter
Bear,3.6,600,7,3.35,40,20000,975,0,0,0,0,Primary Scavenger
Tiger,3,260,12,3,40,7236,1050,37,160,0,0,Primary Hunter
Hyena,0.27,160,5,2,37,5000,1100,20,40,0,0,Primary Scavenger
Jaguar,2,220,5.5,2.5,40,5000,1350,15,300,0,0,Primary Hunter
Cheetah,1.5,154,4.9,2.9,70,2200,475,56,185,0,0,Primary Hunter
KomodoDragon,0.4,150,8.5,1,13,1994,240,24,110,0,0,Primary Scavenger

Any help with this would be greatly appreciated.

Stack trace:

  File "<ipython-input-10-691557e6b9ae>", line 1, in <module>
runfile('E:/TestPythonCode/Classifier.py', wdir='E:/TestPythonCode')

  File "C:\Users\matth\Anaconda3\envs\TensorfGPU2\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 678, in runfile
execfile(filename, namespace)

  File "C:\Users\matth\Anaconda3\envs\TensorfGPU2\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 106, in execfile
exec(compile(f.read(), filename, 'exec'), namespace)

  File "E:/TestPythonCode/Classifier.py", line 110, in <module>
cv_results = model_selection.cross_val_score(model, X, y, cv=kfold, scoring=scoring)

  File "C:\Users\matth\Anaconda3\envs\TensorfGPU2\lib\site-packages\sklearn\model_selection\_validation.py", line 342, in cross_val_score
pre_dispatch=pre_dispatch)

  File "C:\Users\matth\Anaconda3\envs\TensorfGPU2\lib\site-packages\sklearn\model_selection\_validation.py", line 206, in cross_validate
for train, test in cv.split(X, y, groups))

  File "C:\Users\matth\Anaconda3\envs\TensorfGPU2\lib\site-packages\sklearn\externals\joblib\parallel.py", line 779, in __call__
while self.dispatch_one_batch(iterator):

  File "C:\Users\matth\Anaconda3\envs\TensorfGPU2\lib\site-packages\sklearn\externals\joblib\parallel.py", line 625, in dispatch_one_batch
self._dispatch(tasks)

  File "C:\Users\matth\Anaconda3\envs\TensorfGPU2\lib\site-packages\sklearn\externals\joblib\parallel.py", line 588, in _dispatch
job = self._backend.apply_async(batch, callback=cb)

  File "C:\Users\matth\Anaconda3\envs\TensorfGPU2\lib\site-packages\sklearn\externals\joblib\_parallel_backends.py", line 111, in apply_async
result = ImmediateResult(func)

  File "C:\Users\matth\Anaconda3\envs\TensorfGPU2\lib\site-packages\sklearn\externals\joblib\_parallel_backends.py", line 332, in __init__
self.results = batch()

  File "C:\Users\matth\Anaconda3\envs\TensorfGPU2\lib\site-packages\sklearn\externals\joblib\parallel.py", line 131, in __call__
return [func(*args, **kwargs) for func, args, kwargs in self.items]

  File "C:\Users\matth\Anaconda3\envs\TensorfGPU2\lib\site-packages\sklearn\externals\joblib\parallel.py", line 131, in <listcomp>
return [func(*args, **kwargs) for func, args, kwargs in self.items]

  File "C:\Users\matth\Anaconda3\envs\TensorfGPU2\lib\site-packages\sklearn\model_selection\_validation.py", line 458, in _fit_and_score
estimator.fit(X_train, y_train, **fit_params)

  File "C:\Users\matth\Anaconda3\envs\TensorfGPU2\lib\site-packages\sklearn\linear_model\logistic.py", line 1216, in fit
    order="C")

  File "C:\Users\matth\Anaconda3\envs\TensorfGPU2\lib\site-packages\sklearn\utils\validation.py", line 573, in check_X_y
    ensure_min_features, warn_on_dtype, estimator)

  File "C:\Users\matth\Anaconda3\envs\TensorfGPU2\lib\site-    packages\sklearn\utils\validation.py", line 433, in check_array
    array = np.array(array, dtype=dtype, order=order, copy=copy)

ValueError: could not convert string to float: 'KomodoDragon'

If you are using a numpy.ndarray, it is not valid to mix string elements and float elements. For example, a native Python list:

mylist = [1, 3, 'KomodoDragon']

is fine, but when you try to convert the list mylist into an ndarray object, like:

mylist = np.array(mylist, dtype=float)
you will get the error

could not convert string to float: 'KomodoDragon'


You can handle this with one-hot encoding.
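For instance, with pandas this could look like the sketch below; the file name and the Name/Class columns are taken from the question, everything else is an assumption:

import pandas as pd

df = pd.read_csv("animalData.csv")

# One-hot encode the string-valued Name column into 0/1 indicator columns.
df = pd.get_dummies(df, columns=["Name"])

# Map the two class labels (spelling taken from the question) to integers.
df["Class"] = df["Class"].map({"Primary Hunter": 0, "Primary Scavenger": 1})

X = df.drop(columns=["Class"]).values
y = df["Class"].values

For this particular dataset it is probably simpler to drop Name entirely, as the updated code at the top already does, since a one-hot-encoded name is unique per row and carries no generalizable signal.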

Could you also paste the stack trace, i.e. the line on which it happens? Please do one-hot encoding for the columns that contain words and strings. How do I do that? @MNM
print(data.info()) will print the columns that are of type string or object. Convert them using label encoding or one-hot encoding.
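As a rough sketch of that suggestion (again assuming the animalData.csv layout from the question), the object-typed columns can be listed and label-encoded like this:

import pandas as pd
from sklearn.preprocessing import LabelEncoder

df = pd.read_csv("animalData.csv")
print(df.info())  # columns with dtype 'object' hold strings

# Label-encode every object-typed column (here presumably Name and Class).
for col in df.select_dtypes(include="object").columns:
    df[col] = LabelEncoder().fit_transform(df[col])

Label encoding imposes an arbitrary numeric order on the categories, which is why one-hot encoding is usually preferred for purely nominal columns.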
Scikit-learn classifiers only work with numeric data, so if a column contains string data you need to transform it with some preprocessing. The preprocessing depends on the kind of data you have. For example, on text data you can do tokenization followed by tf-idf or frequency counts. For ordinal data (words that define some order: high, medium, low), encode them as numbers that preserve that order (3 = high, 2 = medium, 1 = low). For data that has no order and only represents categories (as in your case), you should use one-hot encoding.
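To make the ordinal case concrete, here is a tiny sketch with a hypothetical activity column (not part of the question's dataset):

import pandas as pd

# Hypothetical ordinal column: the values carry an order, so map them to numbers.
df = pd.DataFrame({"activity": ["high", "low", "medium", "high"]})
df["activity"] = df["activity"].map({"high": 3, "medium": 2, "low": 1})
print(df["activity"].tolist())  # [3, 1, 2, 3]

Unordered categories such as the animal names, by contrast, get the one-hot treatment shown further up.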