Python 3.x TypeError: only size-1 arrays can be converted to Python scalars during SVM training


I am getting a strange error.

My labels look like this:

labels: [1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0
 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]

The embeddings look like this:

data["embeddings"]:
[array([ 0.05140932,  0.05402263, ... ,  0.02575628], dtype=float32), array([ 0.05858443, -0.05192663, ... , 0.01924052, 0.1784615 ,  -0.12531035, -0.04654732], dtype=float32)]

The labels and the embeddings have the same length. The embeddings shown above are only the [0:2] slice of the whole thing.

from sklearn.svm import SVC

recognizer = SVC(C=1.0, kernel="linear", probability=True)
recognizer.fit(data["embeddings"], labels)
recognizer.fit() produces the following error:

TypeError: only size-1 arrays can be converted to Python scalars

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "train_embeddings.py", line 52, in <module>
    recognizer.fit(data["features"],labels)
  File "/usr/local/lib/python3.8/dist-packages/sklearn/svm/_base.py", line 146, in fit
    X, y = check_X_y(X, y, dtype=np.float64,
  File "/usr/local/lib/python3.8/dist-packages/sklearn/utils/validation.py", line 747, in check_X_y
    X = check_array(X, accept_sparse=accept_sparse,
  File "/usr/local/lib/python3.8/dist-packages/sklearn/utils/validation.py", line 531, in check_array
    array = np.asarray(array, order=order, dtype=dtype)
  File "/muho/.local/lib/python3.8/site-packages/numpy/core/_asarray.py", line 83, in asarray
    return array(a, dtype, copy=False, order=order)
ValueError: setting an array element with a sequence.

But that did not help either. I don't understand what is going on here or why anything should be a scalar.

Your variable data["embeddings"] seems to be the problem. Its elements are length-1 arrays, because the values cannot be accessed as a plain list but only as (value, dtype) tuples.
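A minimal sketch of that failure mode (illustrative values only; it assumes each element of data["embeddings"] really is a length-1 object wrapper around the embedding vector, which the question does not show directly):

import numpy as np

# Hypothetical reconstruction: each list entry is a length-1 object array
# that wraps the real embedding vector instead of being the vector itself.
vec0 = np.array([0.05140932, 0.05402263, 0.02575628], dtype=np.float32)
vec1 = np.array([0.05858443, -0.05192663, -0.04654732], dtype=np.float32)

wrapped0 = np.empty(1, dtype=object)
wrapped0[0] = vec0
wrapped1 = np.empty(1, dtype=object)
wrapped1[0] = vec1
embeddings = [wrapped0, wrapped1]

# This is essentially what sklearn's check_X_y does before fitting:
try:
    np.asarray(embeddings, dtype=np.float64)
except (TypeError, ValueError) as exc:
    print(exc)  # "setting an array element with a sequence" / size-1 arrays ...

# Unwrapping each element leaves equal-length float vectors that convert cleanly:
X = np.asarray([x[0] for x in embeddings], dtype=np.float64)
print(X.shape)  # (2, 3)

NumPy tries to call float() on each wrapper, cannot do so because the wrapper holds a whole vector rather than a single number, and chains that TypeError into the ValueError shown in the traceback.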


data['embeddings'] needs to be a NumPy array with a numeric dtype, or something that can be converted into one. Use something like this:
labels = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
data_2 = [x[0] for x in data["embeddings"]]  # take the actual vector out of each length-1 element
recognizer.fit(data_2, labels)
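
As a quick sanity check (a sketch reusing data, labels and recognizer from the question, and assuming the entries of data_2 are equal-length 1-D vectors), you can stack the unwrapped vectors into an explicit 2-D float matrix and inspect its shape and dtype before calling fit():

import numpy as np

data_2 = [x[0] for x in data["embeddings"]]  # as above: a list of equal-length 1-D vectors
X = np.vstack(data_2)                        # stack into one (n_samples, embedding_dim) matrix
print(X.shape, X.dtype)                      # expect (number of samples, embedding dimension)
print(X.shape[0] == len(labels))             # rows and labels must line up

recognizer.fit(X, labels)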