Neural network -> ValueError: Failed to convert a NumPy array to a Tensor (Unsupported object type float)


I am trying to build a neural network model for a regression task. My guess is that the error comes from the feature-scaling step, after converting the categorical data to numbers (OneHotEncoding the "Male/Female" gender column).
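For context, here is a minimal sketch (with made-up rows, not my actual CSV) of what I suspect happens to the array's dtype after that encoding step:

import numpy as np
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder

# toy rows shaped like the columns I keep from Mall_Customers.csv: Genre, Age, Annual Income
toy = np.array([['Male', 19, 15], ['Female', 21, 15]], dtype=object)

ct = ColumnTransformer(transformers=[('encoder', OneHotEncoder(), [0])], remainder='passthrough')
encoded = np.array(ct.fit_transform(toy))
print(encoded.dtype)  # I expect this prints 'object' -- the dtype TensorFlow seems to reject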

Here is my code:

import numpy as np 
import pandas as pd
import tensorflow as tf
import os
data1 = pd.read_csv(r"C:\Users\Cucu\Desktop\Mall_Customers.csv") 
x = data1.iloc [: , 1:-1].values #SEPARATE INDEPENDENT VARIABLES
print(x)
y = data1.iloc [: , -1].values #SEPARATE DEPENDENT VARIABLES
print(y)
#AFTER PRINTING, WE WILL HAVE A SIMPLER SHEET


#ENCODING CATEGORICAL DATA
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder
ct = ColumnTransformer( transformers = [('encoder' , OneHotEncoder() , [0] )], remainder = 'passthrough')
x = np.array(ct.fit_transform(x))
print (x)


#LET'S SPLIT INTO TRAIN SET & TEST SET
from sklearn.model_selection import train_test_split
x_train , x_test , y_train , y_test = train_test_split(x , y , test_size = 0.2 , random_state = 1 )
print(x_train)
print("------")
print(x_test)
print("------")
print(y_train)
print("------")
print(y_test)


#APPLYING FEATURE SCALING
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
x_train[: , 2:] = sc.fit_transform(x_train[: , 2:])
x_test[: , 2:] = sc.transform(x_test[: , 2:])
print(x_train)
#must be updated-------------------------


#INITIALIZING THE ANN
ann = tf.keras.models.Sequential()

#BUILDING THE ANN
ann.add(tf.keras.layers.Dense(units = 6 , activation = 'relu'))
#--units (the number of neurons) is a hyper-parameter and should be tuned



#ADDING THE INPUT LAYER & THE FIRST HIDDEN LAYER
ann.add(tf.keras.layers.Dense(units = 6 , activation = 'relu'))

#ADDING THE OUTPUT LAYER
ann.add(tf.keras.layers.Dense(units = 1 , activation = 'relu'))  #ReLU used in output for regression task



#TRAINING THE ANN
#1)compiling the ann
ann.compile(optimizer = 'adam' , loss = 'mean_squared_error' )
#mean_squared_error is the standard loss function for regression


#2)training the ann on the training set
ann.fit(x_train , y_train , batch_size = 32 , epochs = 100)
The error is:

File "D:\.spyder-py3\ANN.py", line 73, in <module>
    ann.fit(x_train , y_train , batch_size = 32 , epochs = 100)

  File "C:\Users\Cucu\anaconda3\envs\lucky3\lib\site-packages\tensorflow_core\python\keras\engine\training.py", line 819, in fit
    use_multiprocessing=use_multiprocessing)

  File "C:\Users\Cucu\anaconda3\envs\lucky3\lib\site-packages\tensorflow_core\python\keras\engine\training_v2.py", line 235, in fit
    use_multiprocessing=use_multiprocessing)

  File "C:\Users\Cucu\anaconda3\envs\lucky3\lib\site-packages\tensorflow_core\python\keras\engine\training_v2.py", line 593, in _process_training_inputs
    use_multiprocessing=use_multiprocessing)

  File "C:\Users\Cucu\anaconda3\envs\lucky3\lib\site-packages\tensorflow_core\python\keras\engine\training_v2.py", line 706, in _process_inputs
    use_multiprocessing=use_multiprocessing)

  File "C:\Users\Cucu\anaconda3\envs\lucky3\lib\site-packages\tensorflow_core\python\keras\engine\data_adapter.py", line 357, in __init__
    dataset = self.slice_inputs(indices_dataset, inputs)

  File "C:\Users\Cucu\anaconda3\envs\lucky3\lib\site-packages\tensorflow_core\python\keras\engine\data_adapter.py", line 383, in slice_inputs
    dataset_ops.DatasetV2.from_tensors(inputs).repeat()

  File "C:\Users\Cucu\anaconda3\envs\lucky3\lib\site-packages\tensorflow_core\python\data\ops\dataset_ops.py", line 566, in from_tensors
    return TensorDataset(tensors)

  File "C:\Users\Cucu\anaconda3\envs\lucky3\lib\site-packages\tensorflow_core\python\data\ops\dataset_ops.py", line 2765, in __init__
    element = structure.normalize_element(element)

  File "C:\Users\Cucu\anaconda3\envs\lucky3\lib\site-packages\tensorflow_core\python\data\util\structure.py", line 113, in normalize_element
    ops.convert_to_tensor(t, name="component_%d" % i))

  File "C:\Users\Cucu\anaconda3\envs\lucky3\lib\site-packages\tensorflow_core\python\framework\ops.py", line 1314, in convert_to_tensor
    ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)

  File "C:\Users\Cucu\anaconda3\envs\lucky3\lib\site-packages\tensorflow_core\python\framework\tensor_conversion_registry.py", line 52, in _default_conversion_function
    return constant_op.constant(value, dtype, name=name)

  File "C:\Users\Cucu\anaconda3\envs\lucky3\lib\site-packages\tensorflow_core\python\framework\constant_op.py", line 258, in constant
    allow_broadcast=True)

  File "C:\Users\Cucu\anaconda3\envs\lucky3\lib\site-packages\tensorflow_core\python\framework\constant_op.py", line 266, in _constant_impl
    t = convert_to_eager_tensor(value, ctx, dtype)

  File "C:\Users\Cucu\anaconda3\envs\lucky3\lib\site-packages\tensorflow_core\python\framework\constant_op.py", line 96, in convert_to_eager_tensor
    return ops.EagerTensor(value, ctx.device_name, dtype)

ValueError: Failed to convert a NumPy array to a Tensor (Unsupported object type float).
I tried googling this myself but could not find a solution. Any suggestions and help would be greatly appreciated.
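Is the problem simply that x_train still has dtype object after the ColumnTransformer step? If so, would an explicit cast along these lines (an untested sketch on my side) be the right direction?

# cast the arrays from object dtype to a numeric dtype before handing them to Keras
x_train = np.asarray(x_train).astype(np.float32)
y_train = np.asarray(y_train).astype(np.float32)

ann.fit(x_train , y_train , batch_size = 32 , epochs = 100)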