
Python: correcting the Conv1D layer shape for 2D classification data


My dataset (a network traffic dataset for binary classification):

The shapes of X_train and y_train are (45447, 25) and (45447, 25).
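For context, Keras's Conv1D expects 3D input of shape (samples, timesteps, features), which is why a 2D array has to gain a trailing axis. A minimal sketch of that reshape, using a small synthetic array in place of the real dataset:

```python
import numpy as np

# Synthetic stand-in for the real data: 4 samples, 25 features each
X = np.zeros((4, 25))
y = np.array([0, 1, 1, 0])  # integer class labels

# Conv1D expects (samples, timesteps, features), so treat each of the
# 25 columns as one timestep carrying a single feature:
X3d = X.reshape(4, 25, 1)

print(X3d.shape)  # (4, 25, 1)
print(y.shape)    # (4,)
```

Note that for binary classification with integer labels, y is typically left as shape (samples,) or (samples, 1); it does not need a third axis.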

The model I am building:

# fit and evaluate a model
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv1D, Dropout, MaxPooling1D, Flatten, Dense

def evaluate_model(X_train, y_train, X_test, y_test):

  X_train = X_train.reshape(45447,25,1)
  y_train=y_train.reshape(45447,1)
  verbose=0
  epochs=10
  batch_size = 32
  n_timesteps = X_train.shape[0]
  n_features= X_train.shape[1]
  print(n_timesteps,n_features)
  n_outputs = 1
  model = Sequential()
  model.add(Conv1D(filters=64, kernel_size=3, activation='relu', input_shape=(n_timesteps,n_features)))
  model.add(Conv1D(filters=64, kernel_size=3, activation='relu'))
  model.add(Dropout(0.5))
  model.add(MaxPooling1D(pool_size=2))
  model.add(Flatten())
  model.add(Dense(100, activation='relu'))
  model.add(Dense(n_outputs, activation='softmax'))
  model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
  # fit network
  #train_data = tf.data.Dataset.from_tensor_slices((X_train, y_train))
  #valid_data = tf.data.Dataset.from_tensor_slices((X_test, y_test))

  model.fit(X_train, y_train,epochs=10, batch_size=32, verbose=0)
      # evaluate model
  #_, accuracy = model.evaluate(X_test, y_test, batch_size=batch_size, verbose=0)
  

# summarize scores
from numpy import mean, std

def summarize_results(scores):
    print(scores)
    m, s = mean(scores), std(scores)
    print('Accuracy: %.3f%% (+/-%.3f)' % (m, s))

# run an experiment
def run_experiment(repeats=10):
    # load data
    
    # repeat experiment
    scores = list()
    for r in range(repeats):
        score = evaluate_model(X_train, y_train,X_test,y_test)
        score = score * 100.0
        print('>#%d: %.3f' % (r+1, score))
        scores.append(score)
    # summarize results
    summarize_results(scores)

# run the experiment
run_experiment()
What I have tried:

1) Converting the pandas DataFrame to a NumPy array

2) Reshaping the 2D arrays into 3D arrays:

X_train = X_train.reshape(45447,25,1)
y_train=y_train.reshape(45447,1,1)
  • Converting the data into tf.data objects:

    train_data = tf.data.Dataset.from_tensor_slices((X_train, y_train))
    valid_data = tf.data.Dataset.from_tensor_slices((X_test, y_test))


  • I still cannot run my model; it keeps raising shape errors. Please help me understand what shape to feed the model.

    I think you have to remove the batch dimension from the input shape and add 1 as the length or feature dimension:

    model.add(Conv1D(filters=64, kernel_size=3, activation='relu', input_shape=(n_features , 1)))
    
    For the output layer, a single-unit Dense layer with softmax activation makes no sense; change the number of output units to:

     n_outputs = your_number_of_categories
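To see why a single softmax unit cannot work: softmax normalizes over the output units, so with one unit the prediction is always 1.0 regardless of the logit. A quick NumPy check (hand-rolled softmax, not the Keras implementation):

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over a 1D array of logits
    e = np.exp(z - z.max())
    return e / e.sum()

# A one-unit softmax output is constant: the single probability
# always normalizes to 1, so the layer cannot discriminate classes.
for logit in (-5.0, 0.0, 7.3):
    print(softmax(np.array([logit])))  # [1.]
```

This is why the fix is either Dense(1) with sigmoid, or Dense(n_classes) with softmax.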
    

    input_shape should be X_train.shape[1:]. The loss function should be sparse_categorical_crossentropy, since the labels are supplied as integers:
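The sparse and non-sparse losses compute the same quantity; they differ only in the label format they accept. A NumPy sketch (hand-rolled for illustration, not the Keras implementation) showing that the sparse loss on an integer label equals the categorical loss on its one-hot encoding:

```python
import numpy as np

probs = np.array([0.7, 0.3])     # model output after softmax, 2 classes
label = 1                        # integer label -> sparse_categorical_crossentropy
one_hot = np.array([0.0, 1.0])   # same label one-hot -> categorical_crossentropy

# Sparse form indexes the predicted probability of the true class directly;
# the categorical form sums over the one-hot vector. Both give -log p(true).
sparse_loss = -np.log(probs[label])
categorical_loss = -np.sum(one_hot * np.log(probs))

print(sparse_loss, categorical_loss)  # identical values
```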

    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Conv1D, Dropout, MaxPooling1D, Flatten, Dense

    def evaluate_model(X_train, y_train, X_test, y_test):
    
      X_train = X_train.reshape(45447,25,1)
      y_train=y_train.reshape(45447,1)
      verbose=0
      epochs=10
      batch_size = 32
      n_outputs = 2
      model = Sequential()
      model.add(Conv1D(filters=64, kernel_size=3, activation='relu', input_shape=X_train.shape[1:]))
      model.add(Conv1D(filters=64, kernel_size=3, activation='relu'))
      model.add(Dropout(0.5))
      model.add(MaxPooling1D(pool_size=2))
      model.add(Flatten())
      model.add(Dense(100, activation='relu'))
      model.add(Dense(n_outputs, activation='softmax'))
      model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
      # fit network
      model.fit(X_train, y_train,epochs=10, batch_size=32, verbose=0)
      # evaluate model (reshape the test set the same way as the training set)
      X_test = X_test.reshape(X_test.shape[0], 25, 1)
      _, accuracy = model.evaluate(X_test, y_test, batch_size=batch_size, verbose=0)
      return accuracy
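As a sanity check on the corrected architecture, the timestep dimension can be traced by hand (assuming 'valid' padding, stride 1, and Keras defaults): (25, 1) → Conv1D(k=3) → (23, 64) → Conv1D(k=3) → (21, 64) → MaxPooling1D(2) → (10, 64) → Flatten → 640. A small sketch of that arithmetic:

```python
def conv1d_len(n, kernel, stride=1):
    """Output length of a 1D convolution with 'valid' padding."""
    return (n - kernel) // stride + 1

def pool1d_len(n, pool):
    """Output length of MaxPooling1D (stride defaults to the pool size)."""
    return (n - pool) // pool + 1

steps = 25                      # timesteps after reshaping to (25, 1)
steps = conv1d_len(steps, 3)    # first Conv1D  -> 23
steps = conv1d_len(steps, 3)    # second Conv1D -> 21
steps = pool1d_len(steps, 2)    # MaxPooling1D  -> 10
flat = steps * 64               # Flatten with 64 filters -> 640
print(flat)  # 640
```

Since 25 timesteps survive two kernel-3 convolutions and a pooling step, the corrected input_shape of (25, 1) flows through the network without shape errors, unlike the original (45447, 25), which used the sample count as the timestep axis.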