Tensorflow 'DataFrame' object has no attribute 'train'

Please help me, what am I missing? Why do I keep getting this error:

'DataFrame' object has no attribute 'train'

# -*- coding: utf-8 -*-
 import tensorflow as tf
 from tensorflow.contrib import rnn
 import numpy as np
 import matplotlib.pyplot as plt
 import pandas as pd

 dataset = pd.read_csv("all.csv")
 x = dataset.iloc[:, 1:51].values
 y = dataset.iloc[:, 51].values

 time_steps=5
 num_units=128
 n_input=50
 learning_rate=0.001
 n_classes=2
 batch_size=5

 #weights and biases of appropriate shape to accomplish above task
 out_weights=tf.Variable(tf.random_normal([num_units,n_classes]))
 out_bias=tf.Variable(tf.random_normal([n_classes]))

 #defining placeholders
 #input image placeholder
 x=tf.placeholder("float",[None,time_steps,n_input])
 #input label placeholder
 y=tf.placeholder("float",[None,n_classes])

 #processing the input tensor from [batch_size,n_steps,n_input] to
 #"time_steps" number of [batch_size,n_input] tensors
 input=tf.unstack(x ,time_steps,1)

 #defining the network
 lstm_layer=rnn.BasicLSTMCell(num_units,forget_bias=1)
 outputs,_=rnn.static_rnn(lstm_layer,input,dtype="float32")

 #converting last output of dimension [batch_size,num_units] to
 #[batch_size,n_classes] by out_weight multiplication
 prediction=tf.matmul(outputs[-1],out_weights)+out_bias

 #loss_function
 loss=tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=prediction,labels=y))
 #optimization
 opt=tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(loss)

 #model evaluation
 correct_prediction=tf.equal(tf.argmax(prediction,1),tf.argmax(y,1))
 accuracy=tf.reduce_mean(tf.cast(correct_prediction,tf.float32))

 #initialize variables
 init=tf.global_variables_initializer()
 with tf.Session() as sess:
     sess.run(init)
     iter=1
     while iter<800:
         batch_x,batch_y=dataset.train.next_batch(batch_size=batch_size)

         batch_x=batch_x.reshape((batch_size,time_steps,n_input))

         sess.run(opt, feed_dict={x: batch_x, y: batch_y})

         if iter %10==0:
            acc=sess.run(accuracy,feed_dict={x:batch_x,y:batch_y})
            los=sess.run(loss,feed_dict={x:batch_x,y:batch_y})
            print("For iter ",iter)
            print("Accuracy ",acc)
            print("Loss ",los)
            print("__________________")

         iter=iter+1

As the error says, the DataFrame object has no attribute/method called next_batch.


You have probably followed a tutorial that used TensorFlow helper methods to load the MNIST database, but pandas returns a different kind of object than the 'DataSet' class you are expecting.
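
Since pd.read_csv only gives you a plain DataFrame, the mini-batching that mnist.train.next_batch provides in the MNIST tutorials has to be done by hand. Below is a minimal sketch of one way to do that, assuming the same "all.csv" layout as in your question; the helper name next_batch here is purely illustrative and not part of pandas or TensorFlow:

 import numpy as np
 import pandas as pd

 dataset = pd.read_csv("all.csv")
 features = dataset.iloc[:, 1:51].values   # 50 feature columns, as in the question
 labels = dataset.iloc[:, 51].values       # integer class labels

 def next_batch(xs, ys, batch_size):
     # draw a random mini-batch of rows, mimicking the MNIST DataSet helper
     idx = np.random.randint(0, len(xs), size=batch_size)
     return xs[idx], ys[idx]

 # inside the training loop, instead of dataset.train.next_batch(...):
 # batch_x, batch_y = next_batch(features, labels, batch_size)

Note that batch_x would still have to be reshaped to the [batch_size,time_steps,n_input] shape your x placeholder expects, and batch_y would need to be one-hot encoded to match the [None,n_classes] y placeholder.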

Thank you for your reply, it was very helpful.