
Resource exhausted error or OOM when running an LSTM in TensorFlow


I am training my LSTM network in TensorFlow with the following code:

import pandas as pd
import numpy as np
import pickle
import matplotlib.pyplot as plt
from scipy import stats
import tensorflow as tf
import seaborn as sns
from pylab import rcParams
from sklearn import metrics
from sklearn.model_selection import train_test_split

%matplotlib inline

sns.set(style='whitegrid', palette='muted', font_scale=1.5)

rcParams['figure.figsize'] = 14, 8

RANDOM_SEED = 42

columns = ['user','activity','timestamp', 'x-axis', 'y-axis', 'z-axis']
df = pd.read_csv('data/WISDM_ar_v1.1_raw.txt', header = None, names = columns)
df = df.dropna()

df.head()

df.info()

##df['activity'].value_counts().plot(kind='bar', title='Training examples by activity type');
##df['user'].value_counts().plot(kind='bar', title='Training examples by user');

def plot_activity(activity, df):
    data = df[df['activity'] == activity][['x-axis', 'y-axis', 'z-axis']][:200]
    axis = data.plot(subplots=True, figsize=(16, 12), 
                     title=activity)
    for ax in axis:
        ax.legend(loc='lower left', bbox_to_anchor=(1.0, 0.5))


##plot_activity("Sitting", df)
##plot_activity("Standing", df)
##plot_activity("Walking", df)
##plot_activity("Jogging", df)


N_TIME_STEPS = 200
N_FEATURES = 3
step = 20
segments = []
labels = []
for i in range(0, len(df) - N_TIME_STEPS, step):
    xs = df['x-axis'].values[i: i + N_TIME_STEPS]
    ys = df['y-axis'].values[i: i + N_TIME_STEPS]
    zs = df['z-axis'].values[i: i + N_TIME_STEPS]
    label = stats.mode(df['activity'][i: i + N_TIME_STEPS])[0][0]
    segments.append([xs, ys, zs])
    labels.append(label)

np.array(segments).shape

reshaped_segments = np.asarray(segments, dtype= np.float32).reshape(-1, N_TIME_STEPS, N_FEATURES)
labels = np.asarray(pd.get_dummies(labels), dtype = np.float32)

reshaped_segments.shape
labels[0]

X_train, X_test, y_train, y_test = train_test_split(
        reshaped_segments, labels, test_size=0.2, random_state=RANDOM_SEED)

len(X_train)
len(X_test)

N_CLASSES = 6
N_HIDDEN_UNITS = 64


def create_LSTM_model(inputs):
    W = {
        'hidden': tf.Variable(tf.random_normal([N_FEATURES, N_HIDDEN_UNITS])),
        'output': tf.Variable(tf.random_normal([N_HIDDEN_UNITS, N_CLASSES]))
    }
    biases = {
        'hidden': tf.Variable(tf.random_normal([N_HIDDEN_UNITS], mean=1.0)),
        'output': tf.Variable(tf.random_normal([N_CLASSES]))
    }

    X = tf.transpose(inputs, [1, 0, 2])
    X = tf.reshape(X, [-1, N_FEATURES])
    hidden = tf.nn.relu(tf.matmul(X, W['hidden']) + biases['hidden'])
    hidden = tf.split(hidden, N_TIME_STEPS, 0)

    # Stack 2 LSTM layers
    lstm_layers = [tf.contrib.rnn.BasicLSTMCell(N_HIDDEN_UNITS, forget_bias=1.0) for _ in range(2)]
    lstm_layers = tf.contrib.rnn.MultiRNNCell(lstm_layers)

    outputs, _ = tf.contrib.rnn.static_rnn(lstm_layers, hidden, dtype=tf.float32)

    # Get output for the last time step
    lstm_last_output = outputs[-1]

    return tf.matmul(lstm_last_output, W['output']) + biases['output']


tf.reset_default_graph()

X = tf.placeholder(tf.float32, [None, N_TIME_STEPS, N_FEATURES], name="input")
Y = tf.placeholder(tf.float32, [None, N_CLASSES])


pred_Y = create_LSTM_model(X)

pred_softmax = tf.nn.softmax(pred_Y, name="y_")

loss = -tf.reduce_sum(Y * tf.log(pred_softmax))
LEARNING_RATE = 0.0025  # not defined in the snippet as posted; this value is a placeholder
optimizer = tf.train.GradientDescentOptimizer(learning_rate=LEARNING_RATE).minimize(loss)

correct_prediction = tf.equal(tf.argmax(pred_softmax,1), tf.argmax(Y,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

cost_history = np.empty(shape=[1],dtype=float)
saver = tf.train.Saver()

session = tf.Session()
session.run(tf.global_variables_initializer())

batch_size = 10
total_batches = X_train.shape[0] // batch_size


for epoch in range(8):
    for b in range(total_batches):
        offset = (b * batch_size) % (y_train.shape[0] - batch_size)
        batch_x = X_train[offset:(offset + batch_size), :]
        batch_y = y_train[offset:(offset + batch_size), :]
        _, c = session.run([optimizer, loss], feed_dict={X: batch_x, Y: batch_y})
        cost_history = np.append(cost_history, c)
    # note: this accuracy call feeds the entire training set through the graph at once
    print("Epoch: ", epoch, " Training Loss: ", c, " Training Accuracy: ",
          session.run(accuracy, feed_dict={X: X_train, Y: y_train}))
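For scale: the windowing loop above (window length 200, stride 20) turns the raw WISDM stream into overlapping segments, and the 80/20 split then determines how many windows end up in `X_train`. A rough sketch of that arithmetic, assuming about 1,098,204 usable rows after `dropna()` (the exact count is not printed in the question):

```python
import math

n_rows = 1_098_204          # assumed row count after df.dropna(); not shown in the question
N_TIME_STEPS, step = 200, 20

# number of iterations of `range(0, n_rows - N_TIME_STEPS, step)`
n_windows = math.ceil((n_rows - N_TIME_STEPS) / step)
print(n_windows)            # 54901 overlapping windows

# sklearn's train_test_split(test_size=0.2) rounds the test share up
n_train = n_windows - math.ceil(0.2 * n_windows)
print(n_train)              # 43920 windows in X_train
```

That training-set size, not `batch_size = 10`, is the number to keep in mind when reading the error that follows.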
The dataset I am using comes from:

WISDM_ar_v1.1_raw

However, when I run it, I get a ResourceExhaustedError (OOM):

Traceback (most recent call last):
  File "C:\Users\Chaine\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\client\session.py", line 1350, in _do_call
    return fn(*args)
  File "C:\Users\Chaine\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\client\session.py", line 1329, in _run_fn
    status, run_metadata)
  File "C:\Users\Chaine\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\framework\errors_impl.py", line 473, in __exit__
    c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[8784000,64] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
     [[Node: MatMul = MatMul[T=DT_FLOAT, transpose_a=false, transpose_b=false, _device="/job:localhost/replica:0/task:0/device:GPU:0"](Reshape, Variable/read)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

     [[Node: add_1/_15 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_9637_add_1", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "", line 9, in <module>
  File "C:\Users\Chaine\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\client\session.py", line 895, in run
    run_metadata_ptr)
  File "C:\Users\Chaine\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\client\session.py", line 1128, in _run
    feed_dict_tensor, options, run_metadata)
  File "C:\Users\Chaine\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\client\session.py", line 1344, in _do_run
    options, run_metadata)
  File "C:\Users\Chaine\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\client\session.py", line 1363, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[8784000,64] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
     [[Node: MatMul = MatMul[T=DT_FLOAT, transpose_a=false, transpose_b=false, _device="/job:localhost/replica:0/task:0/device:GPU:0"](Reshape, Variable/read)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

     [[Node: add_1/_15 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_9637_add_1", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

Caused by op 'MatMul', defined at:
  File "", line 1, in <module>
  File "C:\Users\Chaine\AppData\Local\Programs\Python\Python35\lib\idlelib\run.py", line 130, in main
    ret = method(*args, **kwargs)
  File "C:\Users\Chaine\AppData\Local\Programs\Python\Python35\lib\idlelib\run.py", line 357, in runcode
    exec(code, self.locals)
  File "", line 1, in <module>
  File "", line 13, in create_LSTM_model
  File "C:\Users\Chaine\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\ops\math_ops.py", line 2022, in matmul
    a, b, transpose_a=transpose_a, transpose_b=transpose_b, name=name)
  File "C:\Users\Chaine\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\ops\gen_math_ops.py", line 2799, in _mat_mul
    name=name)
  File "C:\Users\Chaine\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 787, in _apply_op_helper
    op_def=op_def)
  File "C:\Users\Chaine\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\framework\ops.py", line 3160, in create_op
    op_def=op_def)
  File "C:\Users\Chaine\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\framework\ops.py", line 1625, in __init__
    self._traceback = self._graph._extract_stack()  # pylint: disable=protected-access

ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape[8784000,64] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
     [[Node: MatMul = MatMul[T=DT_FLOAT, transpose_a=false, transpose_b=false, _device="/job:localhost/replica:0/task:0/device:GPU:0"](Reshape, Variable/read)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

     [[Node: add_1/_15 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_9637_add_1", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]]]
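Decoding the message: the failing op is the first MatMul in create_LSTM_model (its inputs are Reshape and Variable/read, i.e. the flattened input times W['hidden']). Since the input is reshaped to [N_TIME_STEPS * num_samples, N_FEATURES], a row count of 8,784,000 corresponds to 43,920 samples in a single session.run — the whole training set fed by the per-epoch accuracy call, not the batch_size of 10. A quick back-of-the-envelope check (the 43,920 figure is inferred from the error message, not printed in the question):

```python
N_TIME_STEPS = 200   # window length used in the question
N_HIDDEN_UNITS = 64  # hidden-layer width used in the question

oom_rows = 8_784_000                     # first dimension in the OOM message
num_samples = oom_rows // N_TIME_STEPS   # samples in that one session.run
print(num_samples)                       # 43920, i.e. all of X_train, not batch_size=10

# the hidden activation alone is an [8784000, 64] float32 tensor
gib = oom_rows * N_HIDDEN_UNITS * 4 / 1024**3
print(round(gib, 2))                     # ~2.09 GiB for a single intermediate tensor
```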
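The usual way around this kind of OOM is to never feed the whole training set in one session.run: evaluate the accuracy over mini-batches and combine the per-batch results. A minimal NumPy sketch of the batched-averaging idea, with predict_fn standing in for something like lambda xb: session.run(pred_softmax, feed_dict={X: xb}) (the names here are illustrative, not from the question):

```python
import numpy as np

def batched_accuracy(predict_fn, X_data, y_data, batch_size=256):
    """Accuracy over a dataset, computed one mini-batch at a time."""
    correct = 0
    for start in range(0, len(X_data), batch_size):
        xb = X_data[start:start + batch_size]
        yb = y_data[start:start + batch_size]
        preds = predict_fn(xb)  # shape [batch, n_classes], e.g. softmax outputs
        correct += int(np.sum(np.argmax(preds, axis=1) == np.argmax(yb, axis=1)))
    return correct / len(X_data)

# toy check: an "oracle" model that returns the one-hot labels scores 100%
rng = np.random.default_rng(0)
labels = np.eye(6)[rng.integers(0, 6, size=1000)]
print(batched_accuracy(lambda xb: xb, labels, labels))  # 1.0
```

Each chunk then allocates at most [batch_size * N_TIME_STEPS, N_HIDDEN_UNITS] for the hidden matmul instead of the full [8784000, 64] tensor.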