
Python segmentation fault running layers.conv2d


I have a simple test script that reproduces the problem I am running into. I am trying to implement a CNN with TensorFlow, but when I increase the input size I start getting segmentation faults. In the test script, the run succeeds when n_H = 3000, but when I set n_H = 4000 I get a segmentation fault. Also, if I run it without layers.conv2d by setting with_conv = False, the script completes successfully. Does anyone know what my problem is?

I am running this on a host with 12 CPUs. I don't really understand TensorFlow's message about "Creating new thread pool with default inter op setting: 2", and I don't know whether it is related to my problem.
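That log line only reports how many threads TensorFlow gave its inter-op pool; it is informational and very likely unrelated to the crash. For reference, a sketch of how the two CPU thread pools can be set explicitly with the TF 1.x session API (the thread counts below are illustrative, not a recommendation):

```python
import tensorflow as tf

# TF 1.x: configure the two CPU thread pools explicitly instead of
# relying on the defaults reported in the startup log.
config = tf.ConfigProto(
    inter_op_parallelism_threads=2,   # threads running independent ops concurrently
    intra_op_parallelism_threads=12,  # threads available inside a single op (e.g. a matmul)
)

with tf.Session(config=config) as sess:
    pass  # run the graph as usual
```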

Here is the output when I hit the segmentation fault:

$ python test.py
(100, 4000, 100, 1) (100, 8)
2018-10-10 11:57:23.825704: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2
2018-10-10 11:57:23.827653: I tensorflow/core/common_runtime/process_util.cc:69] Creating new thread pool with default inter op setting: 2. Tune using inter_op_parallelism_threads for best performance.
Fatal Python error: Segmentation fault

Thread 0x00007fea9af1a740 (most recent call first):
  File "/home/seng/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1350 in _call_tf_sessionrun
  File "/home/seng/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1263 in _run_fn
  File "/home/seng/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1278 in _do_call
  File "/home/seng/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1272 in _do_run
  File "/home/seng/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1100 in _run
  File "/home/seng/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 877 in run
  File "test.py", line 43 in <module>
Segmentation fault (core dumped)

Usually, a segmentation fault here means your host does not have enough RAM to run the script. When you change the n_H value you add many parameters to the dense2 layer, even though you add no parameters to the convolutional layer. You also add many operations to the conv1 layer, because its input is much larger.


The ops in the conv1 layer, together with its roughly 1,600 parameters, are most likely saturating your RAM and killing the script. Try tracking RAM usage with htop or any other monitor.
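The effect of n_H on the dense2 layer can be checked with a quick back-of-the-envelope calculation. This sketch reproduces the shape arithmetic of the script's graph ('valid' padding, stride-1 5×5 conv, 2×2 max pool) and counts the dense2 weights; the helper name is made up for illustration:

```python
def dense2_params(n_H, n_W=100, filters=64, kernel=5, units=256):
    """Parameter count of the dense2 layer for the graph in the question."""
    conv_h, conv_w = n_H - kernel + 1, n_W - kernel + 1  # 'valid' conv, stride 1
    pool_h, pool_w = conv_h // 2, conv_w // 2            # 2x2 max pool, stride 2
    flat = pool_h * pool_w * filters                     # flattened feature size
    return flat * units + units                          # weights + biases

for n_H in (3000, 4000):
    p = dense2_params(n_H)
    print(n_H, p, f"{p * 4 / 1e9:.1f} GB as float32")
# n_H=3000 -> ~1.18e9 params (~4.7 GB); n_H=4000 -> ~1.57e9 params (~6.3 GB)
```

So even at n_H = 3000 the dense2 weight matrix alone is over a billion float32 values; conv1 itself contributes only 5×5×1×64 + 64 = 1,664 parameters, but its activations scale with the input size.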

My host has 45 GB of RAM. During execution, my test script consumes about 20 GB before it hits the segmentation fault, so there is still plenty of memory left. I know there are a lot of parameters, but is a tensor of input size (100, 4000, 100, 1) really that big? Just an update: I was running the script on a CentOS 7.5 host when I hit the segmentation fault. I then ran the same script on an OS X host, and it ran successfully! Why would my CentOS 7 host fail?
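To pin down how much memory the process actually peaks at, rather than eyeballing htop, the standard library can report the peak resident set size. A minimal sketch (note the platform quirk: ru_maxrss is in kilobytes on Linux but bytes on macOS, which matters when comparing the two hosts):

```python
import resource
import sys

# Peak resident set size of the current process so far (POSIX only).
peak = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
if sys.platform == "darwin":
    peak_mb = peak / (1024 * 1024)  # macOS reports bytes
else:
    peak_mb = peak / 1024           # Linux reports kilobytes
print(f"peak RSS: {peak_mb:.1f} MB")
```

Calling this right before sess.run would show how much memory graph construction and variable initialization already consumed.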
import faulthandler; faulthandler.enable()
import numpy as np
import tensorflow as tf

seed = 42
tf.reset_default_graph()
tf.set_random_seed(seed)
np.random.seed(seed)

num_examples = 100
n_H = 4000
with_conv = True

# generate training data
X_gen = np.random.randn(num_examples*n_H*100).reshape(num_examples, n_H, 100, 1)
Y_gen = np.random.randn(num_examples*8).reshape(num_examples, 8)    
X_train = X_gen[0:num_examples, 0:n_H, ...]
Y_train = Y_gen[0:num_examples, ...]
print(X_train.shape, Y_train.shape)

# create placeholders
X = tf.placeholder(tf.float32, shape=(None, n_H, 100, 1))
Y = tf.placeholder(tf.float32, shape=(None, 8))

# build graph
if with_conv:
    conv1 = tf.layers.conv2d(X, filters=64, kernel_size=[5, 5], strides=1,
                             padding='valid', activation=tf.nn.relu)
    pool1 = tf.layers.max_pooling2d(conv1, pool_size=[2, 2], strides=2, padding='valid')
else:
    pool1 = tf.layers.max_pooling2d(X, pool_size=[2, 2], strides=2, padding='valid')
pool1_flat = tf.layers.flatten(pool1)
dense2 = tf.layers.dense(pool1_flat, units=256, activation=tf.nn.relu)
H = tf.layers.dense(dense2, units=8, activation=tf.nn.relu)

# compute cost
cost = tf.reduce_mean(tf.square(Y - H))

# initialize variables
init = tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run(init)
    a = sess.run(cost, feed_dict = {X: X_train, Y:Y_train})
    print(a)