Python TensorFlow 2.0 training: use_multiprocessing error

The error below occurred while training the following convolutional network on (96, 96, 3) images that I converted to NumPy arrays and saved to a .npy file.

I can't tell where I went wrong and need help resolving the error; I suspect it may be related to the loss function.

Architecture of the autoencoder:

model = models.Sequential()

model.add(layers.Conv2D(input_shape=(96, 96, 3), filters=64, kernel_size=(3, 3), strides=2, padding='same', activation=tf.keras.layers.LeakyReLU(alpha=0.3), name='conv_layer_1', dtype=tf.float32))
model.add(layers.Conv2D(filters=128, kernel_size=(3, 3), strides=2, padding='same', activation=tf.keras.layers.LeakyReLU(alpha=0.3), name='conv_layer_2', dtype=tf.float32))
model.add(layers.Conv2D(filters=64, kernel_size=(3, 3), strides=2, padding='same', activation=tf.keras.layers.LeakyReLU(alpha=0.3), name='deconv_layer_1', dtype=tf.float32))
model.add(layers.Conv2D(filters=1, kernel_size=(3, 3), strides=2, padding='same', activation=tf.keras.layers.LeakyReLU(alpha=0.3), name='deconv_layer_2', dtype=tf.float32))

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.01), loss=tf.keras.losses.mean_squared_error)
model.summary()

model.fit(np.array(x_train).reshape(10, 3, 96, 96), epochs=1, use_multiprocessing=True)
[Same error with use_multiprocessing=False]
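A side note on the data layout before the traceback: `reshape(10, 3, 96, 96)` produces a channels-first batch, while `Conv2D` with `input_shape=(96, 96, 3)` expects channels-last by default. A minimal sketch of batching the images without reordering the channel axis (the `x_train` list here is a stand-in for the real data):

```python
import numpy as np

# Stand-in for the real dataset: ten (96, 96, 3) images loaded from the .npy file
x_train = [np.zeros((96, 96, 3), dtype=np.float32) for _ in range(10)]

# np.stack keeps each image's (height, width, channels) layout intact,
# producing the channels-last batch shape Conv2D expects by default
x = np.stack(x_train)
print(x.shape)  # (10, 96, 96, 3)
```

A bare `reshape` to (10, 3, 96, 96) would not move any bytes into the right places; it only relabels the axes, scrambling the pixel layout as well as the channel order.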

---------------------------------------------------------------------------
IndexError                                Traceback (most recent call last)
<ipython-input-53-77429e1864b4> in <module>
----> 1 model.fit(x_train, epochs=1, use_multiprocessing = False)

/opt/conda/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, max_queue_size, workers, use_multiprocessing, **kwargs)
    817         max_queue_size=max_queue_size,
    818         workers=workers,
--> 819         use_multiprocessing=use_multiprocessing)
    820 
    821   def evaluate(self,

/opt/conda/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2.py in fit(self, model, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, max_queue_size, workers, use_multiprocessing, **kwargs)
    340                 mode=ModeKeys.TRAIN,
    341                 training_context=training_context,
--> 342                 total_epochs=epochs)
    343             cbks.make_logs(model, epoch_logs, training_result, ModeKeys.TRAIN)
    344 

/opt/conda/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2.py in run_one_epoch(model, iterator, execution_function, dataset_size, batch_size, strategy, steps_per_epoch, num_samples, mode, training_context, total_epochs)
    126         step=step, mode=mode, size=current_batch_size) as batch_logs:
    127       try:
--> 128         batch_outs = execution_function(iterator)
    129       except (StopIteration, errors.OutOfRangeError):
    130         # TODO(kaftan): File bug about tf function and errors.OutOfRangeError?

/opt/conda/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2_utils.py in execution_function(input_fn)
     96     # `numpy` translates Tensors to values in Eager mode.
     97     return nest.map_structure(_non_none_constant_value,
---> 98                               distributed_function(input_fn))
     99 
    100   return execution_function

/opt/conda/lib/python3.6/site-packages/tensorflow_core/python/eager/def_function.py in __call__(self, *args, **kwds)
    566         xla_context.Exit()
    567     else:
--> 568       result = self._call(*args, **kwds)
    569 
    570     if tracing_count == self._get_tracing_count():

/opt/conda/lib/python3.6/site-packages/tensorflow_core/python/eager/def_function.py in _call(self, *args, **kwds)
    604       # In this case we have not created variables on the first call. So we can
    605       # run the first trace but we should fail if variables are created.
--> 606       results = self._stateful_fn(*args, **kwds)
    607       if self._created_variables:
    608         raise ValueError("Creating variables on a non-first call to a function"

/opt/conda/lib/python3.6/site-packages/tensorflow_core/python/eager/function.py in __call__(self, *args, **kwargs)
   2360     """Calls a graph function specialized to the inputs."""
   2361     with self._lock:
-> 2362       graph_function, args, kwargs = self._maybe_define_function(args, kwargs)
   2363     return graph_function._filtered_call(args, kwargs)  # pylint: disable=protected-access
   2364 

/opt/conda/lib/python3.6/site-packages/tensorflow_core/python/eager/function.py in _maybe_define_function(self, args, kwargs)
   2701 
   2702       self._function_cache.missed.add(call_context_key)
-> 2703       graph_function = self._create_graph_function(args, kwargs)
   2704       self._function_cache.primary[cache_key] = graph_function
   2705       return graph_function, args, kwargs

/opt/conda/lib/python3.6/site-packages/tensorflow_core/python/eager/function.py in _create_graph_function(self, args, kwargs, override_flat_arg_shapes)
   2591             arg_names=arg_names,
   2592             override_flat_arg_shapes=override_flat_arg_shapes,
-> 2593             capture_by_value=self._capture_by_value),
   2594         self._function_attributes,
   2595         # Tell the ConcreteFunction to clean up its graph once it goes out of

/opt/conda/lib/python3.6/site-packages/tensorflow_core/python/framework/func_graph.py in func_graph_from_py_func(name, python_func, args, kwargs, signature, func_graph, autograph, autograph_options, add_control_dependencies, arg_names, op_return_value, collections, capture_by_value, override_flat_arg_shapes)
    976                                           converted_func)
    977 
--> 978       func_outputs = python_func(*func_args, **func_kwargs)
    979 
    980       # invariant: `func_outputs` contains only Tensors, CompositeTensors,

/opt/conda/lib/python3.6/site-packages/tensorflow_core/python/eager/def_function.py in wrapped_fn(*args, **kwds)
    437         # __wrapped__ allows AutoGraph to swap in a converted function. We give
    438         # the function a weak reference to itself to avoid a reference cycle.
--> 439         return weak_wrapped_fn().__wrapped__(*args, **kwds)
    440     weak_wrapped_fn = weakref.ref(wrapped_fn)
    441 

/opt/conda/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2_utils.py in distributed_function(input_iterator)
     83     args = _prepare_feed_values(model, input_iterator, mode, strategy)
     84     outputs = strategy.experimental_run_v2(
---> 85         per_replica_function, args=args)
     86     # Out of PerReplica outputs reduce or pick values to return.
     87     all_outputs = dist_utils.unwrap_output_dict(

/opt/conda/lib/python3.6/site-packages/tensorflow_core/python/distribute/distribute_lib.py in experimental_run_v2(self, fn, args, kwargs)
    761       fn = autograph.tf_convert(fn, ag_ctx.control_status_ctx(),
    762                                 convert_by_default=False)
--> 763       return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
    764 
    765   def reduce(self, reduce_op, value, axis):

/opt/conda/lib/python3.6/site-packages/tensorflow_core/python/distribute/distribute_lib.py in call_for_each_replica(self, fn, args, kwargs)
   1817       kwargs = {}
   1818     with self._container_strategy().scope():
-> 1819       return self._call_for_each_replica(fn, args, kwargs)
   1820 
   1821   def _call_for_each_replica(self, fn, args, kwargs):

/opt/conda/lib/python3.6/site-packages/tensorflow_core/python/distribute/distribute_lib.py in _call_for_each_replica(self, fn, args, kwargs)
   2162         self._container_strategy(),
   2163         replica_id_in_sync_group=constant_op.constant(0, dtypes.int32)):
-> 2164       return fn(*args, **kwargs)
   2165 
   2166   def _reduce_to(self, reduce_op, value, destinations):

/opt/conda/lib/python3.6/site-packages/tensorflow_core/python/autograph/impl/api.py in wrapper(*args, **kwargs)
    290   def wrapper(*args, **kwargs):
    291     with ag_ctx.ControlStatusCtx(status=ag_ctx.Status.DISABLED):
--> 292       return func(*args, **kwargs)
    293 
    294   if inspect.isfunction(func) or inspect.ismethod(func):

/opt/conda/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2_utils.py in train_on_batch(model, x, y, sample_weight, class_weight, reset_metrics, standalone)
    431       y,
    432       sample_weights=sample_weights,
--> 433       output_loss_metrics=model._output_loss_metrics)
    434 
    435   if reset_metrics:

/opt/conda/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_eager.py in train_on_batch(model, inputs, targets, sample_weights, output_loss_metrics)
    310           sample_weights=sample_weights,
    311           training=True,
--> 312           output_loss_metrics=output_loss_metrics))
    313   if not isinstance(outs, list):
    314     outs = [outs]

/opt/conda/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_eager.py in _process_single_batch(model, inputs, targets, output_loss_metrics, sample_weights, training)
    251               output_loss_metrics=output_loss_metrics,
    252               sample_weights=sample_weights,
--> 253               training=training))
    254       if total_loss is None:
    255         raise ValueError('The model cannot be run '

/opt/conda/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_eager.py in _model_loss(model, inputs, targets, output_loss_metrics, sample_weights, training)
    165 
    166         if hasattr(loss_fn, 'reduction'):
--> 167           per_sample_losses = loss_fn.call(targets[i], outs[i])
    168           weighted_losses = losses_utils.compute_weighted_loss(
    169               per_sample_losses,

IndexError: list index out of range
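The final frame pinpoints the failure: `_model_loss` evaluates `loss_fn.call(targets[i], outs[i])`, but `fit` was called without a `y` argument, so the targets list is empty. The failure mode can be reproduced in plain Python, with hypothetical stand-in values for `targets` and `outs`:

```python
# fit() received no y, so Keras ends up with an empty target list
targets = []
outs = ['reconstruction']  # the model still produces one output per batch

try:
    targets[0]  # mirrors loss_fn.call(targets[i], outs[i]) in _model_loss
except IndexError as exc:
    print(exc)  # list index out of range
```

So the error has nothing to do with `use_multiprocessing`, which is why toggling it changes nothing.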
The IndexError comes from calling `fit` without targets: the compiled mean-squared-error loss looks up `targets[i]`, but no `y` was supplied. For an autoencoder the input is also the target, so pass the same array as both `x` and `y`:

model.fit(np.array(x_train).reshape(10, 3, 96, 96), np.array(x_train).reshape(10, 3, 96, 96), epochs=1, use_multiprocessing=True)