
Python AttributeError: 'Tensor' object has no attribute 'size' with pre-trained BERT

Tags: python, tensorflow, keras, bert-language-model, pre-trained-model

This is how I defined the model:

def build_model():
  input_layer = keras.layers.Input(name="Input", shape=(MAX_LEN), dtype='int64')
  bert = BertForPreTraining.from_pretrained('digitalepidemiologylab/covid-twitter-bert-v2')(input_layer)
  bert = bert[0][:,0,:]
  x = keras.layers.Bidirectional(keras.layers.LSTM(256, name="LSTM", activation='tanh', dropout=0.3), name="Bidirectional_LSTM")(bert)
  x = keras.layers.Dense(64, 'relu')(x)
  output_layer = keras.layers.Dense(1, 'sigmoid', name="Output")(x)

  model = keras.Model(inputs=input_layer, outputs=output_layer)

  model.compile(loss=loss,
                optimizer=optimizer)
  return model
When I run

model = build_model()

this is the error I get:

AttributeError                            Traceback (most recent call last)
<ipython-input-57-671884cecb64> in <module>()
----> 1 model = build_model()

4 frames
<ipython-input-56-ef0d67347557> in build_model()
      1 def build_model():
      2   input_layer = keras.layers.Input(name="Input", shape=(MAX_LEN), dtype='int64')
----> 3   bert = BertForPreTraining.from_pretrained('digitalepidemiologylab/covid-twitter-bert-v2')(input_layer)
      4   bert = bert[0][:,0,:]
      5   x = keras.layers.Bidirectional(keras.layers.LSTM(256, name="LSTM", activation='tanh', dropout=0.3), name="Bidirectional_LSTM")(bert)

/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
    720             result = self._slow_forward(*input, **kwargs)
    721         else:
--> 722             result = self.forward(*input, **kwargs)
    723         for hook in itertools.chain(
    724                 _global_forward_hooks.values(),

/usr/local/lib/python3.6/dist-packages/transformers/modeling_bert.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, labels, next_sentence_label, output_attentions, output_hidden_states, return_dict, **kwargs)
    938             output_attentions=output_attentions,
    939             output_hidden_states=output_hidden_states,
--> 940             return_dict=return_dict,
    941         )
    942 

/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
    720             result = self._slow_forward(*input, **kwargs)
    721         else:
--> 722             result = self.forward(*input, **kwargs)
    723         for hook in itertools.chain(
    724                 _global_forward_hooks.values(),

/usr/local/lib/python3.6/dist-packages/transformers/modeling_bert.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, encoder_hidden_states, encoder_attention_mask, output_attentions, output_hidden_states, return_dict)
    793             raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time")
    794         elif input_ids is not None:
--> 795             input_shape = input_ids.size()
    796         elif inputs_embeds is not None:
    797             input_shape = inputs_embeds.size()[:-1]

AttributeError: 'Tensor' object has no attribute 'size'
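The root cause is that `BertForPreTraining` is a PyTorch module: the symbolic tensor produced by `keras.layers.Input` is a TensorFlow object, so when it reaches PyTorch's `forward` the `input_ids.size()` call fails, since TensorFlow tensors have no `.size()` method. A minimal sketch of a fix, assuming the TensorFlow class `TFBertModel` from `transformers` can load this checkpoint (with `from_pt=True` to convert PyTorch weights if no TF checkpoint is published; `MAX_LEN` and the `'binary_crossentropy'`/`'adam'` compile arguments are placeholders, since the originals are not shown):

```python
MAX_LEN = 96  # assumed sequence length; the question does not show the original value

def build_model():
    # Imports kept inside the function so the sketch stays lightweight until used.
    from tensorflow import keras
    from transformers import TFBertModel

    # Note shape=(MAX_LEN,): Keras expects a tuple, not a bare int.
    input_layer = keras.layers.Input(name="Input", shape=(MAX_LEN,), dtype='int64')

    # Use the TensorFlow model class, not the PyTorch one.
    # from_pt=True converts PyTorch weights when no TF checkpoint exists.
    bert = TFBertModel.from_pretrained(
        'digitalepidemiologylab/covid-twitter-bert-v2', from_pt=True)

    # last_hidden_state has shape (batch, seq_len, hidden). Feed the full
    # sequence to the LSTM so it still has a time dimension to iterate over;
    # the original bert[0][:, 0, :] slice yields a 2-D tensor, which an LSTM
    # layer would reject anyway.
    sequence_output = bert(input_layer)[0]

    x = keras.layers.Bidirectional(
        keras.layers.LSTM(256, activation='tanh', dropout=0.3),
        name="Bidirectional_LSTM")(sequence_output)
    x = keras.layers.Dense(64, activation='relu')(x)
    output_layer = keras.layers.Dense(1, activation='sigmoid', name="Output")(x)

    model = keras.Model(inputs=input_layer, outputs=output_layer)
    model.compile(loss='binary_crossentropy', optimizer='adam')
    return model
```

The key change is swapping the PyTorch class for its TF counterpart so the whole graph stays in TensorFlow; mixing a `torch.nn.Module` into a Keras functional graph cannot work, because the two frameworks use incompatible tensor types.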