
Python CNN with Keras gives a "Graph disconnected" error

Tags: python, tensorflow, keras, cnn, dot-product

I am experimenting with multiple inputs for a dual CNN.

However, I am getting a graph disconnected error:

Graph disconnected: cannot obtain value for tensor Tensor("embedding_1_input:0", shape=(?, 40), dtype=float32) at layer "embedding_1_input". The following previous layers were accessed without issue: []
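For context, Keras raises this error when the model's output tensor cannot be traced back through the layer graph to the input declared on the Model. A minimal sketch that reproduces it (the layer names and sizes here are illustrative, not from the post):

from keras.layers import Input, Dense
from keras.models import Model

inp_a = Input(shape=(40,))   # the input we declare on the Model
inp_b = Input(shape=(40,))   # the input the graph actually uses
out = Dense(10)(inp_b)       # out traces back to inp_b, not inp_a

# Raises "Graph disconnected": out cannot be reached from inp_a.
model = Model(inp_a, out)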

I have not found a fix for it, given that, as far as I know, none of my layers has
shape=(?, 50).
Why is there such a dimension?

I tried adding a Flatten layer after the second Embedding, but I got an error. Even with Dropout.

I also tried removing the max pooling from the L_branch model.

I tried going without the Reshape and only expanding the input dims, because I got an error from the second Conv1D saying the layer expects ndim=3 but got ndim=2:

latent = Conv1D(50, activation='relu', kernel_size=nb_classes, input_shape=(1, input_size, 1))(merged)
File "/root/anaconda3/envs/oea/lib/python3.7/site-packages/keras/engine/base_layer.py", line 414, in __call__
self.assert_input_compatibility(inputs)
File "/root/anaconda3/envs/oea/lib/python3.7/site-packages/keras/engine/base_layer.py", line 311, in assert_input_compatibility
str(K.ndim(x)))
ValueError: Input 0 is incompatible with layer conv1d_2: expected ndim=3, found ndim=2
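A side note on that traceback: Conv1D requires a 3-D input of shape (batch, steps, channels), while Flatten and Dense produce 2-D tensors, so a Reshape is needed before feeding a second Conv1D. A minimal sketch with illustrative sizes:

from keras.models import Sequential
from keras.layers import Dense, Reshape, Conv1D

model = Sequential()
model.add(Dense(50, activation='relu', input_shape=(100,)))  # output (None, 50): ndim=2
model.add(Reshape((50, 1)))                                  # back to ndim=3: (None, 50, 1)
model.add(Conv1D(32, kernel_size=5, activation='relu'))      # Conv1D now accepts the input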

Also, I don't get which inputs are used in the first model and which in the second?

You have to define your new model like this:

final_model = Model([L_branch.input, R_branch.input], out)

Here is an example with a network structure similar to yours:

import numpy as np
from keras.models import Sequential, Model
from keras.layers import (Embedding, Conv1D, MaxPooling1D,
                          Flatten, Dense, Concatenate)

vocab_size1 = 5000
vocab_size2 = 50000
input_size1 = 40
input_size2 = 40
max_words = 50  # not used below
emb_size = 15
nb_classes = 10


# first model
L_branch = Sequential()
L_branch.add(Embedding(vocab_size1, emb_size, input_length=input_size1, trainable=True))
L_branch.add(Conv1D(50, activation='relu', kernel_size=10))
L_branch.add(MaxPooling1D())
L_branch.add(Flatten())
L_branch.add(Dense(emb_size, activation='relu'))

# second model
R_branch = Sequential()
R_branch.add(Embedding(vocab_size2, emb_size, input_length=input_size2, trainable=True))
R_branch.add(Flatten())
R_branch.add(Dense(emb_size, activation='relu'))

# merge the two branch outputs and add the classification head
merged = Concatenate()([L_branch.output, R_branch.output])
latent = Dense(50, activation='relu')(merged)
out = Dense(nb_classes, activation='softmax')(latent)

final_model = Model([L_branch.input, R_branch.input], out)
final_model.compile(
            loss='sparse_categorical_crossentropy',
            optimizer='adam',
            metrics=['accuracy'])
final_model.summary()


# dummy data: random token ids and random labels
X1 = np.random.randint(0, vocab_size1, (100, input_size1))
X2 = np.random.randint(0, vocab_size2, (100, input_size2))
y = np.random.randint(0, nb_classes, 100)

final_model.fit([X1, X2], y, epochs=10)
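As a side note, the same two-branch network can be written with the functional API throughout, which makes the input wiring explicit and is an easy way to avoid graph-disconnected mistakes. A sketch reusing the sizes defined above:

from keras.layers import Input, Embedding, Conv1D, MaxPooling1D, Flatten, Dense, Concatenate
from keras.models import Model

in_L = Input(shape=(input_size1,))
x_L = Embedding(vocab_size1, emb_size)(in_L)
x_L = Conv1D(50, kernel_size=10, activation='relu')(x_L)
x_L = MaxPooling1D()(x_L)
x_L = Flatten()(x_L)
x_L = Dense(emb_size, activation='relu')(x_L)

in_R = Input(shape=(input_size2,))
x_R = Embedding(vocab_size2, emb_size)(in_R)
x_R = Flatten()(x_R)
x_R = Dense(emb_size, activation='relu')(x_R)

merged = Concatenate()([x_L, x_R])
latent = Dense(50, activation='relu')(merged)
out = Dense(nb_classes, activation='softmax')(latent)

# both inputs are explicit Input tensors, so the graph is fully connected
functional_model = Model([in_L, in_R], out)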
