Sockets retrieving the same prediction result for every video frame
I have the following code, which sends intermediate prediction results from a client to a server. It first runs the prediction through the first 3 layers on the client, then continues from layer 3 onward on the server side, where the prediction result is printed. I run this on a video file.

Client.py
import cv2
import zmq
from keras.applications.vgg19 import VGG19, preprocess_input
from keras.preprocessing.image import img_to_array
from keras.layers import Input
from keras.models import Model

model = VGG19(weights='imagenet')

def split_model(model, index):
    layer_input_1 = Input(model.layers[0].input_shape[1:])
    x = layer_input_1
    for layer in model.layers[1:index]:
        x = layer(x)
    model1 = Model(inputs=layer_input_1, outputs=x)

    input_shape_2 = model.layers[index].get_input_shape_at(0)[1:]
    layer_input_2 = Input(shape=input_shape_2)
    x = layer_input_2
    for layer in model.layers[index:]:
        x = layer(x)
    model2 = Model(inputs=layer_input_2, outputs=x)
    return (model1, model2)
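The splitting logic above just chains the layers into two sub-models whose composition should equal the original network. The idea can be sketched framework-free with plain Python callables (the toy layers and the `split_pipeline` helper here are illustrative, not the Keras API):

```python
def apply_layers(layers, x):
    # Run an input through a sequence of callables, like chaining Keras layers.
    for layer in layers:
        x = layer(x)
    return x

def split_pipeline(layers, index):
    # Return two halves whose composition equals the full pipeline.
    part1 = lambda x: apply_layers(layers[:index], x)
    part2 = lambda x: apply_layers(layers[index:], x)
    return part1, part2

layers = [lambda x: x + 1, lambda x: x * 2, lambda x: x - 3, lambda x: x ** 2]
m1, m2 = split_pipeline(layers, 2)

full = apply_layers(layers, 5)   # run all layers at once
split = m2(m1(5))                # run the two halves back to back
assert full == split
```

If the split is correct, running `m2` on `m1`'s output must reproduce the full model's prediction, which is what the client/server pair relies on.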
m1, m2 = split_model(model, 3)

context = zmq.Context()
socket = context.socket(zmq.PUB)
socket.connect('tcp://localhost:5555')

videoFile = "D:/test.mp4"
camera = cv2.VideoCapture(videoFile)

while True:
    grabbed, frame = camera.read()
    try:
        frame = cv2.resize(frame, (224, 224)).astype("float32")
    except cv2.error:
        break
    image = img_to_array(frame)
    image = image.reshape((1, image.shape[0], image.shape[1], image.shape[2]))
    image = preprocess_input(image)
    preds = m1.predict(image)
    socket.send_pyobj(preds)

socket.close()
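`send_pyobj` pickles the object before sending and `recv_pyobj` unpickles it on the other side, so arbitrary NumPy arrays survive the trip intact. A minimal sketch of that round trip, using `pickle` directly (no live socket) and a small array standing in for the client's activations:

```python
import pickle
import numpy as np

# send_pyobj/recv_pyobj are thin wrappers around pickle: the sender
# serializes the object to bytes, the receiver deserializes them.
preds = np.arange(6, dtype="float32").reshape(1, 2, 3)  # stand-in for m1.predict(...)

wire_bytes = pickle.dumps(preds)     # what send_pyobj puts on the wire
received = pickle.loads(wire_bytes)  # what recv_pyobj hands back

assert received.shape == (1, 2, 3)
assert np.array_equal(received, preds)
```

So the server's `frame` variable is the full floating-point activation tensor from `m1`, not an encoded image.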
Server.py
import numpy as np
import zmq
from keras.applications.vgg19 import VGG19, decode_predictions
from keras.layers import Input
from keras.models import Model

model = VGG19(weights='imagenet')

def split_keras_model(model, index):
    layer_input_1 = Input(model.layers[0].input_shape[1:])
    x = layer_input_1
    for layer in model.layers[1:index]:
        x = layer(x)
    model1 = Model(inputs=layer_input_1, outputs=x)

    input_shape_2 = model.layers[index].get_input_shape_at(0)[1:]
    layer_input_2 = Input(shape=input_shape_2)
    x = layer_input_2
    for layer in model.layers[index:]:
        x = layer(x)
    model2 = Model(inputs=layer_input_2, outputs=x)
    return (model1, model2)

m1, m2 = split_keras_model(model, 3)

context = zmq.Context()
footage_socket = context.socket(zmq.SUB)
footage_socket.bind('tcp://*:5555')
footage_socket.setsockopt_string(zmq.SUBSCRIBE, np.unicode(''))

while True:
    frame = footage_socket.recv_pyobj()
    tmp = np.zeros(frame.shape)
    for i in range(0, 1):
        tmp[i, :] = tmp[i, :]
    predictions_result = m2.predict(tmp)
    label = decode_predictions(predictions_result)
    print(label)

footage_socket.close()
When running the code above I get no errors, but the output is not what I expect: I retrieve the same result for every frame. Below are the prediction results I am getting:
[[('n03788365', 'mosquito_net', 0.067498334), ('n15075141', 'toilet_tissue', 0.02530327), ('n04209239', 'shower_curtain', 0.021614889), ('n03291819', 'envelope', 0.019242924), ('n04447861', 'toilet_seat', 0.014170088)]]
[[('n03788365', 'mosquito_net', 0.067498334), ('n15075141', 'toilet_tissue', 0.02530327), ('n04209239', 'shower_curtain', 0.021614889), ('n03291819', 'envelope', 0.019242924), ('n04447861', 'toilet_seat', 0.014170088)]]
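One detail worth checking in the server loop: `tmp` is created with `np.zeros(frame.shape)` and the loop then assigns `tmp[i, :] = tmp[i, :]`, i.e. copies `tmp` onto itself, so the received `frame` is never used and `m2.predict` always sees an all-zeros tensor, which would plausibly explain the identical predictions. A small sketch of that effect:

```python
import numpy as np

# Mimic the server loop: the received activations never reach tmp,
# because tmp is assigned from itself instead of from frame.
frame = np.random.rand(1, 56, 56, 256).astype("float32") + 1.0  # stand-in for recv_pyobj()

tmp = np.zeros(frame.shape)
for i in range(0, 1):
    tmp[i, :] = tmp[i, :]   # copies zeros onto zeros; frame is unused

assert not tmp.any()                   # tmp is still all zeros
assert not np.array_equal(tmp, frame)  # the received data never reached tmp
```

Passing `frame` (or `tmp[i, :] = frame[i, :]`) to `m2.predict` instead would make the prediction depend on the incoming data.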
Any help is greatly appreciated.