
ios / CoreML - Input type is MultiArray when a Keras model is converted to CoreML

Tags: ios, keras, conv-neural-network, cvpixelbuffer, coreml

I am trying to train a Keras model and convert it to a Core ML model, using Keras 1.2.2 with the TensorFlow backend. This is for a classification task. The Core ML input shows up as MultiArray, but I need it to be an Image or something like a CVPixelBuffer. As mentioned, I tried adding image_input_names='data'. Also, my input shape is (height, width, depth), which I believe is required.

Please help me resolve this. I used the CIFAR-10 dataset and the code shown below.


I just checked this with Keras 2, and the input of your model is an Image, not a MultiArray. Perhaps it depends on which version of Keras you use for this converter.

If you need the input to be BGR, add is_bgr=True to the coremltools.converters.keras.convert() call.
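A minimal sketch of what that call could look like, assuming the is_bgr and image_scale preprocessing options of the Keras converter (image_scale here just mirrors the /255 normalization used in the training code):

import coremltools

# Sketch only: 'model' is the trained Keras model from the code in the
# question. is_bgr and image_scale are optional preprocessing arguments
# of the Keras converter; adjust them to match how the app feeds pixels.
coreml_model = coremltools.converters.keras.convert(
    model,
    input_names='data',
    image_input_names='data',   # expose the 'data' input as an image
    is_bgr=True,                # input pixels arrive in BGR channel order
    image_scale=1.0 / 255.0     # mirror the /255 scaling used at training time
)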

The problem was with my tf version and my protobuf version. I was able to solve it by installing the versions mentioned for coremltools.

I did solve the issue; my tf and protobuf versions were incorrect. Thanks for your reply.
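The exact versions are not stated in the thread, but a quick way to see what is installed (a sketch assuming the usual __version__ attributes) and compare against what coremltools expects:

# Sketch: print the installed versions so they can be compared with the
# versions coremltools was released against.
import tensorflow as tf
import google.protobuf
import coremltools

print("tensorflow :", tf.__version__)
print("protobuf   :", google.protobuf.__version__)
print("coremltools:", coremltools.__version__)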
# Code from the question: train a small CNN on CIFAR-10 with Keras,
# then convert it to a Core ML model with coremltools.
from keras.datasets import cifar10
from keras.models import Model
from keras.layers import Input, Convolution2D, MaxPooling2D, Dense, Dropout, Flatten
from keras.utils import np_utils
import numpy as np
import coremltools

np.random.seed(1234)

batch_size = 32        # samples per gradient update
num_epochs = 1         # train for a single epoch (demo run)

kernel_size = 3        # 3x3 convolution kernels
pool_size = 2          # 2x2 max pooling
conv_depth_1 = 32      # filters in the first conv block
conv_depth_2 = 64      # filters in the second conv block
drop_prob_1 = 0.25     # dropout after each conv block
drop_prob_2 = 0.5      # dropout before the output layer
hidden_size = 512      # units in the fully connected layer

(X_train, y_train), (X_test, y_test) = cifar10.load_data()
num_train, height, width, depth = X_train.shape
num_test = X_test.shape[0]
num_classes = 10

# Cast to float and scale pixel values into [0, 1]
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train /= np.max(X_train)
X_test /= np.max(X_test)

y_train = np_utils.to_categorical(y_train, num_classes)
y_test = np_utils.to_categorical(y_test, num_classes)

data = Input(shape=(height, width, depth))
conv_1 = Convolution2D(conv_depth_1, (kernel_size, kernel_size), padding='same', activation='relu')(data)
conv_2 = Convolution2D(conv_depth_1, (kernel_size, kernel_size), padding='same', activation='relu')(conv_1)
pool_1 = MaxPooling2D(pool_size=(pool_size, pool_size))(conv_2)
drop_1 = Dropout(drop_prob_1)(pool_1)

conv_3 = Convolution2D(conv_depth_2, (kernel_size, kernel_size), padding='same', activation='relu')(drop_1)
conv_4 = Convolution2D(conv_depth_2, (kernel_size, kernel_size), padding='same', activation='relu')(conv_3)
pool_2 = MaxPooling2D(pool_size=(pool_size, pool_size))(conv_4)
drop_2 = Dropout(drop_prob_1)(pool_2)

flat = Flatten()(drop_2)
hidden = Dense(hidden_size, activation='relu')(flat)
drop_3 = Dropout(drop_prob_2)(hidden)
out = Dense(num_classes, activation='softmax')(drop_3)

model = Model(inputs=data, outputs=out) 

model.compile(loss='categorical_crossentropy', 
              optimizer='adam', 
              metrics=['accuracy']) 

model.fit(X_train, y_train,                
          batch_size=batch_size, epochs=num_epochs,
          verbose=1, validation_split=0.1) 
loss, accuracy = model.evaluate(X_test, y_test, verbose=1)
print ("\nTest Loss: {loss} and Test Accuracy: {acc}\n".format(loss = loss, acc = accuracy))
# Convert to Core ML, naming the input 'data' and marking it as an image input
coreml_model = coremltools.converters.keras.convert(model, input_names='data', image_input_names='data')
coreml_model.save('my_model.mlmodel')
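To confirm whether the converted model ended up with an image input or a MultiArray input, the generated spec can be inspected; a short sketch using coremltools' get_spec():

# Sketch: list the model's inputs and their feature types. With
# image_input_names set, 'data' should report 'imageType' rather
# than 'multiArrayType'.
spec = coreml_model.get_spec()
for inp in spec.description.input:
    print(inp.name, inp.type.WhichOneof('Type'))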