
Python: How do I set up a layer that takes a grayscale image and outputs ARGB, with one of the grayscale colors made transparent?

Tags: python, machine-learning, image-segmentation, coreml, mlmodel

I started with the DeepLabV3+ mlmodel, which outputs a 2D MultiArray (the segmentation). I have successfully added a layer that takes this as input and outputs a grayscale image.

Now I want to take this grayscale image as input and output ARGB, with one of the colors made transparent.

How do I set up such a layer?

My Python code:

import coremltools
import coremltools.proto.FeatureTypes_pb2 as ft

coreml_model = coremltools.models.MLModel('DeepLabKP.mlmodel')
spec = coreml_model.get_spec()
spec_layers = getattr(spec,spec.WhichOneof("Type")).layers


# find the current output layer and save it for later reference
last_layer = spec_layers[-1]
 
# add the post-processing layer
new_layer = spec_layers.add()
new_layer.name = 'image_gray_to_RGB'
 
# Configure it as an activation layer
new_layer.activation.linear.alpha = 255
new_layer.activation.linear.beta = 0
 
# Use the original model's output as input to this layer
new_layer.input.append(last_layer.output[0])
 
# Name the output for later reference when saving the model
new_layer.output.append('image_gray_to_RGB')
 
# Find the original model's output description
output_description = next(x for x in spec.description.output if x.name==last_layer.output[0])
 
# Update it to use the new layer as output
output_description.name = new_layer.name


# Function to mark the layer as output
# https://forums.developer.apple.com/thread/81571#241998
def convert_grayscale_image_to_RGB(spec, feature_name, is_bgr=False): 
    """ 
    Convert an output multiarray to be represented as an image 
    This will modify the Model_pb spec passed in. 
    Example: 
        model = coremltools.models.MLModel('MyNeuralNetwork.mlmodel') 
        spec = model.get_spec() 
        convert_multiarray_output_to_image(spec,'imageOutput',is_bgr=False) 
        newModel = coremltools.models.MLModel(spec) 
        newModel.save('MyNeuralNetworkWithImageOutput.mlmodel') 
    Parameters 
    ---------- 
    spec: Model_pb 
        The specification containing the output feature to convert 
    feature_name: str 
        The name of the multiarray output feature you want to convert 
    is_bgr: boolean 
        If multiarray has 3 channels, set to True for RGB pixel order or false for BGR 
    """
    for output in spec.description.output: 
        if output.name != feature_name: 
            continue
        if output.type.WhichOneof('Type') != 'imageType': 
            raise ValueError("%s is not a image type" % output.name)
        output.type.imageType.colorSpace = ft.ImageFeatureType.ColorSpace.Value('RGB')
 
# Mark the new layer as image
convert_grayscale_image_to_RGB(spec, output_description.name, is_bgr=False)

updated_model = coremltools.models.MLModel(spec)
 
updated_model.author = 'Saran'
updated_model.license = 'MIT'
updated_model.short_description = 'Inherits DeepLab V3+ and adds a layer to turn scores into an image'
updated_model.input_description['image'] = 'Input Image'
updated_model.output_description[output_description.name] = 'RGB Image'
 
model_file_name = 'DeepLabKP-G2R.mlmodel'
updated_model.save(model_file_name)
The model saves successfully without any errors, but prediction fails with the following error:

result = model.predict({'image': img})
  File "/Users/saran/Library/Python/2.7/lib/python/site-packages/coremltools/models/model.py", line 336, in predict
    return self.__proxy__.predict(data, useCPUOnly)
RuntimeError: {
    NSLocalizedDescription = "Failed to convert output image_gray_to_RGB to image";
    NSUnderlyingError = "Error Domain=com.apple.CoreML Code=0 \"Invalid array shape (\n    1,\n    513,\n    513\n) for converting to gray image\" UserInfo={NSLocalizedDescription=Invalid array shape (\n    1,\n    513,\n    513\n) for converting to gray image}";
}
I feel this has something to do with how the activation is set up for this layer, but I haven't been able to find anything different to try.

Any help is greatly appreciated.

Here is the grayscale image generated by the layer I added:


It looks like your output has the shape (1, 513, 513). The first number, 1, is the number of channels. Because it is 1, Core ML can only convert this output into a grayscale image. A color image needs 3 channels, i.e. a shape of (3, 513, 513).
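
For illustration, here is a minimal spec-level sketch of one way to get a 3-channel output: concatenate the single-channel blob with itself along the channel axis. It starts from the 'DeepLabKP-G2R.mlmodel' file saved by the code in the question; the layer and blob names added here are assumptions, the result is still a gray-looking RGB image rather than real colors, and since Core ML image outputs have no alpha channel, any transparency still has to be added outside the model.

import coremltools
import coremltools.proto.FeatureTypes_pb2 as ft

# Load the model saved by the question's code and get at its layers.
model = coremltools.models.MLModel('DeepLabKP-G2R.mlmodel')
spec = model.get_spec()
nn = getattr(spec, spec.WhichOneof("Type"))

# Stack the single-channel blob three times along the channel axis so the
# output shape becomes (3, 513, 513) instead of (1, 513, 513).
concat_layer = nn.layers.add()
concat_layer.name = 'gray_to_rgb_concat'
concat_layer.input.extend(['image_gray_to_RGB'] * 3)  # blob from the question's last layer
concat_layer.output.append('image_rgb')
concat_layer.concat.sequenceConcat = False            # default channel-axis concat

# Re-point the model's output at the new 3-channel blob and mark it as RGB.
output_description = next(x for x in spec.description.output
                          if x.name == 'image_gray_to_RGB')
output_description.name = 'image_rgb'
output_description.type.imageType.colorSpace = ft.ImageFeatureType.ColorSpace.Value('RGB')
output_description.type.imageType.width = 513
output_description.type.imageType.height = 513

coremltools.models.MLModel(spec).save('DeepLabKP-3ch.mlmodel')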

Since this is DeepLab, I assume your grayscale image does not really contain "colors" but class indices (in other words, you have already put an argmax on top of the predictions). In my opinion, the easiest way to turn this grayscale "image" (really a segmentation mask) into a color image is in Swift or Metal.
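
If you want to prototype that mapping in Python first, the same idea looks roughly like the sketch below, using numpy and Pillow; the palette and the choice of class 0 as the transparent class are only illustrative assumptions.

import numpy as np
from PIL import Image

def mask_to_rgba(mask, transparent_class=0):
    # 'mask' is an (H, W) array of class indices, e.g. the (513, 513)
    # segmentation that DeepLab produces after the argmax.
    palette = np.zeros((256, 4), dtype=np.uint8)
    palette[:, 3] = 255                        # every class opaque by default
    palette[1] = [255, 0, 0, 255]              # illustrative: class 1 -> red
    palette[2] = [0, 255, 0, 255]              # illustrative: class 2 -> green
    palette[transparent_class] = [0, 0, 0, 0]  # chosen class -> fully transparent
    rgba = palette[mask.astype(np.uint8)]      # (H, W, 4) per-pixel lookup
    return Image.fromarray(rgba, 'RGBA')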


Here is a source code example:

Attached is the grayscale image output that I get from the layer I added earlier in the question above. I'm fine with not recovering the original colors. As long as I can get the segmented parts transparent in a 3-channel output, even with some default values, I'd be happy. I'll take that and composite it as a mask with the original image. Thanks for the link, I'll check it out as well.
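
For what it's worth, that compositing step can be sketched with Pillow along these lines; the file names, the 513x513 size, and the convention that the background class is pixel value 0 are assumptions.

from PIL import Image

# Assumptions: 'photo.jpg' is the original image resized to the model's
# 513x513 input size, and 'mask.png' is the grayscale mask the model outputs,
# with the background class stored as pixel value 0.
original = Image.open('photo.jpg').convert('RGBA').resize((513, 513))
gray = Image.open('mask.png').convert('L')

# Turn the mask into an alpha channel: background becomes transparent,
# everything else opaque (flip the comparison for the opposite effect).
alpha = gray.point(lambda p: 255 if p > 0 else 0)

cutout = original.copy()
cutout.putalpha(alpha)
cutout.save('cutout.png')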