Image processing: Converting an ARGB8888 image to YUV420SP in Android produces a green image

Hi, I am trying to convert an ARGB8888 image to YUV420SP in Android, and the compressed image I get is completely green. Please help me with the code and tell me whether I am using the right approach. The code is shown below.

    Image(Context context) {
        // Initialize the screen height and width and allocate the capture buffers
        screenHeight = 800; // m1.heightPixels;
        screenWidth = 480;  // m1.widthPixels;
        bufferSize = 4 * screenHeight * screenWidth;
        buffer = new byte[bufferSize];
        newarrs = new byte[bufferSize];
        log("constructor width:- " + screenWidth + " height:- " + screenHeight);
    }

    public void capture() {
        // Read the raw frame buffer into 'buffer', then encode, resize, and save it
        log("capture Screen");

        BufferedInputStream bis = null;
        try {
            bis = new BufferedInputStream(new FileInputStream("/data/fb0.raw"));
            readSize = bis.read(buffer, 0, bufferSize);
            bis.close();
        } catch (Exception e) {
            e.printStackTrace();
        }

        encodeYUV420(buffer);
        byte[] arr = resize1(buffer);
        FileOutputStream fos;
        try {
            File f = Files.getImageFile();
            fos = new FileOutputStream(f);
            fos.write(arr);
            fos.close();
        } catch (Exception e) {
        }
    }

    private byte[] resize1(byte[] buffer) {
        final int RATIO = 4;
        // Unpack the flat 4-bytes-per-pixel buffer into a width x height x 4 array
        byte[][][] newBuff = new byte[screenWidth][screenHeight][4];
        int pos1 = 0;
        for (int i = 0; i < screenWidth; i++) {
            for (int j = 0; j < screenHeight; j++) {
                newBuff[i][j][0] = buffer[pos1++];
                newBuff[i][j][1] = buffer[pos1++];
                newBuff[i][j][2] = buffer[pos1++];
                newBuff[i][j][3] = buffer[pos1++];
            }
        }

        // Keep every RATIO-th pixel in both directions
        byte[] buffer1 = new byte[buffer.length * 3 / (RATIO * RATIO)];
        int pos2 = 0;
        int i = 0, j = 0;
        for (i = 0; i < screenWidth; i++) {
            for (j = 0; j < screenHeight; j++) {
                try {
                    if (i % RATIO == 0 && j % RATIO == 0) {
                        buffer1[pos2++] = newBuff[i][j][0];
                        buffer1[pos2++] = newBuff[i][j][1];
                        buffer1[pos2++] = newBuff[i][j][2];
                        buffer1[pos2++] = newBuff[i][j][3];
                    }
                } catch (Exception e) {
                    log(" i " + i + " j " + j);
                }
            }
        }

        log(" values of i " + i + " j " + j);
        if (pos2 == buffer.length / (RATIO * RATIO))
            log("S size:- " + pos2);
        else
            log("F size:- " + pos2);

        return buffer1;
    }

    private byte[] encodeYUV420(byte[] argb) {
        byte[] yuv420sp = new byte[(screenHeight * screenWidth * 3) / 2];
        final int frameSize = screenWidth * screenHeight;
        int yIndex = 0;
        int uIndex = frameSize;
        int vIndex = frameSize + (frameSize / 4);
        int R, G, B;
        int Y, U, V;
        int index = 0;

        for (int j = 0; j < screenHeight; j++) {
            for (int i = 0; i < screenWidth; i++) {
                int pp = (j * screenWidth + i) * 4; // computed but not used below
                // a = (argb[index] & 0xff000000) >> 24; // alpha is not used
                R = (argb[index] & 0xff0000) >> 16;
                G = (argb[index] & 0xff00) >> 8;
                B = (argb[index] & 0xff) >> 0;
                Y = ((66 * R + 129 * G + 25 * B + 128) >> 8) + 16;
                U = ((-38 * R - 74 * G + 112 * B + 128) >> 8) + 128;
                V = ((112 * R - 94 * G - 18 * B + 128) >> 8) + 128;
                yuv420sp[yIndex++] = (byte) ((Y < 0) ? 0 : ((Y > 255) ? 255 : Y));
                if (j % 2 == 0 && i % 2 == 0) {
                    yuv420sp[uIndex++] = (byte) ((U < 0) ? 0 : ((U > 255) ? 255 : U));
                    yuv420sp[vIndex++] = (byte) ((V < 0) ? 0 : ((V > 255) ? 255 : V));
                }
            }
        }
        return yuv420sp;
    }
Update: a screenshot illustrating the problem was attached to the original post.

I think I have made some changes, and the output now looks like the original image, but it is not clear or legible. Could I get some ideas on how to make it look almost like the original image?

(The Image(Context) constructor, capture(), and resize1() are unchanged from the code above; only encodeYUV420 was modified:)

    private byte[] encodeYUV420(byte[] argb) {
        byte[] yuv420sp = new byte[(screenHeight * screenWidth * 3) / 2];
        final int frameSize = screenWidth * screenHeight;
        int yIndex = 0;
        int uvIndex = frameSize; // U and V are now written interleaved after the Y plane
        int a, R, G, B, Y, U, V;

        for (int j = 0; j < screenHeight; j++) {
            for (int i = 0; i < screenWidth; i++) {
                int pp = (j * screenWidth + i) * 4;
                R = argb[pp + 0];
                G = argb[pp + 1];
                B = argb[pp + 2];
                a = argb[pp + 3];
                Y = ((66 * R + 129 * G + 25 * B + 128) >> 8) + 16;
                U = ((-38 * R - 74 * G + 112 * B + 128) >> 8) + 128;
                V = ((112 * R - 94 * G - 18 * B + 128) >> 8) + 128;

                yuv420sp[yIndex++] = (byte) ((Y < 0) ? 0 : ((Y > 255) ? 255 : Y));
                if (j % 2 == 0 && i % 2 == 0) {
                    yuv420sp[uvIndex++] = (byte) ((U < 0) ? 0 : ((U > 255) ? 255 : U));
                    yuv420sp[uvIndex++] = (byte) ((V < 0) ? 0 : ((V > 255) ? 255 : V));
                }
            }
        }
        return yuv420sp;
    }

You are not storing the YUV data correctly. YUV420SP data is stored in two planes: one containing the Y data, and one containing the interleaved U and V data:

|  Y_0  |  Y_1  |  Y_2  |  Y_3  |  Y_4  | ... | Y_w-2 | Y_w-1 |   /* h rows */
|  Y_w  | Y_w+1 | Y_w+2 | ...
    :
    :
|  U_0  |  V_0  |  U_2  |  V_2  |  U_4  | ... | U_w-2 | V_w-2 |   /* h/2 rows */
|  U_2w |  V_2w | U_2w+2| ...
    :
    :
Your code appears to store the U and V data in separate planes:

int uIndex = frameSize;
int vIndex = frameSize + (frameSize / 4);
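
Building on that, here is a minimal sketch of what the interleaved storage could look like. It is not the poster's exact code: it assumes the U-before-V ordering shown in the diagram above, takes width and height as parameters, keeps the same RGB-to-YUV coefficients, and masks each byte with 0xff to undo Java's sign extension. The key point is a single uvIndex that starts at frameSize and advances by two for every 2x2 block.

    // Sketch only: Y plane followed by interleaved U/V pairs, one pair per 2x2 block.
    // Assumes 'argb' holds 4 bytes per pixel (R, G, B, A) as read from the frame buffer.
    private byte[] encodeYUV420SP(byte[] argb, int width, int height) {
        final int frameSize = width * height;
        byte[] yuv = new byte[frameSize * 3 / 2];
        int yIndex = 0;
        int uvIndex = frameSize; // single index for the interleaved chroma plane

        for (int j = 0; j < height; j++) {
            for (int i = 0; i < width; i++) {
                int p = (j * width + i) * 4;
                int r = argb[p]     & 0xff; // mask to get an unsigned value from the signed byte
                int g = argb[p + 1] & 0xff;
                int b = argb[p + 2] & 0xff;

                int y = (( 66 * r + 129 * g +  25 * b + 128) >> 8) + 16;
                int u = ((-38 * r -  74 * g + 112 * b + 128) >> 8) + 128;
                int v = ((112 * r -  94 * g -  18 * b + 128) >> 8) + 128;

                yuv[yIndex++] = (byte) Math.max(0, Math.min(255, y));
                if (j % 2 == 0 && i % 2 == 0) {
                    // Both chroma samples go side by side in the second plane.
                    yuv[uvIndex++] = (byte) Math.max(0, Math.min(255, u)); // U first, as in the diagram
                    yuv[uvIndex++] = (byte) Math.max(0, Math.min(255, v));
                }
            }
        }
        return yuv;
    }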

Can you indent your code?

If parts of my sample code are wrong, can I use some native API instead, or what should I use?

Use a single pointer for storing the U and V values (uvIndex = frameSize;) and then store the U and V values alternately in that plane: if (j % 2 == 0) yuv420sp[uvIndex++] = (i % 2 == 0) ? U : V; where U and V are the values calculated for the same value of i, i.e. U_0, V_0, U_2, V_2, and so on. By the way, you are throwing away three quarters of the U and V values this code calculates; you could make it run faster by only computing them when you actually need them.

But it is still green and not even close to any conversion. Please help in any way possible.

Upload some samples of the input and output images; that might help.

If we want to view the YUV-format image in the byte array, should it be copied the same way or reversed?
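
On the last comment (how to look at the YUV data in the byte array): one hedged option on Android, not part of the original posts, is to wrap the buffer in android.graphics.YuvImage and compress it to a JPEG that any image viewer can open. Note that YuvImage interprets the chroma plane as NV21 (V before U in each pair), so a buffer written U-first will display with swapped colours; this is only a quick sanity check of the layout, not a proof of correct ordering.

    // Sketch: dump a YUV420SP buffer to a JPEG so it can be inspected with a normal image viewer.
    // Assumes 'yuv' is a full Y plane followed by interleaved chroma pairs.
    import android.graphics.ImageFormat;
    import android.graphics.Rect;
    import android.graphics.YuvImage;
    import java.io.FileOutputStream;

    void dumpYuvAsJpeg(byte[] yuv, int width, int height, String path) throws Exception {
        // YuvImage treats the data as NV21 (V then U); swapped colours in the JPEG
        // simply mean the buffer was written in the other (U-first) order.
        YuvImage image = new YuvImage(yuv, ImageFormat.NV21, width, height, null);
        FileOutputStream out = new FileOutputStream(path);
        image.compressToJpeg(new Rect(0, 0, width, height), 90, out);
        out.close();
    }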