
Android: cropping an image from a YV12 or NV12 byte array


I implemented Camera.PreviewCallback, in which I receive the raw image (in YV12 or NV12 format) as a byte array. I am looking for a way to crop a part of the image without converting it to a Bitmap. The cropped part of the image will then be streamed somewhere else (again as a byte array).

Thanks for any help.

public class CameraAccess implements Camera.PreviewCallback, LoaderCallbackInterface {

private byte[] lastFrame;

@Override
public void onPreviewFrame(byte[] frame, Camera arg1) {
    synchronized(this) {
       this.lastFrame = frame;

    }
}

public byte[] cropFrame(Integer x, Integer y, Integer width, Integer height) {
    synchronized(this) {
       // how to crop directly from byte array?
       return null; // placeholder
    }
}

}

The image as a byte array is just every pixel of the image in one huge array. It starts at the top-left pixel, moves to the right, and then wraps down to the next row (back at the left).

So to crop it, you just copy the wanted pixels into a new byte array using a couple of for loops:

Rect cropArea = ...; // the area to crop
int currentPos = 0;
byte[] croppedOutput = new byte[cropArea.width() * cropArea.height()];
for (int y = 0; y < height; y++) {
  for (int x = 0; x < width; x++) {
    // check whether x and y fall inside the crop area you want
    if (cropArea.contains(x, y)) {
      croppedOutput[currentPos++] = frame[positionInArrayForXY(x, y)];
    }
  }
}
For the positionInArrayForXY method you have to do a little extra math: in a row-major layout the index is y * width + x rather than a simple x * y product, and the zero-based indices have to be handled correctly.
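That index helper can be sketched like this; the method name comes from the snippet above, and the extra `width` parameter is my assumption, since the frame width has to be in scope one way or another:

```java
// Hypothetical helper matching the snippet above: maps an (x, y) pixel
// position to its index in a row-major, 1-byte-per-pixel array.
int positionInArrayForXY(int x, int y, int width) {
    return y * width + x; // all full rows above, plus the column offset
}
```

In the loop above, `width` would be the frame width reported by the camera parameters, so the call site would either pass it along or read it from a field.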

Note: I believe the frame is 1 byte per pixel, but I'm not sure; if it is 2 bytes per pixel there is some extra math, but the idea is the same and you can build on it.

Edit:

Regarding your comment: no, this format has no header, just pixels. That's why the camera always gives you this info, so you can know the size.

Of course, that doesn't change my answer; when I answered I expected YUV to follow the same array ordering as RGB.

I did some additional research. You can see the method that performs the YUV-to-RGB conversion in the linked code, and if you check closely you'll notice it uses 12 bits per pixel, i.e. 1.5 bytes => 921600 * 1.5 = 1382400.
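The arithmetic for a 1280x720 preview checks out; here is a minimal sketch, assuming the standard YUV420 packing used by YV12 and NV12 (a full-resolution Y plane plus chroma subsampled 2x2):

```java
// YUV420 formats such as YV12 and NV12 store 12 bits = 1.5 bytes per pixel:
// one luma byte per pixel, plus half a byte of chroma on average.
int width = 1280, height = 720;
int pixels = width * height;       // 921600 luma bytes (one per pixel)
int bufferSize = pixels * 3 / 2;   // 1382400 bytes for the whole frame
```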

Based on that, I can think of a few approaches:

  • (Easiest to implement) Convert your frame to RGB (I know you specified you didn't want to, but it is easier this way), then crop it following my answer, then stream it.
  • (Biggest overhead, not easy at all) If the receiving end of the stream must get YUV, do the above, but before streaming convert it back to YUV by inverting the linked method.
  • (Quite hard to implement, but solves your original question) Based on my sample code, the code at the link I posted, and the fact that each pixel takes 12 bits, develop the crop with two for loops.
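The third approach can be sketched as follows. This is my own illustrative take, not code from the original answer; it assumes an NV12 layout (full-resolution Y plane followed by an interleaved UV plane) and a crop rectangle whose x, y, width, and height are all even, so the crop stays aligned to the 2x2 chroma grid:

```java
// Crop an NV12 frame directly in the byte array, without RGB conversion.
// Assumes cx, cy, cw, ch are all even (aligned to the 2x2 chroma grid).
byte[] cropNv12(byte[] frame, int frameW, int frameH,
                int cx, int cy, int cw, int ch) {
    byte[] out = new byte[cw * ch * 3 / 2];
    // Copy the Y plane row by row.
    for (int row = 0; row < ch; row++) {
        System.arraycopy(frame, (cy + row) * frameW + cx,
                         out, row * cw, cw);
    }
    // Copy the interleaved UV plane: one UV row covers two Y rows,
    // and each U/V pair covers two columns, so offsets stay in bytes.
    int uvSrcBase = frameW * frameH;
    int uvDstBase = cw * ch;
    for (int row = 0; row < ch / 2; row++) {
        System.arraycopy(frame, uvSrcBase + (cy / 2 + row) * frameW + cx,
                         out, uvDstBase + row * cw, cw);
    }
    return out;
}
```

For YV12 the idea is the same, but the chroma planes are separate (V first, then U, each at quarter resolution), so the second loop would become two half-width copies per plane.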

    • I was asked about my final solution and some source code. This is what I did:

      Scenario: my project is based on a system-on-chip running Android. I implemented camera handling for a local camera connected to the board via USB; this camera works just like the one on an Android smartphone. A second, IP-based camera streams its images over the network. The software design may therefore look a bit confusing; feel free to ask.

      Solution: since OpenCV processing, camera initialization, and color/bitmap conversion are tricky, I ended up encapsulating everything into two classes, which keeps repetitive boilerplate out of the rest of my Android code.

      The first class handles color/bitmap and OpenCV matrix conversions. It is defined as:

      import org.opencv.android.CameraBridgeViewBase.CvCameraViewFrame;
      import org.opencv.core.Mat;    
      import android.graphics.Bitmap;
      
      public interface CameraFrame extends CvCameraViewFrame {
          Bitmap toBitmap();
      
          @Override
          Mat rgba();
      
          @Override
          Mat gray();
      }
      
      All color and bitmap conversions happen inside the implementation of this interface. The actual conversion is done by the utilities that ship with OpenCV for Android. You'll notice I use only a single Bitmap: this saves resources, because bitmap conversion is CPU-intensive. All UI components display/render this one bitmap, and the conversion only runs when a component actually requests the bitmap.

      private class CameraAccessFrame implements CameraFrame {
          private Mat mYuvFrameData;
          private Mat mRgba;
          private int mWidth;
          private int mHeight;
          private Bitmap mCachedBitmap;
          private boolean mRgbaConverted;
          private boolean mBitmapConverted;
      
          @Override
          public Mat gray() {
              return mYuvFrameData.submat(0, mHeight, 0, mWidth);
          }
      
          @Override
          public Mat rgba() {
              if (!mRgbaConverted) {
                  // note: despite the method name, COLOR_YUV2BGR_NV12 yields
                  // BGR channel order; Imgproc.COLOR_YUV2RGBA_NV12 would give RGBA
                  Imgproc.cvtColor(mYuvFrameData, mRgba,
                          Imgproc.COLOR_YUV2BGR_NV12, 4);
                  mRgbaConverted = true;
              }
              return mRgba;
          }
      
          // @Override
          // public Mat yuv() {
          // return mYuvFrameData;
          // }
      
          @Override
          public synchronized Bitmap toBitmap() {
              if (mBitmapConverted)
                  return mCachedBitmap;
      
              Mat rgba = this.rgba();
              Utils.matToBitmap(rgba, mCachedBitmap);
      
              mBitmapConverted = true;
              return mCachedBitmap;
          }
      
          public CameraAccessFrame(Mat Yuv420sp, int width, int height) {
              super();
              mWidth = width;
              mHeight = height;
              mYuvFrameData = Yuv420sp;
              mRgba = new Mat();
      
              this.mCachedBitmap = Bitmap.createBitmap(width, height,
                      Bitmap.Config.ARGB_8888);
          }
      
          public synchronized void put(byte[] frame) {
              mYuvFrameData.put(0, 0, frame);
              invalidate();
          }
      
          public void release() {
              mRgba.release();
              mCachedBitmap.recycle();
          }
      
          public void invalidate() {
              mRgbaConverted = false;
              mBitmapConverted = false;
          }
      };
      
      Camera handling is encapsulated in two special classes, explained below. One (HardwareCamera implements ICamera) handles camera initialization and shutdown, while the second (CameraAccess) handles OpenCV initialization and notifies other components (such as CameraCanvasView extends CanvasView implements CameraFrameCallback) that want to receive camera images and display them in an Android view (UI). Such components must be connected (registered) to that class.

      The callback (implemented by any UI component) is defined as follows:

      public interface CameraFrameCallback {
          void onCameraInitialized(int frameWidth, int frameHeight);
      
          void onFrameReceived(CameraFrame frame);
      
          void onCameraReleased();
      }
      
      This interface is implemented by the following UI component:

      import android.content.Context;
      import android.util.AttributeSet;
      import android.view.SurfaceHolder;
      import CameraFrameCallback;
      
      public class CameraCanvasView extends CanvasView implements CameraFrameCallback {
      
          private CameraAccess mCamera;
          private int cameraWidth = -1;
          private int cameraHeight = -1;
          private boolean automaticReceive;
          private boolean acceptNextFrame;
      
          public CameraCanvasView(Context context, AttributeSet attributeSet) {
              super(context, attributeSet);
          }
      
          public CameraAccess getCamera() {
              return mCamera;
          }
      
          public boolean getAcceptNextFrame() {
              return acceptNextFrame;
          }
      
          public void setAcceptNextFrame(boolean value) {
              this.acceptNextFrame = value;
          }
      
          public void setCamera(CameraAccess camera, boolean automaticReceive) {
              if (camera == null)
                  throw new NullPointerException("camera");
      
              this.mCamera = camera;
              this.mCamera.setAutomaticReceive(automaticReceive);
              this.automaticReceive = automaticReceive;
          }
      
          @Override
          public void onCameraInitialized(int frameWidth, int frameHeight) {
              cameraWidth = frameWidth;
              cameraHeight = frameHeight;
      
              setCameraBounds();
          }
      
          public void setCameraBounds() {
      
              int width = 0;
              int height = 0;
              if (fixedWidth > 0 && fixedHeight > 0) {
                  width = fixedWidth;
                  height = fixedHeight;
          } else if (cameraWidth > 0 && cameraHeight > 0) {
              width = cameraWidth;
              height = cameraHeight;
              }
      
              if (width > 0 && height > 0)
                  super.setCameraBounds(width, height, true);
          }
      
          @Override
          public void onFrameReceived(CameraFrame frame) {
              if (acceptNextFrame || automaticReceive)
                  super.setBackground(frame);
      
              // reset
              acceptNextFrame = false;
          }
      
          @Override
          public void onCameraReleased() {
      
              setBackgroundImage(null);
          }
      
          @Override
          public void surfaceCreated(SurfaceHolder arg0) {
              super.surfaceCreated(arg0);
      
              if (mCamera != null) {
                  mCamera.addCallback(this);
      
                  if (!automaticReceive)
                      mCamera.receive(); // we want to get the initial frame
              }
          }
      
          @Override
          public void surfaceDestroyed(SurfaceHolder arg0) {
              super.surfaceDestroyed(arg0);
      
              if (mCamera != null)
                  mCamera.removeCallback(this);
          }
      }
      
      <?xml version="1.0" encoding="utf-8"?>
      <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
          android:layout_width="match_parent"
          android:layout_height="match_parent"
          android:orientation="vertical" >
      
          <eu.level12.graphics.laser.CameraCanvasView
              android:id="@+id/my_camera_view"
              android:layout_width="match_parent"
              android:layout_height="match_parent"
              />
      
      </LinearLayout>
      
      The XML layout above shows how this UI component is used.

      

      Remark: I know this is not a complete implementation, but I hope you get the idea. The most interesting part is the color conversion, which can be found at the top of this post.

      Comments: But usually a header should contain some information about the image format, shouldn't it? Anyway, my preview is 1280x720, which would give an array length of 921600 bytes, yet the preview frame is 1382399 bytes long. That doesn't fit your solution, does it? — Thanks for the extra research. Good news; I will try to crop directly from the YUV byte array, because I want to avoid the RGB conversion on the sending side: it costs too much CPU. — It works, but I wonder about its performance, since it visits every pixel of the original image; if the start and end bounds of the for loops were computed correctly, they could iterate over the crop area only. — I completely agree. If you already know how to map an XY position into the YUV array, the next optimization step is to iterate only over that region. It complicates the code a bit, but certainly.
      @Override
      protected void onResume() {
          super.onResume();
      
          if (fourPointView != null) {
              cameraAccess = CameraAccess.getInstance(this);
              canvasView.setCamera(cameraAccess, true);
          } else {
              cameraAccess = null;
          }
      
          if (cameraAccess != null)
              cameraAccess.setAutomaticReceive(true);
      
          if (cameraAccess != null && fourPointView != null)
              cameraAccess.setRegionOfInterest(RectTools.toRect(canvasView
                      .getCamera().getViewport()));
      }
      
      @Override
      protected void onPause() {
          super.onPause();
      
          if (cameraAccess != null)
              cameraAccess.setRegionOfInterest(null);
      }