
IplImage cropping and rotation - Android


I am using ffmpeg to capture 30 seconds of video:

@Override
public void onPreviewFrame(byte[] data, Camera camera) {
    if (yuvIplimage != null && recording && rec) {
        new SaveFrame().execute(data);
    }
}
Below is the SaveFrame class:

private class SaveFrame extends AsyncTask<byte[], Void, File> {
    long t;

    @Override
    protected File doInBackground(byte[]... arg) {
        t = 1000 * (System.currentTimeMillis() - firstTime - pausedTime);
        toSaveFrames++;
        File pathCache = new File(Environment.getExternalStorageDirectory() + "/DCIM",
                (System.currentTimeMillis() / 1000L) + "_" + toSaveFrames + ".tmp");
        BufferedOutputStream bos;
        try {
            bos = new BufferedOutputStream(new FileOutputStream(pathCache));
            bos.write(arg[0]);
            bos.flush();
            bos.close();
        } catch (IOException e) { // FileNotFoundException is a subclass of IOException
            e.printStackTrace();
            pathCache = null;
            toSaveFrames--;
        }
        return pathCache;
    }

    @Override
    protected void onPostExecute(File filename) {
        if (filename != null) {
            savedFrames++;
            tempList.add(new FileFrame(t, filename));
        }
    }
}
My final video is cropped and rotated correctly, but green frames and discolored frames are mixed into it.

How can I solve this? I don't know IplImage well. Some blogs mention that the preview data is in YUV format: first you need to convert the Y plane, and then the UV plane.
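The "Y first, then UV" remark can be made concrete: an NV21 preview buffer holds a full-resolution Y (luma) plane followed by a half-resolution interleaved V/U (chroma) plane, so the buffer is width * height * 3/2 bytes. A minimal sketch of the index math (the class and method names are hypothetical, for illustration only):

```java
public class Nv21Layout {
    // For an NV21 buffer of the given size, return {bufferLength, yIndex,
    // vIndex, uIndex} for the pixel at (x, y). The Y plane comes first
    // (width*height bytes); the chroma plane after it stores V before U,
    // interleaved, with one V/U pair shared by each 2x2 block of pixels.
    public static int[] nv21Indices(int width, int height, int x, int y) {
        int frameSize = width * height;                       // Y plane size
        int yIndex = y * width + x;
        int uvBase = frameSize + (y / 2) * width + (x & ~1);  // start of the shared V/U pair
        return new int[] { frameSize * 3 / 2, yIndex, uvBase, uvBase + 1 };
    }
}
```

This is why a plain transpose of the raw buffer produces garbage colors: transposing treats the buffer as one plane and scrambles the Y/UV layout.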


How can I fix this?

I have modified the onPreviewFrame method to transpose and resize the captured frames.

I defined "yuvIplImage" in my setCameraParams() method as follows:

Also initialize the recorder object as follows, passing the width as the height and vice versa:

//call initVideoRecorder() method like this to initialize videoRecorder object of FFmpegFrameRecorder class.
initVideoRecorder(strVideoPath, mPreview.getPreviewSize().height, mPreview.getPreviewSize().width, recorderParameters);

//method implementation
public void initVideoRecorder(String videoPath, int width, int height, RecorderParameters recorderParameters)
{
    Log.e(TAG, "initVideoRecorder");

    videoRecorder = new FFmpegFrameRecorder(videoPath, width, height, 1);
    videoRecorder.setFormat(recorderParameters.getVideoOutputFormat());
    videoRecorder.setSampleRate(recorderParameters.getAudioSamplingRate());
    videoRecorder.setFrameRate(recorderParameters.getVideoFrameRate());
    videoRecorder.setVideoCodec(recorderParameters.getVideoCodec());
    videoRecorder.setVideoQuality(recorderParameters.getVideoQuality());
    videoRecorder.setAudioQuality(recorderParameters.getVideoQuality());
    videoRecorder.setAudioCodec(recorderParameters.getAudioCodec());
    videoRecorder.setVideoBitrate(1000000);
    videoRecorder.setAudioBitrate(64000);
}
Here is my onPreviewFrame() method:

This code uses a method "YUV_NV21_TO_BGR", which I found in the code below.

Basically, this method solves what I call the "green devil problem on Android", just like yours. I had the same problem and wasted almost 3-4 days on it. Before adding the "YUV_NV21_TO_BGR" method, when I just transposed the YuvIplImage, and more importantly used a combination of transpose and flip (with or without resizing), the resulting video had green output. The "YUV_NV21_TO_BGR" method saved the day. Thanks to @David Han from the Google Groups post above.
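The exact YUV_NV21_TO_BGR implementation from the Google Groups post is not reproduced in this thread. A sketch of the same idea, converting a whole NV21 frame into packed int pixels on the CPU, might look like this (class and method names are hypothetical; the coefficients are the standard BT.601-style approximation, not necessarily the exact constants from the original post):

```java
public class YuvConvert {
    // Fill `out` (length width*height) with packed 0xAARRGGBB pixels
    // decoded from an NV21 buffer `yuv` of width*height*3/2 bytes.
    public static void nv21ToArgb(int[] out, byte[] yuv, int width, int height) {
        int frameSize = width * height;
        for (int j = 0; j < height; j++) {
            int uvRow = frameSize + (j >> 1) * width;  // chroma row shared by 2 pixel rows
            for (int i = 0; i < width; i++) {
                int y = yuv[j * width + i] & 0xFF;
                int v = (yuv[uvRow + (i & ~1)] & 0xFF) - 128;     // NV21: V first
                int u = (yuv[uvRow + (i & ~1) + 1] & 0xFF) - 128; // then U
                int r = clamp((int) (y + 1.402f * v));
                int g = clamp((int) (y - 0.344f * u - 0.714f * v));
                int b = clamp((int) (y + 1.772f * u));
                out[j * width + i] = 0xFF000000 | (r << 16) | (g << 8) | b;
            }
        }
    }

    private static int clamp(int c) { return c < 0 ? 0 : Math.min(c, 255); }
}
```

Once the frame is in a packed interleaved format like this, transpose and flip operate on whole pixels and no longer produce the green output.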

Also, you should know that doing all of this processing (transpose, flip and resize) in onPreviewFrame takes a lot of time, which severely hurts your frames-per-second (FPS) rate. When I used this code inside the onPreviewFrame method, the frame rate of the recorded video dropped from 30 fps to 3 fps.

I would recommend against this approach. Instead, you can do the post-processing (transpose, flip and resize) of your video file with JavaCV in an AsyncTask after recording. Hope this helps.
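The core of that recommendation is to keep the camera callback cheap and push the heavy work onto a background worker. A minimal plain-Java sketch of the hand-off (the FrameWorker class is hypothetical; in the real app the queued task would do the JavaCV transpose/flip/resize, and Android's AsyncTask would play the role of the executor):

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class FrameWorker {
    private final ExecutorService worker = Executors.newSingleThreadExecutor();
    final List<Integer> processed = new CopyOnWriteArrayList<>();

    // Simulated preview callback: copy the buffer and queue it, then return
    // immediately so the camera thread is never blocked by heavy work.
    public void onFrame(byte[] data) {
        byte[] copy = data.clone(); // the camera reuses its preview buffers
        worker.submit(() -> processed.add(copy.length)); // stand-in for transpose/flip/resize
    }

    // Drain the queue; in the real app this runs when recording stops.
    public void finish() {
        worker.shutdown();
        try {
            worker.awaitTermination(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```

The single-threaded executor preserves frame order, which matters when the processed frames are fed to the recorder with increasing timestamps.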

@Override
public void onPreviewFrame(byte[] data, Camera camera) {
    if (recording) {
        videoTimestamp = 1000 * (System.currentTimeMillis() - startTime);

        yuvimage = IplImage.create(imageWidth, imageHeight * 3 / 2, IPL_DEPTH_8U, 1);
        yuvimage.getByteBuffer().put(data);

        rgbimage = IplImage.create(imageWidth, imageHeight, IPL_DEPTH_8U, 3);
        opencv_imgproc.cvCvtColor(yuvimage, rgbimage, opencv_imgproc.CV_YUV2BGR_NV21);

        IplImage rotateimage = null;
        try {
            recorder.setTimestamp(videoTimestamp);
            int rot = 0;
            switch (degrees) {
            case 0:
                rot = 1;
                rotateimage = rotate(rgbimage, rot);
                break;
            case 180:
                rot = -1;
                rotateimage = rotate(rgbimage, rot);
                break;
            default:
                rotateimage = rgbimage;
            }
            recorder.record(rotateimage);
        } catch (FFmpegFrameRecorder.Exception e) {
            e.printStackTrace();
        }
    }
}

IplImage rotate(IplImage src, int flipMode) {
    // cvTranspose needs a destination with swapped width and height.
    IplImage img = IplImage.create(src.height(), src.width(), src.depth(), src.nChannels());
    cvTranspose(src, img);
    cvFlip(img, img, flipMode);
    return img;
}

After much searching, this worked for me.

Did you find a solution? When the image is rotated, its width and height are swapped, so the image grows/shrinks in each dimension. recorder.record(image); requires the same width and height that the recorder was initialized with. @topcpl it gives me an error at the line opencv_imgproc.cvCvtColor(yuvimage, rgbimage, opencv_imgproc.CV_YUV2BGR_NV21);
// Defined in setCameraParams(): note the swapped width and height.
IplImage yuvIplImage = IplImage.create(mPreviewSize.height, mPreviewSize.width, opencv_core.IPL_DEPTH_8U, 2);
@Override
public void onPreviewFrame(byte[] data, Camera camera)
{

    long frameTimeStamp = 0L;

    if(FragmentCamera.mAudioTimestamp == 0L && FragmentCamera.firstTime > 0L)
    {
        frameTimeStamp = 1000L * (System.currentTimeMillis() - FragmentCamera.firstTime);
    }
    else if(FragmentCamera.mLastAudioTimestamp == FragmentCamera.mAudioTimestamp)
    {
        frameTimeStamp = FragmentCamera.mAudioTimestamp + FragmentCamera.frameTime;
    }
    else
    {
        long l2 = (System.nanoTime() - FragmentCamera.mAudioTimeRecorded) / 1000L;
        frameTimeStamp = l2 + FragmentCamera.mAudioTimestamp;
        FragmentCamera.mLastAudioTimestamp = FragmentCamera.mAudioTimestamp;
    }

    synchronized(FragmentCamera.mVideoRecordLock)
    {
        if(FragmentCamera.recording && FragmentCamera.rec && lastSavedframe != null && lastSavedframe.getFrameBytesData() != null && yuvIplImage != null)
        {
            FragmentCamera.mVideoTimestamp += FragmentCamera.frameTime;

            if(lastSavedframe.getTimeStamp() > FragmentCamera.mVideoTimestamp)
            {
                FragmentCamera.mVideoTimestamp = lastSavedframe.getTimeStamp();
            }

            try
            {
                yuvIplImage.getByteBuffer().put(lastSavedframe.getFrameBytesData());

                IplImage bgrImage = IplImage.create(mPreviewSize.width, mPreviewSize.height, opencv_core.IPL_DEPTH_8U, 4);// In my case, mPreviewSize.width = 1280 and mPreviewSize.height = 720
                IplImage transposed = IplImage.create(mPreviewSize.height, mPreviewSize.width, yuvIplImage.depth(), 4);
                IplImage squared = IplImage.create(mPreviewSize.height, mPreviewSize.height, yuvIplImage.depth(), 4);

                int[] _temp = new int[mPreviewSize.width * mPreviewSize.height];

                Util.YUV_NV21_TO_BGR(_temp, data, mPreviewSize.width,  mPreviewSize.height);

                bgrImage.getIntBuffer().put(_temp);

                opencv_core.cvTranspose(bgrImage, transposed);
                opencv_core.cvFlip(transposed, transposed, 1);

                opencv_core.cvSetImageROI(transposed, opencv_core.cvRect(0, 0, mPreviewSize.height, mPreviewSize.height));
                opencv_core.cvCopy(transposed, squared, null);
                opencv_core.cvResetImageROI(transposed);

                videoRecorder.setTimestamp(lastSavedframe.getTimeStamp());
                videoRecorder.record(squared);
            }
            catch(com.googlecode.javacv.FrameRecorder.Exception e)
            {
                e.printStackTrace();
            }
        }

        lastSavedframe = new SavedFrames(data, frameTimeStamp);
    }
}