Image processing: detecting a heartbeat using a webcam?

Tags: image-processing, processing, javacv, heartbeat

I am trying to create an application that can detect your heartbeat using your computer's webcam. I have been coding for two weeks and have developed this code; here is what I have done so far.

How does it work? As illustrated below:

  • Face detection with OpenCV
  • Grab the forehead region as an image
  • Apply a filter to convert it to grayscale [you can skip this]
  • Find the average intensity of the green pixels for each frame
  • Save the averages into an array
  • Apply an FFT (I used the minim library) and extract the heartbeat from the FFT spectrum (this is where I need some help)
  • Here is where I need help: extracting the heartbeat from the FFT spectrum. Can anyone help me? There is a similar application developed in Python, but I cannot understand that code, so I am building the same program myself. Could anyone help me understand the part of the Python code that extracts the heartbeat?

    //---------import required library -----------
    import gab.opencv.*;
    import processing.video.*;
    import java.awt.*;
    import java.util.*;
    import ddf.minim.analysis.*;
    import ddf.minim.*;
    //----------create objects---------------------------------
    Capture video; // camera object
    OpenCV opencv; // opencv object
    Minim       minim;
    FFT         fft;
    //IIRFilter filt;
    //--------- Create ArrayList--------------------------------
    ArrayList<Float> poop = new ArrayList<Float>();
    float[] sample;
    int bufferSize = 128;
    int sampleRate = 512;
    int bandWidth = 20;
    int centerFreq = 80;
    //---------------------------------------------------
    void setup() {
      size(640, 480); // size of the window
      minim = new Minim(this);
      fft = new FFT( bufferSize, sampleRate);
      video = new Capture(this, 640/2, 480/2); // initializing video object
      opencv = new OpenCV(this, 640/2, 480/2); // initializing opencv object
      opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);  // loading haar cascade file for face detection
      video.start(); // start video
    }
    
    void draw() {
      background(0);
      // image(video, 0, 0 ); // show video in the background
      opencv.loadImage(video);
      Rectangle[] faces = opencv.detect();
      video.loadPixels();
      //------------ Finding faces in the video ----------- 
      float gavg = 0;
      for (int i = 0; i < faces.length; i++) {
        noFill();
        stroke(#FFB700); // yellow rectangle
        rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height); // creating rectangle around the face (YELLOW)
        stroke(#0070FF); //blue rectangle
        rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height-2*faces[i].height/3); // creating a blue rectangle around the forehead
        //-------------------- storing forehead white rectangle part into an image -------------------
        stroke(0, 255, 255);
        rect(faces[i].x+faces[i].width/2-15, faces[i].y+15, 30, 15);
        PImage img = video.get(faces[i].x+faces[i].width/2-15, faces[i].y+15, 30, 15); // storing the forehead area into an image
        img.loadPixels();
        img.filter(GRAY); // converting capture image rgb to gray
        img.updatePixels();
    
        int numPixels = img.width*img.height;
        for (int px = 0; px < numPixels; px++) { // For each pixel in the forehead image...
          final color c = img.pixels[px];
          final color luminG = c>>010 & 0xFF; // 010 is an octal literal for 8: shifts the green byte into the low position
          final float luminRangeG = luminG/255.0;
          gavg = gavg + luminRangeG;
        }
    
        //--------------------------------------------------------
        gavg = gavg/numPixels;
        if (poop.size()< bufferSize) {
          poop.add(gavg);
        }
        else poop.remove(0);
      }
      sample = new float[poop.size()];
      for (int i=0;i<poop.size();i++) {
        Float f = (float) poop.get(i);
        sample[i] = f;
      }
    
      if (sample.length>=bufferSize) {
        //fft.window(FFT.NONE); 
        fft.forward(sample, 0);
        //    bpf = new BandPass(centerFreq, bandwidth, sampleRate);
        //    in.addEffect(bpf);
        float bw = fft.getBandWidth(); // returns the width of each frequency band in the spectrum (in Hz).
        println(bw); // returns 21.5332031 Hz for spectrum [0] & [512]
    
        for (int i = 0; i < fft.specSize(); i++)
        {
          // println( " Freq" + max(sample));
          stroke(0, 255, 0);
          float x = map(i, 0, fft.specSize(), 0, width);
          line( x, height, x, height - fft.getBand(i)*100);
         // text("FFT FREQ " + fft.getFreq(i), width/2-100, 10*(i+1));
         // text("FFT BAND " + fft.getBand(i), width/2+100, 10*(i+1));
        }
      }
      else {
        println(sample.length + " " + poop.size());
      }
    }
    
    void captureEvent(Capture c) {
      c.read();
    }
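
A note on the green-channel line above: `c>>010 & 0xFF` shifts the packed ARGB color right by `010`, an octal literal equal to 8, so the green byte lands in the low position before masking. A minimal plain-Java illustration of the same bit manipulation (the class and method names here are made up for the example, not part of the sketch):

```java
public class GreenChannel {
    // Extract the green byte from a packed 0xAARRGGBB color int.
    // The original sketch writes the shift amount as the octal literal 010 (== 8).
    public static int green(int argb) {
        return (argb >> 8) & 0xFF;
    }

    // Normalize to the 0.0-1.0 range, as the sketch does before averaging.
    public static float greenNormalized(int argb) {
        return green(argb) / 255.0f;
    }

    public static void main(String[] args) {
        int c = 0xFF11AA33;           // alpha FF, red 11, green AA, blue 33
        System.out.println(green(c)); // prints 170 (0xAA)
    }
}
```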
    
    
The FFT is applied to a window of 128 samples:

    int bufferSize = 128;
    
During the draw method, samples are stored in the array until the buffer for the FFT to be applied is filled. From then on the buffer stays full: to insert a new sample, the oldest one is removed. gavg is the average gray-channel color:

    gavg = gavg/numPixels;
    if (poop.size()< bufferSize) {
      poop.add(gavg);
    }
    else poop.remove(0);
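
The buffering behaviour described here can be sketched in plain Java. Note that in the `else` branch above, the oldest sample is removed but no new sample is added in the same frame, so every other sample is discarded once the buffer is full; the helper below uses a remove-then-add variant (my adjustment, not the original's exact behaviour) so the window always holds the latest 128 samples:

```java
import java.util.ArrayList;
import java.util.List;

public class SlidingBuffer {
    // Keep at most `capacity` samples; once full, drop the oldest
    // before appending, so the window always holds the newest samples.
    public static void push(List<Float> buf, int capacity, float sample) {
        if (buf.size() >= capacity) {
            buf.remove(0); // discard the oldest sample
        }
        buf.add(sample);   // append the newest sample
    }

    public static void main(String[] args) {
        List<Float> buf = new ArrayList<Float>();
        for (int i = 0; i < 130; i++) {
            push(buf, 128, (float) i);
        }
        // After 130 pushes, samples 0 and 1 have been dropped.
        System.out.println(buf.size() + " " + buf.get(0)); // prints "128 2.0"
    }
}
```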
    
The code only displays the spectrum; the heartbeat frequency still has to be calculated. You must find the maximum across all of the FFT's frequency bands: the position of that maximum band is the heartbeat frequency.
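
The band search just described (scan every band, keep the strongest) can be sketched as a standalone Java method; the array stands in for the `fft.getBand(i)` values, and the names are illustrative:

```java
public class PeakBand {
    // Return the index of the strongest band; with a real FFT, this index
    // times the band width gives the dominant frequency in Hz.
    public static int peakIndex(float[] spectrum) {
        int best = 0;
        for (int i = 1; i < spectrum.length; i++) {
            if (spectrum[i] > spectrum[best]) {
                best = i;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        float[] spectrum = {0.1f, 0.4f, 2.5f, 0.9f, 0.2f};
        System.out.println(peakIndex(spectrum)); // prints 2, the strongest band
    }
}
```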

    
    int heartBeatBand = 0; // index of the strongest frequency band
    for(int i = 0; i < fft.specSize(); i++)
    { // find the band with the largest amplitude; its position gives the heartbeat frequency
        if (fft.getBand(i) > fft.getBand(heartBeatBand)) {
            heartBeatBand = i;
        }
    }

Scaling the band index to a frequency in Hz:

    float bw = fft.getBandWidth();
    heartBeatFrequency = heartBeatBand * bw;
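
To turn the winning band into beats per minute, multiply its index by the band width (`sampleRate / bufferSize` Hz per band) and then by 60. One caveat, which is my observation rather than part of this answer: the `sampleRate` handed to Minim's FFT should be the rate the samples were actually collected at, here one per drawn frame (roughly 30 fps for a typical webcam), not 512, or the reported frequencies come out scaled wrong. A sketch under that assumed 30 fps:

```java
public class HeartRate {
    // Width of one FFT band in Hz: samples-per-second / samples-per-window.
    public static float bandWidth(float frameRate, int bufferSize) {
        return frameRate / bufferSize;
    }

    // Convert the index of the strongest band to beats per minute.
    public static float bpm(int peakBand, float frameRate, int bufferSize) {
        float frequencyHz = peakBand * bandWidth(frameRate, bufferSize);
        return frequencyHz * 60.0f;
    }

    public static void main(String[] args) {
        // Assumed: 30 fps capture, 128-sample window, peak in band 4.
        System.out.println(bpm(4, 30.0f, 128)); // 0.9375 Hz -> prints 56.25
    }
}
```

A plausible resting heart rate (about 1 Hz) then lands in the first handful of bands, which is why the window needs to be long enough for those low bands to resolve.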
    

Once the sample array reaches bufferSize (128) samples or more, run the FFT forward on the sample array and then take the peak of the spectrum; that will be the heartbeat frequency. The following papers explain this:

  • Measuring Heart Rate from Video - Isabel Bush - Stanford - (explained in the paragraph on page 4, below Figure 2.)
  • Real Time Heart Rate Monitoring From Facial RGB Color Video Using Webcam - H. Rahman, M. U. Ahmed, S. Begum, P. Funk - (page 4)

  • After looking at your question I wanted to try my hand at this problem, so I have attempted an explanation.

    Well, there are a few issues here, if anyone could take a look.


    Thank you for your answer, it helped a lot.

    For someone who is just sitting there, that man's pulse is way too high… @… I suppose that depends on who he is looking at… Hi David, thanks for your kind reply and for looking at my code. I have added the suggested code you mentioned at the end of the post. When I run it, it gives me "Infinity"; can you tell me why that happens? Thanks again :)