Android FFT: output frequency is inaccurate

Tags: android, audio, signal-processing, fft

I'm developing an app for Google Glass that displays the current peak frequency (ish) in real time as it records audio. My current problem is that the reported frequency changes very quickly, so it's hard to pin down a reading. I'm also not sure my number-format output is correct, since it only ever reaches "00.000". I could probably use some help with the windowing too, but my basic understanding of it is there.

Thanks

public class RTAactivity extends Activity {

private static final int SAMPLING_RATE = 44100;

private TextView tvfreq;
private TextView tvdb;

private RecordingThread mRecordingThread;
private int mBufferSize;
private short[] mAudioBuffer;
private String mDecibelFormat;
private double  mFreqFormat = 0.0;
private int blockSize = 1024;  //4096
private DoubleFFT_1D fft;
private int[] bufferDouble, bufferDouble2;



@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.rta_view);
    getWindow().addFlags(WindowManager.LayoutParams.FLAG_KEEP_SCREEN_ON);

    tvfreq = (TextView) findViewById(R.id.tv_freq);
    tvdb = (TextView) findViewById(R.id.tv_decibels);

    // Compute the minimum required audio buffer size and allocate the buffer.
    mBufferSize = AudioRecord.getMinBufferSize(SAMPLING_RATE, AudioFormat.CHANNEL_IN_MONO,
            AudioFormat.ENCODING_PCM_16BIT);
    mAudioBuffer = new short[mBufferSize / 2];
    bufferDouble2 = new int[mBufferSize /2];
    bufferDouble = new int[(blockSize-1) * 2 ];

    mDecibelFormat = getResources().getString(R.string.decibel_format);
}

@Override
protected void onResume() {
    super.onResume();

    mRecordingThread = new RecordingThread();
    mRecordingThread.start();
}

@Override
protected void onPause() {
    super.onPause();

    if (mRecordingThread != null) {
        mRecordingThread.stopRunning();
        mRecordingThread = null;
    }
}
private class RecordingThread extends Thread{

    private boolean mShallContinue = true;

    @Override
    public void run() {
        android.os.Process.setThreadPriority(Process.THREAD_PRIORITY_AUDIO);

        AudioRecord record = new AudioRecord(AudioSource.MIC, SAMPLING_RATE, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, mBufferSize);

        short[] buffer = new short[blockSize];
        double[] audioDataDoubles = new double[(blockSize * 2)];
        double[] re = new double[blockSize];
        double[] im = new double[blockSize];
        double[] magnitude = new double[blockSize];

        //start collecting data
        record.startRecording();



        DoubleFFT_1D fft = new DoubleFFT_1D(blockSize);

        while (shallContinue()) {

            /**decibels */
            record.read(mAudioBuffer, 0, mBufferSize / 2);
            updateDecibelLevel();

            /**frequency */
                ///windowing!?
            for(int i=0;i<mAudioBuffer.length;i++) {
                bufferDouble2[i] = (int) mAudioBuffer[i];
            }

            for(int i=0;i<blockSize-1;i++){
                double x=-Math.PI+2*i*(Math.PI/blockSize);
                double winValue=(1+Math.cos(x))/2.0;
                bufferDouble[i]= (int) (bufferDouble2[i]*winValue); }

               // bufferDouble[2*i]=bufferDouble2[i];
               // bufferDouble[2*i+1] = (int) 0.0;}


            int bufferReadResult = record.read(buffer, 0, blockSize);

            // Read in the data from the mic to the array
            for (int i = 0; i < blockSize && i < bufferReadResult; i++) {
                audioDataDoubles[2 * i] = (double) buffer[i] / 32768.0; // signed 16 bit
                audioDataDoubles[(2 * i) + 1] = 0.0;
            }

        //audiodataDoubles now holds data to work with
        fft.complexForward(audioDataDoubles);   //complexForward


        // Calculate the Real and imaginary and Magnitude.

        for (int i = 0; i < blockSize; i++) {
            double real = audioDataDoubles[2 * i];
            double imag = audioDataDoubles[2 * i + 1];
            magnitude[i] = Math.sqrt((real * real) + (imag * imag));
        }
        for (int i = 0; i < blockSize; i++) {
            // real is stored in first part of array
            re[i] = audioDataDoubles[i * 2];
            // imaginary is stored in the sequential part
            im[i] = audioDataDoubles[(i * 2) + 1];
            // magnitude is calculated by the square root of (imaginary^2 + real^2)
            magnitude[i] = Math.sqrt((re[i] * re[i]) + (im[i] * im[i]));
        }

        double peak = -1.0;
        // Get the largest magnitude peak
        for (int i = 0; i < blockSize; i++) {
            peak = magnitude[i];
        }

        // calculated the frequency
        mFreqFormat = (SAMPLING_RATE * peak) / blockSize;
        updateFrequency();

    }

        record.stop();   //stop recording please.
        record.release();  // Deystroy the recording, PLEASE!
    }

    /**true if the thread should continue running or false if it should stop
    */
    private synchronized boolean shallContinue() {return mShallContinue; }

    /** Notifies the thread that it should stop running at the next opportunity. */
    private synchronized void stopRunning() { mShallContinue = false; }


    private void updateDecibelLevel() {
        // Compute the root-mean-squared of the sound buffer and then apply the formula for
        // computing the decibel level, 20 * log_10(rms). This is an uncalibrated calculation
        // that assumes no noise in the samples; with 16-bit recording, it can range from
        // -90 dB to 0 dB.
        double sum = 0;

        for (short rawSample : mAudioBuffer) {
            double sample = rawSample / 32768.0;
            sum += sample * sample;
        }

        double rms = Math.sqrt(sum / mAudioBuffer.length);
        final double db = 20 * Math.log10(rms);

        // Update the text view on the main thread.
        tvdb.post(new Runnable() {
            @Override
            public void run() {
                tvdb.setText(String.format(mDecibelFormat, db));
            }
        });
    }

  }
           /// post the output frequency to TextView
private void updateFrequency() {
    tvfreq.post(new Runnable() {
        @Override
        public void run() {
            NumberFormat nM = NumberFormat.getNumberInstance();
            tvfreq.setText(nM.format(mFreqFormat) + " hz");
        }
    });


}
}
(ADDED): The frequency resolution using just the peak-magnitude FFT bin will be set (quantized) to the sample rate divided by the length of the FFT (44100/1024 Hz with your parameters). With a short FFT, 430 Hz may simply be the closest available FFT result to 440. To do better, you need to interpolate, use a longer FFT, or use some other frequency-estimation algorithm.
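To illustrate the interpolation option, here is a minimal, self-contained sketch (not part of the original answer) that refines the peak-bin estimate with three-point parabolic interpolation over the peak and its two neighbours. The class name is illustrative; the sample rate and block size match the question.

```java
// Sketch: refining an FFT peak-frequency estimate with parabolic interpolation.
// Assumes `magnitude` is the FFT magnitude array and `peak` the index of its
// largest bin, as found by the corrected peak search.
public class PeakInterpolation {
    static final int SAMPLING_RATE = 44100;
    static final int BLOCK_SIZE = 1024;

    // Fit a parabola through the peak bin and its two neighbours and return
    // the interpolated frequency in Hz.
    static double interpolatedFrequency(double[] magnitude, int peak) {
        if (peak <= 0 || peak >= magnitude.length - 1) {
            // No neighbours to interpolate with; fall back to the raw bin.
            return (double) SAMPLING_RATE * peak / BLOCK_SIZE;
        }
        double a = magnitude[peak - 1];
        double b = magnitude[peak];
        double c = magnitude[peak + 1];
        double denom = a - 2 * b + c;
        // Fractional offset from the peak bin, in (-0.5, 0.5).
        double offset = (denom == 0) ? 0.0 : 0.5 * (a - c) / denom;
        return SAMPLING_RATE * (peak + offset) / BLOCK_SIZE;
    }

    public static void main(String[] args) {
        double[] mag = new double[BLOCK_SIZE];
        mag[9] = 0.5; mag[10] = 1.0; mag[11] = 0.5;   // symmetric neighbours
        System.out.println(interpolatedFrequency(mag, 10)); // 430.6640625 (no offset)
    }
}
```

With asymmetric neighbours (e.g. the bin above the peak larger than the bin below), the estimate shifts above the bin centre, which is exactly the sub-bin resolution the answer is referring to.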


If you are trying to display a pitch frequency (musical pitch or vocal pitch), this is often different from the peak spectral frequency in an FFT result. Look up pitch detection/estimation methods (there are many academic papers on the topic), as this usually requires more sophisticated and robust algorithms than just picking the FFT magnitude peak.
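For context, below is a minimal sketch of one classic pitch-estimation approach, time-domain autocorrelation. This is not from the original post; the class name, constants, and the lag search range (roughly 50-1000 Hz) are illustrative assumptions.

```java
// Sketch: estimating pitch by finding the lag at which the signal best
// correlates with a delayed copy of itself, then converting lag to Hz.
public class AutocorrelationPitch {
    static double estimatePitch(double[] samples, int sampleRate) {
        int minLag = sampleRate / 1000;  // ~1000 Hz upper pitch bound
        int maxLag = sampleRate / 50;    // ~50 Hz lower pitch bound
        double bestCorr = 0.0;
        int bestLag = -1;
        for (int lag = minLag; lag <= maxLag && lag < samples.length; lag++) {
            double corr = 0.0;
            for (int i = 0; i + lag < samples.length; i++) {
                corr += samples[i] * samples[i + lag];
            }
            if (corr > bestCorr) {
                bestCorr = corr;
                bestLag = lag;
            }
        }
        return bestLag > 0 ? (double) sampleRate / bestLag : 0.0;
    }

    public static void main(String[] args) {
        // A clean 440 Hz sine at 44100 Hz; the estimate lands close to 440
        // (within the +/- resolution of one integer sample of lag).
        int n = 4096, sr = 44100;
        double[] s = new double[n];
        for (int i = 0; i < n; i++) s[i] = Math.sin(2 * Math.PI * 440 * i / sr);
        System.out.println(estimatePitch(s, sr));
    }
}
```

Real pitch trackers add normalization, peak picking across octave ambiguities, and voicing decisions on top of this basic idea, which is why the answer recommends reading the literature.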

There are several problems with your code, but the most important is that your peak-finding loop is completely broken. Change:

    double peak = -1.0;
    // Get the largest magnitude peak
    for (int i = 0; i < blockSize; i++) {
        peak = magnitude[i];
    }
to:

    double peak_val = magnitude[0];   // initial magnitude of peak
    peak = 0;                         // initial index of peak
    for (int i = 1; i < blockSize; i++) {
        double val = magnitude[i];
        if (val > peak_val) {
            peak_val = val;           // update magnitude of peak
            peak = i;                 // update index of peak
        }
    }
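To sanity-check the corrected loop, here is a self-contained sketch that runs the same search on a synthetic magnitude array and converts the winning bin to a frequency using the asker's formula (SAMPLING_RATE * peak / blockSize). The class name is illustrative; the constants match the question.

```java
// Sketch: the corrected peak search plus bin-to-frequency conversion.
public class PeakSearchDemo {
    static final int SAMPLING_RATE = 44100;
    static final int blockSize = 1024;

    // Same logic as the corrected loop: index of the largest magnitude.
    static int findPeak(double[] magnitude) {
        double peak_val = magnitude[0];
        int peak = 0;
        for (int i = 1; i < magnitude.length; i++) {
            if (magnitude[i] > peak_val) {
                peak_val = magnitude[i];
                peak = i;
            }
        }
        return peak;
    }

    // The asker's bin-to-frequency formula.
    static double binToHz(int peak) {
        return (double) SAMPLING_RATE * peak / blockSize;
    }

    public static void main(String[] args) {
        double[] magnitude = new double[blockSize];
        magnitude[10] = 5.0;               // pretend all the energy sits in bin 10
        int peak = findPeak(magnitude);
        System.out.println(binToHz(peak)); // 430.6640625
    }
}
```

Note that bin 10 maps to 430.66 Hz, which matches the 430 Hz reading discussed in the comments below for a 440 Hz test tone.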

You need to check your code - for some reason you compute the magnitudes twice (harmless but pointless), but more importantly your peak-finding loop is completely broken.

While it's true that the OP probably needs to do more research, that doesn't address the immediate problem (the bug in the code), so perhaps this should be a comment rather than an answer.

My goal is to find resonant frequencies. I'm an audio engineer and set up and tune sound systems every day. I've spent a lot of time researching to get this far - if you have any papers you can recommend to help me understand further, I'd greatly appreciate it! Thank you, Paul R! I have to say I've come to really like you while building this and have learned a lot from your posts on here. I've implemented the fix mentioned above and it seems to have solved my problem - I'm now playing 440 Hz through a Klipsch ref speaker! I did notice one strange thing that happens, though: the reading sometimes jumps to 43281 Hz? If you'd care to point out the other problems I've given myself, I'd be grateful. Thanks again for your answer.

Sorry - when I play back 440 Hz the reading is actually off by about 10 Hz: I read 430 Hz. I'll double-check my calculations and try another set of speakers tomorrow. Any further thoughts would be great. Thanks!

Glad it's at least partly working now. Note that the resolution of your FFT is only 44100/1024 = 43 Hz, so you may be seeing the 440 Hz peak in bin 10, with an estimated frequency of 430 Hz. As for the other problems with the code, I've already…
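On the original complaint that the displayed frequency changes too quickly to read: one simple remedy (not suggested anywhere in the thread, so purely an illustration) is to smooth successive estimates before posting them to the TextView, for example with an exponential moving average. The class name and the smoothing factor 0.2 are illustrative choices.

```java
// Sketch: exponential moving average to steady a rapidly changing readout.
public class FrequencySmoother {
    private final double alpha;          // 0 < alpha <= 1; smaller = smoother
    private double smoothed = Double.NaN;

    public FrequencySmoother(double alpha) {
        this.alpha = alpha;
    }

    // Blend the new raw estimate into the running average and return it.
    public double update(double rawHz) {
        if (Double.isNaN(smoothed)) {
            smoothed = rawHz;            // first sample seeds the average
        } else {
            smoothed = alpha * rawHz + (1 - alpha) * smoothed;
        }
        return smoothed;
    }

    public static void main(String[] args) {
        FrequencySmoother s = new FrequencySmoother(0.2);
        double[] noisy = {430, 474, 431, 388, 430, 433};
        for (double f : noisy) {
            System.out.printf("%.1f%n", s.update(f));  // readings settle near 430
        }
    }
}
```

In the activity, the smoother would be updated once per block, with the smoothed value passed to updateFrequency() instead of the raw estimate; a smaller alpha gives a calmer display at the cost of slower response.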