
iOS: how to read an audio file into a float buffer


I have a very short audio file, say a tenth of a second, in (say) .PCM format.

I want to loop the file over and over with RemoteIO to produce a continuous musical tone. So how do I read it into an array of floats?


EDIT: while I could probably dig out the file format, extract the file into NSData, and process it manually, I'm guessing there is a more sensible generic approach... (e.g. one that copes with different formats)

I'm not familiar with RemoteIO, but I am familiar with WAV, so I figured I'd post some information about that format. If you need to, you should be able to easily parse out information such as duration, bit rate, and so on.

First, here is a great website that details the format. The site also does an excellent job of illustrating what the different byte addresses in the "fmt" sub-chunk refer to.

The WAVE file format
  • A WAVE is made up of a "RIFF" chunk and subsequent sub-chunks
  • Every chunk is at least 8 bytes
  • The first 4 bytes are the Chunk ID
  • The next 4 bytes are the Chunk Size (the Chunk Size gives the size of the remainder of the chunk, excluding the 8 bytes used for the Chunk ID and Chunk Size)
  • Every WAVE has the following chunks/sub-chunks
    • "RIFF" (the first and only chunk; all the rest are technically sub-chunks)
    • "fmt " (usually the first sub-chunk after "RIFF", but it can appear anywhere between "RIFF" and "data"; this chunk holds information about the WAV such as the number of channels, the sample rate, and the byte rate)
    • "data" (must be the last sub-chunk and contains all of the sound data)
Common WAVE audio formats:
  • PCM
  • IEEE_FLOAT
  • PCM_EXTENSIBLE (with a sub-format of PCM or IEEE_FLOAT)
WAVE duration and size

A WAVE file's duration can be calculated as follows:

seconds = DataChunkSize / ByteRate
where

DataChunkSize does not include the 8 bytes reserved for the ID and Size of the "data" sub-chunk.

Knowing this, the DataChunkSize can be calculated if you know the duration of the WAV and the ByteRate:

DataChunkSize = seconds * ByteRate

This is useful to know when calculating the size of the wav data after converting from a format such as mp3 or wma. Note that a typical wav header is 44 bytes followed by DataChunkSize (this is always the case if the wav was converted with a normalizer tool - at least as of this writing).

You can use ExtAudioFile to read data from any supported file format into one of a number of client formats. Here is an example that reads a file as 16-bit integers:

CFURLRef url = /* ... */;
ExtAudioFileRef eaf;
OSStatus err = ExtAudioFileOpenURL(url, &eaf);
if(noErr != err)
  /* handle error */

AudioStreamBasicDescription format;
format.mSampleRate = 44100;
format.mFormatID = kAudioFormatLinearPCM;
format.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
format.mBitsPerChannel = 16;
format.mChannelsPerFrame = 2;
format.mBytesPerFrame = format.mChannelsPerFrame * 2;
format.mFramesPerPacket = 1;
format.mBytesPerPacket = format.mFramesPerPacket * format.mBytesPerFrame;

err = ExtAudioFileSetProperty(eaf, kExtAudioFileProperty_ClientDataFormat, sizeof(format), &format);

/* Read the file contents using ExtAudioFileRead */
If you want Float32 data instead, you would set up the
format like this:

format.mFormatID = kAudioFormatLinearPCM;
format.mFormatFlags = kAudioFormatFlagsNativeFloatPacked;
format.mBitsPerChannel = 32;

This is the code I use to convert audio data (an audio file) into a floating-point representation and save it into an array.

-(void) printFloatDataFromAudioFile {

    NSString *name = @"Filename"; // your file name
    NSString *source = [[NSBundle mainBundle] pathForResource:name ofType:@"m4a"]; // your file format

    const char *cString = [source cStringUsingEncoding:NSASCIIStringEncoding];

    CFStringRef str = CFStringCreateWithCString(
                                                NULL,
                                                cString,
                                                kCFStringEncodingMacRoman
                                                );
    CFURLRef inputFileURL = CFURLCreateWithFileSystemPath(
                                                          kCFAllocatorDefault,
                                                          str,
                                                          kCFURLPOSIXPathStyle,
                                                          false
                                                          );

    ExtAudioFileRef fileRef;
    ExtAudioFileOpenURL(inputFileURL, &fileRef);

    AudioStreamBasicDescription audioFormat;
    audioFormat.mSampleRate = 44100; // your sample rate
    audioFormat.mFormatID = kAudioFormatLinearPCM;
    audioFormat.mFormatFlags = kLinearPCMFormatFlagIsFloat | kLinearPCMFormatFlagIsPacked;
    audioFormat.mBitsPerChannel = sizeof(Float32) * 8;
    audioFormat.mChannelsPerFrame = 1; // mono
    audioFormat.mBytesPerFrame = audioFormat.mChannelsPerFrame * sizeof(Float32); // == sizeof(Float32)
    audioFormat.mFramesPerPacket = 1;
    audioFormat.mBytesPerPacket = audioFormat.mFramesPerPacket * audioFormat.mBytesPerFrame; // == sizeof(Float32)

    // Apply the client format to the Extended Audio File
    ExtAudioFileSetProperty(
                            fileRef,
                            kExtAudioFileProperty_ClientDataFormat,
                            sizeof(AudioStreamBasicDescription),
                            &audioFormat);

    int numSamples = 1024; // how many samples to read in at a time
    UInt32 sizePerPacket = audioFormat.mBytesPerPacket; // == sizeof(Float32) == 4 bytes
    UInt32 packetsPerBuffer = numSamples;
    UInt32 outputBufferSize = packetsPerBuffer * sizePerPacket;

    // outputBuffer is the memory we have reserved for one read's worth of samples
    UInt8 *outputBuffer = (UInt8 *)malloc(outputBufferSize);

    AudioBufferList convertedData;

    convertedData.mNumberBuffers = 1; // 1 for mono
    convertedData.mBuffers[0].mNumberChannels = audioFormat.mChannelsPerFrame; // also 1
    convertedData.mBuffers[0].mDataByteSize = outputBufferSize;
    convertedData.mBuffers[0].mData = outputBuffer;

    UInt32 frameCount = numSamples;
    float *samplesAsCArray;
    int j = 0;
    // Size this to at least the total number of samples in your file
    // (mine was 882000); `static` keeps it off the stack.
    static double floatDataArray[882000];

    while (frameCount > 0) {
        frameCount = numSamples; // ask for a full buffer on every pass
        ExtAudioFileRead(
                         fileRef,
                         &frameCount,
                         &convertedData
                         );
        if (frameCount > 0) {
            AudioBuffer audioBuffer = convertedData.mBuffers[0];
            samplesAsCArray = (float *)audioBuffer.mData; // mData holds Float32 samples

            // Only frameCount samples are valid on the last read
            for (UInt32 i = 0; i < frameCount; i++) {
                floatDataArray[j] = (double)samplesAsCArray[i]; // copy into the float array
                printf("\n%f", floatDataArray[j]); // values range from -1 to +1
                j++;
            }
        }
    }

    free(outputBuffer);
    ExtAudioFileDispose(fileRef);
    CFRelease(inputFileURL);
    CFRelease(str);
}
Updated for Swift 5

This is a simple function that helps you get your audio fi
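The Swift 5 answer is cut off above, so here is a hedged reconstruction rather than the original author's code: in Swift 5 the usual way to read a file into a `[Float]` is `AVAudioFile` plus `AVAudioPCMBuffer`. The function name and error handling below are placeholders of my own:

```swift
import AVFoundation

/// Read an audio file and return the first channel's samples as [Float].
/// Hypothetical helper; adapt the error handling to your app.
func loadAudioSamples(from url: URL) throws -> [Float] {
    let file = try AVAudioFile(forReading: url)
    // processingFormat is deinterleaved Float32 by default.
    let format = file.processingFormat
    guard let buffer = AVAudioPCMBuffer(pcmFormat: format,
                                        frameCapacity: AVAudioFrameCount(file.length)) else {
        throw NSError(domain: "loadAudioSamples", code: -1)
    }
    try file.read(into: buffer)
    guard let channelData = buffer.floatChannelData else { return [] }
    // Copy the first channel into a Swift array.
    return Array(UnsafeBufferPointer(start: channelData[0],
                                     count: Int(buffer.frameLength)))
}
```

This avoids the manual ExtAudioFile bookkeeping shown in the Objective-C answer: the framework converts to Float32 for you, and `floatChannelData` exposes the samples directly.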