Objective-C: sending audio data in RTP packets over a socket


In my application I have to capture the microphone and send the audio data in RTP packets, but I have only found examples of receiving RTP data, and the related questions I found went unanswered.

I use the code below to send the audio data, but it is not wrapped in RTP packets. Is there a library that can wrap my audio data into RTP packets?

Initialize the AsyncUdpSocket:

udpSender = [[GCDAsyncUdpSocket alloc] initWithDelegate:self delegateQueue:dispatch_get_main_queue()];

NSError *error;
[udpSender connectToHost:@"192.168.1.29" onPort:1024 error:&error];
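Side note: the NSError from connectToHost:onPort:error: is never inspected above. GCDAsyncUdpSocket reports the outcome through its delegate; a minimal sketch of the two relevant callbacks from CocoaAsyncSocket's GCDAsyncUdpSocketDelegate protocol:

// Sketch: confirm the UDP "connection" before streaming audio
- (void)udpSocket:(GCDAsyncUdpSocket *)sock didConnectToAddress:(NSData *)address
{
    // Safe to start sending once this fires
    NSLog(@"UDP socket connected");
}

- (void)udpSocket:(GCDAsyncUdpSocket *)sock didNotConnect:(NSError *)error
{
    NSLog(@"UDP connect failed: %@", error);
}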
I send the audio data in the playback callback function:

static OSStatus playbackCallback(void *inRefCon, 
                             AudioUnitRenderActionFlags *ioActionFlags, 
                             const AudioTimeStamp *inTimeStamp, 
                             UInt32 inBusNumber, 
                             UInt32 inNumberFrames, 
                             AudioBufferList *ioData) {    

/**
 This is the reference to the object who owns the callback.
 */
AudioProcessor *audioProcessor = (AudioProcessor*) inRefCon;

// iterate over incoming stream and copy to output stream
for (int i=0; i < ioData->mNumberBuffers; i++) { 
    AudioBuffer buffer = ioData->mBuffers[i];

    // find minimum size
    UInt32 size = MIN(buffer.mDataByteSize, [audioProcessor audioBuffer].mDataByteSize);

    // copy buffer to audio buffer which gets played after function return
    memcpy(buffer.mData, [audioProcessor audioBuffer].mData, size);

    // set data size
    buffer.mDataByteSize = size;

    //Send data to remote server      

    NSMutableData *data=[[NSMutableData alloc] init];
    Float32 *frame = (Float32*)buffer.mData;
    [data appendBytes:frame length:size];
    if ([udpSender isConnected])
    {
        [udpSender sendData:data withTimeout:-1 tag:1];
    }


}



return noErr;
} 
How can I do this?


Thanks.

Finally, this is my solution.

Set up the microphone capture session:

-(void)open {
NSError *error;
m_capture = [[AVCaptureSession alloc]init];
AVCaptureDevice *audioDev = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
if (audioDev == nil)
{
    printf("Couldn't create audio capture device");
    return ;
}
//m_capture.sessionPreset = AVCaptureSessionPresetLow;

// create mic device
AVCaptureDeviceInput *audioIn = [AVCaptureDeviceInput deviceInputWithDevice:audioDev error:&error];
if (error != nil)
{
    printf("Couldn't create audio input");
    return ;
}


// add mic device in capture object
if ([m_capture canAddInput:audioIn] == NO)
{
    printf("Couldn't add audio input");
    return ;
}
[m_capture addInput:audioIn];
// export audio data
AVCaptureAudioDataOutput *audioOutput = [[AVCaptureAudioDataOutput alloc] init];
[audioOutput setSampleBufferDelegate:self queue:dispatch_get_main_queue()];
if ([m_capture canAddOutput:audioOutput] == NO)
{
    printf("Couldn't add audio output");
    return ;
}


[m_capture addOutput:audioOutput];
[audioOutput connectionWithMediaType:AVMediaTypeAudio];
[m_capture startRunning];
return ;
}
Capture the microphone data:

-(void) captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection{
char szBuf[450];
int  nSize = sizeof(szBuf);

if (isConnect == YES)
{
    if ([self encoderAAC:sampleBuffer aacData:szBuf aacLen:&nSize] == YES)
    {
        [self sendAudioData:szBuf len:nSize channel:0];
    }
}
}
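encoderAAC:aacData:aacLen: is a custom AAC-encoding helper whose implementation is not shown here. For readers who only need the raw bytes out of the sample buffer, Core Media can copy them directly; a minimal sketch (drop-in inside captureOutput:..., assuming #import <CoreMedia/CoreMedia.h>):

CMBlockBufferRef blockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer);
if (blockBuffer != NULL)
{
    size_t totalLength = CMBlockBufferGetDataLength(blockBuffer);
    NSMutableData *pcm = [NSMutableData dataWithLength:totalLength];
    // Copy the buffer's bytes into our own storage
    CMBlockBufferCopyDataBytes(blockBuffer, 0, totalLength, pcm.mutableBytes);
    // pcm now holds this callback's uncompressed audio bytes
}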
Initialize the socket:

-(void)initialSocket{
    //Use socket
    printf("initialSocket\n");
    CFReadStreamRef readStream = NULL;
    CFWriteStreamRef writeStream = NULL;

    NSString *ip = @"192.168.1.147";   //Your IP Address
    UInt32 port = 22133;

    CFStreamCreatePairWithSocketToHost(kCFAllocatorDefault, (__bridge CFStringRef)ip, port, &readStream,  &writeStream);
    if (readStream && writeStream) {
    CFReadStreamSetProperty(readStream, kCFStreamPropertyShouldCloseNativeSocket, kCFBooleanTrue);
    CFWriteStreamSetProperty(writeStream, kCFStreamPropertyShouldCloseNativeSocket, kCFBooleanTrue);

    iStream = (__bridge NSInputStream *)readStream;
    [iStream setDelegate:self];
    [iStream scheduleInRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];
    [iStream open];

    oStream = (__bridge NSOutputStream *)writeStream;
    [oStream setDelegate:self];
    [oStream scheduleInRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];
    [oStream open];
    }
}
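isConnect (used in captureOutput: above and in sendAudioData: below) is a BOOL flag for the connection state. The original code does not show how it is maintained; a minimal sketch of keeping it up to date from the NSStreamDelegate callback, as an assumption:

- (void)stream:(NSStream *)aStream handleEvent:(NSStreamEvent)eventCode
{
    switch (eventCode) {
        case NSStreamEventOpenCompleted:
            isConnect = YES;   // stream is ready for writing
            break;
        case NSStreamEventErrorOccurred:
        case NSStreamEventEndEncountered:
            isConnect = NO;    // stop queueing sends
            break;
        default:
            break;
    }
}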
When data is captured from the microphone, send it over the socket:

-(void)sendAudioData: (char *)buffer len:(int)len channel:(UInt32)channel
{
    // globalData is an NSMutableData ivar (defined elsewhere) that buffers
    // captured audio until the output stream is open and writable
    Float32 *frame = (Float32*)buffer;
    [globalData appendBytes:frame length:len];

    if (isConnect == YES)
    {
        if ([oStream streamStatus] == NSStreamStatusOpen)
        {
            [oStream write:globalData.mutableBytes maxLength:globalData.length];
            globalData = [[NSMutableData alloc] init];
        }
    }
}

Hope this helps someone.

Comments on this answer:

- If I'm not mistaken, you are not wrapping the data as RTP here either. Or do you do that in one of these methods? Please help me, I'm stuck and need a solution.
- @Wei文孝: Hi, I read your question and I have the same requirement. Please give some advice on how to create RTP packets for an audio stream.
- globalData? What is its definition?
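For anyone still looking for the missing RTP step: neither the question's UDP code nor the answer's TCP stream actually wraps the payload in RTP. The RTP header (RFC 3550) is 12 bytes and can be built by hand and prepended to every datagram. A minimal sketch follows; the payload type (96), SSRC, and timestamp increment are illustrative assumptions, not values from the code above:

#import <arpa/inet.h>   // htons / htonl

// 12-byte RFC 3550 RTP header (naturally packed to 12 bytes)
typedef struct {
    uint8_t  vpxcc;      // version (2), padding, extension, CSRC count
    uint8_t  mpt;        // marker bit + payload type
    uint16_t sequence;   // big-endian sequence number
    uint32_t timestamp;  // big-endian media timestamp
    uint32_t ssrc;       // big-endian synchronization source id
} RTPHeader;

static uint16_t rtpSequence  = 0;
static uint32_t rtpTimestamp = 0;

// Prepend an RTP header to one audio payload
static NSData *wrapInRTP(const void *payload, UInt32 length, UInt32 samplesInPacket)
{
    RTPHeader header;
    header.vpxcc     = 0x80;                 // version 2, no padding/extension/CSRC
    header.mpt       = 96;                   // dynamic payload type (assumption)
    header.sequence  = htons(rtpSequence++);
    header.timestamp = htonl(rtpTimestamp);
    header.ssrc      = htonl(0x12345678);    // arbitrary SSRC (assumption)
    rtpTimestamp    += samplesInPacket;      // advance by samples per packet

    NSMutableData *packet = [NSMutableData dataWithBytes:&header length:sizeof(header)];
    [packet appendBytes:payload length:length];
    return packet;
}

With the UDP sender from the question, the send call in the playback callback would then become:

[udpSender sendData:wrapInRTP(buffer.mData, size, inNumberFrames) withTimeout:-1 tag:1];

RTP is normally carried over UDP, so the question's GCDAsyncUdpSocket path is the natural fit; sending RTP over a TCP NSOutputStream would additionally need RFC 4571-style length framing.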