.NET Core WebRTC and ASP.NET Core
I want to record an audio stream from my Angular web application to my ASP.NET Core API. I think using SignalR and its WebSockets is a good way to do this. With this TypeScript code, I can get the media stream:
import { HubConnection } from '@aspnet/signalr';
[...]
private stream: MediaStream;
private connection: webkitRTCPeerConnection;
@ViewChild('video') video;
[...]
navigator.mediaDevices.getUserMedia({ audio: true })
    .then(stream => {
        console.trace('Received local stream');
        this.video.srcObject = stream;
        this.stream = stream;
        this._hubConnection = new HubConnection('[MY_API_URL]/webrtc');
        this._hubConnection.send("SendStream", stream);
    })
    .catch(function (e) {
        console.error('getUserMedia() error: ' + e.message);
    });
On the .NET Core API side, I handle the stream with:
public class MyHub : Hub
{
    public void SendStream(object o)
    {
    }
}
But when I cast o to System.IO.Stream, I get a null value.
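One likely reason the cast fails: SignalR serializes each hub argument before sending it, and a MediaStream has nothing serializable on it, so nothing usable reaches the server. A minimal sketch of the effect (FakeMediaStream is a hypothetical stand-in for the browser type, for illustration only):

```typescript
// SignalR serializes hub arguments (JSON by default). A MediaStream's
// members all live on its prototype as native getters and methods, which
// are not enumerable own properties, so serializing one yields an empty object.
class FakeMediaStream {
    getTracks(): unknown[] { return []; }
}

const payload = JSON.stringify(new FakeMediaStream());
console.log(payload); // "{}"
```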
When I read the WebRTC documentation, I saw information about RTCPeerConnection and IceConnection... Do I need those?
How can I stream audio from a web client to an ASP.NET Core API with SignalR? Any documentation or GitHub examples?
Thanks for your help.

I found a way to access the microphone stream and transmit it to the server. Here is the code:
private audioCtx: AudioContext;
private stream: MediaStream;

convertFloat32ToInt16(buffer: Float32Array) {
    let l = buffer.length;
    let buf = new Int16Array(l);
    while (l--) {
        // Clamp to [-1, 1] before scaling to the 16-bit range,
        // so out-of-range samples do not wrap around
        buf[l] = Math.max(-1, Math.min(1, buffer[l])) * 0x7FFF;
    }
    return buf.buffer;
}
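A quick check of the conversion, using a standalone copy of the function above (the sample values are arbitrary; the clamping of negative values is an addition over the original, which only capped the positive side):

```typescript
// Standalone copy of convertFloat32ToInt16, clamping both ends to [-1, 1].
function convertFloat32ToInt16(buffer: Float32Array): ArrayBuffer {
    let l = buffer.length;
    const buf = new Int16Array(l);
    while (l--) {
        buf[l] = Math.max(-1, Math.min(1, buffer[l])) * 0x7FFF;
    }
    return buf.buffer;
}

// Float samples in [-1, 1] map onto the signed 16-bit range;
// out-of-range values clip instead of wrapping.
const out = new Int16Array(convertFloat32ToInt16(new Float32Array([1.5, 1.0, 0.5, -1.0])));
console.log(Array.from(out)); // [32767, 32767, 16383, -32767]
```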
startRecording() {
    navigator.mediaDevices.getUserMedia({ audio: true })
        .then(stream => {
            this.audioCtx = new AudioContext();
            this.audioCtx.onstatechange = (state) => { console.log(state); }

            var scriptNode = this.audioCtx.createScriptProcessor(4096, 1, 1);
            scriptNode.onaudioprocess = (audioProcessingEvent) => {
                var inputBuffer = audioProcessingEvent.inputBuffer;
                // Loop through the input channels (in this case there is only one)
                for (var channel = 0; channel < inputBuffer.numberOfChannels; channel++) {
                    var chunk = inputBuffer.getChannelData(channel);
                    // Convert to 16-bit PCM, because endianness does matter
                    this.MySignalRService.send("SendStream", this.convertFloat32ToInt16(chunk));
                }
            }

            var source = this.audioCtx.createMediaStreamSource(stream);
            source.connect(scriptNode);
            scriptNode.connect(this.audioCtx.destination);
            this.stream = stream;
        })
        .catch(function (e) {
            console.error('getUserMedia() error: ' + e.message);
        });
}
stopRecording() {
    try {
        let stream = this.stream;
        stream.getAudioTracks().forEach(track => track.stop());
        stream.getVideoTracks().forEach(track => track.stop());
        this.audioCtx.close();
    }
    catch (error) {
        console.error('stopRecording() error: ' + error);
    }
}
The next step is to convert my Int16Array data to a WAV file.
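That next step can be sketched as follows: a minimal, assumed WAV writer that prepends the standard 44-byte RIFF/WAVE header to the 16-bit PCM samples (mono, little-endian; the 44100 sample rate is an assumption and should match the AudioContext's actual rate):

```typescript
// Minimal WAV encoder: wraps 16-bit PCM mono samples in a 44-byte RIFF header.
function encodeWav(samples: Int16Array, sampleRate: number): ArrayBuffer {
    const bytesPerSample = 2;
    const dataSize = samples.length * bytesPerSample;
    const buffer = new ArrayBuffer(44 + dataSize);
    const view = new DataView(buffer);

    const writeAscii = (offset: number, text: string) => {
        for (let i = 0; i < text.length; i++) view.setUint8(offset + i, text.charCodeAt(i));
    };

    writeAscii(0, 'RIFF');
    view.setUint32(4, 36 + dataSize, true);                 // chunk size (little-endian)
    writeAscii(8, 'WAVE');
    writeAscii(12, 'fmt ');
    view.setUint32(16, 16, true);                           // fmt subchunk size
    view.setUint16(20, 1, true);                            // audio format: PCM
    view.setUint16(22, 1, true);                            // channels: mono
    view.setUint32(24, sampleRate, true);
    view.setUint32(28, sampleRate * bytesPerSample, true);  // byte rate
    view.setUint16(32, bytesPerSample, true);               // block align
    view.setUint16(34, 16, true);                           // bits per sample
    writeAscii(36, 'data');
    view.setUint32(40, dataSize, true);

    for (let i = 0; i < samples.length; i++) {
        view.setInt16(44 + i * bytesPerSample, samples[i], true);
    }
    return buffer;
}

const wav = encodeWav(new Int16Array([0, 16383, -16384]), 44100);
console.log(wav.byteLength); // 44-byte header + 6 bytes of samples = 50
```

In the browser, the resulting ArrayBuffer can be wrapped in a Blob of type audio/wav for download or playback; server-side, the same header layout applies when assembling received chunks.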
Sources that helped me:
I didn't add the code for configuring SignalR; that's not the purpose of this post.

Nice solution! Would you mind sharing your SignalR hub code, or pointing to any tutorial or documentation page describing how to do this? I'm having trouble receiving and processing the stream on the hub side.

If I find the time I'll try, but I probably won't... Sorry. I deleted the POC; it's now integrated into our project and extracting it would take too long :(