Node.js: send a stream from getUserMedia to a Node.js backend and forward it to the Google Cloud Platform Speech API

I'm working on a project where we need to use the Google Cloud Platform Speech API. I use getUserMedia to obtain a MediaStream, but I don't know what to send from it to the backend.

On the backend I have a simple Node.js server with socket.io, socket.io-stream, and the Google Speech API.

I'm looking at the second case: I want to send the stream to the backend and forward it to the Google Speech API. I really don't want to record an audio file, and for security reasons I don't want to send the stream directly from my frontend to Google.
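
For reference, socket.io-stream's documented pattern is: create a stream object, hand it to the other side via emit, then pipe bytes into it; on the receiving side the event payload is itself a Node-style stream. A minimal sketch of that pattern (someReadable and someWritable are placeholders, not code from this question):

const ss = require('socket.io-stream');

// Sender: create the stream, announce it, then pipe data into it.
const stream = ss.createStream();
ss(socket).emit('audioStream', stream);
someReadable.pipe(stream);

// Receiver: the payload is a stream that can be piped onward.
ss(socket).on('audioStream', (stream) => {
    stream.pipe(someWritable);
});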

Frontend

import { Component } from '@angular/core';
import { Context } from "./types/context";
import { KdSchema } from './types/kdschema/kd-schema';
import * as io from 'socket.io-client';
import * as ss from 'socket.io-stream';
declare var { navigator }: any;

@Component({
    selector: 'test-root',
    templateUrl: './test.component.html',
    styleUrls: ['./test.component.css']
})

export class TestComponent {
    stream: MediaStream;
    server = 'http://localhost:5000';
    socket;
    socketStream;

    constructor() {
        this.socket = io(this.server);
        this.socket.emit('connection');
        this.socketStream = ss.createStream();
        navigator.getUserMedia = navigator.getUserMedia ||
                                 navigator.webkitGetUserMedia ||
                                 navigator.mozGetUserMedia;
    }

    startRecording() {
        const mediaSession = {audio: true, video: false};

        const successCallback = (stream: MediaStream) => {
            this.stream = stream;
            ss(this.socket).emit('audioStream', stream.getAudioTracks()[0]);
        }

        if (navigator.getUserMedia) {
            navigator.getUserMedia(mediaSession, successCallback, (err) => console.log(err));
        } else {
            console.log('Error: getUserMedia not supported !');
        }
    }

    stopRecording() {}
}

Backend

let app = require('express')();
let http = require('http');
let io = require('socket.io').listen(5000);
let socketStream = require('socket.io-stream');
let Speech = require('@google-cloud/speech')(MY CREDENTIAL);


// The encoding of the audio file, e.g. 'LINEAR16'
const encoding = 'LINEAR16';

// The sample rate of the audio file in hertz, e.g. 16000
const sampleRateHertz = 16000;

// The BCP-47 language code to use, e.g. 'en-US'
const languageCode = 'fr';

const request = {
    config: {
        encoding: encoding,
        sampleRateHertz: sampleRateHertz,
        languageCode: languageCode
    },
    interimResults: false // If you want interim results, set this to true
};

// Create a recognize stream
const recognizeStream = Speech.streamingRecognize(request)
    .on('data', data => {
        console.log(data[0]);
    }).on('error', err => console.log('Error: ', err));

io.on('connection', (socket) => {
    console.log('user connected');

    socket.on('disconnect', function() {
        console.log('user disconnected');
    });

    socketStream(socket).on('audioStream', stream => {
        console.log(stream);
    });
});

My question is: what do I have to send to the backend?

I made two changes: one to your recognizeStream and one to your socket.io-stream handler.

let app = require('express')();
let http = require('http');
let io = require('socket.io').listen(5000);
let socketStream = require('socket.io-stream');
let Speech = require('@google-cloud/speech')(MY CREDENTIAL);


// The encoding of the audio file, e.g. 'LINEAR16'
const encoding = 'LINEAR16';

// The sample rate of the audio file in hertz, e.g. 16000
const sampleRateHertz = 16000;

// The BCP-47 language code to use, e.g. 'en-US'
const languageCode = 'fr';

const request = {
    config: {
        encoding: encoding,
        sampleRateHertz: sampleRateHertz,
        languageCode: languageCode
    },
    interimResults: false // If you want interim results, set this to true
};

// Create a recognize stream
const recognizeStream = Speech.createRecognizeStream(request)
    .on('data', data => {
        console.log("Receiving data!!!!!!"); 
        console.log(data[0]);
    }).on('error', err => console.log('Error: ', err));

io.on('connection', (socket) => {
    console.log('user connected');

    socket.on('disconnect', function() {
        console.log('user disconnected');
    });

    socketStream(socket).on('audioStream', stream => {
        //console.log(stream);
        console.log("Got a stream");
        stream.pipe(recognizeStream);
    });
});

If that doesn't work, pipe the stream to a file and check its sample rate with something like Audacity. In the past I've had problems when I sent Google a WAV file with the wrong sample rate and/or encoding specified.
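
A minimal sketch of that debugging step, applied to the handler above (debug-audio.raw is just an illustrative filename):

let fs = require('fs');

socketStream(socket).on('audioStream', stream => {
    // Tee the incoming audio to disk alongside the recognizer so the raw
    // bytes can be imported into Audacity (File > Import > Raw Data) and
    // checked against the encoding and sampleRateHertz configured above.
    stream.pipe(fs.createWriteStream('debug-audio.raw'));
    stream.pipe(recognizeStream);
});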


Comments

Can you do something like stream.pipe(recognizeStream) inside the socketStream(socket).on('audioStream', … block?

No, that throws an error. The MediaStream object is definitely not the right thing to send without some extra work, but I don't know what that work is.

Thanks, this works for the backend. Now I have to figure out what to send from the frontend.

Yes, I already replied on that point; it helped, but I still can't get it to work. Thanks anyway.

Another thing I've run into in the past is that Chrome requires HTTPS to access the microphone. Not sure if that's still the case. Try Firefox, or create some certificates so you can test over HTTPS.

OK! Thanks for the tip!
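
On the open frontend question, one common approach (not from this thread, so treat it as a sketch) is to tap the MediaStream with the Web Audio API, convert the Float32 samples to 16-bit LINEAR16 PCM, and write the chunks into a socket.io-stream, so the backend handler above receives raw audio it can pipe straight to the recognizer. ScriptProcessorNode is deprecated but matches browsers of this era; the 16 kHz AudioContext option and the browser Buffer polyfill are assumptions:

startStreaming(stream: MediaStream) {
    // Sample rate must match sampleRateHertz on the backend (16000).
    // Note: the sampleRate option is not supported by every browser.
    const audioContext = new AudioContext({ sampleRate: 16000 });
    const source = audioContext.createMediaStreamSource(stream);
    const processor = audioContext.createScriptProcessor(4096, 1, 1);
    const out = ss.createStream();
    ss(this.socket).emit('audioStream', out);

    processor.onaudioprocess = (event) => {
        // Convert Float32 samples in [-1, 1] to 16-bit signed PCM (LINEAR16).
        const float32 = event.inputBuffer.getChannelData(0);
        const int16 = new Int16Array(float32.length);
        for (let i = 0; i < float32.length; i++) {
            const s = Math.max(-1, Math.min(1, float32[i]));
            int16[i] = s < 0 ? s * 0x8000 : s * 0x7FFF;
        }
        // Assumes a Buffer polyfill in the browser bundle (socket.io-stream
        // ships one when browserified); otherwise send a Uint8Array view.
        out.write(Buffer.from(int16.buffer));
    };

    source.connect(processor);
    processor.connect(audioContext.destination); // keeps onaudioprocess firing
}

And, as the last comment notes, Chrome only exposes the microphone on secure origins, so test this over HTTPS (or on localhost, or in Firefox).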