C# SpeakSsmlAsync returns BadRequest
Tags: c#, .net, speech-synthesis, azure-cognitive-services, azure-speech

When calling SpeakSsmlAsync (Microsoft Speech SDK), the following error message is returned:

> CANCELED: Reason=Error
> CANCELED: ErrorCode=BadRequest
> CANCELED: ErrorDetails=[HTTPAPI result code = HTTPAPI_OK. HTTP status code=400.]
> CANCELED: Did you update the subscription info?
Steps to reproduce:

- Take the text-to-speech quickstart sample and replace the call to SpeakTextAsync with SpeakSsmlAsync
- Type some plain text, e.g. "abracadabra"
- --> ErrorCode=BadRequest

Environment:

- .NET Framework 4.6.1
- Windows 10 version 17134
- Service region = "westeurope"
using System;
using System.Threading.Tasks;
using Microsoft.CognitiveServices.Speech;

namespace helloworld
{
    class Program
    {
        private static string endpointSpeechKey = "<MyOwnServiceKey>";
        private static string region = "westeurope";

        public static async Task SynthesisToSpeakerAsync()
        {
            var config = SpeechConfig.FromSubscription(endpointSpeechKey, region);
            using (var synthesizer = new SpeechSynthesizer(config))
            {
                Console.WriteLine("Type some text that you want to speak...");
                Console.Write("> ");
                string text = Console.ReadLine();
                using (var result = await synthesizer.SpeakSsmlAsync(text))
                {
                    if (result.Reason == ResultReason.SynthesizingAudioCompleted)
                    {
                        Console.WriteLine($"Speech synthesized to speaker for text [{text}]");
                    }
                    else if (result.Reason == ResultReason.Canceled)
                    {
                        var cancellation = SpeechSynthesisCancellationDetails.FromResult(result);
                        Console.WriteLine($"CANCELED: Reason={cancellation.Reason}");
                        if (cancellation.Reason == CancellationReason.Error)
                        {
                            Console.WriteLine($"CANCELED: ErrorCode={cancellation.ErrorCode}");
                            Console.WriteLine($"CANCELED: ErrorDetails=[{cancellation.ErrorDetails}]");
                            Console.WriteLine($"CANCELED: Did you update the subscription info?");
                        }
                    }
                }

                // This is to give some time for the speaker to finish playing back the audio
                Console.WriteLine("Press any key to exit...");
                Console.ReadKey();
            }
        }

        static void Main()
        {
            SynthesisToSpeakerAsync().Wait();
        }
    }
}
Debug screenshot
Azure only seems to accept SSML when it contains a voice tag; otherwise you receive an HTTP 400 error. With the code below, the call to SpeakSsmlAsync runs successfully:
text = @"<speak version='1.0' xmlns='https://www.w3.org/2001/10/synthesis' xml:lang='en-US'><voice name='en-US-ZiraRUS'>abracadabra</voice></speak>";
using (var result = await synthesizer.SpeakSsmlAsync(text))
Whereas with plain text (no speak/voice wrapper) the same call fails with HTTP 400:

text = @"abracadabra";
using (var result = await synthesizer.SpeakSsmlAsync(text))
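Given that the service appears to require a voice tag, one practical workaround is to wrap plain text in a minimal SSML envelope before calling SpeakSsmlAsync. The helper below is a sketch under that assumption; the `WrapInSsml` name and the `en-US-ZiraRUS` default voice are illustrative choices, not part of the accepted answer:

```
using System.Security;

static class SsmlHelper
{
    // Wraps plain text in a minimal SSML envelope with an explicit voice,
    // which is what the Azure TTS endpoint appears to require.
    public static string WrapInSsml(string text,
                                    string lang = "en-US",
                                    string voice = "en-US-ZiraRUS")
    {
        // Escape XML special characters so arbitrary user input stays valid SSML.
        string escaped = SecurityElement.Escape(text);
        return "<speak version='1.0' xmlns='https://www.w3.org/2001/10/synthesis' " +
               $"xml:lang='{lang}'><voice name='{voice}'>{escaped}</voice></speak>";
    }
}
```

With this in place, the quickstart sample only needs one changed line: `await synthesizer.SpeakSsmlAsync(SsmlHelper.WrapInSsml(text))`.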
Be careful when searching for Microsoft SSML: there are different SSML references, and only the one for the Azure Speech service (which is what you need when programming against the Azure Speech service) applies here.
Yes, the Azure TTS service only accepts SSML with a voice tag. The reason is that there are so many voices that it is better to explicitly specify which voice to use.

My understanding is that the voice is optional, so the question may be why the 400 error occurs at all. Could it be that the speech service is looking up a default voice that isn't installed? I would flag this as a defect in the speech service documentation and/or report it via feedback.

@Micromuncher I just got an answer on the MSDN forum confirming this behavior. They will check whether it can be fixed.

Can you actually get hold of the audio file that is returned and played back automatically? Since every new request fetches a new audio file from the server (even if the text hasn't changed), it would be handy to store the audio file for later use and play it back from a "cache". Anyone?

@mramosch - Yes, of course, that's what you would normally do. If it doesn't work for you, posting a corresponding question is probably the right way to go.

I thought I'd better put the question where people are involved who, I hope, have the necessary knowledge to help me. Usually when I post new questions I don't get any response...
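The caching idea raised in the comments can be sketched as follows: `SpeakSsmlAsync` returns a `SpeechSynthesisResult` whose `AudioData` property holds the raw audio bytes, so those bytes can be written to disk and replayed later without another service round-trip. The hash-based file naming and the `TtsCache` class are illustrative assumptions, not an official API:

```
using System;
using System.IO;
using System.Security.Cryptography;
using System.Text;
using System.Threading.Tasks;
using Microsoft.CognitiveServices.Speech;

static class TtsCache
{
    // Returns cached audio for the given SSML if present; otherwise calls the
    // service once and stores result.AudioData under a hash-derived file name.
    public static async Task<byte[]> GetOrSynthesizeAsync(
        SpeechSynthesizer synthesizer, string ssml, string cacheDir)
    {
        Directory.CreateDirectory(cacheDir);
        using (var sha = SHA256.Create())
        {
            string name = BitConverter.ToString(
                sha.ComputeHash(Encoding.UTF8.GetBytes(ssml))).Replace("-", "");
            string path = Path.Combine(cacheDir, name + ".wav");

            if (File.Exists(path))
                return File.ReadAllBytes(path);   // cache hit: no service call

            using (var result = await synthesizer.SpeakSsmlAsync(ssml))
            {
                if (result.Reason != ResultReason.SynthesizingAudioCompleted)
                    throw new InvalidOperationException(
                        $"Synthesis failed: {result.Reason}");

                File.WriteAllBytes(path, result.AudioData);
                return result.AudioData;
            }
        }
    }
}
```

Note that a cached file goes stale if the voice, language, or output format in the SpeechConfig changes, since only the SSML text is hashed here.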