No response to HTTP GET request in WebAPI on .NET 4.5 when converting text to speech with SpeechSynthesis
I am trying to set up a simple web service using WebAPI. Here is my code:
public class SpeakController : ApiController
{
    //
    // api/speak
    public HttpResponseMessage Get(String textToConvert, String outputFile, string gender, string age = "Adult")
    {
        VoiceGender voiceGender = (VoiceGender)Enum.Parse(typeof(VoiceGender), gender);
        VoiceAge voiceAge = (VoiceAge)Enum.Parse(typeof(VoiceAge), age);
        using (SpeechSynthesizer synthesizer = new SpeechSynthesizer())
        {
            synthesizer.SelectVoiceByHints(voiceGender, voiceAge);
            synthesizer.SetOutputToWaveFile(outputFile, new SpeechAudioFormatInfo(8000, AudioBitsPerSample.Sixteen, AudioChannel.Mono));
            synthesizer.Speak(textToConvert);
        }
        return Request.CreateResponse(HttpStatusCode.OK, new Response { HttpStatusCode = (int)HttpStatusCode.OK, Message = "Payload Accepted." });
    }
}
This code is fairly straightforward and by no means production-ready. But in my testing, I noticed the following on any request to the controller:
- The WAV file is generated successfully
- While debugging, I can see control hit the return and exit the method
- However, my browser keeps spinning and I never receive a response from the server
I also changed synthesizer.Speak to synthesizer.SpeakAsync and ran into the same problem.
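(One thing worth noting about the SpeakAsync variant: SpeakAsync returns immediately, so a plain swap inside the existing using block would dispose the synthesizer before the wave file is finished. A minimal sketch of how the async call could be awaited, assuming the same controller variables as above; this is illustrative, not the original code:)

using (SpeechSynthesizer synthesizer = new SpeechSynthesizer())
using (ManualResetEvent done = new ManualResetEvent(false))
{
    synthesizer.SelectVoiceByHints(voiceGender, voiceAge);
    synthesizer.SetOutputToWaveFile(outputFile, new SpeechAudioFormatInfo(8000, AudioBitsPerSample.Sixteen, AudioChannel.Mono));
    // Signal when the async prompt has finished, then block until it has,
    // so the using block does not dispose the synthesizer mid-synthesis.
    synthesizer.SpeakCompleted += (s, e) => done.Set();
    synthesizer.SpeakAsync(textToConvert);
    done.WaitOne();
}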
However, when I tested the pieces in isolation as shown below, the code worked as expected.
Testing the WebAPI call with the speech portion commented out:
public class SpeakController : ApiController
{
    //
    // api/speak
    public HttpResponseMessage Get(String textToConvert, String outputFile, string gender, string age = "Adult")
    {
        VoiceGender voiceGender = (VoiceGender)Enum.Parse(typeof(VoiceGender), gender);
        VoiceAge voiceAge = (VoiceAge)Enum.Parse(typeof(VoiceAge), age);
        //using (SpeechSynthesizer synthesizer = new SpeechSynthesizer())
        //{
        //    synthesizer.SelectVoiceByHints(voiceGender, voiceAge);
        //    synthesizer.SetOutputToWaveFile(outputFile, new SpeechAudioFormatInfo(8000, AudioBitsPerSample.Sixteen, AudioChannel.Mono));
        //    synthesizer.Speak(textToConvert);
        //}
        return Request.CreateResponse(HttpStatusCode.OK, new Response { HttpStatusCode = (int)HttpStatusCode.OK, Message = "Payload Accepted." });
    }
}
Testing the speech piece on its own in a console application:
static string usageInfo = "Invalid or no input arguments!"
    + "\n\nUsage: initiatives \"text to speak\" c:\\path\\to\\generate.wav gender"
    + "\nGender:\n\tMale or \n\tFemale"
    + "\n";

static void Main(string[] args)
{
    if (args.Length != 3)
    {
        Console.WriteLine(usageInfo);
    }
    else
    {
        ConvertStringToSpeechWav(args[0], args[1], (VoiceGender)Enum.Parse(typeof(VoiceGender), args[2]));
    }
    Console.WriteLine("Press any key to continue...");
    Console.ReadLine();
}

static void ConvertStringToSpeechWav(String textToConvert, String pathToCreateWavFile, VoiceGender gender, VoiceAge age = VoiceAge.Adult)
{
    using (SpeechSynthesizer synthesizer = new SpeechSynthesizer())
    {
        synthesizer.SelectVoiceByHints(gender, age);
        synthesizer.SetOutputToWaveFile(pathToCreateWavFile, new SpeechAudioFormatInfo(8000, AudioBitsPerSample.Sixteen, AudioChannel.Mono));
        synthesizer.Speak(textToConvert);
    }
}
WebAPI and SpeechSynthesis don't seem to play well together. Any help with this would be much appreciated. Thanks!

I don't know why this happens, but running the SpeechSynthesizer on a separate thread seems to do the trick (incompatible threading models?). That's how I've done it in the past, based on:
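(A minimal sketch of that separate-thread workaround, assuming the same Get(...) parameters as the controller above; the exact shape is an assumption, not the answerer's original code:)

// Run the synthesis on a dedicated thread and block the request thread
// until it finishes. This sidesteps the apparent threading-model conflict
// between SpeechSynthesizer and the ASP.NET request thread.
Thread speechThread = new Thread(() =>
{
    using (SpeechSynthesizer synthesizer = new SpeechSynthesizer())
    {
        synthesizer.SelectVoiceByHints(voiceGender, voiceAge);
        synthesizer.SetOutputToWaveFile(outputFile, new SpeechAudioFormatInfo(8000, AudioBitsPerSample.Sixteen, AudioChannel.Mono));
        synthesizer.Speak(textToConvert);
    }
});
speechThread.Start();
speechThread.Join(); // wait for the WAV to be fully written before responding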
What kind of location is your pathToCreateWavFile (local disk, network, what path format), and in particular does your IIS application pool have permission to write to that folder?

Since I'm testing on my development machine, the path is local, specifically 'C:\temp'. And no, I don't think it's a permissions issue, because the WAV is generated successfully. For what it's worth, this is the IIS Express that ships with Visual Studio 2012.

Well, obviously if the file is fully generated then that's not the problem. Thought I'd check. :-)

It's usually best to copy the relevant information into your post rather than just posting a link. If the link dies, so does your information. And you can put it in better context to improve its quality.

Thanks for the link! Nariman's answer under "Update 2" worked perfectly for me.