C# How can I recognize speech events using SpInProcRecoContext? (c#, vb.net, speech-recognition, sapi)

I'm nearly finished with a personal project that modifies the Windows speech dictionary through C# and SAPI 5.4. The last piece I'm working on is how to get the SAPI phonemes for a given word. I've already found a way to do this through a C# form using speech recognition. However, I'm now trying to get recognition working with a speech file (*.wav) as the input, and I understand this needs to be done through an in-process (InProc) recognizer.

Every SAPI 5.4 recognition example I've found from Microsoft (e.g., in VB) targets SpSharedRecoContext rather than SpInProcRecoContext, and I believe I've seen comments noting that some of those examples are missing details. I've also found multiple threads on this forum, mostly answered by Eric Brown, mentioning that using SpInProcRecoContext requires more setup than SpSharedRecoContext, but I haven't found a clear answer on how to capture speech recognition events when using SpInProcRecoContext in C#.

How can I proceed?

Here is my code, edited for better organization:

Below is another example, in VB, that combines the Microsoft samples but still doesn't work. Please see the comment in Command1_Click for where I hit the runtime error:

Imports SpeechLib

Public Class Form1
    Const WaveFile = "C:\Reco\MYWAVE.wav"
    Dim WithEvents RC As SpInProcRecoContext
    Dim Recognizer As SpInprocRecognizer
    Dim myGrammar As ISpeechRecoGrammar
    Dim MyFileStream As SpeechLib.SpFileStream
    Dim MyVoice As SpeechLib.SpVoice
    Dim MyText As String

    Private Sub Form1_Load(sender As Object, e As EventArgs) Handles MyBase.Load
        On Error GoTo EH

        RC = New SpInProcRecoContext
        Recognizer = RC.Recognizer

        myGrammar = RC.CreateGrammar
        myGrammar.DictationSetState(SpeechRuleState.SGDSActive)

        MyVoice = New SpVoice
        MyVoice.Voice = MyVoice.GetVoices("gender=male").Item(0)

        Dim Category As SpObjectTokenCategory
        Category = New SpObjectTokenCategory
        Category.SetId(SpeechStringConstants.SpeechCategoryAudioIn)

        Dim Token As SpObjectToken
        Token = New SpObjectToken
        Token.SetId(Category.Default)
        Recognizer.AudioInput = Token

        TextBox1.Text = "play the eight of clubs"
EH:
        If Err.Number Then ShowErrMsg()
    End Sub

    Private Sub Command1_Click(sender As Object, e As EventArgs) Handles Command1.Click
        MyFileStream = MakeWAVFileFromText(TextBox1.Text, WaveFile)
        MyFileStream.Open(WaveFile)
        Recognizer.AudioInputStream = MyFileStream ' ==> produces the runtime error!!!
    End Sub

    Private Sub RC_Recognition(ByVal StreamNumber As Long, ByVal StreamPosition As Object, ByVal RecognitionType As SpeechLib.SpeechRecognitionType, ByVal Result As SpeechLib.ISpeechRecoResult) Handles RC.Recognition
        On Error GoTo EH
        TextBox2.Text = Result.PhraseInfo.GetText
EH:
        If Err.Number Then ShowErrMsg()
    End Sub

    Private Sub ShowErrMsg()
        ' Declare identifiers:
        Const NL = vbNewLine
        Dim T As String

        T = "Desc: " & Err.Description & NL
        T = T & "Err #: " & Err.Number
        MsgBox(T, vbExclamation, "Run-Time Error")
        End
    End Sub

    Private Function MakeWAVFileFromText(ByVal strText As String, ByVal strFName As String) As SpFileStream
        On Error GoTo EH

        ' Declare identifiers:
        Dim FileStream As SpFileStream
        Dim Voice As SpVoice

        ' Instantiate voice and file stream objects:
        Voice = New SpVoice
        FileStream = New SpFileStream

        ' Open the specified .wav file, set the voice output
        ' to the file, and speak synchronously:
        FileStream.Open(strFName, SpeechStreamFileMode.SSFMCreateForWrite, True)
        Voice.AudioOutputStream = FileStream
        Voice.Speak(strText, SpeechVoiceSpeakFlags.SVSFIsXML)

        ' Close the file and return a reference to the FileStream object:
        FileStream.Close()
        MakeWAVFileFromText = FileStream
EH:
        If Err.Number Then ShowErrMsg()
    End Function
End Class

' https://msdn.microsoft.com/en-us/library/ee125184%28v=vs.85%29.aspx
' https://msdn.microsoft.com/en-us/library/ee125344(v=vs.85).aspx

UPDATE: So this works, but the end-of-stream event doesn't fire, which keeps Application.Run from returning. I could use a stopwatch to shut everything down as a workaround, but obviously that's not ideal. Keep in mind I'm still new to C#, so my comments may not be 100% accurate.

Do you know how I can get the end-of-stream event to fire?
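(For what it's worth, the stopwatch workaround mentioned in the update could be sketched roughly like this; the 10-second timeout is an arbitrary placeholder, and this only papers over the missing EndStream event rather than fixing it:)

```csharp
// Fallback: leave the message loop even if EndStream never fires.
System.Windows.Forms.Timer watchdog = new System.Windows.Forms.Timer();
watchdog.Interval = 10000; // arbitrary 10 s; tune to the length of the wav file
watchdog.Tick += (s, e) => { watchdog.Stop(); Application.ExitThread(); };
watchdog.Start();

Application.Run(); // returns once ExitThread is called (by EndStream or by the timer)
```

A WinForms timer is used here (rather than a threading timer) so the Tick callback runs on the same thread as the message loop that ExitThread needs to terminate.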


Sorry it took so long, but looking over your code, I see a few possible issues:

You need to set the input stream on the recognizer before making the recognizer active. Once the recognizer goes active, it starts reading immediately, and changing the input stream on an active recognizer will result in an error.
You do need to set the recognition profile and the recognition engine before making the recognizer active.
Also, I would create a separate SpObjectTokenCategory object for each type.
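Put concretely, the ordering above might look like the following C# sketch (untested, written against the same SpeechLib interop as the code in the question; `OnRecognition` is a hypothetical handler name, and the essential point is that AudioInputStream is assigned while the recognizer is still idle, before any grammar goes active):

```csharp
// Assumes a reference to Interop.SpeechLib.dll and an OnRecognition handler method.
SpInprocRecognizer recognizer = new SpInprocRecognizer();

// 1. Configure the input *before* anything goes active.
SpFileStream input = new SpFileStream();
input.Open(@"C:\Reco\MYWAVE.wav", SpeechStreamFileMode.SSFMOpenForRead, true);
recognizer.AudioInputStream = input; // safe only while the recognizer is idle

// 2. Only now create the context, hook events, and activate a grammar.
SpInProcRecoContext context = (SpInProcRecoContext)recognizer.CreateRecoContext();
context.Recognition += new _ISpeechRecoContextEvents_RecognitionEventHandler(OnRecognition);

ISpeechRecoGrammar grammar = context.CreateGrammar();
grammar.DictationLoad("", SpeechLoadOption.SLOStatic);
grammar.DictationSetState(SpeechRuleState.SGDSActive); // the engine starts reading here
```

Assigning AudioInputStream after SGDSActive is exactly the spot where the VB example above hits its runtime error.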
I've come back around to provide the complete solution, which lets me take a given word, create a speech file stream, convert the text to speech, and then extract the word's SAPI phonemes. It contains the answer to my original question. Note that `using SpeechLib` also requires a reference to Interop.SpeechLib.dll, which is the COM Microsoft Speech Object Library v5.4.

Keep in mind that this code is used as an inline function within a parent application called VoiceAttack, so it's formatted slightly differently than you'd expect in Visual Studio. Converting from this format to Visual Studio isn't difficult, and hopefully others can use this as a springboard for future work.

Please note that I'm a C# hobbyist. The code does exactly what I need in terms of function and speed, but it may not be as optimized as some would like, and the descriptive comments are only as good as my current knowledge. I'm certainly open to suggestions on how to improve it.

Many thanks to Eric Brown for his feedback.

using SpeechLib;
using System;
using System.IO;
using System.Threading;
using System.Windows.Forms;

class VAInline
{
    // Initialize variables needed throughout this code
    ISpeechRecoGrammar grammar; // Declare the grammar
    SpFileStream FileStream; // Declare the voice recognition input file stream
    string AudioPath = null; // Declare directory path to wav file
    string GrammarPath = null; // Declare directory path to grammar file
    string RecognitionFlag = "";
    string RecognitionConfidence = "";
    bool UseDictation; // Declare boolean variable for storing pronunciation dictation grammar setting

    public void main()
    {
        // Reset relevant VoiceAttack text variables
        VA.SetText("~~RecognitionError", null);
        VA.SetText("~~RecognizedText", null);
        VA.SetText("~~SAPIPhonemes", null);
        VA.SetText("~~SAPIPhonemesRaw", null);
        //VA.SetText("~~FalseRecognitionFlag", null);

        // Retrieve the desired word data contained within VoiceAttack text variable
        string ProcessText = null; // Initialize string variable for storing the text of interest
        if (VA.GetText("~~ProcessText") != null) // Check if user provided valid text in input variable
            ProcessText = VA.GetText("~~ProcessText"); // Store text of interest held by VA text variable
        else
        {
            VA.SetText("~~RecognitionError", "Error in input text string (SAPI)"); // Send error detail back to VoiceAttack as text variable
            return; // End code processing
        }

        // Retrieve path to speech grammar XML file from VoiceAttack
        GrammarPath = VA.GetText("~~GrammarFilePath");

        // Retrieve path to voice recognition input wav file from VoiceAttack
        AudioPath = VA.GetText("~~AudioFilePath");

        // Check if TTS engine is voicing the input for the speech recognition engine
        if (VA.GetBoolean("~~UserVoiceInput") == false)
        {
            //VA.WriteToLog("creating wav file");
            if (TextToWav(AudioPath, ProcessText) == false) // Create wav file with specified path that voices specified text (with text-to-speech) and check if the creation was NOT successful
                return; // Stop executing the code
        }

        // Create speech recognizer and associated context
        SpInprocRecognizer MyRecognizer = new SpInprocRecognizer(); // Create new instance of SpInprocRecognizer
        SpInProcRecoContext RecoContext = (SpInProcRecoContext)MyRecognizer.CreateRecoContext(); // Initialize the SpInProcRecoContext (in-process recognition context)

        try // Attempt the following code
        {
            // Open the created wav in a new FileStream
            FileStream = new SpFileStream(); // Create new instance of SpFileStream
            FileStream.Open(AudioPath, SpeechStreamFileMode.SSFMOpenForRead, true); // Open the specified file in the FileStream for reading with events enabled

            // Set the voice recognition input as the FileStream
            MyRecognizer.AudioInputStream = FileStream; // This will internally "speak" the wav file for input into the voice recognition engine

            // Set up recognition event handling
            RecoContext.Recognition += new _ISpeechRecoContextEvents_RecognitionEventHandler(RecoContext_Recognition); // Register for successful voice recognition events
            RecoContext.FalseRecognition += new _ISpeechRecoContextEvents_FalseRecognitionEventHandler(RecoContext_FalseRecognition); // Register for failed (low confidence) voice recognition events
            if (VA.GetBoolean("~~ShowRecognitionHypothesis") == true) // Check if user wants to show voice recognition hypothesis results
                RecoContext.Hypothesis += new _ISpeechRecoContextEvents_HypothesisEventHandler(RecoContext_Hypothesis); // Register for voice recognition hypothesis events
            RecoContext.EndStream += new _ISpeechRecoContextEvents_EndStreamEventHandler(RecoContext_EndStream); // Register for end of file stream events

            // Set up the grammar
            grammar = RecoContext.CreateGrammar(); // Initialize the grammar object
            UseDictation = (bool?)VA.GetBoolean("~~UseDictation") ?? false; // Set UseDictation based on value from VoiceAttack boolean variable
            if (UseDictation == true) // Check if pronunciation dictation grammar should be used with speech recognition
            {
                //grammar.DictationLoad("", SpeechLoadOption.SLOStatic); // Load blank dictation topic into the grammar
                grammar.DictationLoad("Pronunciation", SpeechLoadOption.SLOStatic); // Load pronunciation dictation topic into the grammar so that the raw (unfiltered) phonemes may be retrieved
                grammar.DictationSetState(SpeechRuleState.SGDSActive); // Activate dictation grammar
            }
            else
            {
                grammar.CmdLoadFromFile(GrammarPath, SpeechLoadOption.SLODynamic); // Load custom XML grammar file
                grammar.CmdSetRuleIdState(0, SpeechRuleState.SGDSActive); // Activate the loaded grammar
            }
            Application.Run(); // Starts a standard application message loop on the current thread
        }
        catch // Handle exceptions in above code
        {
            VA.SetText("~~RecognitionError", "Error during voice recognition setup (SAPI)"); // Send error detail back to VoiceAttack as text variable
            return; // Stop executing the code
        }
        finally // Runs whether an exception is encountered or not
        {
            MyRecognizer = null; // Set to null in preparation for garbage collection
            if (FileStream != null) // Guard against exceptions thrown before the stream was opened
            {
                FileStream.Close(); // Close the input FileStream
                FileStream = null; // Set to null in preparation for garbage collection
            }

            // Close up recognition event handling
            RecoContext.Recognition -= new _ISpeechRecoContextEvents_RecognitionEventHandler(RecoContext_Recognition); // Unregister for successful voice recognition events
            RecoContext.FalseRecognition -= new _ISpeechRecoContextEvents_FalseRecognitionEventHandler(RecoContext_FalseRecognition); // Unregister for failed (low confidence) voice recognition events
            if (VA.GetBoolean("~~ShowRecognitionHypothesis") == true) // Check if user wanted to show voice recognition hypothesis results
                RecoContext.Hypothesis -= new _ISpeechRecoContextEvents_HypothesisEventHandler(RecoContext_Hypothesis); // Unregister for voice recognition hypothesis events
            RecoContext.EndStream -= new _ISpeechRecoContextEvents_EndStreamEventHandler(RecoContext_EndStream); // Unregister for end of file stream events
            RecoContext = null; // Set to null in preparation for garbage collection
        }
        //VA.WriteToLog("voice recognition complete"); // Output info to event log
    }

    // Function for converting text to a voiced wav file via text-to-speech
    public bool TextToWav(string FilePath, string text)
    {
        //VA.WriteToLog("creating wav file"); // Output info to event log
        SpFileStream stream = new SpFileStream(); // Create new SpFileStream instance
        try // Attempt the following code
        {
            if (System.IO.File.Exists(FilePath) == true) // Check if voice recognition wav file already exists
                System.IO.File.Delete(FilePath); // Delete existing voice recognition wav file
            stream.Format.Type = SpeechAudioFormatType.SAFT48kHz16BitStereo; // Set the file stream audio format
            stream.Open(FilePath, SpeechStreamFileMode.SSFMCreateForWrite, true); // Open the specified file for writing with events enabled
            SpVoice voice = new SpVoice(); // Create new SPVoice instance
            voice.Volume = 100; // Set the volume level of the text-to-speech voice
            voice.Rate = -2; // Set the rate at which text is spoken by the text-to-speech engine
            string NameAttribute = "Name = " + VA.GetText("~~TextToSpeechVoice");
            voice.Voice = voice.GetVoices(NameAttribute).Item(0);
            //voice.Speak(text);
            voice.AudioOutputStream = stream; // Send the audio output to the file stream
            voice.Speak(text, SpeechVoiceSpeakFlags.SVSFDefault); // Internally "speak" the inputted text (which records it in the wav file)
            voice = null; // Set to null in preparation for garbage collection
        }
        catch // Handle exceptions in above code
        {
            VA.SetText("~~RecognitionError", "Error during wav file creation (SAPI)"); // Send error detail back to VoiceAttack as text variable
            return false; // Send "false" back to calling code line
        }
        finally // Runs whether an exception is encountered or not
        {
            stream.Close(); // Close the file stream
            stream = null; // Set to null in preparation for garbage collection
        }
        return true; // Send "true" back to calling code line
    }

    // Event handler for successful (higher confidence) voice recognition
    public void RecoContext_Recognition(int StreamNumber, object StreamPosition, SpeechRecognitionType RecognitionType, ISpeechRecoResult Result)
    {
        //VA.WriteToLog("Recognition successful"); // Output info to event log

        //VA.SetText("~~FalseRecognitionFlag", ""); // Send blank recognition flag ("") back to VoiceAttack as text variable
        //RecognitionFlag = ""; // Set the RecognitionFlag as blank
        RecognitionProcessing(Result); // Process the voice recognition result
        //if (UseDictation == false) // Check if pronunciation dictation grammar should NOT be used with speech recognition
        GetPhonemes(Result); // Retrieve SAPI phonemes from recognition result
    }

    // Event handler for unsuccessful (low confidence) voice recognition
    public void RecoContext_FalseRecognition(int StreamNumber, object StreamPosition, ISpeechRecoResult Result)
    {
        //VA.WriteToLog("Low confidence recognition"); // Output info to event log

        //VA.WriteToLog(Result.PhraseInfo.GetText());
        //VA.SetText("~~FalseRecognitionFlag", "*"); // Send unsuccessful recognition flag (text character) back to VoiceAttack as text variable
        RecognitionFlag = "*"; // Set the RecognitionFlag as "*"
        RecognitionProcessing(Result); // Process the voice recognition result
        GetPhonemes(Result); // Retrieve SAPI phonemes from recognition result
    }

    // Event handler for voice recognition hypotheses
    public void RecoContext_Hypothesis(int StreamNumber, object StreamPosition, ISpeechRecoResult Result)
    {
        //VA.WriteToLog("Recognition hypothesis"); // Output info to event log

        float confidence = Result.PhraseInfo.Elements.Item(0).EngineConfidence;
        VA.WriteToLog("Hypothesis = " + Result.PhraseInfo.GetText() + " (" + Decimal.Round(Convert.ToDecimal(confidence), (confidence > 0.01 ? 3 : 4)) + ")"); // Output info to event log
    }

    // Event handler for reaching the end of an audio input stream
    public void RecoContext_EndStream(int StreamNumber, object StreamPosition, bool StreamReleased)
    {
        // VA.WriteToLog("End of stream, cleaning up now"); // Output info to event log

        // Clean up now that voice recognition is complete
        try // Attempt the following code
        {
            if (UseDictation == true)
                grammar.DictationSetState(SpeechRuleState.SGDSInactive); // Deactivate dictation grammar
            else
                grammar.CmdSetRuleIdState(0, SpeechRuleState.SGDSInactive); // Deactivate the loaded grammar
        }
        catch // Handle exceptions in above code
        {
            VA.SetText("~~RecognitionError", "Error during cleanup process (SAPI)"); // Send error detail back to VoiceAttack as text variable
        }
        finally // Runs whether an exception is encountered or not
        {
            Application.ExitThread(); // Terminates the message loop on the current thread
        }
    }

    // Function for processing voice recognition results
    public void RecognitionProcessing(ISpeechRecoResult Result)
    {
        //VA.WriteToLog("Processing recognition result"); // Output info to event log

        try // Attempt the following code
        {
            string RecognizedText = Result.PhraseInfo.GetText().Trim(); // Store recognized text    
            float confidence = Result.PhraseInfo.Elements.Item(0).EngineConfidence; // Get confidence of voice recognition result
            decimal RecognitionConfidenceScore = Decimal.Round(Convert.ToDecimal(confidence), (confidence > 0.01 ? 3 : 4)); // Convert the confidence of the voice recognition result to a decimal and round it
            string RecognitionConfidenceLevel = Result.PhraseInfo.Elements.Item(0).ActualConfidence.ToString().Replace("SEC", "").Replace("Confidence", "");
            VA.SetText("~~RecognizedText", RecognizedText); // Send recognized text back to VoiceAttack as text variable
            //VA.SetText("~~RecognitionConfidenceLevel", RecognitionConfidenceLevel); // Send speech recognition confidence level back to VoiceAttack as text variable
            //VA.SetDecimal("~~RecognitionConfidence", RecognitionConfidenceScore); // Send recognized confidence back to VoiceAttack as decimal variable

            if (VA.GetBoolean("~~ShowConfidence") == true)
                RecognitionConfidence = "(" + RecognitionConfidenceLevel + " @ " + RecognitionConfidenceScore.ToString() + ")" + RecognitionFlag;
            //VA.SetText("~~RecognitionConfidence", RecognitionConfidenceLevel + " @ " + RecognitionConfidenceScore.ToString()); // Send speech recognition confidence data back to VoiceAttack as text variable
            VA.SetText("~~RecognitionConfidence", RecognitionConfidence); // Send formatted speech recognition confidence data back to VoiceAttack as text variable
            if (UseDictation == true) // Check if pronunciation dictation grammar should be used with speech recognition
            {
                RecognizedText = RecognizedText.Replace("hh", "h"); // Replace any instances of "hh" in recognized phonemes with "h"
                VA.SetText("~~SAPIPhonemes", RecognizedText); // Send word-delimited SAPI phoneme data back to VoiceAttack as text variable
            }
        }
        catch (Exception e) // Handle exceptions in above code
        {
            VA.WriteToLog(e.ToString());
            VA.SetText("~~RecognitionError", "Error during processing of recognition result (SAPI)"); // Send error detail back to VoiceAttack as text variable
        }
    }

    // Function for extracting SAPI phonemes from voice recognition results
    public void GetPhonemes(ISpeechRecoResult Result)
    {
        //VA.WriteToLog("Extracting phonemes from voice recognition result"); // Output info to event log

        try // Attempt the following code
        {
            SpPhoneConverter MyPhoneConverter = new SpPhoneConverter(); // Create new SPPhoneConverter instance
            MyPhoneConverter.LanguageId = 1033; // Set the phone converter's language (English = 1033)
            string SAPIPhonemesRaw = null; // Initialize string for storing raw SAPI phoneme data
            string SAPIPhonemes = null; // Initialize string for storing delimited SAPI phoneme data
            int i = 1; // Initialize integer for tracking phoneme count
            string WordSeparator = " "; // Initialize string variable for storing the characters used to separate words within the phoneme result

            if (VA.GetBoolean("~~SeparatePhonemes") == true) // Check if user wants to have the "-" character separate the words within the phoneme result
                WordSeparator = " - "; // Redefine the WordSeparator            
            foreach (ISpeechPhraseElement MyPhrase in Result.PhraseInfo.Elements) // Loop through each element of the recognized text
            {
                if (MyPhrase.DisplayText != " ")
                {
                    SAPIPhonemesRaw += " " + MyPhoneConverter.IdToPhone(MyPhrase.Pronunciation); // Build string of SAPI phonemes extracted from the recognized text
                    SAPIPhonemes += (i++ > 1 ? WordSeparator : " ") + MyPhoneConverter.IdToPhone(MyPhrase.Pronunciation); // Build string of SAPI phonemes extracted from the recognized text, delimited by " "
                }
            }
            MyPhoneConverter = null; // Set to null in preparation for garbage collection

            VA.SetText("~~SAPIPhonemesRaw", SAPIPhonemesRaw.Trim()); // Send raw SAPI phoneme data back to VoiceAttack as text variable
            VA.SetText("~~SAPIPhonemes", SAPIPhonemes.Trim()); // Send word-delimited SAPI phoneme data back to VoiceAttack as text variable
        }
        catch // Handle exceptions in above code
        {
            VA.SetText("~~RecognitionError", "Error during phoneme extraction"); // Send error detail back to VoiceAttack as text variable
        }
    }
}

// References:
// https://github.com/rti7743/rtilabs/blob/master/files/asobiba/DictationFilter/DictationFilter/SpeechRecognitionRegexp.cs
// https://stackoverflow.com/questions/6193874/help-with-sapi-v5-1-speechrecognitionengine-always-gives-same-wrong-result-with/6203533#6203533
// http://www.drdobbs.com/com-objects-c-and-the-microsoft-speech-a/184416575
// http://vbcity.com/forums/t/125150.aspx
// https://people.kth.se/~maguire/DEGREE-PROJECT-REPORTS/050702-Johan_Sverin-with-cover.pdf
// https://msdn.microsoft.com/en-us/library/ee125471(v=vs.85).aspx
// https://stackoverflow.com/questions/20770593/speech-to-phoneme-in-net

Thanks @halfer for the edits; I appreciate your advice about avoiding extraneous content. – No worries, you're welcome. There are some reference discussions on Meta if you're interested. Anyway, looks like a good question! Also, what runtime error are you getting? – Not quite sure how this came about; the .NET SAPI wrapper in the System.Speech namespace is quite good, and much easier to get started with. – Basically, I have a speech recognition application that can run C# code as part of a macro. I've created a collection of C# functions that can modify the Windows speech recognition dictionary, which the application can leverage to improve recognition and text-to-speech; as far as I know, the only way to do that is directly through SAPI. The last part of my project involves recognizing spoken phrases and extracting SAPI phonemes, which then serve as input to the dictionary. I know I can get IPA pronunciations through System.Speech, but I wasn't sure about SAPI. That's the story in a nutshell. – Thanks for your comment Eric; I hope you find my post: how does my edited code compare with your answer above? – Thanks again for your help Eric! You seem to know a lot about speech recognition and SAPI, and I hope you can take a look at my related question soon. It's the last part of my project, and your feedback would be much appreciated!
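For anyone reading along who can live with the managed wrapper Eric mentions instead of the SpeechLib COM interop, recognizing a wav file through System.Speech takes far less setup. A minimal sketch (Windows-only; dictation grammar, synchronous recognition; note that phoneme access works differently here than in raw SAPI and is not shown):

```csharp
using System;
using System.Speech.Recognition; // managed .NET wrapper over SAPI

class WavReco
{
    static void Main()
    {
        using (SpeechRecognitionEngine engine = new SpeechRecognitionEngine())
        {
            engine.LoadGrammar(new DictationGrammar()); // free-form dictation
            engine.SetInputToWaveFile(@"C:\Reco\MyAudio.wav"); // same path as the example below
            RecognitionResult result = engine.Recognize(); // blocks; null if nothing was recognized
            if (result != null)
                Console.WriteLine(result.Text + " (" + result.Confidence + ")");
        }
    }
}
```

No message loop, event wiring, or manual stream management is needed for this simple synchronous case; RecognizeAsync plus the SpeechRecognized event would be the event-driven equivalent.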
using SpeechLib;
using System;
using System.Windows.Forms;

namespace RecoForm
{
    public partial class Form1 : Form
    {
        // Speech Recognition Object
        SpSharedRecoContext listener;

        // Grammar object
        ISpeechRecoGrammar grammar;

        public Form1()
        {
            InitializeComponent();
        }

        private void Form1_Load(object sender, EventArgs e)
        {
            // nothing
        }

        public string ps;
        private void button1_Click(object sender, EventArgs e)
        {
            if (btnListen.Text == "Start Listening")
            {
               // textBox1.Clear();
                try
                {

                    listener = new SpSharedRecoContext();
                    listener.Recognition += new _ISpeechRecoContextEvents_RecognitionEventHandler(listener_Reco);
                    grammar = listener.CreateGrammar(0);
                    grammar.DictationLoad("", SpeechLoadOption.SLOStatic);
                    grammar.DictationSetState(SpeechRuleState.SGDSActive);
                    btnListen.Text = "Stop Listening";
                    if (ps == "1")
                    {
                        listener.Resume();
                        ps = "0";
                    }
                }
                catch (Exception ex)
                {
                    MessageBox.Show(ex.Message);
                }
            }
            else if (btnListen.Text == "Stop Listening")
            {
                listener.Pause();
                btnListen.Text = "Start Listening";
                if (ps == "0")
                {
                    ps = "1";
                }
            }
        }        

        public void listener_Reco(int StreamNumber, object StreamPosition, SpeechRecognitionType RecognitionType, ISpeechRecoResult Result)
        {
            string heard = Result.PhraseInfo.GetText(0, -1, true);
            textBox1.Text += " " + heard;

            SpPhoneConverter MyPhoneConverter = new SpPhoneConverter();
            MyPhoneConverter.LanguageId = 1033;

            foreach (ISpeechPhraseElement MyPhrase in Result.PhraseInfo.Elements)
                textBox2.Text += " " + MyPhoneConverter.IdToPhone(MyPhrase.Pronunciation);
        }
    }
}

// https://stackoverflow.com/questions/11935533/c-sharp-sapi-5-4-languages
using SpeechLib;
using System;
using System.Windows.Forms;

namespace SAPITextFromVoice
{
    class Program
    {
        // Initialize variables needed throughout this code
        static ISpeechRecoGrammar grammar; // Declare the grammar
        static SpFileStream FileStream; // Declare the voice recognition input file stream
        static string AudioPath = null; // Declare directory path to wav file
        static string GrammarPath = null; // Declare directory path to grammar file

        static void Main(string[] args)
        {
            // Initialize string variable for storing the text of interest
            string MyText = "the rain in spain";

            // Store path to speech grammar XML file
            //GrammarPath = @"C:\Reco\MyGrammar.xml";

            // Store path to voice recognition input wav file
            AudioPath = @"C:\Reco\MyAudio.wav";

            TextToWav(AudioPath, MyText);

            try // Attempt the following code
            {
                // Open the created wav in a new FileStream
                FileStream = new SpFileStream(); // Create new instance of SpFileStream
                FileStream.Open(AudioPath, SpeechStreamFileMode.SSFMOpenForRead, true); // Open the specified file in the FileStream for reading with events enabled

                // Create speech recognizer and associated context
                SpInprocRecognizer MyRecognizer = new SpInprocRecognizer(); // Create new instance of SpInprocRecognizer
                SpInProcRecoContext RecoContext = (SpInProcRecoContext)MyRecognizer.CreateRecoContext(); // Initialize the SpInProcRecoContext (in-process recognition context)

                // Set the voice recognition input as the FileStream
                MyRecognizer.AudioInputStream = FileStream; // This will internally "speak" the wav file for input into the voice recognition engine

                // Set up recognition event handling
                RecoContext.Recognition += new _ISpeechRecoContextEvents_RecognitionEventHandler(RecoContext_Recognition); // Register for successful voice recognition events
                RecoContext.FalseRecognition += new _ISpeechRecoContextEvents_FalseRecognitionEventHandler(RecoContext_FalseRecognition); // Register for failed (low confidence) voice recognition events
                RecoContext.Hypothesis += new _ISpeechRecoContextEvents_HypothesisEventHandler(RecoContext_Hypothesis); // Register for voice recognition hypothesis events
                RecoContext.EndStream += new _ISpeechRecoContextEvents_EndStreamEventHandler(RecoContext_EndStream); // Register for end of file stream events

                // Set up the grammar
                grammar = RecoContext.CreateGrammar(); // Initialize the grammar object
                //grammar.CmdLoadFromFile(GrammarPath, SpeechLoadOption.SLODynamic); // Load custom XML grammar file
                //grammar.CmdSetRuleIdState(0, SpeechRuleState.SGDSActive); // Activate the loaded grammar
                grammar.DictationLoad("", SpeechLoadOption.SLOStatic); // Load blank dictation topic into the grammar
                grammar.DictationSetState(SpeechRuleState.SGDSActive); // Activate dictation grammar
            }
            catch // Handle exceptions in above code
            {
                Console.WriteLine("Error during voice recognition setup");
                return; // Stop executing the code
            }

            Application.Run(); // Starts a standard application message loop on the current thread

            Console.WriteLine("done");
            Console.ReadLine();
        }

        // Function for converting text to a voiced wav file via text-to-speech
        public static bool TextToWav(string FilePath, string text)
        {
            try // Attempt the following code
            {
                if (System.IO.File.Exists(FilePath) == true) // Check if voice recognition wav file already exists
                    System.IO.File.Delete(FilePath); // Delete existing voice recognition wav file
                SpFileStream stream = new SpFileStream(); // Create new SpFileStream instance
                stream.Format.Type = SpeechAudioFormatType.SAFT48kHz16BitStereo; // Set the file stream audio format
                stream.Open(FilePath, SpeechStreamFileMode.SSFMCreateForWrite, true); // Open the specified file for writing with events enabled

                SpVoice voice = new SpVoice(); // Create new SPVoice instance
                voice.Volume = 100; // Set the volume level of the text-to-speech voice
                voice.Rate = -2; // Set the rate at which text is spoken by the text-to-speech engine
                string NameAttribute = "Name = " + "Microsoft Anna";
                voice.Voice = voice.GetVoices(NameAttribute).Item(0);
                //voice.Speak(text);
                voice.AudioOutputStream = stream; // Send the audio output to the file stream
                voice.Speak(text, SpeechVoiceSpeakFlags.SVSFDefault); // Internally "speak" the inputted text (which records it in the wav file)

                stream.Close(); // Close the file stream
                return true; // Send "true" back to calling code line
            }
            catch // Handle exceptions in above code
            {
                Console.WriteLine("Error during wav file creation");
                return false; // Send "false" back to calling code line
            }
        }

        // Event handler for successful (higher confidence) voice recognition
        public static void RecoContext_Recognition(int StreamNumber, object StreamPosition, SpeechRecognitionType RecognitionType, ISpeechRecoResult Result)
        {
            RecognitionProcessing(Result, true); // Process the voice recognition result
        }

        // Event handler for false (low confidence) voice recognition
        public static void RecoContext_FalseRecognition(int StreamNumber, object StreamPosition, ISpeechRecoResult Result)
        {
            RecognitionProcessing(Result, false); // Process the voice recognition result
        }

        // Event handler for voice recognition hypotheses
        public static void RecoContext_Hypothesis(int StreamNumber, object StreamPosition, ISpeechRecoResult Result)
        {
            float confidence = Result.PhraseInfo.Elements.Item(0).EngineConfidence;
            Console.WriteLine(("Hypothesis = " + Result.PhraseInfo.GetText() + " (" + Decimal.Round(Convert.ToDecimal(confidence), (confidence > 0.01 ? 3 : 4)) + ")")); // Output info to console
        }

        // Event handler for reaching the end of an audio input stream
        public static void RecoContext_EndStream(int StreamNumber, object StreamPosition, bool StreamReleased)
        {
            // Clean up now that voice recognition is complete

            Console.WriteLine("--- END OF STREAM ---"); // Output info to the console

            try // Attempt the following code
            {
                //grammar.CmdSetRuleIdState(0, SpeechRuleState.SGDSInactive); // Deactivate the loaded grammar
                grammar.DictationSetState(SpeechRuleState.SGDSInactive); // Deactivate dictation grammar
                FileStream.Close(); // Close the input FileStream

                Application.ExitThread(); // Terminates the message loop on the current thread
            }
            catch // Handle exceptions in above code
            {
                Console.WriteLine("Error during cleanup process");
            }
        }

        // Function for processing voice recognition results
        public static void RecognitionProcessing(ISpeechRecoResult Result, bool RecoType)
        {
            try // Attempt the following code
            {
                string RecognizedText = Result.PhraseInfo.GetText().Trim(); // Store recognized text    
                float confidence = Result.PhraseInfo.Elements.Item(0).EngineConfidence; // Get confidence of voice recognition result
                decimal RecognitionConfidence = Decimal.Round(Convert.ToDecimal(confidence), (confidence > 0.01 ? 3 : 4)); // Calculate confidence of voice recognition result, convert to decimal, and round the result
                Console.WriteLine((RecoType == false ? "false " : "") + "recognition = " + RecognizedText + " (" + RecognitionConfidence + ")"); // Output info to the console
                GetPhonemes(Result); // Retrieve SAPI phonemes from recognized words
            }
            catch // Handle exceptions in above code
            {
                Console.WriteLine("Error during processing of recognition result");
            }
        }

        // Function for extracting SAPI phonemes from voice recognition results
        public static void GetPhonemes(ISpeechRecoResult Result)
        {
            try // Attempt the following code
            {
                SpPhoneConverter MyPhoneConverter = new SpPhoneConverter(); // Create new SPPhoneConverter instance
                MyPhoneConverter.LanguageId = 1033; // Set the phone converter's language (English = 1033)
                string SAPIPhonemesRaw = null; // Initialize string for storing raw SAPI phoneme data
                string SAPIPhonemes = null; // Initialize string for storing delimited SAPI phoneme data
                int i = 1; // Initialize integer for tracking phoneme count

                foreach (ISpeechPhraseElement MyPhrase in Result.PhraseInfo.Elements) // Loop through each element of the recognized text
                {
                    SAPIPhonemesRaw += " " + MyPhoneConverter.IdToPhone(MyPhrase.Pronunciation); // Build string of SAPI phonemes extracted from the recognized text
                    SAPIPhonemes += (i++ > 1 ? " - " : " ") + MyPhoneConverter.IdToPhone(MyPhrase.Pronunciation); // Build string of SAPI phonemes extracted from the recognized text, delimited by "-"
                }

                Console.WriteLine("Phonemes = " + SAPIPhonemes.Trim());
            }
            catch // Handle exceptions in above code
            {
                Console.WriteLine("Error during phoneme extraction");
            }
        }
    }
}
// ==================== Second listing: updated VoiceAttack inline function ====================
using SpeechLib;
using System;
using System.IO;
using System.Threading;
using System.Windows.Forms;

class VAInline
{
    // Initialize variables needed throughout this code
    ISpeechRecoGrammar grammar; // Declare the grammar
    SpFileStream FileStream; // Declare the voice recognition input file stream
    string AudioPath = null; // Declare directory path to wav file
    string GrammarPath = null; // Declare directory path to grammar file
    string RecognitionFlag = "";
    string RecognitionConfidence = "";
    bool UseDictation; // Declare boolean variable for storing pronunciation dictation grammar setting

    public void main()
    {
        // Reset relevant VoiceAttack text variables
        VA.SetText("~~RecognitionError", null);
        VA.SetText("~~RecognizedText", null);
        VA.SetText("~~SAPIPhonemes", null);
        VA.SetText("~~SAPIPhonemesRaw", null);
        //VA.SetText("~~FalseRecognitionFlag", null);

        // Retrieve the desired word data contained within VoiceAttack text variable
        string ProcessText = null; // Initialize string variable for storing the text of interest
        if (VA.GetText("~~ProcessText") != null) // Check if user provided valid text in input variable
            ProcessText = VA.GetText("~~ProcessText"); // Store text of interest held by VA text variable
        else
        {
            VA.SetText("~~RecognitionError", "Error in input text string (SAPI)"); // Send error detail back to VoiceAttack as text variable
            return; // End code processing
        }

        // Retrieve path to speech grammar XML file from VoiceAttack
        GrammarPath = VA.GetText("~~GrammarFilePath");

        // Retrieve path to voice recognition input wav file from VoiceAttack
        AudioPath = VA.GetText("~~AudioFilePath");

        // Check if TTS engine is voicing the input for the speech recognition engine
        if (VA.GetBoolean("~~UserVoiceInput") == false)
        {
            //VA.WriteToLog("creating wav file");
            if (TextToWav(AudioPath, ProcessText) == false) // Create wav file with specified path that voices specified text (with text-to-speech) and check if the creation was NOT successful
                return; // Stop executing the code
        }

        // Create speech recognizer and associated context
        SpInprocRecognizer MyRecognizer = new SpInprocRecognizer(); // Create new instance of SpInprocRecognizer
        SpInProcRecoContext RecoContext = (SpInProcRecoContext)MyRecognizer.CreateRecoContext(); // Initialize the SpInProcRecoContext (in-process recognition context)

        try // Attempt the following code
        {
            // Open the created wav in a new FileStream
            FileStream = new SpFileStream(); // Create new instance of SpFileStream
            FileStream.Open(AudioPath, SpeechStreamFileMode.SSFMOpenForRead, true); // Open the specified file in the FileStream for reading with events enabled

            // Set the voice recognition input as the FileStream
            MyRecognizer.AudioInputStream = FileStream; // This will internally "speak" the wav file for input into the voice recognition engine

            // Set up recognition event handling
            RecoContext.Recognition += new _ISpeechRecoContextEvents_RecognitionEventHandler(RecoContext_Recognition); // Register for successful voice recognition events
            RecoContext.FalseRecognition += new _ISpeechRecoContextEvents_FalseRecognitionEventHandler(RecoContext_FalseRecognition); // Register for failed (low confidence) voice recognition events
            if (VA.GetBoolean("~~ShowRecognitionHypothesis") == true) // Check if user wants to show voice recognition hypothesis results
                RecoContext.Hypothesis += new _ISpeechRecoContextEvents_HypothesisEventHandler(RecoContext_Hypothesis); // Register for voice recognition hypothesis events
            RecoContext.EndStream += new _ISpeechRecoContextEvents_EndStreamEventHandler(RecoContext_EndStream); // Register for end of file stream events

            // Set up the grammar
            grammar = RecoContext.CreateGrammar(); // Initialize the grammar object
            UseDictation = (bool?)VA.GetBoolean("~~UseDictation") ?? false; // Set UseDictation based on value from VoiceAttack boolean variable
            if (UseDictation == true) // Check if pronunciation dictation grammar should be used with speech recognition
            {
                //grammar.DictationLoad("", SpeechLoadOption.SLOStatic); // Load blank dictation topic into the grammar
                grammar.DictationLoad("Pronunciation", SpeechLoadOption.SLOStatic); // Load pronunciation dictation topic into the grammar so that the raw (unfiltered) phonemes may be retrieved
                grammar.DictationSetState(SpeechRuleState.SGDSActive); // Activate dictation grammar
            }
            else
            {
                grammar.CmdLoadFromFile(GrammarPath, SpeechLoadOption.SLODynamic); // Load custom XML grammar file
                grammar.CmdSetRuleIdState(0, SpeechRuleState.SGDSActive); // Activate the loaded grammar
            }
            Application.Run(); // Starts a standard application message loop on the current thread
        }
        catch // Handle exceptions in above code
        {
            VA.SetText("~~RecognitionError", "Error during voice recognition setup (SAPI)"); // Send error detail back to VoiceAttack as text variable
            return; // Stop executing the code
        }
        finally // Runs whether an exception is encountered or not
        {
            MyRecognizer = null; // Set to null in preparation for garbage collection
            if (FileStream != null) // Guard against an exception thrown before the stream was created
                FileStream.Close(); // Close the input FileStream
            FileStream = null; // Set to null in preparation for garbage collection

            // Close up recognition event handling
            RecoContext.Recognition -= new _ISpeechRecoContextEvents_RecognitionEventHandler(RecoContext_Recognition); // Unregister for successful voice recognition events
            RecoContext.FalseRecognition -= new _ISpeechRecoContextEvents_FalseRecognitionEventHandler(RecoContext_FalseRecognition); // Unregister for failed (low confidence) voice recognition events
            if (VA.GetBoolean("~~ShowRecognitionHypothesis") == true) // Check if user wanted to show voice recognition hypothesis results
                RecoContext.Hypothesis -= new _ISpeechRecoContextEvents_HypothesisEventHandler(RecoContext_Hypothesis); // Unregister for voice recognition hypothesis events
            RecoContext.EndStream -= new _ISpeechRecoContextEvents_EndStreamEventHandler(RecoContext_EndStream); // Unregister for end of file stream events
            RecoContext = null; // Set to null in preparation for garbage collection
        }
        //VA.WriteToLog("voice recognition complete"); // Output info to event log
    }

    // Function for converting text to a voiced wav file via text-to-speech
    public bool TextToWav(string FilePath, string text)
    {
        //VA.WriteToLog("creating wav file"); // Output info to event log
        SpFileStream stream = new SpFileStream(); // Create new SpFileStream instance
        try // Attempt the following code
        {
            if (System.IO.File.Exists(FilePath) == true) // Check if voice recognition wav file already exists
                System.IO.File.Delete(FilePath); // Delete existing voice recognition wav file
            stream.Format.Type = SpeechAudioFormatType.SAFT48kHz16BitStereo; // Set the file stream audio format
            stream.Open(FilePath, SpeechStreamFileMode.SSFMCreateForWrite, true); // Open the specified file for writing with events enabled
            SpVoice voice = new SpVoice(); // Create new SPVoice instance
            voice.Volume = 100; // Set the volume level of the text-to-speech voice
            voice.Rate = -2; // Set the rate at which text is spoken by the text-to-speech engine
            string NameAttribute = "Name = " + VA.GetText("~~TextToSpeechVoice");
            voice.Voice = voice.GetVoices(NameAttribute).Item(0);
            //voice.Speak(text);
            voice.AudioOutputStream = stream; // Send the audio output to the file stream
            voice.Speak(text, SpeechVoiceSpeakFlags.SVSFDefault); // Internally "speak" the inputted text (which records it in the wav file)
            voice = null; // Set to null in preparation for garbage collection
        }
        catch // Handle exceptions in above code
        {
            VA.SetText("~~RecognitionError", "Error during wav file creation (SAPI)"); // Send error detail back to VoiceAttack as text variable
            return false; // Send "false" back to calling code line
        }
        finally // Runs whether an exception is encountered or not
        {
            stream.Close(); // Close the file stream
            stream = null; // Set to null in preparation for garbage collection
        }
        return true; // Send "true" back to calling code line
    }

    // Event handler for successful (higher confidence) voice recognition
    public void RecoContext_Recognition(int StreamNumber, object StreamPosition, SpeechRecognitionType RecognitionType, ISpeechRecoResult Result)
    {
        //VA.WriteToLog("Recognition successful"); // Output info to event log

        //VA.SetText("~~FalseRecognitionFlag", ""); // Send blank recognition flag ("") back to VoiceAttack as text variable
        //RecognitionFlag = ""; // Set the RecognitionFlag as blank
        RecognitionProcessing(Result); // Process the voice recognition result
        //if (UseDictation == false) // Check if pronunciation dictation grammar should NOT be used with speech recognition
        GetPhonemes(Result); // Retrieve SAPI phonemes from recognition result
    }

    // Event handler for unsuccessful (low confidence) voice recognition
    public void RecoContext_FalseRecognition(int StreamNumber, object StreamPosition, ISpeechRecoResult Result)
    {
        //VA.WriteToLog("Low confidence recognition"); // Output info to event log

        //VA.WriteToLog(Result.PhraseInfo.GetText());
        //VA.SetText("~~FalseRecognitionFlag", "*"); // Send unsuccessful recognition flag (text character) back to VoiceAttack as text variable
        RecognitionFlag = "*"; // Set the RecognitionFlag as "*"
        RecognitionProcessing(Result); // Process the voice recognition result
        GetPhonemes(Result); // Retrieve SAPI phonemes from recognition result
    }

    // Event handler for voice recognition hypotheses
    public void RecoContext_Hypothesis(int StreamNumber, object StreamPosition, ISpeechRecoResult Result)
    {
        //VA.WriteToLog("Recognition hypothesis"); // Output info to event log

        float confidence = Result.PhraseInfo.Elements.Item(0).EngineConfidence;
        VA.WriteToLog("Hypothesis = " + Result.PhraseInfo.GetText() + " (" + Decimal.Round(Convert.ToDecimal(confidence), (confidence > 0.01 ? 3 : 4)) + ")"); // Output info to event log
    }

    // Event handler for reaching the end of an audio input stream
    public void RecoContext_EndStream(int StreamNumber, object StreamPosition, bool StreamReleased)
    {
        // VA.WriteToLog("End of stream, cleaning up now"); // Output info to event log

        // Clean up now that voice recognition is complete
        try // Attempt the following code
        {
            if (UseDictation == true)
                grammar.DictationSetState(SpeechRuleState.SGDSInactive); // Deactivate dictation grammar
            else
                grammar.CmdSetRuleIdState(0, SpeechRuleState.SGDSInactive); // Deactivate the loaded grammar
        }
        catch // Handle exceptions in above code
        {
            VA.SetText("~~RecognitionError", "Error during cleanup process (SAPI)"); // Send error detail back to VoiceAttack as text variable
        }
        finally // Runs whether an exception is encountered or not
        {
            Application.ExitThread(); // Terminates the message loop on the current thread
        }
    }

    // Function for processing voice recognition results
    public void RecognitionProcessing(ISpeechRecoResult Result)
    {
        //VA.WriteToLog("Processing recognition result"); // Output info to event log

        try // Attempt the following code
        {
            string RecognizedText = Result.PhraseInfo.GetText().Trim(); // Store recognized text    
            float confidence = Result.PhraseInfo.Elements.Item(0).EngineConfidence; // Get confidence of voice recognition result
            decimal RecognitionConfidenceScore = Decimal.Round(Convert.ToDecimal(confidence), (confidence > 0.01 ? 3 : 4)); // Calculate confidence of voice recognition result, convert to decimal, and round the result
            string RecognitionConfidenceLevel = Result.PhraseInfo.Elements.Item(0).ActualConfidence.ToString().Replace("SEC", "").Replace("Confidence", "");
            VA.SetText("~~RecognizedText", RecognizedText); // Send recognized text back to VoiceAttack as text variable
            //VA.SetText("~~RecognitionConfidenceLevel", RecognitionConfidenceLevel); // Send speech recognition confidence level back to VoiceAttack as text variable
            //VA.SetDecimal("~~RecognitionConfidence", RecognitionConfidenceScore); // Send recognized confidence back to VoiceAttack as decimal variable

            if (VA.GetBoolean("~~ShowConfidence") == true)
                RecognitionConfidence = "(" + RecognitionConfidenceLevel + " @ " + RecognitionConfidenceScore.ToString() + ")" + RecognitionFlag;
            //VA.SetText("~~RecognitionConfidence", RecognitionConfidenceLevel + " @ " + RecognitionConfidenceScore.ToString()); // Send speech recognition confidence data back to VoiceAttack as text variable
            VA.SetText("~~RecognitionConfidence", RecognitionConfidence); // Send formatted speech recognition confidence data back to VoiceAttack as text variable
            if (UseDictation == true) // Check if pronunciation dictation grammar should be used with speech recognition
            {
                RecognizedText = RecognizedText.Replace("hh", "h"); // Replace any instances of "hh" in recognized phonemes with "h"
                VA.SetText("~~SAPIPhonemes", RecognizedText); // Send word-delimited SAPI phoneme data back to VoiceAttack as text variable
            }
        }
        catch (Exception e) // Handle exceptions in above code
        {
            VA.WriteToLog(e.ToString());
            VA.SetText("~~RecognitionError", "Error during processing of recognition result (SAPI)"); // Send error detail back to VoiceAttack as text variable
        }
    }

    // Function for extracting SAPI phonemes from voice recognition results
    public void GetPhonemes(ISpeechRecoResult Result)
    {
        //VA.WriteToLog("Extracting phonemes from voice recognition result"); // Output info to event log

        try // Attempt the following code
        {
            SpPhoneConverter MyPhoneConverter = new SpPhoneConverter(); // Create new SPPhoneConverter instance
            MyPhoneConverter.LanguageId = 1033; // Set the phone converter's language (English = 1033)
            string SAPIPhonemesRaw = null; // Initialize string for storing raw SAPI phoneme data
            string SAPIPhonemes = null; // Initialize string for storing delimited SAPI phoneme data
            int i = 1; // Initialize integer for tracking phoneme count
            string WordSeparator = " "; // Initialize string variable for storing the characters used to separate words within the phoneme result

            if (VA.GetBoolean("~~SeparatePhonemes") == true) // Check if user wants to have the "-" character separate the words within the phoneme result
                WordSeparator = " - "; // Redefine the WordSeparator            
            foreach (ISpeechPhraseElement MyPhrase in Result.PhraseInfo.Elements) // Loop through each element of the recognized text
            {
                if (MyPhrase.DisplayText != " ")
                {
                    SAPIPhonemesRaw += " " + MyPhoneConverter.IdToPhone(MyPhrase.Pronunciation); // Build string of SAPI phonemes extracted from the recognized text
                    SAPIPhonemes += (i++ > 1 ? WordSeparator : " ") + MyPhoneConverter.IdToPhone(MyPhrase.Pronunciation); // Build string of SAPI phonemes extracted from the recognized text, delimited by WordSeparator
                }
            }
            MyPhoneConverter = null; // Set to null in preparation for garbage collection

            VA.SetText("~~SAPIPhonemesRaw", SAPIPhonemesRaw.Trim()); // Send raw SAPI phoneme data back to VoiceAttack as text variable
            VA.SetText("~~SAPIPhonemes", SAPIPhonemes.Trim()); // Send word-delimited SAPI phoneme data back to VoiceAttack as text variable
        }
        catch // Handle exceptions in above code
        {
            VA.SetText("~~RecognitionError", "Error during phoneme extraction"); // Send error detail back to VoiceAttack as text variable
        }
    }
}

// References:
// https://github.com/rti7743/rtilabs/blob/master/files/asobiba/DictationFilter/DictationFilter/SpeechRecognitionRegexp.cs
// https://stackoverflow.com/questions/6193874/help-with-sapi-v5-1-speechrecognitionengine-always-gives-same-wrong-result-with/6203533#6203533
// http://www.drdobbs.com/com-objects-c-and-the-microsoft-speech-a/184416575
// http://vbcity.com/forums/t/125150.aspx
// https://people.kth.se/~maguire/DEGREE-PROJECT-REPORTS/050702-Johan_Sverin-with-cover.pdf
// https://msdn.microsoft.com/en-us/library/ee125471(v=vs.85).aspx
// https://stackoverflow.com/questions/20770593/speech-to-phoneme-in-net
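
// Distilled from the listings above, the essential SpInProcRecoContext event-wiring pattern is short:
// create the in-process recognizer, open the wav file as an SpFileStream with events enabled, attach the
// handlers before activating the grammar, and pump a message loop so the COM events can be delivered.
// This is a minimal sketch, not a drop-in replacement: the wav path is an assumption, and it requires
// the SpeechLib COM interop and System.Windows.Forms references on Windows.

```csharp
using System.Windows.Forms;
using SpeechLib;

class MinimalInprocReco
{
    static ISpeechRecoGrammar grammar; // Dictation grammar, deactivated in EndStream
    static SpFileStream input;         // Wav file stream fed to the recognizer

    static void Main()
    {
        SpInprocRecognizer recognizer = new SpInprocRecognizer();
        SpInProcRecoContext context = (SpInProcRecoContext)recognizer.CreateRecoContext();

        // Open the wav file for reading with events enabled and use it as audio input
        input = new SpFileStream();
        input.Open(@"C:\Reco\MYWAVE.wav", SpeechStreamFileMode.SSFMOpenForRead, true); // Assumed path
        recognizer.AudioInputStream = input;

        // Hook the events BEFORE activating the grammar
        context.Recognition += (int stream, object pos, SpeechRecognitionType type, ISpeechRecoResult result) =>
            System.Console.WriteLine("Recognition = " + result.PhraseInfo.GetText());
        context.EndStream += (int stream, object pos, bool released) =>
        {
            grammar.DictationSetState(SpeechRuleState.SGDSInactive); // Deactivate dictation
            input.Close();                                           // Release the wav file
            Application.ExitThread();                                // End the message loop below
        };

        // Activate a blank dictation topic and run a message loop; without the loop
        // (Application.Run), the COM events never fire and no handlers are called
        grammar = context.CreateGrammar();
        grammar.DictationLoad("", SpeechLoadOption.SLOStatic);
        grammar.DictationSetState(SpeechRuleState.SGDSActive);
        Application.Run();
    }
}
```

// The same skeleton works for the pronunciation case by loading
// grammar.DictationLoad("Pronunciation", SpeechLoadOption.SLOStatic) instead of the blank topic.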