Objective-C speech recognition - NSSpeechRecognizer or something else?


I have read Apple's documentation for NSSpeechRecognizer, and I have also come across OpenEars.

Apple says:

The NSSpeechRecognizer class is the Cocoa interface to Speech Recognition on OS X. Speech Recognition is architected as a “command and control” voice recognition system. It uses a finite state grammar and listens for phrases in that grammar. When it recognizes a phrase, it notifies the client process. This architecture is different from that used to support dictation.
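To make the quote concrete: "command and control with a finite state grammar" means you hand the recognizer a fixed list of phrases up front, and it only ever reports a match against that list; it never transcribes arbitrary speech. A minimal macOS-only sketch (NSSpeechRecognizer lives in AppKit and is not available on iPhone; the class name `CommandListener` and the command phrases are illustrative):

```
// macOS-only sketch: NSSpeechRecognizer listens for a fixed set of
// phrases (the "finite state grammar") rather than free-form dictation.
#import <AppKit/AppKit.h>

@interface CommandListener : NSObject <NSSpeechRecognizerDelegate>
@property (strong) NSSpeechRecognizer *recognizer;
@end

@implementation CommandListener

- (instancetype)init {
    if ((self = [super init])) {
        _recognizer = [[NSSpeechRecognizer alloc] init];
        // Only these exact phrases can ever be recognized.
        _recognizer.commands = @[@"play", @"pause", @"next track"];
        _recognizer.delegate = self;
        [_recognizer startListening];
    }
    return self;
}

// Called once a whole phrase from the grammar is matched --
// the client is notified per command, not word by word.
- (void)speechRecognizer:(NSSpeechRecognizer *)sender
     didRecognizeCommand:(NSString *)command {
    NSLog(@"Recognized command: %@", command);
}

@end
```

This is why the docs say the architecture "is different from that used to support dictation": there is no continuous word stream to consume, only discrete notifications when a known phrase is heard.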
I don't understand what this actually means in practice. When a button is pressed, we need to listen to the user on the iPhone and then recognize the words and numbers they say. Recognition should happen while the user is speaking, rather than only returning the words after they stop. We don't want it to cut off mid-utterance: saying "300" should not return "3"; it should wait for silence. Is OpenEars the better choice, or Apple's native recognizer?


Thank you.

Apple's native recognizer is not available on the iPhone, so you will have to use OpenEars, Nuance Dragon, iSpeech, or another framework.

Comments: I wasn't aware of that. Thanks. Would OpenEars meet my requirements? / I have used iSpeech and it can do what you need, but you can try them all.