Has anyone created a MonoTouch binding for the Nuance Dragon Mobile Speech SDK for iOS?
The Dragon Mobile SDK works well for me on Windows Phone 7, and I'd like equivalent functionality on iOS. Because the SDK wraps the microphone, it isn't really possible to use the .NET assemblies in my MonoTouch project (even if I had the source). It seems the best approach is to create a binding library (as Miguel describes).
This does look like a fair amount of work, though, so if someone has already done it, I'd much rather reuse their work than reinvent the wheel…

Nuance's SDK agreement doesn't allow anyone to publish bindings of the iOS SDK for use with MonoTouch. The library itself should work fine, though.

That said, the SDK only has a handful of types that need mapping, so it would be fairly trivial to redo work that someone else may have already done. You can use the following reference guide to see how to bind assemblies:
There is also a BindingSample project that helps users better understand how to bind native components with btouch:
Thanks again to Anuj for his answer. I thought I'd leave a tip or two about how to do this. The binding library wasn't hard to build (still tweaking it, but it wasn't a difficult task).

The more obscure part was figuring out how to link in the SpeechKit framework. The samples only show how to link a .a or .dylib. After spending a bit of time with the ld(1) man page on OS X, it looks like the correct ld (and therefore gcc) arguments for linking against a framework are:
-gcc_flags "-F<insert_framework_path_here> -framework SpeechKit"
This goes in the project properties text box, under Build :: iPhone Build :: Additional mtouch arguments.
Note that -L doesn't work, because this isn't a library; note also that the -force_load and -ObjC flags referenced elsewhere don't appear to be necessary, again because this is a framework rather than a library. Here is more detail on how I got it working:
// The SpeechKitWrapper isn't actually used - rather, it is a way to exercise all the APIs that
// the binding library needs from the SpeechKit framework, so that those can be linked into the generated .a file.
@implementation SpeechKitWrapper

@synthesize status;

- (id)initWithDelegate:(id<SKRecognizerDelegate>)delegate
{
    self = [super init];
    if (self) {
        del = delegate;
        [self setStatus:@"initializing"];

        [SpeechKit setupWithID:@"NMDPTRIAL_ogazitt20120220010133"
                          host:@"sandbox.nmdp.nuancemobility.net"
                          port:443
                        useSSL:NO
                      delegate:nil];
        NSString *text = [NSString stringWithFormat:@"initialized. sessionid = %@", [SpeechKit sessionID]];
        [self setStatus:text];

        SKEarcon *earconStart = [SKEarcon earconWithName:@"beep.wav"];
        [SpeechKit setEarcon:earconStart forType:SKStartRecordingEarconType];

        voiceSearch = [[SKRecognizer alloc] initWithType:SKDictationRecognizerType
                                               detection:SKLongEndOfSpeechDetection
                                                language:@"en_US"
                                                delegate:delegate];
        text = [NSString stringWithFormat:@"recognizer connecting. sessionid = %@", [SpeechKit sessionID]];
        [self setStatus:text];
    }
    return self;
}

@end
using System;
using MonoTouch.Foundation;

namespace Nuance.SpeechKit
{
    // SKEarcon.h
    public enum SKEarconType
    {
        SKStartRecordingEarconType = 1,
        SKStopRecordingEarconType = 2,
        SKCancelRecordingEarconType = 3,
    };

    // SKRecognizer.h
    public enum SKEndOfSpeechDetection
    {
        SKNoEndOfSpeechDetection = 1,
        SKShortEndOfSpeechDetection = 2,
        SKLongEndOfSpeechDetection = 3,
    };

    public static class SKRecognizerType
    {
        public static string SKDictationRecognizerType = "dictation";
        public static string SKWebSearchRecognizerType = "websearch";
    };

    // SpeechKitErrors.h
    public enum SpeechKitErrors
    {
        SKServerConnectionError = 1,
        SKServerRetryError = 2,
        SKRecognizerError = 3,
        SKVocalizerError = 4,
        SKCancelledError = 5,
    };

    // SKEarcon.h
    [BaseType(typeof(NSObject))]
    interface SKEarcon
    {
        [Export("initWithContentsOfFile:")]
        IntPtr Constructor(string path);

        [Static, Export("earconWithName:")]
        SKEarcon FromName(string name);
    }

    // SKRecognition.h
    [BaseType(typeof(NSObject))]
    interface SKRecognition
    {
        [Export("results")]
        string[] Results { get; }

        [Export("scores")]
        NSNumber[] Scores { get; }

        [Export("suggestion")]
        string Suggestion { get; }

        [Export("firstResult")]
        string FirstResult();
    }

    // SKRecognizer.h
    [BaseType(typeof(NSObject))]
    interface SKRecognizer
    {
        [Export("audioLevel")]
        float AudioLevel { get; }

        [Export("initWithType:detection:language:delegate:")]
        IntPtr Constructor(string type, SKEndOfSpeechDetection detection, string language, SKRecognizerDelegate del);

        [Export("stopRecording")]
        void StopRecording();

        [Export("cancel")]
        void Cancel();

        /*
        [Field("SKSearchRecognizerType", "__Internal")]
        NSString SKSearchRecognizerType { get; }

        [Field("SKDictationRecognizerType", "__Internal")]
        NSString SKDictationRecognizerType { get; }
        */
    }

    [BaseType(typeof(NSObject))]
    [Model]
    interface SKRecognizerDelegate
    {
        [Export("recognizerDidBeginRecording:")]
        void OnRecordingBegin(SKRecognizer recognizer);

        [Export("recognizerDidFinishRecording:")]
        void OnRecordingDone(SKRecognizer recognizer);

        [Export("recognizer:didFinishWithResults:")]
        [Abstract]
        void OnResults(SKRecognizer recognizer, SKRecognition results);

        [Export("recognizer:didFinishWithError:suggestion:")]
        [Abstract]
        void OnError(SKRecognizer recognizer, NSError error, string suggestion);
    }

    // speechkit.h
    [BaseType(typeof(NSObject))]
    interface SpeechKit
    {
        [Static, Export("setupWithID:host:port:useSSL:delegate:")]
        void Initialize(string id, string host, int port, bool useSSL, [NullAllowed] SpeechKitDelegate del);

        [Static, Export("destroy")]
        void Destroy();

        [Static, Export("sessionID")]
        string GetSessionID();

        [Static, Export("setEarcon:forType:")]
        void SetEarcon(SKEarcon earcon, SKEarconType type);
    }

    [BaseType(typeof(NSObject))]
    [Model]
    interface SpeechKitDelegate
    {
        [Export("destroyed")]
        void Destroyed();
    }

    [BaseType(typeof(NSObject))]
    interface SpeechKitWrapper
    {
        [Export("initWithDelegate:")]
        IntPtr Constructor(SKRecognizerDelegate del);

        [Export("status")]
        string Status { get; set; }
    }
}
-gcc_flags "-F<insert_framework_path_here> -framework SpeechKit -framework SystemConfiguration -framework Security -framework AVFoundation -framework AudioToolbox"
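To illustrate how the binding above might be consumed from a MonoTouch app, here is a minimal, hypothetical sketch. The delegate subclass name, the `"YOUR_APP_ID"` credential, and the `StartDictation` helper are placeholders I've introduced for illustration (the real app ID and host come from your Nuance developer account); only the types and members come from the ApiDefinition above.

```csharp
using System;
using MonoTouch.Foundation;
using Nuance.SpeechKit;

// Hypothetical delegate subclass: the [Model] SKRecognizerDelegate type
// is subclassed and its abstract callbacks overridden.
public class MyRecognizerDelegate : SKRecognizerDelegate
{
    public override void OnResults(SKRecognizer recognizer, SKRecognition results)
    {
        Console.WriteLine("First result: {0}", results.FirstResult());
    }

    public override void OnError(SKRecognizer recognizer, NSError error, string suggestion)
    {
        Console.WriteLine("Error: {0} (suggestion: {1})", error.LocalizedDescription, suggestion);
    }
}

public static class SpeechKitUsage
{
    public static SKRecognizer StartDictation()
    {
        // "YOUR_APP_ID" is a placeholder for the credential Nuance issues
        // with the SDK; the sandbox host mirrors the wrapper code above.
        SpeechKit.Initialize("YOUR_APP_ID", "sandbox.nmdp.nuancemobility.net",
                             443, false, null);

        // The generated binding exposes the initWithType:... selector as a
        // C# constructor, so starting a dictation session looks like this:
        return new SKRecognizer(SKRecognizerType.SKDictationRecognizerType,
                                SKEndOfSpeechDetection.SKLongEndOfSpeechDetection,
                                "en_US", new MyRecognizerDelegate());
    }
}
```

Since this requires the MonoTouch runtime and the SpeechKit framework on a device, it is a compile-time sketch rather than something runnable on its own.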
If anyone (kos or anyone else) gets the SetEarcon method working, please post a solution :-)

That's exactly what I did (coded the binding library as described in the link you mention — the same one I referenced in the original question). But I still haven't gotten it working (still struggling to link it in, since it's a framework rather than a libX.a). Do you know of any binding samples that bind a framework? Obviously MonoTouch binds every CocoaTouch framework — UIKit and so on — so it must be possible.

There is usually a statically compiled library inside a given *.framework folder. Also try the new LinkWith attribute :-)

Update: I finally got this working — Anuj's binding sample was definitely a good starting point. I could never get the binding library project template to work for me; instead I did what most other people seem to be doing — invoked btouch explicitly, and followed Anuj's approach of creating a universal static library containing the armv6, armv7, and i386 static libraries. The other really important thing is not to forget the -F <insert_framework_path_here> -framework SpeechKit mtouch arguments on the application project.

Any chance of sharing the library via GitHub? I'd really appreciate being able to reuse it if that's possible.

kos — see Anuj's comment above — he's right, Nuance doesn't allow it. It took me a few days (and it seemed harder than it should have been), but it's definitely doable.

Any chance you could show how you defined the ApiDefinition.cs wrapper? I'm trying to get this working for the SpeechKit, SKRecognizer, and SKRecognition classes.

Omri, thanks so much. Just to clarify step 2 — the library contains SpeechKitWrapper.m and SpeechKitWrapper.h, but where do you define SpeechKitApplicationKey? In SpeechKitWrapper.m or in a separate file? Also, is status the same as setStatus? Thanks…

Never mind — I put it in SpeechKitWrapper.m (above the @implementation). I defined SpeechKitWrapper.h like this: #import #
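The workflow from that update (compile the wrapper per architecture, lipo the results into a universal static library, run btouch, then pass the framework flags to mtouch) can be sketched roughly as shell steps. Treat this as a reconstruction from the comments, not a verified build script: the SDK paths are placeholders, the btouch option spelling varied across MonoTouch releases (check `btouch --help`), and it all assumes OS X with the iOS toolchain installed.

```shell
# 1. Compile the SpeechKitWrapper exerciser once per architecture
#    (SDK paths are placeholders for your installed iOS SDKs).
gcc -arch armv6 -isysroot /path/to/iPhoneOS.sdk       -c SpeechKitWrapper.m -o wrapper-armv6.o
gcc -arch armv7 -isysroot /path/to/iPhoneOS.sdk       -c SpeechKitWrapper.m -o wrapper-armv7.o
gcc -arch i386  -isysroot /path/to/iPhoneSimulator.sdk -c SpeechKitWrapper.m -o wrapper-i386.o

ar rcs libwrapper-armv6.a wrapper-armv6.o
ar rcs libwrapper-armv7.a wrapper-armv7.o
ar rcs libwrapper-i386.a  wrapper-i386.o

# 2. Merge the per-architecture archives into one universal (fat) library.
lipo -create libwrapper-armv6.a libwrapper-armv7.a libwrapper-i386.a \
     -output libSpeechKitWrapper.a

# 3. Generate the binding assembly from the ApiDefinition
#    (option spelling is an assumption; consult btouch --help).
btouch ApiDefinition.cs --out=Nuance.SpeechKit.dll

# 4. In the app project, add the mtouch arguments shown in the answer:
#    -gcc_flags "-F<insert_framework_path_here> -framework SpeechKit ..."
```

These steps only make sense on the Apple toolchain, so they are shown here for orientation rather than as something to run verbatim.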