How to use System.Speech for programmatic multilingual speech recognition

I'm using the System.Speech.Recognition library in .NET. I was able to get it working with one specific language at a time. Is there a way to override or set up the SpeechRecognitionEngine so that it can recognize multiple languages at once? Say I have an audio file that contains both English and Japanese speech, and there is no way to know in advance where in the file the English or Japanese portions occur. I currently have the English and Japanese recognizers installed, and

SpeechRecognitionEngine.InstalledRecognizers()

returns two recognizers, one for English and one for Japanese.
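
For reference, here is a minimal sketch of what I have working for a single language (the wave-file name and the en-US culture are just placeholders):

    using System;
    using System.Globalization;
    using System.Speech.Recognition;

    class SingleLanguageDemo
    {
        static void Main()
        {
            // List the recognizers installed on this machine
            // (in my case this prints an English and a Japanese entry).
            foreach (RecognizerInfo info in SpeechRecognitionEngine.InstalledRecognizers())
            {
                Console.WriteLine("{0} ({1})", info.Description, info.Culture);
            }

            // A single engine is always tied to exactly one of those recognizers.
            using (var engine = new SpeechRecognitionEngine(new CultureInfo("en-US")))
            {
                engine.LoadGrammar(new DictationGrammar());
                engine.SetInputToWaveFile("speech.wav");   // placeholder file name
                RecognitionResult result = engine.Recognize();
                Console.WriteLine(result != null ? result.Text : "(no recognition)");
            }
        }
    }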

If the .NET API can't achieve this, is there any other API available that can? (My goal is basically automatic language detection plus transcription.)

Thanks in advance!!


No, you can't do that with System.Speech. Multiple languages in a single recognition pass are not supported; each SpeechRecognitionEngine instance is bound to a single recognizer culture.
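
To illustrate the limitation: the best you can do with System.Speech alone is to run a separate engine per installed language over the same audio and decide between the results yourself. The sketch below assumes en-US and ja-JP recognizers and a placeholder file name; comparing overall confidences like this is a crude heuristic, not real language identification, and it will not handle switching languages within one file.

    using System;
    using System.Globalization;
    using System.Speech.Recognition;

    class PerLanguagePasses
    {
        // Runs one synchronous recognition pass for a single culture over the same file.
        static RecognitionResult RecognizeWith(CultureInfo culture, string wavPath)
        {
            using (var engine = new SpeechRecognitionEngine(culture))
            {
                engine.LoadGrammar(new DictationGrammar());
                engine.SetInputToWaveFile(wavPath);
                return engine.Recognize();   // null if nothing was recognized
            }
        }

        static void Main()
        {
            string wavPath = "mixed.wav";    // placeholder path to the mixed-language audio

            // One engine per installed language; no single engine accepts both cultures.
            RecognitionResult english  = RecognizeWith(new CultureInfo("en-US"), wavPath);
            RecognitionResult japanese = RecognizeWith(new CultureInfo("ja-JP"), wavPath);

            // Crude heuristic: keep whichever pass reported higher confidence.
            RecognitionResult better =
                (english != null ? english.Confidence : 0) >=
                (japanese != null ? japanese.Confidence : 0) ? english : japanese;

            Console.WriteLine(better != null ? better.Text : "(no recognition)");
        }
    }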

Automatic language detection can be implemented with more complex speech recognition toolkits such as Kaldi; however, they are not easy to use, and you essentially have to build the system from scratch.
