Tongue Twister: Quickly Integrate Huawei's Real-Time Speech Recognition Service and Master Tongue Twisters

Foreword

To be honest, as a Hunan native I am often told my Mandarin is not standard: I mix up N and L, and sometimes I can't manage the retroflex sounds, which leads to frequent embarrassment and no small amount of frustration. Watching TV hosts deliver fluent lines and flawless tongue twisters, I often daydream that one day I could rattle one off myself. As it happens, while browsing yesterday I was recommended Tongue Twister, a mini-game that integrates the HMS ML Kit real-time speech recognition service. How exactly does this game handle tongue twisters? Let's find out together!


Application Scenario

Tongue Twister is a mini-game built on the HMS ML Kit real-time speech recognition service. The game has five levels, each consisting of one tongue twister, and the key to clearing them is the powerful real-time recognition behind the scenes. The service covers many scenarios in daily life and work, with deeply optimized recognition for shopping search, video search, music search, and navigation. Its high recognition accuracy lets it easily evaluate the player's pronunciation: speak the twister clearly and accurately, and you pass the level.

Now let's take a look at the right way to play this game!


How about it? Tempted? Then come and build your own customized tongue-twister challenge!


Development Steps

1. See the notes on using cloud authentication information to set your app's authentication information.


Chinese: https://developer.huawei.com/consumer/cn/doc/development/HMSCore-Guides-V5/sdk-data-security-0000001050040129-V5#ZH-CN_TOPIC_0000001050750251__section2688102310166


English: https://developer.huawei.com/consumer/en/doc/development/HMSCore-Guides-V5/sdk-data-security-0000001050040129-V5#EN-US_TOPIC_0000001050750251__section2688102310166




2. Call the API to create a speech recognizer.


MLAsrRecognizer mSpeechRecognizer = MLAsrRecognizer.createAsrRecognizer(context);




3. Create a callback that listens for speech recognition results.


/**
 * Use the callback to implement the MLAsrListener API and methods in the API.
 */
protected class SpeechRecognitionListener implements MLAsrListener {
    @Override
    public void onStartListening() {
        // The recorder starts to receive speech.
    }

    @Override
    public void onStartingOfSpeech() {
        // The user starts to speak, that is, the speech recognizer detects that the user starts to speak.
    }

    @Override
    public void onVoiceDataReceived(byte[] data, float energy, Bundle bundle) {
        // Return the original PCM stream and audio power to the user.
    }

    @Override
    public void onRecognizingResults(Bundle partialResults) {
        // Receive the recognized text from MLAsrRecognizer.
    }

    @Override
    public void onResults(Bundle results) {
        // Final text data of ASR.
    }

    @Override
    public void onError(int error, String errorMessage) {
        // Called when an error occurs; without this callback the app
        // gets no response if, for example, the network is cut off.
    }

    @Override
    public void onState(int state, Bundle params) {
        // Notify the app of a status change.
    }
}
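The article does not show how the game actually decides whether a player clears a level, but one natural approach is to compare the text delivered to onResults against the target tongue twister. Below is a minimal sketch of that idea; the class name, threshold, and character-match scoring method are my own illustrative assumptions, not part of the HMS ML Kit API or the original demo.

```java
// Hypothetical pass/fail scoring for a tongue-twister level.
// The scoring method (position-wise character match ratio) and the
// 0.8 threshold are illustrative assumptions, not from the demo.
public class TwisterScorer {
    private static final double PASS_THRESHOLD = 0.8;

    // Fraction of target characters matched at the same position
    // in the recognized text.
    public static double score(String target, String recognized) {
        if (target == null || target.isEmpty()) {
            return 0.0;
        }
        int matches = 0;
        int length = Math.min(target.length(),
                recognized == null ? 0 : recognized.length());
        for (int i = 0; i < length; i++) {
            if (target.charAt(i) == recognized.charAt(i)) {
                matches++;
            }
        }
        return (double) matches / target.length();
    }

    public static boolean passes(String target, String recognized) {
        return score(target, recognized) >= PASS_THRESHOLD;
    }
}
```

In a game, the recognized text would be extracted from the Bundle inside onResults and fed to passes() along with the level's twister; a real implementation would also want to normalize punctuation and whitespace before comparing.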




4. Bind the new result listener callback to the speech recognizer.


mSpeechRecognizer.setAsrListener(new SpeechRecognitionListener());




5. Configure the recognition parameters and start speech recognition.


// Set parameters and start the audio device.
Intent mSpeechRecognizerIntent = new Intent(MLAsrConstants.ACTION_HMS_ASR_SPEECH);
mSpeechRecognizerIntent
        // Set the language to be recognized. If this parameter is not set,
        // English is recognized by default. Examples: "zh-CN": Chinese; "en-US": English;
        // "fr-FR": French; "es-ES": Spanish; "de-DE": German; "it-IT": Italian.
        .putExtra(MLAsrConstants.LANGUAGE, language)
        // Set how the recognition result is returned. If omitted, this mode is used by default. Options:
        // MLAsrConstants.FEATURE_WORDFLUX: recognizes and returns text through onRecognizingResults.
        // MLAsrConstants.FEATURE_ALLINONE: after recognition completes, text is returned through onResults.
        .putExtra(MLAsrConstants.FEATURE, MLAsrConstants.FEATURE_WORDFLUX);
mSpeechRecognizer.startRecognizing(mSpeechRecognizerIntent);
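The LANGUAGE extra takes one of the locale codes listed in the comment above. If the game offers a language picker, a small lookup helper keeps those codes in one place. The helper class below is an illustrative convenience of my own, not part of the HMS ML Kit SDK; the codes themselves are the ones documented in the snippet.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical helper for choosing the MLAsrConstants.LANGUAGE extra value.
// The locale codes come from the parameter comment in the article;
// the class itself is not part of the SDK.
public class AsrLanguages {
    private static final Map<String, String> CODES = new LinkedHashMap<>();
    static {
        CODES.put("Chinese", "zh-CN");
        CODES.put("English", "en-US");
        CODES.put("French", "fr-FR");
        CODES.put("Spanish", "es-ES");
        CODES.put("German", "de-DE");
        CODES.put("Italian", "it-IT");
    }

    // Returns the locale code for a display name; English is the
    // service's documented default, so fall back to "en-US".
    public static String codeFor(String displayName) {
        return CODES.getOrDefault(displayName, "en-US");
    }
}
```

The returned value would then be passed straight into putExtra(MLAsrConstants.LANGUAGE, ...) when building the intent.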




6. Release resources after recognition is complete.


if (mSpeechRecognizer != null) {
    mSpeechRecognizer.destroy();
    mSpeechRecognizer = null;
}




Maven Repository Address

buildscript {
    repositories {
        maven { url 'https://developer.huawei.com/repo/' }
    }
}
allprojects {
    repositories {
        maven { url 'https://developer.huawei.com/repo/' }
    }
}




Import the SDK

dependencies {
    // Automatic speech recognition long-voice SDK.
    implementation 'com.huawei.hms:ml-computer-voice-realtimetranscription:2.0.3.300'
    // Automatic speech recognition SDK.
    implementation 'com.huawei.hms:ml-computer-voice-asr:2.0.3.300'
    // Automatic speech recognition plugin.
    implementation 'com.huawei.hms:ml-computer-voice-asr-plugin:2.0.3.300'
}




Manifest File

<manifest
    ...
    <meta-data
        android:name="com.huawei.hms.ml.DEPENDENCY"
        android:value="asr" />
    ...
</manifest>


Permissions

<uses-permission android:name="android.permission.RECORD_AUDIO" />


Dynamic Permission Request

// Renamed from requestCameraPermission: the method requests the
// RECORD_AUDIO permission, not camera access.
private void requestAudioPermission() {
    final String[] permissions = new String[]{Manifest.permission.RECORD_AUDIO};
    if (!ActivityCompat.shouldShowRequestPermissionRationale(this,
            Manifest.permission.RECORD_AUDIO)) {
        ActivityCompat.requestPermissions(this, permissions,
                TongueTwisterActivity.AUDIO_CODE);
        return;
    }
}

Summary

Beyond games, the real-time speech recognition service has many uses. In shopping apps, a spoken product name or description can be converted to text to search for the target product. Likewise, in music apps, a spoken song title or artist name can be converted to text to search for songs. And when typing is inconvenient while driving, spoken input can be converted to text to search for a destination, making the trip safer.


GitHub Demo Code

For more details, see:

Huawei Developer Alliance official website:

https://developer.huawei.com/consumer/cn/hms/huawei-mlkit

Development guide documentation:

https://developer.huawei.com/consumer/en/doc/development/HMSCore-Guides/service-introduction-0000001050040017

Join the developer discussion on Reddit: https://www.reddit.com/r/HuaweiDevelopers/

Download the demo and sample code on GitHub: https://github.com/HMS-Core

For integration issues, ask on Stack Overflow:

https://stackoverflow.com/questions/tagged/huawei-mobile-services?tab=Newest

Copyright belongs to the author. For reprinting or content cooperation, please contact the author.
