Preview – Responsive Voice Recognition in Flutter

Overview

The world is spinning faster today than it was only a few years ago. Companies are growing faster. Memes are going viral faster. Packages are being shipped faster. And because of all this, people have developed an expectation for responsiveness. So how can you, a mobile application developer, give them what they want?

Voice Recognition.

That’s right. Adding voice recognition capabilities to your mobile app can make the overall user experience at least 12 times better. As pointed out in the article Advantages of Voice Control & How to Add It to Your Mobile App, voice offers several benefits as soon as it’s implemented:

  • Hands-free interaction – Regardless of where the user is or what they’re doing with their hands, if they are within speaking distance of their device they can interact with it.
  • Speed – Speaking a sentence takes far less time than typing the same words on a mobile device’s keyboard. Speech-to-text functionality can also be initiated with a wake word, so users can spend more time inputting data and less time navigating through screens.
  • Accessibility – Speaking is something that we’ve all done a time or two in our lives. Even though some folks might not have technological prowess, they understand how to talk.
  • Platform-agnostic Data – Voice is voice is voice. Although the technical implementations may differ slightly from Android to iOS, the form of the data being processed is the same regardless of the platform you’re using.

The benefits aren’t the only reason to add voice, either. The entire field of Speech Recognition Technology (SRT) has been growing rapidly over the past decade as tech giants like Google, Microsoft, and Amazon compete to create the most fluent and errorless solutions. By 2025, the Speech and Voice Recognition market is expected to be worth $24.9 billion, almost double what it’s worth now. Given this outlook, there’s never been a better time to start dabbling in voice. In this article, I’ll show you how to set up a few different voice features in Flutter, some that are admittedly pretty sneaky.

Respond to a Single Command

If your primary goal is to listen for a single command after a button is tapped, I have good news for you – you’re almost done. The speech_to_text package is designed to listen once and respond to the text it recognizes. Once the recognition event has passed, the device stops listening and privacy is restored.

If you need to continuously listen to commands or start listening when the user says a specific word, skip to the next section. Otherwise, I’ll walk you through the setup necessary to respond to a single command.

Setup

To start, create a new Flutter project and set up your preferred app infrastructure. My preferences are listed here and I’ll be using them throughout this tutorial.

Next, follow the install instructions on the speech_to_text package page (remembering to address the Android and iOS specific points). You can add the dependency to your pubspec.yaml by tapping the copy icon next to the package name.
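The Android-specific point worth calling out is the microphone permission. As a sketch of what the package’s install instructions ask for (check the package page for the authoritative list), your AndroidManifest.xml needs something like:

```xml
<!-- AndroidManifest.xml: permissions required for speech recognition -->
<uses-permission android:name="android.permission.RECORD_AUDIO"/>
```

On the iOS side, the docs likewise call for `NSSpeechRecognitionUsageDescription` and `NSMicrophoneUsageDescription` entries in Info.plist, each with a short user-facing explanation of why your app listens.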

Once that’s done, run flutter pub get.

If you read through the speech_to_text package documentation, you’ll notice that there should only ever be one SpeechToText object active during an app session. Once you initialize it, you can start listening for user input by calling listen() and stop listening by calling stop(). With this in mind, the best approach will be to create a global SpeechService that we can call from anywhere in our app.

Create a new file called speech_service.dart and paste in the following:

import 'package:injectable/injectable.dart';
import 'package:speech_to_text/speech_recognition_result.dart';
import 'package:speech_to_text/speech_to_text.dart' as stt;

@singleton
class SpeechService {
  bool speechAvailable = false;
  final stt.SpeechToText speech = stt.SpeechToText();

  /// Initialization function called when the app is first opened
  Future<void> initializeSpeechService() async {
    speechAvailable = await speech.initialize(
      onStatus: (status) {
        print('Speech to text status: $status');
      },
      onError: (errorNotification) {
        print('Speech to text error: ${errorNotification.errorMsg}');
      },
    );
  }

  /// Start listening for user input.
  /// resultCallback is specified by the caller.
  void startListening(Function(SpeechRecognitionResult result) resultCallback) {
    speech.listen(onResult: resultCallback);
  }

  /// Stop listening to user input and free up the audio stream.
  Future<void> stopListening() async {
    await speech.stop();
  }
}

Finally, in main.dart, call the initializeSpeechService() method we just created.

Future<void> main() async {

  WidgetsFlutterBinding.ensureInitialized();
  configureDependencies();
  await speechService.initializeSpeechService();
  runApp(App());
}

Start Listening

As I mentioned at the beginning of this section, there really isn’t much that goes into listening to a single command. With our SpeechService initialized and ready to go, all we have to do is use it. Create a new folder called ‘ui’ and inside of it create two new files:

  • single_command_view.dart
  • single_command_view_model.dart

For anyone who has worked with the MVVM architecture, this structure should look familiar. The view contains just our UI components while the view model contains our business logic.

Here’s the code for single_command_view.dart:

import 'package:flutter/material.dart';
import 'package:stacked/stacked.dart';

import 'single_command_view_model.dart';

class SingleCommandView extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return ViewModelBuilder<SingleCommandViewModel>.reactive(
      viewModelBuilder: () => SingleCommandViewModel(),
      onModelReady: (model) {
        // model.initialize();
      },
      builder: (context, model, child) {
        return Scaffold(
          appBar: AppBar(
            title: Text('Single Command'),
          ),
          body: Center(
            child: Column(
              children: [
                ElevatedButton(
                  onPressed: () {
                    model.startListening();
                  },
                  child: Text('Start listening'),
                ),
                Text(model.recognizedWords),
              ],
            ),
          ),
        );
      },
    );
  }
}

And here’s the code for single_command_view_model.dart:

import 'package:desk_monkey/services/services.dart';
import 'package:stacked/stacked.dart';

class SingleCommandViewModel extends BaseViewModel {
  String recognizedWords = '';

  void initialize() {}

  void startListening() {
    speechService.startListening((result) {
      print('Recognized words: ${result.recognizedWords}');
      // recognizedWords is a non-nullable String, so check for content
      // rather than null.
      if (result.recognizedWords.isNotEmpty) {
        recognizedWords = result.recognizedWords;
        notifyListeners();
      }
    });
  }

  @override
  void dispose() {
    // Release the microphone when this view goes away.
    speechService.stopListening();
    super.dispose();
  }
}

With this code, you can tap on the button, speak, and see your words printed to the screen. Sweet!

Tech Tip: The SpeechToText object has a few properties you can tap into to beautify your UI, namely isListening and isNotListening. These will be updated as soon as they change so you can display an animation when the device is ready for speech input.
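As a sketch of that tip (assuming the SpeechService singleton from earlier is in scope as speechService), a widget could switch icons based on the listening state:

```dart
// Show a mic animation only while the recognizer is active.
// `speechService.speech` is the SpeechToText instance from SpeechService above.
Widget buildMicIndicator() {
  return speechService.speech.isListening
      ? Icon(Icons.mic, color: Colors.red) // actively listening
      : Icon(Icons.mic_none);              // idle
}
```

Call notifyListeners() from the onStatus callback so the indicator rebuilds when the state flips.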

As cool as this is, let’s be honest – it’s not that cool. This method of listening requires the user to manually tap a button and immediately say what’s on their mind. Where’s the flexibility? Plus, once the SpeechToText object determines that the user has finished speaking, it stops listening. You can extend the amount of time it listens for with the listenFor property but this doesn’t necessarily fix the issue. The detector will listen once and then it quits. The issue tracking the continuous listening feature is here.
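For completeness, here’s what extending the listen window looks like (a sketch; the durations are arbitrary values chosen for this example):

```dart
// Ask the recognizer to keep the session open longer than the default.
// It will still stop after this one session; it won't restart itself.
speech.listen(
  onResult: resultCallback,
  listenFor: Duration(seconds: 30), // upper bound on the whole session
  pauseFor: Duration(seconds: 5),   // how long a silence ends the session
);
```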

Respond to a Wake Word

………


To read the rest of this article, check out my product on Flurly!

https://flurly.com/p/responsive-voice-recognition-in-flutter

Below is a demo of the final app in action. This whole demo was recorded without touching the screen.



12 thoughts on “Preview – Responsive Voice Recognition in Flutter”

  1. After buying this article, do you mean at the end that I won’t be able to make it work in the background? I want to make my own wake word (each user will have their own wake word), and I want to enable this service only when the user is logged in.

    1. Hey Fadi, thanks for reading!

      Once you implement the wake word listener, it will continue to run while your app is opened.
      If you close the app, you should also pause the wake word listener. What is your use case for running the listener in the background?

      If you want each user to have their own custom wake word, you will need to use the Picovoice dashboard to create these.

      Which part of the article are you referring to?

      Thanks again!

      1. Hey Joe,

        Yes, I’m using Firebase. Could you help me? I’m trying to develop a Flutter app for Android and iOS that can protect women from danger with the help of voice recognition. Here’s how it works: first, the user must create an account and fill in her information, along with a wake word and samples of her voice to train it. What I need now is to keep listening for her wake word. If the wake word is detected, I record audio along with her location and send it to the server. The server processes the audio and determines whether it’s her voice. If so, it informs the contacts she chose at the beginning, and 911, that she is in danger and provides her location. I want this continuous recognition feature only when she is logged in to her account.

        Thanks in advance.

      2. Joe, thank you for this article. I am new to Flutter and I am developing a speech recognition app with a wake word that can also save the recognized text into a Firebase DB. I bought your PDF tutorial but I don’t quite understand what to do. Could you please send me the source code or a step-by-step video to fix this? I really need your assistance. I await your response, thanks.

  2. Could you provide your full code? I tested it and it didn’t work well for me. Maybe I did something wrong, but having your full code would help me understand.

    1. Hello, I would like to know if you got the full source code; I am seriously in need of it. The PDF tutorial isn’t the help I need. Please help, thanks.

      1. Thanks a lot for the link. I’m just wondering, is there any tutorial on how to save the recognized words into a Firebase DB? I am actually developing a voice recognition app that saves user input to a DB and gives feedback based on the user’s voice message. Please help if you have a full tutorial on this; I am new to Flutter. Thanks a lot!

  3. Can you at least tell me how to make it work in the background? I tried running it in the background and got an error: MissingPluginException (No implementation found for method listen on channel flutter_voice_processor_events).

  4. Hello Joe,

    I am unable to download your files on Flurly after buying it. It has errors like this:

    TypeError [ERR_INVALID_CHAR]: Invalid character in header content [“Content-Disposition”]
    at ServerResponse.setHeader (_http_outgoing.js:473:3)
    at /root/kaorenllc/kaoren.js:16:7
    at Layer.handle [as handle_request] (/root/kaorenllc/node_modules/express/lib/router/layer.js:95:5)
    at next (/root/kaorenllc/node_modules/express/lib/router/route.js:137:13)
    at Route.dispatch (/root/kaorenllc/node_modules/express/lib/router/route.js:112:3)
    at Layer.handle [as handle_request] (/root/kaorenllc/node_modules/express/lib/router/layer.js:95:5)
    at /root/kaorenllc/node_modules/express/lib/router/index.js:281:22
    at Function.process_params (/root/kaorenllc/node_modules/express/lib/router/index.js:335:12)
    at next (/root/kaorenllc/node_modules/express/lib/router/index.js:275:10)
    at jsonParser (/root/kaorenllc/node_modules/body-parser/lib/types/json.js:110:7)
