Speech-to-Text Input
Feature Detail
Description
Speech-to-Text Input provides an in-app voice dictation capability that converts spoken words into text within activity summary and note fields, allowing peer mentors to narrate their report after an activity rather than typing it. The feature is triggered by a microphone button on text fields that support it; it does not record audio during the activity itself. Both Blindeforbundet and HLF explicitly requested this capability, with Blindeforbundet emphasising that recording during a home visit is unacceptable and that dictation is strictly a post-activity authoring tool.
Analysis
Many peer mentors - particularly those serving Blindeforbundet's visually impaired user base - find keyboard entry slow or inaccessible. Allowing voice dictation after an activity lowers the barrier to completing a report, directly increasing submission rates. Higher submission rates improve data quality for Bufdir reporting and coordinator oversight. For users with motor impairments (NHF's stroke-survivor cohort), voice input can be the difference between submitting a report and abandoning it entirely. The feature also reduces the time-per-registration metric, which is the primary adoption driver identified across all workshops.
The feature is implemented with Flutter's speech_to_text plugin, which wraps iOS's SFSpeechRecognizer and Android's SpeechRecognizer. Transcription is performed on-device where the platform supports it, falling back to a cloud API (with user consent) where it does not. The Speech Input Widget is a reusable widget that any opted-in TextFormField can attach; it manages microphone permission state, the recording indicator, and interim/final result display. The widget must clearly indicate when it is listening and when it has stopped - critical both for accessibility and for Blindeforbundet's requirement that users know recording is not happening during a visit. No audio is stored; only the final text transcript is retained. The feature must degrade gracefully when the device does not support speech recognition or when the user denies microphone permission.
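The widget behaviour described above could be sketched roughly as follows. This is an illustrative example built on the public speech_to_text plugin API, not the project's actual implementation; the class name SpeechInputButton and its wiring to a TextEditingController are assumptions for the sketch.

```dart
import 'package:flutter/material.dart';
import 'package:speech_to_text/speech_to_text.dart';

/// Hypothetical microphone button that appends dictated text to a
/// text field's controller. Only the final transcript is written to
/// the field; no audio is ever stored.
class SpeechInputButton extends StatefulWidget {
  const SpeechInputButton({super.key, required this.controller});
  final TextEditingController controller;

  @override
  State<SpeechInputButton> createState() => _SpeechInputButtonState();
}

class _SpeechInputButtonState extends State<SpeechInputButton> {
  final SpeechToText _speech = SpeechToText();
  bool _available = false; // false if unsupported or permission denied
  bool _listening = false;

  @override
  void initState() {
    super.initState();
    _speech
        .initialize(onStatus: (status) {
          // Mirror platform stop events so the UI never claims to be
          // listening when it is not - required for the "user always
          // knows recording state" guarantee.
          if (status == 'done' || status == 'notListening') {
            setState(() => _listening = false);
          }
        })
        .then((ok) => setState(() => _available = ok));
  }

  Future<void> _toggle() async {
    if (_listening) {
      await _speech.stop();
      setState(() => _listening = false);
      return;
    }
    setState(() => _listening = true);
    await _speech.listen(onResult: (result) {
      if (result.finalResult) {
        // Retain only the final text transcript, per the privacy rule.
        widget.controller.text += '${result.recognizedWords} ';
      }
    });
  }

  @override
  Widget build(BuildContext context) {
    // Graceful degradation: hide the button entirely when speech
    // recognition is unavailable, leaving plain keyboard entry.
    if (!_available) return const SizedBox.shrink();
    return IconButton(
      tooltip: _listening ? 'Stop dictation' : 'Start dictation',
      icon: Icon(_listening ? Icons.mic : Icons.mic_none),
      color: _listening ? Colors.red : null,
      onPressed: _toggle,
    );
  }
}
```

Hiding the button when initialize() fails covers both unsupported devices and denied microphone permission in one code path; a production widget would likely also expose the listening state through Semantics so screen-reader users receive the same start/stop cues.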
Components (42)
Shared Components
These components are reused across multiple features
User Interface (9)
Service Layer (15)
Data Layer (8)
Infrastructure (7)
User Stories
No user stories have been generated for this feature yet.