
useSpeechRecognition

A speech recognition composable using the Web Speech API.

Usage

The auto-imported useSpeechRecognition composable provides in-browser speech recognition via the Web Speech API.

<script setup lang="ts">
const appLocale = useLocale()

const {
  state,
  isAvailable,
  isListening,
  start,
  stop,
  toggle,
  setLanguage
} = useSpeechRecognition(
  {
    lang: appLocale.locale.value.locale,
    continuous: true,
    interimResults: true
  },
  {
    onStart: () => console.log('Recognition started'),
    onEnd: () => console.log('Recognition ended'),
    onError: (error) => console.error('Error:', error),
    onResult: (result) => console.log('Result:', result.text)
  }
)
</script>

API

Parameters

useSpeechRecognition(options?: SpeechRecognitionOptions, events?: SpeechRecognitionEvents)

Creates a speech recognition instance with specified options and event handlers.

options:

  • lang (string): Recognition language. Default: 'en-US'.
  • continuous (boolean): Continuous recognition. If true, recognition continues until explicitly stopped. Default: true.
  • interimResults (boolean): Report interim results. Default: true.
  • maxAlternatives (number): Maximum number of alternatives for each result. Default: 1.

events:

  • onStart (() => void): Called when recognition starts.
  • onEnd (() => void): Called when recognition ends.
  • onError ((error: string) => void): Called when a recognition error occurs.
  • onResult ((result: SpeechRecognitionResult) => void): Called when a recognition result is received.

useSpeechRecognition returns an object with the following properties:

state

state: DeepReadonly<Ref<SpeechRecognitionState>>

The current speech recognition state.

  • isAvailable (boolean): Whether speech recognition is available in the current browser.
  • isListening (boolean): Whether recognition is currently active.
  • lastRecognizedText (string): The last recognized text (accumulated in continuous mode).

isAvailable

isAvailable: ComputedRef<boolean>

A computed property indicating speech recognition availability.
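Availability checks of this kind generally amount to feature-detecting the Web Speech API. The sketch below shows the common detection pattern as an assumption, not this composable's actual source; note that it returns false during server-side rendering, where window is undefined.

```typescript
// Sketch of typical Web Speech API feature detection (assumed pattern,
// not the composable's implementation). Chromium-based browsers expose
// the API under the webkit prefix, hence the second check.
function hasSpeechRecognition(): boolean {
  if (typeof window === 'undefined') return false // SSR / Node: not available
  const w = window as unknown as Record<string, unknown>
  return Boolean(w.SpeechRecognition || w.webkitSpeechRecognition)
}
```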

isListening

isListening: ComputedRef<boolean>

A computed property indicating whether recognition is active.

start()

start(): Promise<boolean>

Starts speech recognition.

Returns: Promise<boolean> - true if recognition started successfully, otherwise false.

stop()

stop(): Promise<boolean>

Stops speech recognition.

Returns: Promise<boolean> - true if recognition stopped successfully, otherwise false.

toggle()

toggle(): Promise<boolean>

Toggles the recognition state (start/stop).

Returns: Promise<boolean> - true if the operation was successful, otherwise false.
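In other words, toggle() behaves like stop() while listening and start() otherwise. A minimal sketch of that dispatch logic, with start/stop passed in as stubs purely for illustration:

```typescript
// Sketch of toggle() semantics: dispatch to stop() while listening,
// otherwise to start(). The start/stop signatures match the API above;
// toggleSketch itself is a hypothetical name used only here.
async function toggleSketch(
  isListening: boolean,
  start: () => Promise<boolean>,
  stop: () => Promise<boolean>
): Promise<boolean> {
  return isListening ? stop() : start()
}
```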

setLanguage()

setLanguage(lang: string): boolean

Sets the recognition language.

  • lang (string, required): Language code in BCP 47 format (e.g., 'ru-RU', 'en-US').

Returns: boolean - true if the language was set successfully, otherwise false.
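Language tags follow BCP 47. A loose shape check like the one below can catch obvious mistakes before calling setLanguage; the regex is a deliberate simplification of the full BCP 47 grammar, introduced here only as an illustration.

```typescript
// Loose sanity check for common "ll-RR" tags such as 'en-US' or 'ru-RU'.
// Far narrower than full BCP 47, which also allows bare 'en', scripts
// like 'zh-Hant', and more; use a real BCP 47 parser for strict validation.
function looksLikeLangRegionTag(tag: string): boolean {
  return /^[a-z]{2,3}-[A-Z]{2}$/.test(tag)
}
```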

recognizer

recognizer: SpeechRecognition | undefined

The underlying Web Speech API SpeechRecognition instance, exposed for advanced use cases.

Example

Recognized text can be added to a Textarea or Input component.

<script setup lang="ts">
import MicrophoneOnIcon from '@bitrix24/b24icons-vue/outline/MicrophoneOnIcon'
import StopLIcon from '@bitrix24/b24icons-vue/outline/StopLIcon'

const input = ref('')

const appLocale = useLocale()

const {
  isAvailable,
  isListening,
  start,
  stop
} = useSpeechRecognition({
  lang: appLocale.locale.value.locale,
  continuous: true,
  interimResults: true
}, {
  onStart: () => {
    if (input.value === '') {
      return
    }

    input.value += ' '
  },
  onResult: (result) => {
    input.value += result.text
  }
})

const startDictation = async () => {
  await start()
}

const stopDictation = async () => {
  await stop()
}
</script>

<template>
  <div class="w-full relative flex items-end gap-2 bg-(--ui-color-bg-content-secondary) rounded-xs ring-1 ring-ai-250 hover:ring-ai-350 pr-2 pb-2">
    <B24Textarea
      v-model="input"
      :rows="2"
      autoresize
      placeholder="Try using speech recognition..."
      no-padding
      no-border
      class="flex-1 resize-none px-2.5"
    />
    <template v-if="isAvailable">
      <B24Button
        v-if="!isListening"
        :icon="MicrophoneOnIcon"
        color="air-tertiary-no-accent"
        size="sm"
        class="shrink-0"
        @click="startDictation"
      />
      <B24Button
        v-if="isListening"
        :icon="StopLIcon"
        color="air-secondary"
        size="sm"
        class="shrink-0 rounded-lg"
        @click="stopDictation"
      />
    </template>
  </div>
</template>

Published under MIT License.

Copyright © 2024-present Bitrix24