
SpeechToTextButton Integration
Premium

The KendoReact SpeechToTextButton integrates with other KendoReact components to create voice-enabled user interfaces, and with third-party speech recognition services. This article demonstrates common integration patterns and use cases.

KendoReact TextArea Integration

One of the most practical integrations is using the SpeechToTextButton as a prefix/suffix for text input components. This allows users to dictate text directly into text areas, enhancing accessibility and user experience.

TextArea with Speech-to-Text Suffix

The following example demonstrates how to integrate the SpeechToTextButton as a suffix for a TextArea component, creating a voice-enabled text input experience:

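A minimal sketch of this pattern follows. It assumes the TextArea suffix render prop and an onResult event payload that mirrors the Web Speech API results list; verify both against the current component API reference before relying on them:

```tsx
import * as React from 'react';
import { TextArea } from '@progress/kendo-react-inputs';
import { SpeechToTextButton } from '@progress/kendo-react-buttons';

const App = () => {
    const [value, setValue] = React.useState('');

    return (
        <TextArea
            value={value}
            onChange={(e) => setValue(e.value)}
            rows={4}
            // Assumption: TextArea accepts a suffix render prop.
            suffix={() => (
                <SpeechToTextButton
                    fillMode="flat"
                    // Assumption: onResult receives a Web Speech API-style
                    // results list; adjust to the documented event shape.
                    onResult={(e: any) => {
                        const transcript = e?.results?.[0]?.[0]?.transcript;
                        if (transcript) {
                            setValue((v) => (v ? v + ' ' : '') + transcript);
                        }
                    }}
                />
            )}
        />
    );
};

export default App;
```

Because the button uses the built-in speech recognition here, no external service call is needed; each recognized phrase is appended to the TextArea value.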

External Service Integration

The SpeechToTextButton can be integrated with external speech recognition services or APIs, such as Google Cloud Speech-to-Text or OpenAI Whisper.

Google Cloud Speech-to-Text Integration

The following example demonstrates how to integrate the SpeechToTextButton with the Google Cloud Speech-to-Text API by using async event handlers to record audio and send it to the Google Cloud service. To test the demo, copy it locally and set your API key to the GOOGLE_API_KEY variable.

tsx
import React, { useState, useRef } from 'react';
import { SpeechToTextButton } from '@progress/kendo-react-buttons';

const App = () => {
    const [transcript, setTranscript] = useState('');
    const [logs, setLogs] = useState<string[]>([]);
    const mediaRecorderRef = useRef<MediaRecorder | null>(null);
    const audioChunksRef = useRef<Blob[]>([]);

    const addLog = (message: string) => {
        setLogs((prevLogs) => [...prevLogs, message]);
    };

    const onStart = async () => {
        addLog('Speech-to-Text process started');
        await startRecording();
    };

    const onStop = async () => {
        await stopRecording();
        addLog('Speech-to-Text process stopped');
    };

    const startRecording = async () => {
        addLog('Starting recording...');
        try {
            const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
            const mediaRecorder = new MediaRecorder(stream, {
                mimeType: 'audio/webm;codecs=opus'
            });

            audioChunksRef.current = [];

            mediaRecorder.ondataavailable = (event) => {
                audioChunksRef.current.push(event.data);
            };

            mediaRecorder.onstop = async () => {
                const audioBlob = new Blob(audioChunksRef.current, { type: 'audio/webm' });
                await sendToGoogleCloud(audioBlob);
            };

            mediaRecorder.start();
            mediaRecorderRef.current = mediaRecorder;
        } catch (error) {
            addLog('Error starting recording: ' + (error instanceof Error ? error.message : 'Unknown error'));
            handleError({ errorMessage: error instanceof Error ? error.message : 'Unknown error' });
        }
    };

    const stopRecording = () => {
        addLog('Stopping recording...');
        if (mediaRecorderRef.current && mediaRecorderRef.current.state === 'recording') {
            mediaRecorderRef.current.stop();
            mediaRecorderRef.current.stream.getTracks().forEach((track) => track.stop());
        }
    };

    const sendToGoogleCloud = async (audioBlob: Blob) => {
        addLog('Sending audio to Google Cloud Speech-to-Text API...');

        try {
            const audioBase64 = await blobToBase64(audioBlob);

            const requestBody = {
                config: {
                    encoding: 'WEBM_OPUS',
                    sampleRateHertz: 48000,
                    languageCode: 'en-US',
                    enableAutomaticPunctuation: true,
                    model: 'latest_long'
                },
                audio: {
                    content: audioBase64
                }
            };

            // Replace with your own Google Cloud API key
            const GOOGLE_API_KEY = 'YOUR_GOOGLE_CLOUD_API_KEY';

            const response = await fetch(`https://speech.googleapis.com/v1/speech:recognize?key=${GOOGLE_API_KEY}`, {
                method: 'POST',
                headers: {
                    'Content-Type': 'application/json'
                },
                body: JSON.stringify(requestBody)
            });

            const result = await response.json();
            addLog('Google Cloud transcription result: ' + JSON.stringify(result));

            if (result.results && result.results.length > 0) {
                setTranscript(result.results[0].alternatives[0].transcript);
            } else {
                addLog('No transcription results received');
            }
        } catch (error: any) {
            addLog('Error calling Google Cloud Speech-to-Text: ' + error.message);
            handleError({ errorMessage: error.message });
        }
    };

    const blobToBase64 = (blob: Blob): Promise<string | null> => {
        return new Promise<string | null>((resolve, reject) => {
            const reader = new FileReader();
            reader.onloadend = () => {
                const base64 = typeof reader.result === 'string' ? reader.result.split(',')[1] : null;
                resolve(base64);
            };
            reader.onerror = reject;
            reader.readAsDataURL(blob);
        });
    };

    const handleError = (event: { errorMessage: string }) => {
        addLog('Speech recognition error: ' + event.errorMessage);
    };

    return (
        <div>
            <SpeechToTextButton integrationMode="none" onStart={onStart} onEnd={onStop} />
            {transcript && <div className="transcript">{transcript}</div>}
            <div className="log-area">
                {logs.map((log, idx) => (
                    <div key={idx} className="log-entry">
                        {log}
                    </div>
                ))}
            </div>
            <style>
                {`
                    .log-area {
                    margin-top: 1rem;
                    padding: 0.5rem;
                    border: 1px solid #eee;
                    background: #fafbfc;
                    font-size: 0.95em;
                    max-height: 200px;
                    overflow-y: auto;
                    }
                    .log-entry {
                    margin-bottom: 0.25rem;
                    color: #555;
                    }
                `}
            </style>
        </div>
    );
};

export default App;

When integrating the SpeechToTextButton with Google Cloud Speech-to-Text or other external services, keep the following key configurations in mind:

  • Async event handlers: Configure the onStart and onEnd event handlers to return promises that resolve only after external service communication is complete. This ensures the component waits for your service calls.
  • Integration mode: Set the integrationMode prop to control how the component behaves with external services. Use none to disable built-in speech recognition when relying entirely on external services.
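The async-handler contract can be sketched independently of any UI: the component awaits the promise returned from each handler before proceeding. In this stand-alone sketch, recognize is a placeholder for a real external-service request such as the Google Cloud call shown earlier:

```typescript
// Minimal sketch of the async start/stop contract. `recognize` is a
// placeholder for a real fetch() to an external speech service.
type Handler = () => Promise<void>;

const recognize = async (audio: string): Promise<string> =>
    `transcript of ${audio}`; // stand-in for the real API request

const log: string[] = [];

const onStart: Handler = async () => {
    // Begin capturing audio (e.g. via MediaRecorder).
    log.push('recording started');
};

const onEnd: Handler = async () => {
    // The caller waits here until the external call resolves,
    // mirroring how the component awaits the onEnd handler.
    const text = await recognize('audio-chunk');
    log.push(text);
};

async function simulate(): Promise<string[]> {
    await onStart();
    await onEnd();
    return log;
}
```

Because both handlers return promises, the component's state transitions stay in sync with the external service rather than racing ahead of it.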

See Also