This guide walks through real-time dictation over the `/transcribe` WebSocket endpoint.
### 1. Authentication

Create an account and access token. See details here.
### 2. Open a `/transcribe` WebSocket

Initiate real-time, bi-directional communication.
Base URL: `wss://api.{environment}.corti.app/audio-bridge/v2/transcribe`

Required query parameters:

- `tenant-name`
- `token` (URL-encoded `Bearer <access_token>`)
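As a sketch, a browser client might open the socket like this. The base URL and query parameter names come from above; the environment segment, tenant name, and token value are placeholders you must supply yourself:

```javascript
// Placeholders - substitute your own environment, tenant, and access token.
const environment = "eu";
const tenantName = "my-tenant";
const accessToken = "<access_token>";

// Build the /transcribe URL with the required query parameters.
const url =
  `wss://api.${environment}.corti.app/audio-bridge/v2/transcribe` +
  `?tenant-name=${encodeURIComponent(tenantName)}` +
  `&token=${encodeURIComponent(`Bearer ${accessToken}`)}`;

const socket = new WebSocket(url);
```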
### 3. Send configuration

Required within 10 seconds of opening the connection.
Send the configuration as soon as the socket opens: after the wss connection is opened, the server expects a `config` message within 10 seconds, otherwise it closes the socket with `CONFIG_TIMEOUT`.
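The configuration schema is not reproduced in this guide, so the message below is only a hypothetical sketch: the `type: "config"` envelope and the fields inside `configuration` are placeholders, and the actual schema should come from the API reference. It reuses the `socket` from the connection sketch above:

```javascript
socket.addEventListener("open", () => {
  // Hypothetical configuration payload - replace with the real schema.
  const configMessage = {
    type: "config",
    configuration: {
      primaryLanguage: "en", // placeholder field
      mode: "dictation",     // placeholder field
    },
  };
  socket.send(JSON.stringify(configMessage));
});
```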
Wait for a message with `{"type": "CONFIG_ACCEPTED"}` before sending audio. If you receive `CONFIG_DENIED` or `CONFIG_TIMEOUT`, close the socket and fix the configuration.
### 4. Real-Time Stateless Dictation

Stream audio and receive transcripts.
#### Send audio frames

Send audio as binary WebSocket messages. See details on supported audio formats here. While recording is active, send a continuous stream of 250 ms audio chunks with no overlapping frames.
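One way to produce 250 ms chunks in a browser is MediaRecorder with a 250 ms timeslice. This sketch reuses `socket` and `configAccepted` from the earlier snippets, and it assumes the MediaRecorder container format is among the supported audio formats linked above:

```javascript
// Capture microphone audio and forward 250 ms chunks as binary frames.
navigator.mediaDevices.getUserMedia({ audio: true }).then((stream) => {
  const recorder = new MediaRecorder(stream);

  recorder.addEventListener("dataavailable", (event) => {
    if (event.data.size > 0 && configAccepted && socket.readyState === WebSocket.OPEN) {
      socket.send(event.data); // Blob is sent as a binary WebSocket message
    }
  });

  recorder.start(250); // emit one chunk roughly every 250 ms
});
```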
#### Handle responses

The server sends messages with different `type` values.
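A sketch of a handler branching on the message types mentioned in this guide (`transcript`, `command`, usage info, `ended`); the exact payload fields, and the `usage` type name, are assumptions:

```javascript
socket.addEventListener("message", (event) => {
  if (typeof event.data !== "string") return;
  const message = JSON.parse(event.data);

  switch (message.type) {
    case "transcript":
      console.log("Transcript segment:", message); // payload shape not shown here
      break;
    case "command":
      console.log("Dictation command:", message);
      break;
    case "usage": // assumed type name for the usage info message
      console.log("Usage info:", message);
      break;
    case "ended":
      console.log("Session ended by server");
      break;
    default:
      console.log("Other message:", message.type);
  }
});
```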
### 5. Flush the audio buffer (optional)

Force results to be returned from the server.
Use `flush` to force pending transcript segments and/or dictation commands to be returned without closing the session. This is useful for separating dictation into logical sections. Wait for `type: "flushed"` before treating the section as complete.
### 6. End the session

Sending the `end` message.
Send `end` when you are done sending audio. The server then:

- Emits any remaining `transcript` or `command` messages.
- Sends usage info.
- Sends `ended`.
- Closes the WebSocket.
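A sketch of ending a session, again assuming the `end` request is a JSON text message:

```javascript
function endSession(socket, recorder) {
  recorder.stop();                              // stop producing audio chunks
  socket.send(JSON.stringify({ type: "end" })); // assumed message shape

  // Remaining transcript/command messages, usage info, and "ended" arrive
  // before the server closes the connection.
  socket.addEventListener("close", () => {
    console.log("Session closed");
  });
}
```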
### 7. Basic end-to-end example

JavaScript dictation app.
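A condensed dictation flow combining the steps above, under the same assumptions as the earlier sketches (placeholder credentials, hypothetical configuration fields, assumed control-message shapes):

```javascript
async function runDictation({ environment, tenantName, accessToken }) {
  // Microphone capture; the MediaRecorder container format is an assumption.
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const recorder = new MediaRecorder(stream);

  // Open the /transcribe WebSocket with the required query parameters.
  const url =
    `wss://api.${environment}.corti.app/audio-bridge/v2/transcribe` +
    `?tenant-name=${encodeURIComponent(tenantName)}` +
    `&token=${encodeURIComponent(`Bearer ${accessToken}`)}`;
  const socket = new WebSocket(url);

  socket.addEventListener("open", () => {
    // Hypothetical configuration payload - replace with the real schema.
    socket.send(JSON.stringify({
      type: "config",
      configuration: { primaryLanguage: "en", mode: "dictation" },
    }));
  });

  socket.addEventListener("message", (event) => {
    if (typeof event.data !== "string") return;
    const message = JSON.parse(event.data);

    if (message.type === "CONFIG_ACCEPTED") {
      recorder.start(250); // stream 250 ms chunks once the config is accepted
    } else if (message.type === "CONFIG_DENIED" || message.type === "CONFIG_TIMEOUT") {
      socket.close();
    } else if (message.type === "transcript" || message.type === "command") {
      console.log(message.type, message);
    } else if (message.type === "ended") {
      console.log("Session ended");
    }
  });

  recorder.addEventListener("dataavailable", (event) => {
    if (event.data.size > 0 && socket.readyState === WebSocket.OPEN) {
      socket.send(event.data); // binary audio frame
    }
  });

  // Call the returned function when the user is done dictating.
  return () => {
    recorder.stop();
    socket.send(JSON.stringify({ type: "end" })); // assumed message shape
  };
}
```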