This page explains the behaviour of real-time fact extraction via the /stream API endpoint. In addition to the raw API reference, Corti also provides this endpoint wrapped in a JavaScript SDK, ready for you to integrate into your app.

FactsR™

  • Real-time clinical reasoning
  • Humans-in-the-loop by design
  • Intermediate documentation layer

This feature requires human oversight of its outputs

How to

Use this endpoint if your integration is centred around a real-time workflow, streaming live audio over a WebSocket to receive transcripts and extracted facts. This endpoint is stateful, with data storage tied to a specific interactionId. Audio, transcripts, and extracted facts are automatically saved to the database by the API. Enterprise-grade customers can disable data persistence.
You remain in full control: delete individual records pertaining to a given interaction (e.g. only audio, only facts), or simply delete the overarching interaction, which cascades the deletion to all of its collection resources.
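For illustration, here is a minimal sketch of both deletion options. The base URL, routes, and bearer-token auth are assumptions; consult the API reference for the exact paths and authentication scheme.

```typescript
// Hypothetical routes and auth, for illustration only.
const BASE = "https://api.example.corti.app/v2"; // hypothetical base URL
const headers = { Authorization: `Bearer ${process.env.CORTI_TOKEN}` };
const interactionId = "YOUR_INTERACTION_ID";

// Delete individual collections only, e.g. just the audio or just the facts.
await fetch(`${BASE}/interactions/${interactionId}/audio`, { method: "DELETE", headers });
await fetch(`${BASE}/interactions/${interactionId}/facts`, { method: "DELETE", headers });

// Or delete the overarching interaction, cascading to all collection resources.
await fetch(`${BASE}/interactions/${interactionId}`, { method: "DELETE", headers });
```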
Steps:
1

Create an interaction
2

Open the WebSocket Secure (WSS) connection pertaining to that interaction, make sure to enable mode.facts and the desired outputLanguage for the facts, then commit your config.
3

As you stream audio to the server, transcript phrases will be returned and, based on those, facts will be extracted. Each fact extracted by the LLM is assigned the factGroup deemed most relevant. See the available fact groups.
4

After you send the type.end event, the server will process any remaining transcripts, extract any remaining facts, return those, and then send a type.ENDED event to close the WSS.
This workflow is also available via the JavaScript SDK for even faster integration.
5

Now it is up to your integration to bring the clinician into the loop: review the extracted facts, add any missing aspects, and discard facts that are not relevant for generating documentation. Use the batch PATCH facts or single PATCH fact endpoints to reflect the clinician's modifications.
Extracted facts will be marked with source: core. Use source: user to reflect a clinician edit or addition. Use isDiscarded: true to mark a fact as not relevant to the documentation. Note, however, that you still need to filter those facts out and not include them in your request to generate a document; the API merely facilitates record-keeping at this point. A TypeScript sketch of the full flow follows these steps.
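Putting the steps together, here is a minimal sketch of the flow. The base URL, routes, request bodies, and WebSocket message shapes below are assumptions made for illustration; consult the API reference, or use the JavaScript SDK, for the exact contracts.

```typescript
// Illustrative sketch only: URLs, routes, and message shapes are assumptions
// drawn from this guide, not the authoritative API reference.
const BASE = "https://api.example.corti.app/v2"; // hypothetical base URL
const token = process.env.CORTI_TOKEN ?? "";

// 1. Create an interaction (hypothetical request body).
const created = await fetch(`${BASE}/interactions`, {
  method: "POST",
  headers: { Authorization: `Bearer ${token}`, "Content-Type": "application/json" },
  body: JSON.stringify({ encounter: { type: "consultation" } }), // assumption
}).then((r) => r.json());
const interactionId: string = created.interactionId;

// 2. Open the WSS for that interaction, enable facts, and commit the config.
const ws = new WebSocket(
  `wss://api.example.corti.app/stream/${interactionId}?token=${token}` // hypothetical URL
);
ws.addEventListener("open", () => {
  ws.send(
    JSON.stringify({
      type: "config",
      configuration: {
        mode: { facts: true },  // enable fact extraction
        outputLanguage: "en",   // language of the returned facts
      },
    })
  );
});

// 3. Handle transcript phrases and extracted facts as they are returned.
ws.addEventListener("message", (event) => {
  const msg = JSON.parse(String(event.data));
  if (msg.type === "transcript") console.log("transcript:", msg);
  if (msg.type === "facts") console.log("facts:", msg.facts); // each fact carries a factGroup
  if (msg.type === "ENDED") ws.close();                       // server has flushed everything
});

// 4. Stream audio chunks as they are captured, then signal the end.
function sendAudioChunk(chunk: ArrayBuffer) {
  ws.send(chunk);
}
function endSession() {
  ws.send(JSON.stringify({ type: "end" })); // server processes remaining transcripts and facts
}

// 5. Reflect clinician review: PATCH a single fact (hypothetical path and body).
async function patchFact(factId: string, changes: { text?: string; isDiscarded?: boolean }) {
  await fetch(`${BASE}/interactions/${interactionId}/facts/${factId}`, {
    method: "PATCH",
    headers: { Authorization: `Bearer ${token}`, "Content-Type": "application/json" },
    body: JSON.stringify({ ...changes, source: "user" }), // source: user marks a clinician edit
  });
}
```

In practice, the JavaScript SDK wraps this same flow and manages the WebSocket lifecycle for you, so the raw sketch above is mainly useful for understanding what happens on the wire.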

Prior context

Because facts can be added via the API, you can also supply important prior context to the interaction. Imagine you want the LLM to have the following information, already available as prior context in the EHR, to consider when extracting facts during the ambient consultation:
  • 56 year-old, male
  • Chief complaint: Struck by a turtle (W59.22)
  • Existing diagnoses: diabetes type II
Add these three facts before starting the ambient /stream WebSocket if you want them to influence fact extraction (see the sketch below).
Watch the release notes for enhanced support for this workflow, coming soon.
Alternatively, you can include these facts only in the POST documents request when generating documentation.
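For example, the three prior-context facts above could be posted before opening the stream. The route, fact schema, and group values below are assumptions for illustration; use the documented fact groups and the exact POST facts endpoint from the API reference.

```typescript
// Hypothetical route and fact schema; the group values are placeholders.
const BASE = "https://api.example.corti.app/v2"; // hypothetical base URL
const token = process.env.CORTI_TOKEN ?? "";
const interactionId = "YOUR_INTERACTION_ID";

const priorContext = [
  { text: "56 year-old, male", group: "patient-info", source: "user" },
  { text: "Chief complaint: Struck by a turtle (W59.22)", group: "chief-complaint", source: "user" },
  { text: "Existing diagnoses: diabetes type II", group: "medical-history", source: "user" },
];

await fetch(`${BASE}/interactions/${interactionId}/facts`, {
  method: "POST",
  headers: { Authorization: `Bearer ${token}`, "Content-Type": "application/json" },
  body: JSON.stringify({ facts: priorContext }),
});
// ...then open the ambient /stream WebSocket as described in the steps above.
```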

Additional information

Some additional aspects to be aware of:
  • A sliding context window is fed to the LLM
  • Facts are extracted at roughly 60-second intervals
  • Extracted facts undergo agentic quality-assuring refinement and are then returned over the WebSocket
  • As facts are extracted, they are also fed back to the LLM to guide the most relevant sequential extraction
Currently, facts posted via the API while an ambient /stream is ongoing are not exposed to the LLM and cannot be returned over the WebSocket.
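A practical consequence of the last point: the WebSocket alone will not give you a complete fact list if your app also posts facts during the session. A minimal client-side merge might look like the sketch below; the Fact shape is an assumption.

```typescript
// Minimal client-side fact store; the Fact shape below is an assumption.
interface Fact {
  id: string;
  text: string;
  group: string;
  source: "core" | "user";
  isDiscarded?: boolean;
}

const factsById = new Map<string, Fact>();

// Facts arriving over the WebSocket (source: core, roughly every 60 seconds).
function onStreamedFacts(facts: Fact[]) {
  for (const fact of facts) factsById.set(fact.id, fact);
}

// Facts your app posts via the API during the stream are not echoed back
// over the WebSocket, so record them locally as well.
function onPostedFact(fact: Fact) {
  factsById.set(fact.id, fact);
}

// Only non-discarded facts should go into the POST documents request.
function factsForDocument(): Fact[] {
  return [...factsById.values()].filter((f) => !f.isDiscarded);
}
```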