# Introduction to the Administration API
Source: https://docs.corti.ai/about/admin-api
Programmatic access to manage your Corti API Console account
## What is the Admin API?
The `Admin API` lets you manage your Corti API Console programmatically. It is built for administrators who want to automate account operations.
This `Admin API` is separate from the `Corti API` used for speech to text, text generation, and agentic workflows:
* Authentication and scope: the `Admin API` uses email-and-password authentication to obtain a bearer token via `/auth/token`. This token is used only for API administration.
* The `Admin API` endpoints `/customers` and `/users` are only enabled and exposed for projects with Embedded Assistant.
Please [contact us](https://help.corti.app) if you are interested in this functionality or have further questions.
### Use Cases
The following functionality is currently supported by the `Admin API`:
| Feature | Functionality | Scope |
| :------------------- | :----------------------------------------------------------------------- | :------------------------------- |
| **Authentication** | Authenticate user and get access token | All projects |
| **Manage Customers** | Create, update, list, and delete customer accounts within your project | Projects with Embedded Assistant |
| **Manage Users** | Create, update, list, and delete users associated with customer accounts | Projects with Embedded Assistant |
Permissions mirror the Corti API Console - only project admins or owners can create, update, or delete resources.
## Quickstart
* Sign up or log in at [console.corti.app](https://corti-api-console.web.app/)
* Ensure your account has a password set
Best practice: use a dedicated service account for Admin API automation. Assign only the minimal required role and rotate credentials regularly.
Call `/auth/token` with your Console email and password to obtain a JWT access token.
See API Reference: [Authenticate user and get access token](/api-reference/admin/auth/authenticate-user-and-get-access-token)
```bash theme={null}
curl -X POST https://api.console.corti.app/functions/v1/public/auth/token \
-H "Content-Type: application/json" \
-d '{
"email": "your-email@example.com",
"password": "your-password"
}'
```
Example response:
```json theme={null}
{
"accessToken": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...",
"tokenType": "bearer",
"expiresIn": 3600
}
```
Include the token in the Authorization header for subsequent requests:
```bash theme={null}
curl -X GET https://api.console.corti.app/functions/v1/public/projects/{projectId}/customers \
-H "Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9..."
```
Tokens expire after `expiresIn` seconds. Once expired, call the `/auth/token` endpoint again to obtain a new token.
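Client-side, this expiry handling can be sketched as a small token cache. This is a minimal illustration, not an official client: the `fetch_token` callable stands in for the `POST /auth/token` request, and only the `accessToken` and `expiresIn` response fields shown above are assumed.

```python
import time

class TokenCache:
    """Caches an Admin API access token and refreshes it shortly before expiry."""

    def __init__(self, fetch_token, leeway_seconds=60):
        # fetch_token() must return a dict shaped like the /auth/token response:
        # {"accessToken": "...", "tokenType": "bearer", "expiresIn": 3600}
        self._fetch_token = fetch_token
        self._leeway = leeway_seconds
        self._token = None
        self._expires_at = 0.0

    def get(self):
        # Refresh when no token is held yet, or the current one is near expiry.
        if self._token is None or time.monotonic() >= self._expires_at:
            response = self._fetch_token()
            self._token = response["accessToken"]
            self._expires_at = time.monotonic() + response["expiresIn"] - self._leeway
        return self._token

    def auth_header(self):
        # Header shape used by subsequent Admin API requests.
        return {"Authorization": f"Bearer {self.get()}"}
```

In a real integration, `fetch_token` would POST your Console email and password to `/auth/token` as in the curl example above.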
***
## Top Pages
* Obtain an access token
* Create a new customer in a project
* Create a new user within a customer
Please [contact us](https://help.corti.app) for support or more information
# Compliance & Trust
Source: https://docs.corti.ai/about/compliance
# Help Center
Source: https://docs.corti.ai/about/help
# Introduction to the Corti API
Source: https://docs.corti.ai/about/introduction
Overview of Corti application programming interfaces (API)
Corti is the all-in-one AI stack for healthcare, built for medical accuracy, compliance, and scale. Healthcare's complex language and specialized knowledge demand purpose-built AI infrastructure.
The Corti AI clinical speech understanding, LLMs, and agentic automation are delivered through a single platform that makes it easy to embed AI directly into healthcare workflows. From documentation and coding to billing and referrals, Corti enables product teams to surface critical insights, improve patient outcomes, boost care, foster provider wellbeing, and ship faster with less effort.
Corti's goal is to be the most complete and accurate **AI infrastructure platform for healthcare developers** building products that demand medical-grade reasoning and enterprise reliability, without any compromises on integration speed or regulatory compliance.
Learn more about what makes the Corti AI platform the right choice for developers and healthcare organizations building the next generation of clinical applications.
***
## Why Choose the Corti API?
| | |
| :-------------------------------- | :-------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Purpose-built for healthcare** | Optimized for the unique needs and compliance standards of the medical field. |
| **Real-time processing** | Live data streaming with highly accurate fact generation enables instantaneous AI-driven support to integrated applications and healthcare professionals. |
| **Seamless workflow integration** | Designed to work across multiple modalities within clinical and operational workflows. |
| **Customizable and scalable** | Robust and adaptable capabilities to fit your organizational needs. |
* Bespoke API integrations
* SDKs and web components
* Embeddable UI elements
* Medical language proficiency
* Secure and compliant
* Real-time or asynchronous
***
## Integrate seamlessly
Corti AI can be integrated via the Corti API, allowing organizations to build bespoke AI-powered solutions that meet their specific needs. The same API powers [Corti Assistant](/assistant/welcome) - a fully packaged, EHR agnostic, real-time ambient AI scribe that automates documentation, information lookup, medical coding and more. If, however, you want to embed Corti AI into your workflow or customize the interactions, then take a deeper dive into the API documentation.
Corti’s **model network and orchestration layer** for Text and Audio, powering Speech to Text, Text Generation, and Agent capabilities.
The power of reasoning and contextual inference unlocks critical functionality to power healthcare workflows.
With the Corti API you can build any speech-enabled or text-based workflow for healthcare.
The capabilities of the Corti AI platform can be accessed directly via the API or with the help of SDKs, Web Components, and embeddable applications (as desired).
***
## Core Capabilities
* **Speech to Text**: real-time and asynchronous medical speech recognition.
* **Text Generation**: reliable healthcare-focused LLM output and documentation.
* **Agents**: build adaptive agents for clinical and operational workflows.
***
## Bringing it All Together
This documentation site outlines how to use the API and provides example workflows.
* Each component of the Corti AI Platform has their own tab for detailed documentation: [Speech to Text](/stt/overview), [Text Generation](/textgen/overview), [Agentic Framework](/agentic/overview), and [Corti Assistant](/assistant/introduction).
* The [API Reference](/api-reference/welcome) tab provides detailed specification for each available endpoint.
* Go to the [API Console](/get_started/getaccess/) to create an account and client credentials to begin your journey.
* The Guides at the top of the Home tab outline how to quickly get started working with the API. Start with [authentication](/authentication/overview)!
* The [Javascript SDK](/sdk/js-sdk/) page walks through how to get started quickly with the Corti API.
* The [Release Notes](/release-notes/overview/) page provides information about the API change policy, scheduled upcoming changes, and links to release notes per product.
If you have any questions about how to implement Corti AI in your healthcare environment or application, then [contact us](https://help.corti.app) for more information.
# Languages
Source: https://docs.corti.ai/about/languages
Learn about how languages are supported in Corti APIs
Corti speech to text and text generation are specifically designed for use in the healthcare domain. Speech to text (STT) language models are designed to balance recognition speed, performance, and accuracy. Text generation LLMs accept various inputs depending on the workflow (e.g., transcripts or facts) and have defined guardrails to support quality assurance of fact and document outputs.
The `language codes` listed below are used in API requests to define output language for speech to text and document generation.
* Learn more about speech to text endpoints [here](/stt/overview).
* Learn how to query the API for document templates available by language [here](/textgen/templates#retrieving-available-templates).
***
## Speech to Text Performance Tiers
Corti speech to text uses a tier system to categorize functionality and performance that is available per language:
| Tier | Description | Medical Terminology Validation |
| :----------- | :--------------------------------------------------------------------------------------------------------------- | :-----------------------------------------------: |
| **Base** | AI-powered speech recognition, ready to integrate with healthcare IT solutions | `Up to 1,000` |
| **Enhanced** | Base plus optimized medical vocabulary for a variety of specialties and improved support for real-time dictation | `1,000-99,999` |
| **Premier** | Enhanced plus speech to text models delivering the best performance in terms of accuracy, quality, and latency | `100,000+` |
***
## Language Availability per Endpoint
The table below summarizes languages supported by the Corti API and how they can be used with speech to text endpoints (`Transcribe`, `Stream`, and `Transcripts`) and text generation endpoints (`Documents`):
| Language | Language Code | ASR Performance | [Transcribe](/api-reference/transcribe) | [Stream](/api-reference/stream) | [Transcripts](/api-reference/transcripts/create-transcript) | [Documents¹](/api-reference/documents/generate-document) |
| :---------------- | :-------------: | :-------------: | :---: | :---: | :---: | :---: |
| Arabic | `ar` | Base | | | | |
| Danish | `da` | Premier | | | ² | |
| Dutch | `nl` | Enhanced | | | | |
| English (US) | `en` or `en-US` | Premier | | | ² | |
| English (UK) | `en-GB` | Premier | | | ² | |
| French | `fr` | Premier | | | ² | |
| German | `de` | Premier | | | ² | |
| Hungarian | `hu` | Enhanced | | | | ⁵ |
| Italian | `it` | Base | | | | |
| Norwegian | `no` | Enhanced | | | | |
| Portuguese | `pt` | Base | | | | |
| Spanish | `es` | Base | | | | |
| Swedish | `sv` | Enhanced | | | | |
| Swiss German | `gsw-CH`³ | Enhanced | | ⁴ | ² | |
| Swiss High German | `de-CH`³ | Premier | | ⁴ | ² | |
**Notes:** ¹ Use the language codes listed above for the `outputLanguage` parameter in `POST /documents` requests. Template(s) or section(s) in the defined output language must be available for successful document generation.
² Speech to text accuracy for async audio file processing via the `/transcripts` endpoint may be degraded compared to real-time recognition via the `/transcribe` and `/stream` endpoints. Further model updates are in progress to address this performance limitation.
³ Use language code `gsw-CH` for dialectal Swiss German workflows (e.g., conversational AI scribing), and language code `de-CH` when Swiss High German is spoken (e.g., dictation).
⁴ For Swiss German `/stream` configuration: use `gsw-CH` for `primaryLanguage` as you transcribe dialectal spoken Swiss German to written Swiss High German, and use `de-CH` for the facts `outputLanguage`.
⁵ Hungarian document generation via default templates is available upon request.
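If you validate language codes client-side before calling the API, the tier data from the table above can be transcribed into a small lookup. This is a convenience sketch maintained by your application, not part of the API:

```python
# ASR performance tier per language code, transcribed from the
# availability table above (exploration-only languages are excluded).
STT_TIERS = {
    "ar": "Base", "da": "Premier", "nl": "Enhanced",
    "en": "Premier", "en-US": "Premier", "en-GB": "Premier",
    "fr": "Premier", "de": "Premier", "hu": "Enhanced",
    "it": "Base", "no": "Enhanced", "pt": "Base",
    "es": "Base", "sv": "Enhanced",
    "gsw-CH": "Enhanced", "de-CH": "Premier",
}

def stt_tier(language_code: str) -> str:
    """Return the ASR performance tier for a language code, or raise."""
    try:
        return STT_TIERS[language_code]
    except KeyError:
        raise ValueError(
            f"Unsupported or exploration-only language code: {language_code!r}"
        )
```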
***
## Languages Available for Exploration
The table below summarizes languages that, upon request, can be enabled with `base` tier functionality and performance.
Corti values the opportunity to expand to new markets, but we need your collaboration and partnership in speech-to-text validation and functionality refinement.
Please [contact us](https://help.corti.app) to discuss further.
| Language | Language Code |
| :--------- | :-----------: |
| Bulgarian | `bg` |
| Croatian | `hr` |
| Czech | `cs` |
| Estonian | `et` |
| Finnish | `fi` |
| Greek | `el` |
| Hebrew | `he` |
| Japanese | `ja` |
| Latvian | `lv` |
| Lithuanian | `lt` |
| Maltese | `mt` |
| Mandarin | `cmn` |
| Polish | `pl` |
| Romanian | `ro` |
| Russian | `ru` |
| Slovakian | `sk` |
| Slovenian | `sl` |
| Ukrainian | `uk` |
***
## Language Translation
* Translation (audio capture in one language with transcript output in a different language) is not officially supported in the Corti API at this time.
* Some general support for translation of `transcripts` in English to `facts` in other languages (e.g. German, French, Danish, etc.) is available in [stream](/textgen/facts_realtime#using-the-api) or [extract Facts](/api-reference/facts/extract-facts) requests.
* Additional translation language-pair combinations are not quality assessed or performance benchmarked.
Please [contact us](https://help.corti.app) if you are interested in a language that is not listed here, need help with tiers and endpoint definitions, or have questions about how to use language codes in API requests.
# Public Roadmap
Source: https://docs.corti.ai/about/roadmap
# A2A Protocol (Agent-to-Agent)
Source: https://docs.corti.ai/agentic/a2a-protocol
Learn about the Agent-to-Agent protocol for inter-agent communication
### What is the A2A Protocol
The **Agent-to-Agent (A2A)** protocol is an open standard that enables secure, framework-agnostic communication between autonomous AI agents. Instead of building bespoke integrations whenever you want agents to collaborate, A2A gives Corti-Agentic and other systems a **common language** agents can use to discover, talk to, and delegate work to one another.
For the full technical specification, see the official A2A project docs at [a2a-protocol.org](https://a2a-protocol.org/latest/).
Originally developed by Google and now stewarded under the Linux Foundation, A2A solves a core problem in multi-agent systems: interoperability across ecosystems, languages, and vendors. It lets you connect agents built in Python, JavaScript, Java, Go, .NET, or other languages and have them cooperate on complex workflows without exposing internal agent state or proprietary logic.
### Why Corti-Agentic Uses A2A
We chose A2A because it:
* **Standardizes agent communication.** Agents can talk to each other without siloed, point-to-point integrations. That makes composite workflows easier to build and maintain.
* **Supports real workflows.** A2A includes discovery, task negotiation, and streaming updates, so agents can coordinate long-running or multi-step jobs.
* **Preserves security and opacity.** Agents exchange structured messages without sharing internal memory or tools. That protects intellectual property and keeps interactions predictable.
* **Leverages open tooling.** There are open source SDKs in multiple languages and example implementations you can reuse.
In Corti-Agentic, A2A is the backbone for agent collaboration. Whether you’re orchestrating specialist agents, chaining reasoning tasks, or integrating external agent services, A2A gives you a robust, open foundation you don’t have to reinvent.
### Open Source SDKs and Tooling
For links to Corti’s official SDK and the official A2A project SDKs (Python, JavaScript/TypeScript, Java, Go, and .NET), see **[SDKs & Integrations](/agentic/sdks-integrations)**.
Please [contact us](https://help.corti.app/) if you need more information about the Corti Agentic Framework.
# Create Agent
Source: https://docs.corti.ai/agentic/agents/create-agent
agentic/auto-generated-openapi.yml post /agents
This endpoint allows the creation of a new agent that can be utilized in the `POST /agents/{id}/v1/message:send` endpoint.
# Delete Agent by ID
Source: https://docs.corti.ai/agentic/agents/delete-agent-by-id
agentic/auto-generated-openapi.yml delete /agents/{id}
This endpoint deletes an agent by its identifier. Once deleted, the agent can no longer be used in threads.
# Get Agent by ID
Source: https://docs.corti.ai/agentic/agents/get-agent-by-id
agentic/auto-generated-openapi.yml get /agents/{id}
This endpoint retrieves an agent by its identifier. The agent contains information about its capabilities and the experts it can call.
# Get Agent Card
Source: https://docs.corti.ai/agentic/agents/get-agent-card
agentic/auto-generated-openapi.yml get /agents/{id}/agent-card.json
This endpoint retrieves the agent card in JSON format, which provides metadata about the agent, including its name, description, and the experts it can call.
# Get Context by ID
Source: https://docs.corti.ai/agentic/agents/get-context-by-id
agentic/auto-generated-openapi.yml get /agents/{id}/v1/contexts/{contextId}
This endpoint retrieves all tasks and top-level messages associated with a specific context for the given agent.
# Get Task by ID
Source: https://docs.corti.ai/agentic/agents/get-task-by-id
agentic/auto-generated-openapi.yml get /agents/{id}/v1/tasks/{taskId}
This endpoint retrieves the status and details of a specific task associated with the given agent. It provides information about the task's current state, history, and any artifacts produced during its execution.
# List Agents
Source: https://docs.corti.ai/agentic/agents/list-agents
agentic/auto-generated-openapi.yml get /agents
This endpoint retrieves a list of all agents that can be called by the Corti Agent Framework.
# List Registry Experts
Source: https://docs.corti.ai/agentic/agents/list-registry-experts
agentic/auto-generated-openapi.yml get /agents/registry/experts
This endpoint retrieves the experts registry, which contains information about all available experts that can be referenced when creating agents through the AgentsCreateExpertReference schema.
# Send Message to Agent
Source: https://docs.corti.ai/agentic/agents/send-message-to-agent
agentic/auto-generated-openapi.yml post /agents/{id}/v1/message:send
This endpoint sends a message to the specified agent to start or continue a task. The agent processes the message and returns a response. If the message contains a task ID that matches an ongoing task, the agent will continue that task; otherwise, it will start a new task.
# Update Agent by ID
Source: https://docs.corti.ai/agentic/agents/update-agent-by-id
agentic/auto-generated-openapi.yml patch /agents/{id}
This endpoint updates an existing agent. Only the fields provided in the request body will be updated; other fields will remain unchanged.
# System Architecture
Source: https://docs.corti.ai/agentic/architecture
Learn about the Agentic Framework system architecture
The Corti Agentic Framework adopts a **multi-agent architecture** to power development of healthcare AI solutions. Compared to a monolithic LLM, the Corti Agentic Framework allows for improved specialization and protocol-based composition.
## Architecture Components
The architecture consists of three core components working together:
* **[Orchestrator](/agentic/orchestrator)** — The central coordinator that receives user requests and delegates tasks to specialized Experts via the A2A protocol.
* **[Experts](/agentic/experts)** — Specialized sub-agents that perform domain-specific work, potentially calling external services through MCP.
* **[Memory](/agentic/context-memory)** — Maintains persistent context and state, enabling the Orchestrator to make informed decisions and ensuring continuity across conversations.
Together, this architecture enables complex workflows through protocol-based composition while maintaining strict data isolation and stateless reasoning agents.
## Interaction mechanisms in Corti
The A2A Protocol supports various interaction patterns to accommodate different needs for responsiveness and persistence. Corti builds on these patterns so you can choose the right interaction model for your product:
* **Request/Response (Polling)**: Used for many synchronous Corti APIs where you send input and wait for a single response. For long‑running Corti tasks, your client can poll the task endpoint for status and results.
* **Streaming with Server-Sent Events (SSE)**: Used by Corti for real‑time experiences (for example, ambient notes or live guidance). Your client opens an SSE stream to receive incremental tokens, events, or status updates over an open HTTP connection.
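For the request/response pattern, the client-side polling loop can be sketched as follows. This is a minimal illustration: `get_status` stands in for a GET on the task endpoint, and the terminal state names used here are assumptions to confirm against the API reference and the A2A task lifecycle.

```python
import time

# Assumed terminal task states; check the API reference for the
# authoritative task lifecycle.
TERMINAL_STATES = {"completed", "failed", "canceled"}

def poll_task(get_status, interval_seconds=1.0, max_attempts=30):
    """Poll a task-status callable until it reports a terminal state.

    get_status() stands in for fetching the task resource and must
    return a dict containing at least a "state" key.
    """
    for _ in range(max_attempts):
        status = get_status()
        if status["state"] in TERMINAL_STATES:
            return status
        time.sleep(interval_seconds)
    raise TimeoutError("task did not reach a terminal state in time")
```

A production client would add backoff and error handling; SSE streaming avoids this loop entirely by pushing updates over an open connection.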
Please [contact us](https://help.corti.app/) if you need more information about the Corti Agentic Framework.
# Beginners' Guide to Agents
Source: https://docs.corti.ai/agentic/beginners-guide
How LLM agents work in the Corti Agentic Framework
In healthcare, an **LLM agent** is not a chatbot trying to answer everything on its own. The language model is used primarily for reasoning and planning, understanding a request, breaking it down, and deciding which experts, tools, or data sources are best suited to handle each part of the task.
Instead of relying on internal knowledge, agents retrieve information from trusted external knowledge bases, clinical systems, and customer-owned data at runtime. When appropriate, they can also take controlled actions, such as writing structured data back to an EHR, triggering downstream workflows, or sending information to other systems.
The **Corti Agentic Framework** is the healthcare-grade platform that makes this possible in production. It provides the orchestration layer that allows agents to delegate work to specialized experts, operate within strict safety and governance boundaries, and remain fully auditable. This enables AI systems that can reason, look things up, and act, without guessing or bypassing clinical control.
# Context & Memory
Source: https://docs.corti.ai/agentic/context-memory
Learn how context and memory work in the Corti Agentic Framework
A **context** in the Corti Agentic Framework makes use of memory from previous text and data in the conversation so far—think of it as a thread that maintains conversation history. Understanding how context works is essential for building effective integrations that maintain continuity across multiple messages.
## What is Context?
A `Context` (identified by a server-generated `contextId`) is a logical grouping of related `Messages`, `Tasks`, and `Artifacts`, providing context across a multi-turn "conversation". It enables you to associate multiple tasks and agents with a single patient encounter, call, or workflow, ensuring continuity and proper scoping of shared knowledge throughout.
The `contextId` is **always created on the server**. You never generate it client-side. This ensures proper state management and prevents conflicts.
### Data Isolation and Scoping
**Contexts provide strict data isolation**: Data can **NEVER** leak across contexts. Each `contextId` creates a completely isolated conversation scope. Messages, tasks, artifacts, and any data within one context are completely inaccessible to agents working in a different context. This ensures:
* **Privacy and security**: Patient data from one encounter cannot accidentally be exposed to another encounter
* **Data integrity**: Information from different workflows remains properly separated
* **Compliance**: You can confidently scope sensitive data to specific contexts without risk of cross-contamination
When you need to share information across contexts, you must explicitly pass it via `DataPart` objects in your messages—there is no automatic data sharing between contexts.
## Using Context for Automatic Memory Management
The simplest way to use context is to let the framework automatically manage conversation memory:
### Workflow Pattern
1. **First message**: Send your message **without** a `contextId`. The server will create a new context automatically.
2. **Response**: The server's response includes the newly created `contextId`.
3. **Subsequent messages**: Include that `contextId` in your requests. Memory from previous messages in that context is automatically managed and available to the agent.
When you include a `contextId` in your request, the agent has access to all previous messages, artifacts, and state within **that specific context only**. Data from other contexts is completely isolated and inaccessible. This enables natural, continuous conversations without manually passing history, while maintaining strict data boundaries between different encounters or workflows.
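The three-step pattern above can be sketched as a small client loop. This is an illustration only: `send_message` stands in for your HTTP call to the message endpoint, and the payload follows the `messages`/`parts` shape used in the JSON example elsewhere on this page.

```python
def run_conversation(send_message, user_turns):
    """Drive a multi-turn conversation, capturing the server-issued contextId.

    send_message(payload) stands in for the HTTP request to the agent's
    message endpoint and must return a response dict containing "contextId".
    """
    context_id = None
    responses = []
    for text in user_turns:
        payload = {
            "messages": [
                {"role": "user", "parts": [{"type": "text", "text": text}]}
            ]
        }
        if context_id is not None:
            # Subsequent turns reuse the server-generated contextId so the
            # agent sees the prior messages in this context.
            payload["contextId"] = context_id
        response = send_message(payload)
        context_id = response["contextId"]  # always issued by the server
        responses.append(response)
    return context_id, responses
```

Note that the first request deliberately omits `contextId`; the server creates the context and returns its identifier.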
### Standalone Requests
If you don't want automatic memory management, always send messages **without** a `contextId`. Each message will then be treated as a standalone request without access to prior conversation history. This is useful for:
* One-off queries that don't depend on prior context
* Testing and debugging individual requests
* Scenarios where you want explicit control over what context is included
## Passing Additional Context with Each Request
In addition to automatic memory management via `contextId`, you can pass additional context in each request by including `DataPart` objects in your message. This is useful when you want to provide specific structured data, summaries, or other context that should be considered for that particular request.
```json theme={null}
{
"contextId": "ctx_abc123",
"messages": [
{
"role": "user",
"parts": [
{
"type": "text",
"text": "Generate a summary of this patient encounter"
},
{
"type": "data",
"data": {
"patientId": "pat_12345",
"encounterDate": "2025-12-15",
"chiefComplaint": "Chest pain",
"vitalSigns": {
"bloodPressure": "120/80",
"heartRate": 72,
"temperature": 98.6
}
}
}
]
}
]
}
```
This approach allows you to:
* Provide structured data (patient records, clinical facts, etc.) alongside text
* Include summaries or distilled information from external sources
* Pass metadata or configuration that should be considered for this specific request
* Combine automatic memory (via `contextId`) with explicit context (via `DataPart`)
## How Memory Works
The Corti Agentic Framework uses an intelligent memory system that automatically indexes and stores all content within a context, enabling semantic retrieval when needed.
### Automatic Indexing
Every `TextPart` and `DataPart` you send in messages is automatically indexed and stored in the context's memory. This includes:
* Text content from user and agent messages
* Structured data from `DataPart` objects (patient records, clinical facts, metadata, etc.)
* Artifacts generated by tasks
* Any other content that flows through the context
### Semantic Retrieval
The memory system operates like a RAG (Retrieval Augmented Generation) pipeline. When an agent processes a new message:
1. **Semantic search**: The system performs semantic search across all indexed content in the context's memory
2. **Relevant retrieval**: It retrieves the most semantically relevant information based on the current query or task
3. **Just-in-time injection**: This relevant context is automatically injected into the agent's prompt, ensuring it has access to the right information at the right time
This means you don't need to manually pass all relevant history with each request—the system intelligently retrieves what's needed based on semantic similarity. For example, if you ask "What was the patient's chief complaint?" in a later message, the system will automatically retrieve and include the relevant information from earlier in the conversation, even if it was mentioned many messages ago.
### Benefits
* **Efficient**: Only relevant information is retrieved and used, reducing token usage
* **Automatic**: No need to manually manage what context to include
* **Semantic**: Works based on meaning, not just keyword matching
* **Comprehensive**: All content in the context is searchable and retrievable
## Context vs. Reference Task IDs
The framework provides two mechanisms for linking related work:
* **`contextId`** – Groups multiple related `Messages`, `Tasks`, and `Artifacts` together (think of it as the encounter/call/workflow bucket). This provides automatic memory management and is sufficient for most use cases.
* **`referenceTaskIds`** – An optional list of specific past `Task` IDs within the same context that should be treated as explicit inputs or background. Note that `referenceTaskIds` are scoped to a context—they reference tasks within the same `contextId`.
**In most situations, you can ignore `referenceTaskIds`** since the automatic memory provided by `contextId` is sufficient. Only use `referenceTaskIds` when you need to explicitly direct the agent to pay attention to specific tasks or artifacts within the context, such as in complex multi-step workflows where you want to ensure certain outputs are prioritized.
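When you do use them, `referenceTaskIds` can be supplied alongside the `contextId`. The snippet below follows the message shape from the earlier example; the IDs are hypothetical, and the field placement is illustrative rather than a normative schema.

```json theme={null}
{
  "contextId": "ctx_abc123",
  "referenceTaskIds": ["task_001", "task_002"],
  "messages": [
    {
      "role": "user",
      "parts": [
        {
          "type": "text",
          "text": "Refine the summary produced in the referenced tasks"
        }
      ]
    }
  ]
}
```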
## Context and Interaction IDs
If you're using contexts alongside Corti's internal interaction representation (for example, when integrating with Corti Assistant or other Corti products that use `interactionId`), note that **these two concepts are currently not linked**.
* **`contextId`** (from the Agentic Framework) and **`interactionId`** (from Corti's internal systems) are separate concepts that you will need to map yourself in your application.
* There is no automatic association between a Corti `interactionId` and an Agentic Framework `contextId`.
**Recommended approach:**
* **Use a fresh context per interaction**: When working with a Corti interaction, create a new `contextId` for that interaction. This keeps data properly scoped and isolated per interaction.
* Store the mapping between your `interactionId` and `contextId`(s) in your own application state or metadata.
* If you need to share data across multiple contexts within the same interaction, explicitly pass it via `DataPart` objects.
We're looking into ways to make the relationship between interactions and contexts more ergonomic if this is relevant to your use case. For now, maintaining your own mapping and using one context per interaction is the recommended pattern.
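The recommended mapping can be as simple as a small registry owned by your application. A minimal in-memory sketch (a real integration would persist this in your own database or metadata store):

```python
class ContextRegistry:
    """Application-side mapping from Corti interactionId to Agentic contextId(s).

    The platform does not link these identifiers, so the integration must
    persist the association itself (here: in memory, for illustration).
    """

    def __init__(self):
        self._by_interaction = {}

    def record(self, interaction_id: str, context_id: str) -> None:
        # One interaction may legitimately own several contexts over time.
        self._by_interaction.setdefault(interaction_id, []).append(context_id)

    def contexts_for(self, interaction_id: str) -> list:
        return list(self._by_interaction.get(interaction_id, []))
```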
For more details on how context relates to other core concepts, see [Core Concepts](/agentic/core-concepts).
Please [contact us](https://help.corti.app/) if you need more information about context and memory in the Corti Agentic Framework.
# Core Concepts
Source: https://docs.corti.ai/agentic/core-concepts
Learn the fundamental building blocks of the Corti Agentic Framework
This page adds Corti-specific detail on top of the core A2A concepts. We have tried to adhere as closely as possible to the intended A2A protocol specification — for the canonical definition of these concepts, see the A2A documentation on [Core Concepts and Components in A2A](https://a2a-protocol.org/latest/topics/key-concepts).
The Corti Agentic Framework uses a set of core concepts that define how Corti agents, tools, and external systems interact. Understanding these building blocks is essential for developing on the Corti platform and for integrating your own systems using the A2A Protocol.
## Core Actors
At Corti, these actors typically map to concrete products and integrations:
* **User**: A clinician, contact-center agent, knowledge worker, or an automated service in your environment. The user initiates a request (for example, “summarize this consultation” or “triage this patient”) that requires assistance from one or more Corti-powered agents.
* **A2A Client (Client Agent)**: The application that calls Corti. This is your application/server. The client initiates communication using the A2A Protocol and orchestrates how results are used in your product.
* **A2A Server (Remote Agent)**: A Corti agent or agentic system that exposes an HTTP endpoint implementing the A2A Protocol. It receives requests from clients, processes tasks, and returns results or status updates.
## Fundamental Communication Elements
The following elements are fundamental to A2A communication and how Corti uses them:
* **Agent Card**: A JSON metadata document describing an agent's identity, capabilities, endpoint, skills, and authentication requirements.
  **Key Purpose:** Enables Corti and your applications to discover agents and understand how to call them securely and effectively.
* **Task**: A stateful unit of work initiated by an agent, with a unique ID and defined lifecycle.
  **Key Purpose:** Powers long‑running operations in Corti (for example, document generation or multi‑step workflows) and enables tracking and collaboration.
* **Message**: A single turn of communication between a client and an agent, containing content and a role ("user" or "agent").
  **Key Purpose:** Carries instructions, clinical context, user questions, and agent responses between your application, Corti Assistant, and remote agents.
* **Part**: The fundamental content container (for example, TextPart, FilePart, DataPart) used within Messages and Artifacts.
  **Key Purpose:** Lets Corti exchange text, audio transcripts, structured JSON, and files in a consistent way across agents and tools.
* **Artifact**: A tangible output generated by an agent during a task (for example, a document, image, or structured data).
  **Key Purpose:** Represents concrete Corti results such as SOAP notes, call summaries, recommendations, or other structured outputs.
* **Context**: A server-generated identifier (`contextId`) that logically groups multiple related `Task` objects, providing context across a series of interactions.
  **Key Purpose:** Enables you to associate multiple tasks and agents with a single patient encounter, call, or workflow, ensuring continuity and proper scoping of shared knowledge throughout an interaction.
## Agent Cards in Corti
The Agent Card is a JSON document that serves as a digital business card for initial discovery and interaction setup. It provides essential metadata about an agent. Clients parse this information to determine if an agent is suitable for a given task, how to structure requests, and how to communicate securely. Key information includes identity, service endpoint (URL), A2A capabilities, authentication requirements, and a list of skills.
Within Corti, Agent Cards are how you:
* Discover first‑party Corti agents and their capabilities.
* Register and describe your own remote agents so Corti workflows can call them.
* Declare authentication and compliance requirements up front, before any PHI or sensitive data is exchanged.
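Concretely, an Agent Card is just a JSON document served by the agent. The sketch below follows the shape defined by the A2A specification; the agent name, endpoint URL, and skill are illustrative, not a real Corti agent:

```json theme={null}
{
  "name": "Clinical Documentation Agent",
  "description": "Generates clinical documents from encounter transcripts.",
  "url": "https://agents.example.com/a2a",
  "version": "1.0.0",
  "capabilities": {
    "streaming": true,
    "pushNotifications": false
  },
  "defaultInputModes": ["text/plain", "application/json"],
  "defaultOutputModes": ["application/json"],
  "skills": [
    {
      "id": "soap_note",
      "name": "SOAP Note Generation",
      "description": "Produces a SOAP note from a consultation transcript.",
      "tags": ["documentation"]
    }
  ]
}
```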
## Messages and Parts in Corti
A message represents a single turn of communication between a client and an agent. It includes a role ("user" or "agent") and a unique `messageId`. It contains one or more Part objects, which are granular containers for the actual content. This design allows A2A to be modality independent and lets Corti mix clinical text, transcripts, and structured data safely in a single exchange.
The primary part kinds are:
* `TextPart`: Contains plain textual content, such as instructions, questions, or generated notes.
* `DataPart`: Carries structured JSON data. This is useful for clinical facts, workflow parameters, EHR identifiers, or any machine‑readable information you exchange with Corti.
* `FilePart`: Represents a file (for example, a PDF discharge letter or an audio recording). It can be transmitted either inline (Base64 encoded) or through a URI. It includes metadata like "name" and "mimeType". This is not yet fully supported.
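For example, a single user message can pair a `TextPart` instruction with a `DataPart` of structured context. The identifiers and values below are illustrative:

```json theme={null}
{
  "message": {
    "role": "user",
    "kind": "message",
    "messageId": "<generated-uuid>",
    "parts": [
      {
        "kind": "text",
        "text": "Summarize this consultation as a SOAP note."
      },
      {
        "kind": "data",
        "data": {
          "encounterId": "enc-123",
          "chiefComplaint": "chest pain"
        }
      }
    ]
  }
}
```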
## Artifacts in Corti
An artifact represents a tangible output or a concrete result generated by a remote agent during task processing. Unlike general messages, artifacts are the actual deliverables. An artifact has a unique `artifactId`, a human-readable name, and consists of one or more part objects. Artifacts are closely tied to the task lifecycle and can be streamed incrementally to the client.
In Corti, artifacts typically correspond to business outputs such as:
* Clinical notes (for example, SOAP notes, discharge summaries).
* Extracted clinical facts or coding suggestions.
* Generated documents, checklists, or other workflow‑specific artifacts.
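Structurally, an artifact is a named bundle of parts attached to a task. A minimal sketch, with illustrative IDs and content, might look like:

```json theme={null}
{
  "artifactId": "<server-generated-id>",
  "name": "soap-note",
  "parts": [
    {
      "kind": "text",
      "text": "S: Patient reports intermittent chest pain...\nO: ...\nA: ...\nP: ..."
    }
  ]
}
```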
## Agent response: Task or Message
The agent response can be a new `Task` (when the agent needs to perform a long-running operation) or a `Message` (when the agent can respond immediately).
On the Corti platform this means:
* For quick operations (for example, a short completion or a classification), your agent often responds with a `Message`.
* For longer workflows (for example, generating a full clinical document, coordinating multiple tools, or waiting on downstream systems), your agent responds with a `Task` that you can monitor and later retrieve artifacts from.
# Experts
Source: https://docs.corti.ai/agentic/experts
Learn about Experts available for use with the AI Agent
An **Expert** is an LLM-powered capability that an AI agent can utilize. Experts are designed to complete small, discrete tasks efficiently, enabling the Orchestrator to compose complex workflows by chaining multiple experts together.
## Expert Registry
Corti maintains a **registry of experts** that includes both first-party experts built by Corti and third-party integrations. You can browse and discover available experts through the [Expert Registry API](/agentic/agents/list-agent) endpoint, which returns information about all available experts including their capabilities, descriptions, and configuration requirements.
The registry includes experts for various healthcare use cases such as:
* Clinical reference lookups
* Medical coding
* Document generation
* Data extraction
* And more
## Bring Your Own Expert
You can create custom experts by exposing an MCP (Model Context Protocol) server. When you register your MCP server, Corti wraps it in a custom LLM agent with a system prompt that you can control. This allows you to:
* Integrate your own tools and data sources
* Create domain-specific experts tailored to your workflows
* Maintain control over the expert's behavior through custom system prompts
* Leverage Corti's orchestration and memory management while using your own tools
### Expert Configuration
When creating a custom expert, you provide configuration that includes:
* **Expert metadata**: ID, name, and description
* **System prompt**: Controls how the LLM agent behaves and reasons about tasks
* **MCP server configuration**: Details about your MCP server including transport type, authorization, and connection details (see [MCP Authentication](/agentic/mcp-authentication) for details)
```json Expert Configuration expandable theme={null}
[
{
"type": "expert",
"id": "ecg_interpreter",
"name": "ECG Interpreter",
"description": "Interprets 12 lead ECGs.",
"systemPrompt": "You are an expert ECG interpreter.",
"mcpServers": [
{
"id": "srv1",
"name": "ECG API Svc",
"transportType": "streamable_http",
"authorizationType": "none",
"url": "https://api.ecg.com/x"
}
]
}
]
```
### MCP Server Requirements
Your MCP server must:
* Implement the [Model Context Protocol](https://modelcontextprotocol.io/) specification
* Expose tools via the standard MCP `tools/list` and `tools/call` endpoints
* Handle authentication
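As a rough sketch of what this looks like on the wire, MCP tool discovery is a JSON-RPC exchange. The tool name and input schema below are illustrative, not a required interface:

```json theme={null}
// Request sent to your MCP server
{ "jsonrpc": "2.0", "id": 1, "method": "tools/list" }

// Expected shape of the response
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "tools": [
      {
        "name": "interpret_ecg",
        "description": "Interprets a 12-lead ECG recording.",
        "inputSchema": {
          "type": "object",
          "properties": { "recordingUrl": { "type": "string" } },
          "required": ["recordingUrl"]
        }
      }
    ]
  }
}
```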
Once registered, your custom expert becomes available to the Orchestrator and can be used alongside Corti's built-in experts in multi-expert workflows.
## Multi-Agent Composition
This feature is coming soon.
We're working on exposing A2A (Agent-to-Agent) endpoints that will allow you to attach multiple agents together, enabling more sophisticated multi-agent workflows. This will provide:
* Direct agent-to-agent communication using the A2A protocol
* Composition of complex workflows across multiple agents
* Fine-grained control over agent interactions and data flow
For now, the Orchestrator handles expert composition automatically. When A2A endpoints are available, you'll be able to build custom agent networks while still leveraging Corti's orchestration capabilities.
## Direct Expert Calls
This feature is coming soon.
We're also working on enabling direct calls to experts, allowing you to use them directly in your workflows rather than only through agents. This will provide:
* Direct API access to individual experts
* Integration of experts into custom workflows
* More flexible composition patterns beyond agent-based orchestration
**While AI chat is a useful mechanism, it's not the only option!**
The Corti Agentic Framework is API-first, enabling synchronous or async usage across a range of modalities: scheduled batch jobs, clinical event triggers, UI widgets, and direct EHR system calls.
[Let us know](https://help.corti.app) what types of use cases you're exploring, from doctor-facing chat bots to system-facing automation backends.
Please [contact us](https://help.corti.app/) if you need more information about Experts or creating custom experts in the Corti Agentic Framework.
# Amboss Researcher
Source: https://docs.corti.ai/agentic/experts/amboss-researcher
Learn about how the Amboss Researcher expert works
Agent framework and tools are currently `under development`. API details subject to change ahead of general release.
The **Amboss Researcher** expert provides access to Amboss's comprehensive medical knowledge base, enabling AI agents to retrieve evidence-based clinical information, medical concepts, and educational content.
Amboss Researcher is particularly useful for clinical decision support, medical education, and accessing up-to-date medical knowledge during patient care workflows.
## Capabilities
The Amboss Researcher expert can:
* Search and retrieve medical concepts and clinical information
* Access evidence-based medical content
* Provide structured medical knowledge for clinical workflows
* Support medical education and training scenarios
## Use Cases
* Clinical decision support during patient consultations
* Medical education and training
* Quick reference lookups for medical concepts
* Evidence-based practice support
## Detailed information
The Amboss Researcher expert integrates with Amboss's medical knowledge platform to provide reliable, evidence-based medical information. When invoked by an AI agent, it can search Amboss's database and return structured medical content that can be used to inform clinical decisions or provide educational context.
# ClinicalTrials.gov
Source: https://docs.corti.ai/agentic/experts/clinicaltrials-gov
Learn about how the ClinicalTrials.gov expert works
Agent framework and tools are currently `under development`. API details subject to change ahead of general release.
The **ClinicalTrials.gov** expert enables AI agents to search and retrieve information from ClinicalTrials.gov, the U.S. National Library of Medicine's database of privately and publicly funded clinical studies.
ClinicalTrials.gov is the primary resource for finding ongoing and completed clinical trials, helping connect patients with research opportunities.
## Capabilities
The ClinicalTrials.gov expert can:
* Search ClinicalTrials.gov's database of clinical studies
* Retrieve trial information including eligibility criteria, locations, and status
* Find relevant clinical trials based on medical conditions or interventions
* Access trial protocols and study details
## Use Cases
* Finding relevant clinical trials for patients
* Research study discovery
* Accessing trial protocols and eligibility criteria
* Clinical research support
## Detailed information
The ClinicalTrials.gov expert integrates with the ClinicalTrials.gov database, which contains information about clinical studies conducted around the world. When invoked by an AI agent, it can search for relevant trials based on medical conditions, interventions, or other criteria, helping healthcare providers identify research opportunities for their patients and access detailed trial information.
# DrugBank
Source: https://docs.corti.ai/agentic/experts/drugbank
Learn about how the DrugBank expert works
Agent framework and tools are currently `under development`. API details subject to change ahead of general release.
The **DrugBank** expert provides AI agents with access to DrugBank, a comprehensive database containing detailed drug and drug target information.
DrugBank is an essential resource for drug information, interactions, pharmacology, and medication-related queries.
## Capabilities
The DrugBank expert can:
* Search DrugBank's database for drug information
* Retrieve drug interactions, contraindications, and warnings
* Access pharmacological data and drug properties
* Find medication-related information and dosing guidelines
## Use Cases
* Drug interaction checking
* Medication information lookups
* Pharmacological research
* Clinical decision support for prescribing
## Detailed information
The DrugBank expert integrates with DrugBank's comprehensive pharmaceutical knowledge base, which contains detailed information about drugs, their mechanisms of action, interactions, pharmacokinetics, and pharmacodynamics. When invoked by an AI agent, it can retrieve critical medication information to support safe prescribing practices and clinical decision-making.
# Medical Calculator
Source: https://docs.corti.ai/agentic/experts/medical-calculator
Learn about how the Medical Calculator expert works
Agent framework and tools are currently `under development`. API details subject to change ahead of general release.
The **Medical Calculator** expert enables AI agents to perform medical calculations, including clinical scores, dosing calculations, risk assessments, and other healthcare-related computations.
Medical Calculator ensures accurate clinical calculations, reducing the risk of manual calculation errors in critical healthcare scenarios.
## Capabilities
The Medical Calculator expert can:
* Perform clinical scoring calculations (for example, CHA2DS2-VASc and APACHE)
* Calculate medication dosages based on patient parameters
* Compute risk assessments and probability scores
* Execute various medical formulas and algorithms
## Use Cases
* Clinical risk scoring and assessment
* Medication dosing calculations
* Laboratory value interpretations
* Clinical decision support calculations
## Detailed information
The Medical Calculator expert provides a comprehensive set of medical calculation capabilities, including clinical scores, dosing formulas, risk assessments, and other healthcare computations. It ensures accuracy and consistency in calculations that are critical for patient care, reducing the risk of errors that can occur with manual calculations.
# Medical Coding
Source: https://docs.corti.ai/agentic/experts/medical-coding
Learn about how the Medical Coding expert works
Agent framework and tools are currently `under development`. API details subject to change ahead of general release.
The **Medical Coding** expert provides AI agents with the ability to assign appropriate medical codes (such as ICD-10, CPT, or other coding systems) based on clinical documentation and patient information.
Medical Coding is essential for billing, claims processing, and maintaining accurate medical records that comply with healthcare coding standards.
## Capabilities
The Medical Coding expert can:
* Assign appropriate medical codes from various coding systems
* Analyze clinical documentation to identify codeable conditions
* Suggest codes based on diagnoses, procedures, and clinical findings
* Ensure compliance with coding standards and guidelines
## Use Cases
* Automated medical coding for billing and claims
* Clinical documentation coding assistance
* Code validation and verification
* Revenue cycle management support
## Detailed information
The Medical Coding expert analyzes clinical documentation, patient records, and medical narratives to identify and assign appropriate medical codes. It supports various coding systems including ICD-10, CPT, HCPCS, and others. The expert can help ensure accurate coding, reduce manual coding errors, and improve efficiency in healthcare administrative workflows.
# Posos
Source: https://docs.corti.ai/agentic/experts/posos
Learn about how the Posos expert works
Agent framework and tools are currently `under development`. API details subject to change ahead of general release.
The **Posos** expert provides AI agents with access to Posos's medical knowledge platform, enabling retrieval of clinical information and medical reference data.
Posos offers comprehensive medical reference information that supports clinical decision-making and medical education.
## Capabilities
The Posos expert can:
* Access Posos's medical knowledge database
* Retrieve clinical reference information
* Provide medical content and educational materials
* Support clinical workflows with authoritative medical data
## Use Cases
* Clinical reference lookups
* Medical information retrieval
* Supporting clinical decision-making
* Medical education and training
## Detailed information
The Posos expert integrates with Posos's medical knowledge platform to provide access to their comprehensive database of medical information. When invoked by an AI agent, it can search and retrieve relevant clinical reference data, medical content, and educational materials that support healthcare workflows and clinical decision-making processes.
# PubMed
Source: https://docs.corti.ai/agentic/experts/pubmed
Learn about how the PubMed expert works
Agent framework and tools are currently `under development`. API details subject to change ahead of general release.
The **PubMed** expert provides AI agents with access to PubMed, the comprehensive database of biomedical literature maintained by the National Library of Medicine.
PubMed is the go-to resource for accessing peer-reviewed medical research, clinical studies, and scientific publications.
## Capabilities
The PubMed expert can:
* Search PubMed's database of biomedical literature
* Retrieve research papers, clinical studies, and scientific articles
* Access abstracts and metadata for publications
* Find relevant research based on medical queries
## Use Cases
* Literature reviews and research
* Evidence-based practice support
* Finding relevant clinical studies
* Accessing peer-reviewed medical research
## Detailed information
The PubMed expert integrates with PubMed's extensive database, which contains millions of citations from biomedical literature, life science journals, and online books. When invoked by an AI agent, it can search for relevant research papers, clinical studies, and scientific publications, providing access to the latest evidence-based medical research to support clinical decision-making.
# Questionnaire Interviewing Expert
Source: https://docs.corti.ai/agentic/experts/questionnaire-interviewing
Learn about how the Questionnaire Interviewing expert works
Agent framework and tools are currently `under development`. API details subject to change ahead of general release.
The **Questionnaire Interviewing** expert enables AI agents to conduct structured interviews and questionnaires, guiding conversations to collect specific information in a systematic manner.
This expert is ideal for patient intake, clinical assessments, and any scenario where structured data collection is required.
## Capabilities
The Questionnaire Interviewing expert can:
* Conduct structured interviews following predefined questionnaires
* Guide conversations to collect specific information
* Adapt questioning based on responses
* Ensure comprehensive data collection
## Use Cases
* Patient intake and history taking
* Clinical assessments and screenings
* Research data collection
* Structured information gathering workflows
## Documentation
### Questionnaire
* `questionnaireId`: `string` (required)
* `version`: `string | number` (required)
* `startQuestion`: `string` (required for now, planned to become optional; the ID of the first question)
* `questions`: `Question[]` (required; each question must have a unique `id`)
* `title`, `description`, `meta`: optional
### Question (Discriminated by `type`)
Common fields for **all** question types:
* `id`: `string` (required, unique)
* `type`: one of the literals below (required)
* `boolean`
* `text_short`
* `text_long`
* `number`
* `date_time` (with `mode`: `date` | `time` | `datetime`)
* `scale` (with `min`, `max`, optional `step`, optional `labels[]`)
* `single_choice` (with `options[]`)
* `multi_choice` (with `options[]`, optional `maxSelections`)
* `text`: `string` (required)
* `guideline`: `string` (optional; guidance for the model to give to the end user, can be used to augment the question text)
* `facets`: `string` (optional; hints for the LLM on how to populate and interpret user answers)
* `required`: `boolean` (optional; if `true`, agent must collect a valid answer)
* `conditions`: `Condition[]` (optional; controls visibility/flow)
* `defaultNext`: `string` (optional; ID of the next question if no option-level `next` applies)
* `meta`: `object` (optional)
### Options (for choice questions)
* `value`: `string | number | boolean`
* `label`: `string`
* `guideline`: `string` (optional)
* `next`: `string` (optional, overrides `defaultNext` when chosen)
### Conditions
**Operators:** `=`, `!=`, `<`, `<=`, `>`, `>=`, `contains`, `not_contains`, `in`, `not_in`, `exists`, `not_exists`
* `question`: `string` – the **source** question ID to evaluate
* `operator`: as above
* `value`: required for all operators **except** `exists`/`not_exists`
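For example, the following question (IDs are illustrative) is only asked when the answer to `question-1` is 2 or lower, and each option routes the flow with its own `next`:

```json theme={null}
{
  "id": "question-5",
  "type": "single_choice",
  "text": "Did anything disturb your sleep?",
  "conditions": [
    { "question": "question-1", "operator": "<=", "value": 2 }
  ],
  "options": [
    { "value": "noise", "label": "Noise", "next": "question-6" },
    { "value": "none", "label": "Nothing in particular", "next": "question-7" }
  ]
}
```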
## Example requests
### 1. Initial Request
User provides an incomplete or ambiguous answer (starts the flow):
```json theme={null}
{
"message": {
"role": "user",
"kind": "message",
"messageId": "<generated-uuid>",
"parts": [
{
"kind": "text",
"text": "Answer to the questionnaire: I'm ok satisfied"
},
{
"kind": "data",
"data": {
"type": "questionnaire",
"questionnaire": {
"questionnaireId": "sleep-survey-v1",
"version": "1.0",
"startQuestion": "question-1",
"questions": [
{
"id": "question-1",
"type": "scale",
"text": "How satisfied are you?",
"min": 1,
"max": 5
},
{
"id": "question-2",
"type": "scale",
"text": "How many hours of sleep did you have?",
"min": 0,
"max": 12
},
{
"id": "question-3",
"type": "text_short",
"text": "How well rested do you feel?"
},
{
"id": "question-4",
"type": "single_choice",
"text": "How many interruptions did you have?",
"options": [
{"value": "none", "label": "None at all"},
{"value": "few", "label": "A few"},
{"value": "lots", "label": "Lots"}
]
}
]
}
}
}
]
}
}
```
**Agent response**
```json theme={null}
{
"task": {
"id": "",
"contextId": "",
"status": {
"state": "completed",
"message": {
"role": "agent",
"parts": [
{
"kind": "text",
"text": "Thanks, how many hours of sleep did you have?"
}
],
"messageId": "", // For client use to track messages
"taskId": "",
"contextId": "",
"kind": "message"
}
},
"artifacts": [{
"artifactId": "",
"parts": [
{
"kind": "data",
"data": {
"answers": {
"question-1": 3
},
"is_completed": false,
"next_question_id": "question-2",
"questionnaire_id": "sleep-survey-v1",
"version": "1.0"
},
"metadata": {
"type": "questionnaire_response"
}
}
]
}],
"history": [
// ...
],
"metadata": {
// contains metadata related to task execution
// not relevant for purposes of input/output
},
"kind": "task"
}
}
```
### 2. Second Request
User answers the follow-up question:
```json theme={null}
{
"message": {
"role": "user",
"kind": "message",
"messageId": "<generated-uuid>",
"contextId": "",
"parts": [
{
"kind": "text",
"text": "I had 7 hours of sleep"
}
]
}
}
```
**Agent response**
```json theme={null}
{
"task": {
"id": "",
"contextId": "",
"status": {
"state": "completed",
"message": {
"role": "agent",
"parts": [
{
"kind": "text",
"text": "Thanks, how many interruptions did you have while sleeping?"
}
],
"messageId": "", // For client use to track messages
"taskId": "",
"contextId": "",
"kind": "message"
}
},
"artifacts": [{
"artifactId": "",
"parts": [
{
"kind": "data",
"data": {
"answers": {
"question-1": 3,
"question-2": 7
},
"is_completed": false,
"next_question_id": "question-3",
"questionnaire_id": "sleep-survey-v1",
"version": "1.0"
},
"metadata": {
"type": "questionnaire_response"
}
}
]
}],
"history": [
// ...
],
"metadata": {
// contains metadata related to task execution
// not relevant for purposes of input/output
},
"kind": "task"
}
}
```
### 3. Final Request
User answers the remaining questions, completing the questionnaire:
```json theme={null}
{
"message": {
"role": "user",
"kind": "message",
"messageId": "<generated-uuid>",
"contextId": "",
"parts": [
{
"kind": "text",
"text": "I feel well rested and was not interrupted"
}
]
}
}
```
**Agent response (final result):**
```json theme={null}
{
"task": {
"id": "",
"contextId": "",
"status": {
"state": "completed"
},
"artifacts": [{
"artifactId": "",
"parts": [
{
"kind": "data",
"data": {
"type": "questionnaire_response",
"answers": {
"question-1": 3,
"question-2": 7,
"question-3": "Well rested",
"question-4": "none"
},
"is_completed": true,
"next_question_id": null,
"questionnaire_id": "sleep-survey-v1",
"version": "1.0"
},
"metadata": {
"type": "questionnaire_response"
}
}
]
}],
"history": [
//...
],
"metadata": {},
"kind": "task"
}
}
```
# Thieme
Source: https://docs.corti.ai/agentic/experts/thieme
Learn about how the Thieme expert works
Agent framework and tools are currently `under development`. API details subject to change ahead of general release.
The **Thieme** expert provides access to Thieme's medical reference materials and educational content, enabling AI agents to retrieve authoritative medical information from Thieme's publications.
Thieme is a trusted source for medical reference materials, textbooks, and educational content used by healthcare professionals worldwide.
## Capabilities
The Thieme expert can:
* Access Thieme's medical reference database
* Retrieve information from medical textbooks and publications
* Provide authoritative medical content
* Support clinical reference and education workflows
## Use Cases
* Clinical reference lookups
* Medical education and training
* Accessing authoritative medical information
* Supporting evidence-based practice
## Detailed information
The Thieme expert integrates with Thieme's medical knowledge platform, providing access to their extensive collection of medical reference materials, textbooks, and educational content. When invoked by an AI agent, it can search Thieme's database and return structured medical information that supports clinical decision-making and medical education.
# Web Search
Source: https://docs.corti.ai/agentic/experts/web-search
Learn about how the Web Search expert works
Agent framework and tools are currently `under development`. API details subject to change ahead of general release.
The **Web Search** expert enables AI agents to perform real-time web searches, allowing them to access current information from the internet that may not be available in static knowledge bases.
Web Search is essential for accessing the most up-to-date information, recent research, news, and dynamic content that changes frequently.
## Capabilities
The Web Search expert can:
* Perform real-time web searches across the internet
* Retrieve current information and recent updates
* Access news, research papers, and dynamic content
* Provide context from multiple online sources
## Use Cases
* Finding recent medical research and publications
* Accessing current news and updates
* Verifying information with multiple sources
* Retrieving information not available in static databases
## Detailed information
The Web Search expert connects AI agents to real-time web search capabilities, enabling them to query the internet and retrieve relevant information. This is particularly valuable for accessing information that changes frequently or is too recent to be included in static knowledge bases. The expert can aggregate results from multiple sources to provide comprehensive answers.
# FAQ
Source: https://docs.corti.ai/agentic/faq
Frequently asked questions about the Corti Agentic Framework
Common questions and answers to help you get the most out of the Corti Agentic Framework and the underlying A2A-based APIs.
### What is the difference between the Orchestrator and an Expert?
The **Orchestrator** is the central coordinator of the Corti Agentic Framework. It receives user requests, reasons about what needs to be done, and delegates work to specialized Experts. The Orchestrator doesn't perform specialized work itself; instead, it plans, selects appropriate Experts, and coordinates their activities to accomplish complex workflows.
An **Expert** is a specialized sub-agent that performs domain-specific tasks. Experts are designed to complete small, discrete tasks efficiently, such as clinical reference lookups, medical coding, or document generation. The Orchestrator composes complex workflows by chaining multiple Experts together.
In summary: the Orchestrator coordinates and delegates; Experts execute specialized work.
For more details, see [Orchestrator](/agentic/orchestrator) and [Experts](/agentic/experts).
### What is the difference between A2A and MCP?
**A2A (Agent-to-Agent)** is the protocol used for accessing the Corti API and for communication between agents. It's the standard protocol that your application uses to interact with Corti agents, send messages, receive tasks, and manage the agent lifecycle. A2A enables secure, framework-agnostic communication between autonomous AI agents.
**MCP (Model Context Protocol)** is the way to connect additional Experts. When you create custom Experts by exposing an MCP server, Corti wraps it in a custom LLM agent. MCP handles agent-to-tool interactions, allowing Experts to interact with external systems and resources.
In the Corti Agentic Framework: A2A handles agent-to-agent communication (including your API calls to Corti), while MCP handles agent-to-tool interactions for Expert integrations.
For more information, see [A2A Protocol](/agentic/a2a-protocol) and [MCP Protocol](/agentic/mcp-protocol).
### Does the Corti agent respond with Tasks or Messages?
The Corti agent typically returns **Tasks** rather than Messages. A Task represents a stateful unit of work with a unique ID and defined lifecycle, which is ideal for most operations in the Corti Agentic Framework.
Tasks are used for:
* Long-running operations (for example, generating a full clinical document)
* Multi-step workflows that coordinate multiple Experts
* Operations that may need to wait on downstream systems
* Any work that benefits from tracking and monitoring
Messages (with immediate responses) are less common and typically used only for very quick operations like simple classifications or completions that can be resolved immediately without any asynchronous processing.
For more details, see [Core Concepts](/agentic/core-concepts).
### When should I use `TextPart` versus `DataPart`?
Use **`TextPart`** for messages that will be directly exposed to the Orchestrator and the LLM. TextPart content is immediately available for reasoning and response generation.
Use **`DataPart`** for structured JSON data that will be stored in memory first and accessed through more indirect manipulation. DataPart content is automatically indexed and stored in the context's memory, enabling semantic retrieval when needed. DataPart is JSON-only and is useful for structured data like patient records, clinical facts, workflow parameters, or EHR identifiers.
You can combine both in a single message: use TextPart for the primary instruction or question, and DataPart to provide structured context that will be semantically retrieved when relevant.
For more details, see [Core Concepts](/agentic/core-concepts) and [Context & Memory](/agentic/context-memory).
### What is the difference between an agent Message and an Artifact?
Both `Message` and `Artifact` use the same underlying `Part` primitives, but they serve different roles:
* **`Message` (with `role: "agent"`)**
* Represents a **single turn of communication** from the agent to the client.
* Best for ephemeral conversational output, intermediate reasoning, clarifications, or status updates.
* Typically tied to a particular task step but not necessarily considered a durable business deliverable.
* **`Artifact`**
* Represents a **tangible, durable output** of a task (for example, a SOAP note, coding suggestions, a structured fact bundle, or a generated document).
* Has its own `artifactId`, name/metadata, and lifecycle; can be streamed, versioned, and reused by later tasks.
* Is what downstream systems, UIs, or audits usually consume as the final result.
A useful mental model is: **Messages are how agents "talk"; Artifacts are what they "produce".** You might see several `agent` messages during a task (status, intermediate commentary), but only a small number of artifacts that represent the completed work.
### How do I manage conversation context and memory?
The Corti Agentic Framework provides automatic memory management through contexts. The `contextId` is always created on the server—send your first message without a `contextId`, and the server will return one in the response. Include that `contextId` in subsequent messages to maintain conversation history automatically.
You can also pass additional context in each request using `DataPart` objects to include structured data, summaries, or other specific context alongside the automatic memory.
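A minimal sketch of the contextId round trip, using the message shape from the quickstart (where the returned `contextId` lives on the task is an assumption; check the response of your first call):

```python
import uuid
from typing import Optional

def build_message(text: str, context_id: Optional[str] = None) -> dict:
    """Build an A2A user message; include contextId only when continuing
    an existing conversation."""
    message = {
        "kind": "message",
        "messageId": str(uuid.uuid4()),
        "role": "user",
        "parts": [{"kind": "text", "text": text}],
    }
    if context_id is not None:
        message["contextId"] = context_id
    return message

# Usage against the message:send endpoint from the quickstart (commented
# out; requires a live agent and credentials):
#
# first = requests.post(url, json={"message": build_message("Start a summary.")}, headers=HEADERS).json()
# context_id = first["task"]["contextId"]  # assumed location of the server-assigned id
# requests.post(url, json={"message": build_message("Add the labs.", context_id)}, headers=HEADERS)
```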
For comprehensive guidance on context and memory management, see [Context & Memory](/agentic/context-memory).
No, you cannot share data between different contexts. Contexts provide strict data isolation—data can **never** leak across contexts. Each `contextId` creates a completely isolated conversation scope where messages, tasks, artifacts, and any data within one context are completely inaccessible to agents working in a different context.
This isolation ensures:
* **Privacy and security**: Patient data from one encounter cannot accidentally be exposed to another encounter
* **Data integrity**: Information from different workflows remains properly separated
* **Compliance**: You can confidently scope sensitive data to specific contexts without risk of cross-contamination
If you need to share information across contexts, you must explicitly pass it via `DataPart` objects in your messages—there is no automatic data sharing between contexts.
For more details, see [Context & Memory](/agentic/context-memory).
The current time-to-live (TTL) for context memory is **30 days**. After this period, the context and its associated memory are automatically cleaned up.
For more information about context lifecycle and memory management, see [Context & Memory](/agentic/context-memory).
The Orchestrator analyzes incoming requests and uses reasoning to determine which Expert(s) are needed to fulfill the task. It considers the nature of the request, the available Experts, and their capabilities.
You can control Expert selection by writing additional system prompts, both in the Orchestrator configuration and in individual Expert configurations. System prompts guide how the Orchestrator reasons about task decomposition and Expert selection, and how Experts interpret and execute their assigned work.
The Orchestrator can compose multiple Experts together, calling them in sequence or parallel as needed to accomplish complex workflows. For more information, see [Orchestrator](/agentic/orchestrator) and [Experts](/agentic/experts).
Please [contact us](https://help.corti.app/) if you need more information about the Corti Agentic Framework.
# MCP Authentication
Source: https://docs.corti.ai/agentic/mcp-authentication
Learn how to authenticate MCP server calls in the Agentic Framework
This document covers how to register MCP servers and how to pass authentication data in A2A message DataParts so MCP tools can be registered.
## MCP server registration
Each MCP server record includes an `authorizationType` field that controls how the Agent API authenticates when registering tools and calling that server. DataParts provide credentials at runtime but do not change the configured authorization type.
### authorizationType = none
**Meaning**: MCP server is callable without authentication.
**Behavior**: No Authorization header or OAuth flow is used. Auth DataParts for this server are ignored.
**Registration example:**
```json theme={null}
{
  "name": "medical-calculator",
  "transportType": "streamable_http",
  "authorizationType": "none",
  "url": "http://mcp-server-medical-calculator.agents:80/mcp"
}
```
### authorizationType = inherit
**Meaning**: Reuse the incoming Agent API bearer token.
**Behavior**: Uses the token from the request `Authorization` header. The API request must include a valid bearer token or the request fails with `missing_inherited_token`.
**DataPart override**: If a token DataPart is supplied for this server name, that token is used instead of the inherited token.
**Registration example:**
```json theme={null}
{
  "name": "medical-coding",
  "transportType": "streamable_http",
  "authorizationType": "inherit",
  "url": "http://mcp-server-medical-coding.agents/mcp"
}
```
### authorizationType = bearer
**Meaning**: MCP server expects a bearer token.
**Behavior**: Uses the token from a matching DataPart (type=token). If the token is missing or invalid, the MCP server typically returns 401 and the task becomes `auth-required`.
**Registration example:**
```json theme={null}
{
  "name": "medical-coding",
  "transportType": "streamable_http",
  "authorizationType": "bearer",
  "url": "http://mcp-server-medical-coding.agents/mcp"
}
```
### authorizationType = oauth2.0
**Meaning**: MCP server expects OAuth client credentials.
**Behavior**: Uses `client_id` and `client_secret` from a matching DataPart (type=credentials) and performs a `client_credentials` flow. Supported for `streamable_http` transport only; `sse` is not supported.
**Registration example:**
```json theme={null}
{
  "name": "medical-coding",
  "transportType": "streamable_http",
  "authorizationType": "oauth2.0",
  "url": "http://mcp-server-medical-coding.agents/mcp"
}
```
## Authorization via message DataParts
Authentication is supplied as an A2A DataPart with `kind: "data"` and the auth payload under `data`. The following fields are used:
* `type`: `token` or `credentials` (case-insensitive)
* `mcp_name`: MCP server name as registered (case-sensitive, trimmed)
* `token`: required when `type=token`
* `client_id` and `client_secret`: required when `type=credentials`
### Token example (for authorizationType=bearer or inherit override)
```json theme={null}
{
  "kind": "data",
  "data": {
    "type": "token",
    "mcp_name": "crm-mcp",
    "token": "eyJhbGciOi..."
  }
}
```
### Credentials example (for authorizationType=oauth2.0)
```json theme={null}
{
  "kind": "data",
  "data": {
    "type": "credentials",
    "mcp_name": "crm-mcp",
    "client_id": "abc",
    "client_secret": "def"
  }
}
```
## Processing rules and errors
* `type` is normalized to lowercase; only `token` and `credentials` are extracted
* DataParts do not change the MCP server `authorizationType`—make sure the DataPart type matches the server configuration
* Unknown or invalid auth DataParts are left in the message as normal parts
* `mcp_name` must be unique per message; duplicates return `mcp_auth_duplicate_name`
* Missing fields return:
* `mcp_auth_missing_name`
* `mcp_auth_missing_token`
* `mcp_auth_missing_credentials`
* If `mcp_name` does not match any configured server, the DataPart is ignored
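As an illustration of these rules, a client-side validator could pre-check auth DataParts before sending a message. This is a sketch of the documented behavior, not the server implementation:

```python
def extract_auth_dataparts(parts, known_servers):
    """Extract MCP auth DataParts following the documented processing rules.

    Returns (auth_by_server, errors, remaining_parts). Error codes mirror
    the ones listed above; this sketch approximates, not reproduces, the
    server-side logic.
    """
    auth, errors, remaining = {}, [], []
    for part in parts:
        data = part.get("data", {}) if part.get("kind") == "data" else {}
        kind = str(data.get("type", "")).lower()  # type is case-insensitive
        if kind not in ("token", "credentials"):
            remaining.append(part)  # non-auth parts stay in the message
            continue
        name = str(data.get("mcp_name", "")).strip()  # case-sensitive, trimmed
        if not name:
            errors.append("mcp_auth_missing_name")
            continue
        if name in auth:
            errors.append("mcp_auth_duplicate_name")  # unique per message
            continue
        if name not in known_servers:
            continue  # unmatched server names are ignored
        if kind == "token":
            if not data.get("token"):
                errors.append("mcp_auth_missing_token")
                continue
            auth[name] = {"type": "token", "token": data["token"]}
        else:
            if not (data.get("client_id") and data.get("client_secret")):
                errors.append("mcp_auth_missing_credentials")
                continue
            auth[name] = {
                "type": "credentials",
                "client_id": data["client_id"],
                "client_secret": data["client_secret"],
            }
    return auth, errors, remaining
```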
## When DataParts are used
* MCP tools are registered when a new thread is created (the first message). Include auth DataParts on that first message
* Later messages on the same thread do not re-register tools, so auth DataParts will be ignored for MCP registration
* In the API flow, extracted auth DataParts are removed from the message before it is stored or sent to reasoning
For more information about the A2A protocol and DataParts, see [A2A Protocol](/agentic/a2a-protocol).
For general information about MCP, see [MCP Protocol](/agentic/mcp-protocol).
Please [contact us](https://help.corti.app/) if you need more information about the Corti Agentic Framework.
# Orchestrator
Source: https://docs.corti.ai/agentic/orchestrator
Learn about the Orchestration Agent at the center of the Agentic Framework
The **Orchestrator** is the central intelligence layer of the Corti Agentic Framework. It serves as the primary interface between users and the multi-agent system, coordinating the flow of conversations and tasks.
## What the Orchestrator Does
The Orchestrator reasons about incoming requests and determines how to fulfill them by coordinating with specialized [Experts](/agentic/experts). Its core responsibilities include:
* **Reasoning and planning**: Analyzes user requests and determines the necessary steps to complete them
* **Expert selection**: Decides which Expert(s) to call, in what order, and with what data
* **Task decomposition**: Breaks complex requests into discrete tasks that can be handled by individual Experts
* **Response generation**: Aggregates results from Experts and typically generates the final response to the user
* **Context management**: Has full access to the [context](/agentic/context-memory), while Experts typically only have scoped access to relevant portions
* **Safety enforcement**: Enforces guardrails, type validation, and policy-driven constraints to ensure safe operation in clinical environments
The Orchestrator does not perform specialized work itself—instead, it delegates to appropriate Experts and coordinates their activities to accomplish complex workflows.
***
For more information about how the Orchestrator fits into the overall architecture, see [Architecture](/agentic/architecture). To understand how context and memory work, see [Context & Memory](/agentic/context-memory).
Please [contact us](https://help.corti.app/) if you need more information about the Orchestrator in the Corti Agentic Framework.
# Overview to the Corti Agentic Framework
Source: https://docs.corti.ai/agentic/overview
AI for every healthcare app
The Corti Agentic Framework is a modular artificial intelligence system for software developers to build advanced AI agents that perform high-quality clinical and operational tasks, without having to spend months on complex architecture work.
The framework is designed to support use cases across the healthcare spectrum, from chat-based assistants for doctors to automated EHR data entry and clinical decision support workflows.
## What Problems It Solves
Modern LLMs are powerful, but on their own they are insufficient and unsafe for clinical use.
The Corti Agent Platform addresses two fundamental gaps:
### 1. LLMs Do Not Have Reliable Access to Clinical Data
LLMs cannot be trusted to rely on internal knowledge alone. In healthcare, responses must be grounded in clinically validated reference sources, real-time patient and system data, and customer-owned systems and APIs. Without access to these sources at runtime, models are forced to infer or guess, which is unacceptable in clinical settings.
The Corti Agentic Framework addresses this by enabling agents to retrieve information directly from trusted external tools as part of their reasoning process. Instead of hallucinating answers, agents are designed to look things up, verify context, and base their outputs on authoritative data.
### 2. LLMs Cannot Safely Act on the World
Clinical workflows require more than generating text. They involve interacting with real systems: querying EHRs, drafting and updating documentation, preparing prescriptions, and triggering downstream processes.
The framework provides a controlled execution layer that allows agents to plan actions, invoke tools, and coordinate multi-step workflows while remaining within clearly defined safety boundaries. Where necessary, agents can pause execution, request human approval, and resume only once explicit consent is given. This ensures that automation enhances clinical workflows without bypassing governance or control.
***
## What You Can Build With It
Using the Corti Agent Platform, teams can build:
* **Clinician-facing assistants**
* Documentation editing
* Guideline and reference lookup
* Coding and administrative support
* **Programmatic agent endpoints**
* Embedded into existing clinical software
* Triggered by events, APIs, or workflows
* **Customer-embedded agents**
* Customers bring their own tools and systems
* Agents combine Corti, third-party, and customer capabilities
All of these share the same underlying agent runtime, safety model, and orchestration layer.
***
## Built for Healthcare by Design
Healthcare is not a general-purpose domain, and this platform reflects that reality.
**Key design principles include:**
* **Typed inputs and outputs**, explicit tool schemas, and guardrails around action-taking ensure safe operation in clinical environments.
* **Observability**: every decision and tool call is observable, with replayable traces and structured logs for transparency, compliance, and quality assurance.
* **Fine-tuned reasoning layers** optimized for healthcare language, workflows, and compliance needs.
* **Multi-agent architecture**: the Corti Agentic Framework uses a state-of-the-art multi-agent architecture to enable greater scale, accuracy, and resilience in AI-driven workflows.
* **Persistent context**: maintain context-aware conversations and manage multiple active contexts (threads) without losing information throughout the session.
* **Prebuilt Experts**: access a library of specialized agents that connect to data sources, tools, and services to execute clinical and operational tasks.
* **Healthcare integrations**: plug directly into EHRs, clinical decision support systems, and medical knowledge bases with minimal setup.
* **Rich context passing**: pass relevant context with each query, including structured data formats like FHIR resources, enabling Experts to work with rich, domain-specific information.
***
## Who It’s For
The Corti Agent Platform is built for teams working on healthcare software.
It is intended for:
* **Healthcare software companies** embedding intelligent automation directly into their products
* **Enterprise customers** building internal, AI-powered clinical workflows
* **Advanced engineering teams** that need flexibility, control, and strong safety guarantees without building bespoke agent infrastructure from scratch
The platform is not limited to simple prompt-based chatbots. It is designed to make it easy to go from demo to **production-grade clinical AI systems** that operate safely in real-world healthcare environments.
***
## Agents vs. Workflows
Understanding the difference between agents and workflows helps you choose the right approach for your use case:
**Agents** are autonomous systems that can think, reason, and adapt to new situations. They use AI to understand context, make decisions dynamically, and take actions based on the task at hand—even when encountering scenarios they haven't seen before. Like a chef who can create a meal based on what's available, agents excel at handling unpredictable, open-ended tasks that require flexibility and judgment.
**Workflows** are structured, step-by-step processes that follow predefined paths. They execute tasks in a fixed order, like following a recipe or checklist. Workflows are ideal for repeatable processes that require consistency and compliance, such as automated approval processes or scheduled maintenance tasks.
For workflow-oriented needs, you can leverage our toolkit of other APIs to orchestrate well-defined, repeatable flows throughout your solution.
In the Corti Agentic Framework, agents leverage the Orchestrator to compose experts dynamically, adapting their approach based on the situation. Workflows, on the other hand, provide deterministic execution paths for tasks with well-defined steps and requirements—supported by our robust library of workflow APIs and integrations.
Please [contact us](https://help.corti.app/) if you need more information about the Corti Agentic Framework.
# Quickstart & First Build
Source: https://docs.corti.ai/agentic/quickstart
Get started with the Corti Agentic Framework
This guide will walk you through setting up your first agent and getting it running end-to-end.
After completing this quickstart, you'll have a working agent and know where to go next based on your use case.
### Prerequisites
* API access credentials
* Development environment set up
* Basic understanding of REST APIs
Start by creating a project in the Corti console. This gives you a workspace and access to manage your clients and credentials. If you haven't set up authentication before, follow the Creating Clients and authentication quickstart guides.
Use the Corti Agentic API to create your first agent. You'll need an access token (obtained using your client credentials) and your tenant name.
```js JavaScript theme={null}
const myAgent = await client.agents.create({
  name: "My First Agent",
  description: "A simple agent to get started with the Corti Agentic Framework"
});
```
```py Python expandable theme={null}
import requests

BASE_URL = "https://api.<region>.corti.app/v2"  # e.g. "https://api.eu.corti.app/v2"
HEADERS = {
    "Authorization": "Bearer <YOUR_ACCESS_TOKEN>",
    "Tenant-Name": "<YOUR_TENANT_NAME>",  # e.g. "base"
    "Content-Type": "application/json",
}

def create_agent():
    """Create a new agent using the Corti Agentic API."""
    url = f"{BASE_URL}/agents"
    payload = {
        "name": "My First Agent",
        "description": "A simple agent to get started with the Corti Agentic Framework",
    }
    response = requests.post(url, json=payload, headers=HEADERS)
    response.raise_for_status()
    return response.json()
```
Use your stored credentials to authenticate, then run your agent end-to-end and verify it processes input and returns the expected outputs.
```js JavaScript theme={null}
const agentResponse = await client.agents.messageSend(myAgent.id, {
  message: {
    role: "user",
    parts: [{
      kind: "text",
      text: "Hello there. This is my first message."
    }],
    messageId: crypto.randomUUID(),
    kind: "message"
  }
});

console.log(agentResponse.task.status.message.parts[0].text)
```
```py Python expandable theme={null}
import uuid
import requests

BASE_URL = "https://api.<region>.corti.app/v2"  # e.g. "https://api.eu.corti.app/v2"
HEADERS = {
    "Authorization": "Bearer <YOUR_ACCESS_TOKEN>",
    "Tenant-Name": "<YOUR_TENANT_NAME>",  # e.g. "base"
    "Content-Type": "application/json",
}

def send_message(agent_id: str):
    """Send a message to an existing agent and return the task response."""
    url = f"{BASE_URL}/agents/{agent_id}/v1/message:send"
    payload = {
        "message": {
            "role": "user",
            "parts": [
                {
                    "kind": "text",
                    "text": "Hello there. This is my first message.",
                }
            ],
            "messageId": str(uuid.uuid4()),
            "kind": "message",
        }
    }
    response = requests.post(url, json=payload, headers=HEADERS)
    response.raise_for_status()
    task_response = response.json()
    # Print the task status message text (equivalent to agentResponse.task.status.message.parts[0].text)
    print(task_response["task"]["status"]["message"]["parts"][0]["text"])
    return task_response

# Assuming you created the agent in the previous step:
# my_agent = create_agent()
# send_message(my_agent["id"])
```
### Next Steps
Depending on your use case:
* **Building custom agents**: See [Core Concepts](/agentic/core-concepts)
* **Integrating with existing systems**: See [SDKs & Integrations](/agentic/sdks-integrations)
* **Understanding the architecture**: See [Architecture Overview](/agentic/architecture)
* **Working with protocols**: See [A2A Protocol](/agentic/a2a-protocol) and [MCP Protocol](/agentic/mcp-protocol)
Please [contact us](https://help.corti.app/) if you need more information about the Corti Agentic Framework.
# SDKs & Integrations
Source: https://docs.corti.ai/agentic/sdks-integrations
Official SDKs and integration options for the Corti Agentic Framework
The Corti Agentic Framework provides official SDKs and supports community integrations to help you build quickly.
## Official Corti SDK
* **@corti/sdk** on npm: Official Corti Agentic SDK for Node and browser environments
## Official A2A Project SDKs
* **a2a-python** (Stable): Build A2A-compliant agents and servers in Python
* **a2a-js** (Stable): Official JavaScript/TypeScript SDK for A2A
* **a2a-java** (Stable): Build A2A-compliant agents and services in Java
* **a2a-go** (Stable): Implement A2A agents and servers in Go
* **a2a-dotnet** (Stable): Build A2A-compatible agents in .NET ecosystems
## Other libraries
* **shadcn/ui component library for chatbots**: [`ai-elements` on npm](https://www.npmjs.com/package/ai-elements)
* **A2A Inspector**: [a2a-inspector on GitHub](https://github.com/a2aproject/a2a-inspector)
* **Awesome A2A**: [awesome-a2a on GitHub](https://github.com/ai-boost/awesome-a2a)
# Authenticate user and get access token
Source: https://docs.corti.ai/api-reference/admin/auth/authenticate-user-and-get-access-token
api-reference/admin/admin-openapi.yml post /auth/token
Authenticate using email and password to receive an access token.
This `Admin API` is separate from the `Corti API` used for speech recognition, text generation, and agentic workflows:
Authentication for the `Admin API` uses email and password to obtain a bearer token via `/auth/token`. This token is used only for API administration.
Please [contact us](https://help.corti.app) if you have interest in this functionality or further questions.
# Create a new customer
Source: https://docs.corti.ai/api-reference/admin/customers/create-a-new-customer
api-reference/admin/admin-openapi.yml post /projects/{projectId}/customers
# Delete a customer
Source: https://docs.corti.ai/api-reference/admin/customers/delete-a-customer
api-reference/admin/admin-openapi.yml delete /projects/{projectId}/customers/{customerId}
# Get quotas for a customer
Source: https://docs.corti.ai/api-reference/admin/customers/get-quotas-for-a-customer
api-reference/admin/admin-openapi.yml get /projects/{projectId}/customers/{customerId}/quotas
# List customers for a project
Source: https://docs.corti.ai/api-reference/admin/customers/list-customers-for-a-project
api-reference/admin/admin-openapi.yml get /projects/{projectId}/customers
# Update a customer
Source: https://docs.corti.ai/api-reference/admin/customers/update-a-customer
api-reference/admin/admin-openapi.yml patch /projects/{projectId}/customers/{customerId}
Update specific fields of a customer. Only provided fields will be updated.
# Get quotas for all tenants within a project
Source: https://docs.corti.ai/api-reference/admin/projects/get-quotas-for-all-tenants-within-a-project
api-reference/admin/admin-openapi.yml get /projects/{projectId}/quotas
# Create a new user and add it to the customer
Source: https://docs.corti.ai/api-reference/admin/users/create-a-new-user-and-add-it-to-the-customer
api-reference/admin/admin-openapi.yml post /projects/{projectId}/customers/{customerId}/users
# Delete a user
Source: https://docs.corti.ai/api-reference/admin/users/delete-a-user
api-reference/admin/admin-openapi.yml delete /projects/{projectId}/customers/{customerId}/users/{userId}
# Get usage consumption for a user within a time window
Source: https://docs.corti.ai/api-reference/admin/users/get-usage-consumption-for-a-user-within-a-time-window
api-reference/admin/admin-openapi.yml get /projects/{projectId}/customers/{customerId}/users/{userId}/consumption
# List users for a customer
Source: https://docs.corti.ai/api-reference/admin/users/list-users-for-a-customer
api-reference/admin/admin-openapi.yml get /projects/{projectId}/customers/{customerId}/users
# Update a user
Source: https://docs.corti.ai/api-reference/admin/users/update-a-user
api-reference/admin/admin-openapi.yml patch /projects/{projectId}/customers/{customerId}/users/{userId}
# Generate Codes
Source: https://docs.corti.ai/api-reference/codes/generate-codes
api-reference/auto-generated-openapi.yml post /interactions/{id}/codes/
`Limited Access - Contact us for more information`
Generate codes within the context of an interaction. This endpoint is only accessible within specific customer tenants. It is not available in the public API.
For stateless code prediction based on input text string or documentId, please refer to the [Predict Codes](/api-reference/codes/predict-codes) API, or [contact us](https://help.corti.app) for more information.
# List Codes
Source: https://docs.corti.ai/api-reference/codes/list-codes
api-reference/auto-generated-openapi.yml get /interactions/{id}/codes/
`Limited Access - Contact us for more information`
List predicted codes within the context of an interaction. This endpoint is only accessible within specific customer tenants. It is not available in the public API.
For stateless code prediction based on input text string or documentId, please refer to the [Predict Codes](/api-reference/codes/predict-codes) API, or [contact us](https://help.corti.app) for more information.
# Predict Codes
Source: https://docs.corti.ai/api-reference/codes/predict-codes
api-reference/auto-generated-openapi.yml post /tools/coding/
Predict medical codes from provided context. This is a stateless endpoint, designed to predict ICD-10-CM, ICD-10-PCS, and CPT codes based on input text string or documentId.
More than one code system may be defined in a single request, and the maximum number of codes to return per system can also be defined.
Code prediction requests have two possible values for context:

* `text`: One set of code prediction results will be returned based on all input text defined.
* `documentId`: Code prediction will be based on that defined document only.

The response includes two sets of results:

* `Codes`: Highest-confidence bundle of codes, as selected by the code prediction model.
* `Candidates`: Full list of candidate codes as predicted by the model, rank-sorted by model confidence, with a maximum possible count of 50.
All predicted code results are based on input context defined in the request only (not other external data or assets associated with an interaction).
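As a rough sketch, a request body for this endpoint might be assembled like this in Python. The field names used here (`context`, `codeSystems`, `maxCandidates`) are assumptions for illustration; consult the request schema in the API reference for the exact shape:

```python
def build_predict_codes_payload(text, code_systems, max_candidates=50):
    """Assemble a hypothetical request body for POST /tools/coding/.

    The field names (context, codeSystems, maxCandidates) are illustrative
    assumptions, not the documented schema; check the API reference
    before use. Multiple code systems may be requested at once.
    """
    return {
        "context": [{"type": "text", "data": text}],
        "codeSystems": [
            {"name": system, "maxCandidates": max_candidates}
            for system in code_systems
        ],
    }
```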
# Select Codes
Source: https://docs.corti.ai/api-reference/codes/select-codes
api-reference/auto-generated-openapi.yml put /interactions/{id}/codes/
`Limited Access - Contact us for more information`
Select predicted codes within the context of an interaction. This endpoint is only accessible within specific customer tenants. It is not available in the public API.
For stateless code prediction based on input text string or documentId, please refer to the [Predict Codes](/api-reference/codes/predict-codes) API, or [contact us](https://help.corti.app) for more information.
# Delete Document
Source: https://docs.corti.ai/api-reference/documents/delete-document
api-reference/auto-generated-openapi.yml delete /interactions/{id}/documents/{documentId}
# Generate Document
Source: https://docs.corti.ai/api-reference/documents/generate-document
api-reference/auto-generated-openapi.yml post /interactions/{id}/documents/
This endpoint offers different ways to generate a document. Find guides to document generation [here](/textgen/documents-standard).
# Get Document
Source: https://docs.corti.ai/api-reference/documents/get-document
api-reference/auto-generated-openapi.yml get /interactions/{id}/documents/{documentId}
Get Document.
# List Documents
Source: https://docs.corti.ai/api-reference/documents/list-documents
api-reference/auto-generated-openapi.yml get /interactions/{id}/documents/
List Documents
# Update Document
Source: https://docs.corti.ai/api-reference/documents/update-document
api-reference/auto-generated-openapi.yml patch /interactions/{id}/documents/{documentId}
# Add Facts
Source: https://docs.corti.ai/api-reference/facts/add-facts
api-reference/auto-generated-openapi.yml post /interactions/{id}/facts/
Adds new facts to an interaction.
# Extract Facts
Source: https://docs.corti.ai/api-reference/facts/extract-facts
api-reference/auto-generated-openapi.yml post /tools/extract-facts
Extract facts from provided text, without storing them.
# List Fact Groups
Source: https://docs.corti.ai/api-reference/facts/list-fact-groups
api-reference/auto-generated-openapi.yml get /factgroups/
Returns a list of available fact groups, used to categorize facts associated with an interaction.
# List Facts
Source: https://docs.corti.ai/api-reference/facts/list-facts
api-reference/auto-generated-openapi.yml get /interactions/{id}/facts/
Retrieves a list of facts for a given interaction.
# Update Fact
Source: https://docs.corti.ai/api-reference/facts/update-fact
api-reference/auto-generated-openapi.yml patch /interactions/{id}/facts/{factId}
Updates an existing fact associated with a specific interaction.
# Update Facts
Source: https://docs.corti.ai/api-reference/facts/update-facts
api-reference/auto-generated-openapi.yml patch /interactions/{id}/facts/
Updates multiple facts associated with an interaction.
# Create Interaction
Source: https://docs.corti.ai/api-reference/interactions/create-interaction
api-reference/auto-generated-openapi.yml post /interactions/
Creates a new interaction.
# Delete Interaction
Source: https://docs.corti.ai/api-reference/interactions/delete-interaction
api-reference/auto-generated-openapi.yml delete /interactions/{id}
Deletes an existing interaction.
# Get Existing Interaction
Source: https://docs.corti.ai/api-reference/interactions/get-existing-interaction
api-reference/auto-generated-openapi.yml get /interactions/{id}
Retrieves a previously recorded interaction by its unique identifier (interaction ID).
# List All Interactions
Source: https://docs.corti.ai/api-reference/interactions/list-all-interactions
api-reference/auto-generated-openapi.yml get /interactions/
Lists all existing interactions. Results can be filtered by encounter status and patient identifier.
# Update Interaction
Source: https://docs.corti.ai/api-reference/interactions/update-interaction
api-reference/auto-generated-openapi.yml patch /interactions/{id}
Modifies an existing interaction by updating specific fields without overwriting the entire record.
# Delete Recording
Source: https://docs.corti.ai/api-reference/recordings/delete-recording
api-reference/auto-generated-openapi.yml delete /interactions/{id}/recordings/{recordingId}
Delete a specific recording for a given interaction.
# Get Recording
Source: https://docs.corti.ai/api-reference/recordings/get-recording
api-reference/auto-generated-openapi.yml get /interactions/{id}/recordings/{recordingId}
Retrieve a specific recording for a given interaction.
# List Recordings
Source: https://docs.corti.ai/api-reference/recordings/list-recordings
api-reference/auto-generated-openapi.yml get /interactions/{id}/recordings/
Retrieve a list of recordings for a given interaction.
# Upload Recording
Source: https://docs.corti.ai/api-reference/recordings/upload-recording
api-reference/auto-generated-openapi.yml post /interactions/{id}/recordings/
Upload a recording for a given interaction. There is a maximum limit of 60 minutes in length and 150MB in size for recordings.
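A client can pre-check the size limit before uploading; a minimal sketch (the 150MB cap comes from the description above, and the helper name is ours):

```python
MAX_RECORDING_BYTES = 150 * 1024 * 1024  # documented 150MB upload cap

def recording_size_ok(num_bytes: int) -> bool:
    """Client-side preflight against the documented 150MB size limit.

    Duration must also not exceed 60 minutes; verifying that requires
    decoding the audio, which is out of scope for this sketch.
    """
    return 0 < num_bytes <= MAX_RECORDING_BYTES
```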
# Real-time conversational transcript generation and fact extraction (FactsR™)
Source: https://docs.corti.ai/api-reference/stream
WebSocket Secure (WSS) API Documentation for /stream endpoint
## Overview
The WebSocket Secure (WSS) `/stream` API enables real-time, bidirectional communication with the Corti system for interaction streaming. Clients can send and receive structured data, including transcripts and fact updates. Learn more about [FactsR™ here](/textgen/factsr/).
This documentation provides a structured guide for integrating the Corti WSS API for real-time interaction streaming.
This `/stream` endpoint supports real-time ambient documentation interactions and clinical decision support workflows.
* If you are looking for a stateless endpoint that is geared towards front-end dictation workflows you should use the [/transcribe WSS](/api-reference/transcribe/)
* If you are looking for asynchronous ambient documentation interactions, then please refer to the [/documents endpoint](/api-reference/documents/generate-document/)
***
## 1. Establishing a Connection
Clients must initiate a WebSocket connection using the `wss://` scheme and provide a valid interaction ID in the URL.
When you create an interaction, the 200 response includes a `websocketUrl` for that interaction, with the `tenant-name` already included as a URL query parameter.
Authenticating the WSS stream also requires a `token` query parameter, in addition to `tenant-name`, to pass in the Bearer access token.
### Path Parameters

* `id`: Unique interaction identifier

### Query Parameters

* `tenant-name`: Specifies the tenant context (`eu` or `us`)
* `token`: The Bearer access token (`Bearer $token`)
#### Using SDK
You can use the Corti SDK (currently in beta) to connect to a stream endpoint.
```ts title="JavaScript (Beta)" theme={null}
import { CortiClient, CortiEnvironment } from "@corti/sdk";
const cortiClient = new CortiClient({
  tenantName: "YOUR_TENANT_NAME",
  environment: CortiEnvironment.Eu,
  auth: {
    accessToken: "YOUR_ACCESS_TOKEN"
  },
});

const streamSocket = await cortiClient.stream.connect({
  id: "<INTERACTION_ID>"
});
```
***
## 2. Handshake Responses
### 101 Switching Protocols
Indicates a successful WebSocket connection.
Upon successful connection, send a `config` message to define the configuration, specifying the input language and expected output preferences.
The config message must be sent within 10 seconds of the WebSocket opening to avoid a `CONFIG_TIMEOUT`, which requires establishing a new WSS connection.
***
## 3. Sending Messages
### Configuration
Declare your `/stream` configuration using the message `"type": "config"` followed by defining the `"configuration": {}`.
The message type is required, along with the `transcription.primaryLanguage`, `mode.type`, and `mode.outputLocale` configuration parameters. The other parameters are optional, depending on your needs and workflow.
Configuration notes:
* Clients must send a stream configuration message and wait for a response of type `CONFIG_ACCEPTED` before transmitting other data.
* If the configuration is not valid it will return `CONFIG_DENIED`.
* The configuration must be committed within 10 seconds of opening the WebSocket, else it will time-out with `CONFIG_TIMEOUT`.
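The notes above can be sketched as a small handshake helper: send the config, resolve on `CONFIG_ACCEPTED`, and reject on `CONFIG_DENIED` or when the 10-second window passes. The `ConfigSocket` interface here is illustrative, not an SDK type.

```typescript
// Minimal socket shape for the sketch; a real client would wrap a
// WebSocket or the SDK's stream socket.
interface ConfigSocket {
  send(data: string): void;
  onMessage(handler: (msg: { type: string }) => void): void;
}

function sendConfigAndWait(
  socket: ConfigSocket,
  configuration: object,
  timeoutMs = 10_000
): Promise<void> {
  return new Promise((resolve, reject) => {
    const timer = setTimeout(() => reject(new Error("CONFIG_TIMEOUT")), timeoutMs);
    socket.onMessage((msg) => {
      if (msg.type === "CONFIG_ACCEPTED") {
        clearTimeout(timer);
        resolve(); // safe to start sending audio now
      } else if (msg.type === "CONFIG_DENIED") {
        clearTimeout(timer);
        reject(new Error("CONFIG_DENIED"));
      }
    });
    socket.send(JSON.stringify({ type: "config", configuration }));
  });
}
```

Only transmit audio or other data after this promise resolves.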
Define parameters for speech to text processing:
The primary spoken language for transcription. See supported languages codes and more information [here](/about/languages).
Set to true to enable speaker diarization in mono channel audio.
Set to true to enable transcription of a multi-channel audio stream.
List of participants with roles assigned to a channel
Audio channel number (e.g., 0 or 1)
Label for the audio channel participant (e.g., `doctor`, `patient`, or `multiple`)
Define facts or transcript as desired output, depending on workflow need:
Set as `facts` to receive structured facts output along with transcripts, or `transcription` to only receive transcript output
Output language for extracted `facts` (required for `type: "facts"`). See supported languages codes and more information [here](/about/languages). (Note: This may be different than the `primaryLanguage` defined for transcript output; see details [here](/textgen/facts_realtime).)
#### Example
```json wss:/stream configuration example theme={null}
{
"type": "config",
"configuration": {
"transcription": {
"primaryLanguage": "en",
"isDiarization": false,
"isMultichannel": false,
"participants": [
{
"channel": 0,
"role": "multiple"
}
]
},
"mode": {
"type": "facts",
"outputLocale": "en"
}
}
}
```
#### Using SDK
You can use the Corti SDK (currently in "beta") to send stream configuration.
You can provide the configuration either directly when connecting, or send it as a separate message after establishing the connection:
```ts title="JavaScript (Beta, recommended)" theme={null}
const configuration = {
transcription: {
primaryLanguage: "en",
isDiarization: false,
isMultichannel: false,
participants: [
{
channel: 0,
role: "multiple"
}
]
},
mode: {
type: "facts",
outputLocale: "en"
}
};
const streamSocket = await cortiClient.stream.connect({
id: "",
configuration
});
```
```ts title="JavaScript (Beta, handle configuration manually)" theme={null}
const streamSocket = await cortiClient.stream.connect({
id: ""
});
streamSocket.on("open", () => {
streamSocket.sendConfiguration({
type: "config",
configuration
});
});
```
### Sending Audio
Ensure that your configuration was accepted before sending audio, and that the initial audio chunk is not too small, as it must contain the headers needed to properly decode the audio.
We recommend sending audio in chunks of 250-500ms. In terms of buffering, the limit is 64000 bytes per chunk.
Audio data should be sent as raw binary without JSON wrapping.
A variety of common audio formats are supported; audio is passed through a transcoder before speech-to-text processing. Specification of sample rate, bit depth, or other audio settings is not required at this time.
See more details on supported audio formats [here](/stt/audio).
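The chunking guidance above (250-500ms chunks, at most 64,000 bytes per message) can be sketched with a hypothetical helper that splits a recorded buffer before sending:

```typescript
// Split a buffer into chunks that respect the 64,000-byte-per-message
// limit; each chunk can then be sent as raw binary over the socket.
function chunkAudio(buffer: ArrayBuffer, maxBytes = 64_000): ArrayBuffer[] {
  const chunks: ArrayBuffer[] = [];
  for (let offset = 0; offset < buffer.byteLength; offset += maxBytes) {
    chunks.push(buffer.slice(offset, offset + maxBytes));
  }
  return chunks;
}

// chunkAudio(recordedBuffer).forEach((chunk) => streamSocket.sendAudio(chunk));
```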
#### Channels, participants, and speakers
In most workflows, especially **in-person settings**, mono-channel audio should be used. If the microphone is a stereo microphone, be sure to set `isMultichannel: false`; audio will be converted to mono-channel, preventing duplicate transcripts from being returned.
In a telehealth workflow, or other **virtual setting**, the remote audio may arrive on one channel (e.g., from WebRTC) with the local client's microphone on a separate channel. In this scenario, set `isMultichannel: true` and assign each channel the relevant participant role (e.g., if the doctor is on the local client, assign channel 0 the participant role `doctor`, and assign the patient's remote audio on channel 1 the role `patient`).
**Diarization** is independent of audio channels and participant roles as it enables speaker separation for mono audio.
With `isDiarization: true`, transcript segments are assigned a speaker automatically, with the first speaker identified as `speakerId: 0`, the second as `speakerId: 1`, etc. If `isDiarization: false`, all transcript segments are returned with `speakerId: -1`.
Read more [here](/stt/diarization).
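For example, a small illustrative helper can group returned transcript segments by `speakerId` so each speaker's lines render together (segment fields follow the transcript response documented below):

```typescript
// Minimal segment shape for the sketch; real responses carry more fields.
interface DiarizedSegment {
  transcript: string;
  speakerId: number;
}

// Collect each speaker's transcript lines into a map keyed by speakerId.
function groupBySpeaker(segments: DiarizedSegment[]): Map<number, string[]> {
  const bySpeaker = new Map<number, string[]>();
  for (const segment of segments) {
    const lines = bySpeaker.get(segment.speakerId) ?? [];
    lines.push(segment.transcript);
    bySpeaker.set(segment.speakerId, lines);
  }
  return bySpeaker;
}
```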
#### Using SDK
You can use the Corti SDK (currently in "beta") to send audio data to the stream.
To send audio, use the sendAudio method on the stream socket. Audio should be sent as binary chunks (e.g., ArrayBuffer):
```ts title="JavaScript (Beta)" theme={null}
streamSocket.sendAudio(chunk); // method doesn't do the chunking
```
### Flush the Audio Buffer
To flush the audio buffer, forcing transcript segments to be returned over the web socket (e.g., when turning off or muting the microphone for the patient to share something private, not to be recorded, during the conversation), send a message -
```json theme={null}
{
"type":"flush"
}
```
The server will return text for audio sent before the `flush` message and then respond with message -
```json theme={null}
{
"type":"flushed"
}
```
The web socket will remain open so recording can continue.
FactsR generation (i.e., when working in `configuration.mode: facts`) is not impacted by the `flush` event and will continue to process as normal.
Client side considerations:
1. If you rely on a `flush` event to separate data (e.g., for different sections in an EHR template), then be sure to receive the `flushed` event before moving on to the next data field.
2. When using a web browser `MediaRecorder` API, audio is buffered and only emitted at the configured timeslice interval. Therefore, *before* sending a `flush` message, call `MediaRecorder.requestData()` to force any remaining buffered audio on the client to be transmitted to the server. This ensures all audio reaches the server before the `flush` is processed.
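The first consideration above can be sketched as a promise that resolves only once the server confirms with `flushed`; the socket interface is illustrative, not an SDK type:

```typescript
// Send a flush and resolve when the server echoes `flushed`, so the
// client can safely move to the next data field.
function flushAndWait(socket: {
  send(data: string): void;
  onMessage(handler: (msg: { type: string }) => void): void;
}): Promise<void> {
  return new Promise((resolve) => {
    socket.onMessage((msg) => {
      if (msg.type === "flushed") resolve();
    });
    // In a browser, call MediaRecorder.requestData() before this send.
    socket.send(JSON.stringify({ type: "flush" }));
  });
}
```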
### Ending the Session
To end the `/stream` session, send a message -
```json theme={null}
{
"type": "end"
}
```
This will signal the server to send any remaining transcript segments and facts (depending on `mode` configuration). Then, the server will send two messages -
```json theme={null}
{
"type":"usage",
"credits":0.1
}
```
```json theme={null}
{
"type":"ENDED"
}
```
Following the message type `ENDED`, the server will close the web socket.
You can open the WebSocket again at any time by reconnecting and sending the configuration.
#### Using SDK
You can use the Corti SDK (currently in "beta") to control the stream status.
When using automatic configuration (passing configuration to connect),
the socket will close itself without reconnecting when it receives an ENDED message.
When using manual configuration, the socket will attempt to reconnect after the server closes the connection. To prevent this,
you must subscribe to the ENDED message and manually close the connection.
```ts title="JavaScript (Beta, recommended)" theme={null}
const streamSocket = await cortiClient.stream.connect({
id: "",
configuration
});
streamSocket.sendEnd({ type: "end" });
streamSocket.on("message", (message) => {
if (message.type === "usage") {
console.log("Usage:", message);
}
// message is received, but connection closes automatically
if (message.type === "ENDED") {
console.log("ENDED:", message);
}
});
```
```ts title="JavaScript (Beta, manual configuration)" theme={null}
const streamSocket = await cortiClient.stream.connect({
id: ""
});
streamSocket.sendEnd({ type: "end" });
streamSocket.on("message", (message) => {
if (message.type === "usage") {
console.log("Usage:", message);
}
if (message.type === "ENDED") {
streamSocket.close(); // Prevents unwanted reconnection
}
});
```
***
## 4. Responses
### Configuration
Returned when sending a valid configuration.
### Transcripts
Interaction ID that the transcript segments are associated with
Transcript text segments
Start time of the transcript segment in seconds
End time of the transcript segment in seconds
Indicates whether the transcript text results are final or interim (Note: only final transcripts are supported in Stream workflows)
Speaker identification (Note: value of `-1` is returned when diarization is disabled)
Audio channel number (e.g., 0 or 1)
```json Transcript response theme={null}
{
"type": "transcript",
"data": [
{
"id": "UUID",
"transcript": "Patient presents with fever and cough.",
"time": { "start": 1.71, "end": 11.296 },
"final": true,
"speakerId": -1,
"participant": { "channel": 0 }
}
]
}
```
### Facts
Unique identifier for the fact
Text description of the fact
Categorization of the fact (e.g., "medical-history")
Unique identifier for the group
Indicates if the fact was discarded
Indicates the source of the fact (e.g., "core", "user")
Timestamp when the fact was created
Timestamp when the fact was last updated
```json Fact response theme={null}
{
"type": "facts",
"fact": [
{
"id": "UUID",
"text": "Patient has a history of hypertension.",
"group": "medical-history",
"groupId": "UUID",
"isDiscarded": false,
"source": "core",
"createdAt": "2024-02-28T12:34:56Z",
"updatedAt": ""
}
]
}
```
By default, incoming audio and returned data streams are persisted on the server, associated with the interactionId. You may query the interaction to retrieve the stored `recordings`, `transcripts`, and `facts` via the relevant REST endpoints. Audio recordings are saved in `.webm` format; transcripts and facts as JSON objects.
Data persistence can be disabled by Corti upon request when needed to support compliance with your applicable regulations and data handling preferences.
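As a hypothetical sketch of retrieving persisted data over REST via the List Transcripts endpoint (`GET /interactions/{id}/transcripts/`): the base URL and auth header shape below are assumptions for your environment.

```typescript
// Build the List Transcripts URL for an interaction.
function transcriptsUrl(baseUrl: string, interactionId: string): string {
  return `${baseUrl}/interactions/${encodeURIComponent(interactionId)}/transcripts/`;
}

// Fetch the stored transcripts, authenticating with the Bearer token.
async function listTranscripts(
  baseUrl: string,
  interactionId: string,
  accessToken: string
): Promise<unknown> {
  const response = await fetch(transcriptsUrl(baseUrl, interactionId), {
    headers: { Authorization: `Bearer ${accessToken}` },
  });
  if (!response.ok) throw new Error(`HTTP ${response.status}`);
  return response.json();
}
```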
#### Using SDK
You can use the Corti SDK (currently in "beta") to subscribe to stream messages.
```ts title="JavaScript (Beta)" expandable theme={null}
streamSocket.on("message", (message) => {
// Distinguish message types
switch (message.type) {
case "transcript":
// Handle transcript message
console.log("Transcript:", message);
break;
case "facts":
// Handle facts message
console.log("Facts:", message);
break;
case "error":
// Handle error message
console.error("Error:", message);
break;
default:
// Handle other message types
console.log("Other message:", message);
}
});
streamSocket.on("error", (error) => {
// Handle error
console.error(error);
});
streamSocket.on("close", () => {
// Handle socket close
console.log("Stream closed");
});
```
### Flushed
Returned by server, after processing `flush` event from client, to return transcript segments
```json theme={null}
{
"type":"flushed"
}
```
### Ended
Returned by server, after processing `end` event from client, to convey amount of credits consumed
```json theme={null}
{
"type":"usage",
"credits":0.1
}
```
Returned by server, after processing `end` event from client, before closing the web socket
```json theme={null}
{
"type":"ENDED"
}
```
***
## 5. Error Handling
In case of an invalid or missing interaction ID, the server will return an error before opening the WebSocket.
From opening the WebSocket, you must commit the configuration within 10 seconds, or the WebSocket will close again.
At the beginning of a WebSocket session the following messages related to configuration can be returned.
```json theme={null}
{"type": "CONFIG_DENIED"} // in case the configuration is not valid
{"type": "CONFIG_MISSING"}
{"type": "CONFIG_NOT_PROVIDED"}
{"type": "CONFIG_ALREADY_RECEIVED"}
```
In addition, a `reason` will be supplied, e.g., `reason: language unavailable`.
Once configuration has been accepted and the session is running, you may encounter runtime or application-level errors.
These are sent as JSON objects with the following structure:
```json theme={null}
{
"type": "error",
"error": {
"id": "error id",
"title": "error title",
"status": 400,
"details": "error details",
"doc":"link to documentation"
}
}
```
In some cases, receiving an `error` message will cause the stream to end, after sending messages of type `usage` and `ENDED`.
#### Using SDK
You can use the Corti SDK (currently in "beta") to handle error messages.
With recommended configuration, configuration errors (e.g., CONFIG\_DENIED, CONFIG\_MISSING, etc.) and runtime errors will both trigger the error event and automatically close the socket. You can also inspect the original message in the message handler. With manual configuration, configuration errors are only received as messages (not as error events), and you must close the socket manually to avoid reconnection.
```ts title="JavaScript (Beta, recommended)" theme={null}
const streamSocket = await cortiClient.stream.connect({
id: "",
configuration
});
streamSocket.on("error", (error) => {
// Emitted for both configuration and runtime errors
console.error("Error event:", error);
// The socket will close itself automatically
});
// still can be accessed with normal "message" subscription
streamSocket.on("message", (message) => {
if (
message.type === "CONFIG_DENIED" ||
message.type === "CONFIG_MISSING" ||
message.type === "CONFIG_NOT_PROVIDED" ||
message.type === "CONFIG_ALREADY_RECEIVED" ||
message.type === "CONFIG_TIMEOUT"
) {
console.log("Configuration error (message):", message);
}
if (message.type === "error") {
console.log("Runtime error (message):", message);
}
});
```
```ts title="JavaScript (Beta, manual configuration)" theme={null}
const streamSocket = await cortiClient.stream.connect({
id: ""
});
streamSocket.on("message", (message) => {
if (
message.type === "CONFIG_DENIED" ||
message.type === "CONFIG_MISSING" ||
message.type === "CONFIG_NOT_PROVIDED" ||
message.type === "CONFIG_ALREADY_RECEIVED" ||
message.type === "CONFIG_TIMEOUT"
) {
console.error("Configuration error (message):", message);
streamSocket.close(); // Must close manually to avoid reconnection
}
if (message.type === "error") {
console.error("Runtime error (message):", message);
streamSocket.close(); // Must close manually to avoid reconnection
}
});
```
# Get Template
Source: https://docs.corti.ai/api-reference/templates/get-template
api-reference/auto-generated-openapi.yml get /templates/{key}
Retrieves template by key.
# List Template Sections
Source: https://docs.corti.ai/api-reference/templates/list-template-sections
api-reference/auto-generated-openapi.yml get /templateSections/
Retrieves a list of template sections with optional filters for organization and language.
# List Templates
Source: https://docs.corti.ai/api-reference/templates/list-templates
api-reference/auto-generated-openapi.yml get /templates/
Retrieves a list of templates with optional filters for organization, language, and status.
# Real-time stateless dictation
Source: https://docs.corti.ai/api-reference/transcribe
WebSocket Secure (WSS) API Documentation for /transcribe endpoint
## Overview
The WebSocket Secure (WSS) `/transcribe` API enables real-time, bidirectional communication with the Corti system for stateless speech to text. Clients can send and receive structured data, including transcripts and detected commands.
This documentation provides a comprehensive guide for integrating these capabilities.
This `/transcribe` endpoint supports real-time stateless dictation.
* If you are looking for real-time ambient documentation interactions, you should use the [/stream WSS](/api-reference/stream/)
* If you are looking for transcript generation based on a pre-recorded audio file, then please refer to the [/transcripts endpoint](/api-reference/transcripts/create-transcript/)
***
## 1. Establishing a Connection
Clients must initiate a WebSocket connection using the `wss://` scheme.
Authenticating the WSS stream requires a `token` query parameter carrying the Bearer access token, in addition to the `tenant-name` parameter.
### Query Parameters
`eu` or `us`
Specifies the tenant context
Bearer \$token
```bash Example wss:/transcribe request theme={null}
curl --request GET \
  --url "wss://api.${environment}.corti.app/audio-bridge/v2/transcribe?tenant-name=${tenant}&token=Bearer%20${accessToken}"
```
#### Using SDK
You can use the Corti SDK (currently in "beta") to connect to the /transcribe endpoint.
```ts title="JavaScript (Beta)" theme={null}
import { CortiClient, CortiEnvironment } from "@corti/sdk";
const cortiClient = new CortiClient({
tenantName: "YOUR_TENANT_NAME",
environment: CortiEnvironment.Eu,
auth: {
accessToken: "YOUR_ACCESS_TOKEN"
},
});
const transcribeSocket = await cortiClient.transcribe.connect();
```
***
## 2. Handshake Response
### 101 Switching Protocols
Indicates a successful WebSocket connection.
Upon successful connection, send a `config` message to define the configuration: Specify the input language and expected output preferences.
The config message must be sent within 10 seconds of the WebSocket being opened to prevent `CONFIG_TIMEOUT`, which will require establishing a new WSS connection.
***
## 3. Sending Messages
### Configuration
Declare your `/transcribe` configuration using the message `"type": "config"` followed by defining the `"configuration": {}`.
Defining the type is required along with the `primaryLanguage` configuration parameter. The other parameters are optional for use, depending on your need and workflow.
Configuration notes:
* Clients must send a stream configuration message and wait for a response of type `CONFIG_ACCEPTED` before transmitting other data.
* If the configuration is not valid it will return `CONFIG_DENIED`.
* The configuration must be committed within 10 seconds of opening the WebSocket, else it will time-out with `CONFIG_TIMEOUT`.
The locale of the primary spoken language. See supported languages codes and more information [here](/about/languages).
When true, converts spoken punctuation such as `period` or `slash` into `.` or `/`. Read more about supported punctuation [here](/stt/punctuation).
When true, automatically punctuates and capitalizes in the final transcript.
Spoken and Automatic Punctuation are mutually exclusive - only one should be set to true in a given configuration request. If both are included and set to `true`, then `spokenPunctuation` will take precedence and override `automaticPunctuation`.
Provide the commands that should be registered and detected - Read more about commands [here](/stt/commands).
Unique value to identify the command. This, along with the command phrase, will be returned by the API when the command is recognized during dictation.
One or more word sequence(s) that can be spoken to trigger the command. At least one phrase is required per command.
Placeholders that can (optionally) be added in `phrases` to define multiple words that should trigger the command.
Define the variable used in command phrase here.
The only variable type supported at this time is `enum`.
List of values that should be recognized for the defined variable.
Define each type of formatting preferences using the `enum` options described below. Formatting configuration is `optional`, and when no properties are configured, the values listed as `default` will be applied automatically. Read more about formatting [here](/stt/formatting).
Formatting is currently in `beta` testing. API details subject to change ahead of general release.
| Option | Format | Example | Default |
| :------------ | :------------------------- | :--------------------------------------------------------- | :--------------------------: |
| `as_dictated` | Preserve spoken phrasing | "February third twenty twenty five" -> "February 3rd 2025" | |
| `long_text` | Long date | "3 February 2025" | |
| `eu_slash` | Short date (EU) | "03/02/2025" | |
| `us_slash` | Short date (US) | "02/03/2025" | |
| `iso_compact` | ISO (basic, no separators) | "20250203" | |
| Option | Format | Example | Default |
| :------------ | :---------- | :-------------------------------------------------------- | :--------------------------: |
| `as_dictated` | As dictated | "Four o'clock" or "four thirty five" or "sixteen hundred" | |
| `h12` | 12-hour | "4:00 PM" | |
| `h24` | 24-hour | "16:00" | |
| Option | Format | Example | Default |
| :-------------------- | :------------------------------------------- | :--------------------------------------- | :--------------------------: |
| `as_dictated` | As dictated | "one, two, ... nine, ten, eleven, twelve" | |
| `numerals_above_nine` | Single digit as words, multi-digit as number | "One, two ... nine, 10, 11, 12" | |
| `numerals` | Numbers only | "1, 2, ... 9, 10, 11, 12" | |
| Option | Format | Example | Default |
| :------------ | :---------- | :------------------------------------------------------------------------ | :--------------------------: |
| `as_dictated` | As dictated | "Millimeters, centimeters, inches; Blood pressure one twenty over eighty" | |
| `abbreviated` | Abbreviated | "mm, cm, in; BP 120/80" | |
[Click here](/stt/formatting#units-and-measurements) to see a full list of supported units and measurements
| Option | Format | Example | Default |
| :------------ | :---------- | :----------- | :--------------------------: |
| `as_dictated` | As dictated | "one to ten" | |
| `numerals` | As numbers | "1-10" | |
| Option | Format | Example | Default |
| :------------ | :---------- | :--------------------- | :--------------------------: |
| `as_dictated` | As dictated | "First, second, third" | |
| `numerals` | Abbreviated | "1st, 2nd, 3rd" | |
#### Example
Here is an example configuration for transcription of dictated audio in English with spoken punctuation enabled, two commands defined, and (default) formatting options defined:
```json wss:/transcribe configuration example theme={null}
{
"type": "config",
"configuration":{
"primaryLanguage": "en",
"spokenPunctuation": true,
"commands": [
{
"id": "next_section",
"phrases": [
"next section", "go to next section"
]
},
{
"id": "insert_template",
"phrases": [
"insert my {template_name} template", "insert {template_name} template"
],
"variables": [
{
"key": "template_name",
"type": "enum",
"enum": [
"soap", "radiology", "referral"
]
}
]
}
],
"formatting": {
"dates": "long_text",
"times": "h24",
"numbers": "numerals_above_nine",
"measurements": "abbreviated",
"numericRanges": "numerals",
"ordinals": "numerals"
}
}
}
```
#### Using SDK
You can use the Corti SDK (currently in "beta") to send configuration.
You can provide the configuration either directly when connecting, or send it as a separate message after establishing the connection:
```ts title="JavaScript (Beta, recommended)" theme={null}
const configuration = {
primaryLanguage: "en",
spokenPunctuation: true,
commands: [
{
id: "next_section",
phrases: ["next section", "go to next section"]
},
]
};
const transcribeSocket = await cortiClient.transcribe.connect(
{ configuration }
);
```
```ts title="JavaScript (Beta, handle configuration manually)" theme={null}
const transcribeSocket = await cortiClient.transcribe.connect();
transcribeSocket.on("open", () => {
transcribeSocket.sendConfiguration({
type: "config",
configuration: config
});
});
```
### Sending Audio
Ensure that your configuration was accepted before sending audio, and that the initial audio chunk is not too small, as it must contain the headers needed to properly decode the audio.
We recommend sending audio in chunks of 250-500ms. In terms of buffering, the limit is 64000 bytes per chunk.
Audio data should be sent as raw binary without JSON wrapping.
A variety of common audio formats are supported; audio is passed through a transcoder before speech-to-text processing. Specification of sample rate, bit depth, or other audio settings is not required at this time.
See more details on supported audio formats [here](/stt/audio).
#### Using SDK
You can use the Corti SDK (currently in "beta") to send audio data.
```ts title="JavaScript (Beta)" theme={null}
transcribeSocket.sendAudio(audioChunk); // method doesn't do the chunking
```
### Flush the Audio Buffer
To flush the audio buffer, forcing transcript segments and detected commands to be returned over the web socket (e.g., when turning off or muting the microphone in a "hold-to-talk" dictation workflow, or in applications that support mic "go to sleep"), send a message -
```json theme={null}
{
"type":"flush"
}
```
The server will return text/commands for audio sent before the `flush` message and then respond with message -
```json theme={null}
{
"type":"flushed"
}
```
The web socket will remain open so dictation can continue.
Client side considerations:
1. If you rely on a `flush` event to separate data (e.g., for different sections in an EHR template), then be sure to receive the `flushed` event before moving on to the next data field.
2. When using a web browser `MediaRecorder` API, audio is buffered and only emitted at the configured timeslice interval. Therefore, *before* sending a `flush` message, call `MediaRecorder.requestData()` to force any remaining buffered audio on the client to be transmitted to the server. This ensures all audio reaches the server before the `flush` is processed.
### Ending the Session
To end the `/transcribe` session, send a message -
```json theme={null}
{
"type": "end"
}
```
This will signal the server to send any remaining transcript segments and/or detected commands. Then, the server will send two messages -
```json theme={null}
{
"type":"usage",
"credits":0.1
}
```
```json theme={null}
{
"type":"ended"
}
```
Following the message type `ended`, the server will close the web socket.
#### Using SDK
You can use the Corti SDK (currently in "beta") to end the /transcribe session.
When using automatic configuration (passing configuration to connect),
the socket will close itself without reconnecting when it receives an ended message.
When using manual configuration, the socket will attempt to reconnect after the server closes the connection. To prevent this,
you must subscribe to the ended message and manually close the connection.
```ts title="JavaScript (Beta, recommended)" theme={null}
const transcribeSocket = await cortiClient.transcribe.connect({
configuration
});
transcribeSocket.sendEnd({ type: "end" });
```
```ts title="JavaScript (Beta, manual configuration)" theme={null}
const transcribeSocket = await cortiClient.transcribe.connect();
transcribeSocket.sendEnd({ type: "end" });
transcribeSocket.on("message", (message) => {
if (message.type === "ended") {
transcribeSocket.close(); // Prevents unwanted reconnection
}
});
```
***
## 4. Responses
### Configuration
Returned when sending a valid configuration.
### Transcripts
Transcript segment with punctuation applied and command phrases removed
The raw transcript without spoken punctuation applied and without command phrases removed
Start time of the transcript segment in seconds
End time of the transcript segment in seconds
If false, the result is an interim transcript segment that may be revised by a later final segment
```json Transcript response theme={null}
{
"type": "transcript",
"data": {
"text": "patient reports mild chest pain.",
"rawTranscriptText": "patient reports mild chest pain period",
"start": 0.0,
"end": 3.2,
"isFinal": true
}
}
```
**[Click here](/stt/best-practices-transcribe)** for detailed guide on how to properly insert transcript segments with proper handling of whitespace, interim vs. final results, and `text` vs. `rawTranscriptText` fields.
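As a minimal sketch of the interim-vs-final handling that guide describes (names here are illustrative): interim segments overwrite the previous interim text, while final segments are committed permanently.

```typescript
// Running display state: committed text is permanent, interim text is
// replaced each time a new interim segment arrives.
interface DisplayState {
  committed: string;
  interim: string;
}

// Apply one transcript segment and return the text to display.
function applySegment(state: DisplayState, text: string, isFinal: boolean): string {
  if (isFinal) {
    state.committed = [state.committed, text].filter(Boolean).join(" ");
    state.interim = "";
  } else {
    state.interim = text;
  }
  return [state.committed, state.interim].filter(Boolean).join(" ");
}
```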
### Commands
The command `id`, as defined in your configuration, identifying which command was detected
The variables identified
The raw transcript without spoken punctuation applied and without command phrases removed
Start time of the transcript segment in seconds
End time of the transcript segment in seconds
```json Command response theme={null}
{
"type": "command",
"data": {
"id": "insert_template",
"variables": {
"template_name": "radiology"
},
"rawTranscriptText": "insert my radiology template",
"start": 2.3,
"end": 2.9
}
}
```
### Flushed
Returned by server, after processing `flush` event from client, to return transcript segments/ detected commands
```json theme={null}
{
"type":"flushed"
}
```
### Ended
Returned by server, after processing `end` event from client, to convey amount of credits consumed
```json theme={null}
{
"type":"usage",
"credits":0.1
}
```
Returned by server, after processing `end` event from client, before closing the web socket
```json theme={null}
{
"type":"ended"
}
```
#### Using SDK
You can use the Corti SDK (currently in "beta") to subscribe to responses from the /transcribe endpoint.
```ts title="JavaScript (Beta)" theme={null}
transcribeSocket.on("message", (message) => {
switch (message.type) {
case "transcript":
console.log("Transcript:", message.data.text);
break;
case "command":
console.log("Command detected:", message.data.id, message.data.variables);
break;
case "error":
console.error("Error:", message.error);
break;
case "usage":
console.log("Usage credits:", message.credits);
break;
default:
// handle other messages
break;
}
});
```
***
## 5. Error Responses
Returned when sending an invalid configuration.
Possible errors `CONFIG_DENIED`, `CONFIG_TIMEOUT`
The reason the configuration is invalid.
The session ID.
Once configuration has been accepted and the session is running, you may encounter runtime or application-level errors.
These are sent as JSON objects with the following structure:
```json theme={null}
{
"type": "error",
"error": {
"id": "error id",
"title": "error title",
"status": 400,
"details": "error details",
"doc":"link to documentation"
}
}
```
In some cases, receiving an `error` message will cause the stream to end, after sending messages of type `usage` and `ended`.
#### Using SDK
You can use the Corti SDK (currently in "beta") to handle error messages.
With recommended configuration, configuration errors (e.g., `CONFIG_DENIED`, etc.) and runtime errors will both trigger the error event and automatically close the socket. You can also inspect the original message in the message handler. With manual configuration, configuration errors are only received as messages (not as error events), and you must close the socket manually to avoid reconnection.
```ts title="JavaScript (Beta, recommended)" theme={null}
const transcribeSocket = await cortiClient.transcribe.connect({
configuration
});
transcribeSocket.on("error", (error) => {
// Emitted for both configuration and runtime errors
console.error("Error event:", error);
// The socket will close itself automatically
});
// still can be accessed with normal "message" subscription
transcribeSocket.on("message", (message) => {
if (
message.type === "CONFIG_DENIED" ||
message.type === "CONFIG_TIMEOUT"
) {
console.log("Configuration error (message):", message);
}
if (message.type === "error") {
console.log("Runtime error (message):", message);
}
});
```
```ts title="JavaScript (Beta, manual configuration)" theme={null}
const transcribeSocket = await cortiClient.transcribe.connect();
transcribeSocket.on("message", (message) => {
if (
message.type === "CONFIG_DENIED" ||
message.type === "CONFIG_TIMEOUT"
) {
console.error("Configuration error (message):", message);
transcribeSocket.close(); // Must close manually to avoid reconnection
}
if (message.type === "error") {
console.error("Runtime error (message):", message);
transcribeSocket.close(); // Must close manually to avoid reconnection
}
});
```
# Create Transcript
Source: https://docs.corti.ai/api-reference/transcripts/create-transcript
api-reference/auto-generated-openapi.yml post /interactions/{id}/transcripts/
Create a transcript from an audio file attached to the interaction via the `/recordings` endpoint. Each interaction may have more than one audio file and transcript associated with it. While audio files up to 60 minutes in total duration, or 150 MB in total size, may be attached to an interaction, synchronous processing is only supported for audio files less than ~2 minutes in duration.
If an audio file takes longer to transcribe than the 25-second synchronous processing timeout, it will continue to process asynchronously. In this scenario, an incomplete or empty transcript with `status=processing` is returned with a `Location` header that can be used to retrieve the final transcript.
The client can poll the Get Transcript Status endpoint (`GET /interactions/{id}/transcripts/{transcriptId}/status`) for transcript status changes:
* `200 OK` with status `processing`, `completed`, or `failed`
* `404 Not Found` if the `interactionId` or `transcriptId` is invalid
The completed transcript can be retrieved via the Get Transcript endpoint (`GET /interactions/{id}/transcripts/{transcriptId}/`).
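The asynchronous flow above can be sketched as a small polling loop. This is an illustrative sketch, not part of the specification: `baseUrl`, `token`, the `status` response field, and the 2-second default interval are assumptions.

```javascript
// Illustrative polling loop for an asynchronously processed transcript.
// Assumes the status endpoint returns JSON with a `status` field.
async function waitForTranscript(baseUrl, token, interactionId, transcriptId, intervalMs = 2000) {
  const headers = { Authorization: `Bearer ${token}` };
  const statusUrl = `${baseUrl}/interactions/${interactionId}/transcripts/${transcriptId}/status`;
  for (;;) {
    const res = await fetch(statusUrl, { headers });
    if (!res.ok) throw new Error(`Status check failed: HTTP ${res.status}`);
    const { status } = await res.json();
    if (status === "completed") {
      // Fetch the finalized transcript via the Get Transcript endpoint
      const finalRes = await fetch(
        `${baseUrl}/interactions/${interactionId}/transcripts/${transcriptId}/`,
        { headers },
      );
      return finalRes.json();
    }
    if (status === "failed") throw new Error("Transcript creation failed; retry the request");
    await new Promise((resolve) => setTimeout(resolve, intervalMs)); // still "processing"
  }
}
```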
# Delete Transcript
Source: https://docs.corti.ai/api-reference/transcripts/delete-transcript
api-reference/auto-generated-openapi.yml delete /interactions/{id}/transcripts/{transcriptId}
Deletes a specific transcript associated with an interaction.
# Get Transcript
Source: https://docs.corti.ai/api-reference/transcripts/get-transcript
api-reference/auto-generated-openapi.yml get /interactions/{id}/transcripts/{transcriptId}
Retrieve a transcript from a specific interaction. Each interaction may have more than one transcript associated with it. Use the List Transcript request (`GET /interactions/{id}/transcripts/`) to see all transcriptIds available for the interaction.
The client can poll the Get Transcript Status endpoint (`GET /interactions/{id}/transcripts/{transcriptId}/status`) for transcript status changes:
* `200 OK` with status `processing`, `completed`, or `failed`
* `404 Not Found` if the `interactionId` or `transcriptId` is invalid
Status of `completed` indicates the transcript is finalized. If the transcript is retrieved while status is `processing`, then it will be incomplete.
# Get Transcript Status
Source: https://docs.corti.ai/api-reference/transcripts/get-transcript-status
api-reference/auto-generated-openapi.yml get /interactions/{id}/transcripts/{transcriptId}/status
Poll for transcript creation status. Status of `completed` indicates the transcript is finalized. If the transcript is retrieved while status is `processing`, it will be incomplete. Status of `failed` indicates the transcript was not created successfully; please retry.
# List Transcripts
Source: https://docs.corti.ai/api-reference/transcripts/list-transcripts
api-reference/auto-generated-openapi.yml get /interactions/{id}/transcripts/
Retrieves a list of transcripts for a given interaction.
# Welcome to the Corti API Reference
Source: https://docs.corti.ai/api-reference/welcome
AI platform for healthcare developers
This API Reference provides detailed specifications for integrating with the Corti API, enabling organizations to build bespoke healthcare AI solutions that meet their specific needs.
Walkthrough and link to the repo for the JavaScript SDK
Download the Corti API Postman collection to start building
***
#### Most Popular
Detailed spec for real-time dictation and voice commands
Detailed spec for real-time conversational intelligence
Start here for opening a contextual messaging thread
Detailed spec for creating an interaction and setting appropriate context
Attach an audio file to the interaction
Start or continue your contextual chat and agentic tasks
Convert audio files to text
Create one to many documents for an interaction
Retrieve information about all available experts for use with your agents
***
### Resources
Learn about upcoming changes and recent API, language models, and app updates
View help articles and documentation, contact the Corti team, and manage support tickets
Learn about how to use OAuth based on your workflow needs
Review detailed compliance standards and security certifications
Please [contact us](https://help.corti.app/) if you need more information about the Corti API
# API Reference
Source: https://docs.corti.ai/assistant/api-reference
Complete reference for all API actions, events, message types, and return values
This page provides a complete reference for all API actions, events, message types, and return values available in the Corti Embedded Assistant API.
## Message Types
### Outgoing Messages (Parent → Embedded)
All messages sent to the embedded app use this structure:
```typescript theme={null}
{
type: 'CORTI_EMBEDDED',
version: 'v1',
action: string,
requestId?: string, // Optional but recommended (see below)
payload?: object
}
```
**`requestId` is optional but strongly recommended**
While the `requestId` field is currently optional for backwards compatibility, omitting it is **deprecated** and will become required in a future API version. Always include a unique `requestId` to enable proper request tracking, error correlation, and response matching.
### Incoming Messages (Embedded → Parent)
#### Responses
```typescript theme={null}
{
type: 'CORTI_EMBEDDED_RESPONSE',
requestId: string,
success: boolean,
data?: any,
error?: {
message: string,
code?: string
}
}
```
#### Events
```typescript theme={null}
{
type: 'CORTI_EMBEDDED_EVENT',
event: string,
payload?: object
}
```
## Request IDs
The `requestId` field is used to correlate API requests with their corresponding responses. While currently optional, **omitting `requestId` is deprecated** and it will become required in a future API version.
### Why Use Request IDs?
* **Response Matching**: Correlate responses with their originating requests, especially important when multiple requests are in flight
* **Error Tracking**: Identify which specific request failed when debugging issues
* **Request Tracing**: Track the full lifecycle of a request through logs and monitoring
* **Timeout Handling**: Implement proper timeout logic by tracking pending requests
### Best Practices
1. **Always include a `requestId`**: Even though it's optional now, always provide a unique identifier
2. **Use unique values**: Generate a new unique ID for each request (e.g., UUID, timestamp-based, or incremental)
3. **Keep them short but unique**: UUIDs work well, or use a pattern like `req-${Date.now()}-${counter}`
4. **Store for debugging**: Log request IDs to help troubleshoot issues in production
### Example
```javascript theme={null}
// Good: Include requestId
iframe.contentWindow.postMessage(
{
type: "CORTI_EMBEDDED",
version: "v1",
action: "auth",
    requestId: `req-${Date.now()}-${Math.random().toString(36).slice(2, 11)}`,
payload: {
/* ... */
},
},
"*",
);
// Deprecated: Omitting requestId (will log a warning)
iframe.contentWindow.postMessage(
{
type: "CORTI_EMBEDDED",
version: "v1",
action: "auth",
// requestId missing - this is deprecated!
payload: {
/* ... */
},
},
"*",
);
```
When `requestId` is omitted, the embedded app will log a console warning and
use an empty string internally. This behavior is provided for backwards
compatibility only and should not be relied upon.
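Putting `requestId` correlation into practice, a parent window can route responses back to their callers and forward events through a small dispatcher. This is a sketch under assumptions: the helper names (`createDispatcher`, `expectResponse`) are illustrative and not part of the Embedded API.

```javascript
// Illustrative parent-side router: matches CORTI_EMBEDDED_RESPONSE
// messages to pending requests by requestId and forwards events.
function createDispatcher(onEvent) {
  const pending = new Map(); // requestId -> callback(error, data)

  // Call before posting a request with the same requestId.
  function expectResponse(requestId, callback) {
    pending.set(requestId, callback);
  }

  function handleMessage(message) {
    if (message.type === "CORTI_EMBEDDED_RESPONSE") {
      const callback = pending.get(message.requestId);
      if (!callback) return; // unknown or already-settled requestId
      pending.delete(message.requestId);
      callback(message.success ? null : message.error, message.data);
    } else if (message.type === "CORTI_EMBEDDED_EVENT") {
      onEvent(message.event, message.payload);
    }
  }

  return { expectResponse, handleMessage };
}
```

In the browser you would feed it from a message listener after verifying the sender's origin, e.g. `window.addEventListener("message", (e) => dispatcher.handleMessage(e.data));`.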
## Error Codes
All API actions may return errors with the following structure:
```typescript theme={null}
{
message: string,
code: 'UNAUTHORIZED' | 'NOT_READY' | 'NOT_FOUND' | 'INVALID_PAYLOAD' | 'INTERNAL_ERROR',
details?: unknown
}
```
| Code | Description | Common Causes |
| ----------------- | -------------------------------------------- | --------------------------------------------------------------- |
| `UNAUTHORIZED` | User is not authenticated or session expired | Must call `auth` first, or session has expired |
| `NOT_READY` | Required precondition not met | Missing interaction, not in recording session, etc. |
| `NOT_FOUND` | Requested resource does not exist | Invalid interaction ID, user not found |
| `INVALID_PAYLOAD` | Request payload validation failed | Missing required fields, invalid formats, constraint violations |
| `INTERNAL_ERROR` | Unexpected server or client error | Retry the request or contact support |
***
## Actions
### auth
Authenticate the user session with the embedded app. All payload properties are required and can be found in the response from the authentication service. For convenience, you can set the authentication response as the payload directly.
**PostMessage:**
```javascript theme={null}
iframe.contentWindow.postMessage(
{
type: "CORTI_EMBEDDED",
version: "v1",
action: "auth",
requestId: "unique-id",
payload: {
access_token: string,
refresh_token: string,
id_token: string,
token_type: string,
},
},
"*",
);
```
**Window API:**
```javascript theme={null}
const api = window.CortiEmbedded.v1;
const user = await api.auth({
access_token: string,
refresh_token: string,
id_token: string,
token_type: string,
});
```
**Prerequisites:** None
**Input Validation:**
* All fields (`access_token`, `refresh_token`, `id_token`, `token_type`) are required
* Tokens must be valid JWT strings
**Possible Errors:**
* `INVALID_PAYLOAD`: Missing required authentication fields
* `UNAUTHORIZED`: Invalid or expired tokens
* `INTERNAL_ERROR`: Authentication service unavailable
**Returns:**
```typescript theme={null}
{
id: string,
email: string
}
```
***
### configure
Configure the Assistant interface for the current session, including toggling UI features, visual appearance, and locale settings.
**PostMessage:**
```javascript theme={null}
iframe.contentWindow.postMessage({
type: "CORTI_EMBEDDED",
version: "v1",
action: "configure",
payload: {
features: {
interactionTitle: boolean,
aiChat: boolean,
documentFeedback: boolean,
navigation: boolean,
virtualMode: boolean,
syncDocumentAction: boolean,
templateEditor: boolean,
},
appearance: {
primaryColor: string | null,
},
locale: {
interfaceLanguage: string | null,
dictationLanguage: string,
      overrides: Record<string, string>,
},
},
}, "*");
```
**Window API:**
```javascript theme={null}
const api = window.CortiEmbedded.v1;
const config = await api.configure({
features: {
interactionTitle: boolean,
aiChat: boolean,
documentFeedback: boolean,
navigation: boolean,
virtualMode: boolean,
syncDocumentAction: boolean,
templateEditor: boolean,
},
appearance: {
primaryColor: string | null,
},
locale: {
interfaceLanguage: string | null,
dictationLanguage: string,
    overrides: Record<string, string>,
},
});
```
**Prerequisites:** None (can be called anytime)
**Input Validation:**
* `appearance.primaryColor`: Must be valid CSS color (hex, rgb, hsl) or `null`
* `locale.interfaceLanguage`: Must be one of the supported interface languages (see Configuration Reference) or `null`
* `locale.dictationLanguage`: Must be one of the supported dictation languages (see Configuration Reference)
* `locale.overrides`: Keys must match known override strings (see Configuration Reference)
* `features.*`: Must be boolean values
* `network.apiUrl`: Must be valid HTTPS URL if provided
**Possible Errors:**
* `INVALID_PAYLOAD`: Invalid color format, unsupported language code, non-boolean feature flags
* `INTERNAL_ERROR`: Failed to apply configuration
**Returns:**
```typescript theme={null}
{
features: {
interactionTitle: boolean,
aiChat: boolean,
documentFeedback: boolean,
navigation: boolean,
virtualMode: boolean,
syncDocumentAction: boolean,
templateEditor: boolean,
},
appearance: {
primaryColor: string | null,
},
locale: {
interfaceLanguage: string | null,
dictationLanguage: string,
    overrides: Record<string, string>,
}
}
```
**Defaults:**
* `features.interactionTitle: true`
* `features.aiChat: true`
* `features.documentFeedback: true`
* `features.navigation: false`
* `features.virtualMode: true`
* `features.syncDocumentAction: false`
* `features.templateEditor: true`
* `appearance.primaryColor: null` (uses built-in styles)
* `locale.interfaceLanguage: null` (uses user's default or browser setting)
* `locale.dictationLanguage: "en"`
* `locale.overrides: {}`
**Note:** The command can be invoked with a partial object, and only the specified properties will take effect. Returns the full currently applied configuration object.
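This partial-update behavior can be illustrated with a per-section merge over the defaults listed above. The merge below is an assumption for illustration only; the embedded app's actual merge logic is authoritative.

```javascript
// Defaults as documented above; the merge is an illustrative
// shallow-per-section merge, not the embedded app's implementation.
const defaults = {
  features: {
    interactionTitle: true,
    aiChat: true,
    documentFeedback: true,
    navigation: false,
    virtualMode: true,
    syncDocumentAction: false,
    templateEditor: true,
  },
  appearance: { primaryColor: null },
  locale: { interfaceLanguage: null, dictationLanguage: "en", overrides: {} },
};

function applyPartialConfig(current, partial) {
  const next = { ...current };
  for (const section of ["features", "appearance", "locale"]) {
    if (partial[section]) {
      next[section] = { ...current[section], ...partial[section] };
    }
  }
  return next;
}
```

For example, `applyPartialConfig(defaults, { features: { navigation: true } })` flips only `navigation` and leaves every other default intact.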
***
### createInteraction
Create a new interaction session.
**PostMessage:**
```javascript theme={null}
iframe.contentWindow.postMessage(
{
type: "CORTI_EMBEDDED",
version: "v1",
action: "createInteraction",
requestId: "unique-id",
payload: {
assignedUserId: string | null,
encounter: {
identifier: string,
status: string,
type: string,
period: {
startedAt: string,
},
title: string,
},
},
},
"*",
);
```
**Window API:**
```javascript theme={null}
const api = window.CortiEmbedded.v1;
const interaction = await api.createInteraction({
assignedUserId: null,
encounter: {
identifier: `encounter-${Date.now()}`,
status: "planned",
type: "first_consultation",
period: {
startedAt: new Date().toISOString(),
},
title: "Initial Consultation",
},
});
```
**Prerequisites:** User must be authenticated (`auth` must be called first)
**Input Validation:**
* `encounter.identifier`: Required, non-empty string
* `encounter.status`: Must be one of: `"planned"`, `"in-progress"`, `"completed"`, `"cancelled"`
* `encounter.type`: Must be one of: `"ambulatory"`, `"emergency"`, `"field"`, `"first_consultation"`, `"home_health"`, `"inpatient_encounter"`, `"short_stay"`, `"virtual"`
* `encounter.period.startedAt`: Required, must be valid ISO 8601 datetime string
* `encounter.title`: Optional string
* `assignedUserId`: Optional string or `null`
* `patient`: Optional object with patient demographic information
* `patient.identifier`: Optional string (patient identifier from external system)
* `patient.name`: Optional string (patient's full name)
* `patient.gender`: Optional, must be one of: `"male"`, `"female"`, `"other"`, `"unknown"`
* `patient.birthDate`: Optional string or `null` (ISO 8601 date format)
* `patient.pronouns`: Optional string (preferred pronouns)
**Possible Errors:**
* `UNAUTHORIZED`: User not authenticated, call `auth` first
* `INVALID_PAYLOAD`: Missing required fields, invalid encounter status/type, invalid date format
* `INTERNAL_ERROR`: Failed to create interaction
**Returns:**
```typescript theme={null}
{
id: string,
createdAt: string,
status?: string
}
```
***
### addFacts
Add contextual facts to the current interaction.
**PostMessage:**
```javascript theme={null}
iframe.contentWindow.postMessage({
type: 'CORTI_EMBEDDED',
version: 'v1',
action: 'addFacts',
requestId: 'unique-id',
payload: {
facts: Array<{
text: string,
group: string
}>
}
}, '*');
```
**Window API:**
```javascript theme={null}
const api = window.CortiEmbedded.v1;
await api.addFacts({
facts: [
{ text: "Chest pain", group: "other" },
{ text: "Shortness of breath", group: "other" },
{ text: "Fatigue", group: "other" },
],
});
```
**Prerequisites:**
* User must be authenticated
* An interaction must exist (call `createInteraction` first)
**Input Validation:**
* `facts`: Required, must be non-empty array
* `facts[].text`: Required, non-empty string
* `facts[].group`: Optional string, defaults to `"other"` if not provided
* `facts[].source`: Optional, defaults to `"user"`
**Possible Errors:**
* `NOT_READY`: No active interaction, call `createInteraction` first
* `UNAUTHORIZED`: User not authenticated
* `INVALID_PAYLOAD`: Empty facts array, missing text field
* `INTERNAL_ERROR`: Failed to save facts
**Returns:** `void`
***
### configureSession
Set session-level defaults and preferences. **This method will not overwrite defaults set by the user.**
`defaultTemplateKey` and `defaultOutputLanguage` must be provided together. If
one is missing, the method returns `INVALID_PAYLOAD`.
`defaultOutputLanguage` has no effect on its own. It is only used together
with `defaultTemplateKey` to resolve the template.
`getTemplates` returns language-specific IDs (for example, `"corti-soap-en"`).
For `configureSession`, split this into: `defaultTemplateKey: "corti-soap"`
and `defaultOutputLanguage: "en"`.
**PostMessage:**
```javascript theme={null}
iframe.contentWindow.postMessage(
{
type: "CORTI_EMBEDDED",
version: "v1",
action: "configureSession",
requestId: "unique-id",
payload: {
defaultLanguage: string,
defaultOutputLanguage: string,
defaultTemplateKey: string,
defaultMode: "virtual" | "in-person",
},
},
"*",
);
```
**Window API:**
```javascript theme={null}
const api = window.CortiEmbedded.v1;
await api.configureSession({
defaultLanguage: "en",
defaultOutputLanguage: "en",
defaultTemplateKey: "corti-soap",
defaultMode: "virtual",
});
```
```javascript theme={null}
// If getTemplates().templates[i].id is "corti-soap-en"
await api.configureSession({
defaultTemplateKey: "corti-soap",
defaultOutputLanguage: "en",
});
```
**Prerequisites:** User must be authenticated
**Input Validation:**
* `defaultLanguage`: Must be valid language code (e.g., `"en-US"`, `"da-DK"`)
* `defaultOutputLanguage`: Must be valid language code
* `defaultTemplateKey`: Must be a language-agnostic template identifier
* `defaultMode`: Must be either `"virtual"` or `"in-person"`
* All fields are optional, but if either `defaultTemplateKey` or `defaultOutputLanguage` is provided, both must be provided together
**Possible Errors:**
* `UNAUTHORIZED`: User not authenticated
* `INVALID_PAYLOAD`: Invalid payload (including when only one of `defaultTemplateKey` or `defaultOutputLanguage` is provided)
* `NOT_FOUND`: No template matches `defaultTemplateKey` + `defaultOutputLanguage`. Call `getTemplates` first to verify the template exists for the selected language.
* `INTERNAL_ERROR`: Failed to update session settings
**Returns:** `void`
***
### navigate
Navigate to a specific path within the embedded app.
**PostMessage:**
```javascript theme={null}
iframe.contentWindow.postMessage(
{
type: "CORTI_EMBEDDED",
version: "v1",
action: "navigate",
requestId: "unique-id",
payload: {
path: string,
},
},
"*",
);
```
**Window API:**
```javascript theme={null}
const api = window.CortiEmbedded.v1;
await api.navigate({
path: "/session/interaction-123",
});
```
**Prerequisites:** None
**Input Validation:**
* `path`: Required, must start with `/`
* `path`: Cannot be a full URL (no `http://`, `https://`, or `//` protocol)
* `path`: Must be a relative path within the application
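These rules can be mirrored in a hypothetical client-side pre-check (the function name is illustrative; the embedded app still validates the path itself):

```javascript
// Illustrative pre-check mirroring the navigate path rules above.
function isValidEmbeddedPath(path) {
  return (
    typeof path === "string" &&
    path.startsWith("/") &&  // must be relative to the app root
    !path.startsWith("//")   // reject protocol-relative URLs
  );
}
```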
**Valid Path Patterns:**
* `/` – Root/home
* `/session/{interactionId}` – Specific session
* `/templates` – Template browser
* `/settings/*` – Settings pages
**Possible Errors:**
* `INVALID_PAYLOAD`:
* Path does not start with `/` → `"Path must be a relative path starting with '/'"`
* Path is a full URL → `"Path must be a relative path, not a full URL"`
* `INTERNAL_ERROR`: Navigation failed
**Returns:** `void`
**Navigable URLs:**
* `/` – start a new session
* `/session/{interactionId}` - go to an existing session identified by its interaction ID
* `/templates` - browse and create templates
* `/settings/preferences` - edit defaults like languages and default session settings
* `/settings/input` - edit dictation input settings
* `/settings/account` - edit general account settings
* `/settings/archive` - view items in and restore from archive (only relevant if navigation is visible)
***
### setCredentials
Change the credentials of the currently authenticated user. Can be used to set credentials for a user without a password (if only authenticated via identity provider) or to change the password of a user with an existing password.
**Password Policy:**
* At least 1 uppercase letter, 1 lowercase letter, 1 number, and 1 special character
* At least 8 characters long
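A client-side pre-check of this policy can give faster feedback before calling `setCredentials`. This sketch treats any non-alphanumeric character as "special", which is an assumption; the server-side policy remains authoritative.

```javascript
// Illustrative pre-check of the documented password policy.
// Assumption: "special character" means any non-alphanumeric character.
function meetsPasswordPolicy(password) {
  return (
    password.length >= 8 &&
    /[A-Z]/.test(password) &&      // at least 1 uppercase letter
    /[a-z]/.test(password) &&      // at least 1 lowercase letter
    /[0-9]/.test(password) &&      // at least 1 number
    /[^A-Za-z0-9]/.test(password)  // at least 1 special character
  );
}
```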
**PostMessage:**
```javascript theme={null}
iframe.contentWindow.postMessage(
{
type: "CORTI_EMBEDDED",
version: "v1",
action: "setCredentials",
requestId: "unique-id",
payload: {
password: string,
},
},
"*",
);
```
**Window API:**
```javascript theme={null}
const api = window.CortiEmbedded.v1;
await api.setCredentials({ password: "YOUR_NEW_PASSWORD" });
```
**Prerequisites:** User must be authenticated
**Input Validation:**
* `password`: Required string
* Password must meet policy requirements:
* Minimum 8 characters
* At least 1 uppercase letter (A-Z)
* At least 1 lowercase letter (a-z)
* At least 1 number (0-9)
* At least 1 special character
**Possible Errors:**
* `UNAUTHORIZED`: User not authenticated
* `INVALID_PAYLOAD`:
* Password too short → `"Password does not meet requirements"`
* Missing required character types → `"Password does not meet requirements"`
* `INTERNAL_ERROR`: Failed to update password
**Returns:** `void`
***
### startRecording
Start recording within the embedded session.
**PostMessage:**
```javascript theme={null}
iframe.contentWindow.postMessage(
{
type: "CORTI_EMBEDDED",
version: "v1",
action: "startRecording",
requestId: "unique-id",
},
"*",
);
```
**Window API:**
```javascript theme={null}
const api = window.CortiEmbedded.v1;
await api.startRecording();
```
**Prerequisites:**
* User must be authenticated (call `auth`)
* An interaction must exist (call `createInteraction`)
* Application must be in an interview/session context
**Input Validation:** None (no payload required)
**Possible Errors:**
* `NOT_READY`:
* `"Must be in an interview to start recording"` – Not in interview context
* `"No interaction ID found. Call createInteraction first"` – No active interaction
* `UNAUTHORIZED`:
* `"Not authenticated. Call auth first"` – No valid session
* `NOT_FOUND`: `"Interaction not found"` – Interaction ID is invalid
* `INTERNAL_ERROR`: Failed to connect to recording service (WebSocket error)
**Returns:** `void`
***
### stopRecording
Stop recording within the embedded session.
**PostMessage:**
```javascript theme={null}
iframe.contentWindow.postMessage(
{
type: "CORTI_EMBEDDED",
version: "v1",
action: "stopRecording",
requestId: "unique-id",
},
"*",
);
```
**Window API:**
```javascript theme={null}
const api = window.CortiEmbedded.v1;
await api.stopRecording();
```
**Prerequisites:** Must be actively recording (call `startRecording` first)
**Input Validation:** None (no payload required)
**Possible Errors:**
* `NOT_READY`: `"Must be in an interview to stop recording"` – Not in active recording session
* `INTERNAL_ERROR`: Failed to disconnect from recording service
**Returns:** `void`
***
### getStatus
Request information about the current state of the application, including authentication status, current user, current URL and interaction details.
**PostMessage:**
```javascript theme={null}
iframe.contentWindow.postMessage(
{
type: "CORTI_EMBEDDED",
version: "v1",
action: "getStatus",
requestId: "unique-id",
},
"*",
);
```
**Window API:**
```javascript theme={null}
const api = window.CortiEmbedded.v1;
const status = await api.getStatus();
```
**Prerequisites:** None (always available)
**Input Validation:** None (no payload required)
**Notes:**
* This action does not throw structured error codes
* Failures are very rare; this action almost always succeeds
* May return partial data if specific queries fail (e.g., `interaction` will be `null` if fetch fails)
**Returns:**
```typescript theme={null}
{
auth: {
isAuthenticated: boolean,
user: {
id: string,
email: string
} | null
},
currentUrl: string,
interaction: {
id: string,
title: string,
state: "planned" | "ongoing" | "paused" | "disconnected" | "ending" | "parsing" | "ended",
startedAt: string,
endedAt: string | null,
endsAt: string | null,
transcripts: Array<{
utterances: Array<{
id: string,
start: number,
duration: number,
text: string,
isFinal: boolean,
participantId: string | undefined,
}>,
participants: Array<{
id: string,
channel: number,
role: "agent" | "patient" | "other" | "multiple"
}>,
isMultiChannel: boolean
}>,
documents: Array<{
id: string,
name: string,
templateRef: string,
isStream: boolean,
sections: Array<{
key: string,
name: string,
text: string,
sort: number,
createdAt: string,
updatedAt: string,
markdown: string | undefined | null,
htmlText: string | undefined,
plainText: string | undefined,
}>,
outputLanguage: string,
}>,
facts: Array<{
id: string,
text: string,
group: string,
isDiscarded: boolean,
source: "core" | "system" | "user",
createdAt: string | undefined,
updatedAt: string,
isNew: boolean,
isDraft: boolean | undefined,
}>,
websocketUrl: string
} | null
}
```
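As one example of consuming this shape, a document's sections can be flattened into plain text using their `sort` order. The helper name is illustrative; since `plainText` may be undefined, the sketch falls back to `text`.

```javascript
// Illustrative: flatten a getStatus() document into plain text,
// ordering sections by their `sort` value.
function documentToText(doc) {
  return [...doc.sections]
    .sort((a, b) => a.sort - b.sort)
    .map((section) => `${section.name}\n${section.plainText ?? section.text}`)
    .join("\n\n");
}
```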
***
### getTemplates
Retrieve all document templates available to the authenticated user. This includes both built-in templates provided by Corti and the user's own custom templates.
Use this method before [`configureSession`](#configuresession) to verify the
template and language combination you want to set as default.
**PostMessage:**
```javascript theme={null}
iframe.contentWindow.postMessage(
{
type: "CORTI_EMBEDDED",
version: "v1",
action: "getTemplates",
requestId: "unique-id",
},
"*",
);
```
**Window API:**
```javascript theme={null}
const api = window.CortiEmbedded.v1;
const response = await api.getTemplates();
console.log(response.templates); // Array of available templates
```
**Prerequisites:** User must be authenticated (`auth` must be called first)
**Input Validation:** None (no payload required)
**Possible Errors:**
* `UNAUTHORIZED`: User not authenticated, call `auth` first
* `INTERNAL_ERROR`: Failed to fetch templates from server
**Returns:**
```typescript theme={null}
{
templates: Array<{
id: string; // Language-specific ID (for example, "corti-soap-en")
name: string;
description?: string;
language: {
code: string;
name: string;
locale?: string;
};
sections: Array<{
id: string;
title: string;
}>;
isCustom: boolean;
}>;
}
```
**Example Response:**
```json theme={null}
{
"templates": [
{
"id": "corti-soap-en",
"name": "SOAP Note",
"description": "Standard SOAP format for clinical documentation",
"language": {
"code": "en",
"name": "English",
"locale": "en-US"
},
"sections": [
{ "id": "subjective", "title": "Subjective" },
{ "id": "objective", "title": "Objective" },
{ "id": "assessment", "title": "Assessment" },
{ "id": "plan", "title": "Plan" }
],
"isCustom": false
},
{
"id": "custom-template-123",
"name": "Emergency Department Note",
"description": "Custom template for ED visits",
"language": {
"code": "en",
"name": "English"
},
"sections": [
{ "id": "chief-complaint", "title": "Chief Complaint" },
{ "id": "hpi", "title": "History of Present Illness" },
{ "id": "treatment", "title": "Treatment & Disposition" }
],
"isCustom": true
}
]
}
```
**Use Cases:**
* Display a template picker UI for users to select their preferred documentation template
* Filter templates by language to match the current session settings
* Distinguish between built-in and custom templates using the `isCustom` flag
* Pre-populate template selection based on user preferences or defaults
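For instance, a template picker can group the response by language code. This is a sketch over the response shape shown above; the function name is illustrative.

```javascript
// Illustrative: group getTemplates() results by language code
// for a language-aware template picker UI.
function groupTemplatesByLanguage(templates) {
  const byLanguage = new Map();
  for (const template of templates) {
    const list = byLanguage.get(template.language.code) ?? [];
    list.push(template);
    byLanguage.set(template.language.code, list);
  }
  return byLanguage;
}
```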
***
## Common Workflows
### Initialize and Start Recording
```javascript theme={null}
const api = window.CortiEmbedded.v1;
try {
// 1. Authenticate
const user = await api.auth({
access_token: "...",
refresh_token: "...",
id_token: "...",
token_type: "Bearer",
});
console.log("Authenticated as:", user.email);
// 2. Configure app (optional)
await api.configure({
features: { virtualMode: true },
locale: { dictationLanguage: "en-US" },
});
// 3. Create interaction
const interaction = await api.createInteraction({
encounter: {
identifier: "enc-123",
status: "planned",
type: "ambulatory",
period: { startedAt: new Date().toISOString() },
title: "Patient Visit",
},
});
console.log("Created interaction:", interaction.id);
// 4. Navigate to session page
await api.navigate({
path: `/session/${interaction.id}`,
});
console.log("Navigated to session");
// 5. Start recording
await api.startRecording();
console.log("Recording started");
} catch (error) {
console.error("Workflow failed:", error.message, error.code);
}
```
### Error Handling Best Practices
```javascript theme={null}
const api = window.CortiEmbedded.v1;
async function startRecordingWithRetry() {
try {
await api.startRecording();
} catch (error) {
switch (error.code) {
case "UNAUTHORIZED":
// Re-authenticate user
console.log("Session expired, re-authenticating...");
await reauthenticate();
await api.startRecording(); // Retry
break;
case "NOT_READY":
// Ensure prerequisites are met
if (error.message.includes("createInteraction")) {
console.log("Creating interaction first...");
await api.createInteraction({
encounter: {
identifier: `enc-${Date.now()}`,
status: "planned",
type: "ambulatory",
period: { startedAt: new Date().toISOString() },
},
});
await api.startRecording(); // Retry
} else if (error.message.includes("interview")) {
console.error("Must be in interview context");
// Navigate user to session page or show error
}
break;
case "INTERNAL_ERROR":
// Retry once after delay
console.log("Internal error, retrying...");
await new Promise((resolve) => setTimeout(resolve, 2000));
try {
await api.startRecording();
} catch (retryError) {
console.error("Retry failed:", retryError.message);
// Show error to user
}
break;
default:
console.error("Unexpected error:", error);
}
}
}
```
### Complete Session Lifecycle
```javascript theme={null}
const api = window.CortiEmbedded.v1;
let currentInteractionId = null;
async function initializeSession() {
// Authenticate
await api.auth({
/* tokens */
});
// Configure
await api.configure({
features: {
virtualMode: true,
aiChat: true,
},
});
// Set defaults
await api.configureSession({
defaultLanguage: "en-US",
defaultMode: "virtual",
});
}
async function startNewInteraction(encounterData) {
const interaction = await api.createInteraction({
encounter: encounterData,
});
currentInteractionId = interaction.id;
// Add initial context
await api.addFacts({
facts: [{ text: "Chief complaint: Headache", group: "symptoms" }],
});
await api.startRecording();
return interaction;
}
async function endInteraction() {
await api.stopRecording();
// Get final status
const status = await api.getStatus();
return status.interaction;
}
// Usage
await initializeSession();
const interaction = await startNewInteraction({
identifier: "enc-001",
status: "planned",
type: "ambulatory",
period: { startedAt: new Date().toISOString() },
title: "Follow-up Visit",
});
// ... conduct session ...
const finalInteraction = await endInteraction();
console.log("Session ended:", finalInteraction);
```
## Configuration Reference
### Available Interface Languages
Updated as of February 2026
| Language code | Language |
| ------------- | ----------------- |
| `en` | English |
| `de-DE` | German |
| `fr-FR` | French |
| `it-IT` | Italian |
| `sv-SE` | Swedish |
| `da-DK` | Danish |
| `nb-NO` | Norwegian Bokmål |
| `nn-NO` | Norwegian Nynorsk |
### Available Dictation Languages
Updated as of February 2026
#### EU
| Language code | Language |
| ------------- | --------------- |
| `en` | English |
| `en-GB` | British English |
| `de` | German |
| `fr` | French |
| `sv` | Swedish |
| `da` | Danish |
| `nl` | Dutch |
| `no` | Norwegian |
#### US
| Language code | Language |
| ------------- | -------- |
| `en` | English |
### Known Strings to Override
Currently, only the following keys are exposed for override:
| Key | Default value | Purpose |
| --------------------------------------- | ------------------------ | ------------------------------------------------------------------- |
| `interview.document.syncDocument.label` | *"Synchronize document"* | The button text for the *"synchronize document"* button if enabled. |
## Appearance Configuration
Disclaimer - always ensure WCAG 2.2 AA conformance
Corti Assistant's default theme has been evaluated against WCAG 2.2 Level AA and meets applicable success criteria in our supported browsers.
This conformance claim applies only to the default configuration. Customer changes (e.g., color palettes, CSS overrides, third-party widgets, or content) are outside the scope of this claim. Customers are responsible for ensuring their customizations continue to meet WCAG 2.2 AA (including color contrast and focus visibility).
When supplying a custom accent or theme, Customers must ensure WCAG 2.2 AA conformance, including:
* 1.4.3 Contrast (Minimum): normal text ≥ 4.5:1; large text ≥ 3:1
* 1.4.11 Non-text Contrast: UI boundaries, focus rings, and selected states ≥ 3:1
* 2.4.11 Focus Not Obscured (Minimum): focus indicators remain visible and unobstructed
Corti provides accessible defaults. If you override them, verify contrast for all states (default, hover, active, disabled, focus) and on all backgrounds you use.
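When overriding colors such as `primaryColor`, the 1.4.3 contrast check can be automated with the WCAG relative-luminance formula. A sketch for 6-digit hex colors follows; real palettes also need the non-text and focus checks listed above.

```javascript
// Illustrative WCAG 2.x contrast check for 6-digit hex colors ("#rrggbb").
function relativeLuminance(hex) {
  const [r, g, b] = [1, 3, 5].map((i) => {
    const c = parseInt(hex.slice(i, i + 2), 16) / 255;
    // sRGB transfer function per the WCAG definition of relative luminance
    return c <= 0.03928 ? c / 12.92 : ((c + 0.055) / 1.055) ** 2.4;
  });
  return 0.2126 * r + 0.7152 * g + 0.0722 * b;
}

function contrastRatio(hexA, hexB) {
  const [hi, lo] = [relativeLuminance(hexA), relativeLuminance(hexB)].sort((a, b) => b - a);
  return (hi + 0.05) / (lo + 0.05); // 1:1 (identical) up to 21:1 (black on white)
}
```

Normal text passes 1.4.3 when `contrastRatio(text, background) >= 4.5`; large text when it is at least 3.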
## Related Documentation
* [PostMessage API](/assistant/postmessage-api) - Learn how to use the PostMessage integration method
* [Window API](/assistant/window-api) - Learn how to use the Window API integration method
* [Embedded API Overview](/assistant/embedded-api) - General overview of the embedded API
Please [contact us](https://help.corti.app) for help or questions.
# Authentication for Embedded Users
Source: https://docs.corti.ai/assistant/authentication
Choosing the right OAuth2 flow for Corti Embedded integrations
## Background
OAuth (Open Authorization) is an open-standard framework for access delegation, allowing applications to securely access a user's protected resources without exposing their login credentials. By keeping passwords private and limiting access to sensitive information, OAuth improves both security and access management across web, mobile, and desktop applications. OAuth 2.0, the current and most widely adopted version, expands upon the original protocol to support APIs, mobile apps, and connected devices, offering multiple authorization flows tailored to different application types.
Corti Assistant requires user-based authentication. Client credentials flows and other machine-to-machine authentication methods are NOT supported for embedded Corti Assistant integrations. You must authenticate as an end user, not as an application.
When embedding Corti Assistant in your application (via iFrame/WebView), **it's important to use the right OAuth2 grant type** that supports user-based authentication.
This guide explains the OAuth flows that work with embedded Corti Assistant, focusing on user-based authentication methods that are suitable for interactive scenarios.
***
## Supported OAuth Grant Types for Embedded Corti Assistant
The following OAuth flows support user-based authentication and can be used with embedded Corti Assistant. **Client credentials grant is NOT supported** as it does not provide user context.
### 1. Authorization Code Flow with PKCE (Recommended)
**Best for:** Native apps, single-page apps, or any browser-based integration where a user is present.
**Why:** This flow is secure, interactive, and doesn't require a client secret (ideal for public clients). Proof Key for Code Exchange (PKCE) protects against code interception attacks.
**How it works:**
1. Your app redirects the user to Corti's OAuth2 authorization server.
2. The user logs in and grants permission.
3. Corti redirects back with an authorization code.
4. Your app exchanges the code (with the PKCE verifier) for an access token.
**Key Advantages:**
* Secure and suitable for embedded web apps.
* No client secret is required.
* Enforces user interaction.
This example uses the [Corti JavaScript SDK](https://docs.corti.ai/quickstart/javascript-sdk) (`@corti/sdk`)
```javascript Step 1: Generate Authorization URL (Frontend) [expandable] theme={null}
import { CortiAuth, CortiEnvironment } from "@corti/sdk";
const auth = new CortiAuth({
environment: CortiEnvironment.Eu,
tenantName: "YOUR_TENANT_NAME",
});
// SDK automatically generates code verifier, stores it in localStorage and makes redirect
await auth.authorizePkceUrl({
clientId: "YOUR_CLIENT_ID",
redirectUri: "https://your-app.com/callback"
});
```
```javascript Step 2: Handle the Callback and Exchange Code for Tokens [expandable] theme={null}
// Extract the authorization code from URL parameters
const urlParams = new URLSearchParams(window.location.search);
const code = urlParams.get('code');
const error = urlParams.get('error');
if (error) {
console.error('Authorization failed:', error);
return;
}
if (code) {
// Exchange the authorization code for tokens using SDK
const tokenResponse = await auth.getPkceFlowToken({
clientId: "YOUR_CLIENT_ID",
redirectUri: "https://your-app.com/callback",
code: code,
});
const { accessToken, refreshToken } = tokenResponse;
}
```
This flow requires two stages: generating the code verifier/challenge and handling the token exchange after the redirect.
```javascript Step 1: Redirect User to Authorize [expandable] theme={null}
// Generate code verifier + challenge
const code_verifier = crypto.randomUUID().replace(/-/g, '');
const code_challenge = await crypto.subtle.digest("SHA-256", new TextEncoder().encode(code_verifier));
const base64url = btoa(String.fromCharCode(...new Uint8Array(code_challenge)))
.replace(/\+/g, "-").replace(/\//g, "_").replace(/=+$/, "");
// Store verifier for use after redirect
localStorage.setItem('pkce_verifier', code_verifier);
// Redirect to authorization server
window.location.href = `https://auth.us.corti.app/realms//protocol/openid-connect/auth?response_type=code&client_id=YOUR_CLIENT_ID&redirect_uri=https://yourapp.com/callback&code_challenge=${base64url}&code_challenge_method=S256&scope=openid`;
```
```javascript Step 2: Exchange Code for Token (after redirect) [expandable] theme={null}
const code = new URLSearchParams(window.location.search).get('code');
const code_verifier = localStorage.getItem('pkce_verifier');
const response = await fetch('https://auth.us.corti.app/realms//protocol/openid-connect/token', {
method: 'POST',
headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
body: new URLSearchParams({
grant_type: 'authorization_code',
client_id: 'YOUR_CLIENT_ID',
redirect_uri: 'https://yourapp.com/callback',
code,
code_verifier
})
});
const data = await response.json();
console.log('Access Token:', data.access_token);
```
This example uses standard .NET libraries for OAuth2 PKCE flow
```csharp Step 1: Generate Authorization URL (Frontend) [expandable] theme={null}
using System;
using System.Security.Cryptography;
using System.Text;
using System.Web;
public class PkceAuth
{
private const string ClientId = "YOUR_CLIENT_ID";
private const string RedirectUri = "https://your-app.com/callback";
private const string AuthBaseUrl = "https://auth.us.corti.app/realms//protocol/openid-connect";
public static (string authUrl, string codeVerifier) GenerateAuthorizationUrl()
{
// Generate code verifier (random string)
var codeVerifier = GenerateCodeVerifier();
// Generate code challenge (SHA256 hash, base64url encoded)
var codeChallenge = GenerateCodeChallenge(codeVerifier);
// Store code verifier (e.g., in session or secure storage)
// In a real app, store this securely for use after redirect
HttpContext.Current.Session["pkce_verifier"] = codeVerifier;
// Build authorization URL
var authUrl = $"{AuthBaseUrl}/auth?" +
$"response_type=code&" +
$"client_id={HttpUtility.UrlEncode(ClientId)}&" +
$"redirect_uri={HttpUtility.UrlEncode(RedirectUri)}&" +
$"code_challenge={codeChallenge}&" +
$"code_challenge_method=S256&" +
$"scope=openid";
return (authUrl, codeVerifier);
}
private static string GenerateCodeVerifier()
{
var bytes = new byte[32];
using (var rng = RandomNumberGenerator.Create())
{
rng.GetBytes(bytes);
}
return Convert.ToBase64String(bytes)
.TrimEnd('=')
.Replace('+', '-')
.Replace('/', '_');
}
private static string GenerateCodeChallenge(string codeVerifier)
{
using (var sha256 = SHA256.Create())
{
var challengeBytes = sha256.ComputeHash(Encoding.UTF8.GetBytes(codeVerifier));
return Convert.ToBase64String(challengeBytes)
.TrimEnd('=')
.Replace('+', '-')
.Replace('/', '_');
}
}
}
```
```csharp Step 2: Exchange Code for Token (after redirect) [expandable] theme={null}
using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;
using System.Web;
using System.Web.Mvc;
public class TokenExchange
{
private const string ClientId = "YOUR_CLIENT_ID";
private const string RedirectUri = "https://your-app.com/callback";
private const string TokenUrl = "https://auth.us.corti.app/realms//protocol/openid-connect/token";
public static async Task<TokenResponse> ExchangeCodeForTokenAsync(string code, string codeVerifier)
{
using var httpClient = new HttpClient();
var formData = new List<KeyValuePair<string, string>>
{
new KeyValuePair<string, string>("grant_type", "authorization_code"),
new KeyValuePair<string, string>("client_id", ClientId),
new KeyValuePair<string, string>("redirect_uri", RedirectUri),
new KeyValuePair<string, string>("code", code),
new KeyValuePair<string, string>("code_verifier", codeVerifier)
};
var content = new FormUrlEncodedContent(formData);
var response = await httpClient.PostAsync(TokenUrl, content);
response.EnsureSuccessStatusCode();
var responseBody = await response.Content.ReadAsStringAsync();
var tokenData = JsonSerializer.Deserialize<JsonElement>(responseBody);
return new TokenResponse
{
AccessToken = tokenData.GetProperty("access_token").GetString(),
RefreshToken = tokenData.GetProperty("refresh_token").GetString(),
ExpiresIn = tokenData.GetProperty("expires_in").GetInt32()
};
}
}
public class TokenResponse
{
public string AccessToken { get; set; }
public string RefreshToken { get; set; }
public int ExpiresIn { get; set; }
}
// Usage in callback handler (e.g., ASP.NET MVC Controller)
public class AuthController : Controller
{
public async Task<ActionResult> Callback(string code)
{
if (string.IsNullOrEmpty(code))
{
return View("Error");
}
// Retrieve stored code verifier
var codeVerifier = Session["pkce_verifier"] as string;
if (string.IsNullOrEmpty(codeVerifier))
{
return View("Error");
}
var tokenResponse = await TokenExchange.ExchangeCodeForTokenAsync(code, codeVerifier);
// Store tokens securely (e.g., in session or secure cookie)
Session["access_token"] = tokenResponse.AccessToken;
return RedirectToAction("Index", "Home");
}
}
```
This flow is the gold standard for Corti Assistant embedded use cases.
***
### 2. Authorization Code Flow (without PKCE)
**Best for:** Server-side web applications embedding Corti Assistant where the client secret can be safely stored on the backend.
**Why:** Similar to PKCE, but requires storing a client secret — which is *not* safe in public or browser-based clients.
**Key Concerns:**
* Unsafe for apps where the frontend or iFrame can be inspected.
* Only acceptable in secure backend environments where the client secret can be protected.
This example uses the [Corti JavaScript SDK](https://docs.corti.ai/quickstart/javascript-sdk) (`@corti/sdk`)
```javascript Step 1: Create Authorization URL [expandable] theme={null}
import { CortiAuth, CortiEnvironment } from "@corti/sdk";
const auth = new CortiAuth({
environment: CortiEnvironment.Eu,
tenantName: "YOUR_TENANT_NAME",
});
// Generate authorization URL
await auth.authorizeURL({
clientId: "YOUR_CLIENT_ID",
redirectUri: "https://your-app.com/callback",
});
```
```javascript Step 2: Handle the Callback and Exchange Code for Tokens [expandable] theme={null}
// Extract the authorization code from URL parameters
const urlParams = new URLSearchParams(window.location.search);
const code = urlParams.get('code');
const error = urlParams.get('error');
if (error) {
console.error('Authorization failed:', error);
return;
}
if (code) {
// Send the authorization code to your backend, which exchanges it for tokens
const response = await fetch('http://localhost:3000/callback', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
},
body: JSON.stringify({ code }),
});
if (!response.ok) {
throw new Error('Authentication failed');
}
const tokens = await response.json();
}
```
```javascript Step 3: Exchange Code for Access Token (Backend) [expandable] theme={null}
// server.js or routes/callback.js
import express from "express";
import { CortiAuth } from "@corti/sdk";
const app = express();
app.use(express.json()); // parse the JSON body sent from the frontend in Step 2
const CLIENT_ID = "YOUR_CLIENT_ID";
const CLIENT_SECRET = "YOUR_CLIENT_SECRET";
const TENANT_NAME = "YOUR_TENANT_NAME";
const ENVIRONMENT = "YOUR_ENVIRONMENT";
const REDIRECT_URI = "https://yourapp.com/callback"; // must match OAuth settings
app.post("/callback", async (req, res) => {
const authCode = req.body.code;
if (!authCode) {
return res.status(400).send("Missing authorization code");
}
try {
// Initialize CortiAuth SDK client
const auth = new CortiAuth({
environment: ENVIRONMENT,
tenantName: TENANT_NAME,
});
// Exchange code for token
const tokens = await auth.getCodeFlowToken({
clientId: CLIENT_ID,
clientSecret: CLIENT_SECRET,
redirectUri: REDIRECT_URI,
code: authCode,
});
// Example "do something": simply log the access token
console.log("Access Token:", tokens.accessToken);
// Return the tokens to the frontend (example; in production, prefer a server-side session)
res.json(tokens);
} catch (err) {
console.error("OAuth error:", err);
return res.status(500).send("Failed to exchange authorization code");
}
});
app.listen(3000, () => {
console.log("Server running at http://localhost:3000");
});
```
This version assumes:
* Your app has a **frontend (e.g., React)** that initiates the login.
* Your app has a **backend (e.g., Node.js + Express)** that securely stores the `client_secret` and handles the token exchange.
Use this only if your backend can securely store the client secret (e.g. not in a browser or mobile app). Ideal for server-rendered or hybrid web apps.
```javascript Step 1: Frontend – Redirect the User to Log In [expandable] theme={null}
// login.tsx or similar
const clientId = 'YOUR_CLIENT_ID';
const redirectUri = 'https://yourapp.com/callback'; // Must match what's registered in Corti OAuth
const login = () => {
const authUrl = new URL('https://auth.us.corti.app/realms//protocol/openid-connect/auth');
authUrl.searchParams.set('response_type', 'code');
authUrl.searchParams.set('client_id', clientId);
authUrl.searchParams.set('redirect_uri', redirectUri);
authUrl.searchParams.set('scope', 'openid profile');
window.location.href = authUrl.toString(); // Send user to Corti login page
};
```
```bash Step 2: Corti Redirects Back with a Code theme={null}
https://yourapp.com/callback?code=abc123xyz
```
```javascript Step 3: Backend – Exchange Code for Access Token [expandable] theme={null}
// server.js or routes/callback.js
const express = require('express');
const fetch = require('node-fetch');
const app = express();
const CLIENT_ID = 'YOUR_CLIENT_ID';
const CLIENT_SECRET = 'YOUR_CLIENT_SECRET';
const REDIRECT_URI = 'https://yourapp.com/callback'; // Must match Step 1
app.get('/callback', async (req, res) => {
const authCode = req.query.code;
if (!authCode) {
return res.status(400).send('Missing authorization code');
}
// Exchange code for access token
const tokenResponse = await fetch('https://auth.us.corti.app/realms//protocol/openid-connect/token', {
method: 'POST',
headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
body: new URLSearchParams({
grant_type: 'authorization_code',
code: authCode,
client_id: CLIENT_ID,
client_secret: CLIENT_SECRET,
redirect_uri: REDIRECT_URI
})
});
const tokenData = await tokenResponse.json();
if (tokenData.error) {
return res.status(500).send(`Token error: ${tokenData.error_description}`);
}
// Do something useful with the token, like create a session
console.log('Access Token:', tokenData.access_token);
// Optional: send token data to frontend or store in session
res.redirect(`/app?token=${tokenData.access_token}`);
});
app.listen(3000, () => console.log('App listening on http://localhost:3000'));
```
This example uses standard .NET libraries for OAuth2 authorization code flow
```csharp Step 1: Frontend – Redirect the User to Log In [expandable] theme={null}
using System;
using System.Web;
public class AuthHelper
{
private const string ClientId = "YOUR_CLIENT_ID";
private const string RedirectUri = "https://yourapp.com/callback";
private const string AuthBaseUrl = "https://auth.us.corti.app/realms//protocol/openid-connect";
public static string GetAuthorizationUrl()
{
var authUrl = $"{AuthBaseUrl}/auth?" +
$"response_type=code&" +
$"client_id={HttpUtility.UrlEncode(ClientId)}&" +
$"redirect_uri={HttpUtility.UrlEncode(RedirectUri)}&" +
$"scope=openid profile";
return authUrl;
}
}
// Usage in Razor view or controller
// Response.Redirect(AuthHelper.GetAuthorizationUrl());
```
```csharp Step 2: Corti Redirects Back with a Code theme={null}
https://yourapp.com/callback?code=abc123xyz
```
```csharp Step 3: Backend – Exchange Code for Access Token [expandable] theme={null}
using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;
using System.Web.Mvc;
public class TokenExchange
{
private const string ClientId = "YOUR_CLIENT_ID";
private const string ClientSecret = "YOUR_CLIENT_SECRET";
private const string RedirectUri = "https://yourapp.com/callback";
private const string TokenUrl = "https://auth.us.corti.app/realms//protocol/openid-connect/token";
public static async Task<TokenResponse> ExchangeCodeForTokenAsync(string code)
{
using var httpClient = new HttpClient();
var formData = new List<KeyValuePair<string, string>>
{
new KeyValuePair<string, string>("grant_type", "authorization_code"),
new KeyValuePair<string, string>("code", code),
new KeyValuePair<string, string>("client_id", ClientId),
new KeyValuePair<string, string>("client_secret", ClientSecret),
new KeyValuePair<string, string>("redirect_uri", RedirectUri)
};
var content = new FormUrlEncodedContent(formData);
var response = await httpClient.PostAsync(TokenUrl, content);
if (!response.IsSuccessStatusCode)
{
var errorBody = await response.Content.ReadAsStringAsync();
throw new Exception($"Token exchange failed: {errorBody}");
}
var responseBody = await response.Content.ReadAsStringAsync();
var tokenData = JsonSerializer.Deserialize<JsonElement>(responseBody);
if (tokenData.TryGetProperty("error", out var error))
{
var errorDescription = tokenData.TryGetProperty("error_description", out var desc)
? desc.GetString()
: "Unknown error";
throw new Exception($"Token error: {errorDescription}");
}
return new TokenResponse
{
AccessToken = tokenData.GetProperty("access_token").GetString(),
RefreshToken = tokenData.TryGetProperty("refresh_token", out var refresh)
? refresh.GetString()
: null,
ExpiresIn = tokenData.GetProperty("expires_in").GetInt32()
};
}
}
public class TokenResponse
{
public string AccessToken { get; set; }
public string RefreshToken { get; set; }
public int ExpiresIn { get; set; }
}
// Usage in ASP.NET MVC Controller
public class AuthController : Controller
{
public async Task<ActionResult> Callback(string code)
{
if (string.IsNullOrEmpty(code))
{
return new HttpStatusCodeResult(400, "Missing authorization code");
}
try
{
var tokenResponse = await TokenExchange.ExchangeCodeForTokenAsync(code);
// Store tokens securely (e.g., in session or secure cookie)
Session["access_token"] = tokenResponse.AccessToken;
// Redirect to app with token
return RedirectToAction("Index", "Home");
}
catch (Exception ex)
{
return new HttpStatusCodeResult(500, $"Token error: {ex.Message}");
}
}
}
```
Summary of the flow:
| Step | Component | Description |
| :--- | :-------- | :-------------------------------------------------------------------------------------------------- |
| 1 | Frontend | Redirects user to Corti login |
| 2 | Corti | Redirects back to your app with `code` |
| 3 | Backend | Exchanges code + client secret for tokens; responds with session/token and redirects to frontend |
Requirements:
* Must use **HTTPS** for `redirect_uri`
* Your `client_secret` **must not** be exposed to the frontend
* The code returned is **valid for one use and short-lived**
Use this flow only when the token exchange happens entirely server-to-server.
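Both code-flow variants also return a `refresh_token`, and access tokens are short-lived, so your backend will typically refresh them using the standard OAuth2 `refresh_token` grant against the same token endpoint. A minimal sketch follows; the URL mirrors the placeholder used in the examples above, so confirm the exact endpoint (including the realm segment) for your tenant and region.

```javascript theme={null}
// Minimal sketch: refreshing an access token server-side via the standard
// OAuth2 refresh_token grant. Endpoint/client values mirror the examples
// above; confirm the exact URL for your tenant and region.
function buildRefreshRequest(refreshToken, clientId, clientSecret) {
  return {
    url: "https://auth.us.corti.app/realms//protocol/openid-connect/token",
    options: {
      method: "POST",
      headers: { "Content-Type": "application/x-www-form-urlencoded" },
      body: new URLSearchParams({
        grant_type: "refresh_token",
        refresh_token: refreshToken,
        client_id: clientId,
        client_secret: clientSecret, // confidential clients only — never ship to a browser
      }).toString(),
    },
  };
}

// Usage (server-side):
// const { url, options } = buildRefreshRequest(storedRefreshToken, CLIENT_ID, CLIENT_SECRET);
// const data = await (await fetch(url, options)).json();
// data.access_token and data.refresh_token replace the previous pair
```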
***
### 3. Resource Owner Password Credentials (ROPC) Grant (Use with caution)
**Best for:** Controlled environments embedding Corti Assistant with trusted clients (e.g., internal tools).
**Why:** Allows username/password login directly in the app — but **bypasses the authorization server UI**.
**Risks:**
* Trains users to enter passwords into third-party apps.
* Easy to misuse, violates best practices.
* Only viable where UI constraints prevent redirecting (e.g., native kiosk apps without browsers).
This example uses the [Corti JavaScript SDK](https://docs.corti.ai/quickstart/javascript-sdk) (`@corti/sdk`)
```javascript [expandable] theme={null}
import { CortiAuth, CortiClient, CortiEnvironment } from "@corti/sdk";
const CLIENT_ID = "YOUR_CLIENT_ID";
const USERNAME = "user@example.com";
const PASSWORD = "your-password";
// Step 1: Exchange credentials for tokens using ROPC flow
const auth = new CortiAuth({
environment: CortiEnvironment.Eu,
tenantName: "YOUR_TENANT_NAME",
});
const tokenResponse = await auth.getRopcFlowToken({
clientId: CLIENT_ID,
username: USERNAME,
password: PASSWORD,
});
const { accessToken, refreshToken } = tokenResponse;
```
Use only in trusted/internal scenarios.
Here's a simple fetch example:
```javascript [expandable] theme={null}
const response = await fetch('https://auth.us.corti.app/realms//protocol/openid-connect/token', {
method: 'POST',
headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
body: new URLSearchParams({
grant_type: 'password',
client_id: 'YOUR_CLIENT_ID',
username: 'user@example.com',
password: 'yourpassword'
})
});
const data = await response.json();
console.log('Access Token:', data.access_token);
```
This example uses standard .NET libraries for the OAuth2 ROPC flow. Use only in trusted/internal scenarios.
```csharp [expandable] theme={null}
using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;
public class RopcAuth
{
private const string ClientId = "YOUR_CLIENT_ID";
private const string TokenUrl = "https://auth.us.corti.app/realms//protocol/openid-connect/token";
public static async Task<TokenResponse> GetTokenAsync(string username, string password)
{
using var httpClient = new HttpClient();
var formData = new List<KeyValuePair<string, string>>
{
new KeyValuePair<string, string>("grant_type", "password"),
new KeyValuePair<string, string>("client_id", ClientId),
new KeyValuePair<string, string>("username", username),
new KeyValuePair<string, string>("password", password)
};
var content = new FormUrlEncodedContent(formData);
var response = await httpClient.PostAsync(TokenUrl, content);
response.EnsureSuccessStatusCode();
var responseBody = await response.Content.ReadAsStringAsync();
var tokenData = JsonSerializer.Deserialize<JsonElement>(responseBody);
if (tokenData.TryGetProperty("error", out var error))
{
var errorDescription = tokenData.TryGetProperty("error_description", out var desc)
? desc.GetString()
: "Unknown error";
throw new Exception($"Authentication failed: {errorDescription}");
}
return new TokenResponse
{
AccessToken = tokenData.GetProperty("access_token").GetString(),
RefreshToken = tokenData.TryGetProperty("refresh_token", out var refresh)
? refresh.GetString()
: null,
ExpiresIn = tokenData.GetProperty("expires_in").GetInt32()
};
}
}
public class TokenResponse
{
public string AccessToken { get; set; }
public string RefreshToken { get; set; }
public int ExpiresIn { get; set; }
}
// Example usage
class Program
{
static async Task Main()
{
try
{
var tokenResponse = await RopcAuth.GetTokenAsync(
"user@example.com",
"yourpassword"
);
Console.WriteLine($"Access Token: {tokenResponse.AccessToken}");
Console.WriteLine($"Expires in: {tokenResponse.ExpiresIn} seconds");
}
catch (Exception ex)
{
Console.WriteLine($"Error: {ex.Message}");
}
}
}
```
This authentication method is not recommended, but sometimes necessary.
***
## Final Guidance
**Important:** Embedded Corti Assistant requires user-based authentication. **Client credentials grant is NOT supported** as it does not provide user context.
Pick the flow that matches your interaction model from the options above. For further details, or for help choosing, [contact support here](https://help.corti.app/en/articles/11156327-choosing-the-right-oauth2-flow-for-corti-integrations).
**What to Use When:**
* **For embedded Corti Assistant** (e.g., in an iFrame/WebView):
**Use Authorization Code Flow with PKCE** to authenticate the end user securely. This is the recommended flow for most embedded integrations.
* **For server-side integrations** where you can securely store a client secret:
**Authorization Code Flow (without PKCE)** may be used, but ensure the client secret is never exposed to the frontend.
* **For constrained environments** without redirect capabilities:
**ROPC** may be used with caution, but only in trusted internal environments.
* **Client Credentials is NOT supported** — it does not provide user context and will not work with embedded Corti Assistant.
**Additional Reference**:
* [OAuth 2.0 Authorization Framework (RFC 6749)](https://datatracker.ietf.org/doc/html/rfc6749)
* [IBM's overview of OAuth](https://www.ibm.com/think/topics/oauth)
# Configuration Guide
Source: https://docs.corti.ai/assistant/configuration
In-depth reference for configuring the embedded Assistant interface, features, appearance, and locale settings
The `configure` action allows you to customize the Corti Assistant interface to match your application's needs and user preferences. This guide provides a comprehensive reference for all available configuration options, their behavior, and best practices for customization.
## Overview
Configuration is applied using the `configure` action, which accepts three main categories of settings:
* **Features**: Toggle UI components and functionality
* **Appearance**: Customize visual styling and branding
* **Locale**: Set interface and dictation languages, plus custom string overrides
Configuration can be applied incrementally. You can call `configure` with only
the properties you want to change, and the rest will remain at their current
values. The action returns the full current configuration object, allowing you
to read the current state.
For implementation examples, see the [PostMessage API](/assistant/postmessage-api) or [Window API](/assistant/window-api) documentation.
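The incremental merge-and-return behavior described above can be illustrated with a small stub. This is not the real API — actual calls go through the PostMessage or Window API handle — but it captures the semantics: each call merges only the properties you pass and returns the full, current configuration.

```javascript theme={null}
// Illustrative stub of the incremental `configure` semantics described
// above. Not the real API — see the PostMessage/Window API guides.
let currentConfig = {
  features: { navigation: false, aiChat: true },
  appearance: { primaryColor: null },
};

function configure(partial = {}) {
  // Merge only the categories/properties supplied by the caller
  for (const [category, values] of Object.entries(partial)) {
    currentConfig[category] = { ...currentConfig[category], ...values };
  }
  // The action returns the full current configuration object
  return structuredClone(currentConfig);
}

const result = configure({ features: { navigation: true } });
console.log(result.features.navigation); // true
console.log(result.features.aiChat);     // true — unspecified values persist
```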
## Feature Toggles
Feature toggles control which UI components and functionality are visible and available in the embedded Assistant. Each feature can be independently enabled or disabled, allowing you to create a tailored experience that matches your application's workflow and user needs.
### interactionTitle
**Type**: `boolean`\
**Default**: `true`
Controls whether the interaction title field is displayed in the interface. The title typically shows the encounter title that was provided when creating the interaction.
**When to disable:**
* You want a more minimal, streamlined interface
* You're managing the interaction title externally in your application
* The title is redundant with information already displayed in your host application
**When to keep enabled:**
* Users need to see or edit the interaction title within the Assistant
* The title provides important context for the current session
* You want users to be able to identify different interactions
**Impact**: When disabled, the title field is completely hidden from the interface, reducing visual clutter but removing the ability for users to see or modify the interaction title within the Assistant.
### aiChat
**Type**: `boolean`\
**Default**: `true`
Controls whether the AI chat feature is available to users. The AI chat allows users to interact with Corti's AI assistant for questions, clarifications, and document-related queries.
**When to disable:**
* You want to restrict AI chat functionality for compliance or policy reasons
* Your workflow doesn't require AI assistance features
* You're providing alternative support channels for user questions
**When to keep enabled:**
* Users benefit from AI-powered assistance during documentation
* You want to provide contextual help and clarifications
* AI chat enhances the user experience and productivity
**Impact**: When disabled, all AI chat UI elements and functionality are hidden. Users will not be able to access AI assistance features within the Assistant interface.
### documentFeedback
**Type**: `boolean`\
**Default**: `true`
Controls whether users can provide feedback on generated documents. This includes the ability to rate document quality, report issues, or provide corrections.
**When to disable:**
* You handle document feedback through external systems
* Feedback collection is not part of your workflow
* You want to simplify the interface by removing feedback mechanisms
**When to keep enabled:**
* You want to collect user feedback to improve document quality
* Feedback helps identify issues or areas for improvement
* You want users to have a way to report problems with generated documents
**Impact**: When disabled, all document feedback controls and related UI are hidden. Users will not be able to provide feedback on documents directly within the Assistant.
### navigation
**Type**: `boolean`\
**Default**: `false`
Controls whether the navigation sidebar is visible, providing access to templates, settings, archive, and other areas of the Assistant.
**When to enable:**
* You want users to have full access to Assistant features
* Users need to browse and manage templates
* Users should be able to access settings and preferences
* Archive functionality is needed for your workflow
**When to keep disabled:**
* You want a focused, minimal interface for specific workflows
* Navigation is handled externally in your application
* You're creating a single-purpose integration that doesn't need full navigation
**Impact**: When enabled, the navigation sidebar becomes visible, allowing users to access:
* `/templates` - Browse and create document templates
* `/settings/preferences` - Edit defaults like languages and default session settings
* `/settings/input` - Edit dictation input settings
* `/settings/account` - Edit general account settings
* `/settings/archive` - View items in and restore from archive
**Best Practice**: Enable navigation when you want users to have self-service access to Assistant features. Disable it for tightly controlled, workflow-specific integrations where you manage navigation externally.
### virtualMode
**Type**: `boolean`\
**Default**: `true`
Controls whether the interface operates in virtual mode (for remote/telemedicine consultations) or live mode (for in-person consultations). This affects the recording interface, audio handling, and user experience.
**When to set to `false` (live mode):**
* Consultations are conducted in-person
* Recording happens locally or through local audio devices
* You're integrating with physical clinic environments
* The consultation type is explicitly in-person
**When to keep as `true` (virtual mode):**
* Consultations are conducted remotely (telemedicine)
* Audio is captured through web-based or remote systems
* The default virtual consultation workflow applies
* Remote consultation is the primary or only mode
**Impact**:
* **Virtual mode (`true`)**: Optimized for remote consultations with web-based audio capture
* **Live mode (`false`)**: Optimized for in-person consultations with local audio handling
**Best Practice**: Match this setting to your actual consultation type. If your application supports both modes, you may want to configure this dynamically based on the specific encounter type.
### syncDocumentAction
**Type**: `boolean`\
**Default**: `false`
Controls whether the "Synchronize document" button is available for syncing documents directly to EHR systems or external systems.
**When to enable:**
* You want users to be able to sync documents directly from the Assistant interface
* Direct EHR synchronization is part of your workflow
* Users need immediate access to sync functionality
**When to keep disabled:**
* Document syncing is handled externally through your application
* You manage document export through your own systems
* You want to control the sync process outside the Assistant
**Impact**: When enabled, the synchronize document button appears in the document interface. The button text can be customized using locale overrides (see [String Overrides](#string-overrides)).
**Best Practice**: Enable this if you want users to have direct control over document synchronization. If you handle syncing programmatically through your application, you may prefer to keep this disabled and manage syncing externally.
### templateEditor
**Type**: `boolean`\
**Default**: `true`
Controls whether the Template Assembler is available to end users. When enabled, it allows users to create custom templates for documentation tailored to their needs and preferences.
**When to enable:**
* You want to give end users flexibility to create and tailor their own templates.
**When to keep disabled:**
* You want end users to work only with a predefined, limited set of templates.
* Your integration relies on a fixed mapping of section keys, and you’re not set up to support an evolving list of new section keys.
**Impact**: When the Template Assembler is disabled, users are restricted to a fixed set of templates and cannot customize them.
**Best Practice**: Keep the Template Assembler enabled unless there is a strong reason to restrict template customization.
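The toggles above can be combined in a single `configure` call. The sketch below assumes the feature property names match the section headings in this guide, and that `api` is whichever handle your integration method (PostMessage or Window API) exposes.

```javascript theme={null}
// Sketch: applying several feature toggles documented above in one call.
// Property names follow this guide's section headings; `api` is your
// integration handle (PostMessage or Window API).
const featurePayload = {
  features: {
    interactionTitle: false,  // hide the title field
    navigation: false,        // keep a focused, minimal interface
    virtualMode: false,       // in-person (live) consultations
    syncDocumentAction: true, // show the "Synchronize document" button
  },
};

// await api.configure(featurePayload);
```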
## Appearance Customization
### Primary Color
**Type**: `string | null`\
**Default**: `null` (uses built-in default theme)
You can customize the primary accent color used throughout the Corti Assistant interface to match your application's branding. This color is applied to interactive elements, buttons, links, focus indicators, and other accent elements throughout the interface.
**Format**: Hex color code as a string (e.g., `"#00a6ff"`, `"#1a73e8"`)
**When to customize:**
* You want to maintain brand consistency with your host application
* Your application has a specific brand color palette
* You need to match existing design systems or style guides
* Visual consistency across your integrated experience is important
**When to use default:**
* Brand consistency is not a priority
* You prefer Corti's default accessible color scheme
* You want to minimize customization complexity
**Impact**: The primary color affects:
* Button backgrounds and hover states
* Link colors
* Focus indicators and active states
* Accent elements and highlights
* Selected states and active UI elements
**Important Considerations:**
* The color must meet WCAG 2.2 AA contrast requirements (see [WCAG Compliance](#wcag-22-aa-compliance))
* The color is applied across all UI states (default, hover, active, focus, disabled)
* Test thoroughly to ensure accessibility and usability
```javascript theme={null}
await api.configure({
appearance: {
primaryColor: "#00a6ff",
},
});
```
### WCAG 2.2 AA Compliance
**Always ensure WCAG 2.2 AA conformance when customizing appearance**
Corti Assistant's default theme has been evaluated against WCAG 2.2 Level AA and meets applicable success criteria in our supported browsers.
**Important**: This conformance claim applies only to the default configuration. Customer changes (e.g., color palettes, CSS overrides, third-party widgets, or content) are outside the scope of this claim. Customers are responsible for ensuring their customizations continue to meet WCAG 2.2 AA.
#### Required WCAG 2.2 AA Criteria
When supplying a custom accent color or theme, you must ensure WCAG 2.2 AA conformance, including:
* **1.4.3 Contrast (Minimum)**:
* Normal text: ≥ 4.5:1 contrast ratio
* Large text: ≥ 3:1 contrast ratio
* **1.4.11 Non-text Contrast**:
* UI boundaries, focus rings, and selected states: ≥ 3:1 contrast ratio
* **2.4.11 Focus Not Obscured (Minimum)**:
* Focus indicators must remain visible and unobstructed
#### Best Practices for Color Customization
1. **Test contrast ratios**: Use tools like [WebAIM Contrast Checker](https://webaim.org/resources/contrastchecker/) to verify your color choices
2. **Test all states**: Verify contrast for default, hover, active, disabled, and focus states
3. **Test on all backgrounds**: Ensure your color works on both light and dark backgrounds if applicable
4. **Maintain focus visibility**: Ensure focus indicators remain clearly visible with your custom color
5. **Consider color blindness**: Test with color blindness simulators to ensure usability
Corti provides accessible defaults. If you override them, verify contrast for
all states (default, hover, active, disabled, focus) and on all backgrounds
you use.
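The contrast checks above can also be automated, for example in CI. Below is a minimal sketch of the WCAG relative-luminance and contrast-ratio formulas for 6-digit hex colors; the helper names are our own and not part of the Corti API.

```javascript theme={null}
// WCAG relative luminance for a "#rrggbb" color.
function relativeLuminance(hex) {
  const [r, g, b] = [1, 3, 5].map((i) => {
    const c = parseInt(hex.slice(i, i + 2), 16) / 255;
    // Linearize the sRGB channel value.
    return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
  });
  return 0.2126 * r + 0.7152 * g + 0.0722 * b;
}

// WCAG contrast ratio between two colors (1:1 up to 21:1).
function contrastRatio(hexA, hexB) {
  const [hi, lo] = [relativeLuminance(hexA), relativeLuminance(hexB)].sort(
    (a, b) => b - a
  );
  return (hi + 0.05) / (lo + 0.05);
}

console.log(contrastRatio("#ffffff", "#000000")); // ≈ 21, the maximum possible ratio
```

For normal text, verify that the ratio between your accent color and the text rendered on it is at least 4.5:1.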
## Locale Settings
Locale settings control the language used for the interface and dictation, as well as allow for custom string overrides.
### Interface Language
**Type**: `string | null`\
**Default**: `null` (uses user's default or browser setting)
Sets the language for the user interface, including buttons, labels, menus, messages, and all UI text. This is separate from the dictation language, allowing you to have a German interface with English dictation, for example.
**Format**: Language code as a string (e.g., `"en"`, `"de-DE"`, `"fr-FR"`)
**When to set:**
* Your application serves users in specific languages
* You want to ensure a consistent language experience
* Users prefer a specific interface language regardless of browser settings
* You're building a localized application
**When to use default (`null`):**
* You want to respect user browser/OS language preferences
* Language detection should be automatic
* You support multiple languages and want automatic selection
**Available languages**: See [Available Interface Languages](#available-interface-languages)
**Best Practice**: Set the interface language based on your application's user preferences or locale settings. You can also allow users to change it if navigation is enabled.
### Dictation Language
**Type**: `string`\
**Default**: `"en"`
Sets the language for speech recognition and dictation. This determines which language model is used for transcribing spoken audio. The dictation language can be different from the interface language.
**Format**: Language code as a string (e.g., `"en"`, `"de"`, `"fr"`, `"da-DK"`)
**When to customize:**
* Users dictate in a language different from the interface language
* You need to support multilingual dictation
* Regional variations require specific language codes (e.g., `"en-GB"` vs `"en"`)
**Important Notes:**
* **Region-specific**: Different regions (EU, US) support different dictation languages
* **Separate from interface language**: Interface and dictation languages are independent
* **Must be valid**: The language code must be supported in your region (see [Available Dictation Languages](#available-dictation-languages))
**Available languages**: See [Available Dictation Languages](#available-dictation-languages)
**Best Practice**: Set the dictation language based on the actual language users will speak during consultations. This may differ from the interface language, especially in multilingual environments.
### String Overrides
**Type**: `Record<string, string>`\
**Default**: `{}` (empty object)
Allows you to customize specific UI strings in the interface by providing key-value pairs where keys are string identifiers and values are the replacement text.
**Format**: Object with string keys and string values:
```typescript theme={null}
{
"string.key": "Custom text",
"another.key": "Another custom text"
}
```
**When to use:**
* You need to customize button labels to match your terminology
* You want to localize specific strings that aren't covered by interface language
* You need to match your application's terminology (e.g., "EHR" vs "EMR")
* You want to provide context-specific labels
**Limitations:**
* Only specific keys are available for override (see [Available String Overrides](#available-string-overrides))
* Not all UI strings can be customized
* Overrides apply regardless of interface language
**Best Practice**: Use string overrides sparingly for critical terminology that must match your application. For broader localization, use the interface language setting instead.
#### Available String Overrides
Currently, the following keys are exposed for override:
| Key | Default value | Purpose |
| --------------------------------------- | ------------------------ | ------------------------------------------------------------------- |
| `interview.document.syncDocument.label` | *"Synchronize document"* | The button text for the *"synchronize document"* button if enabled. |
More string override keys may be added in future releases. Contact support if
you need additional strings customized.
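If you build override maps dynamically, you may want to drop unsupported keys before calling `configure`. A sketch, using the key list from the table above (the helper and its name are our own):

```javascript theme={null}
// Keys taken from the table above; extend this set as more keys are exposed.
const SUPPORTED_OVERRIDE_KEYS = new Set([
  "interview.document.syncDocument.label",
]);

// Keep only entries whose key is currently overridable.
function pickSupportedOverrides(overrides) {
  return Object.fromEntries(
    Object.entries(overrides).filter(([key]) => SUPPORTED_OVERRIDE_KEYS.has(key))
  );
}
```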
### Available Interface Languages
Updated as of February 2026
| Language code | Language |
| ------------- | ----------------- |
| `en` | English |
| `de-DE` | German |
| `fr-FR` | French |
| `it-IT` | Italian |
| `sv-SE` | Swedish |
| `da-DK` | Danish |
| `nb-NO` | Norwegian Bokmål |
| `nn-NO` | Norwegian Nynorsk |
### Available Dictation Languages
Updated as of February 2026
#### EU Region
| Language code | Language |
| ------------- | --------------- |
| `en` | English |
| `en-GB` | British English |
| `de` | German |
| `fr` | French |
| `sv` | Swedish |
| `da` | Danish |
| `nl` | Dutch |
| `no` | Norwegian |
#### US Region
| Language code | Language |
| ------------- | -------- |
| `en` | English |
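Since supported dictation languages differ per region, you may want to validate codes client-side before calling `configure`. A sketch based on the tables above (the lists may change over time, so treat them as a snapshot):

```javascript theme={null}
// Supported dictation languages per region, per the tables above (subject to change).
const DICTATION_LANGUAGES = {
  eu: ["en", "en-GB", "de", "fr", "sv", "da", "nl", "no"],
  us: ["en"],
};

function isDictationLanguageSupported(region, code) {
  return (DICTATION_LANGUAGES[region] ?? []).includes(code);
}
```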
## Configuration Structure
The `configure` action accepts a configuration object with three main sections:
```typescript theme={null}
{
features: {
interactionTitle: boolean,
aiChat: boolean,
documentFeedback: boolean,
navigation: boolean,
virtualMode: boolean,
    syncDocumentAction: boolean,
    templateEditor: boolean
},
appearance: {
primaryColor: string | null
},
locale: {
interfaceLanguage: string | null,
dictationLanguage: string,
    overrides: Record<string, string>
}
}
```
All properties are optional. You can configure only the sections you need, and the rest will maintain their current values. The action returns the complete current configuration object, allowing you to read the current state.
## Best Practices
### Minimum Supported Resolution
To ensure a reliable and accessible experience, Assistant requires a minimum viewport size depending on how it is deployed.
Resolution requirements are defined by **functional guarantees**, not device types. At the minimum supported size, all critical workflows must remain fully usable without layout breakage or hidden actions.
#### Embedded (iframe / SDK)
* **Recommended Width:** ≥ `1024px` (Note: template customization via the Template Assembler is not supported below this width)
* **Minimum Width:** ≥ `640px`
* **Minimum Height:** ≥ `320px`
At minimum width:
* No horizontal scrolling is required.
* Primary actions (recording, transcript, facts, document editing) remain visible and accessible.
* All core workflows can be completed without layout overlap or clipping.
If embedded in a narrower container, the Embedded Assistant will switch to a compact layout. Below the minimum supported width, layout integrity is not guaranteed.
As an integrator, you should ensure that your embedding container respects the minimum supported width to guarantee full functionality.
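An integrator-side guard for the limits above might look like this (the numbers come from this section; the helper name is our own):

```javascript theme={null}
const MIN_WIDTH = 640;          // px, embedded minimum width
const MIN_HEIGHT = 320;         // px, embedded minimum height
const RECOMMENDED_WIDTH = 1024; // px, required for the Template Assembler

// Report whether a container meets the documented size requirements.
function checkContainerSize(width, height) {
  return {
    supported: width >= MIN_WIDTH && height >= MIN_HEIGHT,
    templateAssemblerAvailable: width >= RECOMMENDED_WIDTH,
  };
}
```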
#### Narrow Containers (Mobile)
The Assistant switches to its mobile layout based on device detection, not viewport width: an internal `getIsMobile()` check returns `true` when the browser user agent matches `Mobile|Android|iPhone|iPad|iPod`. The switch point is therefore the browser identifying itself as a mobile device, not a specific pixel width.
#### Accessibility Requirements
At the minimum supported resolution, Assistant:
* Complies with WCAG 2.2 AA reflow requirements.
* Supports up to **200% browser zoom** without loss of functionality.
* Does not require two-dimensional scrolling to complete primary workflows.
### Configuration Timing and Lifecycle
**Apply configuration early**: Configure the Assistant as soon as it's ready (after the `ready` event) and before users interact with it. This ensures a consistent experience from the start.
**Configure before navigation**: If you're navigating to specific routes, set your configuration first to ensure the target page loads with the correct settings.
**Reconfigure dynamically**: You can update configuration at any time during a session. This is useful for:
* Adapting to different user roles or contexts
* Responding to user preferences changes
* Switching between different workflow modes
**Configuration persistence**: Configuration settings persist for the duration of the session but do not persist across page reloads or new sessions. You'll need to reapply configuration each time the Assistant is initialized.
### Incremental Configuration
The `configure` action supports incremental updates. You can update only specific sections without affecting others:
* Update only `features` to change UI visibility
* Update only `appearance` to change branding
* Update only `locale` to change language settings
This allows you to make targeted changes without needing to specify the entire configuration object each time.
### Reading Current Configuration
The `configure` action always returns the complete current configuration object, regardless of what you pass in. This allows you to:
* Read the current state of all settings
* Verify that your configuration was applied correctly
* Build upon existing configuration rather than replacing it entirely
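The incremental merge and read-back behavior described above can be sketched with a local stub. This illustrates the documented semantics only; it is not the real embedded API.

```javascript theme={null}
// Local stub mimicking the documented merge-and-return semantics.
function createConfigStore(initial) {
  let config = structuredClone(initial);
  return {
    async configure(partial = {}) {
      for (const section of ["features", "appearance", "locale"]) {
        if (partial[section]) {
          // Merge only the sections provided; others keep their current values.
          config[section] = { ...config[section], ...partial[section] };
        }
      }
      // Always returns the complete current configuration.
      return structuredClone(config);
    },
  };
}
```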
### Error Handling
Always implement proper error handling when configuring:
* **Handle configuration failures**: Configuration may fail due to invalid values, network issues, or other errors
* **Provide fallbacks**: Have default configuration ready in case of failures
* **Log errors appropriately**: Log configuration errors for debugging while maintaining user experience
* **Validate before applying**: Validate configuration values (especially colors and language codes) before sending them
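For the "validate before applying" point, lightweight client-side checks might look like the following. The patterns are our own simplification (real BCP 47 language tags are broader), intended to catch obvious mistakes before they reach `configure`.

```javascript theme={null}
// Accepts null (use default theme) or a 6-digit hex color.
function isValidHexColor(value) {
  return value === null || /^#[0-9a-fA-F]{6}$/.test(value);
}

// Accepts null or simplified tags like "en", "da-DK"; full BCP 47 is broader.
function isValidLanguageTag(value) {
  return value === null || /^[a-z]{2,3}(-[A-Z]{2})?$/.test(value);
}
```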
### Accessibility Best Practices
**Color customization:**
* Always test color contrast ratios using tools like [WebAIM Contrast Checker](https://webaim.org/resources/contrastchecker/)
* Test all UI states: default, hover, active, disabled, and focus
* Ensure focus indicators remain clearly visible
* Test on both light and dark backgrounds if applicable
* Consider color blindness: Test with simulators to ensure usability
**Feature toggles:**
* Ensure that disabling features doesn't break keyboard navigation
* Verify that screen readers can still navigate the interface
* Test that essential functionality remains accessible when features are disabled
* Consider the impact on users who rely on specific features
**Language settings:**
* Ensure interface language changes don't break layout or functionality
* Test that all UI elements are properly localized
* Verify that right-to-left languages (if supported) work correctly
### Performance Optimization
**Configure once at startup**: Apply your primary configuration once when the Assistant is ready, rather than repeatedly calling `configure`.
**Batch configuration changes**: If you need to change multiple settings, do it in a single `configure` call rather than multiple separate calls.
**Cache configuration objects**: Store your configuration objects and reuse them across sessions to avoid reconstructing them each time.
**Minimize reconfiguration**: Avoid unnecessary reconfiguration during a session. Only update configuration when user preferences or context actually change.
### User Experience Guidelines
**Match user workflows**: Configure features based on your users' actual workflow needs. Consider:
* What features do users actually need for their tasks?
* What can be simplified or hidden to reduce cognitive load?
* How does the Assistant fit into the broader application workflow?
**Consistent branding**: Use appearance settings to maintain visual consistency with your host application. This creates a more cohesive, integrated experience.
**Respect user preferences**:
* Use interface language settings that match your application's user preferences
* Set dictation language based on the actual language users will speak
* Consider allowing users to change settings if navigation is enabled
**Context-aware configuration**:
* Configure differently for different user roles (e.g., physicians vs. nurses)
* Adapt configuration based on consultation type (virtual vs. in-person)
* Consider workflow-specific configurations for different use cases
**Progressive disclosure**: Start with a minimal configuration and enable additional features as needed. This reduces initial complexity while allowing power users to access advanced features.
### Security and Compliance
**No sensitive data in configuration**: Configuration values are visible in client-side code. Never include sensitive information like API keys, tokens, or personal data in configuration.
**Validate user permissions**: If you're configuring based on user roles or permissions, validate those permissions server-side before applying configuration.
**Compliance considerations**:
* Ensure customizations maintain compliance with relevant regulations (HIPAA, GDPR, etc.)
* Document any customizations that affect compliance
* Test that disabled features don't impact required functionality
### Testing Recommendations
**Test all feature combinations**: Test how different feature toggle combinations affect the interface and functionality.
**Test color customizations thoroughly**:
* Test with various color values
* Verify accessibility in all states
* Test with different screen sizes and resolutions
**Test language settings**:
* Verify all supported languages work correctly
* Test interface language and dictation language combinations
* Ensure string overrides work as expected
**Test incremental updates**: Verify that partial configuration updates work correctly and don't reset other settings.
**Test error scenarios**: Test what happens when invalid configuration values are provided.
## Related Documentation
* [API Reference](/assistant/api-reference) - Complete reference for the `configure` action
* [PostMessage API](/assistant/postmessage-api) - Learn how to use the PostMessage integration method
* [Window API](/assistant/window-api) - Learn how to use the Window API integration method
* [Embedded API Overview](/assistant/embedded-api) - General overview of the embedded API
* [Proxy Guide](/assistant/proxy) - How to run the embedded Assistant behind a reverse proxy
Please [contact us](https://help.corti.app) for help or questions about
configuration.
# Embedded Assistant API
Source: https://docs.corti.ai/assistant/embedded-api
Access an API for embedding Corti Assistant in your workflow today
The Corti Embedded Assistant API enables seamless integration of [Corti Assistant](https://assistant.corti.ai) into host applications, such as Electronic Health Record (EHR) systems, web-based clinical portals, or native applications using embedded WebViews. The implementation provides a robust, consistent, and secure interface for parent applications to control and interact with the embedded Corti Assistant.
The details below cover embedding the Corti Assistant AI scribe solution
natively within your application. To learn more about the full Corti API,
see the [API reference](/api-reference/welcome).
***
## Overview
The Embedded Assistant API is a communication interface that allows your application to embed and control Corti Assistant within your own application interface. It provides programmatic control over authentication, session management, interaction creation, document generation, and more.
The API enables two-way communication between your application and the embedded Corti Assistant, allowing you to:
* Authenticate users and manage sessions
* Create and manage clinical interactions
* Configure the Assistant interface and appearance
* Control recording functionality
* Receive real-time events and updates
* Access generated documents and transcripts
## Requirements
Before getting started, ensure you have:
* **Created an OAuth Client for Corti Assistant**: You'll need to create a Corti Assistant specific client from the [Developer Console](https://console.corti.app). Note, you may need to request access from our Activation Team.
* **Modern browser or WebView**: For web applications, use a modern browser. For native apps, use a modern WebView (WebView2, WKWebView, or Chromium-based WebView)
* **HTTPS**: The embedded Assistant must be loaded over HTTPS (required for microphone access)
* **Microphone permissions**: Your application must request and handle microphone permissions appropriately
* **OAuth2 client**: You'll need an OAuth2 client configured for user-based authentication
## Recommendations
* **Use PostMessage API** for iframe/WebView integrations and cross-origin scenarios
* **Use Window API** for same-origin integrations where direct JavaScript access is preferred
* **Implement proper error handling** for all API calls
* **Handle authentication token refresh** to maintain user sessions
* **Request microphone permissions** before initializing the embedded Assistant
## Available Regions
* **EU**: [https://assistant.eu.corti.app](https://assistant.eu.corti.app)
* **EU MD**: [https://assistantmd.eu.corti.app](https://assistantmd.eu.corti.app) (medical device compliant)
* **US**: [https://assistant.us.corti.app](https://assistant.us.corti.app)
## Choosing an Integration Method
The Embedded Assistant API offers two integration methods, each suited for different use cases:
### When to Use PostMessage API
Use the **PostMessage API** when:
* **Embedding in a web application** (iframe-based integration)
* **Cross-origin communication** is required
* **Native applications** using WebViews (iOS WKWebView, Android WebView, Windows WebView2)
* You need **secure cross-origin communication** between different domains
* Your application and Corti Assistant are served from **different origins**
The PostMessage API uses the browser's `postMessage` mechanism, which is the standard way to communicate securely across origins.
**[PostMessage API Quick Start](/assistant/postmessage-api#quick-start)** - Complete guide for initializing and authenticating with the PostMessage API
### When to Use Window API
Use the **Window API** when:
* **Same-origin integration** (your application and Corti Assistant share the same domain)
* **Direct JavaScript access** is preferred
* **TypeScript support** and type safety are important
* You want **synchronous API calls** with Promise-based methods
* Your application is a **single-page application (SPA)** that can load Corti Assistant directly
The Window API provides direct access to `window.CortiEmbedded.v1`, offering a more traditional JavaScript API experience.
**[Window API Quick Start](/assistant/window-api#quick-start)** - Complete guide for initializing and authenticating with the Window API
### Quick Decision Guide
| Scenario | Recommended Method |
| -------------------------------- | ------------------ |
| Web app embedding via iframe | PostMessage API |
| Native app with WebView | PostMessage API |
| Same-origin web integration | Window API |
| Cross-origin integration | PostMessage API |
| Need TypeScript types | Window API |
| Need cross-browser compatibility | PostMessage API |
Both guides include complete code examples for authentication, configuration, and creating interactions.
## Documentation
* **[PostMessage API](/assistant/postmessage-api)** - Complete guide for iframe/WebView integrations using postMessage
* **[Window API](/assistant/window-api)** - Complete guide for direct integrations using the Window API
* **[API Reference](/assistant/api-reference)** - Complete reference for all actions, events, message types, and return values
* **[OAuth Authentication](/assistant/authentication)** - Guide for implementing OAuth2 authentication flows
## Next Steps
1. Review the [OAuth Authentication Guide](/assistant/authentication) to set up user authentication
2. Choose your integration method based on your use case
3. Review the detailed documentation for your chosen method
4. Consult the [API Reference](/assistant/api-reference) for all available actions and events
5. Implement your integration following the examples and best practices
Please [contact us](https://help.corti.app) for help or questions.
# account.creditsConsumed
Source: https://docs.corti.ai/assistant/events/generated/account/creditsConsumed
Emitted when account credits are consumed.
## Event Properties
| Field | Value |
| -------------- | --------------------------- |
| `event` | `"account.creditsConsumed"` |
| `confidential` | `boolean` |
| `payload` | `object` |
**Payload fields when `confidential` is `false`:**
| Field | Type | Description |
| ------------------ | ---------------------------------------------------- | ---------------------------------------------------------------------------- |
| `creditsConsumed` | `number` | Number of credits used |
| `reason` | `"stream" \| "transcription" \| "document-creation"` | What operation consumed the credits |
| `documentId` | `string` | Unique document identifier |
| `documentName` | `string` | Document name or title |
| `outputLanguage` | `string` | Output language code (e.g., "en", "de") |
| `templateType` | `"built-in" \| "custom"` | Type of template used: built-in or custom |
| `templateId` | `string` | Template identifier (either ref for built-in or customTemplateId for custom) |
| `interactionId` | `string` | Unique interaction identifier |
| `interactionState` | `InteractionState` | Current state of the interaction |
#### Example
```json theme={null}
{
"event": "account.creditsConsumed",
"confidential": false,
"payload": {
"creditsConsumed": 12,
"reason": "stream",
"documentId": "doc_123",
"documentName": "SOAP note",
"outputLanguage": "en",
"templateType": "built-in",
"templateId": "soap",
"interactionId": "int_123",
"interactionState": "ongoing"
}
}
```
**Payload fields when `confidential` is `true`:**
| Field | Type | Description |
| ----------------- | ---------------------------------------------------- | --------------------------------------------------------------- |
| `creditsConsumed` | `number` | Number of credits used |
| `reason` | `"stream" \| "transcription" \| "document-creation"` | What operation consumed the credits |
| `document` | `ExternalPatientDocument` | Complete document object with all sections and metadata |
| `interaction` | `ExternalInteraction` | Interaction context including transcripts, documents, and facts |
**Document properties:**
| Field | Type | Description |
| ---------------- | ------------------------ | ---------------------------------------------------------------------------- |
| `id` | `string` | Unique document identifier |
| `name` | `string` | Document name or title |
| `outputLanguage` | `string` | Output language code (e.g., "en", "de") |
| `sections` | `Section[]` | Document sections containing the generated content |
| `templateType` | `"built-in" \| "custom"` | Type of template used: built-in or custom |
| `templateId` | `string` | Template identifier (either ref for built-in or customTemplateId for custom) |
**Section properties:**
| Field | Type | Description |
| ------ | -------- | ----------------------------------------- |
| `key` | `string` | Unique key identifying the section type |
| `name` | `string` | Display name of the section |
| `text` | `string` | Section content in markdown or plain text |
**Interaction properties:**
| Field | Type | Description |
| ----------------- | ------------------------ | --------------------------------------------------- |
| `id` | `string` | Unique interaction identifier |
| `title` | `string \| null` | Interaction title |
| `state` | `InteractionState` | Current state of the interaction |
| `startedAt` | `string (ISO 8601 date)` | When the interaction started (ISO 8601 date string) |
| `transcriptCount` | `number` | Number of transcripts in the interaction |
| `documentCount` | `number` | Number of documents in the interaction |
| `facts` | `Fact[]` | Facts extracted during the interaction |
**Fact properties:**
| Field | Type | Description |
| ------------- | ------------------------------ | -------------------------------------------------------- |
| `id` | `string` | Unique fact identifier |
| `text` | `string` | Fact text content |
| `group` | `string` | Fact group/category name |
| `source` | `"core" \| "system" \| "user"` | Source of the fact: core (AI extracted), system, or user |
| `isDiscarded` | `boolean` | Whether the fact has been discarded |
#### Example
```json theme={null}
{
"event": "account.creditsConsumed",
"confidential": true,
"payload": {
"creditsConsumed": 12,
"reason": "stream",
"document": {
"id": "doc_123",
"name": "SOAP note",
"outputLanguage": "en",
"sections": [
{
"key": "hpi",
"name": "HPI",
"text": "..."
}
],
"templateType": "built-in",
"templateId": "soap"
},
"interaction": {
"id": "int_123",
"title": "Visit",
"state": "ongoing",
"startedAt": "2024-01-01T00:00:00.000+00:00",
"transcriptCount": 0,
"documentCount": 0,
"facts": []
}
}
}
```
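As an illustration of consuming this event, a small usage meter might accumulate credits across events. The handler and its wiring are our own sketch; how event messages actually arrive depends on your chosen integration method.

```javascript theme={null}
// Sum credits from account.creditsConsumed events, broken down by reason.
function createCreditsMeter() {
  const byReason = {};
  let total = 0;
  return {
    handle(message) {
      if (message.event !== "account.creditsConsumed") return;
      const { creditsConsumed, reason } = message.payload;
      total += creditsConsumed;
      byReason[reason] = (byReason[reason] ?? 0) + creditsConsumed;
    },
    summary() {
      return { total, byReason };
    },
  };
}
```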
# account.loggedIn
Source: https://docs.corti.ai/assistant/events/generated/account/loggedIn
Emitted when a user logs in.
## Event Properties
| Field | Value |
| -------------- | -------------------- |
| `event` | `"account.loggedIn"` |
| `confidential` | `boolean` |
| `payload` | `object` |
**Payload fields when `confidential` is `false`:**
| Field | Type | Description |
| ------------ | -------- | ------------------------------------ |
| `authMethod` | `string` | Authentication method used for login |
#### Example
```json theme={null}
{
"event": "account.loggedIn",
"confidential": false,
"payload": {
"authMethod": "password"
}
}
```
**Payload fields when `confidential` is `true`:**
| Field | Type | Description |
| ------------ | -------- | ------------------------------------ |
| `authMethod` | `string` | Authentication method used for login |
| `email` | `string` | User's email address |
#### Example
```json theme={null}
{
"event": "account.loggedIn",
"confidential": true,
"payload": {
"authMethod": "password",
"email": "user@example.com"
}
}
```
# account.loggedOut
Source: https://docs.corti.ai/assistant/events/generated/account/loggedOut
Emitted when a user logs out.
## Event Properties
| Field | Value |
| -------------- | --------------------- |
| `event` | `"account.loggedOut"` |
| `confidential` | `boolean` |
| `payload` | `object` |
**Payload fields when `confidential` is `false`:**
| Field | Type | Description |
| -------- | -------- | ------------------------- |
| `reason` | `string` | What triggered the logout |
#### Example
```json theme={null}
{
"event": "account.loggedOut",
"confidential": false,
"payload": {
"reason": "sign-out"
}
}
```
**Payload fields when `confidential` is `true`:**
| Field | Type | Description |
| -------- | -------- | ------------------------- |
| `reason` | `string` | What triggered the logout |
#### Example
```json theme={null}
{
"event": "account.loggedOut",
"confidential": true,
"payload": {
"reason": "sign-out"
}
}
```
# chat.answerCopied
Source: https://docs.corti.ai/assistant/events/generated/ai-chat/answerCopied
Emitted when a chat answer is copied.
## Event Properties
| Field | Value |
| -------------- | --------------------- |
| `event` | `"chat.answerCopied"` |
| `confidential` | `boolean` |
| `payload` | `object` |
**Payload fields when `confidential` is `false`:**
| Field | Type | Description |
| ------------------ | ------------------------ | ---------------------------------------------------------------------------- |
| `answerLength` | `number` | Character count of the copied answer |
| `documentId` | `string` | Unique document identifier |
| `documentName` | `string` | Document name or title |
| `outputLanguage` | `string` | Output language code (e.g., "en", "de") |
| `templateType` | `"built-in" \| "custom"` | Type of template used: built-in or custom |
| `templateId` | `string` | Template identifier (either ref for built-in or customTemplateId for custom) |
| `interactionId` | `string` | Unique interaction identifier |
| `interactionState` | `InteractionState` | Current state of the interaction |
#### Example
```json theme={null}
{
"event": "chat.answerCopied",
"confidential": false,
"payload": {
"answerLength": 200,
"documentId": "doc_123",
"documentName": "SOAP note",
"outputLanguage": "en",
"templateType": "built-in",
"templateId": "soap",
"interactionId": "int_123",
"interactionState": "ongoing"
}
}
```
**Payload fields when `confidential` is `true`:**
| Field | Type | Description |
| -------------- | ------------------------- | --------------------------------------------------------------- |
| `answer` | `string` | Content that was copied |
| `answerLength` | `number` | Character count of the copied answer |
| `document` | `ExternalPatientDocument` | Complete document object with all sections and metadata |
| `interaction` | `ExternalInteraction` | Interaction context including transcripts, documents, and facts |
**Document properties:**
| Field | Type | Description |
| ---------------- | ------------------------ | ---------------------------------------------------------------------------- |
| `id` | `string` | Unique document identifier |
| `name` | `string` | Document name or title |
| `outputLanguage` | `string` | Output language code (e.g., "en", "de") |
| `sections` | `Section[]` | Document sections containing the generated content |
| `templateType` | `"built-in" \| "custom"` | Type of template used: built-in or custom |
| `templateId` | `string` | Template identifier (either ref for built-in or customTemplateId for custom) |
**Section properties:**
| Field | Type | Description |
| ------ | -------- | ----------------------------------------- |
| `key` | `string` | Unique key identifying the section type |
| `name` | `string` | Display name of the section |
| `text` | `string` | Section content in markdown or plain text |
**Interaction properties:**
| Field | Type | Description |
| ----------------- | ------------------------ | --------------------------------------------------- |
| `id` | `string` | Unique interaction identifier |
| `title` | `string \| null` | Interaction title |
| `state` | `InteractionState` | Current state of the interaction |
| `startedAt` | `string (ISO 8601 date)` | When the interaction started (ISO 8601 date string) |
| `transcriptCount` | `number` | Number of transcripts in the interaction |
| `documentCount` | `number` | Number of documents in the interaction |
| `facts` | `Fact[]` | Facts extracted during the interaction |
**Fact properties:**
| Field | Type | Description |
| ------------- | ------------------------------ | -------------------------------------------------------- |
| `id` | `string` | Unique fact identifier |
| `text` | `string` | Fact text content |
| `group` | `string` | Fact group/category name |
| `source` | `"core" \| "system" \| "user"` | Source of the fact: core (AI extracted), system, or user |
| `isDiscarded` | `boolean` | Whether the fact has been discarded |
#### Example
```json theme={null}
{
"event": "chat.answerCopied",
"confidential": true,
"payload": {
"answer": "...",
"answerLength": 200,
"document": {
"id": "doc_123",
"name": "SOAP note",
"outputLanguage": "en",
"sections": [
{
"key": "hpi",
"name": "HPI",
"text": "..."
}
],
"templateType": "built-in",
"templateId": "soap"
},
"interaction": {
"id": "int_123",
"title": "Visit",
"state": "ongoing",
"startedAt": "2024-01-01T00:00:00.000+00:00",
"transcriptCount": 0,
"documentCount": 0,
"facts": []
}
}
}
```
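All of the events on these pages share a common envelope (`event`, `confidential`, `payload`). A minimal TypeScript dispatcher sketch, assuming the host application receives these envelopes as plain objects (the handler-map API here is illustrative, not part of any SDK):

```typescript
// Common envelope shared by the assistant events documented here.
type AssistantEvent = {
  event: string;
  confidential: boolean;
  payload: Record<string, unknown>;
};

// Route an incoming event to a handler keyed by its `event` discriminator.
// Returns true if a handler was found, false otherwise.
function routeEvent(
  e: AssistantEvent,
  handlers: Record<string, (payload: Record<string, unknown>) => void>,
): boolean {
  const handler = handlers[e.event];
  if (!handler) return false;
  handler(e.payload);
  return true;
}
```

A host app would register one handler per event name it cares about, e.g. `{ "chat.answerCopied": (p) => log(p) }`, and let unrecognized events fall through.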
# chat.asked
Source: https://docs.corti.ai/assistant/events/generated/ai-chat/asked
Emitted when AI chat is asked a question.
## Event Properties
| Field | Value |
| -------------- | -------------- |
| `event` | `"chat.asked"` |
| `confidential` | `boolean` |
| `payload` | `object` |
**Payload properties (non-confidential):**
| Field | Type | Description |
| ------------------ | ------------------------ | ---------------------------------------------------------------------------- |
| `questionLength` | `number` | Character count of the user's question |
| `promptType` | `"question" \| "update"` | Type of prompt sent to the AI |
| `documentId` | `string` | Unique document identifier |
| `documentName` | `string` | Document name or title |
| `outputLanguage` | `string` | Output language code (e.g., "en", "de") |
| `templateType` | `"built-in" \| "custom"` | Type of template used: built-in or custom |
| `templateId` | `string` | Template identifier (either ref for built-in or customTemplateId for custom) |
| `interactionId` | `string` | Unique interaction identifier |
| `interactionState` | `InteractionState` | Current state of the interaction |
#### Example
```json theme={null}
{
"event": "chat.asked",
"confidential": false,
"payload": {
"questionLength": 120,
"promptType": "question",
"documentId": "doc_123",
"documentName": "SOAP note",
"outputLanguage": "en",
"templateType": "built-in",
"templateId": "soap",
"interactionId": "int_123",
"interactionState": "ongoing"
}
}
```
**Payload properties (confidential):**
| Field | Type | Description |
| ---------------- | ------------------------- | --------------------------------------------------------------- |
| `questionLength` | `number` | Character count of the user's question |
| `promptType` | `"question" \| "update"` | Type of prompt sent to the AI |
| `prompt` | `string` | Full text of the user's question |
| `reply` | `string` | AI's generated response |
| `document` | `ExternalPatientDocument` | Complete document object with all sections and metadata |
| `interaction` | `ExternalInteraction` | Interaction context including transcripts, documents, and facts |
**Document properties:**
| Field | Type | Description |
| ---------------- | ------------------------ | ---------------------------------------------------------------------------- |
| `id` | `string` | Unique document identifier |
| `name` | `string` | Document name or title |
| `outputLanguage` | `string` | Output language code (e.g., "en", "de") |
| `sections` | `Section[]` | Document sections containing the generated content |
| `templateType` | `"built-in" \| "custom"` | Type of template used: built-in or custom |
| `templateId` | `string` | Template identifier (either ref for built-in or customTemplateId for custom) |
**Section properties:**
| Field | Type | Description |
| ------ | -------- | ----------------------------------------- |
| `key` | `string` | Unique key identifying the section type |
| `name` | `string` | Display name of the section |
| `text` | `string` | Section content in markdown or plain text |
**Interaction properties:**
| Field | Type | Description |
| ----------------- | ------------------------ | --------------------------------------------------- |
| `id` | `string` | Unique interaction identifier |
| `title` | `string \| null` | Interaction title |
| `state` | `InteractionState` | Current state of the interaction |
| `startedAt` | `string (ISO 8601 date)` | When the interaction started (ISO 8601 date string) |
| `transcriptCount` | `number` | Number of transcripts in the interaction |
| `documentCount` | `number` | Number of documents in the interaction |
| `facts` | `Fact[]` | Facts extracted during the interaction |
**Fact properties:**
| Field | Type | Description |
| ------------- | ------------------------------ | -------------------------------------------------------- |
| `id` | `string` | Unique fact identifier |
| `text` | `string` | Fact text content |
| `group` | `string` | Fact group/category name |
| `source` | `"core" \| "system" \| "user"` | Source of the fact: core (AI extracted), system, or user |
| `isDiscarded` | `boolean` | Whether the fact has been discarded |
#### Example
```json theme={null}
{
"event": "chat.asked",
"confidential": true,
"payload": {
"questionLength": 120,
"promptType": "question",
"prompt": "...",
"reply": "...",
"document": {
"id": "doc_123",
"name": "SOAP note",
"outputLanguage": "en",
"sections": [
{
"key": "hpi",
"name": "HPI",
"text": "..."
}
],
"templateType": "built-in",
"templateId": "soap"
},
"interaction": {
"id": "int_123",
"title": "Visit",
"state": "ongoing",
"startedAt": "2024-01-01T00:00:00.000+00:00",
"transcriptCount": 0,
"documentCount": 0,
"facts": []
}
}
}
```
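The non-confidential and confidential payloads above differ only in which fields are present, and the top-level `confidential` flag tells them apart. A TypeScript sketch of narrowing on that flag (field names follow the tables above; the summarizing function itself is illustrative):

```typescript
// Fields common to both chat.asked payload variants.
interface ChatAskedBase {
  questionLength: number;
  promptType: "question" | "update";
}

// Extra fields present only when confidential payloads are enabled.
interface ChatAskedConfidential extends ChatAskedBase {
  prompt: string;
  reply: string;
}

interface ChatAskedEvent {
  event: "chat.asked";
  confidential: boolean;
  payload: ChatAskedBase | ChatAskedConfidential;
}

// Summarize the event without touching confidential-only fields
// unless the envelope says they are present.
function summarizeChatAsked(e: ChatAskedEvent): string {
  if (e.confidential) {
    const p = e.payload as ChatAskedConfidential;
    return `question of ${p.prompt.length} chars, reply of ${p.reply.length} chars`;
  }
  return `question of ${e.payload.questionLength} chars (${e.payload.promptType})`;
}
```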
# chat.dictationStarted
Source: https://docs.corti.ai/assistant/events/generated/ai-chat/dictationStarted
Emitted when chat dictation starts.
## Event Properties
| Field | Value |
| -------------- | ------------------------- |
| `event` | `"chat.dictationStarted"` |
| `confidential` | `boolean` |
| `payload` | `object` |
**Payload properties (non-confidential):**
| Field | Type | Description |
| ------------------ | ------------------------ | ---------------------------------------------------------------------------- |
| `inputLanguage` | `string` | Language used for voice recognition |
| `documentId` | `string` | Unique document identifier |
| `documentName` | `string` | Document name or title |
| `outputLanguage` | `string` | Output language code (e.g., "en", "de") |
| `templateType` | `"built-in" \| "custom"` | Type of template used: built-in or custom |
| `templateId` | `string` | Template identifier (either ref for built-in or customTemplateId for custom) |
| `interactionId` | `string` | Unique interaction identifier |
| `interactionState` | `InteractionState` | Current state of the interaction |
#### Example
```json theme={null}
{
"event": "chat.dictationStarted",
"confidential": false,
"payload": {
"inputLanguage": "en",
"documentId": "doc_123",
"documentName": "SOAP note",
"outputLanguage": "en",
"templateType": "built-in",
"templateId": "soap",
"interactionId": "int_123",
"interactionState": "ongoing"
}
}
```
**Payload properties (confidential):**
| Field | Type | Description |
| --------------- | ------------------------- | --------------------------------------------------------------- |
| `inputLanguage` | `string` | Language used for voice recognition |
| `document` | `ExternalPatientDocument` | Complete document object with all sections and metadata |
| `interaction` | `ExternalInteraction` | Interaction context including transcripts, documents, and facts |
**Document properties:**
| Field | Type | Description |
| ---------------- | ------------------------ | ---------------------------------------------------------------------------- |
| `id` | `string` | Unique document identifier |
| `name` | `string` | Document name or title |
| `outputLanguage` | `string` | Output language code (e.g., "en", "de") |
| `sections` | `Section[]` | Document sections containing the generated content |
| `templateType` | `"built-in" \| "custom"` | Type of template used: built-in or custom |
| `templateId` | `string` | Template identifier (either ref for built-in or customTemplateId for custom) |
**Section properties:**
| Field | Type | Description |
| ------ | -------- | ----------------------------------------- |
| `key` | `string` | Unique key identifying the section type |
| `name` | `string` | Display name of the section |
| `text` | `string` | Section content in markdown or plain text |
**Interaction properties:**
| Field | Type | Description |
| ----------------- | ------------------------ | --------------------------------------------------- |
| `id` | `string` | Unique interaction identifier |
| `title` | `string \| null` | Interaction title |
| `state` | `InteractionState` | Current state of the interaction |
| `startedAt` | `string (ISO 8601 date)` | When the interaction started (ISO 8601 date string) |
| `transcriptCount` | `number` | Number of transcripts in the interaction |
| `documentCount` | `number` | Number of documents in the interaction |
| `facts` | `Fact[]` | Facts extracted during the interaction |
**Fact properties:**
| Field | Type | Description |
| ------------- | ------------------------------ | -------------------------------------------------------- |
| `id` | `string` | Unique fact identifier |
| `text` | `string` | Fact text content |
| `group` | `string` | Fact group/category name |
| `source` | `"core" \| "system" \| "user"` | Source of the fact: core (AI extracted), system, or user |
| `isDiscarded` | `boolean` | Whether the fact has been discarded |
#### Example
```json theme={null}
{
"event": "chat.dictationStarted",
"confidential": true,
"payload": {
"inputLanguage": "en",
"document": {
"id": "doc_123",
"name": "SOAP note",
"outputLanguage": "en",
"sections": [
{
"key": "hpi",
"name": "HPI",
"text": "..."
}
],
"templateType": "built-in",
"templateId": "soap"
},
"interaction": {
"id": "int_123",
"title": "Visit",
"state": "ongoing",
"startedAt": "2024-01-01T00:00:00.000+00:00",
"transcriptCount": 0,
"documentCount": 0,
"facts": []
}
}
}
```
# chat.dictationStopped
Source: https://docs.corti.ai/assistant/events/generated/ai-chat/dictationStopped
Emitted when chat dictation stops.
## Event Properties
| Field | Value |
| -------------- | ------------------------- |
| `event` | `"chat.dictationStopped"` |
| `confidential` | `boolean` |
| `payload` | `object` |
**Payload properties (non-confidential):**
| Field | Type | Description |
| ------------------ | ------------------------ | ---------------------------------------------------------------------------- |
| `inputLanguage` | `string` | Language used for voice recognition |
| `wordsDictated` | `number` | Number of words captured during this session |
| `documentId` | `string` | Unique document identifier |
| `documentName` | `string` | Document name or title |
| `outputLanguage` | `string` | Output language code (e.g., "en", "de") |
| `templateType` | `"built-in" \| "custom"` | Type of template used: built-in or custom |
| `templateId` | `string` | Template identifier (either ref for built-in or customTemplateId for custom) |
| `interactionId` | `string` | Unique interaction identifier |
| `interactionState` | `InteractionState` | Current state of the interaction |
#### Example
```json theme={null}
{
"event": "chat.dictationStopped",
"confidential": false,
"payload": {
"inputLanguage": "en",
"wordsDictated": 44,
"documentId": "doc_123",
"documentName": "SOAP note",
"outputLanguage": "en",
"templateType": "built-in",
"templateId": "soap",
"interactionId": "int_123",
"interactionState": "ongoing"
}
}
```
**Payload properties (confidential):**
| Field | Type | Description |
| --------------- | ------------------------- | --------------------------------------------------------------- |
| `inputLanguage` | `string` | Language used for voice recognition |
| `text` | `string` | Transcribed text from dictation |
| `wordsDictated` | `number` | Number of words captured during this session |
| `document` | `ExternalPatientDocument` | Complete document object with all sections and metadata |
| `interaction` | `ExternalInteraction` | Interaction context including transcripts, documents, and facts |
**Document properties:**
| Field | Type | Description |
| ---------------- | ------------------------ | ---------------------------------------------------------------------------- |
| `id` | `string` | Unique document identifier |
| `name` | `string` | Document name or title |
| `outputLanguage` | `string` | Output language code (e.g., "en", "de") |
| `sections` | `Section[]` | Document sections containing the generated content |
| `templateType` | `"built-in" \| "custom"` | Type of template used: built-in or custom |
| `templateId` | `string` | Template identifier (either ref for built-in or customTemplateId for custom) |
**Section properties:**
| Field | Type | Description |
| ------ | -------- | ----------------------------------------- |
| `key` | `string` | Unique key identifying the section type |
| `name` | `string` | Display name of the section |
| `text` | `string` | Section content in markdown or plain text |
**Interaction properties:**
| Field | Type | Description |
| ----------------- | ------------------------ | --------------------------------------------------- |
| `id` | `string` | Unique interaction identifier |
| `title` | `string \| null` | Interaction title |
| `state` | `InteractionState` | Current state of the interaction |
| `startedAt` | `string (ISO 8601 date)` | When the interaction started (ISO 8601 date string) |
| `transcriptCount` | `number` | Number of transcripts in the interaction |
| `documentCount` | `number` | Number of documents in the interaction |
| `facts` | `Fact[]` | Facts extracted during the interaction |
**Fact properties:**
| Field | Type | Description |
| ------------- | ------------------------------ | -------------------------------------------------------- |
| `id` | `string` | Unique fact identifier |
| `text` | `string` | Fact text content |
| `group` | `string` | Fact group/category name |
| `source` | `"core" \| "system" \| "user"` | Source of the fact: core (AI extracted), system, or user |
| `isDiscarded` | `boolean` | Whether the fact has been discarded |
#### Example
```json theme={null}
{
"event": "chat.dictationStopped",
"confidential": true,
"payload": {
"inputLanguage": "en",
"text": "...",
"wordsDictated": 44,
"document": {
"id": "doc_123",
"name": "SOAP note",
"outputLanguage": "en",
"sections": [
{
"key": "hpi",
"name": "HPI",
"text": "..."
}
],
"templateType": "built-in",
"templateId": "soap"
},
"interaction": {
"id": "int_123",
"title": "Visit",
"state": "ongoing",
"startedAt": "2024-01-01T00:00:00.000+00:00",
"transcriptCount": 0,
"documentCount": 0,
"facts": []
}
}
}
```
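`wordsDictated` in the payloads above is a plain word count. If a host app needs a comparable figure for the confidential payload's `text`, splitting on whitespace runs is a reasonable approximation (the exact counting rule the assistant uses is not specified here, so treat this as an assumption):

```typescript
// Approximate a dictation word count by splitting on whitespace runs.
// Empty or whitespace-only input counts as zero words.
function approximateWordCount(text: string): number {
  const trimmed = text.trim();
  return trimmed === "" ? 0 : trimmed.split(/\s+/).length;
}
```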
# chat.failed
Source: https://docs.corti.ai/assistant/events/generated/ai-chat/failed
Emitted when AI chat fails.
## Event Properties
| Field | Value |
| -------------- | --------------- |
| `event` | `"chat.failed"` |
| `confidential` | `boolean` |
| `payload` | `object` |
**Payload properties (non-confidential):**
| Field | Type | Description |
| ------------------ | ------------------------ | ---------------------------------------------------------------------------- |
| `errorCode` | `string` | Machine-readable error identifier |
| `documentId` | `string` | Unique document identifier |
| `documentName` | `string` | Document name or title |
| `outputLanguage` | `string` | Output language code (e.g., "en", "de") |
| `templateType` | `"built-in" \| "custom"` | Type of template used: built-in or custom |
| `templateId` | `string` | Template identifier (either ref for built-in or customTemplateId for custom) |
| `interactionId` | `string` | Unique interaction identifier |
| `interactionState` | `InteractionState` | Current state of the interaction |
#### Example
```json theme={null}
{
"event": "chat.failed",
"confidential": false,
"payload": {
"errorCode": "timeout",
"documentId": "doc_123",
"documentName": "SOAP note",
"outputLanguage": "en",
"templateType": "built-in",
"templateId": "soap",
"interactionId": "int_123",
"interactionState": "ongoing"
}
}
```
**Payload properties (confidential):**
| Field | Type | Description |
| ------------- | ------------------------- | --------------------------------------------------------------- |
| `prompt` | `string` | Question that failed to process |
| `errorCode` | `string` | Machine-readable error identifier |
| `document` | `ExternalPatientDocument` | Complete document object with all sections and metadata |
| `interaction` | `ExternalInteraction` | Interaction context including transcripts, documents, and facts |
**Document properties:**
| Field | Type | Description |
| ---------------- | ------------------------ | ---------------------------------------------------------------------------- |
| `id` | `string` | Unique document identifier |
| `name` | `string` | Document name or title |
| `outputLanguage` | `string` | Output language code (e.g., "en", "de") |
| `sections` | `Section[]` | Document sections containing the generated content |
| `templateType` | `"built-in" \| "custom"` | Type of template used: built-in or custom |
| `templateId` | `string` | Template identifier (either ref for built-in or customTemplateId for custom) |
**Section properties:**
| Field | Type | Description |
| ------ | -------- | ----------------------------------------- |
| `key` | `string` | Unique key identifying the section type |
| `name` | `string` | Display name of the section |
| `text` | `string` | Section content in markdown or plain text |
**Interaction properties:**
| Field | Type | Description |
| ----------------- | ------------------------ | --------------------------------------------------- |
| `id` | `string` | Unique interaction identifier |
| `title` | `string \| null` | Interaction title |
| `state` | `InteractionState` | Current state of the interaction |
| `startedAt` | `string (ISO 8601 date)` | When the interaction started (ISO 8601 date string) |
| `transcriptCount` | `number` | Number of transcripts in the interaction |
| `documentCount` | `number` | Number of documents in the interaction |
| `facts` | `Fact[]` | Facts extracted during the interaction |
**Fact properties:**
| Field | Type | Description |
| ------------- | ------------------------------ | -------------------------------------------------------- |
| `id` | `string` | Unique fact identifier |
| `text` | `string` | Fact text content |
| `group` | `string` | Fact group/category name |
| `source` | `"core" \| "system" \| "user"` | Source of the fact: core (AI extracted), system, or user |
| `isDiscarded` | `boolean` | Whether the fact has been discarded |
#### Example
```json theme={null}
{
"event": "chat.failed",
"confidential": true,
"payload": {
"prompt": "...",
"errorCode": "timeout",
"document": {
"id": "doc_123",
"name": "SOAP note",
"outputLanguage": "en",
"sections": [
{
"key": "hpi",
"name": "HPI",
"text": "..."
}
],
"templateType": "built-in",
"templateId": "soap"
},
"interaction": {
"id": "int_123",
"title": "Visit",
"state": "ongoing",
"startedAt": "2024-01-01T00:00:00.000+00:00",
"transcriptCount": 0,
"documentCount": 0,
"facts": []
}
}
}
```
# document.copied
Source: https://docs.corti.ai/assistant/events/generated/document/copied
Emitted when a document is copied.
## Event Properties
| Field | Value |
| -------------- | ------------------- |
| `event` | `"document.copied"` |
| `confidential` | `boolean` |
| `payload` | `object` |
**Payload properties (non-confidential):**
| Field | Type | Description |
| ------------------ | ------------------------ | ---------------------------------------------------------------------------- |
| `documentId` | `string` | Unique document identifier |
| `documentName` | `string` | Document name or title |
| `outputLanguage` | `string` | Output language code (e.g., "en", "de") |
| `templateType` | `"built-in" \| "custom"` | Type of template used: built-in or custom |
| `templateId` | `string` | Template identifier (either ref for built-in or customTemplateId for custom) |
| `interactionId` | `string` | Unique interaction identifier |
| `interactionState` | `InteractionState` | Current state of the interaction |
#### Example
```json theme={null}
{
"event": "document.copied",
"confidential": false,
"payload": {
"documentId": "doc_123",
"documentName": "SOAP note",
"outputLanguage": "en",
"templateType": "built-in",
"templateId": "soap",
"interactionId": "int_123",
"interactionState": "ongoing"
}
}
```
**Payload properties (confidential):**
| Field | Type | Description |
| ------------- | ------------------------- | --------------------------------------------------------------- |
| `text` | `string` | Complete document content copied to clipboard |
| `document` | `ExternalPatientDocument` | Complete document object with all sections and metadata |
| `interaction` | `ExternalInteraction` | Interaction context including transcripts, documents, and facts |
**Document properties:**
| Field | Type | Description |
| ---------------- | ------------------------ | ---------------------------------------------------------------------------- |
| `id` | `string` | Unique document identifier |
| `name` | `string` | Document name or title |
| `outputLanguage` | `string` | Output language code (e.g., "en", "de") |
| `sections` | `Section[]` | Document sections containing the generated content |
| `templateType` | `"built-in" \| "custom"` | Type of template used: built-in or custom |
| `templateId` | `string` | Template identifier (either ref for built-in or customTemplateId for custom) |
**Section properties:**
| Field | Type | Description |
| ------ | -------- | ----------------------------------------- |
| `key` | `string` | Unique key identifying the section type |
| `name` | `string` | Display name of the section |
| `text` | `string` | Section content in markdown or plain text |
**Interaction properties:**
| Field | Type | Description |
| ----------------- | ------------------------ | --------------------------------------------------- |
| `id` | `string` | Unique interaction identifier |
| `title` | `string \| null` | Interaction title |
| `state` | `InteractionState` | Current state of the interaction |
| `startedAt` | `string (ISO 8601 date)` | When the interaction started (ISO 8601 date string) |
| `transcriptCount` | `number` | Number of transcripts in the interaction |
| `documentCount` | `number` | Number of documents in the interaction |
| `facts` | `Fact[]` | Facts extracted during the interaction |
**Fact properties:**
| Field | Type | Description |
| ------------- | ------------------------------ | -------------------------------------------------------- |
| `id` | `string` | Unique fact identifier |
| `text` | `string` | Fact text content |
| `group` | `string` | Fact group/category name |
| `source` | `"core" \| "system" \| "user"` | Source of the fact: core (AI extracted), system, or user |
| `isDiscarded` | `boolean` | Whether the fact has been discarded |
#### Example
```json theme={null}
{
"event": "document.copied",
"confidential": true,
"payload": {
"text": "...",
"document": {
"id": "doc_123",
"name": "SOAP note",
"outputLanguage": "en",
"sections": [
{
"key": "hpi",
"name": "HPI",
"text": "..."
}
],
"templateType": "built-in",
"templateId": "soap"
},
"interaction": {
"id": "int_123",
"title": "Visit",
"state": "ongoing",
"startedAt": "2024-01-01T00:00:00.000+00:00",
"transcriptCount": 0,
"documentCount": 0,
"facts": []
}
}
}
```
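In the confidential payload above, `text` is the full content placed on the clipboard, while `document.sections` carries the same content in structured form. A host app that wants its own plain-text rendering from the sections could join them like this (the exact formatting of the copied `text` is not specified, so the heading style here is an assumption):

```typescript
// Shape of one entry in `document.sections`, per the tables above.
interface Section {
  key: string;  // unique key identifying the section type
  name: string; // display name of the section
  text: string; // section content
}

// Join sections into plain text: each section's display name as a
// heading line, followed by its content, separated by blank lines.
function renderSections(sections: Section[]): string {
  return sections.map((s) => `${s.name}:\n${s.text}`).join("\n\n");
}
```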
# document.dictationStarted
Source: https://docs.corti.ai/assistant/events/generated/document/dictationStarted
Emitted when document dictation starts.
## Event Properties
| Field | Value |
| -------------- | ----------------------------- |
| `event` | `"document.dictationStarted"` |
| `confidential` | `boolean` |
| `payload` | `object` |
**Payload properties (non-confidential):**
| Field | Type | Description |
| ------------------ | ------------------------ | ---------------------------------------------------------------------------- |
| `sectionKey` | `string` | Which section dictation is targeting |
| `documentId` | `string` | Unique document identifier |
| `documentName` | `string` | Document name or title |
| `outputLanguage` | `string` | Output language code (e.g., "en", "de") |
| `templateType` | `"built-in" \| "custom"` | Type of template used: built-in or custom |
| `templateId` | `string` | Template identifier (either ref for built-in or customTemplateId for custom) |
| `interactionId` | `string` | Unique interaction identifier |
| `interactionState` | `InteractionState` | Current state of the interaction |
#### Example
```json theme={null}
{
"event": "document.dictationStarted",
"confidential": false,
"payload": {
"sectionKey": "assessment",
"documentId": "doc_123",
"documentName": "SOAP note",
"outputLanguage": "en",
"templateType": "built-in",
"templateId": "soap",
"interactionId": "int_123",
"interactionState": "ongoing"
}
}
```
**Payload properties (confidential):**
| Field | Type | Description |
| ------------- | ------------------------- | --------------------------------------------------------------- |
| `sectionKey` | `string` | Which section dictation is targeting |
| `document` | `ExternalPatientDocument` | Complete document object with all sections and metadata |
| `interaction` | `ExternalInteraction` | Interaction context including transcripts, documents, and facts |
**Document properties:**
| Field | Type | Description |
| ---------------- | ------------------------ | ---------------------------------------------------------------------------- |
| `id` | `string` | Unique document identifier |
| `name` | `string` | Document name or title |
| `outputLanguage` | `string` | Output language code (e.g., "en", "de") |
| `sections` | `Section[]` | Document sections containing the generated content |
| `templateType` | `"built-in" \| "custom"` | Type of template used: built-in or custom |
| `templateId` | `string` | Template identifier (either ref for built-in or customTemplateId for custom) |
**Section properties:**
| Field | Type | Description |
| ------ | -------- | ----------------------------------------- |
| `key` | `string` | Unique key identifying the section type |
| `name` | `string` | Display name of the section |
| `text` | `string` | Section content in markdown or plain text |
**Interaction properties:**
| Field | Type | Description |
| ----------------- | ------------------------ | --------------------------------------------------- |
| `id` | `string` | Unique interaction identifier |
| `title` | `string \| null` | Interaction title |
| `state` | `InteractionState` | Current state of the interaction |
| `startedAt` | `string (ISO 8601 date)` | When the interaction started (ISO 8601 date string) |
| `transcriptCount` | `number` | Number of transcripts in the interaction |
| `documentCount` | `number` | Number of documents in the interaction |
| `facts` | `Fact[]` | Facts extracted during the interaction |
**Fact properties:**
| Field | Type | Description |
| ------------- | ------------------------------ | -------------------------------------------------------- |
| `id` | `string` | Unique fact identifier |
| `text` | `string` | Fact text content |
| `group` | `string` | Fact group/category name |
| `source` | `"core" \| "system" \| "user"` | Source of the fact: core (AI extracted), system, or user |
| `isDiscarded` | `boolean` | Whether the fact has been discarded |
#### Example
```json theme={null}
{
"event": "document.dictationStarted",
"confidential": true,
"payload": {
"sectionKey": "assessment",
"document": {
"id": "doc_123",
"name": "SOAP note",
"outputLanguage": "en",
"sections": [
{
"key": "hpi",
"name": "HPI",
"text": "..."
}
],
"templateType": "built-in",
"templateId": "soap"
},
"interaction": {
"id": "int_123",
"title": "Visit",
"state": "ongoing",
"startedAt": "2024-01-01T00:00:00.000+00:00",
"transcriptCount": 0,
"documentCount": 0,
"facts": []
}
}
}
```
# document.dictationStopped
Source: https://docs.corti.ai/assistant/events/generated/document/dictationStopped
Emitted when document dictation stops.
## Event Properties
| Field | Value |
| -------------- | ----------------------------- |
| `event` | `"document.dictationStopped"` |
| `confidential` | `boolean` |
| `payload` | `object` |
**Payload properties (non-confidential):**
| Field | Type | Description |
| ------------------ | ------------------------ | ---------------------------------------------------------------------------- |
| `sectionKey` | `string` | Which section was being dictated into |
| `wordsDictated` | `number` | Number of words captured during this dictation session |
| `documentId` | `string` | Unique document identifier |
| `documentName` | `string` | Document name or title |
| `outputLanguage` | `string` | Output language code (e.g., "en", "de") |
| `templateType` | `"built-in" \| "custom"` | Type of template used: built-in or custom |
| `templateId` | `string` | Template identifier (either ref for built-in or customTemplateId for custom) |
| `interactionId` | `string` | Unique interaction identifier |
| `interactionState` | `InteractionState` | Current state of the interaction |
#### Example
```json theme={null}
{
"event": "document.dictationStopped",
"confidential": false,
"payload": {
"sectionKey": "assessment",
"wordsDictated": 120,
"documentId": "doc_123",
"documentName": "SOAP note",
"outputLanguage": "en",
"templateType": "built-in",
"templateId": "soap",
"interactionId": "int_123",
"interactionState": "ongoing"
}
}
```
**Payload properties (confidential):**
| Field | Type | Description |
| --------------- | ------------------------- | --------------------------------------------------------------- |
| `sectionKey` | `string` | Which section was being dictated into |
| `text` | `string` | Full text content after dictation |
| `wordsDictated` | `number` | Number of words captured during this dictation session |
| `document` | `ExternalPatientDocument` | Complete document object with all sections and metadata |
| `interaction` | `ExternalInteraction` | Interaction context including transcripts, documents, and facts |
**Document properties:**
| Field | Type | Description |
| ---------------- | ------------------------ | ---------------------------------------------------------------------------- |
| `id` | `string` | Unique document identifier |
| `name` | `string` | Document name or title |
| `outputLanguage` | `string` | Output language code (e.g., "en", "de") |
| `sections` | `Section[]` | Document sections containing the generated content |
| `templateType` | `"built-in" \| "custom"` | Type of template used: built-in or custom |
| `templateId` | `string` | Template identifier (either ref for built-in or customTemplateId for custom) |
**Section properties:**
| Field | Type | Description |
| ------ | -------- | ----------------------------------------- |
| `key` | `string` | Unique key identifying the section type |
| `name` | `string` | Display name of the section |
| `text` | `string` | Section content in markdown or plain text |
**Interaction properties:**
| Field | Type | Description |
| ----------------- | ------------------------ | --------------------------------------------------- |
| `id` | `string` | Unique interaction identifier |
| `title` | `string \| null` | Interaction title |
| `state` | `InteractionState` | Current state of the interaction |
| `startedAt` | `string (ISO 8601 date)` | When the interaction started (ISO 8601 date string) |
| `transcriptCount` | `number` | Number of transcripts in the interaction |
| `documentCount` | `number` | Number of documents in the interaction |
| `facts` | `Fact[]` | Facts extracted during the interaction |
**Fact properties:**
| Field | Type | Description |
| ------------- | ------------------------------ | -------------------------------------------------------- |
| `id` | `string` | Unique fact identifier |
| `text` | `string` | Fact text content |
| `group` | `string` | Fact group/category name |
| `source` | `"core" \| "system" \| "user"` | Source of the fact: core (AI extracted), system, or user |
| `isDiscarded` | `boolean` | Whether the fact has been discarded |
#### Example
```json theme={null}
{
"event": "document.dictationStopped",
"confidential": true,
"payload": {
"sectionKey": "assessment",
"text": "...",
"wordsDictated": 120,
"document": {
"id": "doc_123",
"name": "SOAP note",
"outputLanguage": "en",
"sections": [
{
"key": "hpi",
"name": "HPI",
"text": "..."
}
],
"templateType": "built-in",
"templateId": "soap"
},
"interaction": {
"id": "int_123",
"title": "Visit",
"state": "ongoing",
"startedAt": "2024-01-01T00:00:00.000+00:00",
"transcriptCount": 0,
"documentCount": 0,
"facts": []
}
}
}
```
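Because each event is delivered in one of two payload shapes, a consumer can narrow between them at runtime. The TypeScript sketch below is illustrative and not part of any Corti SDK; the field names come from the tables above, and the guard keys off the full `document` object that only the confidential-mode payload carries.

```typescript
// Illustrative types derived from the payload tables above; not official SDK types.
interface FlatPayload {
  sectionKey: string;
  wordsDictated: number;
  documentId: string;
  documentName: string;
  interactionId: string;
}

interface FullPayload {
  sectionKey: string;
  wordsDictated: number;
  text: string;
  document: { id: string; name: string; sections: { key: string; name: string; text: string }[] };
  interaction: { id: string; state: string };
}

// Narrow on the presence of the full `document` object, which only the
// confidential-mode payload carries.
function isFullPayload(p: FlatPayload | FullPayload): p is FullPayload {
  return "document" in p;
}

// Resolve the document id regardless of which payload shape arrived.
function documentIdOf(p: FlatPayload | FullPayload): string {
  return isFullPayload(p) ? p.document.id : p.documentId;
}
```

The same pattern applies to every event on these pages, since each one publishes a flat (id-only) payload and a full-object payload.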
# document.edited
Source: https://docs.corti.ai/assistant/events/generated/document/edited
Emitted when a document section is edited.
## Event Properties
| Field | Value |
| -------------- | ------------------- |
| `event` | `"document.edited"` |
| `confidential` | `boolean` |
| `payload` | `object` |
**Payload properties (`confidential: false`):**
| Field | Type | Description |
| ------------------ | ------------------------------------------------------- | ---------------------------------------------------------------------------- |
| `sectionKey` | `string` | Which section was modified |
| `method` | `"direct" \| "dictation" \| "chat" \| "undo" \| "redo"` | How the edit was performed |
| `documentId` | `string` | Unique document identifier |
| `documentName` | `string` | Document name or title |
| `outputLanguage` | `string` | Output language code (e.g., "en", "de") |
| `templateType` | `"built-in" \| "custom"` | Type of template used: built-in or custom |
| `templateId` | `string` | Template identifier (either ref for built-in or customTemplateId for custom) |
| `interactionId` | `string` | Unique interaction identifier |
| `interactionState` | `InteractionState` | Current state of the interaction |
#### Example
```json theme={null}
{
"event": "document.edited",
"confidential": false,
"payload": {
"sectionKey": "hpi",
"method": "direct",
"documentId": "doc_123",
"documentName": "SOAP note",
"outputLanguage": "en",
"templateType": "built-in",
"templateId": "soap",
"interactionId": "int_123",
"interactionState": "ongoing"
}
}
```
**Payload properties (`confidential: true`):**
| Field | Type | Description |
| ------------- | ------------------------------------------------------- | --------------------------------------------------------------- |
| `sectionKey` | `string` | Which section was modified |
| `method` | `"direct" \| "dictation" \| "chat" \| "undo" \| "redo"` | How the edit was performed |
| `text` | `string` | Full text content after the edit |
| `document` | `ExternalPatientDocument` | Complete document object with all sections and metadata |
| `interaction` | `ExternalInteraction` | Interaction context including transcripts, documents, and facts |
**Document properties:**
| Field | Type | Description |
| ---------------- | ------------------------ | ---------------------------------------------------------------------------- |
| `id` | `string` | Unique document identifier |
| `name` | `string` | Document name or title |
| `outputLanguage` | `string` | Output language code (e.g., "en", "de") |
| `sections` | `Section[]` | Document sections containing the generated content |
| `templateType` | `"built-in" \| "custom"` | Type of template used: built-in or custom |
| `templateId` | `string` | Template identifier (either ref for built-in or customTemplateId for custom) |
**Section properties:**
| Field | Type | Description |
| ------ | -------- | ----------------------------------------- |
| `key` | `string` | Unique key identifying the section type |
| `name` | `string` | Display name of the section |
| `text` | `string` | Section content in markdown or plain text |
**Interaction properties:**
| Field | Type | Description |
| ----------------- | ------------------------ | --------------------------------------------------- |
| `id` | `string` | Unique interaction identifier |
| `title` | `string \| null` | Interaction title |
| `state` | `InteractionState` | Current state of the interaction |
| `startedAt` | `string (ISO 8601 date)` | When the interaction started (ISO 8601 date string) |
| `transcriptCount` | `number` | Number of transcripts in the interaction |
| `documentCount` | `number` | Number of documents in the interaction |
| `facts` | `Fact[]` | Facts extracted during the interaction |
**Fact properties:**
| Field | Type | Description |
| ------------- | ------------------------------ | -------------------------------------------------------- |
| `id` | `string` | Unique fact identifier |
| `text` | `string` | Fact text content |
| `group` | `string` | Fact group/category name |
| `source` | `"core" \| "system" \| "user"` | Source of the fact: core (AI extracted), system, or user |
| `isDiscarded` | `boolean` | Whether the fact has been discarded |
#### Example
```json theme={null}
{
"event": "document.edited",
"confidential": true,
"payload": {
"sectionKey": "hpi",
"method": "direct",
"text": "...",
"document": {
"id": "doc_123",
"name": "SOAP note",
"outputLanguage": "en",
"sections": [
{
"key": "hpi",
"name": "HPI",
"text": "..."
}
],
"templateType": "built-in",
"templateId": "soap"
},
"interaction": {
"id": "int_123",
"title": "Visit",
"state": "ongoing",
"startedAt": "2024-01-01T00:00:00.000+00:00",
"transcriptCount": 0,
"documentCount": 0,
"facts": []
}
}
}
```
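The `method` field tells a listener how a change was made. A minimal routing sketch (the union type is taken from the table above; the handler logic itself is purely illustrative):

```typescript
// Edit methods as listed in the document.edited payload table.
type EditMethod = "direct" | "dictation" | "chat" | "undo" | "redo";

// Produce a human-readable description of an edit; an exhaustive switch over
// the union means the compiler flags any method added in a future version.
function describeEdit(method: EditMethod, sectionKey: string): string {
  switch (method) {
    case "direct":
      return `Section "${sectionKey}" edited manually`;
    case "dictation":
      return `Section "${sectionKey}" edited by dictation`;
    case "chat":
      return `Section "${sectionKey}" edited via AI chat`;
    case "undo":
    case "redo":
      return `Section "${sectionKey}" changed by ${method}`;
  }
}
```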
# document.feedbackSubmitted
Source: https://docs.corti.ai/assistant/events/generated/document/feedbackSubmitted
Emitted when document feedback is submitted.
## Event Properties
| Field | Value |
| -------------- | ------------------------------ |
| `event` | `"document.feedbackSubmitted"` |
| `confidential` | `boolean` |
| `payload` | `object` |
**Payload properties (`confidential: false`):**
| Field | Type | Description |
| ------------------ | ------------------------ | ---------------------------------------------------------------------------- |
| `score` | `number` | User rating for the document |
| `maxScore` | `number` | Max possible rating for the document |
| `documentId` | `string` | Unique document identifier |
| `documentName` | `string` | Document name or title |
| `outputLanguage` | `string` | Output language code (e.g., "en", "de") |
| `templateType` | `"built-in" \| "custom"` | Type of template used: built-in or custom |
| `templateId` | `string` | Template identifier (either ref for built-in or customTemplateId for custom) |
| `interactionId` | `string` | Unique interaction identifier |
| `interactionState` | `InteractionState` | Current state of the interaction |
#### Example
```json theme={null}
{
"event": "document.feedbackSubmitted",
"confidential": false,
"payload": {
"score": 4,
"maxScore": 5,
"documentId": "doc_123",
"documentName": "SOAP note",
"outputLanguage": "en",
"templateType": "built-in",
"templateId": "soap",
"interactionId": "int_123",
"interactionState": "ongoing"
}
}
```
**Payload properties (`confidential: true`):**
| Field | Type | Description |
| ------------- | ------------------------- | --------------------------------------------------------------- |
| `score` | `number` | User rating for the document |
| `maxScore` | `number` | Max possible rating for the document |
| `document` | `ExternalPatientDocument` | Complete document object with all sections and metadata |
| `interaction` | `ExternalInteraction` | Interaction context including transcripts, documents, and facts |
**Document properties:**
| Field | Type | Description |
| ---------------- | ------------------------ | ---------------------------------------------------------------------------- |
| `id` | `string` | Unique document identifier |
| `name` | `string` | Document name or title |
| `outputLanguage` | `string` | Output language code (e.g., "en", "de") |
| `sections` | `Section[]` | Document sections containing the generated content |
| `templateType` | `"built-in" \| "custom"` | Type of template used: built-in or custom |
| `templateId` | `string` | Template identifier (either ref for built-in or customTemplateId for custom) |
**Section properties:**
| Field | Type | Description |
| ------ | -------- | ----------------------------------------- |
| `key` | `string` | Unique key identifying the section type |
| `name` | `string` | Display name of the section |
| `text` | `string` | Section content in markdown or plain text |
**Interaction properties:**
| Field | Type | Description |
| ----------------- | ------------------------ | --------------------------------------------------- |
| `id` | `string` | Unique interaction identifier |
| `title` | `string \| null` | Interaction title |
| `state` | `InteractionState` | Current state of the interaction |
| `startedAt` | `string (ISO 8601 date)` | When the interaction started (ISO 8601 date string) |
| `transcriptCount` | `number` | Number of transcripts in the interaction |
| `documentCount` | `number` | Number of documents in the interaction |
| `facts` | `Fact[]` | Facts extracted during the interaction |
**Fact properties:**
| Field | Type | Description |
| ------------- | ------------------------------ | -------------------------------------------------------- |
| `id` | `string` | Unique fact identifier |
| `text` | `string` | Fact text content |
| `group` | `string` | Fact group/category name |
| `source` | `"core" \| "system" \| "user"` | Source of the fact: core (AI extracted), system, or user |
| `isDiscarded` | `boolean` | Whether the fact has been discarded |
#### Example
```json theme={null}
{
"event": "document.feedbackSubmitted",
"confidential": true,
"payload": {
"score": 4,
"maxScore": 5,
"document": {
"id": "doc_123",
"name": "SOAP note",
"outputLanguage": "en",
"sections": [
{
"key": "hpi",
"name": "HPI",
"text": "..."
}
],
"templateType": "built-in",
"templateId": "soap"
},
"interaction": {
"id": "int_123",
"title": "Visit",
"state": "ongoing",
"startedAt": "2024-01-01T00:00:00.000+00:00",
"transcriptCount": 0,
"documentCount": 0,
"facts": []
}
}
}
```
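Because `maxScore` can differ between deployments, comparing feedback across documents requires normalizing the rating first. A small illustrative helper, assuming scores fall on a 0..`maxScore` scale:

```typescript
// Normalize a feedback score to the [0, 1] range so ratings collected on
// different maxScore scales remain comparable.
function normalizeFeedback(score: number, maxScore: number): number {
  if (maxScore <= 0) throw new RangeError("maxScore must be positive");
  // Clamp defensively in case a score arrives outside the expected range.
  return Math.min(Math.max(score / maxScore, 0), 1);
}
```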
# document.generated
Source: https://docs.corti.ai/assistant/events/generated/document/generated
Emitted when document generation completes.
## Event Properties
| Field | Value |
| -------------- | ---------------------- |
| `event` | `"document.generated"` |
| `confidential` | `boolean` |
| `payload` | `object` |
**Payload properties (`confidential: false`):**
| Field | Type | Description |
| ------------------ | -------------------------------- | ---------------------------------------------------------------------------- |
| `reason` | `"generation" \| "regeneration"` | Whether this was a newly generated document or an update to an existing one |
| `durationMs` | `number` | Duration of the generation request in milliseconds |
| `documentId` | `string` | Unique document identifier |
| `documentName` | `string` | Document name or title |
| `outputLanguage` | `string` | Output language code (e.g., "en", "de") |
| `templateType` | `"built-in" \| "custom"` | Type of template used: built-in or custom |
| `templateId` | `string` | Template identifier (either ref for built-in or customTemplateId for custom) |
| `interactionId` | `string` | Unique interaction identifier |
| `interactionState` | `InteractionState` | Current state of the interaction |
#### Example
```json theme={null}
{
"event": "document.generated",
"confidential": false,
"payload": {
"reason": "generation",
"durationMs": 1840,
"documentId": "doc_123",
"documentName": "SOAP note",
"outputLanguage": "en",
"templateType": "built-in",
"templateId": "soap",
"interactionId": "int_123",
"interactionState": "ongoing"
}
}
```
**Payload properties (`confidential: true`):**
| Field | Type | Description |
| ------------- | -------------------------------- | --------------------------------------------------------------- |
| `reason` | `"generation" \| "regeneration"` | Whether this was a newly generated document or an update to an existing one |
| `durationMs` | `number` | Duration of the generation request in milliseconds |
| `document` | `ExternalPatientDocument` | Complete document object with all sections and metadata |
| `interaction` | `ExternalInteraction` | Interaction context including transcripts, documents, and facts |
**Document properties:**
| Field | Type | Description |
| ---------------- | ------------------------ | ---------------------------------------------------------------------------- |
| `id` | `string` | Unique document identifier |
| `name` | `string` | Document name or title |
| `outputLanguage` | `string` | Output language code (e.g., "en", "de") |
| `sections` | `Section[]` | Document sections containing the generated content |
| `templateType` | `"built-in" \| "custom"` | Type of template used: built-in or custom |
| `templateId` | `string` | Template identifier (either ref for built-in or customTemplateId for custom) |
**Section properties:**
| Field | Type | Description |
| ------ | -------- | ----------------------------------------- |
| `key` | `string` | Unique key identifying the section type |
| `name` | `string` | Display name of the section |
| `text` | `string` | Section content in markdown or plain text |
**Interaction properties:**
| Field | Type | Description |
| ----------------- | ------------------------ | --------------------------------------------------- |
| `id` | `string` | Unique interaction identifier |
| `title` | `string \| null` | Interaction title |
| `state` | `InteractionState` | Current state of the interaction |
| `startedAt` | `string (ISO 8601 date)` | When the interaction started (ISO 8601 date string) |
| `transcriptCount` | `number` | Number of transcripts in the interaction |
| `documentCount` | `number` | Number of documents in the interaction |
| `facts` | `Fact[]` | Facts extracted during the interaction |
**Fact properties:**
| Field | Type | Description |
| ------------- | ------------------------------ | -------------------------------------------------------- |
| `id` | `string` | Unique fact identifier |
| `text` | `string` | Fact text content |
| `group` | `string` | Fact group/category name |
| `source` | `"core" \| "system" \| "user"` | Source of the fact: core (AI extracted), system, or user |
| `isDiscarded` | `boolean` | Whether the fact has been discarded |
#### Example
```json theme={null}
{
"event": "document.generated",
"confidential": true,
"payload": {
"reason": "generation",
"durationMs": 1840,
"document": {
"id": "doc_123",
"name": "SOAP note",
"outputLanguage": "en",
"sections": [
{
"key": "hpi",
"name": "HPI",
"text": "..."
}
],
"templateType": "built-in",
"templateId": "soap"
},
"interaction": {
"id": "int_123",
"title": "Visit",
"state": "ongoing",
"startedAt": "2024-01-01T00:00:00.000+00:00",
"transcriptCount": 0,
"documentCount": 0,
"facts": []
}
}
}
```
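The `reason` and `durationMs` fields make this event a natural input for latency monitoring. A sketch that aggregates average generation time per reason from a batch of received payloads (the aggregation itself is illustrative, not part of any SDK):

```typescript
// Shape of the fields used here, per the document.generated payload table.
interface GeneratedPayload {
  reason: "generation" | "regeneration";
  durationMs: number;
}

// Compute the average durationMs per reason over a batch of payloads.
function averageDurations(payloads: GeneratedPayload[]): Record<string, number> {
  const sums: Record<string, { total: number; count: number }> = {};
  for (const p of payloads) {
    const s = (sums[p.reason] ??= { total: 0, count: 0 });
    s.total += p.durationMs;
    s.count += 1;
  }
  const averages: Record<string, number> = {};
  for (const [reason, { total, count }] of Object.entries(sums)) {
    averages[reason] = total / count;
  }
  return averages;
}
```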
# document.generationFailed
Source: https://docs.corti.ai/assistant/events/generated/document/generationFailed
Emitted when document generation fails.
## Event Properties
| Field | Value |
| -------------- | ----------------------------- |
| `event` | `"document.generationFailed"` |
| `confidential` | `boolean` |
| `payload` | `object` |
**Payload properties (`confidential: false`):**
| Field | Type | Description |
| ------------------ | ------------------------ | ---------------------------------------------------------------------------- |
| `reason` | `string` | Human-readable description of the failure |
| `errorCode` | `string` | Machine-readable error identifier |
| `documentId` | `string` | Unique document identifier |
| `documentName` | `string` | Document name or title |
| `outputLanguage` | `string` | Output language code (e.g., "en", "de") |
| `templateType` | `"built-in" \| "custom"` | Type of template used: built-in or custom |
| `templateId` | `string` | Template identifier (either ref for built-in or customTemplateId for custom) |
| `interactionId` | `string` | Unique interaction identifier |
| `interactionState` | `InteractionState` | Current state of the interaction |
#### Example
```json theme={null}
{
"event": "document.generationFailed",
"confidential": false,
"payload": {
"reason": "timeout",
"errorCode": "timeout",
"documentId": "doc_123",
"documentName": "SOAP note",
"outputLanguage": "en",
"templateType": "built-in",
"templateId": "soap",
"interactionId": "int_123",
"interactionState": "ongoing"
}
}
```
**Payload properties (`confidential: true`):**
| Field | Type | Description |
| ------------- | ------------------------- | --------------------------------------------------------------- |
| `reason` | `string` | Human-readable description of the failure |
| `errorCode` | `string` | Machine-readable error identifier |
| `document` | `ExternalPatientDocument` | Complete document object with all sections and metadata |
| `interaction` | `ExternalInteraction` | Interaction context including transcripts, documents, and facts |
**Document properties:**
| Field | Type | Description |
| ---------------- | ------------------------ | ---------------------------------------------------------------------------- |
| `id` | `string` | Unique document identifier |
| `name` | `string` | Document name or title |
| `outputLanguage` | `string` | Output language code (e.g., "en", "de") |
| `sections` | `Section[]` | Document sections containing the generated content |
| `templateType` | `"built-in" \| "custom"` | Type of template used: built-in or custom |
| `templateId` | `string` | Template identifier (either ref for built-in or customTemplateId for custom) |
**Section properties:**
| Field | Type | Description |
| ------ | -------- | ----------------------------------------- |
| `key` | `string` | Unique key identifying the section type |
| `name` | `string` | Display name of the section |
| `text` | `string` | Section content in markdown or plain text |
**Interaction properties:**
| Field | Type | Description |
| ----------------- | ------------------------ | --------------------------------------------------- |
| `id` | `string` | Unique interaction identifier |
| `title` | `string \| null` | Interaction title |
| `state` | `InteractionState` | Current state of the interaction |
| `startedAt` | `string (ISO 8601 date)` | When the interaction started (ISO 8601 date string) |
| `transcriptCount` | `number` | Number of transcripts in the interaction |
| `documentCount` | `number` | Number of documents in the interaction |
| `facts` | `Fact[]` | Facts extracted during the interaction |
**Fact properties:**
| Field | Type | Description |
| ------------- | ------------------------------ | -------------------------------------------------------- |
| `id` | `string` | Unique fact identifier |
| `text` | `string` | Fact text content |
| `group` | `string` | Fact group/category name |
| `source` | `"core" \| "system" \| "user"` | Source of the fact: core (AI extracted), system, or user |
| `isDiscarded` | `boolean` | Whether the fact has been discarded |
#### Example
```json theme={null}
{
"event": "document.generationFailed",
"confidential": true,
"payload": {
"reason": "timeout",
"errorCode": "timeout",
"document": {
"id": "doc_123",
"name": "SOAP note",
"outputLanguage": "en",
"sections": [
{
"key": "hpi",
"name": "HPI",
"text": "..."
}
],
"templateType": "built-in",
"templateId": "soap"
},
"interaction": {
"id": "int_123",
"title": "Visit",
"state": "ongoing",
"startedAt": "2024-01-01T00:00:00.000+00:00",
"transcriptCount": 0,
"documentCount": 0,
"facts": []
}
}
}
```
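Since `errorCode` is machine-readable, a listener can branch on it to decide whether a failed generation is worth retrying. A hypothetical retry policy: `"timeout"` is the only code shown in the example above, so the retryable set here is an assumption for illustration, not a documented list of error codes.

```typescript
// Assumed set of transient failures; extend with codes observed in practice.
const RETRYABLE_ERROR_CODES = new Set(["timeout"]);

// Retry transient failures up to maxAttempts; give up immediately on others.
function shouldRetry(errorCode: string, attempt: number, maxAttempts = 3): boolean {
  return RETRYABLE_ERROR_CODES.has(errorCode) && attempt < maxAttempts;
}
```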
# document.sectionCopied
Source: https://docs.corti.ai/assistant/events/generated/document/sectionCopied
Emitted when a document section is copied.
## Event Properties
| Field | Value |
| -------------- | -------------------------- |
| `event` | `"document.sectionCopied"` |
| `confidential` | `boolean` |
| `payload` | `object` |
**Payload properties (`confidential: false`):**
| Field | Type | Description |
| ------------------ | ------------------------ | ---------------------------------------------------------------------------- |
| `sectionKey` | `string` | Which section was copied |
| `documentId` | `string` | Unique document identifier |
| `documentName` | `string` | Document name or title |
| `outputLanguage` | `string` | Output language code (e.g., "en", "de") |
| `templateType` | `"built-in" \| "custom"` | Type of template used: built-in or custom |
| `templateId` | `string` | Template identifier (either ref for built-in or customTemplateId for custom) |
| `interactionId` | `string` | Unique interaction identifier |
| `interactionState` | `InteractionState` | Current state of the interaction |
#### Example
```json theme={null}
{
"event": "document.sectionCopied",
"confidential": false,
"payload": {
"sectionKey": "assessment",
"documentId": "doc_123",
"documentName": "SOAP note",
"outputLanguage": "en",
"templateType": "built-in",
"templateId": "soap",
"interactionId": "int_123",
"interactionState": "ongoing"
}
}
```
**Payload properties (`confidential: true`):**
| Field | Type | Description |
| ------------- | ------------------------- | --------------------------------------------------------------- |
| `sectionKey` | `string` | Which section was copied |
| `text` | `string` | Content that was copied to clipboard |
| `document` | `ExternalPatientDocument` | Complete document object with all sections and metadata |
| `interaction` | `ExternalInteraction` | Interaction context including transcripts, documents, and facts |
**Document properties:**
| Field | Type | Description |
| ---------------- | ------------------------ | ---------------------------------------------------------------------------- |
| `id` | `string` | Unique document identifier |
| `name` | `string` | Document name or title |
| `outputLanguage` | `string` | Output language code (e.g., "en", "de") |
| `sections` | `Section[]` | Document sections containing the generated content |
| `templateType` | `"built-in" \| "custom"` | Type of template used: built-in or custom |
| `templateId` | `string` | Template identifier (either ref for built-in or customTemplateId for custom) |
**Section properties:**
| Field | Type | Description |
| ------ | -------- | ----------------------------------------- |
| `key` | `string` | Unique key identifying the section type |
| `name` | `string` | Display name of the section |
| `text` | `string` | Section content in markdown or plain text |
**Interaction properties:**
| Field | Type | Description |
| ----------------- | ------------------------ | --------------------------------------------------- |
| `id` | `string` | Unique interaction identifier |
| `title` | `string \| null` | Interaction title |
| `state` | `InteractionState` | Current state of the interaction |
| `startedAt` | `string (ISO 8601 date)` | When the interaction started (ISO 8601 date string) |
| `transcriptCount` | `number` | Number of transcripts in the interaction |
| `documentCount` | `number` | Number of documents in the interaction |
| `facts` | `Fact[]` | Facts extracted during the interaction |
**Fact properties:**
| Field | Type | Description |
| ------------- | ------------------------------ | -------------------------------------------------------- |
| `id` | `string` | Unique fact identifier |
| `text` | `string` | Fact text content |
| `group` | `string` | Fact group/category name |
| `source` | `"core" \| "system" \| "user"` | Source of the fact: core (AI extracted), system, or user |
| `isDiscarded` | `boolean` | Whether the fact has been discarded |
#### Example
```json theme={null}
{
"event": "document.sectionCopied",
"confidential": true,
"payload": {
"sectionKey": "assessment",
"text": "...",
"document": {
"id": "doc_123",
"name": "SOAP note",
"outputLanguage": "en",
"sections": [
{
"key": "hpi",
"name": "HPI",
"text": "..."
}
],
"templateType": "built-in",
"templateId": "soap"
},
"interaction": {
"id": "int_123",
"title": "Visit",
"state": "ongoing",
"startedAt": "2024-01-01T00:00:00.000+00:00",
"transcriptCount": 0,
"documentCount": 0,
"facts": []
}
}
}
```
# document.synced
Source: https://docs.corti.ai/assistant/events/generated/document/synced
Emitted when a document sync completes.
## Event Properties
| Field | Value |
| -------------- | ------------------- |
| `event` | `"document.synced"` |
| `confidential` | `boolean` |
| `payload` | `object` |
**Payload properties (`confidential: false`):**
| Field | Type | Description |
| ------------------ | ------------------------ | ---------------------------------------------------------------------------- |
| `documentId` | `string` | Unique document identifier |
| `documentName` | `string` | Document name or title |
| `outputLanguage` | `string` | Output language code (e.g., "en", "de") |
| `templateType` | `"built-in" \| "custom"` | Type of template used: built-in or custom |
| `templateId` | `string` | Template identifier (either ref for built-in or customTemplateId for custom) |
| `interactionId` | `string` | Unique interaction identifier |
| `interactionState` | `InteractionState` | Current state of the interaction |
#### Example
```json theme={null}
{
"event": "document.synced",
"confidential": false,
"payload": {
"documentId": "doc_123",
"documentName": "SOAP note",
"outputLanguage": "en",
"templateType": "built-in",
"templateId": "soap",
"interactionId": "int_123",
"interactionState": "ongoing"
}
}
```
**Payload properties (`confidential: true`):**
| Field | Type | Description |
| ------------- | ------------------------- | --------------------------------------------------------------- |
| `document` | `ExternalPatientDocument` | Complete document object with all sections and metadata |
| `interaction` | `ExternalInteraction` | Interaction context including transcripts, documents, and facts |
**Document properties:**
| Field | Type | Description |
| ---------------- | ------------------------ | ---------------------------------------------------------------------------- |
| `id` | `string` | Unique document identifier |
| `name` | `string` | Document name or title |
| `outputLanguage` | `string` | Output language code (e.g., "en", "de") |
| `sections` | `Section[]` | Document sections containing the generated content |
| `templateType` | `"built-in" \| "custom"` | Type of template used: built-in or custom |
| `templateId` | `string` | Template identifier (either ref for built-in or customTemplateId for custom) |
**Section properties:**
| Field | Type | Description |
| ------ | -------- | ----------------------------------------- |
| `key` | `string` | Unique key identifying the section type |
| `name` | `string` | Display name of the section |
| `text` | `string` | Section content in markdown or plain text |
**Interaction properties:**
| Field | Type | Description |
| ----------------- | ------------------------ | --------------------------------------------------- |
| `id` | `string` | Unique interaction identifier |
| `title` | `string \| null` | Interaction title |
| `state` | `InteractionState` | Current state of the interaction |
| `startedAt` | `string (ISO 8601 date)` | When the interaction started (ISO 8601 date string) |
| `transcriptCount` | `number` | Number of transcripts in the interaction |
| `documentCount` | `number` | Number of documents in the interaction |
| `facts` | `Fact[]` | Facts extracted during the interaction |
**Fact properties:**
| Field | Type | Description |
| ------------- | ------------------------------ | -------------------------------------------------------- |
| `id` | `string` | Unique fact identifier |
| `text` | `string` | Fact text content |
| `group` | `string` | Fact group/category name |
| `source` | `"core" \| "system" \| "user"` | Source of the fact: core (AI extracted), system, or user |
| `isDiscarded` | `boolean` | Whether the fact has been discarded |
#### Example
```json theme={null}
{
"event": "document.synced",
"confidential": true,
"payload": {
"document": {
"id": "doc_123",
"name": "SOAP note",
"outputLanguage": "en",
"sections": [
{
"key": "hpi",
"name": "HPI",
"text": "..."
}
],
"templateType": "built-in",
"templateId": "soap"
},
"interaction": {
"id": "int_123",
"title": "Visit",
"state": "ongoing",
"startedAt": "2024-01-01T00:00:00.000+00:00",
"transcriptCount": 0,
"documentCount": 0,
"facts": []
}
}
}
```
# document.updated
Source: https://docs.corti.ai/assistant/events/generated/document/updated
Emitted when a document is updated.
## Event Properties
| Field | Value |
| -------------- | -------------------- |
| `event` | `"document.updated"` |
| `confidential` | `boolean` |
| `payload` | `object` |
**Payload properties (`confidential: false`):**
| Field | Type | Description |
| ------------------ | ------------------------ | ---------------------------------------------------------------------------- |
| `reason` | `string` | What triggered the update |
| `documentId` | `string` | Unique document identifier |
| `documentName` | `string` | Document name or title |
| `outputLanguage` | `string` | Output language code (e.g., "en", "de") |
| `templateType` | `"built-in" \| "custom"` | Type of template used: built-in or custom |
| `templateId` | `string` | Template identifier (either ref for built-in or customTemplateId for custom) |
| `interactionId` | `string` | Unique interaction identifier |
| `interactionState` | `InteractionState` | Current state of the interaction |
#### Example
```json theme={null}
{
"event": "document.updated",
"confidential": false,
"payload": {
"reason": "edit",
"documentId": "doc_123",
"documentName": "SOAP note",
"outputLanguage": "en",
"templateType": "built-in",
"templateId": "soap",
"interactionId": "int_123",
"interactionState": "ongoing"
}
}
```
**Payload properties (`confidential: true`):**
| Field | Type | Description |
| ------------- | ------------------------- | --------------------------------------------------------------- |
| `reason` | `string` | What triggered the update |
| `document` | `ExternalPatientDocument` | Complete document object with all sections and metadata |
| `interaction` | `ExternalInteraction` | Interaction context including transcripts, documents, and facts |
**Document properties:**
| Field | Type | Description |
| ---------------- | ------------------------ | ---------------------------------------------------------------------------- |
| `id` | `string` | Unique document identifier |
| `name` | `string` | Document name or title |
| `outputLanguage` | `string` | Output language code (e.g., "en", "de") |
| `sections` | `Section[]` | Document sections containing the generated content |
| `templateType` | `"built-in" \| "custom"` | Type of template used: built-in or custom |
| `templateId` | `string` | Template identifier (either ref for built-in or customTemplateId for custom) |
**Section properties:**
| Field | Type | Description |
| ------ | -------- | ----------------------------------------- |
| `key` | `string` | Unique key identifying the section type |
| `name` | `string` | Display name of the section |
| `text` | `string` | Section content in markdown or plain text |
**Interaction properties:**
| Field | Type | Description |
| ----------------- | ------------------------ | --------------------------------------------------- |
| `id` | `string` | Unique interaction identifier |
| `title` | `string \| null` | Interaction title |
| `state` | `InteractionState` | Current state of the interaction |
| `startedAt` | `string (ISO 8601 date)` | When the interaction started (ISO 8601 date string) |
| `transcriptCount` | `number` | Number of transcripts in the interaction |
| `documentCount` | `number` | Number of documents in the interaction |
| `facts` | `Fact[]` | Facts extracted during the interaction |
**Fact properties:**
| Field | Type | Description |
| ------------- | ------------------------------ | -------------------------------------------------------- |
| `id` | `string` | Unique fact identifier |
| `text` | `string` | Fact text content |
| `group` | `string` | Fact group/category name |
| `source` | `"core" \| "system" \| "user"` | Source of the fact: core (AI extracted), system, or user |
| `isDiscarded` | `boolean` | Whether the fact has been discarded |
#### Example
```json theme={null}
{
"event": "document.updated",
"confidential": true,
"payload": {
"reason": "edit",
"document": {
"id": "doc_123",
"name": "SOAP note",
"outputLanguage": "en",
"sections": [
{
"key": "hpi",
"name": "HPI",
"text": "..."
}
],
"templateType": "built-in",
"templateId": "soap"
},
"interaction": {
"id": "int_123",
"title": "Visit",
"state": "ongoing",
"startedAt": "2024-01-01T00:00:00.000+00:00",
"transcriptCount": 0,
"documentCount": 0,
"facts": []
}
}
}
```
# embedded.appConfigured
Source: https://docs.corti.ai/assistant/events/generated/embedded-api/appConfigured
Emitted when the embedded `configure()` method is successfully called.
## Event Properties
| Field | Value |
| -------------- | -------------------------- |
| `event` | `"embedded.appConfigured"` |
| `confidential` | `boolean` |
| `payload` | `object` |
**Payload properties (`confidential: false`):**
| Field | Type | Description |
| --------------- | ---------------------------------------------------------- | ------------------------------------------ |
| `appearance` | `Record \| undefined` | Visual customization options |
| `features` | `Record \| undefined` | Feature toggles for UI capabilities |
| `locale` | `Record \| undefined` | Language and localization settings |
| `overrideCount` | `number \| undefined` | Count of configuration properties provided |
#### Example
```json theme={null}
{
"event": "embedded.appConfigured",
"confidential": false,
"payload": {
"appearance": {
"primaryColor": "#3366FF"
},
"features": {
"interactionTitle": true,
"aiChat": true,
"navigation": false,
"documentFeedback": true,
"virtualMode": true,
"syncDocumentAction": false
},
"locale": {
"interfaceLanguage": "en",
"dictationLanguage": "en"
},
"overrideCount": 2
}
}
```
#### Example
```json theme={null}
{
"event": "embedded.appConfigured",
"confidential": true,
"payload": {
"appearance": {
"primaryColor": "#3366FF"
},
"features": {
"interactionTitle": true,
"aiChat": true,
"navigation": false,
"documentFeedback": true,
"virtualMode": true,
"syncDocumentAction": false
},
"locale": {
"interfaceLanguage": "en",
"dictationLanguage": "en"
},
"overrideCount": 2
}
}
```
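All events share the same envelope of `event`, `confidential`, and `payload`. One way to route them is a small dispatcher keyed on the event name. This is a sketch; the mechanism that actually delivers envelopes to `dispatch` (for example, a message listener) is host-specific and not shown:

```typescript
// Envelope shape shared by every documented event.
interface EventEnvelope {
  event: string;
  confidential: boolean;
  payload: Record<string, unknown>;
}

type Handler = (payload: Record<string, unknown>, confidential: boolean) => void;

const handlers = new Map<string, Handler[]>();

// Register a handler for one event name.
function on(event: string, handler: Handler): void {
  const list = handlers.get(event) ?? [];
  list.push(handler);
  handlers.set(event, list);
}

// Invoke all handlers registered for the envelope's event name.
function dispatch(envelope: EventEnvelope): number {
  const list = handlers.get(envelope.event) ?? [];
  for (const h of list) h(envelope.payload, envelope.confidential);
  return list.length; // how many handlers ran
}

// Usage: react to embedded.appConfigured.
let configured = false;
on("embedded.appConfigured", (payload) => {
  configured = typeof payload.overrideCount === "number";
});

dispatch({
  event: "embedded.appConfigured",
  confidential: false,
  payload: { overrideCount: 2 },
});
console.log(configured); // true
```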
# embedded.authenticated
Source: https://docs.corti.ai/assistant/events/generated/embedded-api/authenticated
Emitted when the embedded auth method is successfully called.
## Event Properties
| Field | Value |
| -------------- | -------------------------- |
| `event` | `"embedded.authenticated"` |
| `confidential` | `boolean` |
| `payload` | `object` |
| Field | Type | Description |
| ------------------ | ----------------------------- | ---------------------------------------- |
| `tokenType` | `string \| undefined` | Type of authentication token |
| `scope` | `string \| undefined` | OAuth scope granted |
| `expiresIn` | `number \| null \| undefined` | Access token expiration time in seconds |
| `refreshExpiresIn` | `number \| null \| undefined` | Refresh token expiration time in seconds |
| `hasRefreshToken` | `boolean` | Whether a refresh token was provided |
| `hasIdToken` | `boolean` | Whether an ID token was provided |
| `hasProfile` | `boolean` | Whether user profile data was included |
#### Example
```json theme={null}
{
"event": "embedded.authenticated",
"confidential": false,
"payload": {
"tokenType": "Bearer",
"scope": "openid profile",
"expiresIn": 3600,
"refreshExpiresIn": 7200,
"hasRefreshToken": true,
"hasIdToken": true,
"hasProfile": true
}
}
```
#### Example
```json theme={null}
{
"event": "embedded.authenticated",
"confidential": true,
"payload": {
"tokenType": "Bearer",
"scope": "openid profile",
"expiresIn": 3600,
"refreshExpiresIn": 7200,
"hasRefreshToken": true,
"hasIdToken": true,
"hasProfile": true
}
}
```
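Since `expiresIn` is reported in seconds and may be `null` or absent, a consumer that schedules token refreshes needs to handle all three cases. A sketch, where the 60-second safety margin is an arbitrary choice rather than part of the API:

```typescript
// Payload shape from the embedded.authenticated tables above.
interface AuthenticatedPayload {
  tokenType?: string;
  scope?: string;
  expiresIn?: number | null;
  refreshExpiresIn?: number | null;
  hasRefreshToken: boolean;
  hasIdToken: boolean;
  hasProfile: boolean;
}

// Milliseconds until a refresh should run, or null when no refresh is possible.
function refreshInMs(p: AuthenticatedPayload, marginSec = 60): number | null {
  // Only schedule a refresh when a refresh token exists and expiry is known.
  if (!p.hasRefreshToken || p.expiresIn == null) return null;
  return Math.max(0, (p.expiresIn - marginSec) * 1000);
}

const payload: AuthenticatedPayload = {
  tokenType: "Bearer",
  scope: "openid profile",
  expiresIn: 3600,
  refreshExpiresIn: 7200,
  hasRefreshToken: true,
  hasIdToken: true,
  hasProfile: true,
};

console.log(refreshInMs(payload)); // (3600 - 60) * 1000 = 3540000
```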
# embedded.credentialsUpdated
Source: https://docs.corti.ai/assistant/events/generated/embedded-api/credentialsUpdated
Emitted when the embedded setCredentials method is successfully called.
## Event Properties
| Field | Value |
| -------------- | ------------------------------- |
| `event` | `"embedded.credentialsUpdated"` |
| `confidential` | `boolean` |
| `payload` | `object` |
No payload properties.
#### Example
```json theme={null}
{
"event": "embedded.credentialsUpdated",
"confidential": false,
"payload": {}
}
```
#### Example
```json theme={null}
{
"event": "embedded.credentialsUpdated",
"confidential": true,
"payload": {}
}
```
# embedded.factsAdded
Source: https://docs.corti.ai/assistant/events/generated/embedded-api/factsAdded
Emitted when the embedded addFacts method is successfully called.
## Event Properties
| Field | Value |
| -------------- | ----------------------- |
| `event` | `"embedded.factsAdded"` |
| `confidential` | `boolean` |
| `payload` | `object` |
**Payload properties (non-confidential):**
| Field | Type | Description |
| ------------------ | ------------------ | -------------------------------- |
| `factCount` | `number` | Number of facts being added |
| `interactionId` | `string` | Unique interaction identifier |
| `interactionState` | `InteractionState` | Current state of the interaction |
#### Example
```json theme={null}
{
"event": "embedded.factsAdded",
"confidential": false,
"payload": {
"factCount": 2,
"interactionId": "int_123",
"interactionState": "ongoing"
}
}
```
**Payload properties (confidential):**
| Field | Type | Description |
| ------------- | --------------------- | --------------------------------------------------------------- |
| `factCount` | `number` | Number of facts being added |
| `facts` | `Fact[]` | Array of facts with text, group, and source |
| `interaction` | `ExternalInteraction` | Interaction context including transcripts, documents, and facts |
**Fact item properties:**
| Field | Type | Description |
| -------- | ------------------------------------------- | ------------------------ |
| `text` | `string` | Fact text content |
| `group` | `string` | Fact group/category name |
| `source` | `"core" \| "system" \| "user" \| undefined` | Source of the fact |
**Interaction properties:**
| Field | Type | Description |
| ----------------- | ------------------------ | --------------------------------------------------- |
| `id` | `string` | Unique interaction identifier |
| `title` | `string \| null` | Interaction title |
| `state` | `InteractionState` | Current state of the interaction |
| `startedAt` | `string (ISO 8601 date)` | When the interaction started (ISO 8601 date string) |
| `transcriptCount` | `number` | Number of transcripts in the interaction |
| `documentCount` | `number` | Number of documents in the interaction |
| `facts` | `Fact[]` | Facts extracted during the interaction |
**Fact properties:**
| Field | Type | Description |
| ------------- | ------------------------------ | -------------------------------------------------------- |
| `id` | `string` | Unique fact identifier |
| `text` | `string` | Fact text content |
| `group` | `string` | Fact group/category name |
| `source` | `"core" \| "system" \| "user"` | Source of the fact: core (AI extracted), system, or user |
| `isDiscarded` | `boolean` | Whether the fact has been discarded |
#### Example
```json theme={null}
{
"event": "embedded.factsAdded",
"confidential": true,
"payload": {
"factCount": 2,
"facts": [
{
"text": "Chest pain",
"group": "other",
"source": "user"
},
{
"text": "Shortness of breath",
"group": "other",
"source": "system"
}
],
"interaction": {
"id": "int_123",
"title": "Visit",
"state": "ongoing",
"startedAt": "2024-01-01T00:00:00.000+00:00",
"transcriptCount": 0,
"documentCount": 0,
"facts": []
}
}
}
```
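Because each fact requires non-empty `text` and `group` plus an optional `source` from a fixed set, it can be useful to validate inputs before calling addFacts. A sketch — the validation rules are our own; only the field shapes come from the tables above:

```typescript
// Input shape matching the documented fact item fields.
interface FactInput {
  text: string;
  group: string;
  source?: "core" | "system" | "user";
}

// Drop facts with blank text/group or an unrecognized source.
function validFacts(facts: FactInput[]): FactInput[] {
  const allowed = new Set(["core", "system", "user"]);
  return facts.filter(
    (f) =>
      f.text.trim().length > 0 &&
      f.group.trim().length > 0 &&
      (f.source === undefined || allowed.has(f.source)),
  );
}

const input: FactInput[] = [
  { text: "Chest pain", group: "other", source: "user" },
  { text: "   ", group: "other" }, // dropped: blank text
];

console.log(validFacts(input).length); // 1
```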
# embedded.interactionCreated
Source: https://docs.corti.ai/assistant/events/generated/embedded-api/interactionCreated
Emitted when the embedded createInteraction method is successfully called.
## Event Properties
| Field | Value |
| -------------- | ------------------------------- |
| `event` | `"embedded.interactionCreated"` |
| `confidential` | `boolean` |
| `payload` | `object` |
**Payload properties (non-confidential):**
| Field | Type | Description |
| ------------------ | -------------------------------------------------------------------------------------- | --------------------------------- |
| `encounterType` | `"first_consultation" \| "consultation" \| "emergency" \| "inpatient" \| "outpatient"` | Type of clinical encounter |
| `encounterStatus` | `"planned" \| "in-progress" \| "cancelled" \| "deleted" \| "on-hold" \| "completed"` | Current status of the encounter |
| `hasPatient` | `boolean` | Whether patient data was provided |
| `interactionId` | `string` | Unique interaction identifier |
| `interactionState` | `InteractionState` | Current state of the interaction |
#### Example
```json theme={null}
{
"event": "embedded.interactionCreated",
"confidential": false,
"payload": {
"encounterType": "consultation",
"encounterStatus": "planned",
"hasPatient": true,
"interactionId": "int_123",
"interactionState": "ongoing"
}
}
```
**Payload properties (confidential):**
| Field | Type | Description |
| --------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------- |
| `encounterType` | `"first_consultation" \| "consultation" \| "emergency" \| "inpatient" \| "outpatient"` | Type of clinical encounter |
| `encounterStatus` | `"planned" \| "in-progress" \| "cancelled" \| "deleted" \| "on-hold" \| "completed"` | Current status of the encounter |
| `encounterIdentifier` | `string` | External system's encounter identifier |
| `encounterStartedAt` | `string (ISO 8601 date)` | When the encounter began |
| `encounterTitle` | `string \| undefined` | Optional descriptive title for the encounter |
| `assignedUserId` | `string \| null \| undefined` | User responsible for this interaction |
| `hasPatient` | `boolean` | Whether patient data was provided |
| `patient` | `{ identifier?: string \| undefined; name?: string \| undefined; gender?: "male" \| "female" \| "other" \| "unknown" \| undefined; birthDate?: string \| null \| undefined; pronouns?: string \| undefined; } \| undefined` | Patient demographic data |
| `interaction` | `ExternalInteraction` | Interaction context including transcripts, documents, and facts |
**Patient properties:**
| Field | Type | Description |
| ------------ | -------- | ------------------------------------ |
| `identifier` | `string` | External system's patient identifier |
| `name` | `string` | Patient name |
| `gender` | `string` | Patient gender |
| `birthDate` | `string` | Patient birth date |
| `pronouns` | `string` | Patient pronouns |
**Interaction properties:**
| Field | Type | Description |
| ----------------- | ------------------------ | --------------------------------------------------- |
| `id` | `string` | Unique interaction identifier |
| `title` | `string \| null` | Interaction title |
| `state` | `InteractionState` | Current state of the interaction |
| `startedAt` | `string (ISO 8601 date)` | When the interaction started (ISO 8601 date string) |
| `transcriptCount` | `number` | Number of transcripts in the interaction |
| `documentCount` | `number` | Number of documents in the interaction |
| `facts` | `Fact[]` | Facts extracted during the interaction |
**Fact properties:**
| Field | Type | Description |
| ------------- | ------------------------------ | -------------------------------------------------------- |
| `id` | `string` | Unique fact identifier |
| `text` | `string` | Fact text content |
| `group` | `string` | Fact group/category name |
| `source` | `"core" \| "system" \| "user"` | Source of the fact: core (AI extracted), system, or user |
| `isDiscarded` | `boolean` | Whether the fact has been discarded |
#### Example
```json theme={null}
{
"event": "embedded.interactionCreated",
"confidential": true,
"payload": {
"encounterType": "consultation",
"encounterStatus": "planned",
"encounterIdentifier": "encounter-123",
"encounterStartedAt": "2024-01-01T00:00:00.000Z",
"encounterTitle": "Initial Consultation",
"assignedUserId": "user_123",
"hasPatient": true,
"patient": {
"identifier": "patient-123",
"name": "Jane Doe",
"gender": "female",
"birthDate": "1980-01-01",
"pronouns": "she/her"
},
"interaction": {
"id": "int_123",
"title": "Visit",
"state": "ongoing",
"startedAt": "2024-01-01T00:00:00.000+00:00",
"transcriptCount": 0,
"documentCount": 0,
"facts": []
}
}
}
```
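The `encounterType` and `encounterStatus` unions above can be turned into runtime guards for checking input before calling createInteraction. A sketch; only the literal values come from the documentation:

```typescript
// Literal values taken from the payload tables above.
const ENCOUNTER_TYPES = [
  "first_consultation", "consultation", "emergency", "inpatient", "outpatient",
] as const;
const ENCOUNTER_STATUSES = [
  "planned", "in-progress", "cancelled", "deleted", "on-hold", "completed",
] as const;

type EncounterType = (typeof ENCOUNTER_TYPES)[number];
type EncounterStatus = (typeof ENCOUNTER_STATUSES)[number];

// Type guards narrowing arbitrary strings to the documented unions.
const isEncounterType = (v: string): v is EncounterType =>
  (ENCOUNTER_TYPES as readonly string[]).includes(v);
const isEncounterStatus = (v: string): v is EncounterStatus =>
  (ENCOUNTER_STATUSES as readonly string[]).includes(v);

console.log(isEncounterType("consultation"), isEncounterStatus("archived"));
// true false
```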
# embedded.navigated
Source: https://docs.corti.ai/assistant/events/generated/embedded-api/navigated
Emitted when the embedded navigate method is successfully called.
## Event Properties
| Field | Value |
| -------------- | ---------------------- |
| `event` | `"embedded.navigated"` |
| `confidential` | `boolean` |
| `payload` | `object` |
| Field | Type | Description |
| ------ | -------- | -------------------------------- |
| `path` | `string` | Target route path for navigation |
#### Example
```json theme={null}
{
"event": "embedded.navigated",
"confidential": false,
"payload": {
"path": "/session/interaction-123"
}
}
```
#### Example
```json theme={null}
{
"event": "embedded.navigated",
"confidential": true,
"payload": {
"path": "/session/interaction-123"
}
}
```
# embedded.ready
Source: https://docs.corti.ai/assistant/events/generated/embedded-api/ready
Emitted when the embedded API is ready to receive calls.
## Event Properties
| Field | Value |
| -------------- | ------------------ |
| `event` | `"embedded.ready"` |
| `confidential` | `boolean` |
| `payload` | `object` |
No payload properties.
#### Example
```json theme={null}
{
"event": "embedded.ready",
"confidential": false,
"payload": {}
}
```
#### Example
```json theme={null}
{
"event": "embedded.ready",
"confidential": true,
"payload": {}
}
```
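A host application typically waits for this event before invoking other embedded methods. A sketch of that gating, where `subscribe`/`emit` stand in for whatever mechanism actually delivers events (hypothetical helpers, not a Corti API):

```typescript
// Minimal stand-in event bus for illustration only.
type Listener = (event: string) => void;
const listeners: Listener[] = [];
const subscribe = (l: Listener): void => { listeners.push(l); };
const emit = (event: string): void => { listeners.forEach((l) => l(event)); };

// Resolve once embedded.ready has been observed.
function whenReady(): Promise<void> {
  return new Promise((resolve) => {
    subscribe((event) => {
      if (event === "embedded.ready") resolve();
    });
  });
}

const ready = whenReady().then(() => "ready");
emit("embedded.ready"); // simulate the event arriving
ready.then(console.log);
```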
# embedded.recordingStarted
Source: https://docs.corti.ai/assistant/events/generated/embedded-api/recordingStarted
Emitted when the embedded startRecording method is successfully called.
## Event Properties
| Field | Value |
| -------------- | ----------------------------- |
| `event` | `"embedded.recordingStarted"` |
| `confidential` | `boolean` |
| `payload` | `object` |
**Payload properties (non-confidential):**
| Field | Type | Description |
| ------------------ | ------------------ | -------------------------------- |
| `interactionId` | `string` | Unique interaction identifier |
| `interactionState` | `InteractionState` | Current state of the interaction |
#### Example
```json theme={null}
{
"event": "embedded.recordingStarted",
"confidential": false,
"payload": {
"interactionId": "int_123",
"interactionState": "ongoing"
}
}
```
**Payload properties (confidential):**
| Field | Type | Description |
| ------------- | --------------------- | --------------------------------------------------------------- |
| `interaction` | `ExternalInteraction` | Interaction context including transcripts, documents, and facts |
**Interaction properties:**
| Field | Type | Description |
| ----------------- | ------------------------ | --------------------------------------------------- |
| `id` | `string` | Unique interaction identifier |
| `title` | `string \| null` | Interaction title |
| `state` | `InteractionState` | Current state of the interaction |
| `startedAt` | `string (ISO 8601 date)` | When the interaction started (ISO 8601 date string) |
| `transcriptCount` | `number` | Number of transcripts in the interaction |
| `documentCount` | `number` | Number of documents in the interaction |
| `facts` | `Fact[]` | Facts extracted during the interaction |
**Fact properties:**
| Field | Type | Description |
| ------------- | ------------------------------ | -------------------------------------------------------- |
| `id` | `string` | Unique fact identifier |
| `text` | `string` | Fact text content |
| `group` | `string` | Fact group/category name |
| `source` | `"core" \| "system" \| "user"` | Source of the fact: core (AI extracted), system, or user |
| `isDiscarded` | `boolean` | Whether the fact has been discarded |
#### Example
```json theme={null}
{
"event": "embedded.recordingStarted",
"confidential": true,
"payload": {
"interaction": {
"id": "int_123",
"title": "Visit",
"state": "ongoing",
"startedAt": "2024-01-01T00:00:00.000+00:00",
"transcriptCount": 0,
"documentCount": 0,
"facts": []
}
}
}
```
# embedded.recordingStopped
Source: https://docs.corti.ai/assistant/events/generated/embedded-api/recordingStopped
Emitted when the embedded stopRecording method is successfully called.
## Event Properties
| Field | Value |
| -------------- | ----------------------------- |
| `event` | `"embedded.recordingStopped"` |
| `confidential` | `boolean` |
| `payload` | `object` |
**Payload properties (non-confidential):**
| Field | Type | Description |
| ------------------ | ------------------ | -------------------------------- |
| `interactionId` | `string` | Unique interaction identifier |
| `interactionState` | `InteractionState` | Current state of the interaction |
#### Example
```json theme={null}
{
"event": "embedded.recordingStopped",
"confidential": false,
"payload": {
"interactionId": "int_123",
"interactionState": "ongoing"
}
}
```
**Payload properties (confidential):**
| Field | Type | Description |
| ------------- | --------------------- | --------------------------------------------------------------- |
| `interaction` | `ExternalInteraction` | Interaction context including transcripts, documents, and facts |
**Interaction properties:**
| Field | Type | Description |
| ----------------- | ------------------------ | --------------------------------------------------- |
| `id` | `string` | Unique interaction identifier |
| `title` | `string \| null` | Interaction title |
| `state` | `InteractionState` | Current state of the interaction |
| `startedAt` | `string (ISO 8601 date)` | When the interaction started (ISO 8601 date string) |
| `transcriptCount` | `number` | Number of transcripts in the interaction |
| `documentCount` | `number` | Number of documents in the interaction |
| `facts` | `Fact[]` | Facts extracted during the interaction |
**Fact properties:**
| Field | Type | Description |
| ------------- | ------------------------------ | -------------------------------------------------------- |
| `id` | `string` | Unique fact identifier |
| `text` | `string` | Fact text content |
| `group` | `string` | Fact group/category name |
| `source` | `"core" \| "system" \| "user"` | Source of the fact: core (AI extracted), system, or user |
| `isDiscarded` | `boolean` | Whether the fact has been discarded |
#### Example
```json theme={null}
{
"event": "embedded.recordingStopped",
"confidential": true,
"payload": {
"interaction": {
"id": "int_123",
"title": "Visit",
"state": "ongoing",
"startedAt": "2024-01-01T00:00:00.000+00:00",
"transcriptCount": 0,
"documentCount": 0,
"facts": []
}
}
}
```
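Together with `embedded.recordingStarted`, this event lets a host mirror the recording state locally. A sketch of that bookkeeping — the event names come from the documentation; the state handling is our own:

```typescript
// The two recording lifecycle events documented above.
type RecordingEvent =
  | "embedded.recordingStarted"
  | "embedded.recordingStopped";

// Fold an event into the current "is recording" flag.
function nextRecordingState(active: boolean, event: RecordingEvent): boolean {
  // Started turns recording on; stopped turns it off, regardless of prior state.
  return event === "embedded.recordingStarted";
}

let active = false;
active = nextRecordingState(active, "embedded.recordingStarted"); // true
active = nextRecordingState(active, "embedded.recordingStopped"); // false
console.log(active);
```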
# embedded.sessionConfigured
Source: https://docs.corti.ai/assistant/events/generated/embedded-api/sessionConfigured
Emitted when the embedded configureSession method is successfully called.
## Event Properties
| Field | Value |
| -------------- | ------------------------------ |
| `event` | `"embedded.sessionConfigured"` |
| `confidential` | `boolean` |
| `payload` | `object` |
| Field | Type | Description |
| ----------------------- | --------------------------------------- | ---------------------------------------- |
| `defaultLanguage` | `string \| undefined` | Default language for transcription |
| `defaultMode` | `"virtual" \| "in-person" \| undefined` | Default recording mode |
| `defaultOutputLanguage` | `string \| undefined` | Default language for document generation |
| `defaultTemplateKey` | `string \| undefined` | Default template to use for documents |
#### Example
```json theme={null}
{
"event": "embedded.sessionConfigured",
"confidential": false,
"payload": {
"defaultLanguage": "en",
"defaultMode": "virtual",
"defaultOutputLanguage": "en",
"defaultTemplateKey": "soap"
}
}
```
#### Example
```json theme={null}
{
"event": "embedded.sessionConfigured",
"confidential": true,
"payload": {
"defaultLanguage": "en",
"defaultMode": "virtual",
"defaultOutputLanguage": "en",
"defaultTemplateKey": "soap"
}
}
```
# embedded.statusReturned
Source: https://docs.corti.ai/assistant/events/generated/embedded-api/statusReturned
Emitted when the embedded getStatus method is successfully called.
## Event Properties
| Field | Value |
| -------------- | --------------------------- |
| `event` | `"embedded.statusReturned"` |
| `confidential` | `boolean` |
| `payload` | `object` |
No payload properties.
#### Example
```json theme={null}
{
"event": "embedded.statusReturned",
"confidential": false,
"payload": {}
}
```
#### Example
```json theme={null}
{
"event": "embedded.statusReturned",
"confidential": true,
"payload": {}
}
```
# embedded.templatesReturned
Source: https://docs.corti.ai/assistant/events/generated/embedded-api/templatesReturned
Emitted when the embedded getTemplates method is successfully called.
## Event Properties
| Field | Value |
| -------------- | ------------------------------ |
| `event` | `"embedded.templatesReturned"` |
| `confidential` | `boolean` |
| `payload` | `object` |
| Field | Type | Description |
| ------- | -------- | ---------------------------- |
| `count` | `number` | Number of templates returned |
#### Example
```json theme={null}
{
"event": "embedded.templatesReturned",
"confidential": false,
"payload": {
"count": 5
}
}
```
#### Example
```json theme={null}
{
"event": "embedded.templatesReturned",
"confidential": true,
"payload": {
"count": 5
}
}
```
# error.triggered
Source: https://docs.corti.ai/assistant/events/generated/errors/triggered
Emitted when an error occurs.
## Event Properties
| Field | Value |
| -------------- | ------------------- |
| `event` | `"error.triggered"` |
| `confidential` | `boolean` |
| `payload` | `object` |
**Payload properties (non-confidential):**
| Field | Type | Description |
| ------------------ | ------------------ | --------------------------------- |
| `message` | `string` | Human-readable error description |
| `code` | `string` | Machine-readable error identifier |
| `interactionId` | `string` | Unique interaction identifier |
| `interactionState` | `InteractionState` | Current state of the interaction |
#### Example
```json theme={null}
{
"event": "error.triggered",
"confidential": false,
"payload": {
"message": "Network error",
"code": "network_error",
"interactionId": "int_123",
"interactionState": "ongoing"
}
}
```
**Payload properties (confidential):**
| Field | Type | Description |
| ------------- | --------------------- | --------------------------------------------------------------- |
| `message` | `string` | Human-readable error description |
| `code` | `string` | Machine-readable error identifier |
| `interaction` | `ExternalInteraction` | Interaction context including transcripts, documents, and facts |
**Interaction properties:**
| Field | Type | Description |
| ----------------- | ------------------------ | --------------------------------------------------- |
| `id` | `string` | Unique interaction identifier |
| `title` | `string \| null` | Interaction title |
| `state` | `InteractionState` | Current state of the interaction |
| `startedAt` | `string (ISO 8601 date)` | When the interaction started (ISO 8601 date string) |
| `transcriptCount` | `number` | Number of transcripts in the interaction |
| `documentCount` | `number` | Number of documents in the interaction |
| `facts` | `Fact[]` | Facts extracted during the interaction |
**Fact properties:**
| Field | Type | Description |
| ------------- | ------------------------------ | -------------------------------------------------------- |
| `id` | `string` | Unique fact identifier |
| `text` | `string` | Fact text content |
| `group` | `string` | Fact group/category name |
| `source` | `"core" \| "system" \| "user"` | Source of the fact: core (AI extracted), system, or user |
| `isDiscarded` | `boolean` | Whether the fact has been discarded |
#### Example
```json theme={null}
{
"event": "error.triggered",
"confidential": true,
"payload": {
"message": "Network error",
"code": "network_error",
"interaction": {
"id": "int_123",
"title": "Visit",
"state": "ongoing",
"startedAt": "2024-01-01T00:00:00.000+00:00",
"transcriptCount": 0,
"documentCount": 0,
"facts": []
}
}
}
```
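Note that the non-confidential payload carries the interaction id inline (`interactionId`) while the confidential payload nests it (`interaction.id`). A sketch of a normalizer that accepts either shape:

```typescript
// Union of the two documented error payload shapes, reduced to the
// fields this helper needs.
interface ErrorPayload {
  message: string;
  code: string;
  interactionId?: string;
  interaction?: { id: string };
}

// Return the interaction id regardless of confidentiality mode.
function errorInteractionId(p: ErrorPayload): string | undefined {
  return p.interactionId ?? p.interaction?.id;
}

console.log(
  errorInteractionId({ message: "Network error", code: "network_error", interactionId: "int_123" }),
);
console.log(
  errorInteractionId({ message: "Network error", code: "network_error", interaction: { id: "int_123" } }),
);
```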
# interaction.archived
Source: https://docs.corti.ai/assistant/events/generated/interaction/archived
Emitted when an interaction is archived.
## Event Properties
| Field | Value |
| -------------- | ------------------------ |
| `event` | `"interaction.archived"` |
| `confidential` | `boolean` |
| `payload` | `object` |
| Field | Type | Description |
| --------------- | -------- | ------------------------------ |
| `interactionId` | `string` | ID of the archived interaction |
#### Example
```json theme={null}
{
"event": "interaction.archived",
"confidential": false,
"payload": {
"interactionId": "interaction_12345"
}
}
```
#### Example
```json theme={null}
{
"event": "interaction.archived",
"confidential": true,
"payload": {
"interactionId": "interaction_12345"
}
}
```
# interaction.created
Source: https://docs.corti.ai/assistant/events/generated/interaction/created
Emitted when an interaction is created.
## Event Properties
| Field | Value |
| -------------- | ----------------------- |
| `event` | `"interaction.created"` |
| `confidential` | `boolean` |
| `payload` | `object` |
**Payload properties (non-confidential):**
| Field | Type | Description |
| ------------------ | ------------------ | -------------------------------- |
| `interactionId` | `string` | Unique interaction identifier |
| `interactionState` | `InteractionState` | Current state of the interaction |
#### Example
```json theme={null}
{
"event": "interaction.created",
"confidential": false,
"payload": {
"interactionId": "int_123",
"interactionState": "ongoing"
}
}
```
**Payload properties (confidential):**
| Field | Type | Description |
| ------------- | --------------------- | --------------------------------------------------------------- |
| `interaction` | `ExternalInteraction` | Interaction context including transcripts, documents, and facts |
**Interaction properties:**
| Field | Type | Description |
| ----------------- | ------------------------ | --------------------------------------------------- |
| `id` | `string` | Unique interaction identifier |
| `title` | `string \| null` | Interaction title |
| `state` | `InteractionState` | Current state of the interaction |
| `startedAt` | `string (ISO 8601 date)` | When the interaction started (ISO 8601 date string) |
| `transcriptCount` | `number` | Number of transcripts in the interaction |
| `documentCount` | `number` | Number of documents in the interaction |
| `facts` | `Fact[]` | Facts extracted during the interaction |
**Fact properties:**
| Field | Type | Description |
| ------------- | ------------------------------ | -------------------------------------------------------- |
| `id` | `string` | Unique fact identifier |
| `text` | `string` | Fact text content |
| `group` | `string` | Fact group/category name |
| `source` | `"core" \| "system" \| "user"` | Source of the fact: core (AI extracted), system, or user |
| `isDiscarded` | `boolean` | Whether the fact has been discarded |
#### Example
```json theme={null}
{
"event": "interaction.created",
"confidential": true,
"payload": {
"interaction": {
"id": "int_123",
"title": "Visit",
"state": "ongoing",
"startedAt": "2024-01-01T00:00:00.000+00:00",
"transcriptCount": 0,
"documentCount": 0,
"facts": []
}
}
}
```
# interaction.deleted
Source: https://docs.corti.ai/assistant/events/generated/interaction/deleted
Emitted when an interaction is permanently deleted.
## Event Properties
| Field | Value |
| -------------- | ----------------------- |
| `event` | `"interaction.deleted"` |
| `confidential` | `boolean` |
| `payload` | `object` |
| Field | Type | Description |
| --------------- | -------- | ----------------------------------------- |
| `interactionId` | `string` | ID of the permanently deleted interaction |
#### Example
```json theme={null}
{
"event": "interaction.deleted",
"confidential": false,
"payload": {
"interactionId": "interaction_12345"
}
}
```
#### Example
```json theme={null}
{
"event": "interaction.deleted",
"confidential": true,
"payload": {
"interactionId": "interaction_12345"
}
}
```
# interaction.loaded
Source: https://docs.corti.ai/assistant/events/generated/interaction/loaded
Emitted when an interaction is loaded.
## Event Properties
| Field | Value |
| -------------- | ---------------------- |
| `event` | `"interaction.loaded"` |
| `confidential` | `boolean` |
| `payload` | `object` |
**Payload properties (non-confidential):**
| Field | Type | Description |
| ------------------ | ------------------ | -------------------------------- |
| `interactionId` | `string` | Unique interaction identifier |
| `interactionState` | `InteractionState` | Current state of the interaction |
#### Example
```json theme={null}
{
"event": "interaction.loaded",
"confidential": false,
"payload": {
"interactionId": "int_123",
"interactionState": "ongoing"
}
}
```
**Payload properties (confidential):**
| Field | Type | Description |
| ------------- | --------------------- | --------------------------------------------------------------- |
| `interaction` | `ExternalInteraction` | Interaction context including transcripts, documents, and facts |
**Interaction properties:**
| Field | Type | Description |
| ----------------- | ------------------------ | --------------------------------------------------- |
| `id` | `string` | Unique interaction identifier |
| `title` | `string \| null` | Interaction title |
| `state` | `InteractionState` | Current state of the interaction |
| `startedAt` | `string (ISO 8601 date)` | When the interaction started (ISO 8601 date string) |
| `transcriptCount` | `number` | Number of transcripts in the interaction |
| `documentCount` | `number` | Number of documents in the interaction |
| `facts` | `Fact[]` | Facts extracted during the interaction |
**Fact properties:**
| Field | Type | Description |
| ------------- | ------------------------------ | -------------------------------------------------------- |
| `id` | `string` | Unique fact identifier |
| `text` | `string` | Fact text content |
| `group` | `string` | Fact group/category name |
| `source` | `"core" \| "system" \| "user"` | Source of the fact: core (AI extracted), system, or user |
| `isDiscarded` | `boolean` | Whether the fact has been discarded |
#### Example
```json theme={null}
{
"event": "interaction.loaded",
"confidential": true,
"payload": {
"interaction": {
"id": "int_123",
"title": "Visit",
"state": "ongoing",
"startedAt": "2024-01-01T00:00:00.000+00:00",
"transcriptCount": 0,
"documentCount": 0,
"facts": []
}
}
}
```
# interaction.renamed
Source: https://docs.corti.ai/assistant/events/generated/interaction/renamed
Emitted when an interaction is renamed.
## Event Properties
| Field | Value |
| -------------- | ----------------------- |
| `event` | `"interaction.renamed"` |
| `confidential` | `boolean` |
| `payload` | `object` |
**Payload properties (non-confidential):**
| Field | Type | Description |
| --------------- | -------- | ----------------------------- |
| `interactionId` | `string` | ID of the renamed interaction |
#### Example
```json theme={null}
{
"event": "interaction.renamed",
"confidential": false,
"payload": {
"interactionId": "interaction_12345"
}
}
```
**Payload properties (confidential):**
| Field | Type | Description |
| --------------- | -------- | ----------------------------- |
| `interactionId` | `string` | ID of the renamed interaction |
| `title` | `string` | New title for the interaction |
#### Example
```json theme={null}
{
"event": "interaction.renamed",
"confidential": true,
"payload": {
"interactionId": "interaction_12345",
"title": "Follow-up visit"
}
}
```
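The two payload variants above can be modeled as a small discriminated shape on the host side. This is an illustrative sketch only: the types mirror the tables on this page, but how events are delivered to your code (SDK callback, `postMessage`, etc.) is not specified here.

```typescript
// Types mirroring the documented `interaction.renamed` payload variants.
interface RenamedPayloadBasic {
  interactionId: string;
}

interface RenamedPayloadConfidential extends RenamedPayloadBasic {
  title: string; // only present on the confidential variant
}

interface InteractionRenamedEvent {
  event: "interaction.renamed";
  confidential: boolean;
  payload: RenamedPayloadBasic | RenamedPayloadConfidential;
}

// Returns the new title when the confidential payload carries it, else null.
function newTitle(e: InteractionRenamedEvent): string | null {
  return "title" in e.payload ? e.payload.title : null;
}
```

With the two examples above, `newTitle` yields `"Follow-up visit"` for the confidential event and `null` for the non-confidential one.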
# interaction.restored
Source: https://docs.corti.ai/assistant/events/generated/interaction/restored
Emitted when an interaction is restored from archive.
## Event Properties
| Field | Value |
| -------------- | ------------------------ |
| `event` | `"interaction.restored"` |
| `confidential` | `boolean` |
| `payload` | `object` |
**Payload when `confidential` is `false`:**
| Field | Type | Description |
| --------------- | -------- | ------------------------------ |
| `interactionId` | `string` | ID of the restored interaction |
#### Example
```json theme={null}
{
"event": "interaction.restored",
"confidential": false,
"payload": {
"interactionId": "interaction_12345"
}
}
```
**Payload when `confidential` is `true`:**
| Field | Type | Description |
| --------------- | -------- | ------------------------------ |
| `interactionId` | `string` | ID of the restored interaction |
#### Example
```json theme={null}
{
"event": "interaction.restored",
"confidential": true,
"payload": {
"interactionId": "interaction_12345"
}
}
```
# interaction.virtualAudioDisconnected
Source: https://docs.corti.ai/assistant/events/generated/interaction/virtualAudioDisconnected
Emitted when virtual audio disconnects.
## Event Properties
| Field | Value |
| -------------- | ---------------------------------------- |
| `event` | `"interaction.virtualAudioDisconnected"` |
| `confidential` | `boolean` |
| `payload` | `object` |
**Payload when `confidential` is `false`:**
| Field | Type | Description |
| ------------------ | ------------------ | -------------------------------- |
| `interactionId` | `string` | Unique interaction identifier |
| `interactionState` | `InteractionState` | Current state of the interaction |
#### Example
```json theme={null}
{
"event": "interaction.virtualAudioDisconnected",
"confidential": false,
"payload": {
"interactionId": "int_123",
"interactionState": "ongoing"
}
}
```
**Payload when `confidential` is `true`:**
| Field | Type | Description |
| ------------- | --------------------- | --------------------------------------------------------------- |
| `interaction` | `ExternalInteraction` | Interaction context including transcripts, documents, and facts |
**ExternalInteraction properties:**
| Field | Type | Description |
| ----------------- | ------------------------ | --------------------------------------------------- |
| `id` | `string` | Unique interaction identifier |
| `title` | `string \| null` | Interaction title |
| `state` | `InteractionState` | Current state of the interaction |
| `startedAt` | `string (ISO 8601 date)` | When the interaction started (ISO 8601 date string) |
| `transcriptCount` | `number` | Number of transcripts in the interaction |
| `documentCount` | `number` | Number of documents in the interaction |
| `facts` | `Fact[]` | Facts extracted during the interaction |
**Fact properties:**
| Field | Type | Description |
| ------------- | ------------------------------ | -------------------------------------------------------- |
| `id` | `string` | Unique fact identifier |
| `text` | `string` | Fact text content |
| `group` | `string` | Fact group/category name |
| `source` | `"core" \| "system" \| "user"` | Source of the fact: core (AI extracted), system, or user |
| `isDiscarded` | `boolean` | Whether the fact has been discarded |
#### Example
```json theme={null}
{
"event": "interaction.virtualAudioDisconnected",
"confidential": true,
"payload": {
"interaction": {
"id": "int_123",
"title": "Visit",
"state": "ongoing",
"startedAt": "2024-01-01T00:00:00.000+00:00",
"transcriptCount": 0,
"documentCount": 0,
"facts": []
}
}
}
```
# note.added
Source: https://docs.corti.ai/assistant/events/generated/note/added
Emitted when a note is added.
## Event Properties
| Field | Value |
| -------------- | -------------- |
| `event` | `"note.added"` |
| `confidential` | `boolean` |
| `payload` | `object` |
**Payload when `confidential` is `false`:**
| Field | Type | Description |
| ------------------ | ------------------ | -------------------------------- |
| `noteId` | `string` | Unique identifier for the note |
| `group` | `string` | Category the note belongs to |
| `interactionId` | `string` | Unique interaction identifier |
| `interactionState` | `InteractionState` | Current state of the interaction |
#### Example
```json theme={null}
{
"event": "note.added",
"confidential": false,
"payload": {
"noteId": "note_2",
"group": "summary",
"interactionId": "int_123",
"interactionState": "ongoing"
}
}
```
**Payload when `confidential` is `true`:**
| Field | Type | Description |
| ------------- | --------------------- | --------------------------------------------------------------- |
| `noteId` | `string` | Unique identifier for the note |
| `group` | `string` | Category the note belongs to |
| `text` | `string` | Content of the added note |
| `interaction` | `ExternalInteraction` | Interaction context including transcripts, documents, and facts |
**ExternalInteraction properties:**
| Field | Type | Description |
| ----------------- | ------------------------ | --------------------------------------------------- |
| `id` | `string` | Unique interaction identifier |
| `title` | `string \| null` | Interaction title |
| `state` | `InteractionState` | Current state of the interaction |
| `startedAt` | `string (ISO 8601 date)` | When the interaction started (ISO 8601 date string) |
| `transcriptCount` | `number` | Number of transcripts in the interaction |
| `documentCount` | `number` | Number of documents in the interaction |
| `facts` | `Fact[]` | Facts extracted during the interaction |
**Fact properties:**
| Field | Type | Description |
| ------------- | ------------------------------ | -------------------------------------------------------- |
| `id` | `string` | Unique fact identifier |
| `text` | `string` | Fact text content |
| `group` | `string` | Fact group/category name |
| `source` | `"core" \| "system" \| "user"` | Source of the fact: core (AI extracted), system, or user |
| `isDiscarded` | `boolean` | Whether the fact has been discarded |
#### Example
```json theme={null}
{
"event": "note.added",
"confidential": true,
"payload": {
"noteId": "note_2",
"group": "summary",
"text": "...",
"interaction": {
"id": "int_123",
"title": "Visit",
"state": "ongoing",
"startedAt": "2024-01-01T00:00:00.000+00:00",
"transcriptCount": 0,
"documentCount": 0,
"facts": []
}
}
}
```
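Since both `note.added` variants carry `noteId` and `group`, a host app can keep per-group bookkeeping without depending on the confidential-only fields. The grouping store below is a hypothetical example, not part of the event contract.

```typescript
// Fields common to both `note.added` payload variants.
interface NoteAddedPayload {
  noteId: string;
  group: string;
  text?: string; // present only on the confidential variant
}

// Appends the note's id to the list tracked for its group.
function addNote(
  groups: Map<string, string[]>,
  p: NoteAddedPayload
): Map<string, string[]> {
  const ids = groups.get(p.group) ?? [];
  groups.set(p.group, [...ids, p.noteId]);
  return groups;
}
```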
# note.dictationStarted
Source: https://docs.corti.ai/assistant/events/generated/note/dictationStarted
Emitted when note dictation starts.
## Event Properties
| Field | Value |
| -------------- | ------------------------- |
| `event` | `"note.dictationStarted"` |
| `confidential` | `boolean` |
| `payload` | `object` |
**Payload when `confidential` is `false`:**
| Field | Type | Description |
| ------------------ | ------------------ | --------------------------------- |
| `noteId` | `string` | Which note dictation is targeting |
| `group` | `string` | Category the note belongs to |
| `interactionId` | `string` | Unique interaction identifier |
| `interactionState` | `InteractionState` | Current state of the interaction |
#### Example
```json theme={null}
{
"event": "note.dictationStarted",
"confidential": false,
"payload": {
"noteId": "note_7",
"group": "summary",
"interactionId": "int_123",
"interactionState": "ongoing"
}
}
```
**Payload when `confidential` is `true`:**
| Field | Type | Description |
| ------------- | --------------------- | --------------------------------------------------------------- |
| `noteId` | `string` | Which note dictation is targeting |
| `group` | `string` | Category the note belongs to |
| `interaction` | `ExternalInteraction` | Interaction context including transcripts, documents, and facts |
**ExternalInteraction properties:**
| Field | Type | Description |
| ----------------- | ------------------------ | --------------------------------------------------- |
| `id` | `string` | Unique interaction identifier |
| `title` | `string \| null` | Interaction title |
| `state` | `InteractionState` | Current state of the interaction |
| `startedAt` | `string (ISO 8601 date)` | When the interaction started (ISO 8601 date string) |
| `transcriptCount` | `number` | Number of transcripts in the interaction |
| `documentCount` | `number` | Number of documents in the interaction |
| `facts` | `Fact[]` | Facts extracted during the interaction |
**Fact properties:**
| Field | Type | Description |
| ------------- | ------------------------------ | -------------------------------------------------------- |
| `id` | `string` | Unique fact identifier |
| `text` | `string` | Fact text content |
| `group` | `string` | Fact group/category name |
| `source` | `"core" \| "system" \| "user"` | Source of the fact: core (AI extracted), system, or user |
| `isDiscarded` | `boolean` | Whether the fact has been discarded |
#### Example
```json theme={null}
{
"event": "note.dictationStarted",
"confidential": true,
"payload": {
"noteId": "note_7",
"group": "summary",
"interaction": {
"id": "int_123",
"title": "Visit",
"state": "ongoing",
"startedAt": "2024-01-01T00:00:00.000+00:00",
"transcriptCount": 0,
"documentCount": 0,
"facts": []
}
}
}
```
# note.dictationStopped
Source: https://docs.corti.ai/assistant/events/generated/note/dictationStopped
Emitted when note dictation stops.
## Event Properties
| Field | Value |
| -------------- | ------------------------- |
| `event` | `"note.dictationStopped"` |
| `confidential` | `boolean` |
| `payload` | `object` |
**Payload when `confidential` is `false`:**
| Field | Type | Description |
| ------------------ | ------------------ | -------------------------------------------- |
| `noteId` | `string` | Which note was being dictated into |
| `group` | `string` | Category the note belongs to |
| `wordsDictated` | `number` | Number of words captured during this session |
| `interactionId` | `string` | Unique interaction identifier |
| `interactionState` | `InteractionState` | Current state of the interaction |
#### Example
```json theme={null}
{
"event": "note.dictationStopped",
"confidential": false,
"payload": {
"noteId": "note_8",
"group": "summary",
"wordsDictated": 32,
"interactionId": "int_123",
"interactionState": "ongoing"
}
}
```
**Payload when `confidential` is `true`:**
| Field | Type | Description |
| --------------- | --------------------- | --------------------------------------------------------------- |
| `noteId` | `string` | Which note was being dictated into |
| `group` | `string` | Category the note belongs to |
| `text` | `string` | Updated content after dictation |
| `wordsDictated` | `number` | Number of words captured during this session |
| `interaction` | `ExternalInteraction` | Interaction context including transcripts, documents, and facts |
**ExternalInteraction properties:**
| Field | Type | Description |
| ----------------- | ------------------------ | --------------------------------------------------- |
| `id` | `string` | Unique interaction identifier |
| `title` | `string \| null` | Interaction title |
| `state` | `InteractionState` | Current state of the interaction |
| `startedAt` | `string (ISO 8601 date)` | When the interaction started (ISO 8601 date string) |
| `transcriptCount` | `number` | Number of transcripts in the interaction |
| `documentCount` | `number` | Number of documents in the interaction |
| `facts` | `Fact[]` | Facts extracted during the interaction |
**Fact properties:**
| Field | Type | Description |
| ------------- | ------------------------------ | -------------------------------------------------------- |
| `id` | `string` | Unique fact identifier |
| `text` | `string` | Fact text content |
| `group` | `string` | Fact group/category name |
| `source` | `"core" \| "system" \| "user"` | Source of the fact: core (AI extracted), system, or user |
| `isDiscarded` | `boolean` | Whether the fact has been discarded |
#### Example
```json theme={null}
{
"event": "note.dictationStopped",
"confidential": true,
"payload": {
"noteId": "note_8",
"group": "summary",
"text": "...",
"wordsDictated": 32,
"interaction": {
"id": "int_123",
"title": "Visit",
"state": "ongoing",
"startedAt": "2024-01-01T00:00:00.000+00:00",
"transcriptCount": 0,
"documentCount": 0,
"facts": []
}
}
}
```
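Both `note.dictationStopped` variants include `wordsDictated`, so a running word total can be kept regardless of the confidentiality setting. Aggregation like this is an illustration layered on top of the events, not something the API provides.

```typescript
// Fields common to both `note.dictationStopped` payload variants.
interface DictationStoppedPayload {
  noteId: string;
  group: string;
  wordsDictated: number;
}

// Sums words captured across a series of dictation sessions.
function totalWordsDictated(events: DictationStoppedPayload[]): number {
  return events.reduce((sum, e) => sum + e.wordsDictated, 0);
}
```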
# note.discarded
Source: https://docs.corti.ai/assistant/events/generated/note/discarded
Emitted when a note is discarded.
## Event Properties
| Field | Value |
| -------------- | ------------------ |
| `event` | `"note.discarded"` |
| `confidential` | `boolean` |
| `payload` | `object` |
**Payload when `confidential` is `false`:**
| Field | Type | Description |
| ------------------ | ------------------ | -------------------------------- |
| `noteId` | `string` | Unique identifier for the note |
| `group` | `string` | Category the note belongs to |
| `interactionId` | `string` | Unique interaction identifier |
| `interactionState` | `InteractionState` | Current state of the interaction |
#### Example
```json theme={null}
{
"event": "note.discarded",
"confidential": false,
"payload": {
"noteId": "note_5",
"group": "summary",
"interactionId": "int_123",
"interactionState": "ongoing"
}
}
```
**Payload when `confidential` is `true`:**
| Field | Type | Description |
| ------------- | --------------------- | --------------------------------------------------------------- |
| `noteId` | `string` | Unique identifier for the note |
| `group` | `string` | Category the note belongs to |
| `text` | `string` | Content of the discarded note |
| `interaction` | `ExternalInteraction` | Interaction context including transcripts, documents, and facts |
**ExternalInteraction properties:**
| Field | Type | Description |
| ----------------- | ------------------------ | --------------------------------------------------- |
| `id` | `string` | Unique interaction identifier |
| `title` | `string \| null` | Interaction title |
| `state` | `InteractionState` | Current state of the interaction |
| `startedAt` | `string (ISO 8601 date)` | When the interaction started (ISO 8601 date string) |
| `transcriptCount` | `number` | Number of transcripts in the interaction |
| `documentCount` | `number` | Number of documents in the interaction |
| `facts` | `Fact[]` | Facts extracted during the interaction |
**Fact properties:**
| Field | Type | Description |
| ------------- | ------------------------------ | -------------------------------------------------------- |
| `id` | `string` | Unique fact identifier |
| `text` | `string` | Fact text content |
| `group` | `string` | Fact group/category name |
| `source` | `"core" \| "system" \| "user"` | Source of the fact: core (AI extracted), system, or user |
| `isDiscarded` | `boolean` | Whether the fact has been discarded |
#### Example
```json theme={null}
{
"event": "note.discarded",
"confidential": true,
"payload": {
"noteId": "note_5",
"group": "summary",
"text": "...",
"interaction": {
"id": "int_123",
"title": "Visit",
"state": "ongoing",
"startedAt": "2024-01-01T00:00:00.000+00:00",
"transcriptCount": 0,
"documentCount": 0,
"facts": []
}
}
}
```
# note.edited
Source: https://docs.corti.ai/assistant/events/generated/note/edited
Emitted when a note is edited.
## Event Properties
| Field | Value |
| -------------- | --------------- |
| `event` | `"note.edited"` |
| `confidential` | `boolean` |
| `payload` | `object` |
**Payload when `confidential` is `false`:**
| Field | Type | Description |
| ------------------ | ------------------ | -------------------------------- |
| `noteId` | `string` | Unique identifier for the note |
| `group` | `string` | Category the note belongs to |
| `interactionId` | `string` | Unique interaction identifier |
| `interactionState` | `InteractionState` | Current state of the interaction |
#### Example
```json theme={null}
{
"event": "note.edited",
"confidential": false,
"payload": {
"noteId": "note_3",
"group": "summary",
"interactionId": "int_123",
"interactionState": "ongoing"
}
}
```
**Payload when `confidential` is `true`:**
| Field | Type | Description |
| ------------- | --------------------- | --------------------------------------------------------------- |
| `noteId` | `string` | Unique identifier for the note |
| `group` | `string` | Category the note belongs to |
| `text` | `string` | Updated content after editing |
| `interaction` | `ExternalInteraction` | Interaction context including transcripts, documents, and facts |
**ExternalInteraction properties:**
| Field | Type | Description |
| ----------------- | ------------------------ | --------------------------------------------------- |
| `id` | `string` | Unique interaction identifier |
| `title` | `string \| null` | Interaction title |
| `state` | `InteractionState` | Current state of the interaction |
| `startedAt` | `string (ISO 8601 date)` | When the interaction started (ISO 8601 date string) |
| `transcriptCount` | `number` | Number of transcripts in the interaction |
| `documentCount` | `number` | Number of documents in the interaction |
| `facts` | `Fact[]` | Facts extracted during the interaction |
**Fact properties:**
| Field | Type | Description |
| ------------- | ------------------------------ | -------------------------------------------------------- |
| `id` | `string` | Unique fact identifier |
| `text` | `string` | Fact text content |
| `group` | `string` | Fact group/category name |
| `source` | `"core" \| "system" \| "user"` | Source of the fact: core (AI extracted), system, or user |
| `isDiscarded` | `boolean` | Whether the fact has been discarded |
#### Example
```json theme={null}
{
"event": "note.edited",
"confidential": true,
"payload": {
"noteId": "note_3",
"group": "summary",
"text": "...",
"interaction": {
"id": "int_123",
"title": "Visit",
"state": "ongoing",
"startedAt": "2024-01-01T00:00:00.000+00:00",
"transcriptCount": 0,
"documentCount": 0,
"facts": []
}
}
}
```
# note.generated
Source: https://docs.corti.ai/assistant/events/generated/note/generated
Emitted when a new note is generated during streaming.
## Event Properties
| Field | Value |
| -------------- | ------------------ |
| `event` | `"note.generated"` |
| `confidential` | `boolean` |
| `payload` | `object` |
**Payload when `confidential` is `false`:**
| Field | Type | Description |
| ------------------ | ------------------ | -------------------------------- |
| `noteId` | `string` | Unique identifier for the note |
| `group` | `string` | Category the note belongs to |
| `interactionId` | `string` | Unique interaction identifier |
| `interactionState` | `InteractionState` | Current state of the interaction |
#### Example
```json theme={null}
{
"event": "note.generated",
"confidential": false,
"payload": {
"noteId": "note_1",
"group": "summary",
"interactionId": "int_123",
"interactionState": "ongoing"
}
}
```
**Payload when `confidential` is `true`:**
| Field | Type | Description |
| ------------- | --------------------- | --------------------------------------------------------------- |
| `noteId` | `string` | Unique identifier for the note |
| `group` | `string` | Category the note belongs to |
| `text` | `string` | Content of the generated note |
| `interaction` | `ExternalInteraction` | Interaction context including transcripts, documents, and facts |
**ExternalInteraction properties:**
| Field | Type | Description |
| ----------------- | ------------------------ | --------------------------------------------------- |
| `id` | `string` | Unique interaction identifier |
| `title` | `string \| null` | Interaction title |
| `state` | `InteractionState` | Current state of the interaction |
| `startedAt` | `string (ISO 8601 date)` | When the interaction started (ISO 8601 date string) |
| `transcriptCount` | `number` | Number of transcripts in the interaction |
| `documentCount` | `number` | Number of documents in the interaction |
| `facts` | `Fact[]` | Facts extracted during the interaction |
**Fact properties:**
| Field | Type | Description |
| ------------- | ------------------------------ | -------------------------------------------------------- |
| `id` | `string` | Unique fact identifier |
| `text` | `string` | Fact text content |
| `group` | `string` | Fact group/category name |
| `source` | `"core" \| "system" \| "user"` | Source of the fact: core (AI extracted), system, or user |
| `isDiscarded` | `boolean` | Whether the fact has been discarded |
#### Example
```json theme={null}
{
"event": "note.generated",
"confidential": true,
"payload": {
"noteId": "note_1",
"group": "summary",
"text": "...",
"interaction": {
"id": "int_123",
"title": "Visit",
"state": "ongoing",
"startedAt": "2024-01-01T00:00:00.000+00:00",
"transcriptCount": 0,
"documentCount": 0,
"facts": []
}
}
}
```
# note.moved
Source: https://docs.corti.ai/assistant/events/generated/note/moved
Emitted when a note is moved to a different group.
## Event Properties
| Field | Value |
| -------------- | -------------- |
| `event` | `"note.moved"` |
| `confidential` | `boolean` |
| `payload` | `object` |
**Payload when `confidential` is `false`:**
| Field | Type | Description |
| ------------------ | ------------------ | -------------------------------- |
| `noteId` | `string` | Unique identifier for the note |
| `oldGroup` | `string` | Previous category before move |
| `newGroup` | `string` | New category after move |
| `interactionId` | `string` | Unique interaction identifier |
| `interactionState` | `InteractionState` | Current state of the interaction |
#### Example
```json theme={null}
{
"event": "note.moved",
"confidential": false,
"payload": {
"noteId": "note_9",
"oldGroup": "summary",
"newGroup": "details",
"interactionId": "int_123",
"interactionState": "ongoing"
}
}
```
**Payload when `confidential` is `true`:**
| Field | Type | Description |
| ------------- | --------------------- | --------------------------------------------------------------- |
| `noteId` | `string` | Unique identifier for the note |
| `oldGroup` | `string` | Previous category before move |
| `newGroup` | `string` | New category after move |
| `text` | `string` | Content of the moved note |
| `interaction` | `ExternalInteraction` | Interaction context including transcripts, documents, and facts |
**ExternalInteraction properties:**
| Field | Type | Description |
| ----------------- | ------------------------ | --------------------------------------------------- |
| `id` | `string` | Unique interaction identifier |
| `title` | `string \| null` | Interaction title |
| `state` | `InteractionState` | Current state of the interaction |
| `startedAt` | `string (ISO 8601 date)` | When the interaction started (ISO 8601 date string) |
| `transcriptCount` | `number` | Number of transcripts in the interaction |
| `documentCount` | `number` | Number of documents in the interaction |
| `facts` | `Fact[]` | Facts extracted during the interaction |
**Fact properties:**
| Field | Type | Description |
| ------------- | ------------------------------ | -------------------------------------------------------- |
| `id` | `string` | Unique fact identifier |
| `text` | `string` | Fact text content |
| `group` | `string` | Fact group/category name |
| `source` | `"core" \| "system" \| "user"` | Source of the fact: core (AI extracted), system, or user |
| `isDiscarded` | `boolean` | Whether the fact has been discarded |
#### Example
```json theme={null}
{
"event": "note.moved",
"confidential": true,
"payload": {
"noteId": "note_9",
"oldGroup": "summary",
"newGroup": "details",
"text": "The patient reported mild headaches.",
"interaction": {
"id": "int_123",
"title": "Visit",
"state": "ongoing",
"startedAt": "2024-01-01T00:00:00.000+00:00",
"transcriptCount": 0,
"documentCount": 0,
"facts": []
}
}
}
```
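Applying a `note.moved` event to local state means removing the note from `oldGroup` and appending it to `newGroup`. The field names follow the payload tables above; the state shape itself is a hypothetical example.

```typescript
// Fields common to both `note.moved` payload variants.
interface NoteMovedPayload {
  noteId: string;
  oldGroup: string;
  newGroup: string;
}

// Returns a new grouping with the note relocated; does not mutate the input.
function applyMove(
  groups: Record<string, string[]>,
  p: NoteMovedPayload
): Record<string, string[]> {
  const next: Record<string, string[]> = { ...groups };
  next[p.oldGroup] = (next[p.oldGroup] ?? []).filter((id) => id !== p.noteId);
  next[p.newGroup] = [...(next[p.newGroup] ?? []), p.noteId];
  return next;
}
```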
# note.restored
Source: https://docs.corti.ai/assistant/events/generated/note/restored
Emitted when a note is restored.
## Event Properties
| Field | Value |
| -------------- | ----------------- |
| `event` | `"note.restored"` |
| `confidential` | `boolean` |
| `payload` | `object` |
**Payload when `confidential` is `false`:**
| Field | Type | Description |
| ------------------ | ------------------ | -------------------------------- |
| `noteId` | `string` | Unique identifier for the note |
| `group` | `string` | Category the note belongs to |
| `interactionId` | `string` | Unique interaction identifier |
| `interactionState` | `InteractionState` | Current state of the interaction |
#### Example
```json theme={null}
{
"event": "note.restored",
"confidential": false,
"payload": {
"noteId": "note_6",
"group": "summary",
"interactionId": "int_123",
"interactionState": "ongoing"
}
}
```
**Payload when `confidential` is `true`:**
| Field | Type | Description |
| ------------- | --------------------- | --------------------------------------------------------------- |
| `noteId` | `string` | Unique identifier for the note |
| `group` | `string` | Category the note belongs to |
| `text` | `string` | Content of the restored note |
| `interaction` | `ExternalInteraction` | Interaction context including transcripts, documents, and facts |
**ExternalInteraction properties:**
| Field | Type | Description |
| ----------------- | ------------------------ | --------------------------------------------------- |
| `id` | `string` | Unique interaction identifier |
| `title` | `string \| null` | Interaction title |
| `state` | `InteractionState` | Current state of the interaction |
| `startedAt` | `string (ISO 8601 date)` | When the interaction started (ISO 8601 date string) |
| `transcriptCount` | `number` | Number of transcripts in the interaction |
| `documentCount` | `number` | Number of documents in the interaction |
| `facts` | `Fact[]` | Facts extracted during the interaction |
**Fact properties:**
| Field | Type | Description |
| ------------- | ------------------------------ | -------------------------------------------------------- |
| `id` | `string` | Unique fact identifier |
| `text` | `string` | Fact text content |
| `group` | `string` | Fact group/category name |
| `source` | `"core" \| "system" \| "user"` | Source of the fact: core (AI extracted), system, or user |
| `isDiscarded` | `boolean` | Whether the fact has been discarded |
#### Example
```json theme={null}
{
"event": "note.restored",
"confidential": true,
"payload": {
"noteId": "note_6",
"group": "summary",
"text": "...",
"interaction": {
"id": "int_123",
"title": "Visit",
"state": "ongoing",
"startedAt": "2024-01-01T00:00:00.000+00:00",
"transcriptCount": 0,
"documentCount": 0,
"facts": []
}
}
}
```
# settings.advancedSettingsEnabled
Source: https://docs.corti.ai/assistant/events/generated/onboarding-and-preferences/advancedSettingsEnabled
Emitted when advanced settings are enabled.
## Event Properties
| Field | Value |
| -------------- | ------------------------------------ |
| `event` | `"settings.advancedSettingsEnabled"` |
| `confidential` | `boolean` |
| `payload` | `object` |
No payload properties.
#### Example
```json theme={null}
{
"event": "settings.advancedSettingsEnabled",
"confidential": false,
"payload": {}
}
```
The payload is likewise empty when `confidential` is `true`.
#### Example
```json theme={null}
{
"event": "settings.advancedSettingsEnabled",
"confidential": true,
"payload": {}
}
```
# settings.onboardingCompleted
Source: https://docs.corti.ai/assistant/events/generated/onboarding-and-preferences/onboardingCompleted
Emitted when onboarding completes.
## Event Properties
| Field | Value |
| -------------- | -------------------------------- |
| `event` | `"settings.onboardingCompleted"` |
| `confidential` | `boolean` |
| `payload` | `object` |
**Payload when `confidential` is `false`:**
| Field | Type | Description |
| --------------- | -------- | --------------------------------- |
| `countryCode` | `string` | User's selected country |
| `languageCode` | `string` | User's selected language |
| `specialtyName` | `string` | User's selected medical specialty |
#### Example
```json theme={null}
{
"event": "settings.onboardingCompleted",
"confidential": false,
"payload": {
"countryCode": "DK",
"languageCode": "en",
"specialtyName": "cardiology"
}
}
```
**Payload when `confidential` is `true`:**
| Field | Type | Description |
| --------------- | -------- | --------------------------------- |
| `countryCode` | `string` | User's selected country |
| `languageCode` | `string` | User's selected language |
| `specialtyName` | `string` | User's selected medical specialty |
#### Example
```json theme={null}
{
"event": "settings.onboardingCompleted",
"confidential": true,
"payload": {
"countryCode": "DK",
"languageCode": "en",
"specialtyName": "cardiology"
}
}
```
# settings.onboardingContinued
Source: https://docs.corti.ai/assistant/events/generated/onboarding-and-preferences/onboardingContinued
Emitted when onboarding continues.
## Event Properties
| Field | Value |
| -------------- | -------------------------------- |
| `event` | `"settings.onboardingContinued"` |
| `confidential` | `boolean` |
| `payload` | `object` |
**Payload when `confidential` is `false`:**
| Field | Type | Description |
| ------ | ------------------------------------------ | ------------------------------- |
| `step` | `"country" \| "specialty" \| "microphone"` | Which step the user advanced to |
#### Example
```json theme={null}
{
"event": "settings.onboardingContinued",
"confidential": false,
"payload": {
"step": "microphone"
}
}
```
**Payload when `confidential` is `true`:**
| Field | Type | Description |
| ------ | ------------------------------------------ | ------------------------------- |
| `step` | `"country" \| "specialty" \| "microphone"` | Which step the user advanced to |
#### Example
```json theme={null}
{
"event": "settings.onboardingContinued",
"confidential": true,
"payload": {
"step": "microphone"
}
}
```
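Because `step` is a closed union, a runtime guard can reject unexpected values when events cross a serialization boundary. The union members come from the table above; everything else here is illustrative.

```typescript
// The documented values of `settings.onboardingContinued`'s `step` field.
type OnboardingStep = "country" | "specialty" | "microphone";

// Runtime type guard narrowing an arbitrary string to OnboardingStep.
function isOnboardingStep(value: string): value is OnboardingStep {
  return value === "country" || value === "specialty" || value === "microphone";
}
```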
# settings.onboardingStarted
Source: https://docs.corti.ai/assistant/events/generated/onboarding-and-preferences/onboardingStarted
Emitted when onboarding starts.
## Event Properties
| Field | Value |
| -------------- | ------------------------------ |
| `event` | `"settings.onboardingStarted"` |
| `confidential` | `boolean` |
| `payload` | `object` |
No payload properties.
#### Example
```json theme={null}
{
"event": "settings.onboardingStarted",
"confidential": false,
"payload": {}
}
```
The payload is likewise empty when `confidential` is `true`.
#### Example
```json theme={null}
{
"event": "settings.onboardingStarted",
"confidential": true,
"payload": {}
}
```
# settings.onboardingValueChanged
Source: https://docs.corti.ai/assistant/events/generated/onboarding-and-preferences/onboardingValueChanged
Emitted when an onboarding value changes.
## Event Properties
| Field | Value |
| -------------- | ----------------------------------- |
| `event` | `"settings.onboardingValueChanged"` |
| `confidential` | `boolean` |
| `payload` | `object` |
**Payload when `confidential` is `false`:**
| Field | Type | Description |
| -------------- | -------- | -------------------------- |
| `settingName` | `string` | Which setting was modified |
| `settingValue` | `string` | New value for the setting |
#### Example
```json theme={null}
{
"event": "settings.onboardingValueChanged",
"confidential": false,
"payload": {
"settingName": "language",
"settingValue": "en"
}
}
```
**Payload when `confidential` is `true`:**
| Field | Type | Description |
| -------------- | -------- | -------------------------- |
| `settingName` | `string` | Which setting was modified |
| `settingValue` | `string` | New value for the setting |
#### Example
```json theme={null}
{
"event": "settings.onboardingValueChanged",
"confidential": true,
"payload": {
"settingName": "language",
"settingValue": "en"
}
}
```
# settings.pwaInstalled
Source: https://docs.corti.ai/assistant/events/generated/onboarding-and-preferences/pwaInstalled
Emitted when the PWA is installed.
## Event Properties
| Field | Value |
| -------------- | ------------------------- |
| `event` | `"settings.pwaInstalled"` |
| `confidential` | `boolean` |
| `payload` | `object` |
No payload properties.
#### Example
```json theme={null}
{
"event": "settings.pwaInstalled",
"confidential": false,
"payload": {}
}
```
The payload is likewise empty when `confidential` is `true`.
#### Example
```json theme={null}
{
"event": "settings.pwaInstalled",
"confidential": true,
"payload": {}
}
```
# settings.reset
Source: https://docs.corti.ai/assistant/events/generated/onboarding-and-preferences/reset
Emitted when settings are reset.
## Event Properties
| Field | Value |
| -------------- | ------------------ |
| `event` | `"settings.reset"` |
| `confidential` | `boolean` |
| `payload` | `object` |
No payload properties.
#### Example
```json theme={null}
{
"event": "settings.reset",
"confidential": false,
"payload": {}
}
```
The payload is likewise empty when `confidential` is `true`.
#### Example
```json theme={null}
{
"event": "settings.reset",
"confidential": true,
"payload": {}
}
```
# settings.userSettingsValueChanged
Source: https://docs.corti.ai/assistant/events/generated/onboarding-and-preferences/userSettingsValueChanged
Emitted when a user setting value changes.
## Event Properties
| Field | Value |
| -------------- | ------------------------------------- |
| `event` | `"settings.userSettingsValueChanged"` |
| `confidential` | `boolean` |
| `payload` | `object` |
**Payload when `confidential` is `false`:**
| Field | Type | Description |
| -------------- | -------- | -------------------------- |
| `settingName` | `string` | Which setting was modified |
| `settingValue` | `string` | New value for the setting |
#### Example
```json theme={null}
{
"event": "settings.userSettingsValueChanged",
"confidential": false,
"payload": {
"settingName": "language",
"settingValue": "en"
}
}
```
| Field | Type | Description |
| -------------- | -------- | -------------------------- |
| `settingName` | `string` | Which setting was modified |
| `settingValue` | `string` | New value for the setting |
#### Example
```json theme={null}
{
"event": "settings.userSettingsValueChanged",
"confidential": true,
"payload": {
"settingName": "language",
"settingValue": "en"
}
}
```
# recording.languageChanged
Source: https://docs.corti.ai/assistant/events/generated/recording/languageChanged
Emitted when recording language changes.
## Event Properties
| Field | Value |
| -------------- | ----------------------------- |
| `event` | `"recording.languageChanged"` |
| `confidential` | `boolean` |
| `payload` | `object` |
| Field | Type | Description |
| ------------------ | ------------------ | --------------------------------------- |
| `language` | `string` | New language selected for transcription |
| `interactionId` | `string` | Unique interaction identifier |
| `interactionState` | `InteractionState` | Current state of the interaction |
#### Example
```json theme={null}
{
"event": "recording.languageChanged",
"confidential": false,
"payload": {
"language": "en",
"interactionId": "int_123",
"interactionState": "ongoing"
}
}
```
| Field | Type | Description |
| ------------- | --------------------- | --------------------------------------------------------------- |
| `language` | `string` | New language selected for transcription |
| `interaction` | `ExternalInteraction` | Interaction context including transcripts, documents, and facts |
| Field | Type | Description |
| ----------------- | ------------------------ | --------------------------------------------------- |
| `id` | `string` | Unique interaction identifier |
| `title` | `string \| null` | Interaction title |
| `state` | `InteractionState` | Current state of the interaction |
| `startedAt` | `string (ISO 8601 date)` | When the interaction started (ISO 8601 date string) |
| `transcriptCount` | `number` | Number of transcripts in the interaction |
| `documentCount` | `number` | Number of documents in the interaction |
| `facts` | `Fact[]` | Facts extracted during the interaction |
**Fact properties:**
| Field | Type | Description |
| ------------- | ------------------------------ | -------------------------------------------------------- |
| `id` | `string` | Unique fact identifier |
| `text` | `string` | Fact text content |
| `group` | `string` | Fact group/category name |
| `source` | `"core" \| "system" \| "user"` | Source of the fact: core (AI extracted), system, or user |
| `isDiscarded` | `boolean` | Whether the fact has been discarded |
#### Example
```json theme={null}
{
"event": "recording.languageChanged",
"confidential": true,
"payload": {
"language": "en",
"interaction": {
"id": "int_123",
"title": "Visit",
"state": "ongoing",
"startedAt": "2024-01-01T00:00:00.000+00:00",
"transcriptCount": 0,
"documentCount": 0,
"facts": []
}
}
}
```
# recording.modeChanged
Source: https://docs.corti.ai/assistant/events/generated/recording/modeChanged
Emitted when recording mode changes.
## Event Properties
| Field | Value |
| -------------- | ------------------------- |
| `event` | `"recording.modeChanged"` |
| `confidential` | `boolean` |
| `payload` | `object` |
| Field | Type | Description |
| ------------------ | -------------------------- | -------------------------------- |
| `mode` | `"virtual" \| "in-person"` | New recording mode selected |
| `interactionId` | `string` | Unique interaction identifier |
| `interactionState` | `InteractionState` | Current state of the interaction |
#### Example
```json theme={null}
{
"event": "recording.modeChanged",
"confidential": false,
"payload": {
"mode": "in-person",
"interactionId": "int_123",
"interactionState": "ongoing"
}
}
```
| Field | Type | Description |
| ------------- | -------------------------- | --------------------------------------------------------------- |
| `mode` | `"virtual" \| "in-person"` | New recording mode selected |
| `interaction` | `ExternalInteraction` | Interaction context including transcripts, documents, and facts |
| Field | Type | Description |
| ----------------- | ------------------------ | --------------------------------------------------- |
| `id` | `string` | Unique interaction identifier |
| `title` | `string \| null` | Interaction title |
| `state` | `InteractionState` | Current state of the interaction |
| `startedAt` | `string (ISO 8601 date)` | When the interaction started (ISO 8601 date string) |
| `transcriptCount` | `number` | Number of transcripts in the interaction |
| `documentCount` | `number` | Number of documents in the interaction |
| `facts` | `Fact[]` | Facts extracted during the interaction |
**Fact properties:**
| Field | Type | Description |
| ------------- | ------------------------------ | -------------------------------------------------------- |
| `id` | `string` | Unique fact identifier |
| `text` | `string` | Fact text content |
| `group` | `string` | Fact group/category name |
| `source` | `"core" \| "system" \| "user"` | Source of the fact: core (AI extracted), system, or user |
| `isDiscarded` | `boolean` | Whether the fact has been discarded |
#### Example
```json theme={null}
{
"event": "recording.modeChanged",
"confidential": true,
"payload": {
"mode": "in-person",
"interaction": {
"id": "int_123",
"title": "Visit",
"state": "ongoing",
"startedAt": "2024-01-01T00:00:00.000+00:00",
"transcriptCount": 0,
"documentCount": 0,
"facts": []
}
}
}
```
# recording.started
Source: https://docs.corti.ai/assistant/events/generated/recording/started
Emitted when recording starts.
## Event Properties
| Field | Value |
| -------------- | --------------------- |
| `event` | `"recording.started"` |
| `confidential` | `boolean` |
| `payload` | `object` |
| Field | Type | Description |
| ------------------ | -------------------------- | -------------------------------- |
| `mode` | `"virtual" \| "in-person"` | Recording mode at start |
| `language` | `string` | Language used for transcription |
| `interactionId` | `string` | Unique interaction identifier |
| `interactionState` | `InteractionState` | Current state of the interaction |
#### Example
```json theme={null}
{
"event": "recording.started",
"confidential": false,
"payload": {
"mode": "virtual",
"language": "en",
"interactionId": "int_123",
"interactionState": "ongoing"
}
}
```
| Field | Type | Description |
| ------------- | -------------------------- | --------------------------------------------------------------- |
| `mode` | `"virtual" \| "in-person"` | Recording mode at start |
| `language` | `string` | Language used for transcription |
| `interaction` | `ExternalInteraction` | Interaction context including transcripts, documents, and facts |
| Field | Type | Description |
| ----------------- | ------------------------ | --------------------------------------------------- |
| `id` | `string` | Unique interaction identifier |
| `title` | `string \| null` | Interaction title |
| `state` | `InteractionState` | Current state of the interaction |
| `startedAt` | `string (ISO 8601 date)` | When the interaction started (ISO 8601 date string) |
| `transcriptCount` | `number` | Number of transcripts in the interaction |
| `documentCount` | `number` | Number of documents in the interaction |
| `facts` | `Fact[]` | Facts extracted during the interaction |
**Fact properties:**
| Field | Type | Description |
| ------------- | ------------------------------ | -------------------------------------------------------- |
| `id` | `string` | Unique fact identifier |
| `text` | `string` | Fact text content |
| `group` | `string` | Fact group/category name |
| `source` | `"core" \| "system" \| "user"` | Source of the fact: core (AI extracted), system, or user |
| `isDiscarded` | `boolean` | Whether the fact has been discarded |
#### Example
```json theme={null}
{
"event": "recording.started",
"confidential": true,
"payload": {
"mode": "virtual",
"language": "en",
"interaction": {
"id": "int_123",
"title": "Visit",
"state": "ongoing",
"startedAt": "2024-01-01T00:00:00.000+00:00",
"transcriptCount": 0,
"documentCount": 0,
"facts": []
}
}
}
```
# recording.stopped
Source: https://docs.corti.ai/assistant/events/generated/recording/stopped
Emitted when recording stops.
## Event Properties
| Field | Value |
| -------------- | --------------------- |
| `event` | `"recording.stopped"` |
| `confidential` | `boolean` |
| `payload` | `object` |
| Field              | Type               | Description                                       |
| ------------------ | ------------------ | ------------------------------------------------- |
| `duration`         | `number`           | Duration of the recording session before stopping |
| `interactionId`    | `string`           | Unique interaction identifier                     |
| `interactionState` | `InteractionState` | Current state of the interaction                  |
#### Example
```json theme={null}
{
"event": "recording.stopped",
"confidential": false,
"payload": {
"duration": 12,
"interactionId": "int_123",
"interactionState": "ongoing"
}
}
```
| Field         | Type                  | Description                                                     |
| ------------- | --------------------- | --------------------------------------------------------------- |
| `duration`    | `number`              | Duration of the recording session before stopping               |
| `interaction` | `ExternalInteraction` | Interaction context including transcripts, documents, and facts |
| Field | Type | Description |
| ----------------- | ------------------------ | --------------------------------------------------- |
| `id` | `string` | Unique interaction identifier |
| `title` | `string \| null` | Interaction title |
| `state` | `InteractionState` | Current state of the interaction |
| `startedAt` | `string (ISO 8601 date)` | When the interaction started (ISO 8601 date string) |
| `transcriptCount` | `number` | Number of transcripts in the interaction |
| `documentCount` | `number` | Number of documents in the interaction |
| `facts` | `Fact[]` | Facts extracted during the interaction |
**Fact properties:**
| Field | Type | Description |
| ------------- | ------------------------------ | -------------------------------------------------------- |
| `id` | `string` | Unique fact identifier |
| `text` | `string` | Fact text content |
| `group` | `string` | Fact group/category name |
| `source` | `"core" \| "system" \| "user"` | Source of the fact: core (AI extracted), system, or user |
| `isDiscarded` | `boolean` | Whether the fact has been discarded |
#### Example
```json theme={null}
{
"event": "recording.stopped",
"confidential": true,
"payload": {
"duration": 12,
"interaction": {
"id": "int_123",
"title": "Visit",
"state": "ongoing",
"startedAt": "2024-01-01T00:00:00.000+00:00",
"transcriptCount": 0,
"documentCount": 0,
"facts": []
}
}
}
```
# recording.transcriptReceived
Source: https://docs.corti.ai/assistant/events/generated/recording/transcriptReceived
Emitted when a transcript is received.
## Event Properties
| Field | Value |
| -------------- | -------------------------------- |
| `event` | `"recording.transcriptReceived"` |
| `confidential` | `boolean` |
| `payload` | `object` |
| Field | Type | Description |
| ------------------ | ------------------ | --------------------------------- |
| `wordCount` | `number` | Number of words in the transcript |
| `interactionId` | `string` | Unique interaction identifier |
| `interactionState` | `InteractionState` | Current state of the interaction |
#### Example
```json theme={null}
{
"event": "recording.transcriptReceived",
"confidential": false,
"payload": {
"wordCount": 42,
"interactionId": "int_123",
"interactionState": "ongoing"
}
}
```
| Field | Type | Description |
| ------------- | --------------------- | --------------------------------------------------------------- |
| `transcript` | `string` | Transcribed text from audio |
| `wordCount` | `number` | Number of words in the transcript |
| `interaction` | `ExternalInteraction` | Interaction context including transcripts, documents, and facts |
| Field | Type | Description |
| ----------------- | ------------------------ | --------------------------------------------------- |
| `id` | `string` | Unique interaction identifier |
| `title` | `string \| null` | Interaction title |
| `state` | `InteractionState` | Current state of the interaction |
| `startedAt` | `string (ISO 8601 date)` | When the interaction started (ISO 8601 date string) |
| `transcriptCount` | `number` | Number of transcripts in the interaction |
| `documentCount` | `number` | Number of documents in the interaction |
| `facts` | `Fact[]` | Facts extracted during the interaction |
**Fact properties:**
| Field | Type | Description |
| ------------- | ------------------------------ | -------------------------------------------------------- |
| `id` | `string` | Unique fact identifier |
| `text` | `string` | Fact text content |
| `group` | `string` | Fact group/category name |
| `source` | `"core" \| "system" \| "user"` | Source of the fact: core (AI extracted), system, or user |
| `isDiscarded` | `boolean` | Whether the fact has been discarded |
#### Example
```json theme={null}
{
"event": "recording.transcriptReceived",
"confidential": true,
"payload": {
"transcript": "...",
"wordCount": 42,
"interaction": {
"id": "int_123",
"title": "Visit",
"state": "ongoing",
"startedAt": "2024-01-01T00:00:00.000+00:00",
"transcriptCount": 0,
"documentCount": 0,
"facts": []
}
}
}
```
# template.copied
Source: https://docs.corti.ai/assistant/events/generated/templates/copied
Emitted when a template is copied.
## Event Properties
| Field | Value |
| -------------- | ------------------- |
| `event` | `"template.copied"` |
| `confidential` | `boolean` |
| `payload` | `object` |
| Field | Type | Description |
| ---------- | -------- | --------------------------------- |
| `id` | `string` | Identifier of the copied template |
| `title` | `string` | Name of the copied template |
| `language` | `string` | Language of the copied template |
#### Example
```json theme={null}
{
"event": "template.copied",
"confidential": false,
"payload": {
"id": "template_2",
"title": "SOAP Copy",
"language": "en"
}
}
```
| Field | Type | Description |
| ---------- | -------- | --------------------------------- |
| `id` | `string` | Identifier of the copied template |
| `title` | `string` | Name of the copied template |
| `language` | `string` | Language of the copied template |
#### Example
```json theme={null}
{
"event": "template.copied",
"confidential": true,
"payload": {
"id": "template_2",
"title": "SOAP Copy",
"language": "en"
}
}
```
# template.created
Source: https://docs.corti.ai/assistant/events/generated/templates/created
Emitted when a template is created.
## Event Properties
| Field | Value |
| -------------- | -------------------- |
| `event` | `"template.created"` |
| `confidential` | `boolean` |
| `payload` | `object` |
| Field | Type | Description |
| ------------- | -------- | ----------------------------------- |
| `title` | `string` | Name given to the new template |
| `language` | `string` | Language the template is written in |
| `numSections` | `number` | Count of sections in the template |
#### Example
```json theme={null}
{
"event": "template.created",
"confidential": false,
"payload": {
"title": "SOAP",
"language": "en",
"numSections": 6
}
}
```
| Field | Type | Description |
| ------------- | ------------------------ | ------------------------------------------ |
| `title` | `string` | Name given to the new template |
| `language` | `string` | Language the template is written in |
| `numSections` | `number` | Count of sections in the template |
| `template` | `TemplateCreatedPayload` | Complete template object with all sections |
| Field      | Type        | Description                        |
| ---------- | ----------- | ---------------------------------- |
| `id`       | `string`    | Unique template identifier         |
| `name`     | `string`    | Template name                      |
| `language` | `string`    | Template language                  |
| `sections` | `Section[]` | Sections contained in the template |
**Section properties:**
| Field   | Type     | Description               |
| ------- | -------- | ------------------------- |
| `id`    | `string` | Unique section identifier |
| `title` | `string` | Section title             |
#### Example
```json theme={null}
{
"event": "template.created",
"confidential": true,
"payload": {
"title": "SOAP",
"language": "en",
"numSections": 6,
"template": {
"id": "template_1",
"name": "SOAP",
"language": "en",
"sections": [
{
"id": "section_1",
"title": "HPI"
}
]
}
}
}
```
# template.defaultTemplateUpdated
Source: https://docs.corti.ai/assistant/events/generated/templates/defaultTemplateUpdated
Emitted when the default template is updated.
## Event Properties
| Field | Value |
| -------------- | ----------------------------------- |
| `event` | `"template.defaultTemplateUpdated"` |
| `confidential` | `boolean` |
| `payload` | `object` |
| Field | Type | Description |
| -------------------------- | -------- | ------------------------------------ |
| `defaultTemplateId` | `string` | Previous default template identifier |
| `defaultTemplateLanguage` | `string` | Previous default template language |
| `selectedTemplateId` | `string` | New default template identifier |
| `selectedTemplateLanguage` | `string` | New default template language |
#### Example
```json theme={null}
{
"event": "template.defaultTemplateUpdated",
"confidential": false,
"payload": {
"defaultTemplateId": "template_default",
"defaultTemplateLanguage": "en",
"selectedTemplateId": "template_selected",
"selectedTemplateLanguage": "en"
}
}
```
| Field | Type | Description |
| -------------------------- | -------- | ------------------------------------ |
| `defaultTemplateId` | `string` | Previous default template identifier |
| `defaultTemplateLanguage` | `string` | Previous default template language |
| `selectedTemplateId` | `string` | New default template identifier |
| `selectedTemplateLanguage` | `string` | New default template language |
#### Example
```json theme={null}
{
"event": "template.defaultTemplateUpdated",
"confidential": true,
"payload": {
"defaultTemplateId": "template_default",
"defaultTemplateLanguage": "en",
"selectedTemplateId": "template_selected",
"selectedTemplateLanguage": "en"
}
}
```
# template.deleted
Source: https://docs.corti.ai/assistant/events/generated/templates/deleted
Emitted when a template is deleted.
## Event Properties
| Field | Value |
| -------------- | -------------------- |
| `event` | `"template.deleted"` |
| `confidential` | `boolean` |
| `payload` | `object` |
| Field | Type | Description |
| ----- | -------- | ---------------------------------- |
| `id` | `string` | Identifier of the deleted template |
#### Example
```json theme={null}
{
"event": "template.deleted",
"confidential": false,
"payload": {
"id": "template_3"
}
}
```
| Field | Type | Description |
| ----- | -------- | ---------------------------------- |
| `id` | `string` | Identifier of the deleted template |
#### Example
```json theme={null}
{
"event": "template.deleted",
"confidential": true,
"payload": {
"id": "template_3"
}
}
```
# template.pickerOpened
Source: https://docs.corti.ai/assistant/events/generated/templates/pickerOpened
Emitted when the template picker opens.
## Event Properties
| Field | Value |
| -------------- | ------------------------- |
| `event` | `"template.pickerOpened"` |
| `confidential` | `boolean` |
| `payload` | `object` |
| Field | Type | Description |
| ---------- | ----------------------------- | ------------------------------------------ |
| `location` | `"settings" \| "interaction"` | Where in the UI the picker was opened from |
#### Example
```json theme={null}
{
"event": "template.pickerOpened",
"confidential": false,
"payload": {
"location": "settings"
}
}
```
| Field | Type | Description |
| ---------- | ----------------------------- | ------------------------------------------ |
| `location` | `"settings" \| "interaction"` | Where in the UI the picker was opened from |
#### Example
```json theme={null}
{
"event": "template.pickerOpened",
"confidential": true,
"payload": {
"location": "settings"
}
}
```
# template.selected
Source: https://docs.corti.ai/assistant/events/generated/templates/selected
Emitted when a template is selected.
## Event Properties
| Field | Value |
| -------------- | --------------------- |
| `event` | `"template.selected"` |
| `confidential` | `boolean` |
| `payload` | `object` |
| Field | Type | Description |
| -------------------------- | -------- | --------------------------------- |
| `selectedTemplateId` | `string` | Identifier of the chosen template |
| `selectedTemplateLanguage` | `string` | Language of the chosen template |
#### Example
```json theme={null}
{
"event": "template.selected",
"confidential": false,
"payload": {
"selectedTemplateId": "template_selected",
"selectedTemplateLanguage": "en"
}
}
```
| Field | Type | Description |
| -------------------------- | -------- | --------------------------------- |
| `selectedTemplateId` | `string` | Identifier of the chosen template |
| `selectedTemplateLanguage` | `string` | Language of the chosen template |
#### Example
```json theme={null}
{
"event": "template.selected",
"confidential": true,
"payload": {
"selectedTemplateId": "template_selected",
"selectedTemplateLanguage": "en"
}
}
```
# Corti Assistant Events Overview
Source: https://docs.corti.ai/assistant/events/index
How Corti Assistant dispatches events
## Overview
Corti Assistant uses a **real-time event system** to communicate state changes and important updates from the embedded application to your integration. Events enable you to build responsive integrations that react to user actions, recording states, and document lifecycle changes without polling.
## Event Structure
All Corti Assistant events follow a consistent schema:
```typescript theme={null}
{
event: string, // Event name (e.g., 'recording.started', 'document.generated')
confidential: boolean, // Whether event contains sensitive data
payload: object // Event-specific data
}
```
### Structure Components
* **`event`**: A dot-notation string identifier for the specific event type (e.g., `'recording.started'`, `'document.generated'`, `'interaction.loaded'`)
* **`confidential`**: A boolean indicating whether the payload contains sensitive patient or user data
* **`payload`**: An object containing event-specific data. The structure varies by event type and confidentiality level.
### Confidential vs Public Events
Events can contain two types of payloads:
* **Public Payload**: Contains metadata and identifiers (IDs, states, durations) without sensitive information
* **Confidential Payload**: Contains full interaction context including transcripts, documents, facts, and other protected data
The `confidential` field indicates which payload type is included.
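For example, a handler can branch on this flag so confidential payloads never flow into general-purpose logging. This is a minimal sketch; `handleAssistantEvent` is an illustrative name, not part of the Corti API:

```javascript theme={null}
// Route events based on the `confidential` flag so sensitive payloads
// (transcripts, documents, facts) never reach downstream logging.
function handleAssistantEvent(msg) {
  const { event, confidential, payload } = msg;
  if (confidential) {
    // Confidential payload: return only the event name so nothing
    // sensitive leaks to general-purpose consumers.
    return { event, sensitive: true };
  }
  // Public payload: contains only identifiers and metadata, safe to pass on.
  return { event, sensitive: false, ...payload };
}
```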
## Integration Transports
Events are delivered through different transport mechanisms depending on your integration method. For details on how to receive and handle events in your specific integration:
* [PostMessage API](/assistant/postmessage-api) - For iframe/WebView integrations
* [Window API](/assistant/window-api) - For same-origin integrations
* Web Component (coming soon) - For simplified component-based integration
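Whichever transport you use, a first step is filtering incoming messages down to new-format Corti events. The sketch below assumes the `CORTI_EMBEDDED_EVENT` wrapper and `deprecated` flag described on the Legacy Events page; `isNewFormatEvent` is an illustrative name:

```javascript theme={null}
// Accept only new-format Corti events: correct wrapper type and
// no legacy `deprecated: true` marker.
function isNewFormatEvent(data) {
  return data?.type === "CORTI_EMBEDDED_EVENT" && data.deprecated !== true;
}

// Usage in an iframe/WebView host page:
// window.addEventListener("message", (e) => {
//   if (isNewFormatEvent(e.data)) dispatch(e.data);
// });
```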
## Detailed Event Reference
For complete documentation of each event, including full payload schemas, confidential fields, and usage examples, see the individual event pages in this section.
# Legacy Events (Deprecated)
Source: https://docs.corti.ai/assistant/events/legacy-events
Deprecated event format still dispatched for backward compatibility
These events are **deprecated** and will be removed in a future version. They
are still dispatched for backward compatibility, but you should migrate to the
[new event format](/assistant/events) as soon as possible.
## Overview
The legacy event system uses the `CORTI_EMBEDDED_EVENT` wrapper format with camelCase event names. These events are still sent alongside the new dot-notation events, but support will be removed in a future release.
## Legacy Event Structure
All legacy events follow this structure:
```typescript theme={null}
{
type: 'CORTI_EMBEDDED_EVENT', // Always this value for legacy events
event: string, // camelCase event name (e.g., 'recordingStarted')
deprecated: true, // Always true for legacy events
payload?: object // Optional event-specific data
}
```
## New Event Structure
The new event format uses dot-notation event names and includes a `confidential` field:
```typescript theme={null}
{
type: 'CORTI_EMBEDDED_EVENT', // Same wrapper as legacy format
event: string, // Dot-notation event name (e.g., 'recording.started')
confidential: boolean, // Indicates if payload contains sensitive data
payload: object // Event-specific data with new structure
}
```
## Migration Path
When migrating from legacy events to the new format:
1. Update your event names from camelCase to dot-notation (e.g., `recordingStarted` → `recording.started`)
2. Handle the new `confidential` field in events
3. Update payload structures to match the new format (see individual event pages)
4. Remove checks for the `deprecated: true` field (it is not present in new events)
The `CORTI_EMBEDDED_EVENT` type wrapper remains the same in both legacy and
new formats. You still check for `event.data?.type === 'CORTI_EMBEDDED_EVENT'`;
only the event names and payload structures have changed.
You can use the `deprecated: true` field to programmatically detect and log
warnings for legacy events in your integration, helping you track migration
progress.
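Step 1 can be handled with a small shim while both formats are still dispatched. This sketch covers the legacy events documented on this page; `normalizeEventName` is an illustrative helper, not part of the Corti API:

```javascript theme={null}
// Map legacy camelCase names to their dot-notation replacements,
// as listed in the reference on this page.
const LEGACY_EVENT_MAP = {
  ready: "embedded.ready",
  loaded: "interaction.loaded",
  recordingStarted: "recording.started",
  recordingStopped: "recording.stopped",
  documentGenerated: "document.generated",
  documentUpdated: "document.updated",
  documentSynced: "document.synced",
  usage: "account.creditsConsumed",
};

// Legacy names are camelCase and new names are dot-notation, so a
// name-based lookup is unambiguous; unknown names pass through unchanged.
function normalizeEventName(msg) {
  return LEGACY_EVENT_MAP[msg.event] ?? msg.event;
}
```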
## Legacy Events Reference
### ready
Emitted when the embedded app is loaded and ready to receive messages.
```typescript theme={null}
{
type: 'CORTI_EMBEDDED_EVENT',
event: 'ready',
deprecated: true,
payload?: {}
}
```
**Replacement:** Use [`embedded.ready`](/assistant/events/generated/embedded-api/ready) instead.
***
### loaded
Emitted when navigation to a specific path has completed.
```typescript theme={null}
{
type: 'CORTI_EMBEDDED_EVENT',
event: 'loaded',
deprecated: true,
payload: {
path: string
}
}
```
**Replacement:** Use [`interaction.loaded`](/assistant/events/generated/interaction/loaded) instead.
***
### recordingStarted
Emitted when recording has started.
```typescript theme={null}
{
type: 'CORTI_EMBEDDED_EVENT',
event: 'recordingStarted',
deprecated: true,
payload?: {}
}
```
**Replacement:** Use [`recording.started`](/assistant/events/generated/recording/started) instead.
***
### recordingStopped
Emitted when recording has stopped.
```typescript theme={null}
{
type: 'CORTI_EMBEDDED_EVENT',
event: 'recordingStopped',
deprecated: true,
payload?: {}
}
```
**Replacement:** Use [`recording.stopped`](/assistant/events/generated/recording/stopped) instead.
***
### documentGenerated
Emitted when a document has been generated.
```typescript theme={null}
{
type: 'CORTI_EMBEDDED_EVENT',
event: 'documentGenerated',
deprecated: true,
payload: {
document: {
id: string,
name: string,
templateRef: string,
// ... (see getStatus response for full document structure)
}
}
}
```
**Replacement:** Use [`document.generated`](/assistant/events/generated/document/generated) instead.
***
### documentUpdated
Emitted when a document has been updated.
```typescript theme={null}
{
type: 'CORTI_EMBEDDED_EVENT',
event: 'documentUpdated',
deprecated: true,
payload: {
document: {
id: string,
name: string,
templateRef: string,
// ... (see getStatus response for full document structure)
}
}
}
```
**Replacement:** Use [`document.updated`](/assistant/events/generated/document/updated) instead.
***
### documentSynced
Emitted when a document has been synced to EHR.
```typescript theme={null}
{
type: 'CORTI_EMBEDDED_EVENT',
event: 'documentSynced',
deprecated: true,
payload: {
document: {
id: string,
name: string,
templateRef: string,
// ... (see getStatus response for full document structure)
}
}
}
```
**Replacement:** Use [`document.synced`](/assistant/events/generated/document/synced) instead.
***
### usage
Emitted when credits have been consumed by any of these triggers:
* **Ending/pausing a recording**: credits consumed for transcription and fact extraction
* **Ending dictation**: credits consumed for transcription
* **Generating a document**: credits consumed for text generation
```typescript theme={null}
{
  type: 'CORTI_EMBEDDED_EVENT',
  event: 'usage',
  deprecated: true,
  payload: {
    creditsConsumed: 0.13,
  }
}
```
This value is not cumulative; it refers only to the latest trigger.
**Replacement:** Use [`account.creditsConsumed`](/assistant/events/generated/account/creditsConsumed) instead.
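Since each event reports only the latest trigger, totals must be accumulated on the integration side. A minimal sketch (`createCreditsTracker` is an illustrative helper, not part of the Corti API):

```javascript theme={null}
// Keep a running total of credits across `usage` events; each event's
// `creditsConsumed` covers only the most recent trigger.
function createCreditsTracker() {
  let total = 0;
  return function onUsage(payload) {
    total += payload.creditsConsumed;
    return total;
  };
}
```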
***
## Listening for Legacy Events
```javascript theme={null}
window.addEventListener("message", (event) => {
if (event.data?.type === "CORTI_EMBEDDED_EVENT") {
switch (event.data.event) {
case "ready":
console.log("Embedded app ready");
break;
case "loaded":
console.log("Navigation completed:", event.data.payload.path);
break;
case "documentGenerated":
console.log("Document generated:", event.data.payload.document);
break;
case "documentUpdated":
console.log("Document updated:", event.data.payload.document);
break;
case "documentSynced":
console.log("Document synced:", event.data.payload.document);
break;
case "recordingStarted":
console.log("Recording started");
break;
case "recordingStopped":
console.log("Recording stopped");
break;
default:
console.log("Unknown event:", event.data.event);
}
}
});
```
## Timeline
* **Current**: Both legacy and new events are dispatched
* **Future**: Legacy events will be removed (date TBD)
* **Action Required**: Migrate to new event format before legacy support is removed
# Introduction to Embedded Assistant
Source: https://docs.corti.ai/assistant/introduction
Embed a first-class ambient scribing experience into your healthcare application in minutes.
## Build Ambient Scribing Directly Into Your Application
Corti Assistant is an embeddable ambient scribing application for software teams building EHRs and healthcare platforms.
It gives you a production-ready, healthcare-grade assistant that listens to clinical conversations, structures medical information, and generates EHR-ready documentation. You embed it directly into your product so it feels near-native to clinicians, not like a separate tool they have to manage.
This documentation is for developers who want to ship ambient scribing quickly, without building speech recognition, clinical NLP, document generation, and compliance infrastructure from scratch.
Corti Assistant can run as a standalone app, but it is designed first and foremost to be embedded into existing healthcare software.
***
## Why Embed an Assistant Instead of Building One
Embedding Corti Assistant lets you focus on your core product while offering a full ambient scribing experience that fits naturally into your workflow.
Instead of stitching together speech-to-text, clinical extraction, templates, and exports yourself, you integrate a single embedded surface that already works end to end.
When embedded, the assistant:
* Lives inside your EHR or healthcare application
* Uses your existing authentication and session model
* Appears exactly where clinicians already work
* Shares context like patient, encounter, and workflow state
The result is a seamless experience where documentation happens as part of the product, not alongside it.
From a development perspective, this means weeks of integration instead of months or years of AI development, model tuning, and compliance work.
You also inherit enterprise-grade security and certifications out of the box, including HIPAA, GDPR, SOC 2 Type 2, ISO 27001, WCAG 2.2 AA, and more.
***
## From Conversation to Structured Clinical Documentation
At its core, Corti Embedded Assistant turns spoken consultations into structured, editable documentation.
The workflow is simple and predictable:
**Conversation → Transcript → Clinical facts → EHR-ready documents**
During a consultation, the assistant listens continuously. Speech is transcribed using healthcare-optimized recognition, then analyzed to extract clinically relevant facts such as symptoms, history, vitals, and plans.
These facts are presented as structured, editable items that clinicians can review, adjust, or reorganize before generating one or more documents.
From a single conversation, you can generate multiple outputs such as SOAP notes, H\&P, emergency notes, referrals, discharge summaries, or patient summaries, all based on the same curated fact set.
***
## Embedded by Design
Corti Assistant is delivered as an embeddable application with APIs that give you control over how it behaves inside your product.
You decide:
* When sessions start and stop
* How recording is controlled
* How documents are generated and exported
* How authentication and identity are handled
Two integration modes are supported:
* **PostMessage API** for iframe or WebView based integrations
* **Window API** for same-origin, direct TypeScript integrations
Both options are designed to support a near-native user experience while keeping the assistant decoupled from your core codebase.
***
## What You Get Out of the Box
The embedded assistant is not just transcription. It is a complete documentation surface that includes:
* Real-time AI chat for editing documents, asking clinical questions, and referencing guidelines
* Familiar clinical templates like SOAP, H\&P, and custom formats
* Multi-document generation from a single encounter
* Drag-and-drop, editable clinical facts
* One-click export to your EHR or downstream systems
The UI can be customized to match your application, including branding, feature visibility, language support, and terminology overrides.
***
## Built for Healthcare, Operated at Scale
Corti Assistant runs on the same healthcare-specific platform used by large health systems.
The underlying Corti API handles:
* Medical speech recognition
* Clinical fact extraction
* Language models tuned for healthcare
* Document generation
* Security, privacy, and compliance
For organizations with advanced needs, enterprise capabilities are available through the same embedded interface, including EHR connectivity, medical coding, standardized documentation, custom guidelines, and priority support.
# PostMessage API Quickstart
Source: https://docs.corti.ai/assistant/postmessage-api
Use the PostMessage API to integrate Corti Assistant via iframe or WebView
The PostMessage API enables secure cross-origin communication between your application and the embedded Corti Assistant. This method is recommended for iframe or WebView integrations.
A web component is coming soon that will make this integration easier to use.
## Overview
The PostMessage API uses the browser's `postMessage` mechanism to enable secure communication between your application and the embedded Corti Assistant, even when they're served from different origins. This makes it ideal for embedding Corti Assistant in iframes or WebViews.
## Requirements
Before getting started, ensure you have:
* **Access to Corti Assistant**: You'll need credentials and access to one of the available regions
* **HTTPS**: The embedded Assistant must be loaded over HTTPS (required for microphone access)
* **Microphone permissions**: Your application must request and handle microphone permissions appropriately
* **OAuth2 client**: You'll need an OAuth2 client configured for user-based authentication
* **Modern browser or WebView**: For web applications, use a modern browser. For native apps, use a modern WebView
## Recommendations
* **Validate message origins** to ensure security
* **Use specific target origins** instead of `'*'` when possible
* **Implement proper error handling** for all API calls
* **Handle authentication token refresh** to maintain user sessions
* **Request microphone permissions** before initializing the embedded Assistant
## Available Regions
* **US**: [https://assistant.us.corti.app](https://assistant.us.corti.app)
* **EU**: [https://assistant.eu.corti.app](https://assistant.eu.corti.app)
* **EU MD**: [https://assistantmd.eu.corti.app](https://assistantmd.eu.corti.app) (medical device compliant)
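If your deployment needs to pick the region at runtime, a small helper can map a region key to its Assistant origin. This is a minimal sketch; the region keys (`eu`, `eu-md`, `us`) are illustrative names, not an official API:

```javascript Region helper (sketch) theme={null}
// Map an illustrative region key to the documented Assistant origin.
const ASSISTANT_ORIGINS = {
  eu: "https://assistant.eu.corti.app",
  "eu-md": "https://assistantmd.eu.corti.app", // medical device compliant
  us: "https://assistant.us.corti.app",
};

function assistantOrigin(region) {
  const origin = ASSISTANT_ORIGINS[region];
  if (!origin) throw new Error(`Unknown region: ${region}`);
  return origin;
}
```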
## Features
* Secure cross-origin communication
* Works with any iframe or WebView implementation
* Fully asynchronous with request/response pattern
## Quick Start
### Step 1: Set Up Authentication
The Embedded Assistant API **only supports user-based authentication**. You
must authenticate as an end user, not as an application. Client credentials
and other machine-to-machine authentication methods are not supported.
Before you can use the PostMessage API, you need to authenticate your users using OAuth2. The recommended flow is **Authorization Code Flow with PKCE** for secure, user-facing integrations.
For detailed information on OAuth2 flows and authentication, see our [OAuth Authentication Guide](/assistant/authentication).
**Key points:**
* Use **Authorization Code Flow with PKCE** for embedded integrations
* Obtain `access_token`, `refresh_token`, and `id_token` for your users
* Handle token refresh to maintain sessions
* Never expose client secrets in client-side code
### Step 2: Load the Embedded Assistant
Load the Corti Assistant in an iframe or WebView:
```javascript EU example expandable theme={null}
// Minimal sketch: embed the EU Assistant in an iframe.
// The container id and sizing below are illustrative, not required values.
const iframe = document.createElement("iframe");
iframe.src = "https://assistant.eu.corti.app";
iframe.allow = "microphone"; // microphone access requires HTTPS
iframe.style.width = "100%";
iframe.style.height = "100%";
document.getElementById("assistant-container").appendChild(iframe);
```
```javascript EU MD example expandable theme={null}
// Same sketch against the medical-device-compliant EU MD region.
const iframe = document.createElement("iframe");
iframe.src = "https://assistantmd.eu.corti.app";
iframe.allow = "microphone";
document.getElementById("assistant-container").appendChild(iframe);
```
```javascript US example expandable theme={null}
// Same sketch against the US region.
const iframe = document.createElement("iframe");
iframe.src = "https://assistant.us.corti.app";
iframe.allow = "microphone";
document.getElementById("assistant-container").appendChild(iframe);
```
## Message Format
All messages sent to the embedded app follow this structure:
```typescript theme={null}
{
  type: 'CORTI_EMBEDDED',
  version: 'v1',
  action: string,
  requestId?: string,
  payload?: object
}
```
### Message Properties
* `type`: Always `'CORTI_EMBEDDED'`
* `version`: API version (currently `'v1'`)
* `action`: The action to perform (see [API Reference](/assistant/api-reference) for all actions)
* `requestId`: Optional unique identifier for tracking responses
* `payload`: Optional data specific to the action
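As a concrete instance of this envelope, a small builder can assemble a message before posting it to the iframe (the `navigate` action and the request id below are illustrative):

```javascript Message envelope (sketch) theme={null}
// Assemble a PostMessage envelope for the embedded Assistant.
function buildMessage(action, payload, requestId) {
  return {
    type: "CORTI_EMBEDDED",
    version: "v1",
    action,
    requestId,
    payload,
  };
}

const msg = buildMessage("navigate", { path: "/session/int_123" }, "req-1");
// iframe.contentWindow.postMessage(msg, "https://assistant.eu.corti.app");
```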
## Response Handling
Responses from the embedded app are sent via `postMessage` and can be identified by checking the message type:
```javascript theme={null}
window.addEventListener("message", (event) => {
// Handle responses
if (event.data?.type === "CORTI_EMBEDDED_RESPONSE") {
const { requestId, success, data, error } = event.data;
// Handle response
}
// Handle events
if (event.data?.type === "CORTI_EMBEDDED_EVENT") {
const { event: eventType, payload } = event.data;
// Handle event
}
});
```
## Complete Integration Example
Here's a complete example showing the recommended integration flow with proper request/response handling:
```javascript Example Embedded Integration expandable theme={null}
// State management
let iframe = null;
let isReady = false;
let currentInteractionId = null;
let pendingRequests = new Map();

// Initialize the integration
function initializeCortiEmbeddedIntegration(iframeElement) {
  iframe = iframeElement;
  isReady = false;
  currentInteractionId = null;
  setupEventListeners();
}

function setupEventListeners() {
  window.addEventListener("message", (event) => {
    // Handle responses
    if (event.data?.type === "CORTI_EMBEDDED_RESPONSE") {
      handleResponse(event.data);
    }
    // Handle events
    if (event.data?.type === "CORTI_EMBEDDED_EVENT") {
      handleEvent(event.data);
    }
  });
}

function handleResponse(responseData) {
  const { requestId, success, data, error } = responseData;
  const pendingRequest = pendingRequests.get(requestId);
  if (pendingRequest) {
    pendingRequests.delete(requestId);
    if (success) {
      pendingRequest.resolve(data);
    } else {
      pendingRequest.reject(new Error(error?.message || "Request failed"));
    }
  }
}

function handleEvent(eventData) {
  switch (eventData.event) {
    case "embedded.ready":
      isReady = true;
      startIntegrationFlow();
      break;
    case "document.generated":
      onDocumentGenerated(eventData.payload.document);
      break;
    case "recording.started":
      console.log("Recording started");
      break;
    case "recording.stopped":
      console.log("Recording stopped");
      break;
    // ... handle other events
  }
}

function generateRequestId() {
  return `req-${Date.now()}-${Math.random().toString(36).slice(2, 11)}`;
}

async function startIntegrationFlow() {
  try {
    // 1. Authenticate
    await authenticate();
    // 2. Configure session
    await configureSession();
    // 3. Create interaction
    const interaction = await createInteraction();
    // 4. Add relevant facts
    await addFacts();
    // 5. Navigate to interaction UI
    await navigateToSession(interaction.id);
    console.log("Integration flow completed successfully");
  } catch (error) {
    console.error("Integration flow failed:", error);
  }
}

function sendMessage(action, payload = {}) {
  return new Promise((resolve, reject) => {
    const requestId = generateRequestId();
    pendingRequests.set(requestId, { resolve, reject });
    iframe.contentWindow.postMessage(
      {
        type: "CORTI_EMBEDDED",
        version: "v1",
        action,
        requestId,
        payload,
      },
      "*",
    );
    // Optional: Add timeout
    setTimeout(() => {
      if (pendingRequests.has(requestId)) {
        pendingRequests.delete(requestId);
        reject(new Error("Request timeout"));
      }
    }, 30000);
  });
}

async function authenticate() {
  // Requires OAuth2 tokens from user authentication
  return sendMessage("auth", {
    access_token: "your-access-token", // From OAuth2 flow
    refresh_token: "your-refresh-token", // From OAuth2 flow
    id_token: "your-id-token", // From OAuth2 flow
    token_type: "Bearer",
  });
}

async function configureSession() {
  return sendMessage("configureSession", {
    defaultLanguage: "en",
    defaultOutputLanguage: "en",
    defaultTemplateKey: "corti-soap",
    defaultMode: "virtual",
  });
}

async function createInteraction() {
  return sendMessage("createInteraction", {
    assignedUserId: null,
    encounter: {
      identifier: `encounter-${Date.now()}`,
      status: "planned",
      type: "first_consultation",
      period: {
        startedAt: new Date().toISOString(),
      },
      title: "Initial Consultation",
    },
  });
}

async function addFacts() {
  return sendMessage("addFacts", {
    facts: [
      { text: "Chest pain", group: "other" },
      { text: "Shortness of breath", group: "other" },
    ],
  });
}

async function navigateToSession(interactionId) {
  return sendMessage("navigate", {
    path: `/session/${interactionId}`,
  });
}

function onDocumentGenerated(document) {
  console.log("Document generated:", document);
}

// Usage example:
const iframeElement = document.getElementById("corti-iframe");
initializeCortiEmbeddedIntegration(iframeElement);
```
## Events
Corti Assistant dispatches events to notify your application of state changes and important updates. When using the PostMessage API, these events are wrapped in the `CORTI_EMBEDDED_EVENT` message type.
### Event Format Translation
Core events documented in the [Events Reference](/assistant/events) are wrapped for postMessage delivery:
**Core Event Structure:**
```json theme={null}
{
  "event": "recording.started",
  "confidential": false,
  "payload": {
    "mode": "dictation",
    "language": "en",
    "interactionId": "int_123"
  }
}
```
**PostMessage Wrapper:**
```json theme={null}
{
  "type": "CORTI_EMBEDDED_EVENT",
  "event": "recording.started",
  "confidential": false,
  "payload": {
    "mode": "dictation",
    "language": "en",
    "interactionId": "int_123"
  }
}
```
### Listening for Events
Set up a message listener to receive events from the embedded Assistant:
```javascript Listening for Events expandable theme={null}
const ALLOWED_ORIGINS = [
  "https://assistant.eu.corti.app",
  "https://assistantmd.eu.corti.app",
  "https://assistant.us.corti.app",
];

window.addEventListener("message", (event) => {
  // Validate origin for security
  if (!ALLOWED_ORIGINS.includes(event.origin)) {
    return;
  }
  // Check for Corti events
  if (event.data?.type === "CORTI_EMBEDDED_EVENT") {
    const { event: eventName, confidential, payload } = event.data;
    // Handle different event types
    switch (eventName) {
      case "recording.started":
        console.log("Recording started:", payload);
        break;
      case "recording.paused":
        console.log("Recording paused:", payload);
        break;
      case "document.generated":
        console.log("Document generated:", payload);
        handleDocumentGenerated(payload);
        break;
      case "error.triggered":
        console.error("Error occurred:", payload);
        break;
      default:
        console.log("Unknown event:", eventName, payload);
    }
  }
});

function handleDocumentGenerated(payload) {
  const { documentId, documentName, interactionId } = payload;
  // Update your UI, sync to backend, etc.
}
```
### Available Events
For a complete list of events and their payload structures, see the [Events Overview](/assistant/events).
Common events include:
* `recording.started` - Recording has started
* `recording.paused` - Recording has paused
* `document.generated` - Document has been generated
* `document.updated` - Document has been edited
* `document.synced` - Document synced to external system
* `error.triggered` - An error occurred
### Legacy Events
The embedded Assistant also dispatches [legacy
events](/assistant/events/legacy-events) using camelCase names (e.g.,
`recordingStarted`, `documentGenerated`). These are deprecated and will be
removed in a future version.
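If you need to support both schemes during a migration, a hypothetical normalizer can translate camelCase legacy names into the dotted form (this simple mapping assumes two-word names like the examples above):

```javascript Legacy name normalizer (sketch) theme={null}
// Translate a legacy camelCase event name (e.g. "recordingStarted")
// into the dotted form used by current events (e.g. "recording.started").
function normalizeLegacyEventName(name) {
  return name.replace(/([a-z])([A-Z])/g, (_, a, b) => `${a}.${b.toLowerCase()}`);
}
```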
## Error Handling
Always handle errors when making requests:
```javascript Error Handling expandable theme={null}
try {
  const result = await sendMessage("auth", {
    access_token: "your-access-token", // From OAuth2 flow
    refresh_token: "your-refresh-token", // From OAuth2 flow
    id_token: "your-id-token", // From OAuth2 flow
    token_type: "Bearer",
  });
  console.log("Authentication successful:", result);
} catch (error) {
  console.error("Authentication failed:", error.message);
  // Handle authentication failure
}
```
## Security Considerations
When using `postMessage`, always:
1. **Validate message origin**: Check `event.origin` to ensure messages come from trusted sources
2. **Use specific target origins**: Replace `'*'` with the specific origin when possible
3. **Sanitize data**: Never trust data from postMessage without validation
```javascript Security Best Practices expandable theme={null}
const ALLOWED_ORIGINS = [
  "https://assistant.eu.corti.app",
  "https://assistantmd.eu.corti.app",
  "https://assistant.us.corti.app",
];

window.addEventListener("message", (event) => {
  // Validate origin
  if (!ALLOWED_ORIGINS.includes(event.origin)) {
    console.warn("Message from untrusted origin:", event.origin);
    return;
  }
  // Process message
  if (event.data?.type === "CORTI_EMBEDDED_EVENT") {
    // Handle event
  }
});

// Send messages with specific origin
iframe.contentWindow.postMessage(
  {
    type: "CORTI_EMBEDDED",
    version: "v1",
    action: "auth",
    payload: {
      /* ... */
    },
  },
  "https://assistant.eu.corti.app", // Specific origin instead of '*'
);
```
## Next Steps
* Review the [OAuth Authentication Guide](/assistant/authentication) to set up user authentication
* See the [API Reference](/assistant/api-reference) for all available actions and their parameters
* Learn about [events](/assistant/events) that the embedded app can send
* Check out the [Window API](/assistant/window-api) for same-origin integrations
Please [contact us](https://help.corti.app) for help or questions.
# Window API Quickstart
Source: https://docs.corti.ai/assistant/window-api
Use the Window API for direct integration with Corti Assistant
The Window API provides a Promise-based TypeScript interface via `window.CortiEmbedded` for direct integration with Corti Assistant. This method is ideal for same-origin integrations.
This method is recommended for direct, same-origin integrations.
## Overview
The Window API offers a Promise-based, TypeScript-friendly interface for integrating Corti Assistant into your application. It provides direct access to `window.CortiEmbedded.v1`, making it feel like a traditional JavaScript SDK.
## Requirements
### Implementation Requirements
To use the Window API, you'll need to implement a WebView or similar browser component within your native application. The embedded Corti Assistant runs as a web application and requires a modern browser environment to function properly.
### Minimum Requirements
* **Modern WebView**: Use a modern WebView implementation that supports:
  * **WebView2** (Windows) - Recommended for Windows applications
  * **WKWebView** (iOS/macOS) - Recommended for Apple platforms
  * **WebView** (Android) - Use the latest Chromium-based WebView
  * **Electron WebView** - For Electron-based applications
* **Browser Compatibility**: The WebView must support:
  * ES6+ JavaScript features
  * Modern Web APIs (WebRTC, MediaDevices API)
  * PostMessage API
  * Local Storage and Session Storage
* **Microphone Permissions**: Your application must request and handle microphone permissions:
  * Request microphone access before initializing the embedded Assistant
  * Handle permission denial gracefully
  * Provide clear messaging to users about why microphone access is needed
  * Ensure permissions are granted at the OS level (not just browser level)
### Platform-Specific Considerations
**Windows (WebView2)**
* Ensure WebView2 Runtime is installed or bundled with your application
* Request microphone permissions in your application manifest
* Handle permission prompts appropriately
**iOS/macOS (WKWebView)**
* Add `NSMicrophoneUsageDescription` to your Info.plist
* Request microphone permissions using `AVAudioSession` or similar APIs
* Ensure permissions are granted before loading the embedded Assistant
**Android (WebView)**
* Request `RECORD_AUDIO` permission in your AndroidManifest.xml
* Request runtime permissions using `ActivityCompat.requestPermissions()`
* Handle permission callbacks appropriately
## Recommendations
* **Use TypeScript** for better type safety and developer experience
* **Implement proper error handling** for all API calls
* **Handle token refresh** to maintain user sessions
* **Request microphone permissions early** in your application flow
* **Test on target platforms** to ensure WebView compatibility
## Quick Start
### Step 1: Set Up Authentication
The Embedded Assistant API **only supports user-based authentication**. You
must authenticate as an end user, not as an application. Client credentials
and other machine-to-machine authentication methods are not supported.
Before you can use the Window API, you need to authenticate your users using OAuth2. The recommended flow is **Authorization Code Flow with PKCE** for secure, user-facing integrations.
For detailed information on OAuth2 flows and authentication, see our [OAuth Authentication Guide](/assistant/authentication).
**Key points:**
* Use **Authorization Code Flow with PKCE** for embedded integrations
* Obtain `access_token`, `refresh_token`, and `id_token` for your users
* Handle token refresh to maintain sessions
* Never expose client secrets in client-side code
### Step 2: Wait for the Embedded App to Be Ready
The embedded Corti Assistant will send an `embedded.ready` event when it's loaded and ready to receive API calls:
```javascript Basic Setup expandable theme={null}
window.addEventListener("message", async (event) => {
  if (
    event.data?.type === "CORTI_EMBEDDED_EVENT" &&
    event.data.event === "embedded.ready"
  ) {
    // The API is now available
    const api = window.CortiEmbedded.v1;
    console.log("Corti Assistant is ready");
  }
});
```
### Step 3: Authenticate the User
Once the API is ready, authenticate the user with their OAuth2 tokens:
```javascript Authentication expandable theme={null}
const api = window.CortiEmbedded.v1;
const user = await api.auth({
  access_token: "your-access-token", // From OAuth2 flow
  refresh_token: "your-refresh-token", // From OAuth2 flow
  id_token: "your-id-token", // From OAuth2 flow
  token_type: "Bearer",
});
console.log("Authenticated user:", user);
```
### Step 4: Configure and Use
After authentication, you can configure the interface and start using the Assistant:
```javascript Configure and Use expandable theme={null}
// Configure the interface
const config = await api.configure({
  features: {
    interactionTitle: false,
    aiChat: false,
    navigation: true,
  },
  appearance: {
    primaryColor: "#00a6ff",
  },
  locale: {
    interfaceLanguage: "en",
    dictationLanguage: "en",
  },
});

// Create an interaction
const interaction = await api.createInteraction({
  assignedUserId: null,
  encounter: {
    identifier: `encounter-${Date.now()}`,
    status: "planned",
    type: "first_consultation",
    period: {
      startedAt: new Date().toISOString(),
    },
    title: "Initial Consultation",
  },
});

// Navigate to the interaction
await api.navigate({
  path: `/session/${interaction.id}`,
});
```
## API Structure
The API is available at `window.CortiEmbedded.v1` and provides the following methods:
```typescript theme={null}
window.CortiEmbedded.v1 = {
  auth: (payload) => Promise,
  configure: (payload) => Promise,
  createInteraction: (payload) => Promise,
  addFacts: (payload) => Promise,
  configureSession: (payload) => Promise,
  navigate: (payload) => Promise,
  setCredentials: (payload) => Promise,
  startRecording: () => Promise,
  stopRecording: () => Promise,
  getStatus: () => Promise,
};
```
## Complete Integration Example
Here's a complete example showing the recommended integration flow:
```javascript Example Window API Integration expandable theme={null}
let api = null;
let isReady = false;

// Wait for the embedded app to be ready
window.addEventListener("message", async (event) => {
  if (
    event.data?.type === "CORTI_EMBEDDED_EVENT" &&
    event.data.event === "embedded.ready"
  ) {
    api = window.CortiEmbedded.v1;
    isReady = true;
    try {
      await startIntegrationFlow();
    } catch (error) {
      console.error("Integration flow failed:", error);
    }
  }
});

async function startIntegrationFlow() {
  try {
    // 1. Authenticate (requires OAuth2 tokens)
    const user = await api.auth({
      access_token: "your-access-token", // From OAuth2 flow
      refresh_token: "your-refresh-token", // From OAuth2 flow
      id_token: "your-id-token", // From OAuth2 flow
      token_type: "Bearer",
    });
    console.log("Authenticated user:", user);

    // 2. Configure interface
    const config = await api.configure({
      features: {
        interactionTitle: false,
        aiChat: false,
        documentFeedback: false,
        navigation: true,
        virtualMode: true,
        syncDocumentAction: false,
        templateEditor: true,
      },
      appearance: {
        primaryColor: "#00a6ff",
      },
      locale: {
        interfaceLanguage: "en",
        dictationLanguage: "en",
      },
    });
    console.log("Configuration applied:", config);

    // 3. Configure session
    await api.configureSession({
      defaultLanguage: "en",
      defaultOutputLanguage: "en",
      defaultTemplateKey: "corti-soap",
      defaultMode: "virtual",
    });

    // 4. Create interaction
    const interaction = await api.createInteraction({
      assignedUserId: null,
      encounter: {
        identifier: `encounter-${Date.now()}`,
        status: "planned",
        type: "first_consultation",
        period: {
          startedAt: new Date().toISOString(),
        },
        title: "Initial Consultation",
      },
    });
    console.log("Interaction created:", interaction);

    // 5. Add relevant facts
    await api.addFacts({
      facts: [
        { text: "Chest pain", group: "other" },
        { text: "Shortness of breath", group: "other" },
        { text: "Fatigue", group: "other" },
      ],
    });

    // 6. Navigate to interaction UI
    await api.navigate({
      path: `/session/${interaction.id}`,
    });
    console.log("Integration flow completed successfully");
  } catch (error) {
    console.error("Integration flow failed:", error);
    throw error;
  }
}

// Listen for events
window.addEventListener("message", (event) => {
  if (event.data?.type === "CORTI_EMBEDDED_EVENT") {
    switch (event.data.event) {
      case "document.generated":
        console.log("Document generated:", event.data.payload.document);
        break;
      case "recording.started":
        console.log("Recording started");
        break;
      case "recording.stopped":
        console.log("Recording stopped");
        break;
      // ... handle other events
    }
  }
});
```
## Events
Corti Assistant dispatches events to notify your application of user activity, state changes, data updates, and other interactions. When using the Window API, events are delivered through the same `postMessage` mechanism.
### Event Format Translation
Core events documented in the [Events Reference](/assistant/events) are wrapped in the `CORTI_EMBEDDED_EVENT` message type:
**Core Event Structure:**
```json theme={null}
{
  "event": "event-name",
  "confidential": true,
  "payload": {
    "various": "properties"
  }
}
```
**Window API Delivery:**
```json theme={null}
{
  "type": "CORTI_EMBEDDED_EVENT",
  "event": "recording.started",
  "confidential": false,
  "payload": {
    "mode": "virtual",
    "language": "en",
    "interactionId": "int_123",
    "interactionState": "ongoing"
  }
}
```
### Listening for Events
Even when using the Window API for method calls, events are delivered via `postMessage`. Set up a listener:
```javascript Listening for Events expandable theme={null}
window.addEventListener("message", (event) => {
  // Check for Corti events
  if (event.data?.type === "CORTI_EMBEDDED_EVENT") {
    const { event: eventName, confidential, payload } = event.data;
    // Handle different event types
    switch (eventName) {
      case "recording.started":
        console.log("Recording started:", payload);
        updateRecordingState(true);
        break;
      case "recording.paused":
        console.log("Recording paused:", payload);
        updateRecordingState(false);
        break;
      case "document.generated":
        console.log("Document generated:", payload);
        handleNewDocument(payload);
        break;
      case "error.triggered":
        console.error("Error occurred:", payload);
        showErrorNotification(payload);
        break;
      default:
        console.log("Unknown event:", eventName);
    }
  }
});

function updateRecordingState(isRecording) {
  // Update your UI to reflect recording state
}

function handleNewDocument(payload) {
  const { documentId, documentName, templateId } = payload;
  // Process the new document
}
```
### Combining API Calls and Events
Use the Window API for actions and events for state updates:
```javascript Combined Usage expandable theme={null}
const api = window.CortiEmbedded.v1;

// Set up event listener
window.addEventListener("message", (event) => {
  if (event.data?.type === "CORTI_EMBEDDED_EVENT") {
    const { event: eventName, payload } = event.data;
    if (eventName === "recording.started") {
      console.log("Recording started successfully");
    }
  }
});

// Trigger action via Window API
try {
  await api.startRecording();
  // Event will be received via message listener above
} catch (error) {
  console.error("Failed to start recording:", error);
}
```
### Available Events
For a complete list of events and their payload structures, see the [Events Overview](/assistant/events).
Common events include:
* `recording.started` - Recording has started
* `recording.paused` - Recording has paused
* `document.generated` - Document has been generated
* `document.updated` - Document has been edited
* `interaction.loaded` - Interaction has been loaded
* `error.triggered` - An error occurred
### Legacy Events
The embedded Assistant also dispatches [legacy
events](/assistant/events/legacy-events) using camelCase names (e.g.,
`recordingStarted`, `documentGenerated`). These are deprecated and will be
removed in a future version.
## Error Handling
All API methods return Promises and can throw errors. Always wrap calls in try-catch blocks:
```javascript Error Handling expandable theme={null}
try {
  const api = window.CortiEmbedded.v1;
  const user = await api.auth({
    access_token: "your-access-token",
    refresh_token: "your-refresh-token",
    id_token: "your-id-token", // From OAuth2 flow
    token_type: "Bearer",
  });
  console.log("Authentication successful:", user);
} catch (error) {
  console.error("Authentication failed:", error.message);
  // Handle authentication failure
}
```
## TypeScript Support
If you're using TypeScript, you can extend the Window interface to get type safety:
```typescript TypeScript Definitions expandable theme={null}
// Return types are shown as Promise<unknown> here; see the API Reference
// for the concrete response shapes.
interface CortiEmbeddedAPI {
  auth: (payload: AuthPayload) => Promise<unknown>;
  configure: (payload: ConfigurePayload) => Promise<unknown>;
  createInteraction: (payload: CreateInteractionPayload) => Promise<unknown>;
  addFacts: (payload: AddFactsPayload) => Promise<unknown>;
  configureSession: (payload: ConfigureSessionPayload) => Promise<unknown>;
  navigate: (payload: NavigatePayload) => Promise<unknown>;
  setCredentials: (payload: SetCredentialsPayload) => Promise<unknown>;
  startRecording: () => Promise<unknown>;
  stopRecording: () => Promise<unknown>;
  getStatus: () => Promise<unknown>;
}

interface Window {
  CortiEmbedded: {
    v1: CortiEmbeddedAPI;
  };
}
```
## Helper Function
You can create a helper function to ensure the API is ready:
```javascript Helper Function expandable theme={null}
function waitForCortiAPI() {
  return new Promise((resolve) => {
    if (window.CortiEmbedded?.v1) {
      resolve(window.CortiEmbedded.v1);
      return;
    }
    const listener = (event) => {
      if (
        event.data?.type === "CORTI_EMBEDDED_EVENT" &&
        event.data.event === "embedded.ready"
      ) {
        window.removeEventListener("message", listener);
        resolve(window.CortiEmbedded.v1);
      }
    };
    window.addEventListener("message", listener);
  });
}

// Usage
async function useAPI() {
  const api = await waitForCortiAPI();
  const user = await api.auth({
    access_token: "your-access-token",
    refresh_token: "your-refresh-token",
    id_token: "your-id-token",
    token_type: "Bearer",
  });
}
```
## Next Steps
* Review the [OAuth Authentication Guide](/assistant/authentication) to set up user authentication
* See the [API Reference](/assistant/api-reference) for all available methods and their parameters
* Learn about [events](/assistant/events) that the embedded app can send
* Check out the [PostMessage API](/assistant/postmessage-api) for cross-origin integrations
Please [contact us](https://help.corti.app) for help or questions.
# Creating Clients
Source: https://docs.corti.ai/authentication/creating_clients
Quick steps to creating your first client on the Corti Developer Console.
1. Start by creating a project. This gives you a workspace and a \$50 trial credit.
2. Inside your project, create a new client. Choose a clear name such as `test`, `staging`, or `prod`.
3. Select the correct data residency region, EU or US, based on your deployment requirements.
After creating the client, you will receive the following credentials, which allow your backend to request OAuth access tokens:
* `client_id`
* `client_secret`
* `tenant-name` (usually `base`)
* `environment` (`eu` or `us`)
Store these securely and never expose them to browsers or mobile apps.
Use the client credentials to fetch an access token, then call the Corti API with a Bearer token and the correct `Tenant-Name` header.
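The `environment` and `tenant-name` values slot directly into the token endpoint URL. A minimal sketch of building that URL (the HTTP call itself is covered in the authentication guide):

```javascript Token endpoint (sketch) theme={null}
// Build the OAuth token endpoint for a given environment and tenant.
function tokenEndpoint(environment, tenantName = "base") {
  return `https://auth.${environment}.corti.app/realms/${tenantName}/protocol/openid-connect/token`;
}
```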
# Environments & Tenants
Source: https://docs.corti.ai/authentication/environments_tenants
Learn how Corti environments and tenants work, and how they affect authentication and data residency.
## Environments
Corti operates separate regional environments. Your environment determines where data is stored and which API endpoints you use. It also defines which identity provider you authenticate against.
Available environments:
| Environment | Region | Base API URL | Auth Base URL |
| ----------- | ----------------- | -------------------------- | --------------------------- |
| `eu` | Azure West Europe | `https://api.eu.corti.app` | `https://auth.eu.corti.app` |
| `us` | Azure US East | `https://api.us.corti.app` | `https://auth.us.corti.app` |
Your API client must authenticate against the correct environment. Tokens from one region cannot be used in another. When you create a client in the console, you can pick your preferred environment.
## Tenants
A tenant represents a shared identity realm for Corti API customers. All API customers operate inside the shared tenant named `base`. This keeps authentication consistent while maintaining strict segregation of customer data at the application layer.
You include the tenant name in the authentication URL or in headers when interacting with certain APIs. For most customers, this will always be: `Tenant-Name: "base"`
Bespoke private tenants are available only for specific enterprise or regulatory scenarios that require full isolation at the identity realm level. These cases are rare and need a dedicated review. If you believe your organisation needs its own tenant, speak with your Corti representative.
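In practice this means every API request carries two headers: the bearer token and the tenant name. A minimal sketch of building them:

```javascript Request headers (sketch) theme={null}
// Standard headers for a Corti API request: bearer token plus tenant context.
function cortiHeaders(accessToken, tenantName = "base") {
  return {
    Authorization: `Bearer ${accessToken}`,
    "Tenant-Name": tenantName,
  };
}
```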
# Overview
Source: https://docs.corti.ai/authentication/overview
Learn how to authenticate with client credentials for use with the Corti API.
This guide covers authentication for use with the Corti API. If you are looking for authenticating users with Corti Assistant Embedded, then see more [here](/assistant/embedded-api#authentication).
## Authenticating with Corti Auth
Corti uses OAuth 2.0 client credentials for server-to-server authentication. This flow requires that you fetch a short-lived access token based on a `client_id` and `client_secret` from the Corti Auth Server before calling the API.
```mermaid theme={null}
%%{init: {
"sequence": {
"mirrorActors": false,
"boxTextMargin": 20
}
}}%%
sequenceDiagram
participant Service as Your Backend
participant OAuth as Corti Auth
participant Corti as Corti API
Service->>OAuth: POST /token grant_type=client_credentials client_id, client_secret
OAuth-->>Service: 200 OK access_token (short-lived)
Service->>Corti: API Request Authorization: Bearer {{access_token}} Tenant-Name: base
Corti-->>Service: API Response
```
Note that both the client secret and the access tokens generated have full access to the API. They should never be shared or exposed to the client. For best practices on keeping your credentials secure, read [our guide](/authentication/security_best_practices).
### Fetching an Access Token with OAuth 2.0 client-credentials
```curl Standard theme={null}
curl \
  'https://auth.{environment}.corti.app/realms/base/protocol/openid-connect/token' \
  -d 'client_id=xxx' -d 'client_secret=xxx' \
  -d 'grant_type=client_credentials' -d 'scope=openid'
```
```curl Custom Tenant theme={null}
curl \
  'https://auth.{environment}.corti.app/realms/{tenant-name}/protocol/openid-connect/token' \
  -d 'client_id=xxx' -d 'client_secret=xxx' \
  -d 'grant_type=client_credentials' -d 'scope=openid'
```
```json Response theme={null}
{
"access_token": "eyJhbGciOi...",
"expires_in": 300,
"token_type": "Bearer",
"scope": "profile openid email"
}
```
For more detailed instructions on how to get an access token in various languages, see our guide here: [Authentication Quickstart](/authentication/quickstart)
### Using the access token in API requests
Once you have an access token, include it in the Authorization header. You must also provide the Tenant-Name header to specify which tenant context the request operates in.
```curl Example Request theme={null}
curl -X GET 'https://api.{environment}.corti.app/v2/interactions' \
-H "Authorization: Bearer {{access_token}}" \
-H "Tenant-Name: base"
```
* **Authorization**: Must contain the bearer token returned from the OAuth server.
* **Tenant-Name**: The tenant identifier where the request is executed. Default tenants typically use `base`, but enterprise setups may use a custom tenant name.
## Why we use client credentials instead of an API key
API keys are simple, but they are static. If one leaks, whoever has it can call your APIs until you rotate it. Client credentials solve this by issuing short-lived tokens that expire automatically, which limits the blast radius of a leak and improves auditability.
Key differences:
* API keys are long-lived; client credentials produce short-lived tokens (5 minutes).
* API keys cannot express scopes or granular permissions; OAuth tokens can.
* OAuth flows integrate with identity providers and tenancy models, which makes them safer and easier to govern in enterprise environments.
# Quickstart - Authenticating to the Corti API
Source: https://docs.corti.ai/authentication/quickstart
Learn how to authenticate with client credentials.
This guide shows how to authenticate with the Corti API using OAuth 2.0 client credentials.
## Authenticate using code examples
For **JavaScript**, we recommend using the [Corti JavaScript SDK](/sdk/js-sdk), which handles authentication automatically. If you need to implement OAuth manually, see the examples below for other languages.
### Code examples
```js JavaScript expandable theme={null}
// Replace these with your values
const CLIENT_ID = "";
const CLIENT_SECRET = "";
const ENV = ""; // "eu" or "us"
const TENANT = ""; // for example "base"
async function getAccessToken() {
const tokenUrl = `https://auth.${ENV}.corti.app/realms/${TENANT}/protocol/openid-connect/token`;
const params = new URLSearchParams();
params.append("client_id", CLIENT_ID);
params.append("client_secret", CLIENT_SECRET);
params.append("grant_type", "client_credentials");
params.append("scope", "openid");
const res = await fetch(tokenUrl, {
method: "POST",
headers: { "Content-Type": "application/x-www-form-urlencoded" },
body: params
});
if (!res.ok) {
throw new Error(`Failed to get token, status ${res.status}`);
}
const data = await res.json();
return data.access_token;
}
// Example usage
getAccessToken().then(token => {
console.log("Access token:", token);
}).catch(err => {
console.error("Error:", err);
});
```
```py Python expandable theme={null}
import requests
# Replace these with your values
CLIENT_ID = ""
CLIENT_SECRET = ""
ENV = "" # e.g. "eu" or "us"
TENANT = "" # e.g. "base"
def get_access_token():
"""Request an OAuth2 client-credentials access token from Corti."""
url = f"https://auth.{ENV}.corti.app/realms/{TENANT}/protocol/openid-connect/token"
data = {
"client_id": CLIENT_ID,
"client_secret": CLIENT_SECRET,
"grant_type": "client_credentials",
"scope": "openid",
}
res = requests.post(url, data=data, headers={"Content-Type": "application/x-www-form-urlencoded"})
res.raise_for_status()
return res.json()["access_token"]
# Example usage
if __name__ == "__main__":
token = get_access_token()
print("Access token:", token)
```
```csharp C# expandable theme={null}
using System;
using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;
class Program
{
private const string ClientId = "";
private const string ClientSecret = "";
private const string Env = ""; // "eu" or "us"
private const string Tenant = "";
private static async Task<string> GetAccessTokenAsync()
{
var tokenUrl = $"https://auth.{Env}.corti.app/realms/{Tenant}/protocol/openid-connect/token";
using var http = new HttpClient();
var content = new FormUrlEncodedContent(new[]
{
new KeyValuePair<string, string>("client_id", ClientId),
new KeyValuePair<string, string>("client_secret", ClientSecret),
new KeyValuePair<string, string>("grant_type", "client_credentials"),
new KeyValuePair<string, string>("scope", "openid")
});
var response = await http.PostAsync(tokenUrl, content);
response.EnsureSuccessStatusCode();
var payload = await response.Content.ReadAsStringAsync();
using var json = JsonDocument.Parse(payload);
return json.RootElement.GetProperty("access_token").GetString()!;
}
static async Task Main()
{
var token = await GetAccessTokenAsync();
Console.WriteLine($"Access token: {token}");
}
}
```
Tokens expire after **300 seconds** (5 minutes); refresh as needed.
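Because tokens live only 300 seconds, most backends cache the token and refresh shortly before expiry instead of requesting a new one per API call. A minimal sketch, where the `fetch_token` callable stands in for whichever of the language examples above you use:

```python
import time

class TokenCache:
    """Cache an access token and refresh it shortly before it expires."""

    def __init__(self, fetch_token, margin_seconds: int = 30):
        # fetch_token() must return (access_token, expires_in_seconds)
        self._fetch_token = fetch_token
        self._margin = margin_seconds
        self._token = None
        self._expires_at = 0.0

    def get(self) -> str:
        # Refresh when no token is held or we are within the safety
        # margin of the expiry reported by the auth server.
        if self._token is None or time.time() >= self._expires_at - self._margin:
            self._token, expires_in = self._fetch_token()
            self._expires_at = time.time() + expires_in
        return self._token
```

The margin absorbs clock skew and in-flight request time, so a token never expires mid-request.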
# Security Best Practices
Source: https://docs.corti.ai/authentication/security_best_practices
Five essential steps for keeping your client credentials and access tokens secure.
Client credentials act as a powerful service account. Anyone holding them can act on your behalf and access your tenant. Protect them as you would any internal system password.
## How to keep your tokens safe
### 1. Use environment variables and a proper secret store
Never hardcode your `client_secret` in source files. Use environment variables and a secure secret manager provided by your cloud platform or infrastructure. Rotate secrets if exposure is suspected.
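For example, credentials can be read from the environment at startup so the service fails fast when they are missing. A sketch; the variable names are illustrative, not mandated by Corti:

```python
import os

def load_credentials() -> tuple[str, str]:
    """Read client credentials from environment variables.

    CORTI_CLIENT_ID / CORTI_CLIENT_SECRET are example names; your
    secret manager typically injects these at deploy time.
    """
    client_id = os.environ.get("CORTI_CLIENT_ID")
    client_secret = os.environ.get("CORTI_CLIENT_SECRET")
    if not client_id or not client_secret:
        raise RuntimeError(
            "Set CORTI_CLIENT_ID and CORTI_CLIENT_SECRET in the environment "
            "or your secret manager before starting the service."
        )
    return client_id, client_secret
```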
### 2. Never expose credentials in frontend or untrusted environments
Client credentials must only live on trusted servers. Do not embed them in browser code, mobile apps, desktop apps, or any environment you cannot fully control. Instead, your backend should request access tokens, validate requests, and decide what your users can do.
> You should never expose client credentials to a frontend application. In some applications, however, it is not possible to avoid passing an `accessToken` to the frontend. In these cases, you can limit the token's scope by requesting it with an explicit scope of `transcribe` and/or `streams`. This restricts the token to the streaming APIs.
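In that case, only the `scope` field of the token request changes. A sketch of the form fields, assuming the same token endpoint as in the overview and that the limited scopes are exactly the `transcribe` and `streams` values mentioned above:

```python
def limited_scope_form(client_id: str, client_secret: str, scopes: list) -> dict:
    """Form fields for a client-credentials request restricted to the
    streaming scopes ("transcribe" and/or "streams")."""
    allowed = {"transcribe", "streams"}
    unknown = set(scopes) - allowed
    if unknown:
        raise ValueError(f"Unsupported limited scopes: {sorted(unknown)}")
    # Space-separated scope string, as in the standard OAuth form encoding.
    return {
        "client_id": client_id,
        "client_secret": client_secret,
        "grant_type": "client_credentials",
        "scope": " ".join(["openid", *scopes]),
    }
```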
### 3. Use a backend proxy to handle all Corti API calls from frontends
When a frontend needs to call the Corti API, route every request through a backend proxy that you control. The proxy injects authentication, performs validation, and enforces user-level rules.
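The core of such a proxy is small: drop whatever credentials the frontend sent, then attach the backend's own token and tenant before forwarding. A sketch of that header-rewriting step:

```python
def forward_headers(incoming: dict, access_token: str, tenant: str = "base") -> dict:
    """Build the headers the proxy sends upstream: strip any
    client-supplied auth and inject server-side credentials."""
    headers = {
        k: v for k, v in incoming.items()
        if k.lower() not in {"authorization", "tenant-name", "cookie"}
    }
    headers["Authorization"] = f"Bearer {access_token}"
    headers["Tenant-Name"] = tenant
    return headers
```

Stripping the inbound `Authorization` header before injecting your own ensures a client can never smuggle its own token past the proxy.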
### 4. If you must use tokens in special cases, use limited-scope credentials
Some scenarios may require more narrowly scoped access. We are introducing support for limited-scope tokens that reduce the risk if a token is exposed.
# Building Your Ambient Scribe
Source: https://docs.corti.ai/get_started/ambient-scribe
Implementation guide for building an ambient scribe application
An implementation handbook for product and engineering teams building ambient clinical documentation using the Corti platform.
Modeled after structured use-case guides, this document is designed to help you move from concept → workflow → implementation → integration.
## Getting Started
Before writing a single line of code, align on the fundamentals:
Be explicit about *who* this scribe is for and *what* problem it solves. Is it primary care SOAP notes? Specialty consult documentation? Urgent care throughput optimization?
The shape of your clinical output — structure, tone, length, required fields — will vary significantly based on specialty and workflow. A narrowly defined initial use case leads to faster iteration and stronger provider trust.
Decide whether documentation should update live during the visit or generate after the encounter ends:
* Real-time systems improve transparency, allow in-visit correction, and set you up for in-consultation agents, but an unstable (or non-existent) network can make them a harder first use case.
* Post-encounter generation can simplify UX and cover offline periods, but you lose the ability to intervene when the captured audio is poor quality.
Your choice affects architecture, infrastructure requirements, and provider behavior.
Ambient documentation is the newest of the clinical speech technologies, and it solves a lot for your user base. Some specialties or user groups, however, are accustomed to classic speech technologies like dictation.
Corti offers an API endpoint to support dictation workflows in addition to the APIs for building an ambient scribe. Deciding up front whether you will support dictation helps you design an intuitive UX, so providers know when ambient is the right choice and when they want full dictation. Design for the behaviors you want to drive.
Ambient scribes are most powerful when they sit inside existing clinical workflows (we don’t want to change workflows; we want to support them!).
* Determine what systems you’ll pull context from (e.g. EHR demographics, scheduling system appointment reason) and where documentation will be written back (e.g. EHR note, After Visit Summary).
* Clarify whether you need deep EHR embedding, background API write-back, or a lightweight copy/paste workflow. Integration scope will heavily influence build complexity and timeline.
Clinicians must remain the final authority on documentation. Define how users will review extracted facts, edit generated sections, and approve the final note.
* Should providers be able to listen back to their cases?
* Will edits to documents be logged for your team to track common changes to then adjust prompts?
Designing thoughtful review controls builds trust, supports compliance, and improves long-term accuracy through feedback loops.
### Establish your Success Metrics
Determining the best way to measure success for your scribe can be difficult. The true measure of success is workflow transformation. Before launch, define how you will quantify impact — operationally, clinically, and experientially.
Provider trust and comfort are the leading indicators of long-term adoption.
Measure:
* Overall satisfaction score (CSAT or NPS-style survey)
* Adoption rates
Ambient tools fail not because they are inaccurate, but because they are cognitively burdensome or unpredictable. Regular pulse surveys (2–4 weeks post-rollout) help detect friction early.
If charting time is currently tracked, this becomes a powerful ROI metric.
Measure:
* Average documentation time per encounter
* After-hours charting ("pajama time")
Even a 20–30% reduction in post-visit documentation time materially improves provider well-being and operational efficiency. Remember, it takes time to see some of these impacts as new tools take time to learn.
Ambient tools often shift clinician attention back to the patient.
Measure:
* Patient-reported perception of provider attentiveness
* Visit quality ratings
Improved patient satisfaction can be a secondary but meaningful outcome of successful ambient implementation.
Tracking end-user modification behavior can be a great proxy metric for time savings and even provider trust:
Measure:
* % of sections edited
* Average word-level modification rate
* Most frequently rewritten sections
Don’t be afraid of seeing edits, though! Edits show that the tools are being adopted. Focus on where the trends in edits are, and where the outliers are.
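The first two metrics above can be computed directly from the generated and final note text. A sketch using Python's `difflib` for a simple word-level diff (the section names and dict structure are illustrative):

```python
import difflib

def edit_metrics(generated_sections: dict, final_sections: dict) -> dict:
    """Proxy metrics: % of sections edited and average word-level
    modification rate between generated and clinician-approved text."""
    edited = 0
    rates = []
    for name, generated in generated_sections.items():
        final = final_sections.get(name, "")
        if generated != final:
            edited += 1
        # SequenceMatcher ratio on word lists: 1.0 means identical,
        # so (1 - ratio) approximates the modification rate.
        matcher = difflib.SequenceMatcher(None, generated.split(), final.split())
        rates.append(1.0 - matcher.ratio())
    total = len(generated_sections) or 1
    return {
        "pct_sections_edited": edited / total,
        "avg_word_modification_rate": sum(rates) / total,
    }
```

Aggregating these per section name over time surfaces the most frequently rewritten sections as well.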
***
## The Corti API Basics
* **Interactions**: The interaction is the central hub for managing conversational sessions, letting you create and update interactions that drive clinical AI workflows.
* **Transcribe**: Real-time, stateless speech-to-text over WebSocket designed to power fluid dictation experiences with reliable medical language recognition.
* **Facts**: Extract and retrieve clinically relevant facts from interactions to enhance insight and decision support.
* **Agents**: Create and manage AI-driven agents that automate contextual messaging and task workflows with experts registry support.
* **Stream**: Live WebSocket interaction streaming that concurrently produces transcripts and clinical facts to support ambient documentation workflows.
* **Templates**: Define reusable document structures that ensure clarity and consistency in generated outputs.
* **Recordings**: Upload and organize audio recordings tied to interactions to fuel downstream transcription and document generation.
* **Documents**: Generate polished clinical documents from transcripts and templates for notes, summaries, or referrals.
* **Transcripts**: Convert uploaded recordings into structured, usable text to support review and documentation.
***
## How to Implement Your Ambient Scribe
### 1. Map Your Ambient Workflows
Ambient scribing is not just speech to text + summarization. It is a **clinical workflow system**.
Before building, map the end-to-end experience:
#### Questions to Align On
* Is this **in-person**, **virtual**, or both?
* Should facts be generated live? Or just documents at the end of the visit?
* How should providers:
* Review extracted facts?
* Edit generated documents?
* Approve final documentation?
* What documentation needs do your users have?
* Predefined structured SOAP notes?
* Specialty specific templates?
* User managed templates?
#### Visualize Your Core Workflows
To illustrate, consider a hypothetical EHR team that has made the following design decisions:
| Question | Answer | Justification |
| ------------------------------------------------------------------------------ | ---------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Is this in-person, virtual, or both? | Both | The below workflow doesn’t highlight this, but this would impact the UI design for sharing audio either from an attached microphone or a browser tab. |
| Should facts be generated live? Or just documents at the end of the visit? | Live | We’re using the Stream endpoint which is optimized for real time fact generation. |
| How should providers review facts? | In/Post Consultation | In the workflow, we’re presenting facts to providers to edit before submitting for document generation. |
| How should providers edit generated documents and approve final documentation? | Edit in app | The workflow shows the document being presented to the end user after generation. They should make necessary edits before exiting the chart or saving the document. |
| What documentation needs do your users have? | Corti Standard Template List | In the workflow, you’ll see calling the List Templates endpoint which will return the Corti standard list. |
### 2. Determine Audio Capture Strategy
Ambient systems are only as strong as their audio layer. Corti provides multiple capture paths, including browser-based capture via the [JS SDK](/sdk/js-sdk).
#### Option A: Realtime Scribe | Browser-Based Capture (JS SDK)
Real-time audio capture is a game changer in the clinical world, for two key reasons:
1. **Builds trust** - by capturing live audio, you can bring facts to clinicians live in the consultation. Seeing facts extracted in real time reassures providers that the scribe is following along.
2. **Intercepts issues** - with live audio capture, you can use Corti’s Audio Health events to catch moments where the incoming audio isn’t clear. It’s easier to tell a user mid-session that the audio isn’t clear, rather than afterwards, so they can correct it sooner.
This is ideal for:
* Web-based EHRs
* Telehealth platforms
* Embedded scribe widgets
```javascript Sample code expandable theme={null}
import WebSocket from "ws";
import fs from "fs";
const TENANT_NAME = "YOUR_TENANT_NAME";
const ACCESS_TOKEN = "YOUR_ACCESS_TOKEN";
const INTERACTION_ID = "YOUR_INTERACTION_UUID"; // must be created via REST first
const ENVIRONMENT = "eu"; // or "us"
const WSS_URL = `wss://api.corti.ai/stream/${INTERACTION_ID}?environment=${ENVIRONMENT}&tenant-name=${TENANT_NAME}&token=${ACCESS_TOKEN}`;
const ws = new WebSocket(WSS_URL);
ws.on("open", () => {
console.log("✅ WebSocket connected");
// Step 1: Send config immediately (must be within 10 seconds)
const config = {
type: "config",
configuration: {
transcription: {
primaryLanguage: "en",
isDiarization: false,
isMultichannel: false,
participants: [
{ channel: 0, role: "multiple" }
]
},
mode: {
type: "facts", // or "transcription" if you don't need facts
outputLocale: "en"
}
}
};
ws.send(JSON.stringify(config));
console.log("📤 Sent config");
});
ws.on("message", (data) => {
// Audio binary frames come back as Buffer — skip those
if (Buffer.isBuffer(data) && !isJson(data)) return;
const message = JSON.parse(data.toString());
console.log("📨 Received:", JSON.stringify(message, null, 2));
switch (message.type) {
case "CONFIG_ACCEPTED":
console.log("✅ Config accepted — session:", message.sessionId);
// Step 2: Start sending audio now that config is accepted
sendAudio();
break;
case "CONFIG_DENIED":
case "CONFIG_MISSING":
case "CONFIG_NOT_PROVIDED":
case "CONFIG_TIMEOUT":
console.error("❌ Config error:", message);
ws.close();
break;
case "transcript":
message.data.forEach((seg) => {
console.log(`🗣 [${seg.time.start}s → ${seg.time.end}s] ${seg.transcript}`);
});
break;
case "facts":
message.fact.forEach((fact) => {
console.log(`💡 Fact [${fact.group}]: ${fact.text}`);
});
break;
case "flushed":
console.log("🔄 Buffer flushed");
break;
case "usage":
console.log(`💳 Credits used: ${message.credits}`);
break;
case "ENDED":
console.log("🏁 Session ended — server closing socket");
// ws closes automatically after this
break;
case "error":
console.error("❌ Runtime error:", message.error);
break;
}
});
ws.on("close", (code, reason) => {
console.log(`🔌 Connection closed [${code}]: ${reason}`);
});
ws.on("error", (err) => {
console.error("🚨 WebSocket error:", err.message);
});
// --- Audio sending ---
function sendAudio() {
const AUDIO_FILE = "./sample.webm"; // swap with your audio file path
if (!fs.existsSync(AUDIO_FILE)) {
console.warn("⚠️ No audio file found — sending silence simulation");
simulateAudioAndEnd();
return;
}
const audioBuffer = fs.readFileSync(AUDIO_FILE);
const CHUNK_SIZE = 8192; // ~250–500ms chunks recommended
let offset = 0;
console.log(`🎙 Streaming ${audioBuffer.length} bytes of audio...`);
const interval = setInterval(() => {
if (ws.readyState !== WebSocket.OPEN) {
clearInterval(interval);
return;
}
if (offset >= audioBuffer.length) {
clearInterval(interval);
console.log("✅ All audio sent");
endSession();
return;
}
const chunk = audioBuffer.slice(offset, offset + CHUNK_SIZE);
ws.send(chunk); // send raw binary — no JSON wrapping
offset += CHUNK_SIZE;
}, 300); // send a chunk every 300ms
}
function simulateAudioAndEnd() {
// Demo: just wait a moment then end
setTimeout(() => endSession(), 2000);
}
// --- Optional: flush the audio buffer mid-session ---
function flushBuffer() {
if (ws.readyState === WebSocket.OPEN) {
ws.send(JSON.stringify({ type: "flush" }));
console.log("📤 Sent flush");
}
}
// --- End the session ---
function endSession() {
if (ws.readyState === WebSocket.OPEN) {
ws.send(JSON.stringify({ type: "end" }));
console.log("📤 Sent end — waiting for ENDED...");
}
}
// Helper: check if a Buffer looks like JSON
function isJson(buf) {
try {
JSON.parse(buf.toString());
return true;
} catch {
return false;
}
}
```
#### Option B: Async Scribe | External Capture + Send Audio
Sometimes conditions aren’t ideal for real-time audio transmission. That could be due to existing architecture constraints, or because your customers may not have reliable internet access in the settings where they work.
```javascript Sample code expandable theme={null}
// Corti API – Async Workflow
// 1. Create Interaction 2. Upload Recording 3. Generate Transcript 4. Extract Facts
// Docs: https://docs.corti.ai/workflows/ambient-async
const BASE_URL = `https://api.${ENVIRONMENT}.corti.app/v2`;
const TENANT = "YOUR_TENANT_NAME";
const TOKEN = "YOUR_ACCESS_TOKEN"; // obtain via OAuth client_credentials flow
const headers = {
"Authorization": `Bearer ${TOKEN}`,
"Tenant-Name": TENANT,
"Content-Type": "application/json",
};
// ─── STEP 1 · Create Interaction ────────────────────────────────────────────
async function createInteraction(): Promise<string> {
const res = await fetch(`${BASE_URL}/interactions`, {
method: "POST",
headers,
body: JSON.stringify({
encounter: {
identifier: crypto.randomUUID(),
status: "planned",
type: "first_consultation",
period: { startedAt: new Date().toISOString() },
},
}),
});
if (!res.ok) throw new Error(`Create interaction failed: ${res.status}`);
const data = await res.json();
const interactionId: string = data.id;
console.log("✅ Interaction created:", interactionId);
return interactionId;
}
// ─── STEP 2 · Upload Recording (full file as octet-stream) ──────────────────
async function uploadRecording(
interactionId: string,
audioBuffer: ArrayBuffer // full recording file contents
): Promise<string> {
const res = await fetch(`${BASE_URL}/interactions/${interactionId}/recordings/`, {
method: "POST",
headers: {
"Authorization": `Bearer ${TOKEN}`,
"Tenant-Name": TENANT,
"Content-Type": "application/octet-stream",
},
body: audioBuffer,
});
if (!res.ok) throw new Error(`Upload recording failed: ${res.status}`);
const data = await res.json();
const recordingId: string = data.recordingId;
console.log("✅ Recording uploaded:", recordingId);
return recordingId;
}
// ─── STEP 3 · Generate Transcript ───────────────────────────────────────────
async function createTranscript(
interactionId: string,
recordingId: string
): Promise<string> {
const res = await fetch(`${BASE_URL}/interactions/${interactionId}/transcripts/`, {
method: "POST",
headers,
body: JSON.stringify({
recordingId,
primaryLanguage: "en",
diarize: true, // separate speakers
isMultichannel: false,
modelName: "base", // "base" | "ensemble" | "symphony"
}),
});
if (!res.ok) throw new Error(`Create transcript failed: ${res.status}`);
const data = await res.json();
const transcript: string = data.transcript ?? JSON.stringify(data);
console.log("✅ Transcript generated");
return transcript;
}
// ─── STEP 4 · Extract Facts ─────────────────────────────────────────────────
async function extractFacts(interactionId: string): Promise