# Introduction to the Administration API

Source: https://docs.corti.ai/about/admin-api

Programmatic access to manage your Corti API Console account

## What is the Admin API?

The `Admin API` lets you manage your Corti API Console programmatically. It is built for administrators who want to automate account operations.

This `Admin API` is separate from the `Corti API` used for speech to text, text generation, and agentic workflows:

* Authentication and scope: the `Admin API` uses email-and-password credentials to obtain a bearer token via `/auth/token`. This token is used only for API administration.
* The `Admin API` endpoints `/customers` and `/users` are only enabled and exposed for projects with Embedded Assistant. Please [contact us](https://help.corti.app) if you are interested in this functionality or have further questions.

### Use Cases

The following functionality is currently supported by the `Admin API`:

| Feature              | Functionality                                                             | Scope                            |
| :------------------- | :------------------------------------------------------------------------ | :------------------------------- |
| **Authentication**   | Authenticate user and get access token                                   | All projects                     |
| **Manage Customers** | Create, update, list, and delete customer accounts within your project   | Projects with Embedded Assistant |
| **Manage Users**     | Create, update, list, and delete users associated with customer accounts | Projects with Embedded Assistant |

Permissions mirror the Corti API Console: only project admins or owners can create, update, or delete resources.

## Quickstart

* Sign up or log in at [console.corti.app](https://corti-api-console.web.app/)
* Ensure your account has a password set

Best practice: use a dedicated service account for Admin API automation. Assign only the minimal required role and rotate credentials regularly.

Call `/auth/token` with your Console email and password to obtain a JWT access token. See API Reference: [Authenticate user and get access token](/api-reference/admin/auth/authenticate-user-and-get-access-token)

```bash theme={null}
curl -X POST https://api.console.corti.app/functions/v1/public/auth/token \
  -H "Content-Type: application/json" \
  -d '{
    "email": "your-email@example.com",
    "password": "your-password"
  }'
```

Example response:

```json theme={null}
{
  "accessToken": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...",
  "tokenType": "bearer",
  "expiresIn": 3600
}
```

Include the token in the Authorization header for subsequent requests:

```bash theme={null}
curl -X GET https://api.console.corti.app/functions/v1/public/projects/{projectId}/customers \
  -H "Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9..."
```

Tokens expire after `expiresIn` seconds. Once expired, call the `/auth/token` endpoint again to obtain a new token.

***

## Top Pages

* Obtain an access token
* Create a new customer in a project
* Create a new user within a customer
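The `Manage Customers` and `Manage Users` rows above map to the `/customers` and `/users` endpoints. As a hedged illustration, creating a customer could look like the request below; the path mirrors the list-customers example above, while the body fields (`name`, `externalId`) are assumptions rather than a confirmed schema, so check the API Reference before use.

```bash theme={null}
# Hypothetical sketch: create a customer in a project.
# The JSON body fields are illustrative assumptions, not a confirmed schema.
curl -X POST https://api.console.corti.app/functions/v1/public/projects/{projectId}/customers \
  -H "Authorization: Bearer $ADMIN_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Example Clinic",
    "externalId": "clinic-001"
  }'
```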
Please [contact us](https://help.corti.app) for support or more information

# Compliance & Trust

Source: https://docs.corti.ai/about/compliance

# Errors

Source: https://docs.corti.ai/about/errors

Collection of known errors and solutions

[docs-get-access]: https://docs.corti.ai/get_started/getaccess
[docs-quickstart]: https://docs.corti.ai/get_started/quickstart
[docs-langs]: https://docs.corti.ai/about/languages
[support]: https://help.corti.app
[support-email]: help@corti.ai

## Access forbidden (`403`)

Code: `A0001`

You don't have the necessary permissions to access this feature or information. Please check [Get access][docs-get-access] and [Authentication][docs-quickstart].

## Timeout (`504`)

Code: `A0002`

The allowed time for this request has passed. Please try again with a smaller request size or contact [support][support].

## Bad request (`400`)

Code: `A0003`

Your request couldn't be processed. Please check the relevant documentation, base URL, endpoint, `Content-Type` header, required fields, and field format (e.g. UUIDs), then try again.

## Invalid token (`403`)

Code: `A0004`

The provided token was incomplete or not in the correct format. Please check the token or contact [support][support].

## Invalid user (`403`)

Code: `A0005`

The provided token contains an invalid user. Please check the token or contact [support][support].

## Invalid ID (`400`)

Code: `A0006`

The provided ID is in an incorrect format or shape. Please check the documentation for required ID formatting, such as UUID.

## Interaction not found (`404`)

Code: `A0007`

The requested interaction could not be found in our system. Ensure the interaction exists and you have permission to access it through the listing endpoint.

## Invalid UUID (`400`)

Code: `A0008`

The provided UUID is in an incorrect format or shape. Please check the documentation for required ID formatting, such as UUID.

## Document not found (`404`)

Code: `A0009`

The requested document could not be found in our system. Ensure the document exists and you have permission to access it through the listing endpoint.

## Error processing (`500`)

Code: `A0010`

The request could not be performed at the moment. Please try again later or contact [support][support].

## Recording not found (`404`)

Code: `A0011`

The requested recording could not be found in our system. Ensure the recording exists and you have permission to access it through the listing endpoint.

## Transcript not found (`404`)

Code: `A0012`

The requested transcript could not be found in our system. Ensure the transcript exists and you have permission to access it through the listing endpoint.

## Bad query in URL (`400`)

Code: `A0013`

The query parameters are invalid or incomplete. Please verify them against the endpoint specification.

## Template section not found (`404`)

Code: `A0014`

The requested template section could not be located in our system. Ensure the template section exists and you have permission to access it.

## Invalid context structure (`400`)

Code: `A0015`

The specified context structure is invalid. Ensure the request matches the expected format.

## Limit reached (`400`)

Code: `A0016`

The provided input exceeds the maximum allowed limit. Please provide a different value or check the endpoint's documentation.

## Duplicate value (`400`)

Code: `A0017`

The value provided already exists. Duplicate entries are not allowed for this field. Refer to the endpoint specification for the field.

## Unsupported language (`400`)

Code: `A0018`

The requested language is not supported by the endpoint.
Please check the [supported languages][docs-langs] or contact [support][support].

## Fact group not found (`404`)

Code: `A0019`

The requested fact group was not found in the system. Double-check that such a group exists or contact [support][support].

## Fact not found (`404`)

Code: `A0020`

The requested fact was not found in the system. Double-check that such a fact exists or contact [support][support].

## Insufficient balance (`429`)

Code: `A0021`

You're currently in limited access mode. To unlock full API functionality, please add credits to continue using the API. For assistance, contact [support][support].

## Provided audio is invalid (`400`)

Code: `A0022`

The provided audio is invalid or corrupted. Please check the audio format and try again. For assistance or more information, contact [support][support].

## Invalid scope (`403`)

Code: `A0023`

The provided token has a scope that is not permitted for this service. Please check the token scopes or contact [support][support].

# Help Center

Source: https://docs.corti.ai/about/help

# Introduction to the Corti API

Source: https://docs.corti.ai/about/introduction

Overview of Corti application programming interfaces (API)

Corti is the all-in-one AI stack for healthcare, built for medical accuracy, compliance, and scale. Healthcare's complex language and specialized knowledge demand purpose-built AI infrastructure. Corti's clinical speech understanding, LLMs, and agentic automation are delivered through a single platform that makes it easy to embed AI directly into healthcare workflows. From documentation and coding to billing and referrals, boosting care and fostering provider wellbeing, Corti enables product teams to surface critical insights, improve patient outcomes, and ship faster with less effort.

Corti's goal is to be the most complete and accurate **AI infrastructure platform for healthcare developers** building products that demand medical-grade reasoning and enterprise reliability, without any compromises on integration speed or regulatory compliance.

Learn more about what makes the Corti AI platform the right choice for developers and healthcare organizations building the next generation of clinical applications.
***

## Why Choose the Corti API?

|                                   |                                                                                                                                                            |
| :-------------------------------- | :--------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Purpose-built for healthcare**  | Optimized for the unique needs and compliance standards of the medical field.                                                                              |
| **Real-time processing**          | Live data streaming with highly accurate fact generation enables instantaneous AI-driven support to integrated applications and healthcare professionals. |
| **Seamless workflow integration** | Designed to work across multiple modalities within clinical and operational workflows.                                                                     |
| **Customizable and scalable**     | Robust and adaptable capabilities to fit your organizational needs.                                                                                        |

* Bespoke API integrations
* SDKs and web components
* Embeddable UI elements
* Medical language proficiency
* Secure and compliant
* Real-time or asynchronous
***

## Integrate seamlessly

Corti AI can be integrated via the Corti API, allowing organizations to build bespoke AI-powered solutions that meet their specific needs. The same API powers [Corti Assistant](/assistant/welcome) - a fully packaged, EHR-agnostic, real-time ambient AI scribe that automates documentation, information lookup, medical coding, and more. If, however, you want to embed Corti AI into your workflow or customize the interactions, then take a deeper dive into the API documentation.
**Corti AI Symphony** is Corti’s **model network and orchestration layer** for Text and Audio, powering Speech to Text, Text Generation, and Agent capabilities. The power of reasoning and contextual inference unlocks critical functionality to power healthcare workflows.

With the Corti API you can build any speech-enabled or text-based workflow for healthcare. The capabilities of the Corti AI platform can be accessed directly via the API or with the help of SDKs, Web Components, and embeddable applications (as desired).

***

## Core Capabilities
***

## Bringing it All Together

This documentation site outlines how to use the API and provides example workflows.

* Continue on to [How it works](/about/how_it_works) to learn more about the system architecture.
* Documentation pages [welcome](/about/welcome) you to the API and provide explanations of core capabilities.
* Visit the [API Console](/get_started/getaccess/) to create an account and create client credentials so you can begin your journey.
* The [JavaScript SDK](/quickstart/javascript-sdk/) page walks through how to get started quickly with the Corti API.
* The [API Reference](/api-reference/welcome) page provides detailed documentation for each available endpoint.
* The [Resources](/about/resources/) page provides release notes and other useful resources.
If you have any questions about how to implement Corti AI in your healthcare environment or application, then [contact us](https://help.corti.app) for more information.

# Languages

Source: https://docs.corti.ai/about/languages

Learn about how languages are supported in Corti APIs

Corti speech to text and text generation are specifically designed for use in the healthcare domain. Speech to text (STT) language models are designed to balance recognition speed, performance, and accuracy. Text generation LLMs accept various inputs depending on the workflow (e.g., transcripts or facts) and have defined guardrails to support quality assurance of fact and document outputs.

The `language codes` listed below are used in API requests to define the output language for speech to text and document generation.

* Learn more about speech to text endpoints [here](/stt/overview).
* Learn how to query the API for document templates available by language [here](/textgen/templates#retrieving-available-templates).

***

## Speech to Text Performance Tiers

Corti speech to text uses a tier system to categorize the functionality and performance available per language:

| Tier         | Description                                                                                                       | Medical Terminology Validation |
| :----------- | :---------------------------------------------------------------------------------------------------------------- | :----------------------------: |
| **Base**     | AI-powered speech recognition, ready to integrate with healthcare IT solutions                                    |         `Up to 1,000`          |
| **Enhanced** | Base plus optimized medical vocabulary for a variety of specialties and improved support for real-time dictation  |         `1,000-99,999`         |
| **Premier**  | Enhanced plus speech to text models delivering the best performance in terms of accuracy, quality, and latency    |           `100,000+`           |

***

## Language Availability per Endpoint

The table below summarizes the languages supported by the Corti API and how they can be used with speech to text endpoints (`Transcribe`, `Stream`, and `Transcripts`) and text generation endpoints (`Documents`):

| Language          |    Language Code     | ASR Performance | [Transcribe](/api-reference/transcribe) | [Stream](/api-reference/stream) | [Transcripts](/api-reference/transcripts/create-transcript) | [Documents](/api-reference/documents/generate-document)<sup>1</sup> |
| :---------------- | :------------------: | :-------------: | :-------------------------------------: | :-----------------------------: | :---------------------------------------------------------: | :------------------------------------------------------------------: |
| Arabic            |         `ar`         |      Base       |              |              |               |              |
| Danish            |         `da`         |     Premier     |              |              | <sup>2</sup> |              |
| Dutch             |         `nl`         |    Enhanced     |              |              |               |              |
| English (US)      |   `en` or `en-US`    |     Premier     |              |              | <sup>2</sup> |              |
| English (UK)      |       `en-GB`        |     Premier     |              |              | <sup>2</sup> |              |
| French            |         `fr`         |     Premier     |              |              | <sup>2</sup> |              |
| German            |         `de`         |     Premier     |              |              | <sup>2</sup> |              |
| Hungarian         |         `hu`         |    Enhanced     |              |              |               | <sup>5</sup> |
| Italian           |         `it`         |      Base       |              |              |               |              |
| Norwegian         |         `no`         |    Enhanced     |              |              |               |              |
| Portuguese        |         `pt`         |      Base       |              |              |               |              |
| Spanish           |         `es`         |      Base       |              |              |               |              |
| Swedish           |         `sv`         |    Enhanced     |              |              |               |              |
| Swiss German      | `gsw-CH`<sup>3</sup> |    Enhanced     |              | <sup>4</sup> | <sup>2</sup> |              |
| Swiss High German | `de-CH`<sup>3</sup>  |     Premier     |              | <sup>4</sup> | <sup>2</sup> |              |

**Notes:**

<sup>1</sup> Use the language codes listed above for the `outputLanguage` parameter in `POST /documents` requests. Template(s) or section(s) in the defined output language must be available for successful document generation.

<sup>2</sup> Speech to text accuracy for async audio file processing via the `/transcripts` endpoint may be degraded compared to real-time recognition via the `/transcribe` and `/stream` endpoints. Further model updates are in progress to address this performance limitation.

<sup>3</sup> Use language code `gsw-CH` for dialectal Swiss German workflows (e.g., conversational AI scribing), and language code `de-CH` when Swiss High German is spoken (e.g., dictation).

<sup>4</sup> For Swiss German `/stream` configuration: use `gsw-CH` for `primaryLanguage`, as dialectal spoken Swiss German is transcribed to written Swiss High German, and use `de-CH` for the facts `outputLanguage`.

<sup>5</sup> Hungarian document generation via default templates is available upon request.
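To make notes 3 and 4 concrete, here is a minimal sketch of how the two Swiss German codes could be combined in a `/stream` configuration: dialectal speech in, written Swiss High German transcript out, and facts produced in `de-CH`. The `primaryLanguage` and `outputLanguage` parameter names come from note 4; the surrounding request shape is illustrative, so consult the [Stream](/api-reference/stream) reference for the exact schema.

```json theme={null}
{
  "primaryLanguage": "gsw-CH",
  "outputLanguage": "de-CH"
}
```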

***

## Languages Available for Exploration

The table below summarizes languages that, upon request, can be enabled with `base` tier functionality and performance. Corti values the opportunity to expand to new markets, but we need your collaboration and partnership in speech-to-text validation and functionality refinement. Please [contact us](https://help.corti.app) to discuss further.

| Language   | Language Code |
| :--------- | :-----------: |
| Bulgarian  |     `bg`      |
| Croatian   |     `hr`      |
| Czech      |     `cs`      |
| Estonian   |     `et`      |
| Finnish    |     `fi`      |
| Greek      |     `el`      |
| Hebrew     |     `he`      |
| Japanese   |     `ja`      |
| Latvian    |     `lv`      |
| Lithuanian |     `lt`      |
| Maltese    |     `mt`      |
| Mandarin   |     `cmn`     |
| Polish     |     `pl`      |
| Romanian   |     `ro`      |
| Russian    |     `ru`      |
| Slovakian  |     `sk`      |
| Slovenian  |     `sl`      |
| Ukrainian  |     `uk`      |

***

## Language Translation

* Translation (audio capture in one language with transcript output in a different language) is not officially supported in the Corti API at this time.
* Some general support for translation of `transcripts` in English to `facts` in other languages (e.g. German, French, Danish, etc.) is available in [stream](/textgen/facts_realtime#using-the-api) or [extract facts](/api-reference/facts/extract-facts) requests.
* Additional translation language-pair combinations are not quality assessed or performance benchmarked.
Please [contact us](https://help.corti.app) if you are interested in a language that is not listed here, need help with tiers and endpoint definitions, or have questions about how to use language codes in API requests. # Public Roadmap Source: https://docs.corti.ai/about/roadmap # A2A Protocol (Agent-to-Agent) Source: https://docs.corti.ai/agentic/a2a-protocol Learn about the Agent-to-Agent protocol for inter-agent communication ### What is the A2A Protocol The **Agent-to-Agent (A2A)** protocol is an open standard that enables secure, framework-agnostic communication between autonomous AI agents. Instead of building bespoke integrations whenever you want agents to collaborate, A2A gives Corti-Agentic and other systems a **common language** agents can use to discover, talk to, and delegate work to one another. For the full technical specification, see the official A2A project docs at [a2a-protocol.org](https://a2a-protocol.org/latest/). Originally developed by Google and now stewarded under the Linux Foundation, A2A solves a core problem in multi-agent systems: interoperability across ecosystems, languages, and vendors. It lets you connect agents built in Python, JavaScript, Java, Go, .NET, or other languages and have them cooperate on complex workflows without exposing internal agent state or proprietary logic. ### Why Corti-Agentic Uses A2A We chose A2A because it: * **Standardizes agent communication.** Agents can talk to each other without siloed, point-to-point integrations. That makes composite workflows easier to build and maintain. * **Supports real workflows.** A2A includes discovery, task negotiation, and streaming updates, so agents can coordinate long-running or multi-step jobs. * **Preserves security and opacity.** Agents exchange structured messages without sharing internal memory or tools. That protects intellectual property and keeps interactions predictable. * **Leverages open tooling.** There are open source SDKs in multiple languages and example implementations you can reuse. In Corti-Agentic, A2A is the backbone for agent collaboration. Whether you’re orchestrating specialist agents, chaining reasoning tasks, or integrating external agent services, A2A gives you a robust, open foundation you don’t have to reinvent. ### Open Source SDKs and Tooling For links to Corti’s official SDK and the official A2A project SDKs (Python, JavaScript/TypeScript, Java, Go, and .NET), see **[SDKs & Integrations](/agentic/sdks-integrations)**. Please [contact us](https://help.corti.app/) if you need more information about the Corti Agentic Framework. # Create Agent Source: https://docs.corti.ai/agentic/agents/create-agent agentic/auto-generated-openapi.yml post /agents This endpoint allows the creation of a new agent that can be utilized in the `POST /agents/{id}/v1/message:send` endpoint. # Delete Agent by ID Source: https://docs.corti.ai/agentic/agents/delete-agent-by-id agentic/auto-generated-openapi.yml delete /agents/{id} This endpoint deletes an agent by its identifier. Once deleted, the agent can no longer be used in threads. # Get Agent by ID Source: https://docs.corti.ai/agentic/agents/get-agent-by-id agentic/auto-generated-openapi.yml get /agents/{id} This endpoint retrieves an agent by its identifier. The agent contains information about its capabilities and the experts it can call. 
# Get Agent Card Source: https://docs.corti.ai/agentic/agents/get-agent-card agentic/auto-generated-openapi.yml get /agents/{id}/agent-card.json This endpoint retrieves the agent card in JSON format, which provides metadata about the agent, including its name, description, and the experts it can call. # Get Context by ID Source: https://docs.corti.ai/agentic/agents/get-context-by-id agentic/auto-generated-openapi.yml get /agents/{id}/v1/contexts/{contextId} This endpoint retrieves all tasks and top-level messages associated with a specific context for the given agent. # Get Task by ID Source: https://docs.corti.ai/agentic/agents/get-task-by-id agentic/auto-generated-openapi.yml get /agents/{id}/v1/tasks/{taskId} This endpoint retrieves the status and details of a specific task associated with the given agent. It provides information about the task's current state, history, and any artifacts produced during its execution. # List Agents Source: https://docs.corti.ai/agentic/agents/list-agents agentic/auto-generated-openapi.yml get /agents This endpoint retrieves a list of all agents that can be called by the Corti Agent Framework. # List Registry Experts Source: https://docs.corti.ai/agentic/agents/list-registry-experts agentic/auto-generated-openapi.yml get /agents/registry/experts This endpoint retrieves the experts registry, which contains information about all available experts that can be referenced when creating agents through the AgentsExpertReference schema. # Send Message to Agent Source: https://docs.corti.ai/agentic/agents/send-message-to-agent agentic/auto-generated-openapi.yml post /agents/{id}/v1/message:send This endpoint sends a message to the specified agent to start or continue a task. The agent processes the message and returns a response. If the message contains a task ID that matches an ongoing task, the agent will continue that task; otherwise, it will start a new task. # Update Agent by ID Source: https://docs.corti.ai/agentic/agents/update-agent-by-id agentic/auto-generated-openapi.yml patch /agents/{id} This endpoint updates an existing agent. Only the fields provided in the request body will be updated; other fields will remain unchanged. # System Architecture Source: https://docs.corti.ai/agentic/architecture Learn about the Agentic Framework system architecture The Corti Agentic Framework adopts a **multi-agent architecture** to power development of healthcare AI solutions. As compared to a monolithic LLM, the Corti Agentic Framework allows for improved specialization and protocol-based composition. ## Architecture Components Diagram illustrating the Corti Agentic Framework architecture, showing the Orchestrator, Experts, and Memory components and how they interact. The architecture consists of three core components working together: * **[Orchestrator](/agentic/orchestrator)** — The central coordinator that receives user requests and delegates tasks to specialized Experts via the A2A protocol. * **[Experts](/agentic/experts)** — Specialized sub-agents that perform domain-specific work, potentially calling external services through MCP. * **[Memory](/agentic/context-memory)** — Maintains persistent context and state, enabling the Orchestrator to make informed decisions and ensuring continuity across conversations. Together, this architecture enables complex workflows through protocol-based composition while maintaining strict data isolation and stateless reasoning agents. 
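To make the component model concrete, the sketch below lists registry experts and then creates an agent that can delegate to one of them, using the `GET /agents/registry/experts` and `POST /agents` endpoints from the API reference above. The base URL is a placeholder and the create-agent body fields are illustrative assumptions (the registry reference mentions an `AgentsExpertReference` schema); consult the Create Agent reference for the confirmed shape.

```bash theme={null}
# Discover available experts (List Registry Experts endpoint).
# $BASE_URL is a placeholder for your Corti Agentic Framework host.
curl -s "$BASE_URL/agents/registry/experts" \
  -H "Authorization: Bearer $TOKEN"

# Create an agent that references a registry expert.
# Body fields below are illustrative assumptions, not a confirmed schema.
curl -s -X POST "$BASE_URL/agents" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Documentation Assistant",
    "description": "Drafts clinical notes and answers reference questions",
    "experts": [{ "id": "medical_coding" }]
  }'
```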
## Interaction mechanisms in Corti The A2A Protocol supports various interaction patterns to accommodate different needs for responsiveness and persistence. Corti builds on these patterns so you can choose the right interaction model for your product: * **Request/Response (Polling)**: Used for many synchronous Corti APIs where you send input and wait for a single response. For long‑running Corti tasks, your client can poll the task endpoint for status and results. * **Streaming with Server-Sent Events (SSE)**: Used by Corti for real‑time experiences (for example, ambient notes or live guidance). Your client opens an SSE stream to receive incremental tokens, events, or status updates over an open HTTP connection.
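A minimal sketch of both patterns against the agent endpoints listed in the API reference (`message:send` and the task lookup). The base URL, request body (mirroring examples elsewhere in these docs), response field paths, and the SSE endpoint are assumptions to confirm against the reference.

```bash theme={null}
# Request/response with polling: send a message, then poll the task.
TASK_ID=$(curl -s -X POST "$BASE_URL/agents/$AGENT_ID/v1/message:send" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "parts": [{"type": "text", "text": "Summarize this encounter"}]}]}' \
  | jq -r '.taskId')   # response field path is an assumption

# Poll the Get Task by ID endpoint for status, history, and artifacts.
curl -s "$BASE_URL/agents/$AGENT_ID/v1/tasks/$TASK_ID" \
  -H "Authorization: Bearer $TOKEN"

# Streaming with SSE: hold the HTTP connection open for incremental events.
# -N disables curl buffering; the exact streaming endpoint is an assumption.
curl -N "$BASE_URL/agents/$AGENT_ID/v1/stream" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Accept: text/event-stream"
```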
Please [contact us](https://help.corti.app/) if you need more information about the Corti Agentic Framework.

# Beginners' Guide to Agents

Source: https://docs.corti.ai/agentic/beginners-guide

How LLM agents work in the Corti Agentic Framework

export const Lottie = ({path, width = '100%', maxWidth = '800px', height = 'auto', loop = true, autoplay = true}) => {
  const containerRef = useRef(null);
  const animationRef = useRef(null);
  const scriptRef = useRef(null);
  const propsRef = useRef({ path, loop, autoplay });
  useEffect(() => {
    propsRef.current = { path, loop, autoplay };
  }, [path, loop, autoplay]);
  const initializeAnimation = () => {
    if (!window.lottie || !containerRef.current) {
      return;
    }
    if (animationRef.current) {
      animationRef.current.destroy();
      animationRef.current = null;
    }
    const {path, loop, autoplay} = propsRef.current;
    animationRef.current = window.lottie.loadAnimation({
      container: containerRef.current,
      renderer: 'svg',
      loop: loop,
      autoplay: autoplay,
      path: path
    });
  };
  useEffect(() => {
    if (window.lottie) {
      initializeAnimation();
      return;
    }
    const existingScript = document.querySelector('script[src="https://cdnjs.cloudflare.com/ajax/libs/bodymovin/5.12.2/lottie.min.js"]');
    if (existingScript) {
      existingScript.addEventListener('load', initializeAnimation);
      return () => {
        existingScript.removeEventListener('load', initializeAnimation);
      };
    }
    const script = document.createElement('script');
    script.src = 'https://cdnjs.cloudflare.com/ajax/libs/bodymovin/5.12.2/lottie.min.js';
    script.async = true;
    scriptRef.current = script;
    script.onload = initializeAnimation;
    document.body.appendChild(script);
    return () => {
      if (scriptRef.current && document.body.contains(scriptRef.current)) {
        document.body.removeChild(scriptRef.current);
        scriptRef.current = null;
      }
    };
  }, []);
  useEffect(() => {
    if (window.lottie) {
      initializeAnimation();
    }
    return () => {
      if (animationRef.current) {
        animationRef.current.destroy();
        animationRef.current = null;
      }
    };
  }, [path, loop, autoplay]);
  return <div ref={containerRef} style={{ width, maxWidth, height }} />;
};

In healthcare, an **LLM agent** is not a chatbot trying to answer everything on its own. The language model is used primarily for reasoning and planning: understanding a request, breaking it down, and deciding which experts, tools, or data sources are best suited to handle each part of the task. Instead of relying on internal knowledge, agents retrieve information from trusted external knowledge bases, clinical systems, and customer-owned data at runtime. When appropriate, they can also take controlled actions, such as writing structured data back to an EHR, triggering downstream workflows, or sending information to other systems.

The **Corti Agentic Framework** is the healthcare-grade platform that makes this possible in production. It provides the orchestration layer that allows agents to delegate work to specialized experts, operate within strict safety and governance boundaries, and remain fully auditable. This enables AI systems that can reason, look things up, and act, without guessing or bypassing clinical control.

# Context & Memory

Source: https://docs.corti.ai/agentic/context-memory

Learn how context and memory work in the Corti Agentic Framework

A **context** in the Corti Agentic Framework makes use of memory from previous text and data in the conversation so far: think of it as a thread that maintains conversation history. Understanding how context works is essential for building effective integrations that maintain continuity across multiple messages.

*Diagram showing orchestration flow in the agentic framework*

## What is Context?

A `Context` (identified by a server-generated `contextId`) is a logical grouping of related `Messages`, `Tasks`, and `Artifacts`, providing context across a multi-turn "conversation". It enables you to associate multiple tasks and agents with a single patient encounter, call, or workflow, ensuring continuity and proper scoping of shared knowledge throughout.

The `contextId` is **always created on the server**. You never generate it client-side. This ensures proper state management and prevents conflicts.

### Data Isolation and Scoping

**Contexts provide strict data isolation**: data can **NEVER** leak across contexts. Each `contextId` creates a completely isolated conversation scope. Messages, tasks, artifacts, and any data within one context are completely inaccessible to agents working in a different context.

This ensures:

* **Privacy and security**: Patient data from one encounter cannot accidentally be exposed to another encounter
* **Data integrity**: Information from different workflows remains properly separated
* **Compliance**: You can confidently scope sensitive data to specific contexts without risk of cross-contamination

When you need to share information across contexts, you must explicitly pass it via `DataPart` objects in your messages; there is no automatic data sharing between contexts.

## Using Context for Automatic Memory Management

The simplest way to use context is to let the framework automatically manage conversation memory:

### Workflow Pattern

1. **First message**: Send your message **without** a `contextId`. The server will create a new context automatically.
2. **Response**: The server's response includes the newly created `contextId`.
3. **Subsequent messages**: Include that `contextId` in your requests. Memory from previous messages in that context is automatically managed and available to the agent.
When you include a `contextId` in your request, the agent has access to all previous messages, artifacts, and state within **that specific context only**. Data from other contexts is completely isolated and inaccessible. This enables natural, continuous conversations without manually passing history, while maintaining strict data boundaries between different encounters or workflows. ### Standalone Requests If you don't want automatic memory management, always send messages **without** a `contextId`. Each message will then be treated as a standalone request without access to prior conversation history. This is useful for: * One-off queries that don't depend on prior context * Testing and debugging individual requests * Scenarios where you want explicit control over what context is included ## Passing Additional Context with Each Request In addition to automatic memory management via `contextId`, you can pass additional context in each request by including `DataPart` objects in your message. This is useful when you want to provide specific structured data, summaries, or other context that should be considered for that particular request. ```json theme={null} { "contextId": "ctx_abc123", "messages": [ { "role": "user", "parts": [ { "type": "text", "text": "Generate a summary of this patient encounter" }, { "type": "data", "data": { "patientId": "pat_12345", "encounterDate": "2025-12-15", "chiefComplaint": "Chest pain", "vitalSigns": { "bloodPressure": "120/80", "heartRate": 72, "temperature": 98.6 } } } ] } ] } ``` This approach allows you to: * Provide structured data (patient records, clinical facts, etc.) alongside text * Include summaries or distilled information from external sources * Pass metadata or configuration that should be considered for this specific request * Combine automatic memory (via `contextId`) with explicit context (via `DataPart`) ## How Memory Works The Corti Agentic Framework uses an intelligent memory system that automatically indexes and stores all content within a context, enabling semantic retrieval when needed. ### Automatic Indexing Every `TextPart` and `DataPart` you send in messages is automatically indexed and stored in the context's memory. This includes: * Text content from user and agent messages * Structured data from `DataPart` objects (patient records, clinical facts, metadata, etc.) * Artifacts generated by tasks * Any other content that flows through the context ### Semantic Retrieval The memory system operates like a RAG (Retrieval Augmented Generation) pipeline. When an agent processes a new message: 1. **Semantic search**: The system performs semantic search across all indexed content in the context's memory 2. **Relevant retrieval**: It retrieves the most semantically relevant information based on the current query or task 3. **Just-in-time injection**: This relevant context is automatically injected into the agent's prompt, ensuring it has access to the right information at the right time This means you don't need to manually pass all relevant history with each request—the system intelligently retrieves what's needed based on semantic similarity. For example, if you ask "What was the patient's chief complaint?" in a later message, the system will automatically retrieve and include the relevant information from earlier in the conversation, even if it was mentioned many messages ago. 
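Putting the workflow pattern and semantic retrieval together, the sketch below sends a first message without a `contextId`, then reuses the server-generated one for a follow-up question. The `message:send` path appears in the API reference; the base URL, body shape (mirroring the example above), and the response field holding the `contextId` are assumptions to verify against the reference.

```bash theme={null}
# 1) First message: omit contextId so the server creates a new context.
#    (Reading `.contextId` from the response is an assumption.)
CONTEXT_ID=$(curl -s -X POST "$BASE_URL/agents/$AGENT_ID/v1/message:send" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "parts": [{"type": "text", "text": "Start a note for this encounter"}]}]}' \
  | jq -r '.contextId')

# 2) Follow-up: include the server-generated contextId so memory from
#    this context (and only this context) is retrieved semantically.
curl -s -X POST "$BASE_URL/agents/$AGENT_ID/v1/message:send" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d "{\"contextId\": \"$CONTEXT_ID\", \"messages\": [{\"role\": \"user\", \"parts\": [{\"type\": \"text\", \"text\": \"What was the patient's chief complaint?\"}]}]}"
```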
### Benefits * **Efficient**: Only relevant information is retrieved and used, reducing token usage * **Automatic**: No need to manually manage what context to include * **Semantic**: Works based on meaning, not just keyword matching * **Comprehensive**: All content in the context is searchable and retrievable ## Context vs. Reference Task IDs The framework provides two mechanisms for linking related work: * **`contextId`** – Groups multiple related `Messages`, `Tasks`, and `Artifacts` together (think of it as the encounter/call/workflow bucket). This provides automatic memory management and is sufficient for most use cases. * **`referenceTaskIds`** – An optional list of specific past `Task` IDs within the same context that should be treated as explicit inputs or background. Note that `referenceTaskIds` are scoped to a context—they reference tasks within the same `contextId`. **In most situations, you can ignore `referenceTaskIds`** since the automatic memory provided by `contextId` is sufficient. Only use `referenceTaskIds` when you need to explicitly direct the agent to pay attention to specific tasks or artifacts within the context, such as in complex multi-step workflows where you want to ensure certain outputs are prioritized. ## Context and Interaction IDs If you're using contexts alongside Corti's internal interaction representation (for example, when integrating with Corti Assistant or other Corti products that use `interactionId`), note that **these two concepts are currently not linked**. * **`contextId`** (from the Agentic Framework) and **`interactionId`** (from Corti's internal systems) are separate concepts that you will need to map yourself in your application. * There is no automatic association between a Corti `interactionId` and an Agentic Framework `contextId`. **Recommended approach:** * **Use a fresh context per interaction**: When working with a Corti interaction, create a new `contextId` for that interaction. This keeps data properly scoped and isolated per interaction. * Store the mapping between your `interactionId` and `contextId`(s) in your own application state or metadata. * If you need to share data across multiple contexts within the same interaction, explicitly pass it via `DataPart` objects. We're looking into ways to make the relationship between interactions and contexts more ergonomic if this is relevant to your use case. For now, maintaining your own mapping and using one context per interaction is the recommended pattern. For more details on how context relates to other core concepts, see [Core Concepts](/agentic/core-concepts). Please [contact us](https://help.corti.app/) if you need more information about context and memory in the Corti Agentic Framework. # Core Concepts Source: https://docs.corti.ai/agentic/core-concepts Learn the fundamental building blocks of the Corti Agentic Framework This page adds Corti-specific detail on top of the core A2A concepts. We have tried to adhere as closely as possible to the intended A2A protocol specification — for the canonical definition of these concepts, see the A2A documentation on [Core Concepts and Components in A2A](https://a2a-protocol.org/latest/topics/key-concepts). The Corti Agentic Framework uses a set of core concepts that define how Corti agents, tools, and external systems interact. Understanding these building blocks is essential for developing on the Corti platform and for integrating your own systems using the A2A Protocol. 
## Core Actors

At Corti, these actors typically map to concrete products and integrations:

* **User**: A clinician, contact-center agent, knowledge worker, or an automated service in your environment. The user initiates a request (for example, “summarize this consultation” or “triage this patient”) that requires assistance from one or more Corti-powered agents.
* **A2A Client (Client Agent)**: The application that calls Corti. This is your application/server. The client initiates communication using the A2A Protocol and orchestrates how results are used in your product.
* **A2A Server (Remote Agent)**: A Corti agent or agentic system that exposes an HTTP endpoint implementing the A2A Protocol. It receives requests from clients, processes tasks, and returns results or status updates.

## Fundamental Communication Elements

The following elements are fundamental to A2A communication and how Corti uses them:

* **Agent Card**: A JSON metadata document describing an agent's identity, capabilities, endpoint, skills, and authentication requirements. **Key Purpose:** Enables Corti and your applications to discover agents and understand how to call them securely and effectively.
* **Task**: A stateful unit of work initiated by an agent, with a unique ID and defined lifecycle. **Key Purpose:** Powers long‑running operations in Corti (for example, document generation or multi‑step workflows) and enables tracking and collaboration.
* **Message**: A single turn of communication between a client and an agent, containing content and a role ("user" or "agent"). **Key Purpose:** Carries instructions, clinical context, user questions, and agent responses between your application, Corti Assistant, and remote agents.
* **Part**: The fundamental content container (for example, TextPart, FilePart, DataPart) used within Messages and Artifacts. **Key Purpose:** Lets Corti exchange text, audio transcripts, structured JSON, and files in a consistent way across agents and tools.
* **Artifact**: A tangible output generated by an agent during a task (for example, a document, image, or structured data). **Key Purpose:** Represents concrete Corti results such as SOAP notes, call summaries, recommendations, or other structured outputs.
* **Context**: A server-generated identifier (`contextId`) that logically groups multiple related `Task` objects, providing context across a series of interactions. **Key Purpose:** Enables you to associate multiple tasks and agents with a single patient encounter, call, or workflow, ensuring continuity and proper scoping of shared knowledge throughout an interaction.

## Agent Cards in Corti

The Agent Card is a JSON document that serves as a digital business card for initial discovery and interaction setup. It provides essential metadata about an agent. Clients parse this information to determine if an agent is suitable for a given task, how to structure requests, and how to communicate securely. Key information includes identity, service endpoint (URL), A2A capabilities, authentication requirements, and a list of skills.

Within Corti, Agent Cards are how you:

* Discover first‑party Corti agents and their capabilities.
* Register and describe your own remote agents so Corti workflows can call them.
* Declare authentication and compliance requirements up front, before any PHI or sensitive data is exchanged.

## Messages and Parts in Corti

A message represents a single turn of communication between a client and an agent. It includes a role ("user" or "agent") and a unique `messageId`. It contains one or more Part objects, which are granular containers for the actual content.
This design allows A2A to be modality independent and lets Corti mix clinical text, transcripts, and structured data safely in a single exchange. The primary part kinds are:

* `TextPart`: Contains plain textual content, such as instructions, questions, or generated notes.
* `DataPart`: Carries structured JSON data. This is useful for clinical facts, workflow parameters, EHR identifiers, or any machine‑readable information you exchange with Corti.
* `FilePart`: Represents a file (for example, a PDF discharge letter or an audio recording). It can be transmitted either inline (Base64 encoded) or through a URI. It includes metadata like "name" and "mimeType". This is not yet fully supported.

## Artifacts in Corti

An artifact represents a tangible output or a concrete result generated by a remote agent during task processing. Unlike general messages, artifacts are the actual deliverables. An artifact has a unique `artifactId`, a human-readable name, and consists of one or more part objects. Artifacts are closely tied to the task lifecycle and can be streamed incrementally to the client.

In Corti, artifacts typically correspond to business outputs such as:

* Clinical notes (for example, SOAP notes, discharge summaries).
* Extracted clinical facts or coding suggestions.
* Generated documents, checklists, or other workflow‑specific artifacts.

## Agent response: Task or Message

The agent response can be a new `Task` (when the agent needs to perform a long-running operation) or a `Message` (when the agent can respond immediately). On the Corti platform this means:

* For quick operations (for example, a short completion or a classification), your agent often responds with a `Message`.
* For longer workflows (for example, generating a full clinical document, coordinating multiple tools, or waiting on downstream systems), your agent responds with a `Task` that you can monitor and later retrieve artifacts from.

# Experts

Source: https://docs.corti.ai/agentic/experts

Learn about Experts available for use with the AI Agent

An **Expert** is an LLM-powered capability that an AI agent can utilize. Experts are designed to complete small, discrete tasks efficiently, enabling the Orchestrator to compose complex workflows by chaining multiple experts together.

*Diagram showing where experts sit in the agentic framework flow*

## Expert Registry

Corti maintains a **registry of experts** that includes both first-party experts built by Corti and third-party integrations.

You can browse and discover available experts through the [Expert Registry API](/agentic/agents/list-registry-experts) endpoint, which returns information about all available experts including their capabilities, descriptions, and configuration requirements.

The registry includes experts for various healthcare use cases such as:

* Clinical reference lookups
* Medical coding
* Document generation
* Data extraction
* And more

## Bring Your Own Expert

You can create custom experts by exposing an MCP (Model Context Protocol) server. When you register your MCP server, Corti wraps it in a custom LLM agent with a system prompt that you can control.
This allows you to: * Integrate your own tools and data sources * Create domain-specific experts tailored to your workflows * Maintain control over the expert's behavior through custom system prompts * Leverage Corti's orchestration and memory management while using your own tools ### Expert Configuration When creating a custom expert, you provide configuration that includes: * **Expert metadata**: ID, name, and description * **System prompt**: Controls how the LLM agent behaves and reasons about tasks * **MCP server configuration**: Details about your MCP server including transport type, authorization, and connection details ```json Expert Configuration expandable theme={null} [ { "type": "expert", "id": "ecg_interpreter", "name": "ECG Interpreter", "description": "Interprets 12 lead ECGs.", "systemPrompt": "You are an expert ECG interpreter.", "mcpServers": [ { "id": "srv1", "name": "ECG API Svc", "transportType": "streamable_http", "authorizationType": "none", "url": "https://api.ecg.com/x" } ] } ] ``` ### MCP Server Requirements Your MCP server must: * Implement the [Model Context Protocol](https://modelcontextprotocol.io/) specification * Expose tools via the standard MCP `tools/list` and `tools/call` endpoints * Handle authentication Once registered, your custom expert becomes available to the Orchestrator and can be used alongside Corti's built-in experts in multi-expert workflows. ## Multi-Agent Composition This feature is coming soon. We're working on exposing A2A (Agent-to-Agent) endpoints that will allow you to attach multiple agents together, enabling more sophisticated multi-agent workflows. This will provide: * Direct agent-to-agent communication using the A2A protocol * Composition of complex workflows across multiple agents * Fine-grained control over agent interactions and data flow For now, the Orchestrator handles expert composition automatically. When A2A endpoints are available, you'll be able to build custom agent networks while still leveraging Corti's orchestration capabilities. ## Direct Expert Calls This feature is coming soon. We're also working on enabling direct calls to experts, allowing you to use them directly in your workflows rather than only through agents. This will provide: * Direct API access to individual experts * Integration of experts into custom workflows * More flexible composition patterns beyond agent-based orchestration **While AI chat is a useful mechanism, it's not the only option!** The Corti Agentic Framework is API-first, enabling synchronous or async usage across a range of modalities: scheduled batch jobs, clinical event triggers, UI widgets, and direct EHR system calls. [Let us know](https://help.corti.app) what types of use cases you're exploring, from doctor-facing chat bots to system-facing automation backends. Please [contact us](https://help.corti.app/) if you need more information about Experts or creating custom experts in the Corti Agentic Framework. # Amboss Researcher Source: https://docs.corti.ai/agentic/experts/amboss-researcher Learn about how the Amboss Researcher expert works Agent framework and tools are currently `under development`. API details subject to change ahead of general release. The **Amboss Researcher** expert provides access to Amboss's comprehensive medical knowledge base, enabling AI agents to retrieve evidence-based clinical information, medical concepts, and educational content. 
Amboss Researcher is particularly useful for clinical decision support, medical education, and accessing up-to-date medical knowledge during patient care workflows. ## Capabilities The Amboss Researcher expert can: * Search and retrieve medical concepts and clinical information * Access evidence-based medical content * Provide structured medical knowledge for clinical workflows * Support medical education and training scenarios ## Use Cases * Clinical decision support during patient consultations * Medical education and training * Quick reference lookups for medical concepts * Evidence-based practice support ## Detailed information The Amboss Researcher expert integrates with Amboss's medical knowledge platform to provide reliable, evidence-based medical information. When invoked by an AI agent, it can search Amboss's database and return structured medical content that can be used to inform clinical decisions or provide educational context. # ClinicalTrials.gov Source: https://docs.corti.ai/agentic/experts/clinicaltrials-gov Learn about how the ClinicalTrials.gov expert works Agent framework and tools are currently `under development`. API details subject to change ahead of general release. The **ClinicalTrials.gov** expert enables AI agents to search and retrieve information from ClinicalTrials.gov, the U.S. National Library of Medicine's database of privately and publicly funded clinical studies. ClinicalTrials.gov is the primary resource for finding ongoing and completed clinical trials, helping connect patients with research opportunities. ## Capabilities The ClinicalTrials.gov expert can: * Search ClinicalTrials.gov's database of clinical studies * Retrieve trial information including eligibility criteria, locations, and status * Find relevant clinical trials based on medical conditions or interventions * Access trial protocols and study details ## Use Cases * Finding relevant clinical trials for patients * Research study discovery * Accessing trial protocols and eligibility criteria * Clinical research support ## Detailed information The ClinicalTrials.gov expert integrates with the ClinicalTrials.gov database, which contains information about clinical studies conducted around the world. When invoked by an AI agent, it can search for relevant trials based on medical conditions, interventions, or other criteria, helping healthcare providers identify research opportunities for their patients and access detailed trial information. # DrugBank Source: https://docs.corti.ai/agentic/experts/drugbank Learn about how the DrugBank expert works Agent framework and tools are currently `under development`. API details subject to change ahead of general release. The **DrugBank** expert provides AI agents with access to DrugBank, a comprehensive database containing detailed drug and drug target information. DrugBank is an essential resource for drug information, interactions, pharmacology, and medication-related queries. 
## Capabilities

The DrugBank expert can:

* Search DrugBank's database for drug information
* Retrieve drug interactions, contraindications, and warnings
* Access pharmacological data and drug properties
* Find medication-related information and dosing guidelines

## Use Cases

* Drug interaction checking
* Medication information lookups
* Pharmacological research
* Clinical decision support for prescribing

## Detailed information

The DrugBank expert integrates with DrugBank's comprehensive pharmaceutical knowledge base, which contains detailed information about drugs, their mechanisms of action, interactions, pharmacokinetics, and pharmacodynamics. When invoked by an AI agent, it can retrieve critical medication information to support safe prescribing practices and clinical decision-making.

# Medical Calculator

Source: https://docs.corti.ai/agentic/experts/medical-calculator

Learn about how the Medical Calculator expert works

Agent framework and tools are currently `under development`. API details subject to change ahead of general release.

The **Medical Calculator** expert enables AI agents to perform medical calculations, including clinical scores, dosing calculations, risk assessments, and other healthcare-related computations.

Medical Calculator ensures accurate clinical calculations, reducing the risk of manual calculation errors in critical healthcare scenarios.

## Capabilities

The Medical Calculator expert can:

* Perform clinical scoring calculations (e.g., CHA2DS2-VASc, APACHE, etc.)
* Calculate medication dosages based on patient parameters
* Compute risk assessments and probability scores
* Execute various medical formulas and algorithms

## Use Cases

* Clinical risk scoring and assessment
* Medication dosing calculations
* Laboratory value interpretations
* Clinical decision support calculations

## Detailed information

The Medical Calculator expert provides a comprehensive set of medical calculation capabilities, including clinical scores, dosing formulas, risk assessments, and other healthcare computations. It ensures accuracy and consistency in calculations that are critical for patient care, reducing the risk of errors that can occur with manual calculations.

# Medical Coding

Source: https://docs.corti.ai/agentic/experts/medical-coding

Learn about how the Medical Coding expert works

Agent framework and tools are currently `under development`. API details subject to change ahead of general release.

The **Medical Coding** expert provides AI agents with the ability to assign appropriate medical codes (such as ICD-10, CPT, or other coding systems) based on clinical documentation and patient information.

Medical Coding is essential for billing, claims processing, and maintaining accurate medical records that comply with healthcare coding standards.

## Capabilities

The Medical Coding expert can:

* Assign appropriate medical codes from various coding systems
* Analyze clinical documentation to identify codeable conditions
* Suggest codes based on diagnoses, procedures, and clinical findings
* Ensure compliance with coding standards and guidelines

## Use Cases

* Automated medical coding for billing and claims
* Clinical documentation coding assistance
* Code validation and verification
* Revenue cycle management support

## Detailed information

The Medical Coding expert analyzes clinical documentation, patient records, and medical narratives to identify and assign appropriate medical codes. It supports various coding systems including ICD-10, CPT, HCPCS, and others.
The expert can help ensure accurate coding, reduce manual coding errors, and improve efficiency in healthcare administrative workflows. # Posos Source: https://docs.corti.ai/agentic/experts/posos Learn about how the Posos expert works Agent framework and tools are currently `under development`. API details subject to change ahead of general release. The **Posos** expert provides AI agents with access to Posos's medical knowledge platform, enabling retrieval of clinical information and medical reference data. Posos offers comprehensive medical reference information that supports clinical decision-making and medical education. ## Capabilities The Posos expert can: * Access Posos's medical knowledge database * Retrieve clinical reference information * Provide medical content and educational materials * Support clinical workflows with authoritative medical data ## Use Cases * Clinical reference lookups * Medical information retrieval * Supporting clinical decision-making * Medical education and training ## Detailed information The Posos expert integrates with Posos's medical knowledge platform to provide access to their comprehensive database of medical information. When invoked by an AI agent, it can search and retrieve relevant clinical reference data, medical content, and educational materials that support healthcare workflows and clinical decision-making processes. # PubMed Source: https://docs.corti.ai/agentic/experts/pubmed Learn about how the PubMed expert works Agent framework and tools are currently `under development`. API details subject to change ahead of general release. The **PubMed** expert provides AI agents with access to PubMed, the comprehensive database of biomedical literature maintained by the National Library of Medicine. PubMed is the go-to resource for accessing peer-reviewed medical research, clinical studies, and scientific publications. ## Capabilities The PubMed expert can: * Search PubMed's database of biomedical literature * Retrieve research papers, clinical studies, and scientific articles * Access abstracts and metadata for publications * Find relevant research based on medical queries ## Use Cases * Literature reviews and research * Evidence-based practice support * Finding relevant clinical studies * Accessing peer-reviewed medical research ## Detailed information The PubMed expert integrates with PubMed's extensive database, which contains millions of citations from biomedical literature, life science journals, and online books. When invoked by an AI agent, it can search for relevant research papers, clinical studies, and scientific publications, providing access to the latest evidence-based medical research to support clinical decision-making. # Questionnaire Interviewing Expert Source: https://docs.corti.ai/agentic/experts/questionnaire-interviewing Learn about how the Questionnaire Interviewing expert works Agent framework and tools are currently `under development`. API details subject to change ahead of general release. The **Questionnaire Interviewing** expert enables AI agents to conduct structured interviews and questionnaires, guiding conversations to collect specific information in a systematic manner. This expert is ideal for patient intake, clinical assessments, and any scenario where structured data collection is required. 
## Capabilities The Questionnaire Interviewing expert can: * Conduct structured interviews following predefined questionnaires * Guide conversations to collect specific information * Adapt questioning based on responses * Ensure comprehensive data collection ## Use Cases * Patient intake and history taking * Clinical assessments and screenings * Research data collection * Structured information gathering workflows ## Detailed information The Questionnaire Interviewing expert allows AI agents to conduct structured interviews by following predefined questionnaires or interview protocols. It can adapt its questioning based on patient responses, ensure all required information is collected, and maintain consistency across multiple interactions. This is particularly valuable for clinical workflows that require systematic data collection. # Thieme Source: https://docs.corti.ai/agentic/experts/thieme Learn about how the Thieme expert works Agent framework and tools are currently `under development`. API details subject to change ahead of general release. The **Thieme** expert provides access to Thieme's medical reference materials and educational content, enabling AI agents to retrieve authoritative medical information from Thieme's publications. Thieme is a trusted source for medical reference materials, textbooks, and educational content used by healthcare professionals worldwide. ## Capabilities The Thieme expert can: * Access Thieme's medical reference database * Retrieve information from medical textbooks and publications * Provide authoritative medical content * Support clinical reference and education workflows ## Use Cases * Clinical reference lookups * Medical education and training * Accessing authoritative medical information * Supporting evidence-based practice ## Detailed information The Thieme expert integrates with Thieme's medical knowledge platform, providing access to their extensive collection of medical reference materials, textbooks, and educational content. When invoked by an AI agent, it can search Thieme's database and return structured medical information that supports clinical decision-making and medical education. # Web Search Source: https://docs.corti.ai/agentic/experts/web-search Learn about how the Web Search expert works Agent framework and tools are currently `under development`. API details subject to change ahead of general release. The **Web Search** expert enables AI agents to perform real-time web searches, allowing them to access current information from the internet that may not be available in static knowledge bases. Web Search is essential for accessing the most up-to-date information, recent research, news, and dynamic content that changes frequently. ## Capabilities The Web Search expert can: * Perform real-time web searches across the internet * Retrieve current information and recent updates * Access news, research papers, and dynamic content * Provide context from multiple online sources ## Use Cases * Finding recent medical research and publications * Accessing current news and updates * Verifying information with multiple sources * Retrieving information not available in static databases ## Detailed information The Web Search expert connects AI agents to real-time web search capabilities, enabling them to query the internet and retrieve relevant information. This is particularly valuable for accessing information that changes frequently or is too recent to be included in static knowledge bases. 
The expert can aggregate results from multiple sources to provide comprehensive answers. # FAQ Source: https://docs.corti.ai/agentic/faq Frequently asked questions about the Corti Agentic Framework Common questions and answers to help you get the most out of the Corti Agentic Framework and the underlying A2A-based APIs. The **Orchestrator** is the central coordinator of the Corti Agentic Framework. It receives user requests, reasons about what needs to be done, and delegates work to specialized Experts. The Orchestrator doesn't perform specialized work itself—instead, it plans, selects appropriate Experts, and coordinates their activities to accomplish complex workflows. An **Expert** is a specialized sub-agent that performs domain-specific tasks. Experts are designed to complete small, discrete tasks efficiently, such as clinical reference lookups, medical coding, or document generation. The Orchestrator composes complex workflows by chaining multiple Experts together. In summary: the Orchestrator coordinates and delegates; Experts execute specialized work. For more details, see [Orchestrator](/agentic/orchestrator) and [Experts](/agentic/experts). **A2A (Agent-to-Agent)** is the protocol used for accessing the Corti API and for communication between agents. It's the standard protocol that your application uses to interact with Corti agents, send messages, receive tasks, and manage the agent lifecycle. A2A enables secure, framework-agnostic communication between autonomous AI agents. **MCP (Model Context Protocol)** is the way to connect additional Experts. When you create custom Experts by exposing an MCP server, Corti wraps it in a custom LLM agent. MCP handles agent-to-tool interactions, allowing Experts to interact with external systems and resources. In the Corti Agentic Framework: A2A handles agent-to-agent communication (including your API calls to Corti), while MCP handles agent-to-tool interactions for Expert integrations. For more information, see [A2A Protocol](/agentic/a2a-protocol) and [MCP Protocol](/agentic/mcp-protocol). The Corti agent typically returns **Tasks** rather than Messages. A Task represents a stateful unit of work with a unique ID and defined lifecycle, which is ideal for most operations in the Corti Agentic Framework. Tasks are used for: * Long-running operations (for example, generating a full clinical document) * Multi-step workflows that coordinate multiple Experts * Operations that may need to wait on downstream systems * Any work that benefits from tracking and monitoring Messages (with immediate responses) are less common and typically used only for very quick operations like simple classifications or completions that can be resolved immediately without any asynchronous processing. For more details, see [Core Concepts](/agentic/core-concepts). Use **`TextPart`** for messages that will be directly exposed to the Orchestrator and the LLM. TextPart content is immediately available for reasoning and response generation. Use **`DataPart`** for structured JSON data that will be stored in memory first and accessed through more indirect manipulation. DataPart content is automatically indexed and stored in the context's memory, enabling semantic retrieval when needed. DataPart is JSON-only and is useful for structured data like patient records, clinical facts, workflow parameters, or EHR identifiers. 
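As an illustration, here is a hedged sketch of a single message that pairs a `TextPart` with a `DataPart`. The envelope follows the `message:send` shape used in the Quickstart; the `kind: "data"` part shape and the patient payload are assumptions for illustration, not the authoritative schema:

```ts
// Hedged sketch: one message combining a TextPart (reasoned over directly)
// with a DataPart (indexed into context memory for semantic retrieval).
const message = {
  role: "user",
  parts: [
    // TextPart: immediately visible to the Orchestrator and the LLM.
    { kind: "text", text: "Draft a discharge summary for this encounter." },
    // DataPart: structured JSON stored in memory first (payload is illustrative).
    {
      kind: "data",
      data: {
        patientId: "example-patient-123",
        encounterType: "outpatient",
        knownAllergies: ["penicillin"],
      },
    },
  ],
  messageId: crypto.randomUUID(),
  kind: "message",
};
```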
You can combine both in a single message: use TextPart for the primary instruction or question, and DataPart to provide structured context that will be semantically retrieved when relevant. For more details, see [Core Concepts](/agentic/core-concepts) and [Context & Memory](/agentic/context-memory). Both `Message` and `Artifact` use the same underlying `Part` primitives, but they serve different roles: * **`Message` (with `role: "agent"`)** * Represents a **single turn of communication** from the agent to the client. * Best for ephemeral conversational output, intermediate reasoning, clarifications, or status updates. * Typically tied to a particular task step but not necessarily considered a durable business deliverable. * **`Artifact`** * Represents a **tangible, durable output** of a task (for example, a SOAP note, coding suggestions, a structured fact bundle, or a generated document). * Has its own `artifactId`, name/metadata, and lifecycle; can be streamed, versioned, and reused by later tasks. * Is what downstream systems, UIs, or audits usually consume as the final result. A useful mental model is: **Messages are how agents "talk"; Artifacts are what they "produce".** You might see several `agent` messages during a task (status, intermediate commentary), but only a small number of artifacts that represent the completed work. The Corti Agentic Framework provides automatic memory management through contexts. The `contextId` is always created on the server—send your first message without a `contextId`, and the server will return one in the response. Include that `contextId` in subsequent messages to maintain conversation history automatically. You can also pass additional context in each request using `DataPart` objects to include structured data, summaries, or other specific context alongside the automatic memory. For comprehensive guidance on context and memory management, see [Context & Memory](/agentic/context-memory). No, you cannot share data between different contexts. Contexts provide strict data isolation—data can **never** leak across contexts. Each `contextId` creates a completely isolated conversation scope where messages, tasks, artifacts, and any data within one context are completely inaccessible to agents working in a different context. This isolation ensures: * **Privacy and security**: Patient data from one encounter cannot accidentally be exposed to another encounter * **Data integrity**: Information from different workflows remains properly separated * **Compliance**: You can confidently scope sensitive data to specific contexts without risk of cross-contamination If you need to share information across contexts, you must explicitly pass it via `DataPart` objects in your messages—there is no automatic data sharing between contexts. For more details, see [Context & Memory](/agentic/context-memory). The current time-to-live (TTL) for context memory is **30 days**. After this period, the context and its associated memory are automatically cleaned up. For more information about context lifecycle and memory management, see [Context & Memory](/agentic/context-memory). The Orchestrator analyzes incoming requests and uses reasoning to determine which Expert(s) are needed to fulfill the task. It considers the nature of the request, the available Experts, and their capabilities. You can control Expert selection by writing additional system prompts, both in the Orchestrator configuration and in individual Expert configurations. 
System prompts guide how the Orchestrator reasons about task decomposition and Expert selection, and how Experts interpret and execute their assigned work. The Orchestrator can compose multiple Experts, calling them in sequence or in parallel as needed to accomplish complex workflows. For more information, see [Orchestrator](/agentic/orchestrator) and [Experts](/agentic/experts).

Please [contact us](https://help.corti.app/) if you need more information about the Corti Agentic Framework.

# Orchestrator

Source: https://docs.corti.ai/agentic/orchestrator

Learn about the Orchestration Agent at the center of the Agentic Framework

The **Orchestrator** is the central intelligence layer of the Corti Agentic Framework. It serves as the primary interface between users and the multi-agent system, coordinating the flow of conversations and tasks.

*Diagram: guardrails in the agentic framework*

## What the Orchestrator Does

The Orchestrator reasons about incoming requests and determines how to fulfill them by coordinating with specialized [Experts](/agentic/experts). Its core responsibilities include:

* **Reasoning and planning**: Analyzes user requests and determines the necessary steps to complete them
* **Expert selection**: Decides which Expert(s) to call, in what order, and with what data
* **Task decomposition**: Breaks complex requests into discrete tasks that can be handled by individual Experts
* **Response generation**: Aggregates results from Experts and typically generates the final response to the user
* **Context management**: Has full access to the [context](/agentic/context-memory), while Experts typically only have scoped access to relevant portions
* **Safety enforcement**: Enforces guardrails, type validation, and policy-driven constraints to ensure safe operation in clinical environments

The Orchestrator does not perform specialized work itself; instead, it delegates to appropriate Experts and coordinates their activities to accomplish complex workflows.

***

For more information about how the Orchestrator fits into the overall architecture, see [Architecture](/agentic/architecture). To understand how context and memory work, see [Context & Memory](/agentic/context-memory).

Please [contact us](https://help.corti.app/) if you need more information about the Orchestrator in the Corti Agentic Framework.

# Overview of the Corti Agentic Framework

Source: https://docs.corti.ai/agentic/overview

AI for every healthcare app

The Corti Agentic Framework is a modular artificial intelligence system that lets software developers build advanced AI agents for high-quality clinical and operational tasks, without spending months on complex architecture work. It is designed to support use cases across the healthcare spectrum, from chat-based assistants for doctors to automating EHR data entry and powering clinical decision support workflows.

## What Problems It Solves

Modern LLMs are powerful, but on their own they are insufficient and unsafe for clinical use. The Corti Agent Platform addresses two fundamental gaps:

### 1. LLMs Do Not Have Reliable Access to Clinical Data

LLMs cannot be trusted to rely on internal knowledge alone. In healthcare, responses must be grounded in clinically validated reference sources, real-time patient and system data, and customer-owned systems and APIs. Without access to these sources at runtime, models are forced to infer or guess, which is unacceptable in clinical settings.
The Corti Agentic Framework addresses this by enabling agents to retrieve information directly from trusted external tools as part of their reasoning process. Instead of hallucinating answers, agents are designed to look things up, verify context, and base their outputs on authoritative data.

### 2. LLMs Cannot Safely Act on the World

Clinical workflows require more than generating text. They involve interacting with real systems: querying EHRs, drafting and updating documentation, preparing prescriptions, and triggering downstream processes. The framework provides a controlled execution layer that allows agents to plan actions, invoke tools, and coordinate multi-step workflows while remaining within clearly defined safety boundaries. Where necessary, agents can pause execution, request human approval, and resume only once explicit consent is given. This ensures that automation enhances clinical workflows without bypassing governance or control.

***

## What You Can Build With It

Using the Corti Agent Platform, teams can build:

* **Clinician-facing assistants**
  * Documentation editing
  * Guideline and reference lookup
  * Coding and administrative support
* **Programmatic agent endpoints**
  * Embedded into existing clinical software
  * Triggered by events, APIs, or workflows
* **Customer-embedded agents**
  * Customers bring their own tools and systems
  * Agents combine Corti, third-party, and customer capabilities

All of these share the same underlying agent runtime, safety model, and orchestration layer.

***

## Built for Healthcare by Design

Healthcare is not a general-purpose domain, and this platform reflects that reality.

**Key design principles include:**

* Typed inputs and outputs, explicit tool schemas, and guardrails around action-taking ensure safe operation in clinical environments.
* Every decision and tool call is observable, with replayable traces and structured logs for transparency, compliance, and quality assurance.
* Fine-tuned reasoning layers are optimized for healthcare language, workflows, and compliance needs.
* The Corti Agentic Framework uses a state-of-the-art multi-agent architecture to enable greater scale, accuracy, and resilience in AI-driven workflows.
* Maintain persistent, context-aware conversations and manage multiple active contexts (threads) without losing information throughout the session.
* Access a library of prebuilt Experts: specialized agents that connect to data sources, tools, and services to execute clinical and operational tasks.
* Plug directly into EHRs, clinical decision support systems, and medical knowledge bases with minimal setup.
* Pass relevant context with each query, including structured data formats like FHIR resources, enabling Experts to work with rich, domain-specific information.

***

## Who It’s For

The Corti Agent Platform is built for teams working on healthcare software. It is intended for:

* **Healthcare software companies** embedding intelligent automation directly into their products
* **Enterprise customers** building internal, AI-powered clinical workflows
* **Advanced engineering teams** that need flexibility, control, and strong safety guarantees without building bespoke agent infrastructure from scratch

The platform is not limited to simple prompt-based chatbots. It is designed to make it easy to go from demo to **production-grade clinical AI systems** that operate safely in real-world healthcare environments.

***

## Agents vs. Workflows
Understanding the difference between agents and workflows helps you choose the right approach for your use case:

**Agents** are autonomous systems that can think, reason, and adapt to new situations. They use AI to understand context, make decisions dynamically, and take actions based on the task at hand, even when encountering scenarios they haven't seen before. Like a chef who can create a meal based on what's available, agents excel at handling unpredictable, open-ended tasks that require flexibility and judgment.

**Workflows** are structured, step-by-step processes that follow predefined paths. They execute tasks in a fixed order, like following a recipe or checklist. Workflows are ideal for repeatable processes that require consistency and compliance, such as automated approval processes or scheduled maintenance tasks. For workflow-oriented needs, you can leverage our toolkit of other APIs to orchestrate well-defined, repeatable flows throughout your solution.

In the Corti Agentic Framework, agents leverage the Orchestrator to compose Experts dynamically, adapting their approach based on the situation. Workflows, on the other hand, provide deterministic execution paths for tasks with well-defined steps and requirements, supported by our robust library of workflow APIs and integrations.

Please [contact us](https://help.corti.app/) if you need more information about the Corti Agentic Framework.

# Quickstart & First Build

Source: https://docs.corti.ai/agentic/quickstart

Get started with the Corti Agentic Framework

This guide will walk you through setting up your first agent and getting it running end-to-end. After completing this quickstart, you'll have a working agent and know where to go next based on your use case.

### Prerequisites

* API access credentials
* Development environment set up
* Basic understanding of REST APIs

Start by creating a project in the Corti console. This gives you a workspace and access to manage your clients and credentials. If you haven't set up authentication before, follow the Creating Clients and authentication quickstart guides.

Use the Corti Agentic API to create your first agent. You'll need an access token (obtained using your client credentials) and your tenant name.

```js JavaScript theme={null}
// `client` is an authenticated Corti API client instance (see SDKs & Integrations below).
const myAgent = await client.agents.create({
  name: "My First Agent",
  description: "A simple agent to get started with the Corti Agentic Framework"
});
```

```py Python expandable theme={null}
import requests

BASE_URL = "https://api.<environment>.corti.app/v2"  # e.g. "https://api.eu.corti.app/v2"
HEADERS = {
    "Authorization": "Bearer <ACCESS_TOKEN>",
    "Tenant-Name": "<TENANT_NAME>",  # e.g. "base"
    "Content-Type": "application/json",
}

def create_agent():
    """Create a new agent using the Corti Agentic API."""
    url = f"{BASE_URL}/agents"
    payload = {
        "name": "My First Agent",
        "description": "A simple agent to get started with the Corti Agentic Framework",
    }
    response = requests.post(url, json=payload, headers=HEADERS)
    response.raise_for_status()
    return response.json()
```

Use your stored credentials to authenticate, then run your agent end-to-end and verify it processes input and returns the expected outputs.

```js JavaScript theme={null}
const agentResponse = await client.agents.messageSend(myAgent.id, {
  message: {
    role: "user",
    parts: [{ kind: "text", text: "Hello there. This is my first message." }],
    messageId: crypto.randomUUID(),
    kind: "message"
  }
});

console.log(agentResponse.task.status.message.parts[0].text);
```

```py Python expandable theme={null}
import uuid
import requests

BASE_URL = "https://api.<environment>.corti.app/v2"  # e.g. "https://api.eu.corti.app/v2"
HEADERS = {
    "Authorization": "Bearer <ACCESS_TOKEN>",
    "Tenant-Name": "<TENANT_NAME>",  # e.g. "base"
    "Content-Type": "application/json",
}

def send_message(agent_id: str):
    """Send a message to an existing agent and return the task response."""
    url = f"{BASE_URL}/agents/{agent_id}/v1/message:send"
    payload = {
        "message": {
            "role": "user",
            "parts": [
                {
                    "kind": "text",
                    "text": "Hello there. This is my first message.",
                }
            ],
            "messageId": str(uuid.uuid4()),
            "kind": "message",
        }
    }
    response = requests.post(url, json=payload, headers=HEADERS)
    response.raise_for_status()
    task_response = response.json()
    # Print the task status message text
    # (equivalent to agentResponse.task.status.message.parts[0].text)
    print(task_response["task"]["status"]["message"]["parts"][0]["text"])
    return task_response

# Assuming you created the agent in the previous step:
# my_agent = create_agent()
# send_message(my_agent["id"])
```

### Next Steps

Depending on your use case:

* **Building custom agents**: See [Core Concepts](/agentic/core-concepts)
* **Integrating with existing systems**: See [SDKs & Integrations](/agentic/sdks-integrations)
* **Understanding the architecture**: See [Architecture Overview](/agentic/architecture)
* **Working with protocols**: See [A2A Protocol](/agentic/a2a-protocol) and [MCP Protocol](/agentic/mcp-protocol)

Please [contact us](https://help.corti.app/) if you need more information about the Corti Agentic Framework.

# SDKs & Integrations

Source: https://docs.corti.ai/agentic/sdks-integrations

Official SDKs and integration options for the Corti Agentic Framework

The Corti Agentic Framework provides official SDKs and supports community integrations to help you build quickly.

## Official Corti SDK

Official Corti Agentic SDK for Node and browser environments.
* `@corti/sdk` on npm

## Official A2A Project SDKs

* **a2a-python** (Stable): Build A2A-compliant agents and servers in Python.
* **a2a-js** (Stable): Official JavaScript/TypeScript SDK for A2A.
* **a2a-java** (Stable): Build A2A-compliant agents and services in Java.
* **a2a-go** (Stable): Implement A2A agents and servers in Go.
* **a2a-dotnet** (Stable): Build A2A-compatible agents in .NET ecosystems.
## Other libraries

* **shadcn/ui component library for chatbots**: [`ai-elements` on npm](https://www.npmjs.com/package/ai-elements)
* **A2A Inspector**: [a2a-inspector on GitHub](https://github.com/a2aproject/a2a-inspector)
* **Awesome A2A**: [awesome-a2a on GitHub](https://github.com/ai-boost/awesome-a2a)

# Authenticate user and get access token

Source: https://docs.corti.ai/api-reference/admin/auth/authenticate-user-and-get-access-token api-reference/admin/admin-openapi.yml post /auth/token

Authenticate using email and password to receive an access token. This `Admin API` is separate from the `Corti API` used for speech recognition, text generation, and agentic workflows:

* The `Admin API` uses email-and-password authentication to obtain a bearer token via `/auth/token`. This token is used only for API administration.

Please [contact us](https://help.corti.app) if you have interest in this functionality or further questions.
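For orientation, here is a minimal sketch of the token flow in TypeScript. The endpoints and the `accessToken` response field follow the Admin API reference; the project ID and credentials are placeholders:

```ts
// Minimal sketch (TypeScript, fetch): authenticate, then list customers.
// Assumes a project with Embedded Assistant enabled; <projectId> is a placeholder.
const base = "https://api.console.corti.app/functions/v1/public";

// 1. Exchange your Console email and password for a bearer token.
const tokenRes = await fetch(`${base}/auth/token`, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ email: "admin@example.com", password: "your-password" }),
});
const { accessToken } = await tokenRes.json();

// 2. Use the token on subsequent Admin API calls.
const customersRes = await fetch(`${base}/projects/<projectId>/customers`, {
  headers: { Authorization: `Bearer ${accessToken}` },
});
console.log(await customersRes.json());
```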
# Create a new customer Source: https://docs.corti.ai/api-reference/admin/customers/create-a-new-customer api-reference/admin/admin-openapi.yml post /projects/{projectId}/customers # Delete a customer Source: https://docs.corti.ai/api-reference/admin/customers/delete-a-customer api-reference/admin/admin-openapi.yml delete /projects/{projectId}/customers/{customerId} # Get quotas for a customer Source: https://docs.corti.ai/api-reference/admin/customers/get-quotas-for-a-customer api-reference/admin/admin-openapi.yml get /projects/{projectId}/customers/{customerId}/quotas # List customers for a project Source: https://docs.corti.ai/api-reference/admin/customers/list-customers-for-a-project api-reference/admin/admin-openapi.yml get /projects/{projectId}/customers # Update a customer Source: https://docs.corti.ai/api-reference/admin/customers/update-a-customer api-reference/admin/admin-openapi.yml patch /projects/{projectId}/customers/{customerId} Update specific fields of a customer. Only provided fields will be updated. # Get quotas for all tenants within a project Source: https://docs.corti.ai/api-reference/admin/projects/get-quotas-for-all-tenants-within-a-project api-reference/admin/admin-openapi.yml get /projects/{projectId}/quotas # Create a new user and add it to the customer Source: https://docs.corti.ai/api-reference/admin/users/create-a-new-user-and-add-it-to-the-customer api-reference/admin/admin-openapi.yml post /projects/{projectId}/customers/{customerId}/users # Delete a user Source: https://docs.corti.ai/api-reference/admin/users/delete-a-user api-reference/admin/admin-openapi.yml delete /projects/{projectId}/customers/{customerId}/users/{userId} # List users for a customer Source: https://docs.corti.ai/api-reference/admin/users/list-users-for-a-customer api-reference/admin/admin-openapi.yml get /projects/{projectId}/customers/{customerId}/users # Update a user Source: https://docs.corti.ai/api-reference/admin/users/update-a-user api-reference/admin/admin-openapi.yml patch /projects/{projectId}/customers/{customerId}/users/{userId} # Generate Codes Source: https://docs.corti.ai/api-reference/codes/generate-codes api-reference/auto-generated-openapi.yml post /interactions/{id}/codes/ `Limited Access - Contact us for more information`

Generate codes within the context of an interaction.
This endpoint is only accessible within specific customer tenants. It is not available in the public API.

For stateless code prediction based on input text string or documentId, please refer to the [Predict Codes](/api-reference/codes/predict-codes) API, or [contact us](https://help.corti.app) for more information.
# List Codes Source: https://docs.corti.ai/api-reference/codes/list-codes api-reference/auto-generated-openapi.yml get /interactions/{id}/codes/ `Limited Access - Contact us for more information`

List predicted codes within the context of an interaction.
This endpoint is only accessible within specific customer tenants. It is not available in the public API.

For stateless code prediction based on input text string or documentId, please refer to the [Predict Codes](/api-reference/codes/predict-codes) API, or [contact us](https://help.corti.app) for more information.
# Predict Codes Source: https://docs.corti.ai/api-reference/codes/predict-codes api-reference/auto-generated-openapi.yml post /tools/coding/ Predict medical codes from provided context.
This is a stateless endpoint, designed to predict ICD-10-CM, ICD-10-PCS, and CPT codes based on an input text string or a documentId.

More than one code system may be defined in a single request, and the maximum number of codes to return per system can also be defined.

Code prediction requests have two possible values for context:
- `text`: One set of code prediction results will be returned based on all input text defined.
- `documentId`: Code prediction will be based on that defined document only.

The response includes two sets of results:
- `Codes`: Highest-confidence bundle of codes, as selected by the code prediction model
- `Candidates`: Full list of candidate codes as predicted by the model, rank-sorted by model confidence, with a maximum of 50 candidates

All predicted code results are based on input context defined in the request only (not other external data or assets associated with an interaction).
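To make the request shape concrete, here is a hedged sketch of a stateless prediction call. The path comes from the reference above; the base URL and headers mirror the agents Quickstart, and the body field names (`context`, `codeSystems`, `maxCodes`) are illustrative assumptions rather than the authoritative schema:

```ts
// Hedged sketch (TypeScript, fetch) of a stateless code prediction request.
// The exact body schema is defined in the API reference; the field names below
// are illustrative assumptions.
const res = await fetch("https://api.eu.corti.app/v2/tools/coding/", {
  method: "POST",
  headers: {
    Authorization: "Bearer <ACCESS_TOKEN>",
    "Tenant-Name": "<TENANT_NAME>",
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    // A text context: one set of results is returned for all input text.
    context: [{ type: "text", data: "Patient presents with fever and cough." }],
    // More than one code system may be requested, with a per-system cap.
    codeSystems: ["ICD-10-CM", "CPT"],
    maxCodes: 5,
  }),
});
// Response names are also illustrative: a highest-confidence bundle plus ranked candidates.
const { codes, candidates } = await res.json();
console.log(codes, candidates);
```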
# Select Codes Source: https://docs.corti.ai/api-reference/codes/select-codes api-reference/auto-generated-openapi.yml put /interactions/{id}/codes/ `Limited Access - Contact us for more information`

Select predicted codes within the context of an interaction.
This endpoint is only accessible within specific customer tenants. It is not available in the public API.

For stateless code prediction based on input text string or documentId, please refer to the [Predict Codes](/api-reference/codes/predict-codes) API, or [contact us](https://help.corti.app) for more information.
# Delete Document Source: https://docs.corti.ai/api-reference/documents/delete-document api-reference/auto-generated-openapi.yml delete /interactions/{id}/documents/{documentId} # Generate Document Source: https://docs.corti.ai/api-reference/documents/generate-document api-reference/auto-generated-openapi.yml post /interactions/{id}/documents/ Generate Document. # Get Document Source: https://docs.corti.ai/api-reference/documents/get-document api-reference/auto-generated-openapi.yml get /interactions/{id}/documents/{documentId} Get Document. # List Documents Source: https://docs.corti.ai/api-reference/documents/list-documents api-reference/auto-generated-openapi.yml get /interactions/{id}/documents/ List Documents # Update Document Source: https://docs.corti.ai/api-reference/documents/update-document api-reference/auto-generated-openapi.yml patch /interactions/{id}/documents/{documentId} # Add Facts Source: https://docs.corti.ai/api-reference/facts/add-facts api-reference/auto-generated-openapi.yml post /interactions/{id}/facts/ Adds new facts to an interaction. # Extract Facts Source: https://docs.corti.ai/api-reference/facts/extract-facts api-reference/auto-generated-openapi.yml post /tools/extract-facts Extract facts from provided text, without storing them. # List Fact Groups Source: https://docs.corti.ai/api-reference/facts/list-fact-groups api-reference/auto-generated-openapi.yml get /factgroups/ Returns a list of available fact groups, used to categorize facts associated with an interaction. # List Facts Source: https://docs.corti.ai/api-reference/facts/list-facts api-reference/auto-generated-openapi.yml get /interactions/{id}/facts/ Retrieves a list of facts for a given interaction. # Update Fact Source: https://docs.corti.ai/api-reference/facts/update-fact api-reference/auto-generated-openapi.yml patch /interactions/{id}/facts/{factId} Updates an existing fact associated with a specific interaction. # Update Facts Source: https://docs.corti.ai/api-reference/facts/update-facts api-reference/auto-generated-openapi.yml patch /interactions/{id}/facts/ Updates multiple facts associated with an interaction. # Create Interaction Source: https://docs.corti.ai/api-reference/interactions/create-interaction api-reference/auto-generated-openapi.yml post /interactions/ Creates a new interaction. # Delete Interaction Source: https://docs.corti.ai/api-reference/interactions/delete-interaction api-reference/auto-generated-openapi.yml delete /interactions/{id} Deletes an existing interaction. # Get Existing Interaction Source: https://docs.corti.ai/api-reference/interactions/get-existing-interaction api-reference/auto-generated-openapi.yml get /interactions/{id} Retrieves a previously recorded interaction by its unique identifier (interaction ID). # List All Interactions Source: https://docs.corti.ai/api-reference/interactions/list-all-interactions api-reference/auto-generated-openapi.yml get /interactions/ Lists all existing interactions. Results can be filtered by encounter status and patient identifier. # Update Interaction Source: https://docs.corti.ai/api-reference/interactions/update-interaction api-reference/auto-generated-openapi.yml patch /interactions/{id} Modifies an existing interaction by updating specific fields without overwriting the entire record. # Delete Recording Source: https://docs.corti.ai/api-reference/recordings/delete-recording api-reference/auto-generated-openapi.yml delete /interactions/{id}/recordings/{recordingId} Delete a specific recording for a given interaction. 
# Get Recording

Source: https://docs.corti.ai/api-reference/recordings/get-recording api-reference/auto-generated-openapi.yml get /interactions/{id}/recordings/{recordingId}

Retrieve a specific recording for a given interaction.

# List Recordings

Source: https://docs.corti.ai/api-reference/recordings/list-recordings api-reference/auto-generated-openapi.yml get /interactions/{id}/recordings/

Retrieve a list of recordings for a given interaction.

# Upload Recording

Source: https://docs.corti.ai/api-reference/recordings/upload-recording api-reference/auto-generated-openapi.yml post /interactions/{id}/recordings/

Upload a recording for a given interaction. Recordings are limited to a maximum of 60 minutes in length and 150MB in size.

# Real-time conversational transcript generation and fact extraction (FactsR™)

Source: https://docs.corti.ai/api-reference/stream

WebSocket Secure (WSS) API Documentation for /stream endpoint

## Overview

The WebSocket Secure (WSS) `/stream` API enables real-time, bidirectional communication with the Corti system for interaction streaming. Clients can send and receive structured data, including transcripts and fact updates. Learn more about [FactsR™ here](/textgen/factsr/). This documentation provides a structured guide for integrating the Corti WSS API for real-time interaction streaming.

This `/stream` endpoint supports real-time ambient documentation interactions and clinical decision support workflows.

* If you are looking for a stateless endpoint geared towards front-end dictation workflows, you should use the [/transcribe WSS](/api-reference/transcribe/)
* If you are looking for asynchronous ambient documentation interactions, then please refer to the [/documents endpoint](/api-reference/documents/generate-document/)

***

## 1. Establishing a Connection

Clients must initiate a WebSocket connection using the `wss://` scheme and provide a valid interaction ID in the URL. When you create an interaction, the 200 response provides a `websocketUrl` for that interaction, with the `tenant-name` already included as a URL parameter. In addition to the `tenant-name` parameter, authentication for the WSS stream requires a `token` parameter carrying the Bearer access token.

### Path Parameters

* `id`: unique interaction identifier

### Query Parameters

* environment (URL host): `eu` or `us`
* `tenant-name`: specifies the tenant context
* `token`: `Bearer \$token`

#### Using SDK

You can use the Corti SDK (currently in "beta") to connect to a stream endpoint.

```ts title="JavaScript (Beta)" theme={null}
import { CortiClient, CortiEnvironment } from "@corti/sdk";

const cortiClient = new CortiClient({
  tenantName: "YOUR_TENANT_NAME",
  environment: CortiEnvironment.Eu,
  auth: { accessToken: "YOUR_ACCESS_TOKEN" },
});

const streamSocket = await cortiClient.stream.connect({ id: "<interactionId>" });
```
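If you are not using the SDK, a minimal sketch of a manual connection looks like this. It assumes you already created an interaction and hold its `websocketUrl` (which already carries `tenant-name`), and it URL-encodes the Bearer token into the `token` parameter, matching the /transcribe curl example later on:

```ts
// Minimal non-SDK sketch: connect to the /stream WebSocket for an interaction.
const websocketUrl = "<websocketUrl from the create-interaction response>";
const accessToken = "<ACCESS_TOKEN>";

// Append the token parameter to the server-provided URL.
const url = `${websocketUrl}&token=${encodeURIComponent(`Bearer ${accessToken}`)}`;
const ws = new WebSocket(url);

ws.addEventListener("open", () => {
  // Send the stream configuration first and wait for CONFIG_ACCEPTED
  // before transmitting audio (see "Sending Messages" below).
});
```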
***

## 2. Handshake Responses

### 101 Switching Protocols

Indicates a successful WebSocket connection. Once connected, the server streams data in the following formats:

### Transcripts Data Streams

| Property | Type | Description |
| :--- | :--- | :--- |
| `type` | string | "transcript" |
| `data` | array of objects | Transcript segments |
| `data[].id` | string | Unique identifier for the transcript |
| `data[].transcript` | string | The transcribed text |
| `data[].final` | boolean | Indicates whether the transcript is finalized or interim |
| `data[].speakerId` | integer | Speaker identifier (-1 if diarization is off) |
| `data[].participant.channel` | integer | Audio channel number (e.g. 0 or 1) |
| `data[].time.start` | number | Start time of the transcript segment |
| `data[].time.end` | number | End time of the transcript segment |

```json theme={null}
{
  "type": "transcript",
  "data": [
    {
      "id": "UUID",
      "transcript": "Patient presents with fever and cough.",
      "final": true,
      "speakerId": -1,
      "participant": { "channel": 0 },
      "time": { "start": 1.71, "end": 11.296 }
    }
  ]
}
```

### Facts Data Streams

| Property | Type | Description |
| :--- | :--- | :--- |
| `type` | string | "facts" |
| `fact` | array of objects | Fact objects |
| `fact[].id` | string | Unique identifier for the fact |
| `fact[].text` | string | Text description of the fact |
| `fact[].group` | string | Categorization of the fact (e.g., "medical-history") |
| `fact[].groupId` | string | Unique identifier for the group |
| `fact[].isDiscarded` | boolean | Indicates if the fact was discarded |
| `fact[].source` | string | Source of the fact (e.g., "core") |
| `fact[].createdAt` | string (date-time) | Timestamp when the fact was created |
| `fact[].updatedAt` | string or null (date-time) | Timestamp when the fact was last updated |

```json theme={null}
{
  "type": "facts",
  "fact": [
    {
      "id": "UUID",
      "text": "Patient has a history of hypertension.",
      "group": "medical-history",
      "groupId": "UUID",
      "isDiscarded": false,
      "source": "core",
      "createdAt": "2024-02-28T12:34:56Z",
      "updatedAt": "2024-02-28T12:35:56Z"
    }
  ]
}
```

By default, incoming audio and returned data streams are persisted on the server, associated with the interactionId. You may query the interaction to retrieve the stored `recordings`, `transcripts`, and `facts` via the relevant REST endpoints. Audio recordings are saved in .webm format; transcripts and facts as JSON objects. Data persistence can be disabled by Corti upon request when needed to support compliance with your applicable regulations and data handling preferences.

#### Using SDK

You can use the Corti SDK (currently in "beta") to subscribe to stream messages.

```ts title="JavaScript (Beta)" theme={null}
streamSocket.on("message", (message) => {
  // Distinguish message types
  switch (message.type) {
    case "transcript":
      // Handle transcript message
      console.log("Transcript:", message);
      break;
    case "facts":
      // Handle facts message
      console.log("Facts:", message);
      break;
    case "error":
      // Handle error message
      console.error("Error:", message);
      break;
    default:
      // Handle other message types
      console.log("Other message:", message);
  }
});

streamSocket.on("error", (error) => {
  // Handle error
  console.error(error);
});

streamSocket.on("close", () => {
  // Handle socket close
  console.log("Stream closed");
});
```
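Since streamed data is persisted by default, here is a hedged sketch of retrieving it afterwards over REST. It uses the list endpoints documented above (`/interactions/{id}/recordings/` and `/interactions/{id}/facts/`); the base URL and headers are assumed to match the Quickstart conventions:

```ts
// Hedged sketch: after a stream session ends, fetch the persisted artifacts.
const base = "https://api.eu.corti.app/v2"; // environment-specific base URL (assumption)
const headers = {
  Authorization: "Bearer <ACCESS_TOKEN>",
  "Tenant-Name": "<TENANT_NAME>",
};

// Recordings and facts stored for the interaction (see the REST reference above).
const recordings = await (
  await fetch(`${base}/interactions/<interactionId>/recordings/`, { headers })
).json();
const facts = await (
  await fetch(`${base}/interactions/<interactionId>/facts/`, { headers })
).json();
console.log(recordings, facts);
```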
***

## 3. Sending Messages

Clients must send a stream configuration message and wait for a response of type `CONFIG_ACCEPTED` before transmitting other data. Once the server responds with `{"type": "CONFIG_ACCEPTED"}`, clients can proceed with sending audio or controlling the stream status.

### Stream Configuration

| Property | Type | Required | Description |
| :--- | :--- | :--- | :--- |
| `type` | string | Yes | "config" |
| `configuration` | object | Yes | Configuration settings |
| `configuration.transcription.primaryLanguage` | string (enum) | Yes | Primary spoken language for transcription |
| `configuration.transcription.isDiarization` | boolean | No - `false` | Enable speaker diarization |
| `configuration.transcription.isMultichannel` | boolean | No - `false` | Enable multi-channel audio processing |
| `configuration.transcription.participants` | array | Yes | List of participants with roles assigned to a channel |
| `configuration.transcription.participants[].channel` | integer | Yes | Audio channel number (e.g. 0 or 1) |
| `configuration.transcription.participants[].role` | string (enum) | Yes | "doctor", "patient", or "multiple" |
| `configuration.mode.type` | string (enum) | Yes | "facts" or "transcription" |
| `configuration.mode.outputLocale` | string (enum) | No | Output language locale (required for `facts`) |

#### Example

```json wss:/stream configuration example theme={null}
{
  "type": "config",
  "configuration": {
    "transcription": {
      "primaryLanguage": "en",
      "isDiarization": false,
      "isMultichannel": false,
      "participants": [
        { "channel": 0, "role": "multiple" }
      ]
    },
    "mode": {
      "type": "facts",
      "outputLocale": "en"
    }
  }
}
```

#### Using SDK

You can use the Corti SDK (currently in "beta") to send the stream configuration. You can provide the configuration either directly when connecting, or send it as a separate message after establishing the connection:

```ts title="JavaScript (Beta, recommended)" theme={null}
const configuration = {
  transcription: {
    primaryLanguage: "en",
    isDiarization: false,
    isMultichannel: false,
    participants: [
      { channel: 0, role: "multiple" }
    ]
  },
  mode: {
    type: "facts",
    outputLocale: "en"
  }
};

const streamSocket = await cortiClient.stream.connect({ id: "<interactionId>", configuration });
```

```ts title="JavaScript (Beta, handle configuration manually)" theme={null}
const streamSocket = await cortiClient.stream.connect({ id: "<interactionId>" });

streamSocket.on("open", () => {
  streamSocket.sendConfiguration({ type: "config", configuration });
});
```

### Sending Audio Data

Ensure that your configuration was accepted before starting to send audio, and make sure your initial audio chunk is not too small, as it needs to contain the headers required to properly decode the audio. We recommend sending audio in chunks of 500ms; the buffering limit is 64000 bytes per chunk. Audio data should be sent as raw binary without JSON wrapping. For bandwidth and efficiency reasons, the webm/opus encoding is recommended; however, you can send a variety of common audio formats, as the audio you send first passes through a transcoder. Similarly, you do not need to specify a sample rate, bit depth, or other audio settings. See more details [here](/about/asr/audio).
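As a browser-side illustration of the 500ms guidance, here is a hedged sketch using the `MediaRecorder` API. The timeslice and the webm/opus mime type follow the recommendations above; the surrounding wiring is an assumption:

```ts
// Hedged sketch: capture microphone audio in ~500ms webm/opus chunks
// and forward each chunk to the stream socket as binary.
const media = await navigator.mediaDevices.getUserMedia({ audio: true });
const recorder = new MediaRecorder(media, { mimeType: "audio/webm;codecs=opus" });

recorder.ondataavailable = async (event) => {
  if (event.data.size > 0) {
    // sendAudio expects binary chunks (e.g., ArrayBuffer); the first chunk
    // emitted by MediaRecorder carries the webm headers mentioned above.
    streamSocket.sendAudio(await event.data.arrayBuffer());
  }
};

recorder.start(500); // emit a chunk roughly every 500ms
```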
### Channels, participants and speakers

In a typical **on-site setting** you should send mono-channel audio. If the microphone is a stereo microphone, set `isMultichannel: false` and the audio will be converted to mono-channel, ensuring no duplicate transcripts are returned. In a **virtual setting** such as telehealth, you would typically have the virtual audio on one channel from WebRTC and mix in the microphone of the local client on a separate channel. In this scenario, set `isMultichannel: true` and assign each channel the relevant participant role (e.g., if the doctor is on the local client and channel 0, then you can set the role for channel 0 to `doctor`).

**Diarization** is independent of audio channels and participant roles. If you want transcript segments to be assigned to automatically identified speakers, set `isDiarization: true`. If `false`, transcript segments will be returned with `speakerId: -1`. If set to `true`, diarization will try to identify speakers separately on each channel. The first identified speaker on each channel will have transcript segments with `speakerId: 0`, the second `speakerId: 1`, and so forth. Speaker IDs are not related or matched to participant roles.

#### Using SDK

You can use the Corti SDK (currently in "beta") to send audio data to the stream. To send audio, use the `sendAudio` method on the stream socket. Audio should be sent as binary chunks (e.g., ArrayBuffer):

```ts title="JavaScript (Beta)" theme={null}
streamSocket.sendAudio(chunk); // method doesn't do the chunking
```

### Flush the Audio Buffer

To flush the audio buffer, forcing transcript segments to be returned over the WebSocket (e.g., when turning off or muting the microphone for the patient to share something private, not to be recorded, during the conversation), send this message:

```json theme={null}
{ "type": "flush" }
```

The server will return text for audio sent before the `flush` message and then respond with:

```json theme={null}
{ "type": "flushed" }
```

The WebSocket will remain open so recording can continue. FactsR generation (i.e., when working in `configuration.mode: facts`) is not impacted by the `flush` event and will continue to process as normal.

Client-side considerations:

1. If you rely on a `flush` event to separate data (e.g., for different sections in an EHR template), be sure to receive the `flushed` event before moving on to the next data field.
2. When using a web browser `MediaRecorder` API, audio is buffered and only emitted at the configured timeslice interval. Therefore, *before* sending a `flush` message, call `MediaRecorder.requestData()` to force any remaining buffered audio on the client to be transmitted to the server. This ensures all audio reaches the server before the `flush` is processed.

***

## 4. Ending the Session

To end the `/stream` session, send this message:

```json theme={null}
{ "type": "end" }
```

This will signal the server to send any remaining transcript segments and facts (depending on `mode` configuration). Then, the server will send two messages:

```json theme={null}
{ "type": "usage", "credits": 0.1 }
```

```json theme={null}
{ "type": "ENDED" }
```

Following the message type `ENDED`, the server will close the WebSocket. You can reopen the WebSocket at any time by sending the configuration again.

#### Using SDK

You can use the Corti SDK (currently in "beta") to control the stream status. When using automatic configuration (passing configuration to connect), the socket will close itself without reconnecting when it receives an ENDED message.
When using manual configuration, the socket will attempt to reconnect after the server closes the connection. To prevent this, you must subscribe to the ENDED message and manually close the connection.

```ts title="JavaScript (Beta, recommended)" theme={null}
const streamSocket = await cortiClient.stream.connect({ id: "<interactionId>", configuration });

streamSocket.sendEnd({ type: "end" });

streamSocket.on("message", (message) => {
  if (message.type === "usage") {
    console.log("Usage:", message);
  }
  // message is received, but connection closes automatically
  if (message.type === "ENDED") {
    console.log("ENDED:", message);
  }
});
```

```ts title="JavaScript (Beta, manual configuration)" theme={null}
const streamSocket = await cortiClient.stream.connect({ id: "<interactionId>" });

streamSocket.sendEnd({ type: "end" });

streamSocket.on("message", (message) => {
  if (message.type === "usage") {
    console.log("Usage:", message);
  }
  if (message.type === "ENDED") {
    streamSocket.close(); // Prevents unwanted reconnection
  }
});
```

***

## 5. Error Handling

In case of an invalid or missing interaction ID, the server will return an error before opening the WebSocket. Once the WebSocket is open, you must commit the configuration within 15 seconds; otherwise the WebSocket will close again.

At the beginning of a WebSocket session, the following configuration-related messages can be returned:

```json theme={null}
{"type": "CONFIG_DENIED"} // in case the configuration is not valid
{"type": "CONFIG_MISSING"}
{"type": "CONFIG_NOT_PROVIDED"}
{"type": "CONFIG_ALREADY_RECEIVED"}
```

In addition, a reason will be supplied, e.g. `reason: language unavailable`.

Once configuration has been accepted and the session is running, you may encounter runtime or application-level errors. These are sent as JSON objects with the following structure:

```json theme={null}
{
  "type": "error",
  "error": {
    "id": "error id",
    "title": "error title",
    "status": 400,
    "details": "error details",
    "doc": "link to documentation"
  }
}
```

In some cases, receiving an "error" type message will cause the stream to end and send a message of type `usage` and type `ENDED`.

#### Using SDK

You can use the Corti SDK (currently in "beta") to handle error messages. With the recommended configuration, configuration errors (e.g., CONFIG\_DENIED, CONFIG\_MISSING, etc.) and runtime errors will both trigger the error event and automatically close the socket. You can also inspect the original message in the message handler. With manual configuration, configuration errors are only received as messages (not as error events), and you must close the socket manually to avoid reconnection.
```ts title="JavaScript (Beta, recommended)" theme={null} const streamSocket = await cortiClient.stream.connect({ id: "", configuration }); streamSocket.on("error", (error) => { // Emitted for both configuration and runtime errors console.error("Error event:", error); // The socket will close itself automatically }); // still can be accessed with normal "message" subscription streamSocket.on("message", (message) => { if ( message.type === "CONFIG_DENIED" || message.type === "CONFIG_MISSING" || message.type === "CONFIG_NOT_PROVIDED" || message.type === "CONFIG_ALREADY_RECEIVED" || message.type === "CONFIG_TIMEOUT" ) { console.log("Configuration error (message):", message); } if (message.type === "error") { console.log("Runtime error (message):", message); } }); ``` ```ts title="JavaScript (Beta, manual configuration)" theme={null} const streamSocket = await cortiClient.stream.connect({ id: "" }); streamSocket.on("message", (message) => { if ( message.type === "CONFIG_DENIED" || message.type === "CONFIG_MISSING" || message.type === "CONFIG_NOT_PROVIDED" || message.type === "CONFIG_ALREADY_RECEIVED" || message.type === "CONFIG_TIMEOUT" ) { console.error("Configuration error (message):", message); streamSocket.close(); // Must close manually to avoid reconnection } if (message.type === "error") { console.error("Runtime error (message):", message); streamSocket.close(); // Must close manually to avoid reconnection } }); ``` # Get Template Source: https://docs.corti.ai/api-reference/templates/get-template api-reference/auto-generated-openapi.yml get /templates/{key} Retrieves template by key. # List Template Sections Source: https://docs.corti.ai/api-reference/templates/list-template-sections api-reference/auto-generated-openapi.yml get /templateSections/ Retrieves a list of template sections with optional filters for organization and language. # List Templates Source: https://docs.corti.ai/api-reference/templates/list-templates api-reference/auto-generated-openapi.yml get /templates/ Retrieves a list of templates with optional filters for organization, language, and status. # Real-time stateless dictation Source: https://docs.corti.ai/api-reference/transcribe WebSocket Secure (WSS) API Documentation for /transcribe endpoint ## Overview The WebSocket Secure (WSS) `/transcribe` API enables real-time, bidirectional communication with the Corti system for stateless speech to text. Clients can send and receive structured data, including transcripts and detected commands. This documentation provides a comprehensive guide for integrating these capabilities. This `/transcribe` endpoint supports real-time stateless dictation. * If you are looking for real-time ambient documentation interactions, you should use the [/stream WSS](/api-reference/stream/) * If you are looking for transcript generation based on a pre-recorded audio file, then please refer to the [/transcripts endpoint](/api-reference/transcripts/create-transcript/) *** ## 1. Establishing a Connection Clients must initiate a WebSocket connection using the `wss://` scheme. The authentication for the WSS stream requires in addition to the `tenant-name` parameter a `token` parameter to pass in the Bearer access token. 
### Query Parameters

* environment (URL host): `eu` or `us`
* `tenant-name`: specifies the tenant context
* `token`: `Bearer \$token`

```bash Example wss:/transcribe request theme={null}
curl --request GET \
  --url "wss://api.${environment}.corti.app/audio-bridge/v2/transcribe?tenant-name=${tenant}&token=Bearer%20${accessToken}"
```

#### Using SDK

You can use the Corti SDK (currently in "beta") to connect to the /transcribe endpoint.

```ts title="JavaScript (Beta)" theme={null}
import { CortiClient, CortiEnvironment } from "@corti/sdk";

const cortiClient = new CortiClient({
  tenantName: "YOUR_TENANT_NAME",
  environment: CortiEnvironment.Eu,
  auth: { accessToken: "YOUR_ACCESS_TOKEN" },
});

const transcribeSocket = await cortiClient.transcribe.connect();
```

***

## 2. Handshake Response

### 101 Switching Protocols

Indicates a successful WebSocket connection. Upon successful connection, send a `config` message to define the configuration: specify the input language and expected output preferences. The config message must be sent within 10 seconds of the WebSocket being opened to prevent `CONFIG_TIMEOUT`, which will require establishing a new WSS connection.

***

## 3. Sending Messages

### Configuration

Declare your `/transcribe` configuration using the message `"type": "config"`, followed by the `"configuration": {...}` object. Defining the type is required, along with the `primaryLanguage` configuration parameter. The other parameters are optional, depending on your needs and workflow.

Configuration notes:

* Clients must send a stream configuration message and wait for a response of type `CONFIG_ACCEPTED` before transmitting other data.
* If the configuration is not valid, `CONFIG_DENIED` is returned.
* The configuration must be committed within 10 seconds of opening the WebSocket, or it will time out with `CONFIG_TIMEOUT`.

Configuration parameters:

* `primaryLanguage`: The locale of the primary spoken language. See supported language codes and more information [here](/about/languages).
* `spokenPunctuation`: When true, converts spoken punctuation such as `period` or `slash` into `.` or `/`. Read more about supported punctuation [here](/stt/punctuation).
* `automaticPunctuation`: When true, automatically punctuates and capitalizes in the final transcript. Spoken and Automatic Punctuation are mutually exclusive; only one should be set to true in a given configuration request. If both are included and set to `true`, then `spokenPunctuation` will take precedence and override `automaticPunctuation`.
* `commands`: Provide the commands that should be registered and detected. Read more about commands [here](/stt/commands).
  * `id`: Unique value to identify the command. This, along with the command phrase, will be returned by the API when the command is recognized during dictation.
  * `phrases`: One or more word sequence(s) that can be spoken to trigger the command. At least one phrase is required per command.
  * `variables`: Placeholders that can (optionally) be added in `phrases` to define multiple words that should trigger the command.
    * `key`: Defines the variable used in the command phrase.
    * `type`: The only variable type supported at this time is `enum`.
    * `enum`: List of values that should be recognized for the defined variable.
* `formatting`: Define each type of formatting preference using the `enum` options described below. Read more about formatting [here](/stt/formatting).

Formatting is currently `beta` status. API details subject to change ahead of general release. Defining formatting configuration is `optional`. When these preferences are not configured, the `default` values listed below will be applied automatically.
**Dates**

| Option | Format | Example |
| :--- | :--- | :--- |
| `as_dictated` | Preserve spoken phrasing | "February third twenty twenty five" -> "February 3rd 2025" |
| `long_text` | Long date (default) | "3 February 2025" |
| `eu_slash` | Short date (EU) | "03/02/2025" |
| `us_slash` | Short date (US) | "02/03/2025" |
| `iso_compact` | ISO (basic, no separators) | "20250203" |

**Times**

| Option | Format | Example |
| :--- | :--- | :--- |
| `as_dictated` | As dictated | "Four o'clock" or "four thirty five" or "sixteen hundred" |
| `h12` | 12-hour | "4:00 PM" |
| `h24` | 24-hour (default) | "16:00" |

**Numbers**

| Option | Format | Example |
| :--- | :--- | :--- |
| `as_dictated` | As dictated | "one, two, ... nine, ten, eleven, twelve" |
| `numerals_above_nine` | Single digit as words, multi-digit as number (default) | "One, two ... nine, 10, 11, 12" |
| `numerals` | Numbers only | "1, 2, ... 9, 10, 11, 12" |

**Measurements**

| Option | Format | Example |
| :--- | :--- | :--- |
| `as_dictated` | As dictated | "Millimeters, centimeters, inches; Blood pressure one twenty over eighty" |
| `abbreviated` | Abbreviated (default) | "mm, cm, in; BP 120/80" |

[Click here](/stt/formatting#list-of-supported-units) to see a full list of supported units. [Click here](/stt/formatting#list-of-supported-measurements) to see a full list of supported measurements.

**Numeric ranges**

| Option | Format | Example |
| :--- | :--- | :--- |
| `as_dictated` | As dictated | "one to ten" |
| `numerals` | As numbers (default) | "1-10" |

**Ordinals**

| Option | Format | Example |
| :--- | :--- | :--- |
| `as_dictated` | As dictated | "First, second, third" |
| `numerals` | Abbreviated (default) | "1st, 2nd, 3rd" |

#### Example

Here is an example configuration for transcription of dictated audio in English with spoken punctuation enabled, two commands defined, and (default) formatting options defined:

```json wss:/transcribe configuration example theme={null}
{
  "type": "config",
  "configuration": {
    "primaryLanguage": "en",
    "spokenPunctuation": true,
    "commands": [
      {
        "id": "next_section",
        "phrases": [
          "next section",
          "go to next section"
        ]
      },
      {
        "id": "insert_template",
        "phrases": [
          "insert my {template_name} template",
          "insert {template_name} template"
        ],
        "variables": [
          {
            "key": "template_name",
            "type": "enum",
            "enum": [
              "soap",
              "radiology",
              "referral"
            ]
          }
        ]
      }
    ],
    "formatting": {
      "dates": "long_text",
      "times": "h24",
      "numbers": "numerals_above_nine",
      "measurements": "abbreviated",
      "numericRanges": "numerals",
      "ordinals": "numerals"
    }
  }
}
```

#### Using SDK

You can use the Corti SDK (currently in "beta") to send configuration.
#### Using SDK

You can use the Corti SDK (currently in "beta") to send configuration. You can provide the configuration either directly when connecting, or send it as a separate message after establishing the connection:

```ts title="JavaScript (Beta, recommended)" theme={null}
const configuration = {
  primaryLanguage: "en",
  spokenPunctuation: true,
  commands: [
    { id: "next_section", phrases: ["next section", "go to next section"] },
  ],
};

const transcribeSocket = await cortiClient.transcribe.connect({ configuration });
```

```ts title="JavaScript (Beta, handle configuration manually)" theme={null}
const transcribeSocket = await cortiClient.transcribe.connect();

transcribeSocket.on("open", () => {
  transcribeSocket.sendConfiguration({ type: "config", configuration: config });
});
```

### Sending audio

Audio data chunks, sent as binary, to be transcribed. See more details on audio formats and best practices for audio streaming [here](/stt/audio).

#### Using SDK

You can use the Corti SDK (currently in "beta") to send audio data.

```ts title="JavaScript (Beta)" theme={null}
transcribeSocket.sendAudio(audioChunk); // note: this method does not chunk the audio for you
```

### Flush the Audio Buffer

To flush the audio buffer, forcing transcript segments and detected commands to be returned over the WebSocket (e.g., when turning off or muting the microphone in a "hold-to-talk" dictation workflow, or in applications that support mic "go to sleep"), send this message:

```json theme={null}
{ "type": "flush" }
```

The server will return text/commands for audio sent before the `flush` message and then respond with this message:

```json theme={null}
{ "type": "flushed" }
```

The WebSocket will remain open so dictation can continue.

Client-side considerations:

1. If you rely on a `flush` event to separate data (e.g., for different sections in an EHR template), be sure to receive the `flushed` event before moving on to the next data field.
2. When using a web browser's `MediaRecorder` API, audio is buffered and only emitted at the configured timeslice interval. Therefore, *before* sending a `flush` message, call `MediaRecorder.requestData()` to force any remaining buffered audio on the client to be transmitted to the server. This ensures all audio reaches the server before the `flush` is processed.

### Ending the Session

To end the `/transcribe` session, send this message:

```json theme={null}
{ "type": "end" }
```

This will signal the server to send any remaining transcript segments and/or detected commands. Then, the server will send two messages:

```json theme={null}
{ "type": "usage", "credits": 0.1 }
```

```json theme={null}
{ "type": "ended" }
```

Following the message of type `ended`, the server will close the WebSocket.
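For clients not using the SDK, here is a minimal sketch of the same end-of-session flow over a plain WebSocket, reusing the `ws` socket from the earlier connection sketch; message shapes follow the spec above:

```ts title="Raw WebSocket (sketch)" theme={null}
// Ask the server to finalize the session.
ws.send(JSON.stringify({ type: "end" }));

ws.addEventListener("message", (event) => {
  const message = JSON.parse(event.data);
  if (message.type === "usage") {
    console.log("Credits consumed:", message.credits);
  }
  if (message.type === "ended") {
    // The server closes the socket after `ended`; closing locally as well is harmless.
    ws.close();
  }
});
```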
#### Using SDK

You can use the Corti SDK (currently in "beta") to end the `/transcribe` session. When using automatic configuration (passing configuration to connect), the socket will close itself without reconnecting when it receives an `ended` message. When using manual configuration, the socket will attempt to reconnect after the server closes the connection. To prevent this, you must subscribe to the `ended` message and manually close the connection.

```ts title="JavaScript (Beta, recommended)" theme={null}
const transcribeSocket = await cortiClient.transcribe.connect({ configuration });
transcribeSocket.sendEnd({ type: "end" });
```

```ts title="JavaScript (Beta, manual configuration)" theme={null}
const transcribeSocket = await cortiClient.transcribe.connect();
transcribeSocket.sendEnd({ type: "end" });

transcribeSocket.on("message", (message) => {
  if (message.type === "ended") {
    transcribeSocket.close(); // Prevents unwanted reconnection
  }
});
```

***

## 4. Responses

### Configuration

A response of type `CONFIG_ACCEPTED` is returned when sending a valid configuration.

### Transcripts

* `text` – Transcript segment with punctuation applied and command phrases removed
* `rawTranscriptText` – The raw transcript without spoken punctuation applied and without command phrases removed
* `start` – Start time of the transcript segment in seconds
* `end` – End time of the transcript segment in seconds
* `isFinal` – If false, the segment is an interim transcript result

```json Transcript response theme={null}
{
  "type": "transcript",
  "data": {
    "text": "patient reports mild chest pain.",
    "rawTranscriptText": "patient reports mild chest pain period",
    "start": 0.0,
    "end": 3.2,
    "isFinal": true
  }
}
```

### Commands

* `id` – Identifies the command when it gets detected and returned over the WebSocket
* `variables` – The variables identified
* `rawTranscriptText` – The raw transcript without spoken punctuation applied and without command phrases removed
* `start` – Start time of the transcript segment in seconds
* `end` – End time of the transcript segment in seconds

```json Command response theme={null}
{
  "type": "command",
  "data": {
    "id": "insert_template",
    "variables": { "template_name": "radiology" },
    "rawTranscriptText": "insert my radiology template",
    "start": 2.3,
    "end": 2.9
  }
}
```

### Flushed

Returned by the server after processing a `flush` event from the client and returning transcript segments / detected commands:

```json theme={null}
{ "type": "flushed" }
```

### Ended

Returned by the server after processing an `end` event from the client, to convey the amount of credits consumed:

```json theme={null}
{ "type": "usage", "credits": 0.1 }
```

Returned by the server after processing an `end` event from the client, before closing the WebSocket:

```json theme={null}
{ "type": "ended" }
```

#### Using SDK

You can use the Corti SDK (currently in "beta") to subscribe to responses from the `/transcribe` endpoint.

```ts title="JavaScript (Beta)" theme={null}
transcribeSocket.on("message", (message) => {
  switch (message.type) {
    case "transcript":
      console.log("Transcript:", message.data.text);
      break;
    case "command":
      console.log("Command detected:", message.data.id, message.data.variables);
      break;
    case "error":
      console.error("Error:", message.error);
      break;
    case "usage":
      console.log("Usage credits:", message.credits);
      break;
    default:
      // handle other messages
      break;
  }
});
```

***

## 5. Error Responses

Returned when sending an invalid configuration. Possible errors: `CONFIG_DENIED`, `CONFIG_TIMEOUT`. The response includes the reason the configuration is invalid and the session ID.

Once configuration has been accepted and the session is running, you may encounter runtime or application-level errors. These are sent as JSON objects with the following structure:

```json theme={null}
{
  "type": "error",
  "error": {
    "id": "error id",
    "title": "error title",
    "status": 400,
    "details": "error details",
    "doc": "link to documentation"
  }
}
```

In some cases, receiving an `error` type message will cause the stream to end and send messages of type `usage` and `ended`.

#### Using SDK

You can use the Corti SDK (currently in "beta") to handle error messages.
With the recommended configuration, configuration errors (e.g., `CONFIG_DENIED`) and runtime errors will both trigger the `error` event and automatically close the socket. You can also inspect the original message in the message handler. With manual configuration, configuration errors are only received as messages (not as error events), and you must close the socket manually to avoid reconnection.

```ts title="JavaScript (Beta, recommended)" theme={null}
const transcribeSocket = await cortiClient.transcribe.connect({ configuration });

transcribeSocket.on("error", (error) => {
  // Emitted for both configuration and runtime errors
  console.error("Error event:", error);
  // The socket will close itself automatically
});

// still can be accessed with normal "message" subscription
transcribeSocket.on("message", (message) => {
  if (
    message.type === "CONFIG_DENIED" ||
    message.type === "CONFIG_TIMEOUT"
  ) {
    console.log("Configuration error (message):", message);
  }

  if (message.type === "error") {
    console.log("Runtime error (message):", message);
  }
});
```

```ts title="JavaScript (Beta, manual configuration)" theme={null}
const transcribeSocket = await cortiClient.transcribe.connect();

transcribeSocket.on("message", (message) => {
  if (
    message.type === "CONFIG_DENIED" ||
    message.type === "CONFIG_TIMEOUT"
  ) {
    console.error("Configuration error (message):", message);
    transcribeSocket.close(); // Must close manually to avoid reconnection
  }

  if (message.type === "error") {
    console.error("Runtime error (message):", message);
    transcribeSocket.close(); // Must close manually to avoid reconnection
  }
});
```

# Create Transcript
Source: https://docs.corti.ai/api-reference/transcripts/create-transcript

api-reference/auto-generated-openapi.yml post /interactions/{id}/transcripts/

Create a transcript from an audio file attached to the interaction via the `/recordings` endpoint.
Each interaction may have more than one audio file and transcript associated with it. While audio files up to 60 minutes in total duration, or 150 MB in total size, may be attached to an interaction, synchronous processing is only supported for audio files less than ~2 minutes in duration.

If an audio file takes longer to transcribe than the 25-second synchronous processing timeout, it will continue to process asynchronously. In this scenario, an empty transcript will be returned with a `Location` header that can be used to retrieve the final transcript.

The client can poll the Get Transcript endpoint (`GET /interactions/{id}/transcripts/{transcriptId}/status`) for transcript status changes:
- `200 OK` with status `processing`, `completed`, or `failed`
- `404 Not Found` if the `interactionId` or `transcriptId` is invalid

The completed transcript can be retrieved via the Get Transcript endpoint (`GET /interactions/{id}/transcripts/{transcriptId}/`).
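The following is a minimal polling sketch of this flow in TypeScript. It assumes `baseUrl` and `token` hold your API base URL and bearer token, and that the status endpoint returns a JSON body with a `status` field as described above; adjust both for your environment:

```ts title="Polling sketch" theme={null}
// Assumptions: replace with your regional base URL and a valid bearer token.
const baseUrl = "https://api.eu.corti.app/v2"; // hypothetical – check the API reference
const token = "YOUR_ACCESS_TOKEN";

async function waitForTranscript(interactionId: string, transcriptId: string) {
  const statusUrl = `${baseUrl}/interactions/${interactionId}/transcripts/${transcriptId}/status`;

  // Poll the status endpoint until processing finishes.
  while (true) {
    const res = await fetch(statusUrl, {
      headers: { Authorization: `Bearer ${token}` },
    });
    if (res.status === 404) throw new Error("Invalid interactionId or transcriptId");

    const body = await res.json(); // assumed shape: { status: "processing" | "completed" | "failed" }
    if (body.status === "completed") break;
    if (body.status === "failed") throw new Error("Transcript processing failed");

    await new Promise((resolve) => setTimeout(resolve, 2000)); // back off between polls
  }

  // Retrieve the finalized transcript.
  const transcript = await fetch(
    `${baseUrl}/interactions/${interactionId}/transcripts/${transcriptId}/`,
    { headers: { Authorization: `Bearer ${token}` } }
  );
  return transcript.json();
}
```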
# Delete Transcript
Source: https://docs.corti.ai/api-reference/transcripts/delete-transcript

api-reference/auto-generated-openapi.yml delete /interactions/{id}/transcripts/{transcriptId}

Deletes a specific transcript associated with an interaction.

# Get Transcript
Source: https://docs.corti.ai/api-reference/transcripts/get-transcript

api-reference/auto-generated-openapi.yml get /interactions/{id}/transcripts/{transcriptId}

Retrieve a transcript from a specific interaction.
Each interaction may have more than one transcript associated with it. Use the List Transcripts request (`GET /interactions/{id}/transcripts/`) to see all `transcriptId`s available for the interaction.

The client can poll this Get Transcript endpoint (`GET /interactions/{id}/transcripts/{transcriptId}/status`) for transcript status changes:
- `200 OK` with status `processing`, `completed`, or `failed`
- `404 Not Found` if the `interactionId` or `transcriptId` is invalid

A status of `completed` indicates the transcript is finalized. If the transcript is retrieved while the status is `processing`, it will be incomplete.
# List Transcripts
Source: https://docs.corti.ai/api-reference/transcripts/list-transcripts

api-reference/auto-generated-openapi.yml get /interactions/{id}/transcripts/

Retrieves a list of transcripts for a given interaction.

# Welcome to the Corti API Reference
Source: https://docs.corti.ai/api-reference/welcome

AI platform for healthcare developers

This API Reference provides detailed specifications for integrating with the Corti API, enabling organizations to build bespoke healthcare AI solutions that meet their specific needs.

* Walkthrough and link to the repo for the JavaScript SDK
* Download the Corti API Postman collection to start building

***

#### Most Popular

* Detailed spec for real-time dictation and voice commands
* Detailed spec for real-time conversational intelligence
* Start here for opening a contextual messaging thread
* Detailed spec for creating an interaction and setting appropriate context
* Attach an audio file to the interaction
* Start or continue your contextual chat and agentic tasks
* Convert audio files to text
* Create one to many documents for an interaction
* Retrieve information about all available experts for use with your agents

***

### Resources

* Learn about upcoming changes and recent API, language model, and app updates
* View help articles and documentation, contact the Corti team, and manage support tickets
* Learn about how to use OAuth based on your workflow needs
* Review detailed compliance standards and security certifications
Please [contact us](https://help.corti.app/) if you need more information about the Corti API

# Embedded Assistant API
Source: https://docs.corti.ai/assistant/embedded-api

Access an API for embedding Corti Assistant in your workflow today

The Corti Embedded Assistant API enables seamless integration of [Corti Assistant](https://assistant.corti.ai) into host applications, such as Electronic Health Record (EHR) systems, web-based clinical portals, or native applications using embedded WebViews. The implementation provides a robust, consistent, and secure interface for parent applications to control and interact with the embedded Corti Assistant.

The details outlined below are for you to embed the Corti Assistant "AI scribe solution" natively within your application. To learn more about the full Corti API, please see more [here](/api-reference/welcome)

***

## Overview

The API supports both asynchronous (postMessage) and synchronous (window object) integration modes.

## Integration Modes

### 1. PostMessage API

This method is recommended for iFrame/WebView integration:

* Secure cross-origin communication
* Works with any iframe or WebView implementation
* Fully asynchronous with request/response pattern

Available regions:

* EU: [https://assistant.eu.corti.app](https://assistant.eu.corti.app)
* EU MD: [https://assistantmd.eu.corti.app](https://assistantmd.eu.corti.app) (medical device compliant)
* US: [https://assistant.us.corti.app](https://assistant.us.corti.app)

The code block below has an example per region (the constant name is illustrative):

```javascript PostMessage EU example expandable theme={null}
// Your desired environment to use
const CORTI_ASSISTANT_URL = "https://assistant.eu.corti.app";
```

```javascript PostMessage EU MD example expandable theme={null}
// Your desired environment to use
const CORTI_ASSISTANT_URL = "https://assistantmd.eu.corti.app";
```

```javascript PostMessage US example expandable theme={null}
// Your desired environment to use
const CORTI_ASSISTANT_URL = "https://assistant.us.corti.app";
```

### 2. Window API

This method is recommended for direct integration:

* Synchronous TypeScript API via `window.CortiEmbedded`
* Promise-based methods
* Ideal for same-origin integrations

```javascript Window API expandable theme={null}
// Wait for the embedded app to be ready
window.addEventListener("message", async (event) => {
  if (
    event.data?.type === "CORTI_EMBEDDED_EVENT" &&
    event.data.event === "ready"
  ) {
    // Use the window API directly
    const api = window.CortiEmbedded.v1;

    const user = await api.auth({
      mode: "stateful",
      accessToken: "your-access-token",
      refreshToken: "your-refresh-token",
    });

    console.log("Authenticated user:", user);
  }
});
```

## Authentication

Authenticate the user session with the embedded app:

```javascript PostMessage expandable theme={null}
iframe.contentWindow.postMessage({
  type: 'CORTI_EMBEDDED',
  version: 'v1',
  action: 'auth',
  requestId: 'unique-id',
  payload: {
    mode: 'stateless' | 'stateful', // we currently do not take this value into account and will always refresh the token internally
    access_token: string,
    refresh_token?: string,
    id_token?: string,
    expires_in?: number,
    token_type?: string
  }
}, '*');
```

```javascript Window API theme={null}
const api = window.CortiEmbedded.v1;

const user = await api.auth({
  mode: "stateful",
  access_token: "token",
  refresh_token: "refresh-token",
});
```

## Configure interface

The `configure` command allows you to configure the Assistant interface for the current session, including toggling which UI features are visible, the visual appearance of the Assistant, and locale settings.
```javascript PostMessage expandable theme={null}
iframe.contentWindow.postMessage(
  {
    type: "CORTI_EMBEDDED",
    version: "v1",
    action: "configure",
    payload: {
      features: {
        interactionTitle: false,
        aiChat: false,
        documentFeedback: false,
        navigation: true,
        virtualMode: true,
        syncDocumentAction: false,
      },
      appearance: {
        primaryColor: "#00a6ff",
      },
      locale: {
        interfaceLanguage: "de-DE",
        dictationLanguage: "da-DK",
        overrides: {
          key: "value",
        },
      },
    },
  },
  "*"
);
```

```javascript Window API expandable theme={null}
const api = window.CortiEmbedded.v1;

await api.configure({
  features: {
    interactionTitle: false,
    aiChat: false,
    documentFeedback: false,
    navigation: true,
    virtualMode: true,
    syncDocumentAction: false,
  },
  appearance: {
    primaryColor: "#00a6ff",
  },
  locale: {
    interfaceLanguage: "de-DE",
    dictationLanguage: "da-DK",
    overrides: {
      key: "value",
    },
  },
});
```

The defaults are as follows:

* `features.interactionTitle: true`
* `features.aiChat: true`
* `features.documentFeedback: true`
* `features.navigation: false`
* `features.virtualMode: true` (the option can be hidden and disabled regardless of user preferences; when disabled it won't show up in the UI, and even a session configured to be virtual won't be if this is toggled after the session configuration message)
* `features.syncDocumentAction: false` (shows a sync button in the toolbar when a document is focused; if `false`, a simple copy button is displayed)
* `appearance.primaryColor: null` (uses the built-in styles, which are blue-ish)
* `locale.interfaceLanguage: null` (uses the current user's specified default language, or is determined by browser settings if not specified)
* `locale.dictationLanguage: "en"`
* `locale.overrides: {}` (override a translation by specifying the key to override and the label to replace it with)

The `configure` command can be invoked at any time and will take effect instantly. The command can be invoked with a partial object, and only the specified properties will take effect (see the example after the appearance notes below). The command returns the full currently applied configuration object. Note that if `appearance.primaryColor` has not been set, it will always return as `null`, indicating default colors will be used, unlike `locale.interfaceLanguage` or `locale.dictationLanguage`, which will return whatever actual language is currently used for the given setting.

### Appearance

Disclaimer - always ensure WCAG 2.2 AA conformance

Corti Assistant’s default theme has been evaluated against WCAG 2.2 Level AA and meets applicable success criteria in our supported browsers. This conformance claim applies only to the default configuration. Customer changes (e.g., color palettes, CSS overrides, third-party widgets, or content) are outside the scope of this claim. Customers are responsible for ensuring their customizations continue to meet WCAG 2.2 AA (including color contrast and focus visibility).

When supplying a custom accent or theme, Customers must ensure WCAG 2.2 AA conformance, including:

* 1.4.3 Contrast (Minimum): normal text ≥ 4.5:1; large text ≥ 3:1
* 1.4.11 Non-text Contrast: UI boundaries, focus rings, and selected states ≥ 3:1
* 2.4.11 Focus Not Obscured (Minimum): focus indicators remain visible and unobstructed

Corti provides accessible defaults. If you override them, verify contrast for all states (default, hover, active, disabled, focus) and on all backgrounds you use.
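For example, since partial objects are supported, a single property can be updated on the fly without restating the rest of the configuration. A sketch using the Window API from above (the color value is illustrative):

```javascript Partial configure example theme={null}
const api = window.CortiEmbedded.v1;

// Only `appearance.primaryColor` changes; all other settings keep their current values.
const applied = await api.configure({
  appearance: { primaryColor: "#006644" },
});

// The call returns the full currently applied configuration object.
console.log(applied.locale.interfaceLanguage);
```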
### Available interface languages

Updated as of November 2025

| Language code | Language |
| ------------- | -------- |
| `en`          | English  |
| `de-DE`       | German   |
| `fr-FR`       | French   |
| `it-IT`       | Italian  |
| `sv-SE`       | Swedish  |
| `da-DK`       | Danish   |

### Available dictation languages

Updated as of November 2025

#### EU

| Language code | Language        |
| ------------- | --------------- |
| `en`          | English         |
| `en-GB`       | British English |
| `de`          | German          |
| `fr`          | French          |
| `sv`          | Swedish         |
| `da`          | Danish          |
| `nl`          | Dutch           |
| `no`          | Norwegian       |

#### US

| Language code | Language |
| ------------- | -------- |
| `en`          | English  |

### Known strings to override

Currently, only the following keys are exposed for override:

| Key                                      | Default value            | Purpose                                                              |
| ---------------------------------------- | ------------------------ | -------------------------------------------------------------------- |
| `interview.document.syncDocument.label`  | *"Synchronize document"* | The button text for the *"synchronize document"* button if enabled.  |

## Create interaction

```javascript PostMessage expandable theme={null}
iframe.contentWindow.postMessage({
  type: 'CORTI_EMBEDDED',
  version: 'v1',
  action: 'createInteraction',
  payload: {
    assignedUserId: null,
    encounter: {
      identifier: `encounter-${Date.now()}`,
      status: "planned",
      type: "first_consultation",
      period: {
        startedAt: new Date().toISOString(),
      },
      title: "Initial Consultation",
    },
  }
}, '*');
```

```javascript Window API expandable theme={null}
const api = window.CortiEmbedded.v1;

await api.createInteraction({
  assignedUserId: null,
  encounter: {
    identifier: `encounter-${Date.now()}`,
    status: "planned",
    type: "first_consultation",
    period: {
      startedAt: new Date().toISOString(),
    },
    title: "Initial Consultation",
  },
});
```

## Add Facts

Add contextual facts to the current interaction:

```javascript PostMessage expandable theme={null}
iframe.contentWindow.postMessage({
  type: 'CORTI_EMBEDDED',
  version: 'v1',
  action: 'addFacts',
  requestId: 'unique-id',
  payload: {
    facts: [
      { text: "Chest pain", group: "other" },
      { text: "Shortness of breath", group: "other" },
      { text: "Fatigue", group: "other" },
      { text: "Dizziness", group: "other" },
      { text: "Nausea", group: "other" },
    ]
  }
}, '*');
```

```javascript Window API theme={null}
const api = window.CortiEmbedded.v1;

await api.addFacts({
  facts: [
    { text: "Chest pain", group: "other" },
    { text: "Shortness of breath", group: "other" },
    { text: "Fatigue", group: "other" },
  ],
});
```

## Configure Session

Set session-level defaults and preferences:

```javascript PostMessage expandable theme={null}
iframe.contentWindow.postMessage({
  type: 'CORTI_EMBEDDED',
  version: 'v1',
  action: 'configureSession',
  requestId: 'unique-id',
  payload: {
    defaultLanguage: 'en',
    defaultOutputLanguage: 'en',
    defaultTemplateKey: 'corti-soap',
    defaultMode: 'virtual'
  }
}, '*');
```

```javascript Window API expandable theme={null}
const api = window.CortiEmbedded.v1;

await api.configureSession({
  defaultLanguage: "en",
  defaultOutputLanguage: "en",
  defaultTemplateKey: "corti-soap",
  defaultMode: "virtual",
});
```

## Navigate

Navigate to a specific path within the embedded app:

```javascript PostMessage expandable theme={null}
iframe.contentWindow.postMessage({
  type: 'CORTI_EMBEDDED',
  version: 'v1',
  action: 'navigate',
  requestId: 'unique-id',
  payload: {
    path: '/session/interaction-123'
  }
}, '*');
```

```javascript Window API theme={null}
const api = window.CortiEmbedded.v1;

await api.navigate({
  path: "/session/interaction-123",
});
```

Navigable URLs include:
* `/` – start a new session
* `/session/<interactionId>` – go to an existing session identified by `<interactionId>`
* `/templates` – browse and create templates
* `/settings/preferences` – edit defaults like languages and default session settings
* `/settings/input` – edit dictation input settings
* `/settings/account` – edit general account settings
* `/settings/archive` – view items in and restore from the archive (only relevant if navigation is visible)

## Set credentials

Change the credentials of the currently authenticated user. This can be used both to set the credentials for a user without a password (if only authenticated via an identity provider) or to change the password of a user with an existing password.

The current password policy must be followed:

* At least 1 uppercase, 1 lowercase, 1 numerical, and 1 special character
* At least 8 characters long

```javascript PostMessage expandable theme={null}
iframe.contentWindow.postMessage({
  type: 'CORTI_EMBEDDED',
  version: 'v1',
  action: 'setCredentials',
  requestId: 'unique-id',
  payload: {
    password: 'new-password'
  }
}, '*');
```

```javascript Window API theme={null}
const api = window.CortiEmbedded.v1;

await api.setCredentials({ password: "new-password" });
```

## Recording controls

Start and stop recording within the embedded session:

```javascript PostMessage expandable theme={null}
// Start recording
iframe.contentWindow.postMessage({
  type: 'CORTI_EMBEDDED',
  version: 'v1',
  action: 'startRecording',
  requestId: 'unique-id'
}, '*');

// Stop recording
iframe.contentWindow.postMessage({
  type: 'CORTI_EMBEDDED',
  version: 'v1',
  action: 'stopRecording',
  requestId: 'unique-id'
}, '*');
```

```javascript Window API theme={null}
const api = window.CortiEmbedded.v1;

await api.startRecording();
await api.stopRecording();
```

## Get status

The `getStatus` method allows you to request information about the current state of the application, including authentication status, current user, current URL, and interaction details.

```javascript PostMessage theme={null}
iframe.contentWindow.postMessage(
  { type: "CORTI_EMBEDDED", version: "v1", action: "getStatus" },
  "*"
);
```

```javascript Window API theme={null}
const api = window.CortiEmbedded.v1;

await api.getStatus();
```

The response is an object containing information about the current state of the application. The `interaction` field in particular contains a list of resources combined with their respective metadata.
```json Response object expandable theme={null}
{
  "auth": {
    "isAuthenticated": "",
    "user": { "id": "", "email": "" } // User OR null if not authenticated
  },
  "currentUrl": "",
  "interaction": {
    "id": "",
    "title": "",
    "state": "",
    "startedAt": "",
    "endedAt": "",
    "endsAt": "",
    "transcripts": [{
      "utterances": [{
        "id": "",
        "start": "",
        "duration": "",
        "text": "",
        "isFinal": "",
        "participantId": ""
      }],
      "participants": [{ "id": "", "channel": "", "role": "" }],
      "isMultiChannel": ""
    }],
    "documents": [{
      "id": "",
      "name": "",
      "templateRef": "",
      "isStream": "",
      "sections": [{
        "key": "",
        "name": "",
        "text": "",
        "sort": "",
        "createdAt": "",
        "updatedAt": "",
        "markdown": "",
        "htmlText": "",
        "plainText": ""
      }],
      "outputLanguage": ""
    }],
    "facts": [{
      "id": "",
      "text": "",
      "group": "",
      "isDiscarded": "",
      "source": "",
      "createdAt": "",
      "updatedAt": "",
      "isNew": "",
      "isDraft": ""
    }],
    "websocketUrl": ""
  }
}
```

## Events

The embedded app sends events to notify the parent application of important state changes:

### Event Types

| Event               | Description                             |
| :------------------ | :-------------------------------------- |
| `ready`             | Embedded app is loaded and ready        |
| `loaded`            | Navigation to a specific path completed |
| `recordingStarted`  | Recording has started                   |
| `recordingStopped`  | Recording has stopped                   |
| `documentGenerated` | A document has been generated           |
| `documentUpdated`   | A document has been updated             |
| `documentSynced`    | A document has been synced to EHR       |

### Listening for Events

```javascript Listening for Events expandable theme={null}
window.addEventListener("message", (event) => {
  if (event.data?.type === "CORTI_EMBEDDED_EVENT") {
    switch (event.data.event) {
      case "ready":
        console.log("Embedded app ready");
        break;
      case "documentGenerated":
        console.log("Document generated:", event.data.payload.document);
        break;
      case "recordingStarted":
        console.log("Recording started");
        break;
      // ... handle other events
    }
  }
});
```

## Complete Integration Flow

Here's a complete example showing the recommended integration steps:

```javascript Example Embedded Integration expandable theme={null}
// State management
let iframe = null;
let isReady = false;
let currentInteractionId = null;
let pendingRequests = new Map();

// Initialize the integration
function initializeCortiEmbeddedIntegration(iframeElement) {
  iframe = iframeElement;
  isReady = false;
  currentInteractionId = null;
  setupEventListeners();
}

function setupEventListeners() {
  window.addEventListener('message', (event) => {
    if (event.data?.type === 'CORTI_EMBEDDED_EVENT') {
      handleEvent(event.data);
    }
  });
}

function handleEvent(eventData) {
  switch (eventData.event) {
    case 'ready':
      isReady = true;
      startIntegrationFlow();
      break;
    case 'documentGenerated':
      onDocumentGenerated(eventData.payload.document);
      break;
    // ... handle other events
  }
}

async function startIntegrationFlow() {
  try {
    // 1. Authenticate
    await authenticate();

    // 2. Configure session
    await configureSession();

    // 3. Create interaction
    const interaction = await createInteraction();

    // 4. Add relevant facts
    await addFacts();

    // 5. Navigate to the interaction UI
    await navigateToSession(interaction.id);

    console.log('Integration flow completed successfully');
  } catch (error) {
    console.error('Integration flow failed:', error);
  }
}

async function authenticate() {
  return new Promise((resolve, reject) => {
    const requestId = generateRequestId();
    // Resolve/reject these stored handlers when the embedded app replies,
    // matching the response to the request via `requestId`.
    pendingRequests.set(requestId, { resolve, reject });

    iframe.contentWindow.postMessage({
      type: 'CORTI_EMBEDDED',
      version: 'v1',
      action: 'auth',
      requestId,
      payload: {
        mode: 'stateful',
        accessToken: 'your-accesstoken',
        refreshToken: 'your-refreshtoken',
        // ...any additional token fields
      }
    }, '*');
  });
}

// Simple unique request ID helper (illustrative)
function generateRequestId() {
  return `req-${Date.now()}-${Math.random().toString(36).slice(2)}`;
}

// Usage example:
// (configureSession, createInteraction, addFacts, and navigateToSession
// follow the same request/response pattern as authenticate)
const iframeElement = document.getElementById('corti-iframe');
initializeCortiEmbeddedIntegration(iframeElement);
```

## Error Handling

All API methods can throw errors. Always wrap calls in try-catch blocks:

```javascript Error Handling expandable theme={null}
try {
  const api = window.CortiEmbedded.v1;
  const user = await api.auth(authPayload);
  console.log("Authentication successful:", user);
} catch (error) {
  console.error("Authentication failed:", error.message);
  // Handle authentication failure
}
```
Please [contact us](https://help.corti.app) for help or questions.

# What is Corti Assistant?
Source: https://docs.corti.ai/assistant/how_it_works

The AI scribe for healthcare that listens, understands, and writes your documentation

**Corti Assistant is the next-generation ambient AI solution that transforms clinical conversations into structured facts and flexible documentation — putting healthcare professionals back in control.**

Unlike traditional ambient tools that go directly from audio to a single note, Corti Assistant introduces a **fact-first workflow**, where clinicians can review, edit, and expand upon AI-generated facts before generating one or more clinical documents — such as SOAP notes, patient summaries, or referral letters. This gives clinicians greater accuracy, flexibility, and trust in their documentation, all while drastically reducing manual effort.

**Corti Assistant can be embedded directly into the care navigation workflow** or accessed via web, desktop, mobile (iOS/Android), or even Apple Watch — providing seamless ambient documentation wherever and however clinicians work.

***

## Why Corti Assistant?

Clinicians today are overwhelmed by administrative tasks, spending up to a third of their time on documentation. Corti Assistant helps reclaim that time by enabling fast, accurate documentation — but without compromising clinician oversight.

This new version of Corti Assistant is designed around a core belief: AI should support, not replace, clinical expertise. By surfacing editable, structured facts from each patient encounter, it allows clinicians to:

* Stay in control of what gets documented
* Quickly produce high-quality notes across multiple formats
* Save time without sacrificing quality or accuracy

### Key Features

#### 🧠 Fact-Based Documentation

Instead of jumping directly to a note, Corti Assistant extracts structured medical facts from the interaction. These can be reviewed, edited, or supplemented by the clinician before any documentation is generated.

#### 📄 Multi-Note Generation

From a single set of curated facts and notes, clinicians can generate multiple types of documentation — SOAP notes, referral letters, discharge summaries, patient instructions, and more.

#### 🤖 Interactive AI Assistant

Corti offers real-time, contextual prompts during or after the consultation. Clinicians can ask questions or request summaries, guidance, or resources — making it a true co-pilot, not just a scribe.

#### 💬 Ambient Documentation

Automatically transcribes, summarizes, and organizes the conversation in real time — reducing manual effort while improving clarity and consistency.

#### 🏥 Medical Coding Assistance

Tags documentation with relevant ICD/CPT codes to support accurate billing and compliance.

#### 📱 Cross-Platform Convenience

Use Corti Assistant on desktop, web, mobile (iOS, Android), or even Apple Watch — ideal for consultations on the go or in varied clinical environments.

### Seamless Integration

Corti Assistant is API-driven and designed for fast integration into EHR systems and clinical workflows. Whether as a standalone product or embedded widget, implementation is smooth and scalable — with minimal disruption to provider operations.

### Use Cases

#### 🏥 Primary Care Clinics

Document consultations effortlessly while maintaining high standards of care and clinical detail.

#### 🧬 Specialist Consultations

Capture detailed and nuanced patient histories in complex specialties, with support for multi-document needs.
#### 📞 Telemedicine

Automatically document virtual consultations in real time, streamlining remote workflows.

#### 🏃 On-the-Go Care

Record and manage documentation conveniently via mobile or Apple Watch — ideal for home visits or emergency settings.

## Conclusion

With its fact-first, clinician-centered design, Corti Assistant represents a step-change in ambient AI. It not only automates documentation — it elevates it, making it safer, more flexible, and more intuitive. For organizations aiming to improve care quality while reducing administrative burden, Corti Assistant is the AI healthcare deserves.

Please [contact us](https://help.corti.app) for more information on Corti Assistant or to get started today.

# Welcome to Corti Assistant
Source: https://docs.corti.ai/assistant/welcome

The AI scribe for healthcare that listens, understands, and writes your documentation

Designed for healthcare teams with no time to waste. Start a session in seconds; Corti Assistant listens, captures the transcript, and extracts clinical facts as you speak.

This documentation site provides some overview information about Corti Assistant, an AI scribe that can be used as a standalone application or embedded directly into your native solution.

Learn more about Corti Assistant [here](https://assistant.corti.ai) and sign up for a free trial, or [contact us](https://help.corti.app) to get started with embedding Corti Assistant into your workflow today.

***
# Creating Clients
Source: https://docs.corti.ai/authentication/creating_clients

Quick steps to creating your first client on the Corti Developer Console.