Documentation Index

Fetch the complete documentation index at: https://docs.corti.ai/llms.txt

Use this file to discover all available pages before exploring further.

Modern AI coding tools like Claude Code, Cursor, Codex, and other LLM-powered assistants can dramatically accelerate your development with Corti APIs. This guide shows you how to leverage these tools effectively.

AI coding assistants excel at generating boilerplate code, understanding API patterns, and helping you iterate quickly. Use them to scaffold integrations, generate SDK wrappers, and explore the API surface.

Why Use AI Coding Tools with Corti?

Faster Integration

Generate working code examples from natural language descriptions of your use case

API Discovery

Quickly understand endpoint patterns, request/response structures, and authentication flows

Error Handling

Generate robust error handling and retry logic based on Corti’s error codes

Code Generation

Create SDK wrappers, test suites, and integration examples tailored to your stack

Corti API Documentation for LLMs

Corti provides machine-readable documentation specifically formatted for LLMs and AI coding tools:

llms.txt

Concise API reference optimized for LLM context windows. Perfect for quick lookups and code generation.

llms-full.txt

Comprehensive documentation including guides, examples, and detailed specifications. Use when you need full context.
These files are updated automatically and contain the complete Corti API documentation in a format optimized for AI tools. Reference them directly in your prompts or configure your AI coding assistant to use them as context.

Getting Started

1

Configure Your AI Tool

Most AI coding assistants can be configured to use external documentation sources. Here are common approaches:
Claude Code can fetch documentation on demand. Point it at the llms.txt files directly in your prompt, or add them to your project’s CLAUDE.md so every session has the context:
I'm building with the Corti API. Always consult
https://docs.corti.ai/llms-full.txt before generating code.
Generate a complete example for real-time transcription using the
official JavaScript SDK (@corti/sdk).
Cursor can index external documentation as a custom Doc source. Add https://docs.corti.ai/llms.txt (or llms-full.txt) under Settings → Features → Docs, then reference it with @Docs in chat or Composer:
@Docs Corti API — generate a TypeScript function using @corti/sdk
that authenticates with client credentials and creates an interaction.
Codex (OpenAI’s coding agent, in ChatGPT or the Codex CLI) works best when you anchor it to the Corti docs at the start of a task. Paste the llms.txt URL into the initial message, or commit an AGENTS.md file to your repo that points Codex at it:
Reference: https://docs.corti.ai/llms-full.txt
Using the official Corti JavaScript SDK (@corti/sdk), build a
function that uploads a recording to an interaction and polls for
the resulting transcript.
2

Prompt the AI to use the official SDK

For JavaScript/TypeScript and C#/.NET, the official SDKs (@corti/sdk, Corti.Sdk) are the recommended foundation — they handle client credentials, token refresh, WebSocket reconnection, pagination, retries, and typed errors. Tell your AI assistant to build on the SDK rather than re-implementing any of that.
Using https://docs.corti.ai/llms-full.txt and the official Corti
JavaScript SDK (`@corti/sdk`), generate a Node.js module that:
1. Creates a `CortiClient` with Client Credentials auth
   (environment, tenantName, clientId, clientSecret)
2. Creates an interaction via `client.interactions.create(...)`
   with a valid `encounter` object
3. Opens a real-time stream via `client.stream.connect({ id: interactionId, ... })`
4. Sends a configuration message, waits for `CONFIG_ACCEPTED`, then
   streams PCM audio and logs `transcript` / `facts` messages
5. Closes the stream cleanly with `{ type: "end" }`

Prefer SDK methods over raw HTTP/WebSocket calls. The SDK already
handles token refresh, reconnection, and typed errors — do not
re-implement them.
Browser-only dictation UIs should use the Dictation Web Component instead of calling the WebSocket directly. Ask your AI assistant to embed the component and wire up its events.
3

Hand-rolling the REST API (no SDK available)

For languages without an official SDK (Python, Go, Ruby, etc.), prompt the AI to call the REST API directly. Use the accordions below as prompt templates — they describe the real shape of each endpoint.
Using Corti API documentation (https://docs.corti.ai/llms.txt),
generate a [Python/Go/Ruby] function that:
1. POSTs to https://auth.$ENVIRONMENT.corti.app/realms/$TENANT_NAME/protocol/openid-connect/token
2. Sends form-encoded body with:
   client_id, client_secret, grant_type=client_credentials, scope=openid
3. Parses the access_token and expires_in (typically 300s)
4. Refreshes the token before expiry (thread-safe, single in-flight refresh)
5. Returns a reusable client object that injects Authorization and
   Tenant-Name headers on every request
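The token request that prompt describes can be sanity-checked with a small Python sketch. The endpoint pattern and form fields come straight from the steps above; the 30-second refresh margin is an arbitrary illustration value, and the actual HTTP call is left to whichever client library you use:

```python
def build_token_request(environment: str, tenant_name: str,
                        client_id: str, client_secret: str) -> tuple[str, dict]:
    """Return (url, form_body) for the client-credentials grant."""
    url = (f"https://auth.{environment}.corti.app/realms/"
           f"{tenant_name}/protocol/openid-connect/token")
    body = {
        "client_id": client_id,
        "client_secret": client_secret,
        "grant_type": "client_credentials",
        "scope": "openid",
    }
    return url, body

def needs_refresh(issued_at: float, expires_in: float,
                  now: float, margin: float = 30.0) -> bool:
    """True when the token is within `margin` seconds of expiry."""
    return now >= issued_at + expires_in - margin

# POST `body` form-encoded to `url` with any HTTP client, e.g.:
#   resp = requests.post(url, data=body)  # requests form-encodes dicts
```

Keeping the URL/body construction pure like this makes the thread-safe refresh wrapper (step 4) easy to unit-test without network access.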
Using Corti API documentation (https://docs.corti.ai/llms.txt),
POST /v2/interactions with:
- Authorization: Bearer <access_token>
- Tenant-Name: <your tenant>
- JSON body containing an `encounter` object with:
    identifier (UUID), status ("planned"),
    type ("first_consultation"),
    period: { startedAt: ISO-8601 timestamp }
Return the `interactionId` and `websocketUrl` from the response.
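A hedged sketch of the request this prompt asks for, using only the headers and `encounter` fields named above (the exact nesting should be verified against the `/v2/interactions` reference):

```python
import uuid
from datetime import datetime, timezone

def build_create_interaction(access_token: str, tenant: str) -> tuple[dict, dict]:
    """Return (headers, json_body) for POST /v2/interactions."""
    headers = {
        "Authorization": f"Bearer {access_token}",
        "Tenant-Name": tenant,
        "Content-Type": "application/json",
    }
    body = {
        "encounter": {
            "identifier": str(uuid.uuid4()),  # any UUID works as an identifier
            "status": "planned",
            "type": "first_consultation",
            "period": {"startedAt": datetime.now(timezone.utc).isoformat()},
        }
    }
    return headers, body
```

The response to this request supplies the `interactionId` and `websocketUrl` that the streaming prompt relies on.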
Using Corti API documentation (https://docs.corti.ai/llms.txt),
generate a WebSocket client that:
1. Connects to the `websocketUrl` from the interaction response,
   appending the access token as a query parameter:
     ?token=Bearer%20<access_token>
2. Sends a `{ type: "config" }` message containing:
   - transcription: { primaryLanguage, isDiarization, participants }
   - mode: { type: "facts" | "transcription", outputLocale }
3. Waits for the server's `CONFIG_ACCEPTED` response before
   sending PCM audio frames
4. Handles incoming `transcript`, `facts`, and `error` messages
5. Reconnects on transport errors
6. Closes the session with `{ type: "end" }`

For `/transcribe` (dictation) the config is simpler — just
`primaryLanguage` and optional `spokenPunctuation` / `commands`.
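Two helpers sketch the connection details from steps 1-2. The `Bearer%20` token encoding is shown in the prompt itself; the nesting of the config message mirrors the field names above and is an assumption to check against the `/streams` reference:

```python
from urllib.parse import quote

def websocket_url_with_token(websocket_url: str, access_token: str) -> str:
    """Append the access token as a query parameter, URL-encoding 'Bearer '."""
    return f"{websocket_url}?token={quote(f'Bearer {access_token}')}"

def build_stream_config(primary_language: str, output_locale: str,
                        mode_type: str = "transcription") -> dict:
    """Build the initial config message; wait for CONFIG_ACCEPTED after sending."""
    return {
        "type": "config",
        "transcription": {
            "primaryLanguage": primary_language,
            "isDiarization": False,
            "participants": [],  # fill in per your use case
        },
        "mode": {"type": mode_type, "outputLocale": output_locale},
    }
```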
Using Corti API documentation (https://docs.corti.ai/llms.txt),
add error handling that:
1. Surfaces 4xx responses with clear messages (validation, auth,
   not-found, rate-limit) using the `detail` field from the
   RFC 9457 problem-details response body
2. Retries 5xx responses with exponential backoff
3. Retries 429 responses after the Retry-After interval

Parse error messages from the response body at runtime rather
than hard-coding an enum of codes.
4

Refine and Test

Use AI assistants to:
  • Generate unit tests for your integration (mock the SDK client or the HTTP layer)
  • Create mock responses for development and offline work
  • Document generated code with runnable examples
  • Review error-handling coverage against real API responses (parse the RFC 9457 problem-details body at runtime)

Best Practices

The official SDKs (@corti/sdk, Corti.Sdk) are the recommended foundation even when an AI assistant writes the code. Prompt tools to use the SDK instead of hand-rolling OAuth, WebSocket reconnection, or error mapping — the SDK already implements those paths correctly, and staying on them keeps upgrades easy.

Effective Prompting

1

Be Specific

Include details about:
  • Your programming language and framework
  • Specific endpoints you want to use
  • Expected behavior and error handling
  • Authentication requirements
2

Reference Documentation

Always mention the llms.txt or llms-full.txt URLs in your prompts to ensure the AI uses current, accurate API information.
3

Iterate Incrementally

Start with simple examples, then ask the AI to extend them. For example:
  1. “Create a function to authenticate”
  2. “Now add a function to create an interaction”
  3. “Add error handling and retry logic”
4

Validate Generated Code

Always review and test AI-generated code. Check that it:
  • Uses correct endpoint URLs and parameters
  • Handles authentication properly
  • Follows Corti API patterns
  • Includes appropriate error handling

Common Use Cases

Ambient scribing flow

End-to-end: create an interaction, upload a recording, generate a transcript, and produce a document from a template — all via the official SDK.

Real-time dictation

Stream audio over the /transcribe or /streams WebSocket using the SDK’s managed connection and typed event stream — no manual reconnection logic.

Agent orchestration

Create an agent, attach custom experts and MCP servers, and send messages via /agents/{id}/v1/message:send. Prompt the AI to wire events and artifacts end-to-end.

Test harness

Generate mocked SDK responses, error scenarios, and integration tests so your Corti code is covered before it reaches production.

Step-by-Step Examples

Example 1: Generate Document from Transcript

Using Corti API documentation (https://docs.corti.ai/llms.txt),
create a function that:
1. Takes an interactionId and transcript text as input
2. Makes a POST request to /v2/interactions/{id}/documents
3. Passes a `context` array containing an object with `type: "transcript"` and `data.text` set to the transcript
4. Specifies a `templateKey` (e.g. "corti-brief-clinical-note" — list available templates via `/v2/templates`)
5. Sets outputLanguage to "en"
6. Returns the generated document sections
7. Handles the response structure with sections array
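As a sanity check for what the AI should produce, the request body from steps 3-5 can be sketched directly. The `data.text` nesting and the example `templateKey` come from the steps above; confirm real template keys via `/v2/templates`:

```python
def build_document_request(transcript_text: str,
                           template_key: str = "corti-brief-clinical-note") -> dict:
    """JSON body for POST /v2/interactions/{id}/documents."""
    return {
        "context": [
            {"type": "transcript", "data": {"text": transcript_text}},
        ],
        "templateKey": template_key,
        "outputLanguage": "en",
    }
```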

Example 2: Upload Recording and Create Transcript

Using Corti API documentation (https://docs.corti.ai/llms.txt),
create a workflow function that:
1. Uploads an audio file to /v2/interactions/{id}/recordings
2. Uses multipart/form-data with the audio file
3. Extracts recordingId from the response
4. Creates a transcript via POST /v2/interactions/{id}/transcripts
5. Polls the transcript status endpoint if processing is async
6. Retrieves the final transcript when status is "completed"
7. Handles the 25-second synchronous timeout scenario
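Steps 5-6 (polling) are the part AI tools most often get wrong, so a testable sketch helps. The terminal status values (`"completed"`/`"failed"`) are assumptions to verify against the transcript status endpoint; the status fetcher is injected so the loop stays independent of any HTTP library:

```python
import time
from typing import Callable

def poll_transcript(fetch_status: Callable[[], dict],
                    interval: float = 2.0, timeout: float = 120.0,
                    sleep: Callable[[float], None] = time.sleep) -> dict:
    """Poll until the transcript reaches a terminal status.

    `fetch_status` wraps GET on the transcript endpoint and
    returns its parsed JSON.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = fetch_status()
        status = result.get("status")
        if status == "completed":
            return result
        if status == "failed":
            raise RuntimeError("transcript processing failed")
        sleep(interval)
    raise TimeoutError("transcript not ready within timeout")
```

Injecting `sleep` also lets tests run the loop instantly with a no-op.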

Example 3: Extract Facts from Text

Using Corti API documentation (https://docs.corti.ai/llms.txt),
create a function for the /tools/extract-facts endpoint that:
1. Takes unstructured text as input
2. Makes a POST request with context type "text"
3. Parses the response to extract facts with their groups
4. Returns structured fact objects with id, text, group, source
5. Handles the stateless nature (no interaction required)
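A minimal sketch of the request and response handling this prompt describes. Only the `"text"` context type is stated above; the `data.text` nesting mirrors the documents endpoint and is an assumption, as are the exact fact fields:

```python
def build_fact_extraction_request(text: str) -> dict:
    """JSON body for POST /tools/extract-facts (stateless, no interaction)."""
    return {"context": [{"type": "text", "data": {"text": text}}]}

def group_facts(facts: list[dict]) -> dict[str, list[str]]:
    """Index extracted facts by their `group` field."""
    grouped: dict[str, list[str]] = {}
    for fact in facts:
        grouped.setdefault(fact["group"], []).append(fact["text"])
    return grouped
```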

Example 4: Create Agent with Custom Expert

Using Corti Agentic Framework documentation (https://docs.corti.ai/llms.txt),
create code that:
1. Creates an agent via POST /agents
2. Defines a custom expert with:
   - name and description
   - systemPrompt
   - mcpServers configuration (transportType, authorizationType, url)
3. Sends a message to the agent via POST /agents/{id}/v1/message:send
4. Handles the task response structure
5. Processes artifacts if returned

Resources

API Reference

Browse the complete API reference with interactive examples

JavaScript SDK

Reference implementation showing best practices

C# .NET SDK

Reference implementation for .NET applications

Agentic Quickstart

Step-by-step guide for building with the Agentic Framework

Next Steps

AI-generated code should always be reviewed and tested before use in production. While AI tools can accelerate development, human oversight ensures correctness, security, and compliance with healthcare regulations.