Modern AI coding tools like Cursor, GitHub Copilot, Claude Code, and other LLM-powered assistants can dramatically accelerate your development with Corti APIs. This guide shows you how to leverage these tools effectively.
AI coding assistants excel at generating boilerplate code, understanding API patterns, and helping you iterate quickly. Use them to scaffold integrations, generate SDK wrappers, and explore the API surface.

Why Use AI Coding Tools with Corti?

Faster Integration

Generate working code examples from natural language descriptions of your use case

API Discovery

Quickly understand endpoint patterns, request/response structures, and authentication flows

Error Handling

Generate robust error handling and retry logic based on Corti’s error codes

Code Generation

Create SDK wrappers, test suites, and integration examples tailored to your stack

Corti API Documentation for LLMs

Corti provides machine-readable documentation specifically formatted for LLMs and AI coding tools:
  • https://docs.corti.ai/llms.txt: a condensed index of the documentation
  • https://docs.corti.ai/llms-full.txt: the full documentation in a single file
These files are updated automatically and contain the complete Corti API documentation in a format optimized for AI tools. Reference them directly in your prompts or configure your AI coding assistant to use them as context.

Getting Started

1. Configure Your AI Tool

Most AI coding assistants can be configured to use external documentation sources. Here are common approaches:
Cursor can access web URLs directly. Reference the llms.txt files in your prompts:
Using the Corti API documentation at https://docs.corti.ai/llms.txt,
generate a Python function to create an interaction and upload a recording.
Copilot works best when you provide context in comments. Reference the documentation:
# Using Corti API (docs: https://docs.corti.ai/llms.txt)
# Create a function to authenticate and get an access token
def get_corti_access_token():
    ...
For chat-based assistants such as Claude Code, include the documentation URL in your system prompt or initial message:
I'm building with Corti API. Reference: https://docs.corti.ai/llms-full.txt
Generate a complete example for real-time transcription...
2. Start with Authentication

Ask your AI assistant to generate authentication code based on Corti’s OAuth 2.0 client credentials flow. Here’s a prompt you can use:
Using Corti API documentation (https://docs.corti.ai/llms.txt),
generate a [Python/JavaScript/TypeScript] function that:
1. Authenticates using OAuth 2.0 client credentials
2. Handles token refresh automatically when tokens expire (they expire after 300 seconds)
3. Returns a reusable client object with methods for API calls
4. Includes proper error handling for authentication failures
The AI should generate code that constructs the auth URL as https://auth.{environment}.corti.app/realms/{tenant-name}/protocol/openid-connect/token and includes the required form parameters: client_id, client_secret, grant_type: "client_credentials", and scope: "openid".
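
To sanity-check the generated output, compare it against a minimal hand-written sketch like the one below (Python, using the requests library). The token URL pattern, form parameters, and 300-second expiry come from the documentation above; details such as the 30-second refresh margin are illustrative choices.

```python
import time
import requests

class CortiAuth:
    """Fetches and caches an OAuth 2.0 client-credentials token.

    Minimal sketch: the token URL and form parameters follow the
    Corti docs above; error handling is reduced to the essentials.
    """

    def __init__(self, environment: str, tenant_name: str,
                 client_id: str, client_secret: str):
        self.token_url = (
            f"https://auth.{environment}.corti.app/realms/"
            f"{tenant_name}/protocol/openid-connect/token"
        )
        self.client_id = client_id
        self.client_secret = client_secret
        self._token = None
        self._expires_at = 0.0

    def get_token(self) -> str:
        # Refresh a little early; Corti tokens expire after 300 seconds.
        if self._token is None or time.time() > self._expires_at - 30:
            resp = requests.post(self.token_url, data={
                "client_id": self.client_id,
                "client_secret": self.client_secret,
                "grant_type": "client_credentials",
                "scope": "openid",
            })
            resp.raise_for_status()  # surfaces authentication failures
            payload = resp.json()
            self._token = payload["access_token"]
            self._expires_at = time.time() + payload.get("expires_in", 300)
        return self._token
```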
3. Create an Interaction

Once authentication is working, generate code to create an interaction:
Using Corti API documentation (https://docs.corti.ai/llms.txt),
create a function that:
1. Makes a POST request to /v2/interactions
2. Includes the Authorization header with Bearer token
3. Includes the Tenant-Name header
4. Creates an interaction with encounter details:
   - identifier (UUID)
   - status: "planned"
   - type: "first_consultation"
   - period with startedAt timestamp
5. Returns the interactionId and websocketUrl from the response
6. Handles errors according to Corti error codes
The response will include an interactionId (UUID) and a websocketUrl that you’ll need for streaming endpoints.
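
For comparison, a hand-written version might look like the sketch below. Note that nesting the field list under an encounter object is an assumption based on the prompt; verify the exact request schema against the llms.txt documentation.

```python
import uuid
from datetime import datetime, timezone

import requests

def create_interaction(base_url: str, token: str, tenant_name: str) -> dict:
    """Create an interaction and return its id and websocket URL.

    Sketch only: the nesting of the encounter fields is an assumption;
    confirm the exact schema against the Corti API documentation.
    """
    response = requests.post(
        f"{base_url}/v2/interactions",
        headers={
            "Authorization": f"Bearer {token}",
            "Tenant-Name": tenant_name,
        },
        json={
            "encounter": {
                "identifier": str(uuid.uuid4()),
                "status": "planned",
                "type": "first_consultation",
                "period": {
                    "startedAt": datetime.now(timezone.utc).isoformat(),
                },
            }
        },
    )
    response.raise_for_status()
    body = response.json()
    return {
        "interactionId": body["interactionId"],
        "websocketUrl": body["websocketUrl"],
    }
```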
4. Connect to WebSocket Endpoints

For real-time features, generate WebSocket client code:
Using Corti API documentation (https://docs.corti.ai/llms.txt),
create a WebSocket client for the /stream endpoint that:
1. Connects to the websocketUrl from the interaction response
2. Appends the access token as a query parameter: &token=Bearer {token}
3. Sends a configuration message with:
   - transcription settings (primaryLanguage, isDiarization, participants)
   - mode settings (type: "facts" or "transcription", outputLocale)
4. Waits for CONFIG_ACCEPTED before sending audio
5. Handles incoming messages (transcript, facts, error types)
6. Implements reconnection logic for dropped connections
7. Properly closes the connection with an "end" message
For the /transcribe endpoint, the configuration is simpler: just include primaryLanguage and, optionally, spokenPunctuation or commands.
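
A rough Python sketch of a /stream client, using the websockets library and omitting reconnection logic for brevity, is shown below. The configuration envelope and message field names are assumptions modelled on the prompt above; only the token query parameter and the CONFIG_ACCEPTED/end handshake come from the documentation.

```python
import asyncio
import json
from urllib.parse import quote

import websockets  # pip install websockets

async def stream_session(websocket_url: str, token: str, audio_chunks):
    """Sketch of a /stream client; message field names are assumptions."""
    # The docs append the token as a query parameter (&token=Bearer {token});
    # the space must be URL-encoded.
    url = f"{websocket_url}&token={quote(f'Bearer {token}')}"
    async with websockets.connect(url) as ws:
        # 1. Send the configuration message first.
        await ws.send(json.dumps({
            "type": "config",  # assumed envelope field
            "configuration": {
                "transcription": {
                    "primaryLanguage": "en",
                    "isDiarization": False,
                    "participants": [{"channel": 0, "role": "doctor"}],
                },
                "mode": {"type": "facts", "outputLocale": "en"},
            },
        }))
        # 2. Wait for CONFIG_ACCEPTED before sending any audio.
        ack = json.loads(await ws.recv())
        if ack.get("type") != "CONFIG_ACCEPTED":
            raise RuntimeError(f"Configuration rejected: {ack}")

        # 3. Receive transcript/facts/error events while streaming audio.
        async def receive():
            async for message in ws:
                event = json.loads(message)
                print(event.get("type"), event)

        receiver = asyncio.create_task(receive())
        for chunk in audio_chunks:
            await ws.send(chunk)  # binary audio frames
        await ws.send(json.dumps({"type": "end"}))  # close the session cleanly
        await receiver  # drains events until the server closes the socket
```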
5. Add Error Handling

Generate robust error handling based on Corti’s error codes:
Using Corti API documentation (https://docs.corti.ai/llms.txt),
add comprehensive error handling that:
1. Maps HTTP status codes to Corti error codes (A0001-A0022)
2. Handles 403 errors (A0001, A0004, A0005) with clear messages
3. Handles 404 errors (A0007, A0009, A0011, A0012) appropriately
4. Implements retry logic for 500 errors (A0010) with exponential backoff
5. Handles 429 errors (A0021) with rate limiting
6. Validates request parameters before sending (A0003, A0006, A0008)
7. Provides user-friendly error messages with links to documentation
Reference the complete error code list in your prompt to ensure all error scenarios are covered.
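
As a starting point, a minimal retry wrapper might look like the sketch below. The retryable status codes follow the prompt above, while the error-body shape (a JSON code field) is an assumption to adjust once you have checked the documented error format.

```python
import time

import requests

# Retryable statuses from the prompt above: 500 (A0010) and 429 (A0021).
RETRYABLE = {500, 429}

def corti_request(method: str, url: str, max_retries: int = 3, **kwargs):
    """Issue a request with retries for transient Corti errors.

    Sketch only: the error-body shape ({"code": "A00xx", ...}) is an
    assumption; adapt it to the documented error format.
    """
    delay = 1.0
    for attempt in range(max_retries + 1):
        response = requests.request(method, url, **kwargs)
        if response.status_code not in RETRYABLE or attempt == max_retries:
            break
        if response.status_code == 429:
            # Honour Retry-After when the server provides it.
            delay = float(response.headers.get("Retry-After", delay))
        time.sleep(delay)
        delay *= 2  # exponential backoff
    if response.status_code >= 400:
        try:
            code = response.json().get("code", "unknown")  # assumed shape
        except ValueError:
            code = "unknown"
        raise RuntimeError(
            f"Corti API error {response.status_code} ({code}); "
            "see the error code reference in the documentation"
        )
    return response
```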
6. Refine and Test

Use AI assistants to:
  • Add error handling based on Corti error codes
  • Generate unit tests for your integration
  • Create mock responses for development
  • Document your code with examples

Best Practices

Effective Prompting

1. Be Specific

Include details about:
  • Your programming language and framework
  • Specific endpoints you want to use
  • Expected behavior and error handling
  • Authentication requirements
2. Reference Documentation

Always mention the llms.txt or llms-full.txt URLs in your prompts to ensure the AI uses current, accurate API information.
3. Iterate Incrementally

Start with simple examples, then ask the AI to extend them. For example:
  1. “Create a function to authenticate”
  2. “Now add a function to create an interaction”
  3. “Add error handling and retry logic”
4. Validate Generated Code

Always review and test AI-generated code. Check that it:
  • Uses correct endpoint URLs and parameters
  • Handles authentication properly
  • Follows Corti API patterns
  • Includes appropriate error handling

Common Use Cases

SDK Wrappers

Generate language-specific SDK wrappers around Corti REST APIs. Ask your AI assistant to create classes and methods that match your preferred patterns.

WebSocket Clients

Create WebSocket clients for /transcribe and /stream endpoints with proper connection management, reconnection logic, and message handling.

Integration Examples

Generate complete integration examples for common workflows like ambient documentation, dictation, or agentic automation.

Test Suites

Create comprehensive test suites with mocked API responses, error scenarios, and integration tests.

Step-by-Step Examples

Example 1: Generate Document from Transcript

Using Corti API documentation (https://docs.corti.ai/llms.txt),
create a function that:
1. Takes an interactionId and transcript text as input
2. Makes a POST request to /v2/interactions/{id}/documents
3. Uses context type "transcript" with the transcript text
4. Specifies templateKey: "corti-soap"
5. Sets outputLanguage to "en"
6. Returns the generated document sections
7. Handles the response structure with sections array
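
A hand-written sketch of this function might look as follows; the shape of the context array is an assumption, while the templateKey and outputLanguage values come from the prompt above.

```python
import requests

def generate_document(base_url: str, token: str, tenant_name: str,
                      interaction_id: str, transcript_text: str) -> list:
    """Generate a SOAP document from a transcript; returns its sections.

    Sketch: the context payload shape mirrors the prompt above but is
    an assumption; verify it against the documentation.
    """
    response = requests.post(
        f"{base_url}/v2/interactions/{interaction_id}/documents",
        headers={
            "Authorization": f"Bearer {token}",
            "Tenant-Name": tenant_name,
        },
        json={
            "context": [{"type": "transcript", "data": transcript_text}],
            "templateKey": "corti-soap",
            "outputLanguage": "en",
        },
    )
    response.raise_for_status()
    return response.json()["sections"]
```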

Example 2: Upload Recording and Create Transcript

Using Corti API documentation (https://docs.corti.ai/llms.txt),
create a workflow function that:
1. Uploads an audio file to /v2/interactions/{id}/recordings
2. Uses multipart/form-data with the audio file
3. Extracts recordingId from the response
4. Creates a transcript via POST /v2/interactions/{id}/transcripts
5. Polls the transcript status endpoint if processing is async
6. Retrieves the final transcript when status is "completed"
7. Handles the 25-second synchronous timeout scenario
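
The sketch below strings the workflow together. The multipart field name, the transcript request body, and the polling shape are assumptions to verify against the documentation; the 25-second synchronous window is taken from the prompt above.

```python
import time

import requests

def transcribe_recording(base_url: str, token: str, tenant_name: str,
                         interaction_id: str, audio_path: str) -> dict:
    """Upload a recording, request a transcript, and poll until done.

    Sketch: the form field name, transcript request body, and status
    polling shape are assumptions to verify against the docs.
    """
    headers = {"Authorization": f"Bearer {token}", "Tenant-Name": tenant_name}
    base = f"{base_url}/v2/interactions/{interaction_id}"

    # 1. Upload the audio file as multipart/form-data.
    with open(audio_path, "rb") as audio:
        upload = requests.post(f"{base}/recordings", headers=headers,
                               files={"file": audio})
    upload.raise_for_status()
    recording_id = upload.json()["recordingId"]

    # 2. Request a transcript for that recording.
    created = requests.post(f"{base}/transcripts", headers=headers,
                            json={"recordingId": recording_id})
    created.raise_for_status()
    transcript = created.json()

    # 3. If processing exceeds the 25-second synchronous window, poll
    #    the transcript until its status is "completed".
    while transcript.get("status") not in (None, "completed"):
        time.sleep(2)
        poll = requests.get(f"{base}/transcripts/{transcript['id']}",
                            headers=headers)
        poll.raise_for_status()
        transcript = poll.json()
    return transcript
```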

Example 3: Extract Facts from Text

Using Corti API documentation (https://docs.corti.ai/llms.txt),
create a function for the /tools/extract-facts endpoint that:
1. Takes unstructured text as input
2. Makes a POST request with context type "text"
3. Parses the response to extract facts with their groups
4. Returns structured fact objects with id, text, group, source
5. Handles the stateless nature (no interaction required)
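
A minimal sketch, assuming a request body analogous to the document-generation example above, might look like this:

```python
import requests

def extract_facts(base_url: str, token: str, tenant_name: str,
                  text: str) -> list:
    """Extract structured facts from free text via /tools/extract-facts.

    Stateless: no interaction is required. The request body shape is
    an assumption modelled on the prompt above.
    """
    response = requests.post(
        f"{base_url}/tools/extract-facts",
        headers={
            "Authorization": f"Bearer {token}",
            "Tenant-Name": tenant_name,
        },
        json={"context": [{"type": "text", "data": text}]},
    )
    response.raise_for_status()
    # Each fact is expected to carry id, text, group, and source fields.
    return [
        {"id": f["id"], "text": f["text"],
         "group": f["group"], "source": f["source"]}
        for f in response.json()["facts"]
    ]
```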

Example 4: Create Agent with Custom Expert

Using Corti Agentic Framework documentation (https://docs.corti.ai/llms.txt),
create code that:
1. Creates an agent via POST /agents
2. Defines a custom expert with:
   - name and description
   - systemPrompt
   - mcpServers configuration (transportType, authorizationType, url)
3. Sends a message to the agent via POST /agents/{id}/v1/message:send
4. Handles the task response structure
5. Processes artifacts if returned
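
The sketch below chains the two calls. The expert definition mirrors the prompt's field list, but the message envelope, the response fields, and the MCP server values are assumptions and placeholders; check them against the Agentic Framework documentation.

```python
import requests

def run_agent_message(base_url: str, token: str, tenant_name: str,
                      prompt: str) -> dict:
    """Create an agent with a custom expert and send it a message.

    Sketch: the expert and message payload shapes follow the prompt
    above but are assumptions; the MCP server URL is a placeholder.
    """
    headers = {"Authorization": f"Bearer {token}", "Tenant-Name": tenant_name}

    # 1. Create the agent with one custom expert.
    agent = requests.post(f"{base_url}/agents", headers=headers, json={
        "experts": [{
            "name": "scheduling-expert",           # hypothetical expert
            "description": "Handles scheduling questions",
            "systemPrompt": "You help clinicians manage their schedule.",
            "mcpServers": [{
                "transportType": "http",            # assumed values
                "authorizationType": "none",
                "url": "https://example.com/mcp",   # placeholder
            }],
        }],
    })
    agent.raise_for_status()
    agent_id = agent.json()["id"]  # assumed response field

    # 2. Send a message and inspect the resulting task.
    task = requests.post(
        f"{base_url}/agents/{agent_id}/v1/message:send",
        headers=headers,
        json={"message": {"role": "user", "parts": [{"text": prompt}]}},
    )
    task.raise_for_status()
    result = task.json()
    for artifact in result.get("artifacts", []):  # process any artifacts
        print(artifact)
    return result
```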

AI-generated code should always be reviewed and tested before use in production. While AI tools can accelerate development, human oversight ensures correctness, security, and compliance with healthcare regulations.