Modern AI coding tools like Claude Code, Cursor, Codex, and other LLM-powered assistants can dramatically accelerate your development with Corti APIs. This guide shows you how to leverage these tools effectively.
Documentation Index
Fetch the complete documentation index at: https://docs.corti.ai/llms.txt
Use this file to discover all available pages before exploring further.
Why Use AI Coding Tools with Corti?
Faster Integration
Generate working code examples from natural language descriptions of your use case
API Discovery
Quickly understand endpoint patterns, request/response structures, and authentication flows
Error Handling
Generate robust error handling and retry logic based on Corti’s error codes
Code Generation
Create SDK wrappers, test suites, and integration examples tailored to your stack
Corti API Documentation for LLMs
Corti provides machine-readable documentation specifically formatted for LLMs and AI coding tools:
llms.txt
Concise API reference optimized for LLM context windows. Perfect for quick lookups and code generation.
llms-full.txt
Comprehensive documentation including guides, examples, and detailed specifications. Use when you need full context.
Getting Started
Configure Your AI Tool
Most AI coding assistants can be configured to use external documentation sources. Here are common approaches:
Claude Code
Claude Code can fetch documentation on demand. Point it at the llms.txt files directly in your prompt, or add them to your project’s
CLAUDE.md so every session has the context:
Cursor
Cursor can index external documentation as a custom Doc source. Add
https://docs.corti.ai/llms.txt (or llms-full.txt) under Settings → Features → Docs, then reference it with @Docs in chat or Composer:
Codex
Codex (OpenAI’s coding agent, in ChatGPT or the Codex CLI) works best when you anchor it to the Corti docs at the start of a task. Paste the llms.txt URL into the initial message, or commit an
AGENTS.md file to your repo that points Codex at it:
Prompt the AI to use the official SDK
For JavaScript/TypeScript and C#/.NET, the official SDKs (
@corti/sdk, Corti.Sdk) are the recommended foundation — they handle client credentials, token refresh, WebSocket reconnection, pagination, retries, and typed errors. Tell your AI assistant to build on the SDK rather than re-implementing any of that.
Hand-rolling the REST API (no SDK available)
For languages without an official SDK (Python, Go, Ruby, etc.), prompt the AI to call the REST API directly. Use the accordions below as prompt templates — they describe the real shape of each endpoint.
Authenticate with OAuth 2.0 Client Credentials
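As a starting point for that prompt, here is a minimal Python sketch of the client-credentials flow. The token endpoint URL, client ID, and secret are placeholders that come from your Corti environment settings, and the fetch function is injected so the expiry-caching logic can be exercised without a network call:

```python
import time
from typing import Callable, Dict


class TokenCache:
    """Caches an OAuth 2.0 access token and refreshes it shortly before expiry."""

    def __init__(self, fetch_token: Callable[[], Dict], leeway: float = 30.0):
        # fetch_token must return a dict like {"access_token": ..., "expires_in": ...}
        self._fetch_token = fetch_token
        self._leeway = leeway  # refresh this many seconds before expiry
        self._token = None
        self._expires_at = 0.0

    def get(self) -> str:
        now = time.monotonic()
        if self._token is None or now >= self._expires_at - self._leeway:
            payload = self._fetch_token()
            self._token = payload["access_token"]
            self._expires_at = now + float(payload["expires_in"])
        return self._token


def fetch_token_client_credentials(token_url: str, client_id: str, client_secret: str) -> Dict:
    """One client-credentials POST. token_url is environment-specific (a placeholder here)."""
    import requests  # third-party; pip install requests

    resp = requests.post(
        token_url,
        data={
            "grant_type": "client_credentials",
            "client_id": client_id,
            "client_secret": client_secret,
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()
```

In real use you would construct the cache once, e.g. `cache = TokenCache(lambda: fetch_token_client_credentials(url, cid, secret))`, and call `cache.get()` before each request.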
Create an interaction
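A sketch of what the AI should produce for this step, kept request-building only so it is easy to unit test. The `/interactions` path and the payload fields are assumptions for illustration; have the assistant confirm the real shape against the Corti API reference in llms.txt:

```python
from typing import Dict, Tuple


def build_create_interaction_request(
    base_url: str, access_token: str, payload: Dict
) -> Tuple[str, Dict, Dict]:
    """Assemble the URL, headers, and JSON body for creating an interaction.

    The /interactions path and payload structure are illustrative assumptions;
    check the API reference for the actual endpoint shape.
    """
    url = f"{base_url.rstrip('/')}/interactions"
    headers = {
        "Authorization": f"Bearer {access_token}",
        "Content-Type": "application/json",
    }
    return url, headers, payload

# Sending it is then one call, e.g.:
#   requests.post(url, headers=headers, json=body, timeout=10)
```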
Connect to the real-time /streams WebSocket
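The part of a hand-rolled WebSocket client most worth getting right is reconnection. A sketch of capped exponential backoff with jitter, which the AI can wrap around whatever WebSocket library you choose (how authentication is passed on the socket is environment-specific and not shown):

```python
import random
from typing import Iterator


def backoff_delays(base: float = 0.5, cap: float = 30.0, jitter: float = 0.2) -> Iterator[float]:
    """Yield exponentially growing reconnect delays, capped, with +/- jitter."""
    delay = base
    while True:
        yield delay * (1 + random.uniform(-jitter, jitter))
        delay = min(delay * 2, cap)

# Reconnect loop skeleton (pseudocode in comments, library-agnostic):
#   for delay in backoff_delays():
#       try:
#           run_stream(ws_url)   # connect, authenticate, pump audio/events
#       except ConnectionError:
#           time.sleep(delay)    # then try again with the next, longer delay
```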
Map errors and retries
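Since Corti errors use RFC 9457 problem details, a good prompt target is a parser plus a retry decision. A sketch; the set of statuses treated as transient is a typical assumption, not a Corti guarantee:

```python
import json
from dataclasses import dataclass

# Assumption: typical transient HTTP statuses worth retrying with backoff.
RETRYABLE_STATUSES = {408, 429, 500, 502, 503, 504}


@dataclass
class ApiProblem:
    """Fields from an RFC 9457 application/problem+json body."""
    status: int
    title: str = ""
    detail: str = ""
    type: str = "about:blank"

    @property
    def retryable(self) -> bool:
        return self.status in RETRYABLE_STATUSES


def parse_problem(status: int, body: str) -> ApiProblem:
    """Parse a problem-details body, falling back gracefully if it is not JSON."""
    try:
        data = json.loads(body)
        if not isinstance(data, dict):
            data = {}
    except (ValueError, TypeError):
        data = {}
    return ApiProblem(
        status=int(data.get("status", status)),
        title=str(data.get("title", "")),
        detail=str(data.get("detail", "")),
        type=str(data.get("type", "about:blank")),
    )
```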
Refine and Test
Use AI assistants to:
- Generate unit tests for your integration (mock the SDK client or the HTTP layer)
- Create mock responses for development and offline work
- Document generated code with runnable examples
- Review error-handling coverage against real API responses (parse the RFC 9457 problem-details body at runtime)
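For the mocking points above, a small sketch of the pattern: keep the HTTP session as an injectable dependency, then substitute a `unittest.mock.Mock` in tests so no traffic leaves the machine. The endpoint path and the response fields are fabricated mock data for illustration:

```python
from unittest import mock


def get_interaction(session, base_url: str, interaction_id: str) -> dict:
    """Fetch one interaction. The /interactions/{id} path is an assumption."""
    resp = session.get(f"{base_url}/interactions/{interaction_id}", timeout=10)
    resp.raise_for_status()
    return resp.json()


# In a test, replace the session with a mock so no network call happens:
fake = mock.Mock()
fake.get.return_value.raise_for_status.return_value = None
fake.get.return_value.json.return_value = {"id": "abc", "status": "open"}  # fabricated
result = get_interaction(fake, "https://api.example", "abc")
```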
Best Practices
The official SDKs (@corti/sdk, Corti.Sdk) are the recommended foundation even when an AI assistant writes the code. Prompt tools to use the SDK instead of hand-rolling OAuth, WebSocket reconnection, or error mapping — the SDK already implements those paths correctly, and staying on them keeps upgrades easy.
Effective Prompting
Be Specific
Include details about:
- Your programming language and framework
- Specific endpoints you want to use
- Expected behavior and error handling
- Authentication requirements
Reference Documentation
Always mention the llms.txt or llms-full.txt URLs in your prompts to ensure the AI uses current, accurate API information.
Iterate Incrementally
Start with simple examples, then ask the AI to extend them. For example:
- “Create a function to authenticate”
- “Now add a function to create an interaction”
- “Add error handling and retry logic”
Common Use Cases
Ambient scribing flow
End-to-end: create an interaction, upload a recording, generate a transcript, and produce a document from a template — all via the official SDK.
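The flow above is a simple pipeline, which is worth telling the AI explicitly. A sketch with each step injected as a callable (the four signatures are hypothetical stand-ins for SDK or REST calls, not real SDK method names), so the sequencing can be tested with stubs:

```python
from typing import Callable, Dict


def ambient_scribe_flow(
    create_interaction: Callable[[], str],
    upload_recording: Callable[[str], str],
    generate_transcript: Callable[[str], str],
    generate_document: Callable[[str, str], Dict],
    template_id: str,
) -> Dict:
    """Chain the four steps; each callable wraps one SDK/REST call (hypothetical shapes)."""
    interaction_id = create_interaction()
    recording_id = upload_recording(interaction_id)
    transcript_id = generate_transcript(recording_id)
    return generate_document(transcript_id, template_id)
```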
Real-time dictation
Stream audio over the
/transcribe or /streams WebSocket using the SDK’s managed connection and typed event stream — no manual reconnection logic.
Agent orchestration
Create an agent, attach custom experts and MCP servers, and send messages via
/agents/{id}/v1/message:send. Prompt the AI to wire events and artifacts end-to-end.
Test harness
Generate mocked SDK responses, error scenarios, and integration tests so your Corti code is covered before it reaches production.
Step-by-Step Examples
Example 1: Generate Document from Transcript
Example 2: Upload Recording and Create Transcript
Example 3: Extract Facts from Text
Example 4: Create Agent with Custom Expert
Resources
API Reference
Browse the complete API reference with interactive examples
JavaScript SDK
Reference implementation showing best practices
C# .NET SDK
Reference implementation for .NET applications
Agentic Quickstart
Step-by-step guide for building with the Agentic Framework
Next Steps
- Explore the Agentic Framework for building AI agents
- Check out our examples for real-world patterns
- Review authentication best practices
- Join the Corti community for support
AI-generated code should always be reviewed and tested before use in production. While AI tools can accelerate development, human oversight ensures correctness, security, and compliance with healthcare regulations.