Common questions and answers to help you get the most out of the Corti Agentic Framework and the underlying A2A-based APIs.
The Orchestrator is the central coordinator of the Corti Agentic Framework. It receives user requests, reasons about what needs to be done, and delegates work to specialized Experts. The Orchestrator doesn’t perform specialized work itself—instead, it plans, selects appropriate Experts, and coordinates their activities to accomplish complex workflows.

An Expert is a specialized sub-agent that performs domain-specific tasks. Experts are designed to complete small, discrete tasks efficiently, such as clinical reference lookups, medical coding, or document generation. The Orchestrator composes complex workflows by chaining multiple Experts together.

In summary: the Orchestrator coordinates and delegates; Experts execute specialized work.

For more details, see Orchestrator and Experts.
A2A (Agent-to-Agent) is the protocol used for accessing the Corti API and for communication between agents. It’s the standard protocol that your application uses to interact with Corti agents, send messages, receive tasks, and manage the agent lifecycle. A2A enables secure, framework-agnostic communication between autonomous AI agents.

MCP (Model Context Protocol) is the way to connect additional Experts. When you create custom Experts by exposing an MCP server, Corti wraps it in a custom LLM agent. MCP handles agent-to-tool interactions, allowing Experts to interact with external systems and resources.

In the Corti Agentic Framework: A2A handles agent-to-agent communication (including your API calls to Corti), while MCP handles agent-to-tool interactions for Expert integrations.

For more information, see A2A Protocol and MCP Protocol.
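As a concrete illustration, an A2A call is a JSON-RPC 2.0 request. The sketch below builds a minimal `message/send` envelope; the field names follow the public A2A specification, while the request and message IDs are invented placeholders (your A2A client library would normally assemble and transmit this for you).

```python
import json

# Hypothetical A2A "message/send" request envelope (JSON-RPC 2.0).
# IDs here are illustrative placeholders, not real Corti values.
request = {
    "jsonrpc": "2.0",
    "id": "req-1",                      # client-chosen request id
    "method": "message/send",
    "params": {
        "message": {
            "role": "user",
            "messageId": "msg-001",     # client-chosen message id
            "parts": [
                {"kind": "text", "text": "Summarize the encounter."},
            ],
        }
    },
}

# Serialize for transport; a real client would POST this to the agent's URL.
payload = json.dumps(request)
```

The response to such a call is itself a JSON-RPC result whose payload is either a Task or a Message, as described below.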
The Corti agent typically returns Tasks rather than Messages. A Task represents a stateful unit of work with a unique ID and defined lifecycle, which is ideal for most operations in the Corti Agentic Framework.

Tasks are used for:
  • Long-running operations (for example, generating a full clinical document)
  • Multi-step workflows that coordinate multiple Experts
  • Operations that may need to wait on downstream systems
  • Any work that benefits from tracking and monitoring
Messages (with immediate responses) are less common and typically used only for very quick operations like simple classifications or completions that can be resolved immediately without any asynchronous processing.

For more details, see Core Concepts.
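A client therefore needs to handle both result shapes. The sketch below (not official SDK code) distinguishes the two using the `kind` discriminator that A2A result objects carry; the task ID and state are invented example values.

```python
# Illustrative helper: branch on the A2A "kind" discriminator to tell a
# stateful Task result apart from an immediate Message reply.
def classify_result(result: dict) -> str:
    kind = result.get("kind")
    if kind == "task":
        # Tasks carry an id and a lifecycle state you can poll or stream.
        return f"task {result['id']} is {result['status']['state']}"
    if kind == "message":
        # Messages resolve immediately; there is nothing to track.
        return "immediate message reply"
    return "unknown result"

# Example Task payload with invented values.
task_result = {"kind": "task", "id": "t-42", "status": {"state": "working"}}
print(classify_result(task_result))  # task t-42 is working
```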
Use TextPart for messages that will be directly exposed to the Orchestrator and the LLM. TextPart content is immediately available for reasoning and response generation.

Use DataPart for structured JSON data that will be stored in memory first and accessed through more indirect manipulation. DataPart content is automatically indexed and stored in the context’s memory, enabling semantic retrieval when needed. DataPart is JSON-only and is useful for structured data like patient records, clinical facts, workflow parameters, or EHR identifiers.

You can combine both in a single message: use TextPart for the primary instruction or question, and DataPart to provide structured context that will be semantically retrieved when relevant.

For more details, see Core Concepts and Context & Memory.
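The combined pattern can be sketched as follows. Part shapes follow the A2A specification (`kind: "text"` vs `kind: "data"`); the patient fields and IDs are invented examples, not a real Corti schema.

```python
# Sketch: one message mixing a TextPart (seen directly by the Orchestrator's
# LLM) with a DataPart (indexed into context memory for semantic retrieval).
message = {
    "role": "user",
    "messageId": "msg-002",            # illustrative id
    "parts": [
        {"kind": "text", "text": "Draft a SOAP note for this visit."},
        {
            "kind": "data",
            "data": {                   # invented example payload
                "patientId": "example-123",
                "encounterType": "outpatient",
            },
        },
    ],
}

# Splitting the parts back out mirrors how the two are treated differently.
text_parts = [p for p in message["parts"] if p["kind"] == "text"]
data_parts = [p for p in message["parts"] if p["kind"] == "data"]
```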
Both Message and Artifact use the same underlying Part primitives, but they serve different roles:
  • Message (with role: "agent")
    • Represents a single turn of communication from the agent to the client.
    • Best for ephemeral conversational output, intermediate reasoning, clarifications, or status updates.
    • Typically tied to a particular task step but not necessarily considered a durable business deliverable.
  • Artifact
    • Represents a tangible, durable output of a task (for example, a SOAP note, coding suggestions, a structured fact bundle, or a generated document).
    • Has its own artifactId, name/metadata, and lifecycle; can be streamed, versioned, and reused by later tasks.
    • Is what downstream systems, UIs, or audits usually consume as the final result.
A useful mental model is: Messages are how agents “talk”; Artifacts are what they “produce”. You might see several agent messages during a task (status, intermediate commentary), but only a small number of artifacts that represent the completed work.
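That mental model can be made concrete. In the sketch below, a completed Task (shaped per the A2A Task object, with invented content) carries ephemeral agent messages in its history and a single durable artifact; downstream code consumes only the artifacts.

```python
# Illustrative completed Task: agent "talk" lives in history, the durable
# deliverable lives in artifacts. All content values are invented examples.
task = {
    "kind": "task",
    "id": "t-7",
    "status": {"state": "completed"},
    "history": [
        # Ephemeral status chatter from the agent during the run.
        {"role": "agent", "parts": [{"kind": "text", "text": "Working on it..."}]},
    ],
    "artifacts": [
        {
            "artifactId": "a-1",
            "name": "soap-note",
            "parts": [{"kind": "text", "text": "S: ... O: ... A: ... P: ..."}],
        },
    ],
}

# Downstream systems typically consume only the named artifacts.
deliverables = [a["name"] for a in task.get("artifacts", [])]
```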
The Corti Agentic Framework provides automatic memory management through contexts. The contextId is always created on the server—send your first message without a contextId, and the server will return one in the response. Include that contextId in subsequent messages to maintain conversation history automatically.

You can also pass additional context in each request using DataPart objects to include structured data, summaries, or other specific context alongside the automatic memory.

For comprehensive guidance on context and memory management, see Context & Memory.
No, you cannot share data between different contexts. Contexts provide strict data isolation—data can never leak across contexts. Each contextId creates a completely isolated conversation scope where messages, tasks, artifacts, and any data within one context are completely inaccessible to agents working in a different context.

This isolation ensures:
  • Privacy and security: Patient data from one encounter cannot accidentally be exposed to another encounter
  • Data integrity: Information from different workflows remains properly separated
  • Compliance: You can confidently scope sensitive data to specific contexts without risk of cross-contamination
If you need to share information across contexts, you must explicitly pass it via DataPart objects in your messages—there is no automatic data sharing between contexts.

For more details, see Context & Memory.
The current time-to-live (TTL) for context memory is 30 days. After this period, the context and its associated memory are automatically cleaned up.

For more information about context lifecycle and memory management, see Context & Memory.
The Orchestrator analyzes incoming requests and uses reasoning to determine which Expert(s) are needed to fulfill the task. It considers the nature of the request, the available Experts, and their capabilities.

You can control Expert selection by writing additional system prompts, both in the Orchestrator configuration and in individual Expert configurations. System prompts guide how the Orchestrator reasons about task decomposition and Expert selection, and how Experts interpret and execute their assigned work.

The Orchestrator can compose multiple Experts together, calling them in sequence or parallel as needed to accomplish complex workflows. For more information, see Orchestrator and Experts.
Please contact us if you need more information about the Corti Agentic Framework.