The Window API provides a Promise-based, TypeScript-friendly API via window.CortiEmbedded for direct integration with Corti Assistant. It is the recommended method for same-origin integrations.
Overview
The Window API offers a Promise-based, TypeScript-friendly interface for integrating Corti Assistant into your application. It provides direct access to window.CortiEmbedded.v1, making it feel like a traditional JavaScript SDK.
Requirements
Implementation Requirements
To use the Window API, you’ll need to implement a WebView or similar browser component within your native application. The embedded Corti Assistant runs as a web application and requires a modern browser environment to function properly.
Minimum Requirements
Modern WebView: Use one of the following modern WebView implementations:
WebView2 (Windows) - Recommended for Windows applications
WKWebView (iOS/macOS) - Recommended for Apple platforms
WebView (Android) - Use the latest Chromium-based WebView
Electron WebView - For Electron-based applications
Browser Compatibility: The WebView must support:
ES6+ JavaScript features
Modern Web APIs (WebRTC, MediaDevices API)
PostMessage API
Local Storage and Session Storage
Microphone Permissions: Your application must request and handle microphone permissions:
Request microphone access before initializing the embedded Assistant
Handle permission denial gracefully
Provide clear messaging to users about why microphone access is needed
Ensure permissions are granted at the OS level (not just browser level)
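Before loading the embedded Assistant, the host page can trigger the browser-level permission prompt itself. The helper below is a generic sketch, not part of the Corti API; the injectable `mediaDevices` parameter is an assumption added purely so the function can run outside a browser.

```typescript
// Minimal sketch: trigger the microphone permission prompt up front.
// Structural type so the helper works without DOM typings.
type MinimalMediaDevices = {
  getUserMedia(constraints: { audio: boolean }): Promise<{
    getTracks(): Array<{ stop(): void }>;
  }>;
};

async function ensureMicrophoneAccess(
  mediaDevices: MinimalMediaDevices = (globalThis as any).navigator?.mediaDevices
): Promise<boolean> {
  try {
    const stream = await mediaDevices.getUserMedia({ audio: true });
    // Release the track right away; we only wanted the permission grant.
    stream.getTracks().forEach((track) => track.stop());
    return true;
  } catch {
    return false;
  }
}
```

Calling this early gives you a clear point to show your own messaging when the user denies access, instead of letting the embedded app fail mid-session.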
Windows (WebView2)
Ensure WebView2 Runtime is installed or bundled with your application
Request microphone permissions in your application manifest
Handle permission prompts appropriately
iOS/macOS (WKWebView)
Add NSMicrophoneUsageDescription to your Info.plist
Request microphone permissions using AVAudioSession or similar APIs
Ensure permissions are granted before loading the embedded Assistant
Android (WebView)
Request RECORD_AUDIO permission in your AndroidManifest.xml
Request runtime permissions using ActivityCompat.requestPermissions()
Handle permission callbacks appropriately
Recommendations
Use TypeScript for better type safety and developer experience
Implement proper error handling for all API calls
Handle token refresh to maintain user sessions
Request microphone permissions early in your application flow
Test on target platforms to ensure WebView compatibility
Quick Start
Step 1: Set Up Authentication
The Embedded Assistant API only supports user-based authentication. You must authenticate as an end user, not as an application. Client credentials and other machine-to-machine authentication methods are not supported.
Before you can use the Window API, you need to authenticate your users using OAuth2. The recommended flow is Authorization Code Flow with PKCE for secure, user-facing integrations.
For detailed information on OAuth2 flows and authentication, see our OAuth Authentication Guide.
Key points:
Use Authorization Code Flow with PKCE for embedded integrations
Obtain access_token and refresh_token for your users
Handle token refresh to maintain sessions
Never expose client secrets in client-side code
Step 2: Wait for the Embedded App to Be Ready
The embedded Corti Assistant will send a ready event when it’s loaded and ready to receive API calls:
```javascript
window.addEventListener("message", async (event) => {
  if (
    event.data?.type === "CORTI_EMBEDDED_EVENT" &&
    event.data.event === "ready"
  ) {
    // The API is now available
    const api = window.CortiEmbedded.v1;
    console.log("Corti Assistant is ready");
  }
});
```
Step 3: Authenticate the User
Once the API is ready, authenticate the user with their OAuth2 tokens:
```javascript
const api = window.CortiEmbedded.v1;

const user = await api.auth({
  mode: "stateful",
  access_token: "your-access-token", // From OAuth2 flow
  refresh_token: "your-refresh-token", // From OAuth2 flow
});

console.log("Authenticated user:", user);
```
After authentication, you can configure the interface and start using the Assistant:
```javascript
// Configure the interface
const config = await api.configure({
  features: {
    interactionTitle: false,
    aiChat: false,
    navigation: true,
  },
  appearance: {
    primaryColor: "#00a6ff",
  },
  locale: {
    interfaceLanguage: "en",
    dictationLanguage: "en",
  },
});

// Create an interaction
const interaction = await api.createInteraction({
  assignedUserId: null,
  encounter: {
    identifier: `encounter-${Date.now()}`,
    status: "planned",
    type: "first_consultation",
    period: {
      startedAt: new Date().toISOString(),
    },
    title: "Initial Consultation",
  },
});

// Navigate to the interaction
await api.navigate({
  path: `/session/${interaction.id}`,
});
```
API Structure
The API is available at window.CortiEmbedded.v1 and provides the following methods:
```typescript
window.CortiEmbedded.v1 = {
  auth: (payload) => Promise<User>,
  configure: (payload) => Promise<Configuration>,
  createInteraction: (payload) => Promise<Interaction>,
  addFacts: (payload) => Promise<void>,
  configureSession: (payload) => Promise<void>,
  navigate: (payload) => Promise<void>,
  setCredentials: (payload) => Promise<void>,
  startRecording: () => Promise<void>,
  stopRecording: () => Promise<void>,
  getStatus: () => Promise<Status>,
};
```
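The recording and status methods from the list above can be driven directly from the host app. This sketch assumes the Assistant is ready and the user is already authenticated; the shape of `Status` is not documented here, so the result is only logged. `window` is resolved via `globalThis` so the sketch also runs outside a browser.

```typescript
// Sketch: start and stop recording from the host, then inspect status.
async function recordAndCheckStatus(): Promise<void> {
  const api = ((globalThis as any).window ?? globalThis).CortiEmbedded.v1;
  await api.startRecording();
  // ... the clinician speaks; the Assistant captures audio ...
  await api.stopRecording();
  const status = await api.getStatus();
  console.log("Assistant status:", status);
}
```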
Complete Integration Example
Here’s a complete example showing the recommended integration flow:
Example Window API Integration
```javascript
let api = null;
let isReady = false;

// Wait for the embedded app to be ready
window.addEventListener("message", async (event) => {
  if (
    event.data?.type === "CORTI_EMBEDDED_EVENT" &&
    event.data.event === "ready"
  ) {
    api = window.CortiEmbedded.v1;
    isReady = true;
    try {
      await startIntegrationFlow();
    } catch (error) {
      console.error("Integration flow failed:", error);
    }
  }
});

async function startIntegrationFlow() {
  try {
    // 1. Authenticate (requires OAuth2 tokens)
    const user = await api.auth({
      mode: "stateful",
      access_token: "your-access-token", // From OAuth2 flow
      refresh_token: "your-refresh-token", // From OAuth2 flow
    });
    console.log("Authenticated user:", user);

    // 2. Configure interface
    const config = await api.configure({
      features: {
        interactionTitle: false,
        aiChat: false,
        documentFeedback: false,
        navigation: true,
        virtualMode: true,
        syncDocumentAction: false,
      },
      appearance: {
        primaryColor: "#00a6ff",
      },
      locale: {
        interfaceLanguage: "en",
        dictationLanguage: "en",
      },
    });
    console.log("Configuration applied:", config);

    // 3. Configure session
    await api.configureSession({
      defaultLanguage: "en",
      defaultOutputLanguage: "en",
      defaultTemplateKey: "corti-soap",
      defaultMode: "virtual",
    });

    // 4. Create interaction
    const interaction = await api.createInteraction({
      assignedUserId: null,
      encounter: {
        identifier: `encounter-${Date.now()}`,
        status: "planned",
        type: "first_consultation",
        period: {
          startedAt: new Date().toISOString(),
        },
        title: "Initial Consultation",
      },
    });
    console.log("Interaction created:", interaction);

    // 5. Add relevant facts
    await api.addFacts({
      facts: [
        { text: "Chest pain", group: "other" },
        { text: "Shortness of breath", group: "other" },
        { text: "Fatigue", group: "other" },
      ],
    });

    // 6. Navigate to interaction UI
    await api.navigate({
      path: `/session/${interaction.id}`,
    });

    console.log("Integration flow completed successfully");
  } catch (error) {
    console.error("Integration flow failed:", error);
    throw error;
  }
}

// Listen for events
window.addEventListener("message", (event) => {
  if (event.data?.type === "CORTI_EMBEDDED_EVENT") {
    switch (event.data.event) {
      case "documentGenerated":
        console.log("Document generated:", event.data.payload.document);
        break;
      case "recordingStarted":
        console.log("Recording started");
        break;
      case "recordingStopped":
        console.log("Recording stopped");
        break;
      // ... handle other events
    }
  }
});
```
Error Handling
All API methods return Promises and can throw errors. Always wrap calls in try-catch blocks:
```javascript
try {
  const api = window.CortiEmbedded.v1;
  const user = await api.auth({
    mode: "stateful",
    access_token: "your-access-token",
    refresh_token: "your-refresh-token",
  });
  console.log("Authentication successful:", user);
} catch (error) {
  console.error("Authentication failed:", error.message);
  // Handle authentication failure
}
```
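A common failure is an expired access token. Since the API exposes setCredentials, the host can refresh tokens itself and hand the new ones to the Assistant. In this sketch the token endpoint URL, client_id, and the setCredentials payload shape are all assumptions; substitute the values from your OAuth2 provider and the Corti API reference. `window` is resolved via `globalThis` so the sketch also runs outside a browser.

```typescript
// Sketch: refresh expired tokens and forward them to the embedded Assistant.
async function refreshSession(refreshToken: string): Promise<void> {
  // Hypothetical token endpoint; replace with your provider's URL.
  const response = await fetch("https://auth.example.com/oauth2/token", {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      grant_type: "refresh_token",
      refresh_token: refreshToken,
      client_id: "your-client-id", // public client id, never a secret
    }),
  });
  if (!response.ok) {
    throw new Error(`Token refresh failed with status ${response.status}`);
  }
  const tokens = await response.json();

  // Assumed payload shape; check the Corti API reference.
  const win: any = (globalThis as any).window ?? globalThis;
  await win.CortiEmbedded.v1.setCredentials({
    access_token: tokens.access_token,
    refresh_token: tokens.refresh_token,
  });
}
```

Call this from your error handler when auth errors indicate expiry, then retry the failed operation once.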
TypeScript Support
If you’re using TypeScript, you can extend the Window interface to get type safety:
```typescript
interface CortiEmbeddedAPI {
  auth: (payload: AuthPayload) => Promise<User>;
  configure: (payload: ConfigurePayload) => Promise<Configuration>;
  createInteraction: (payload: CreateInteractionPayload) => Promise<Interaction>;
  addFacts: (payload: AddFactsPayload) => Promise<void>;
  configureSession: (payload: ConfigureSessionPayload) => Promise<void>;
  navigate: (payload: NavigatePayload) => Promise<void>;
  setCredentials: (payload: SetCredentialsPayload) => Promise<void>;
  startRecording: () => Promise<void>;
  stopRecording: () => Promise<void>;
  getStatus: () => Promise<Status>;
}

interface Window {
  CortiEmbedded: {
    v1: CortiEmbeddedAPI;
  };
}
```
Helper Function
You can create a helper function to ensure the API is ready:
```javascript
function waitForCortiAPI() {
  return new Promise((resolve) => {
    if (window.CortiEmbedded?.v1) {
      resolve(window.CortiEmbedded.v1);
      return;
    }
    const listener = (event) => {
      if (
        event.data?.type === "CORTI_EMBEDDED_EVENT" &&
        event.data.event === "ready"
      ) {
        window.removeEventListener("message", listener);
        resolve(window.CortiEmbedded.v1);
      }
    };
    window.addEventListener("message", listener);
  });
}

// Usage
async function useAPI() {
  const api = await waitForCortiAPI();
  const user = await api.auth({
    mode: "stateful",
    access_token: "your-access-token",
    refresh_token: "your-refresh-token",
  });
}
```
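One caveat: a helper like waitForCortiAPI waits indefinitely if the embedded app never loads. A generic timeout wrapper (not Corti-specific, and the 15 s figure is an arbitrary choice) lets the host surface load failures as errors instead:

```typescript
// Generic sketch: reject if a promise does not settle within `ms` milliseconds.
function withTimeout<T>(promise: Promise<T>, ms: number): Promise<T> {
  let timer: any;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(() => reject(new Error(`Timed out after ${ms} ms`)), ms);
  });
  // Clear the timer either way so it does not keep the process alive.
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}

// Usage (hypothetical): fail fast if the Assistant never signals readiness.
// const api = await withTimeout(waitForCortiAPI(), 15000);
```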
Next Steps