The Window API provides a direct, Promise-based TypeScript API exposed on window.CortiEmbedded for integration with Corti Assistant. This method is suitable for same-origin integrations requiring direct JavaScript access.
Web Component API is recommended for most cases. Use the Web Component API when you embed the Assistant via a local host page. The Window API remains useful for specific scenarios, such as embedding via direct URL in webviews or iframes; it is fully supported and not deprecated.
Web Component has full working examples. Complete, runnable examples are available for the Web Component integration method, which is the recommended approach for most integrations. Window API examples for same-origin scenarios will be added to the repository in the future.

Overview

The Window API offers a Promise-based, TypeScript-friendly interface for integrating Corti Assistant into your application. It provides direct access to window.CortiEmbedded.v1, making it feel like a traditional JavaScript SDK.

Requirements

Implementation requirements

To use the Window API, you’ll need to implement a WebView or similar browser component within your native application. The embedded Corti Assistant runs as a web application and requires a modern browser environment to function properly.

Minimum requirements

  • Modern WebView: Use a modern WebView implementation, such as:
    • WebView2 (Windows) - Recommended for Windows applications
    • WKWebView (iOS/macOS) - Recommended for Apple platforms
    • WebView (Android) - Use the latest Chromium-based WebView
    • Electron WebView - For Electron-based applications
  • Browser compatibility: The WebView must support:
    • ES6+ JavaScript features
    • Modern Web APIs (WebRTC, MediaDevices API)
    • PostMessage API
    • Local Storage and Session Storage
  • Microphone permissions: Your application must request and handle microphone permissions:
    • Request microphone access before initializing the embedded Assistant
    • Handle permission denial gracefully
    • Provide clear messaging to users about why microphone access is needed
    • Ensure permissions are granted at the OS level (not just browser level)
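The permission checks above can be sketched with the standard MediaDevices API. This is a generic browser sketch, not a Corti-specific API; the `mediaDevices` argument is injected (normally `navigator.mediaDevices`) so the helper can be exercised outside a browser.

```javascript
// Request microphone access and report whether it was granted.
// `mediaDevices` is injected (normally navigator.mediaDevices) so the
// helper can be tested outside a browser.
async function ensureMicrophoneAccess(mediaDevices) {
  if (!mediaDevices?.getUserMedia) {
    return { granted: false, reason: "mediadevices-unsupported" };
  }
  try {
    const stream = await mediaDevices.getUserMedia({ audio: true });
    // Stop the probe tracks immediately; the embedded Assistant will
    // open its own stream when recording starts.
    stream.getTracks().forEach((track) => track.stop());
    return { granted: true };
  } catch (err) {
    // NotAllowedError => user or OS denied; NotFoundError => no microphone.
    return { granted: false, reason: err.name || "unknown" };
  }
}
```

Calling this early, before loading the embedded Assistant, gives you a chance to show your own messaging when `granted` is false.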

Platform-specific considerations

Windows (WebView2)
  • Ensure WebView2 Runtime is installed or bundled with your application
  • Request microphone permissions in your application manifest
  • Handle permission prompts appropriately
iOS/macOS (WKWebView)
  • Add NSMicrophoneUsageDescription to your Info.plist
  • Request microphone permissions using AVAudioSession or similar APIs
  • Ensure permissions are granted before loading the embedded Assistant
Android (WebView)
  • Request RECORD_AUDIO permission in your AndroidManifest.xml
  • Request runtime permissions using ActivityCompat.requestPermissions()
  • Handle permission callbacks appropriately

Recommendations

  • Use TypeScript for better type safety and developer experience
  • Implement proper error handling for all API calls
  • Handle token refresh to maintain user sessions
  • Request microphone permissions early in your application flow
  • Test on target platforms to ensure WebView compatibility

Quick Start

Step 1: Set up authentication

Before using the Window API, authenticate your users using OAuth2. See the Authentication Guide for complete setup instructions including Authorization Code Flow with PKCE (recommended), obtaining tokens, and handling token refresh.
All Embedded Assistant integrations require user-based OAuth2 authentication. Client credentials and machine-to-machine flows are not supported.
  • Handle token refresh to maintain sessions
  • Never expose client secrets in client-side code
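Token refresh timing can be derived from the `expires_in` value your OAuth2 provider returns. A minimal sketch; the 60-second safety margin is an arbitrary choice, not a Corti requirement:

```javascript
// Compute how long to wait (in ms) before refreshing an access token,
// leaving a safety margin so the token never expires mid-request.
function refreshDelayMs(expiresInSeconds, marginSeconds = 60) {
  const delay = (expiresInSeconds - marginSeconds) * 1000;
  return Math.max(delay, 0); // refresh immediately if already inside the margin
}

// Schedule a refresh callback; returns the timer id so callers can cancel it.
function scheduleRefresh(expiresInSeconds, refreshFn) {
  return setTimeout(refreshFn, refreshDelayMs(expiresInSeconds));
}
```

After each successful refresh, call `scheduleRefresh` again with the new `expires_in`, and pass the fresh tokens to the Assistant.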

Step 2: Wait for the Embedded App to be ready

The embedded Corti Assistant will send an embedded.ready event when it’s loaded and ready to receive API calls:
Basic Setup
window.addEventListener("message", async (event) => {
  if (
    event.data?.type === "CORTI_EMBEDDED_EVENT" &&
    event.data.event === "embedded.ready"
  ) {
    // The API is now available
    const api = window.CortiEmbedded.v1;
    console.log("Corti Assistant is ready");
  }
});

Step 3: Authenticate the user

Once the API is ready, authenticate the user with their OAuth2 tokens:
Authentication
const api = window.CortiEmbedded.v1;

const user = await api.auth({
  access_token: "your-access-token", // From OAuth2 flow
  refresh_token: "your-refresh-token", // From OAuth2 flow
  id_token: "your-id-token", // From OAuth2 flow
  token_type: "Bearer",
});

console.log("Authenticated user:", user);

Step 4: Configure and use

After authentication, you can configure the interface and start using the Assistant:
Configure and Use
// Configure the interface
const config = await api.configure({
  features: {
    interactionTitle: false,
    aiChat: false,
    navigation: true,
  },
  appearance: {
    primaryColor: "#00a6ff",
  },
  locale: {
    interfaceLanguage: "en",
    dictationLanguage: "en",
  },
});

// Create an interaction
const interaction = await api.createInteraction({
  assignedUserId: null,
  encounter: {
    identifier: `encounter-${Date.now()}`,
    status: "planned",
    type: "first_consultation",
    period: {
      startedAt: new Date().toISOString(),
    },
    title: "Initial Consultation",
  },
});

// Navigate to the interaction
await api.navigate({
  path: `/session/${interaction.id}`,
});

API structure

The API is available at window.CortiEmbedded.v1 and provides the following methods:
window.CortiEmbedded.v1 = {
  auth: (payload) => Promise<User>,
  configure: (payload) => Promise<Configuration>,
  createInteraction: (payload) => Promise<Interaction>,
  addFacts: (payload) => Promise<void>,
  configureSession: (payload) => Promise<void>,
  navigate: (payload) => Promise<void>,
  setCredentials: (payload) => Promise<void>,
  startRecording: () => Promise<void>,
  stopRecording: () => Promise<void>,
  getStatus: () => Promise<Status>,
};

Same API as Web Component

The Window API provides the exact same methods as described in the API Reference. The only difference is the invocation style: with the Window API, you call methods directly via window.CortiEmbedded.v1.methodName() instead of through a Web Component. Example:
  • Web Component: await api.auth({ ... })
  • Window API: await window.CortiEmbedded.v1.auth({ ... })
Same methods, same parameters, same return values; only the access pattern differs.

Events

Corti Assistant dispatches events to notify your application of user activity, state changes, data updates, and other interactions. When using the Window API, events are still delivered through the postMessage mechanism.

Event format translation

Core events documented in the Events Reference are wrapped in the CORTI_EMBEDDED_EVENT message type.
Core event structure:
{
  "event": "event-name",
  "confidential": true,
  "payload": {
    "various": "properties"
  }
}
Window API delivery:
{
  "type": "CORTI_EMBEDDED_EVENT",
  "event": "recording.started",
  "confidential": false,
  "payload": {
    "mode": "virtual",
    "language": "en",
    "interactionId": "int_123",
    "interactionState": "ongoing"
  }
}
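The translation above can be captured in a small parser that accepts a raw `message` payload and returns the core event, or `null` for unrelated messages:

```javascript
// Unwrap a postMessage payload into a core Corti event, or null if the
// message is not a CORTI_EMBEDDED_EVENT.
function parseCortiEvent(data) {
  if (
    !data ||
    data.type !== "CORTI_EMBEDDED_EVENT" ||
    typeof data.event !== "string"
  ) {
    return null;
  }
  const { event, confidential, payload } = data;
  return { event, confidential: Boolean(confidential), payload };
}
```

Centralizing this check keeps your message listener from reacting to unrelated postMessage traffic (other scripts and extensions also use the message channel).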

Listening for events

Even when using the Window API for method calls, events are delivered via postMessage. Set up a listener:
Listening for Events
window.addEventListener("message", (event) => {
  // Check for Corti events
  if (event.data?.type === "CORTI_EMBEDDED_EVENT") {
    const { event: eventName, confidential, payload } = event.data;

    // Handle different event types
    switch (eventName) {
      case "recording.started":
        console.log("Recording started:", payload);
        updateRecordingState(true);
        break;
      case "recording.paused":
        console.log("Recording paused:", payload);
        updateRecordingState(false);
        break;
      case "document.generated":
        console.log("Document generated:", payload);
        handleNewDocument(payload);
        break;
      case "error.triggered":
        console.error("Error occurred:", payload);
        showErrorNotification(payload);
        break;
      default:
        console.log("Unknown event:", eventName);
    }
  }
});

function updateRecordingState(isRecording) {
  // Update your UI to reflect recording state
}

function handleNewDocument(payload) {
  const { documentId, documentName, templateId } = payload;
  // Process the new document
}
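As the number of events you handle grows, the switch above can be replaced with a handler map; unknown events fall through to an optional default handler. A small sketch of that pattern:

```javascript
// Dispatch a Corti event to a map of handlers keyed by event name.
// Returns true if some handler (specific or default) ran.
function dispatchCortiEvent(handlers, eventName, payload) {
  const handler = handlers[eventName] || handlers.default;
  if (handler) handler(payload, eventName);
  return Boolean(handler);
}
```

You would call this from inside your `message` listener after checking for the CORTI_EMBEDDED_EVENT type, keeping event routing declarative and easy to extend.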

Combining API calls and events

Use the Window API for actions and events for state updates:
Combined Usage
const api = window.CortiEmbedded.v1;

// Set up event listener
window.addEventListener("message", (event) => {
  if (event.data?.type === "CORTI_EMBEDDED_EVENT") {
    const { event: eventName, payload } = event.data;

    if (eventName === "recording.started") {
      console.log("Recording started successfully");
    }
  }
});

// Trigger action via Window API
try {
  await api.startRecording();
  // Event will be received via message listener above
} catch (error) {
  console.error("Failed to start recording:", error);
}
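A common refinement of the pattern above is to call a Window API method and await the matching confirmation event. A sketch of such a helper; the `target` argument (normally `window`) is injected for testability, and the 10-second timeout is an arbitrary choice:

```javascript
// Resolve with the event payload when a specific Corti event arrives,
// or reject after `timeoutMs`. `target` is normally `window`.
function waitForCortiEvent(target, eventName, timeoutMs = 10000) {
  return new Promise((resolve, reject) => {
    const listener = (event) => {
      if (
        event.data?.type === "CORTI_EMBEDDED_EVENT" &&
        event.data.event === eventName
      ) {
        clearTimeout(timer);
        target.removeEventListener("message", listener);
        resolve(event.data.payload);
      }
    };
    const timer = setTimeout(() => {
      target.removeEventListener("message", listener);
      reject(new Error(`Timed out waiting for ${eventName}`));
    }, timeoutMs);
    target.addEventListener("message", listener);
  });
}

// Usage: register the wait before triggering the action so the event
// cannot slip past between the call and the listener being attached.
// const started = waitForCortiEvent(window, "recording.started");
// await api.startRecording();
// await started;
```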

Available events

For a complete list of events and their payload structures, see the Events Overview. Common events include:
  • recording.started - Recording has started
  • recording.paused - Recording has paused
  • document.generated - Document has been generated
  • document.updated - Document has been edited
  • interaction.loaded - Interaction has been loaded
  • error.triggered - An error occurred

Legacy events

The embedded Assistant also dispatches legacy events using camelCase names (e.g., recordingStarted, documentGenerated). These are deprecated and will be removed in a future version.
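If you need to support both naming schemes during a migration, a normalizer can translate legacy camelCase names into the dotted form. This assumes legacy names map one-to-one by splitting on the camelCase boundary, which holds for the documented examples but should be verified against the full legacy event list:

```javascript
// Translate a legacy camelCase event name (e.g. "recordingStarted")
// into the current dotted form ("recording.started"). Names that
// already contain a dot are returned unchanged.
function normalizeEventName(name) {
  if (name.includes(".")) return name;
  return name.replace(/([a-z0-9])([A-Z])/g, "$1.$2").toLowerCase();
}
```

Routing every incoming event name through this function lets the rest of your handler code target only the current names.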

Error handling

All API methods return Promises and can throw errors. Always wrap calls in try-catch blocks:
Error Handling
try {
  const api = window.CortiEmbedded.v1;
  const user = await api.auth({
    access_token: "your-access-token",
    refresh_token: "your-refresh-token",
    id_token: "your-id-token", // From OAuth2 flow
    token_type: "Bearer",
  });
  console.log("Authentication successful:", user);
} catch (error) {
  console.error("Authentication failed:", error.message);
  // Handle authentication failure
}
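For transient failures, a small retry wrapper can complement the try/catch pattern above. This is a generic sketch, not part of the Corti API; the attempt count and delay are arbitrary choices, and errors that cannot succeed on retry (such as a definitive permission denial) should not go through it:

```javascript
// Retry an async operation with a fixed delay between attempts,
// rethrowing the last error once the attempts are exhausted.
async function withRetry(fn, { attempts = 3, delayMs = 500 } = {}) {
  let lastError;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (i < attempts - 1) {
        await new Promise((resolve) => setTimeout(resolve, delayMs));
      }
    }
  }
  throw lastError;
}
```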

TypeScript support

If you’re using TypeScript, you can extend the Window interface to get type safety (in a module file, wrap the Window augmentation in a declare global block):
TypeScript Definitions
interface CortiEmbeddedAPI {
  auth: (payload: AuthPayload) => Promise<User>;
  configure: (payload: ConfigurePayload) => Promise<Configuration>;
  createInteraction: (
    payload: CreateInteractionPayload,
  ) => Promise<Interaction>;
  addFacts: (payload: AddFactsPayload) => Promise<void>;
  configureSession: (payload: ConfigureSessionPayload) => Promise<void>;
  navigate: (payload: NavigatePayload) => Promise<void>;
  setCredentials: (payload: SetCredentialsPayload) => Promise<void>;
  startRecording: () => Promise<void>;
  stopRecording: () => Promise<void>;
  getStatus: () => Promise<Status>;
}

interface Window {
  CortiEmbedded: {
    v1: CortiEmbeddedAPI;
  };
}

Helper function

You can create a helper function to ensure the API is ready:
Helper Function
function waitForCortiAPI() {
  return new Promise((resolve) => {
    if (window.CortiEmbedded?.v1) {
      resolve(window.CortiEmbedded.v1);
      return;
    }

    const listener = (event) => {
      if (
        event.data?.type === "CORTI_EMBEDDED_EVENT" &&
        event.data.event === "embedded.ready"
      ) {
        window.removeEventListener("message", listener);
        resolve(window.CortiEmbedded.v1);
      }
    };

    window.addEventListener("message", listener);
  });
}

// Usage
async function useAPI() {
  const api = await waitForCortiAPI();
  const user = await api.auth({
    access_token: "your-access-token",
    refresh_token: "your-refresh-token",
    id_token: "your-id-token",
    token_type: "Bearer",
  });
}
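The helper above resolves only when the API appears; if the embedded Assistant fails to load, callers wait forever. A generic timeout wrapper guards against that (the 15-second value in the usage comment is an arbitrary choice):

```javascript
// Race a promise against a timeout so callers never hang indefinitely.
function withTimeout(promise, timeoutMs, label = "operation") {
  let timer;
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(
      () => reject(new Error(`${label} timed out after ${timeoutMs}ms`)),
      timeoutMs,
    );
  });
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}

// Usage:
// const api = await withTimeout(waitForCortiAPI(), 15000, "Corti API load");
```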

Next steps

If you have questions or need help with your integration, please contact us.