Before Building
Before writing integration code, align on the fundamentals:
Define Your Encounter Type
The coding systems you request determine the type of codes returned. Getting this right is the most important configuration decision.

Inpatient encounters (hospital admissions, observation stays) use:
- `icd10cm-inpatient` for diagnosis codes
- `icd10pcs` for procedure codes

Outpatient encounters use:

- `icd10cm-outpatient` for diagnosis codes
- `cpt` for procedure codes
Regional ICD-10 variants are also available (`icd10` for the WHO version, `icd10gm` for Germany). See Coding Systems for the full list.

If your platform handles both inpatient and outpatient encounters, use encounter metadata (admission type, facility type, or department) to select the correct coding system at request time.
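The selection rule above can be sketched as a small helper. The coding system identifiers are the ones documented here; the `admission_type` values and function name are illustrative assumptions about your encounter metadata.

```python
# Sketch: pick coding systems from encounter metadata at request time.
INPATIENT_SYSTEMS = ["icd10cm-inpatient", "icd10pcs"]
OUTPATIENT_SYSTEMS = ["icd10cm-outpatient", "cpt"]

def select_coding_systems(admission_type: str) -> list[str]:
    """Return the coding systems to request for this encounter.

    The admission_type values below are examples; map whatever your
    EHR or HL7 feed provides (admission type, facility type, department).
    """
    if admission_type in ("inpatient", "observation"):
        return INPATIENT_SYSTEMS
    return OUTPATIENT_SYSTEMS
```

Centralizing this mapping in one function keeps the inpatient/outpatient decision out of your request-building code and makes it easy to extend for regional variants.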
Plan the Review Workflow
Decide how code suggestions flow through your system and who interacts with them:
- Coder-in-the-loop: Route suggestions to a professional coder who confirms, rejects, or adjusts each code before submission. This is a common workflow in health systems with dedicated coding teams.
- Physician-facing auto-suggest: Surface suggestions directly to the treating physician during or after documentation. This works well in smaller practices or outpatient settings where physicians code their own encounters.
- Pre-populated worklist: Use API suggestions to pre-fill a coding worklist that coders then review in their existing coding tool.
- Automated pipeline: Feed API output directly into downstream billing or analytics systems without manual review.
The `codes` list contains high-confidence predictions suitable for pre-population or automation. The `candidates` list contains clinically relevant but optional codes that benefit from human judgment — surface these as suggestions rather than defaults.
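As a sketch of the pre-populated worklist pattern, assuming a response shaped like `{"codes": [...], "candidates": [...]}` as described above (the worklist item shape and status labels are illustrative):

```python
def build_worklist(response: dict) -> list[dict]:
    """Turn a coding response into review-queue items.

    High-confidence `codes` are pre-selected for the coder to confirm;
    `candidates` are shown as optional suggestions, never defaults.
    """
    items = [{"code": c, "status": "pre-selected"}
             for c in response.get("codes", [])]
    items += [{"code": c, "status": "suggested"}
              for c in response.get("candidates", [])]
    return items
```

Keeping the two lists visually distinct in the coder's UI preserves the confidence signal the API provides.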
Identify Your Integration Surface Area
Encounter coding is most valuable when it sits inside existing workflows rather than as a standalone tool.

Determine:
- Where notes come from — Are you pulling finalized notes from an EHR, receiving them via HL7/FHIR, or generating them with Corti’s ambient documentation?
- Where codes go — Do confirmed codes write back to the EHR, feed into a billing system, or populate a claim form?
- When coding runs — Does it trigger automatically on note finalization, or does a coder manually initiate it?
Success Metrics
Identifying the right metrics early helps you evaluate the integration and build confidence with clinical and revenue cycle stakeholders.
Coding Accuracy
The primary measure of model quality is agreement with the final billed code set.

Measure:
- Agreement rate between API `codes` and final billed codes
- False positive rate (API-suggested codes rejected by reviewers)
- False negative rate (codes added by reviewers that the API missed)
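These three rates can be computed per encounter by set comparison between the suggested and billed code sets, for example:

```python
def coding_accuracy(suggested: set[str], billed: set[str]) -> dict[str, float]:
    """Per-encounter agreement, false positive, and false negative rates,
    comparing API-suggested codes against the final billed code set."""
    return {
        # billed codes the API also suggested
        "agreement_rate": len(suggested & billed) / len(billed) if billed else 0.0,
        # suggested codes reviewers rejected
        "false_positive_rate": len(suggested - billed) / len(suggested) if suggested else 0.0,
        # billed codes the API missed
        "false_negative_rate": len(billed - suggested) / len(billed) if billed else 0.0,
    }
```

Aggregate these across encounters (and segment by coding system) to track quality over time.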
Coder Throughput
Coder Throughput
If the API is working well, coders should be able to review more encounters per hour because they are confirming suggestions rather than coding from scratch.

Measure:
- Encounters coded per hour (before vs. after)
- Average time per encounter
- Percentage of API suggestions accepted without modification
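The before/after comparison reduces to simple arithmetic; a minimal sketch:

```python
def throughput_change_pct(before_encounters: int, before_hours: float,
                          after_encounters: int, after_hours: float) -> float:
    """Relative change in encounters coded per hour, in percent.

    Positive values mean coders are moving faster after the integration.
    """
    before = before_encounters / before_hours
    after = after_encounters / after_hours
    return 100.0 * (after - before) / before
```

For example, going from 40 encounters in an 8-hour shift to 60 in the same shift is a 50% throughput gain.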
Denial Rate Reduction
Denial Rate Reduction
Coding errors are a leading cause of claim denials. Better initial code suggestions should reduce denial rates over time.

Measure:
- Claim denial rate (before vs. after)
- Denial reasons related to coding errors (incorrect code, missing modifier, insufficient specificity)
- Rework rate for returned claims
Time-to-Code
Time-to-Code
The elapsed time between note finalization and code submission reflects both coder efficiency and workflow friction.

Measure:
- Average time from note finalization to code submission
- Backlog size (encounters awaiting coding)
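If your system records ISO 8601 timestamps for note finalization and code submission, the elapsed time per encounter is straightforward to compute:

```python
from datetime import datetime

def hours_to_code(finalized_at: str, submitted_at: str) -> float:
    """Elapsed hours between note finalization and code submission,
    given ISO 8601 timestamps."""
    delta = (datetime.fromisoformat(submitted_at)
             - datetime.fromisoformat(finalized_at))
    return delta.total_seconds() / 3600.0
```

Track the average and the tail (e.g. 95th percentile) of this value; a shrinking tail usually indicates the backlog is under control.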
Implementation
Inpatient Encounter Workflow
For inpatient encounters, you typically need both diagnosis and procedure codes. This requires two API calls — one for each coding system.

Outpatient Encounter Workflow
For outpatient encounters, diagnosis and procedure codes can be requested in a single call.

Using Evidence Spans
Every code in the response includes `evidences` — references pointing back to the context that drove the prediction. Use these to build trust in the review workflow.
- Highlight the evidence span in the original note when a coder hovers or selects a code
- Let coders see at a glance why the model suggested each code
- Use evidence spans to speed up the confirm/reject decision — coders can validate the suggestion without re-reading the full note
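A minimal sketch of the highlighting idea, assuming (illustratively) that an evidence reference resolves to character offsets into the source note — check the actual response schema for the real shape of `evidences`:

```python
def highlight_evidence(note_text: str, start: int, end: int) -> str:
    """Wrap an evidence span in markers so a UI layer can render it
    highlighted when the coder hovers or selects the code."""
    return note_text[:start] + "[[" + note_text[start:end] + "]]" + note_text[end:]
```

In a real UI you would map the offsets to DOM ranges or editor decorations rather than inserting marker characters.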
Input Formats
The `context` field accepts an array of context objects. You can pass multiple context items to provide the model with more clinical information.
See How it works for details on the request schema.
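As a sketch of assembling a request body with a context array — the field names here (`context`, `codingSystems`, `type`, `text`) are assumptions for illustration only; the authoritative schema is in How it works:

```python
import json

def build_coding_request(note_text: str, coding_systems: list[str]) -> str:
    """Assemble an illustrative request body: one text context object
    plus the coding systems selected for this encounter type."""
    body = {
        "context": [
            {"type": "text", "text": note_text},  # add more context objects as needed
        ],
        "codingSystems": coding_systems,
    }
    return json.dumps(body)
```

For an inpatient encounter you would call this twice, once per coding system; for outpatient, once with both systems.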
Tying It All Together
Encounter diagnosis coding is the foundation that other medical coding workflows build on. Once you have a working integration:

- Add CDI review workflows to surface documentation gaps and query candidates
- Use revenue cycle patterns for HCC capture and retrospective under-coding detection
Please contact us if you need help setting up your encounter coding workflow or have questions about coding system selection.