An implementation guide for CDI teams and the engineering teams building tools for them. CDI programs ensure clinical documentation accurately reflects patient severity, supports correct DRG assignment, and captures all codes eligible for reimbursement or risk adjustment. The Medical Coding API accelerates both concurrent and retrospective CDI review by surfacing code suggestions and documentation gaps directly from the note — giving CDI specialists a head start on every review.
What is CDI?

Clinical Documentation Integrity programs bridge the gap between what physicians document and what coders can code. CDI specialists review clinical notes, identify where documentation lacks the specificity needed for accurate coding, and issue physician queries to close those gaps. The goal is not to change clinical care — it is to ensure the documentation reflects the care that was actually delivered.

Before Building

CDI programs operate in two modes, and each has different integration requirements.

Concurrent review happens while the patient is still admitted. CDI specialists review notes daily, identify documentation gaps, and issue queries to the attending physician before discharge. Speed matters — the API needs to be called as notes are updated, and results need to surface quickly.

Retrospective review happens after discharge, typically before or just after claim submission. The focus is on finding missed codes and documentation gaps that affect DRG accuracy and reimbursement. Volume matters — you may process thousands of encounters in batch.

Most mature CDI programs do both. Start with the mode that matches your team’s current workflow, then expand.
The highest-value output of a CDI integration is not the code list — it is the physician query. A well-designed query workflow turns API candidates into actionable questions for physicians. Consider:
  • How do CDI specialists draft queries today? Will the API pre-populate query templates, or surface candidates that specialists triage manually?
  • Where do queries live? Are they tracked in a CDI platform, in the EHR, or in a separate worklist?
  • How do you track query response and resolution? The API can help measure query yield (what percentage of queried candidates convert to confirmed codes).
The candidates list is your query pipeline. Each candidate represents a clinical concept the model found in the note that may warrant physician clarification. Evidence spans show exactly where in the note the concept was mentioned — this is the foundation of a well-supported query.
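One way to consume the candidates list is to pair each candidate with the note text its evidence spans point to, so the specialist sees the supporting excerpt alongside the suggested concept. A minimal sketch, assuming each candidate carries code, description, and evidence entries with start/end character offsets — verify the exact response shape against the API reference:

```python
def build_query_items(note_text, response):
    """Pair each candidate with the note excerpts its evidence spans cover."""
    items = []
    for cand in response.get("candidates", []):
        excerpts = [
            note_text[span["start"]:span["end"]]
            for span in cand.get("evidence", [])
        ]
        items.append({
            "code": cand.get("code"),
            "description": cand.get("description"),
            "excerpts": excerpts,  # shown to the specialist as query support
        })
    return items

# Illustrative example (response shape assumed, not taken from a live call):
note = "Creatinine rising to 2.4 from baseline 1.1."
resp = {"candidates": [{"code": "N17.9", "description": "Acute kidney injury",
                        "evidence": [{"start": 0, "end": 24}]}]}
items = build_query_items(note, resp)
```

Each item can then pre-populate a query template or appear in a triage worklist, with the excerpt quoted verbatim as the basis for the question.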
Inpatient and outpatient CDI serve different goals and use different coding systems.

Inpatient CDI focuses on DRG accuracy, CC/MCC capture, and severity of illness. Use icd10cm-inpatient for diagnoses and icd10pcs for procedures.

Outpatient CDI focuses on E&M level support, chronic condition capture, and medical decision-making complexity. Use icd10cm-outpatient for diagnoses and cpt for procedures.

If your CDI team covers both settings, your integration needs to select the correct coding systems based on encounter type — the same note processed with inpatient vs. outpatient systems will return different results.
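That selection can be made explicit in code. A minimal sketch using the system identifiers from this guide; the encounter-type strings are placeholders for whatever your EHR supplies:

```python
def coding_systems_for(encounter_type):
    """Map an encounter type to the coding systems to request from the API."""
    if encounter_type == "inpatient":
        return ["icd10cm-inpatient", "icd10pcs"]
    if encounter_type == "outpatient":
        return ["icd10cm-outpatient", "cpt"]
    # Fail loudly rather than silently coding with the wrong systems.
    raise ValueError(f"unknown encounter type: {encounter_type!r}")
```

Failing on unknown encounter types is deliberate: a silent default would produce plausible-looking but wrong results for the other setting.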

Success Metrics

Query Yield

The percentage of CDI queries that result in a documentation update and confirmed code change. This is the single most important CDI metric. Measure:
  • Queries issued per period (driven by candidates surfaced by the API)
  • Query response rate (did the physician respond?)
  • Query agreement rate (did the physician agree and update documentation?)
  • Net new codes captured from queries
Track which candidates entries drove successful queries to identify the API’s most valuable predictions for your patient population.
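The metrics above can be computed from a query log. A sketch with hypothetical record fields (responded, agreed, new_codes); adapt it to however your CDI platform tracks query resolution:

```python
def query_yield(queries):
    """Compute issue count, response rate, agreement rate, and net new codes."""
    issued = len(queries)
    responded = sum(1 for q in queries if q["responded"])
    agreed = sum(1 for q in queries if q["agreed"])
    return {
        "issued": issued,
        "response_rate": responded / issued if issued else 0.0,
        "agreement_rate": agreed / issued if issued else 0.0,
        "net_new_codes": sum(len(q["new_codes"]) for q in queries if q["agreed"]),
    }

# Illustrative log: one agreed query, one declined, one unanswered.
queries = [
    {"responded": True, "agreed": True, "new_codes": ["N17.9"]},
    {"responded": True, "agreed": False, "new_codes": []},
    {"responded": False, "agreed": False, "new_codes": []},
]
metrics = query_yield(queries)
```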
CC/MCC Capture

Complication and Comorbidity (CC) and Major CC (MCC) designations directly affect DRG weight and reimbursement. Missing a single MCC can mean thousands of dollars in lost revenue. Measure:
  • CC/MCC capture rate before and after API integration
  • DRG shifts attributable to CDI queries (cases where documentation improvement changed the DRG)
  • Revenue impact of DRG shifts
The API’s candidates list often surfaces conditions documented in the note but not yet coded at the specificity needed for CC/MCC designation — these are your highest-value query targets.
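Flagging CC/MCC-eligible candidates can be sketched as a lookup. The table below is illustrative only; in production the designations come from the CMS MS-DRG definitions, not from a hard-coded dictionary:

```python
# Illustrative CC/MCC table -- NOT real CMS designations.
CC_MCC = {"N17.9": "MCC", "I50.22": "CC", "E11.9": None}

def high_value_candidates(candidates):
    """Return candidates found in the CC/MCC table, MCCs first."""
    flagged = [c for c in candidates if CC_MCC.get(c["code"])]
    # Sort so MCCs (key False) come before CCs (key True).
    return sorted(flagged, key=lambda c: CC_MCC[c["code"]] != "MCC")
```

Candidates that survive this filter are the ones worth a specialist's time first, since confirming them can change the DRG.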
Review Efficiency

CDI specialists have limited time per chart. The API should help them focus that time on the charts and findings that matter most. Measure:
  • Charts reviewed per CDI specialist per day
  • Time per chart review
  • Percentage of charts flagged for query vs. confirmed clean
A well-tuned integration lets specialists skip charts where the API’s codes align with what’s already documented and focus on charts with a high candidates count — where documentation gaps are most likely.
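That triage logic might look like the following sketch; the chart records and the "clean" threshold are assumptions to adapt locally:

```python
def triage(charts, clean_threshold=0):
    """Split charts into a review queue (most candidates first) and a clean list."""
    review = [c for c in charts if len(c["candidates"]) > clean_threshold]
    clean = [c for c in charts if len(c["candidates"]) <= clean_threshold]
    review.sort(key=lambda c: len(c["candidates"]), reverse=True)
    return review, clean

# Illustrative worklist: chart C has the most open candidates.
charts = [
    {"id": "A", "candidates": ["x", "y"]},
    {"id": "B", "candidates": []},
    {"id": "C", "candidates": ["x", "y", "z"]},
]
review, clean = triage(charts)
```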
Documentation Quality

Over time, CDI programs should improve the baseline quality of documentation — not just catch errors after the fact. Measure:
  • Average number of candidates per note (trending down indicates improving documentation)
  • Query rate by physician (identifies who benefits most from education)
  • Repeat query topics (identifies systemic documentation gaps by condition type)
These longitudinal metrics help CDI leadership shift from reactive review to proactive physician education.
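These roll-ups are straightforward aggregations over a query log. A sketch with hypothetical physician and topic fields on each record:

```python
from collections import Counter

def queries_by_physician(queries):
    """Query count per physician, to target education where it helps most."""
    return Counter(q["physician"] for q in queries)

def repeat_topics(queries, min_count=2):
    """Query topics seen at least min_count times: systemic documentation gaps."""
    counts = Counter(q["topic"] for q in queries)
    return [topic for topic, n in counts.items() if n >= min_count]

# Illustrative log: Dr. A is repeatedly queried about AKI documentation.
log = [
    {"physician": "Dr. A", "topic": "AKI"},
    {"physician": "Dr. A", "topic": "AKI"},
    {"physician": "Dr. B", "topic": "CHF"},
]
```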

Implementation

Concurrent Review — Inpatient

In a concurrent review workflow, the API processes notes as they are updated during the admission. CDI specialists review results daily alongside the chart.
curl -X POST https://api.eu.corti.app/v2/tools/coding/ \
  -H "Authorization: Bearer <token>" \
  -H "Tenant-Name: <tenant-name>" \
  -H "Content-Type: application/json" \
  -d '{
    "system": ["icd10cm-inpatient"],
    "context": [
      {
        "type": "text",
        "text": "Progress Note — Day 3: Patient continues on IV vancomycin for MRSA bacteremia. Blood cultures from yesterday still pending. Acute kidney injury improving — creatinine down to 1.8 from 2.4. Patient also has history of CHF, currently euvolemic on home dose of furosemide. Diabetes managed with insulin sliding scale, glucose well controlled."
      }
    ]
  }'
What to do with the results:
  • codes — these are the diagnoses the model confidently predicts from the note. In a concurrent review, compare these against what’s already been coded. Missing codes may indicate documentation gaps.
  • candidates — these are your query candidates. Each represents a clinical concept found in the note that may need physician clarification before it can be coded. For example, the model might surface “acute kidney injury” as a candidate if the note mentions rising creatinine but doesn’t explicitly document the diagnosis.
Integration pattern:
  1. Call the API after each significant note update (attending notes, progress notes, operative reports, consult notes)
  2. Diff the results against the current working code list
  3. Surface net-new candidates entries to the CDI specialist as potential queries
  4. Use evidence spans to show the specialist exactly where in the note each concept was mentioned
  5. Track which candidates convert to confirmed codes across the stay
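Steps 2 and 3 above can be sketched as a diff against the working code list. This assumes the response exposes codes and candidates lists whose entries carry a code field; working_codes and already_queried come from your own tracking:

```python
def net_new(response, working_codes, already_queried):
    """Return codes missing from the working list and candidates not yet queried."""
    coded = set(working_codes)
    missing_codes = [
        c for c in response.get("codes", []) if c["code"] not in coded
    ]
    new_candidates = [
        c for c in response.get("candidates", [])
        if c["code"] not in coded and c["code"] not in already_queried
    ]
    return missing_codes, new_candidates
```

Filtering by already_queried matters in concurrent review: the same candidate will keep reappearing on every note update until the physician responds, and re-surfacing it would flood the worklist.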

Retrospective Review — Post-Discharge

In a retrospective workflow, the API processes the complete chart after discharge. The focus is on finding codes that were documented but not captured.
curl -X POST https://api.eu.corti.app/v2/tools/coding/ \
  -H "Authorization: Bearer <token>" \
  -H "Tenant-Name: <tenant-name>" \
  -H "Content-Type: application/json" \
  -d '{
    "system": ["icd10cm-inpatient"],
    "context": [
      {
        "type": "text",
        "text": "Discharge Summary: 82-year-old male admitted with acute STEMI, treated with primary PCI to LAD with drug-eluting stent placement. Hospital course complicated by cardiogenic shock requiring vasopressors for 48 hours. Also managed acute on chronic systolic heart failure (EF 25%), type 2 diabetes with peripheral neuropathy, and stage 3 CKD. Discharged to skilled nursing facility on dual antiplatelet therapy, beta blocker, ACE inhibitor, and insulin."
      }
    ]
  }'
Integration pattern:
  1. Process the complete discharge summary (or concatenated key notes) after coding is complete
  2. Compare API codes against the submitted code set — codes present in the API response but absent from the claim are review candidates
  3. Rank candidates by DRG impact — a missed MCC that shifts the DRG is higher priority than a missed secondary diagnosis
  4. Route the highest-impact findings to CDI specialists for review and potential late query or coding amendment
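Steps 2 and 3 can be sketched as a set difference plus a severity-weighted sort. The impact weights here are placeholders; real DRG impact comes from running the case through a grouper:

```python
# Illustrative ranking weights -- a grouper gives the real DRG impact.
IMPACT = {"MCC": 2, "CC": 1, None: 0}

def review_candidates(api_codes, claim_codes, severity):
    """Codes the API found but the claim lacks, highest assumed impact first.

    severity maps code -> 'MCC', 'CC', or None (hypothetical lookup).
    """
    submitted = set(claim_codes)
    missed = [c for c in api_codes if c not in submitted]
    return sorted(missed, key=lambda c: IMPACT[severity.get(c)], reverse=True)
```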

Outpatient CDI

Outpatient CDI workflows typically run post-encounter, comparing the API’s predictions against what was submitted.
curl -X POST https://api.eu.corti.app/v2/tools/coding/ \
  -H "Authorization: Bearer <token>" \
  -H "Tenant-Name: <tenant-name>" \
  -H "Content-Type: application/json" \
  -d '{
    "system": ["icd10cm-outpatient", "cpt"],
    "context": [
      {
        "type": "text",
        "text": "Assessment and Plan: 1. Hypertension — poorly controlled, BP 158/94. Adding amlodipine 5mg daily to existing lisinopril. 2. Type 2 diabetes with diabetic nephropathy — A1c 8.1%, increasing metformin, adding GLP-1 agonist. Urine albumin-creatinine ratio elevated at 45. 3. Obesity — BMI 34.2, counseled on diet and exercise, referral to nutrition. 4. Depression screening positive — PHQ-9 score 14, starting sertraline 50mg, follow-up in 2 weeks."
      }
    ]
  }'
For E&M audit workflows: Compare the number and complexity of conditions in codes + candidates against the billed E&M level. A note with 4+ documented conditions and prescription management supports a higher complexity level than a note with only 1-2 conditions.
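That comparison can be sketched as a simple screen. The thresholds below are illustrative, not the AMA MDM rules; use this only to flag charts for human audit:

```python
def em_audit_flag(codes, candidates, rx_management, billed_level):
    """Flag a high-level E&M bill the documentation may not support.

    Illustrative heuristic: 4+ distinct conditions plus prescription
    management is treated as supporting higher complexity.
    """
    n_conditions = len({c["code"] for c in codes + candidates})
    high_complexity = n_conditions >= 4 and rx_management
    return billed_level >= 4 and not high_complexity
```

Flagged charts go to a CDI specialist or auditor; the function deliberately never auto-downgrades a bill, it only routes charts for review.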

Tying It All Together

CDI builds on the encounter coding foundation by adding a human review layer focused specifically on documentation quality. The API doesn’t replace CDI specialists — it gives them a head start by surfacing the candidates and evidence they need to write better queries, faster. Start with the workflow your CDI team uses today (concurrent or retrospective), measure query yield as your primary success metric, and expand from there.
Please contact us if you need help setting up a CDI workflow or have questions about concurrent or retrospective review.