What is CDI?

Clinical Documentation Integrity programs bridge the gap between what physicians document and what coders can code. CDI specialists review clinical notes, identify where documentation lacks the specificity needed for accurate coding, and issue physician queries to close those gaps. The goal is not to change clinical care — it is to ensure the documentation reflects the care that was actually delivered.
Before Building
Concurrent vs Retrospective — Which Are You Building?
CDI programs operate in two modes, and each has different integration requirements.

Concurrent review happens while the patient is still admitted. CDI specialists review notes daily, identify documentation gaps, and issue queries to the attending physician before discharge. Speed matters — the API needs to be called as notes are updated, and results need to surface quickly.

Retrospective review happens after discharge, typically before or just after claim submission. The focus is on finding missed codes and documentation gaps that affect DRG accuracy and reimbursement. Volume matters — you may process thousands of encounters in batch.

Most mature CDI programs do both. Start with the mode that matches your team’s current workflow, then expand.
Design the Query Workflow
The highest-value output of a CDI integration is not the code list — it is the physician query. A well-designed query workflow turns API candidates into actionable questions for physicians.

Consider:
- How do CDI specialists draft queries today? Will the API pre-populate query templates, or surface candidates that specialists triage manually?
- Where do queries live? Are they tracked in a CDI platform, in the EHR, or in a separate worklist?
- How do you track query response and resolution? The API can help measure query yield (what percentage of queried candidates convert to confirmed codes).
The candidates list is your query pipeline. Each candidate represents a clinical concept the model found in the note that may warrant physician clarification. Evidence spans show exactly where in the note the concept was mentioned — this is the foundation of a well-supported query.
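As an illustration, a candidate and its evidence spans can be turned into a query draft for specialist review. This is a minimal sketch; the field names (description, evidence, start, end) are assumptions about the response shape, not the API's documented schema.

```python
def draft_query(note_text: str, candidate: dict) -> str:
    """Build a physician query stub from one candidate.

    Assumes each evidence span is a dict with character offsets
    "start" and "end" into the note text (hypothetical shape).
    """
    excerpts = [
        note_text[span["start"]:span["end"]]
        for span in candidate.get("evidence", [])
    ]
    quoted = "; ".join(f'"{e}"' for e in excerpts)
    return (
        f"The note mentions {quoted}. "
        f"Can you clarify whether the patient has {candidate['description']}? "
        "If so, please document the diagnosis explicitly."
    )

# Example: the AKI scenario described in the concurrent-review section.
note = "Creatinine rising from 1.1 to 2.4 over 48h. Urine output decreased."
aki = {"description": "acute kidney injury",
       "evidence": [{"start": 0, "end": 17}]}
print(draft_query(note, aki))
```

A real implementation should route the draft through your compliant-query templates rather than free text, so queries stay non-leading.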
Inpatient vs Outpatient CDI
Inpatient and outpatient CDI serve different goals and use different coding systems.

Inpatient CDI focuses on DRG accuracy, CC/MCC capture, and severity of illness. Use icd10cm-inpatient for diagnoses and icd10pcs for procedures.

Outpatient CDI focuses on E&M level support, chronic condition capture, and medical decision-making complexity. Use icd10cm-outpatient for diagnoses and cpt for procedures.

If your CDI team covers both settings, your integration needs to select the correct coding systems based on encounter type — the same note processed with inpatient vs. outpatient systems will return different results.
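That selection can be sketched as a small helper. The system identifiers come from this guide; the encounter_type labels are an assumption about how your EHR feed tags encounters.

```python
def coding_systems(encounter_type: str) -> list[str]:
    """Return [diagnosis system, procedure system] for an encounter."""
    if encounter_type == "inpatient":
        return ["icd10cm-inpatient", "icd10pcs"]
    if encounter_type == "outpatient":
        return ["icd10cm-outpatient", "cpt"]
    raise ValueError(f"unknown encounter type: {encounter_type!r}")
```

Resolve the encounter type once, upstream of the API call, so both the request and your downstream comparison logic agree on which systems are in play.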
Success Metrics

Query Yield
The percentage of CDI queries that result in a documentation update and confirmed code change. This is the single most important CDI metric.

Measure:
- Queries issued per period (driven by candidates surfaced by the API)
- Query response rate (did the physician respond?)
- Query agreement rate (did the physician agree and update documentation?)
- Net new codes captured from queries
Track which candidates items drove successful queries to identify the API’s most valuable predictions for your patient population.
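A minimal sketch of these metrics, assuming each tracked query records responded, agreed, and new_codes (hypothetical field names from your query-tracking store):

```python
def query_metrics(queries: list[dict]) -> dict:
    """Summarize yield for a batch of tracked queries."""
    issued = len(queries)
    responded = sum(1 for q in queries if q["responded"])
    agreed = sum(1 for q in queries if q["agreed"])
    return {
        "issued": issued,
        "response_rate": responded / issued if issued else 0.0,
        "agreement_rate": agreed / issued if issued else 0.0,
        "net_new_codes": sum(len(q["new_codes"]) for q in queries),
    }

sample = [
    {"responded": True, "agreed": True, "new_codes": ["N17.9"]},
    {"responded": True, "agreed": False, "new_codes": []},
]
print(query_metrics(sample))
```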
CC/MCC Capture Rate
Complication and Comorbidity (CC) and Major CC (MCC) designations directly affect DRG weight and reimbursement. Missing a single MCC can mean thousands of dollars in lost revenue.

Measure:
- CC/MCC capture rate before and after API integration
- DRG shifts attributable to CDI queries (cases where documentation improvement changed the DRG)
- Revenue impact of DRG shifts
The candidates list often surfaces conditions documented in the note but not yet coded at the specificity needed for CC/MCC designation — these are your highest-value query targets.
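One way to surface those targets is to filter candidates against CC/MCC code sets, with MCCs ranked first. A sketch with placeholder sets; in production, load the current CMS MS-DRG CC/MCC tables rather than hard-coding anything.

```python
# Placeholder entries for illustration only; real CC/MCC status must
# come from the CMS MS-DRG tables for the current fiscal year.
MCC_CODES = {"A41.9"}
CC_CODES = {"E87.6"}

def cc_mcc_targets(candidates: list[dict]) -> list[dict]:
    """Return candidates carrying CC/MCC weight, MCCs sorted first."""
    hits = [c for c in candidates if c["code"] in MCC_CODES | CC_CODES]
    # False sorts before True, so MCC members land at the front.
    return sorted(hits, key=lambda c: c["code"] not in MCC_CODES)
```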
Review Efficiency
CDI specialists have limited time per chart. The API should help them focus that time on the charts and findings that matter most.

Measure:
- Charts reviewed per CDI specialist per day
- Time per chart review
- Percentage of charts flagged for query vs. confirmed clean
Use the API to quickly clear charts whose codes align with what’s already documented, and focus on charts with a high candidates count — where documentation gaps are most likely.
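A simple triage pass over a chart worklist might look like this. It assumes each chart dict carries the API result's candidates list; the threshold is a tuning knob for your population, not a rule.

```python
def triage(charts: list[dict], threshold: int = 3):
    """Split charts into a review-first pile and a likely-clean pile."""
    review = sorted(
        (c for c in charts if len(c["candidates"]) >= threshold),
        key=lambda c: len(c["candidates"]),
        reverse=True,  # most candidates first
    )
    clean = [c for c in charts if len(c["candidates"]) < threshold]
    return review, clean
```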
Documentation Completeness
Over time, CDI programs should improve the baseline quality of documentation — not just catch errors after the fact.

Measure:
- Average number of candidates per note (trending down indicates improving documentation)
- Query rate by physician (identifies who benefits most from education)
- Repeat query topics (identifies systemic documentation gaps by condition type)
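The first metric above can be sketched as a monthly trend, assuming each note record carries a month key ("YYYY-MM") and the API's candidates list:

```python
from collections import defaultdict
from statistics import mean

def candidates_per_note_trend(notes: list[dict]) -> dict:
    """Average candidates per note by month; a downward trend suggests
    improving baseline documentation."""
    by_month = defaultdict(list)
    for n in notes:
        by_month[n["month"]].append(len(n["candidates"]))
    return {m: mean(by_month[m]) for m in sorted(by_month)}
```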
Implementation
Concurrent Review — Inpatient
In a concurrent review workflow, the API processes notes as they are updated during the admission. CDI specialists review results daily alongside the chart.

codes — these are the diagnoses the model confidently predicts from the note. In a concurrent review, compare these against what’s already been coded. Missing codes may indicate documentation gaps.

candidates — these are your query candidates. Each represents a clinical concept found in the note that may need physician clarification before it can be coded. For example, the model might surface “acute kidney injury” as a candidate if the note mentions rising creatinine but doesn’t explicitly document the diagnosis.
- Call the API after each significant note update (attending notes, progress notes, operative reports, consult notes)
- Diff the results against the current working code list
- Surface net-new candidates items to the CDI specialist as potential queries
- Use evidence spans to show the specialist exactly where in the note each concept was mentioned
- Track which candidates convert to confirmed codes across the stay
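The diff-and-surface steps above can be sketched as follows, assuming the response is a dict with codes and candidates lists whose items carry a code field (an assumption about the response shape):

```python
def net_new(api_result: dict, working_codes: set, seen_candidates: set):
    """Diff one API call against the current state of the stay.

    working_codes: codes already on the working list for this admission.
    seen_candidates: candidate codes already surfaced to the specialist.
    """
    missing_codes = {c["code"] for c in api_result["codes"]} - working_codes
    new_candidates = [c for c in api_result["candidates"]
                      if c["code"] not in seen_candidates]
    return missing_codes, new_candidates
```

Call this after each significant note update and feed new_candidates into the specialist's worklist; add surfaced codes to seen_candidates so the same concept is not raised twice.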
Retrospective Review — Post-Discharge
In a retrospective workflow, the API processes the complete chart after discharge. The focus is on finding codes that were documented but not captured.

- Process the complete discharge summary (or concatenated key notes) after coding is complete
- Compare API codes against the submitted code set — codes present in the API response but absent from the claim are review candidates
- Rank candidates by DRG impact — a missed MCC that shifts the DRG is higher priority than a missed secondary diagnosis
- Route the highest-impact findings to CDI specialists for review and potential late query or coding amendment
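The compare-and-rank steps can be sketched as follows. The drg_impact dollar estimate is a hypothetical field you would attach yourself; computing real DRG impact requires running the encounter through a DRG grouper, which is out of scope here.

```python
def review_worklist(api_codes: set, claim_codes: set,
                    candidates: list[dict]):
    """Build a post-discharge review list.

    Returns codes the API predicted but the claim omitted, plus
    candidates ranked by estimated DRG impact (highest first).
    """
    missing = sorted(api_codes - claim_codes)
    ranked = sorted(candidates,
                    key=lambda c: c.get("drg_impact") or 0,
                    reverse=True)
    return missing, ranked
```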
Outpatient CDI
Outpatient CDI workflows typically run post-encounter, comparing the API’s predictions against what was submitted. Compare codes + candidates against the billed E&M level. A note with 4+ documented conditions and prescription management supports a higher complexity level than a note with only 1-2 conditions.
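That heuristic can be sketched as a simple flag. The rx_management flag and the 1-5 E&M level scale are assumptions about your encounter data, not API fields, and this rough count is no substitute for a real MDM-level determination.

```python
def may_support_higher_level(result: dict, billed_level: int,
                             rx_management: bool) -> bool:
    """Flag encounters where documentation may support a higher E&M level."""
    conditions = len(result["codes"]) + len(result["candidates"])
    # 4+ conditions plus prescription management suggests at least
    # moderate complexity; flag only if billed below level 4.
    return conditions >= 4 and rx_management and billed_level < 4
```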
Tying It All Together
CDI builds on the encounter coding foundation by adding a human review layer focused specifically on documentation quality. The API doesn’t replace CDI specialists — it gives them a head start by surfacing the candidates and evidence they need to write better queries, faster. Start with the workflow your CDI team uses today (concurrent or retrospective), measure query yield as your primary success metric, and expand from there.

Please contact us if you need help setting up a CDI workflow or have questions about concurrent or retrospective review.