An implementation guide for revenue cycle teams and the engineering teams building coding audit and risk adjustment tools. From HCC capture sweeps across a patient panel to retrospective audits on historical claims, the Medical Coding API surfaces missed codes and revenue leakage at scale — without requiring coders to re-read every note from scratch.

Before Building

Revenue cycle coding workflows fall into two broad categories, and they serve different business objectives.

HCC capture for risk adjustment focuses on ensuring every relevant Hierarchical Condition Category is documented and coded at least once per year per patient. This directly affects RAF scores and per-member-per-month revenue in Medicare Advantage and value-based care programs. The API processes outpatient encounter notes and surfaces ICD-10-CM codes that map to HCC categories.

Retrospective under-coding detection focuses on finding revenue leakage from historical encounters — missed secondary diagnoses, under-coded severity, or documentation that supported a higher DRG but wasn’t captured at the time. The API re-processes historical notes and compares against original claims.

Many organizations run both. HCC sweeps are typically annual campaigns tied to risk adjustment deadlines. Retrospective audits are ongoing quality programs.
The API tells you what codes the documentation supports. To find what was missed, you need to compare against what was actually billed.

Plan how you will:
  • Pull original claim data (billed codes) for comparison against API results
  • Map ICD-10-CM codes to HCC categories using CMS HCC mapping tables
  • Identify the delta — codes the API found in the documentation that were not on the original claim
The comparison logic lives in your integration layer, not in the API. The API’s job is to extract codes from text. Your system’s job is to determine which of those codes represent revenue opportunities.
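That division of labor can be sketched in a few lines. A minimal example of the delta computation, where `HCC_MAP` stands in for a CMS mapping table loaded from the published files, and the specific codes and category numbers shown are illustrative:

```python
# Minimal sketch of the integration-layer comparison: API-extracted codes
# vs. billed codes. HCC_MAP is a stand-in for a CMS HCC mapping table;
# the category numbers below are placeholders, not authoritative mappings.
HCC_MAP = {
    "E11.40": 18,   # type 2 diabetes with neuropathy (illustrative HCC)
    "I50.22": 85,   # chronic systolic heart failure  (illustrative HCC)
    "J44.9":  111,  # COPD                            (illustrative HCC)
}

def find_capture_candidates(api_codes, billed_codes):
    """Return codes the documentation supports but the claim lacks,
    annotated with their HCC category when one exists."""
    missed = set(api_codes) - set(billed_codes)
    return [{"code": code, "hcc": HCC_MAP.get(code)} for code in sorted(missed)]

candidates = find_capture_candidates(
    api_codes=["E11.40", "I50.22", "J44.9"],
    billed_codes=["J44.9"],
)
# Only the HCC-mapped misses are revenue-relevant for risk adjustment.
hcc_candidates = [c for c in candidates if c["hcc"] is not None]
```

The same delta logic serves both workflows; only the reference set changes (current plan-year claims for HCC sweeps, the original claim for retrospective audits).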
Not every missed code is worth pursuing. Define how you will rank and triage findings.

Consider:
  • HCC weight: Higher-weight HCC categories have more revenue impact. Prioritize candidates that map to high-weight categories.
  • DRG impact: For inpatient retrospective audits, a missed MCC that shifts the DRG is worth more than a missed secondary diagnosis that doesn’t.
  • Volume: A commonly missed code across hundreds of encounters may represent more total value than a rare high-weight miss.
  • Actionability: Some findings require physician outreach for documentation addenda. Others may be codeable from existing documentation. Prioritize findings that can be acted on without additional physician burden.
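One way to combine these criteria is a composite score that orders the work queue. A sketch; the weights, field names, and the addendum discount are assumptions to tune against your own validation data, not a prescribed formula:

```python
# Illustrative triage scoring: rank findings by expected value and effort.
def triage_score(finding):
    score = finding["hcc_weight"] * 100              # revenue proxy (risk adjustment)
    score += finding.get("drg_delta_usd", 0) / 100   # DRG shift value (inpatient)
    score += finding.get("panel_frequency", 0) * 10  # commonly missed -> systemic fix
    if finding.get("needs_physician_addendum"):
        score *= 0.5                                 # extra outreach discounts the finding
    return score

findings = [
    {"id": "a", "hcc_weight": 0.3, "panel_frequency": 5},
    {"id": "b", "hcc_weight": 1.2, "needs_physician_addendum": True},
    {"id": "c", "hcc_weight": 0.1, "drg_delta_usd": 9000},
]
queue = sorted(findings, key=triage_score, reverse=True)
```

Note how the DRG-shifting finding outranks the high-weight HCC that requires an addendum, matching the actionability guidance above.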

Success Metrics

HCC capture rate is the percentage of documentable HCCs that are actually coded and submitted. This is the primary metric for risk adjustment programs.

Measure:
  • HCC capture rate before and after API integration
  • Net new HCCs identified per patient per year
  • RAF score improvement attributable to recaptured HCCs
  • Revenue impact (per-member-per-month change)
Track capture rate by HCC category to identify which condition types your providers most frequently under-document.
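The per-category capture rate reduces to a ratio of counts. A sketch, assuming you can enumerate documentable HCCs (from API output) and submitted HCCs (from claims) per patient; the field names and category numbers are illustrative:

```python
from collections import defaultdict

def capture_rate_by_category(patients):
    """patients: iterable of dicts with 'documentable' and 'submitted'
    HCC-category sets. Returns capture rate per HCC category."""
    documented = defaultdict(int)
    captured = defaultdict(int)
    for p in patients:
        for hcc in p["documentable"]:
            documented[hcc] += 1
            if hcc in p["submitted"]:
                captured[hcc] += 1
    return {hcc: captured[hcc] / documented[hcc] for hcc in documented}

rates = capture_rate_by_category([
    {"documentable": {18, 85}, "submitted": {18}},
    {"documentable": {18, 111}, "submitted": {18, 111}},
])
```

Categories with the lowest rates point at the condition types your providers most often under-document.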
For retrospective audits, measure the actual revenue recovered from findings the API surfaced.

Measure:
  • Total revenue recovered from late charges, corrective coding, and DRG upgrades
  • Revenue per encounter reviewed
  • Cost of review (coder time) vs. revenue recovered — this is your ROI
Not every finding converts to revenue. Track the conversion funnel: API finding → coder review → confirmed under-code → submitted correction → payment received.
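The funnel can be tracked as simple stage counts; a sketch with illustrative stage names and volumes:

```python
# Conversion funnel for audit findings. Stage labels are illustrative;
# the point is measuring drop-off between adjacent stages.
STAGES = ["api_finding", "coder_review", "confirmed", "submitted", "paid"]

def funnel_conversion(counts):
    """Stage-to-stage conversion rates for a dict of stage -> count."""
    return {
        f"{a}->{b}": counts[b] / counts[a]
        for a, b in zip(STAGES, STAGES[1:])
        if counts[a]
    }

rates = funnel_conversion({
    "api_finding": 1000, "coder_review": 600, "confirmed": 300,
    "submitted": 250, "paid": 200,
})
```

A sharp drop at one stage tells you where to intervene: low review-to-confirmation suggests precision problems, low submission-to-payment suggests payer pushback.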
The API should make auditors more productive by focusing their attention on the encounters most likely to have findings.

Measure:
  • Encounters reviewed per auditor per day
  • Hit rate (percentage of reviewed encounters with actionable findings)
  • Time per encounter review
Use the API to pre-screen encounters and rank them by likely impact. Auditors review the highest-ranked encounters first, improving hit rate and making better use of limited review capacity.
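A sketch of the pre-screening queue and the hit-rate metric, with illustrative field names (`impact_score`, `actionable`):

```python
def review_queue(encounters, capacity):
    """Rank encounters by a likely-impact score and take the top slice
    that fits today's review capacity."""
    ranked = sorted(encounters, key=lambda e: e["impact_score"], reverse=True)
    return ranked[:capacity]

def hit_rate(reviewed):
    """Share of reviewed encounters with at least one actionable finding."""
    return sum(1 for e in reviewed if e["actionable"]) / len(reviewed)

encounters = [
    {"id": 1, "impact_score": 0.9, "actionable": True},
    {"id": 2, "impact_score": 0.2, "actionable": False},
    {"id": 3, "impact_score": 0.7, "actionable": True},
]
queue = review_queue(encounters, capacity=2)
```

How `impact_score` is derived is up to your integration layer; a composite of HCC weight and finding count per encounter is one reasonable starting point.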
Over time, the patterns you find in retrospective audits should feed back into provider education and documentation improvement.

Measure:
  • Trending under-coded conditions by specialty or provider
  • Repeat findings for the same condition type across audit cycles
  • Reduction in findings per provider over time (indicates learning)
Use evidence spans from the API to create targeted education materials — showing physicians exactly where in their notes a condition was mentioned but not documented at codeable specificity.

Implementation

HCC Capture — Risk Adjustment Sweeps

Process outpatient encounter notes to surface ICD-10-CM codes with HCC mappings. Both codes and candidates are valuable — a chronic condition mentioned casually in a note may appear only in candidates but still qualify for HCC capture with physician confirmation.
curl -X POST https://api.eu.corti.app/v2/tools/coding/ \
  -H "Authorization: Bearer <token>" \
  -H "Tenant-Name: <tenant-name>" \
  -H "Content-Type: application/json" \
  -d '{
    "system": ["icd10cm-outpatient"],
    "context": [
      {
        "type": "text",
        "text": "Assessment and Plan: 1. COPD — stable on current inhalers, continue tiotropium and PRN albuterol. FEV1 52% predicted on last PFTs. 2. CHF with reduced EF — last echo showed EF 30%, on guideline-directed therapy with carvedilol, lisinopril, and spironolactone. Euvolemic today. 3. Chronic kidney disease stage 3b — GFR 38, stable. Monitoring potassium with spironolactone. 4. Former smoker — quit 2 years ago, counseled on continued cessation."
      }
    ]
  }'
Integration pattern:
  1. Process encounter notes — for annual sweeps, run all outpatient encounters for the measurement period
  2. Map each returned ICD-10-CM code to HCC categories using CMS mapping tables
  3. Cross-reference against previously submitted claims for the same patient in the current plan year
  4. Codes that map to HCC categories and were not already submitted are your capture candidates
  5. Prioritize by HCC weight — high-weight categories first
  6. Use evidence spans to show reviewers exactly which section of the note mentions the condition
  7. Route confirmed findings for physician outreach or coding amendment
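The steps above can be sketched end to end. The request mirrors the curl example; the response field names (`codes`, `code`) and the shape of the HCC mapping are assumptions about the payload and reference data, not a documented schema, and the weights shown are placeholders:

```python
import json
import urllib.request

API_URL = "https://api.eu.corti.app/v2/tools/coding/"

def sweep_encounter(note_text, token, tenant):
    """Step 1: extract outpatient codes for one encounter note."""
    payload = json.dumps({
        "system": ["icd10cm-outpatient"],
        "context": [{"type": "text", "text": note_text}],
    }).encode()
    req = urllib.request.Request(
        API_URL, data=payload, method="POST",
        headers={"Authorization": f"Bearer {token}",
                 "Tenant-Name": tenant,
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        body = json.load(resp)
    # "codes"/"code" are an assumed response shape; adjust to the real schema.
    return [c["code"] for c in body.get("codes", [])]

def rank_capture_candidates(extracted, submitted_this_year, hcc_map):
    """Steps 2-5: keep HCC-mapped codes not already on a claim this plan
    year, ranked by HCC weight (highest revenue impact first)."""
    new = [c for c in extracted if c in hcc_map and c not in submitted_this_year]
    return sorted(new, key=lambda c: hcc_map[c]["weight"], reverse=True)

ILLUSTRATIVE_HCC_MAP = {
    "I50.22": {"hcc": 85, "weight": 0.331},   # placeholder weight
    "N18.32": {"hcc": 139, "weight": 0.069},  # placeholder weight
}
candidates = rank_capture_candidates(
    ["I50.22", "N18.32", "Z87.891"],           # Z87.891 has no HCC mapping
    submitted_this_year={"I50.22"},
    hcc_map=ILLUSTRATIVE_HCC_MAP,
)
```

Steps 6 and 7 (evidence spans and routing) stay in your review tooling, which is where the confirmed candidates land.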

Retrospective Under-Coding Detection

Re-process historical encounter notes and compare the API’s output against the original billed codes to identify revenue leakage.
curl -X POST https://api.eu.corti.app/v2/tools/coding/ \
  -H "Authorization: Bearer <token>" \
  -H "Tenant-Name: <tenant-name>" \
  -H "Content-Type: application/json" \
  -d '{
    "system": ["icd10cm-inpatient"],
    "context": [
      {
        "type": "text",
        "text": "Discharge Summary: 68-year-old female admitted for elective right total knee arthroplasty. PMH significant for morbid obesity (BMI 42), obstructive sleep apnea on CPAP, type 2 diabetes on insulin with peripheral neuropathy, and chronic venous insufficiency. Post-op course complicated by acute blood loss anemia requiring 2 units pRBC. DVT prophylaxis with enoxaparin. Discharged to inpatient rehab on POD 3."
      }
    ]
  }'
Integration pattern:
  1. Run historical notes through the API
  2. Compare returned codes against the original claim — codes present in the API response but absent from the claim are under-coding candidates
  3. Review candidates for additional findings — these may represent conditions documented but not at sufficient specificity for the original coder to capture
  4. Rank findings by DRG impact (for inpatient) or HCC weight (for outpatient)
  5. Use evidence spans on a stratified sample for validation before acting on results at scale
  6. Store evidence spans for any codes promoted to late charges — these form the documentation support trail for payer queries
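Steps 2 and 3 reduce to a single comparison against the original claim. A sketch, assuming the response separates confident `codes` from lower-confidence `candidates` and carries an evidence span with each finding (the field names are assumptions about the payload shape; the ICD-10-CM codes correspond to the discharge summary above):

```python
def undercoding_review(api_result, original_claim_codes):
    """Split API output against the original claim: codes supported by the
    documentation but absent from the claim (under-coding candidates), and
    lower-confidence candidates needing specificity review. Evidence spans
    travel with each finding as the support trail for payer queries."""
    billed = set(original_claim_codes)
    missing = [c for c in api_result["codes"] if c["code"] not in billed]
    needs_review = [c for c in api_result.get("candidates", [])
                    if c["code"] not in billed]
    return {"missing": missing, "needs_review": needs_review}

result = undercoding_review(
    {"codes": [
        {"code": "D62", "evidence": "acute blood loss anemia requiring 2 units pRBC"},
        {"code": "E66.01", "evidence": "morbid obesity (BMI 42)"},
     ],
     "candidates": [
        {"code": "G47.33", "evidence": "obstructive sleep apnea on CPAP"},
     ]},
    original_claim_codes=["E66.01"],
)
```

Here the acute blood loss anemia is the kind of missed MCC that can shift the DRG, which is why the ranking in step 4 puts it at the top of the queue.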

Validating Results Before Acting at Scale

Before rolling out findings to coders or physicians, validate the API’s accuracy on a representative sample.
1. Sample: Pull a stratified sample of encounters (by specialty, payer, encounter type)
2. Shadow run: Process each note through the API and compare against original claims
3. Expert review: Have experienced coders review a subset of API findings to confirm accuracy
4. Measure: Calculate precision (what percentage of API findings are valid) and recall (what percentage of known under-codes the API catches)
5. Calibrate: Adjust your prioritization thresholds based on validation results
This step is especially important for retrospective audits where findings may trigger claim amendments — you need confidence in the results before taking action.
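The measure step reduces to two ratios over sets of (encounter, code) pairs; a sketch with made-up sample data:

```python
def precision_recall(api_findings, confirmed_undercodes):
    """api_findings: set of (encounter_id, code) pairs the API flagged.
    confirmed_undercodes: set of pairs expert coders confirmed.
    Precision: share of API findings that are valid.
    Recall: share of known under-codes the API caught."""
    true_positives = api_findings & confirmed_undercodes
    precision = len(true_positives) / len(api_findings)
    recall = len(true_positives) / len(confirmed_undercodes)
    return precision, recall

p, r = precision_recall(
    api_findings={(1, "D62"), (2, "G47.33"), (3, "I10")},
    confirmed_undercodes={(1, "D62"), (2, "G47.33")},
)
```

Low precision means coders waste review time on false leads; low recall means the sweep is leaving revenue uncaptured. Calibrate your thresholds against whichever matters more for the workflow.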

Tying It All Together

Revenue cycle workflows build on the encounter coding foundation by adding a comparison layer: what the documentation supports vs. what was actually billed. The delta represents your revenue opportunity. Start with the highest-value workflow for your organization — typically HCC sweeps for Medicare Advantage plans or retrospective audits for inpatient facilities — validate on a sample, and scale from there.
Please contact us if you need help with risk adjustment sweeps, retrospective audits, or processing at scale.