ABC Health Plan • Integrated Analytics Report by OneMind Strata Research

Interviews × AI Engines × Benchmarks — Executive Readout

A synthesized view combining stakeholder interviews, AI-powered analysis of member/provider journeys, and benchmark data across peer plans and markets.

  • Interview Coverage: executives, ops, clinical, members
  • AI Insight Confidence: validated against benchmarks
  • Benchmark Completeness: sources include Stars, HEDIS, MLR

Research Design & Methodology

Multi-Source Integration Framework

  • Fusion model: interviews (coded themes) + AI answer signals + plan benchmarks.
  • Validation: cross-check AI outputs with authoritative artifacts (policies, PA grids, formulary, filings).
  • Triangulation: theme ↔ metric ↔ AI-signal agreement threshold ≥ 0.7 for inclusion.
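
To make the inclusion rule concrete, here is a minimal sketch of how the ≥ 0.7 agreement gate could be computed, assuming each source emits a normalized signal in [0, 1]; the scoring function and values are illustrative, not the production fusion model.

```python
# Triangulation gate: retain an insight only when mean pairwise agreement
# across the three sources meets the 0.7 threshold. Values are illustrative.

def triangulation_score(theme: float, ai_signal: float, metric: float) -> float:
    """Mean pairwise agreement of three normalized signals in [0, 1]."""
    pairs = [(theme, ai_signal), (theme, metric), (ai_signal, metric)]
    return sum(1.0 - abs(a - b) for a, b in pairs) / len(pairs)

AGREEMENT_THRESHOLD = 0.7  # inclusion cutoff from the framework

insight = {"theme": 0.82, "ai_signal": 0.74, "metric": 0.69}  # hypothetical signals
score = triangulation_score(**insight)
print(f"agreement = {score:.2f}, include = {score >= AGREEMENT_THRESHOLD}")
```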

Interview Methodology

  • Sample: 38 interviews (7 exec, 11 ops, 9 clinical, 11 members/caregivers).
  • Semi-structured guides on PA, claims, benefits, directory, service.
  • Double-coding, Cohen’s κ = 0.81; discrepancies adjudicated weekly.
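
For reference, Cohen's κ for two coders can be computed directly from the coded segments; the theme codes below are hypothetical and chosen only to show the calculation, not to reproduce the reported 0.81.

```python
# Cohen's kappa for two coders over the same transcript segments:
# (observed agreement - chance agreement) / (1 - chance agreement).
from collections import Counter

def cohens_kappa(coder_a: list[str], coder_b: list[str]) -> float:
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical theme codes for six segments (PA = prior auth, DIR = directory).
a = ["PA", "EOB", "PA", "DIR", "PA", "EOB"]
b = ["PA", "EOB", "DIR", "DIR", "PA", "EOB"]
print(f"kappa = {cohens_kappa(a, b):.2f}")  # 0.75 on this toy sample
```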

AI Engine Integration

  • Models: ChatGPT, Claude, Gemini, Perplexity (answer engines); policy/document retrieval checks.
  • Pipelines: transcript NLP (topic, sentiment, stance) → entity/intent → evidence linking.
  • Human-in-the-loop QA: outlier review, hallucination filter, policy anchor verification.
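
One way the policy-anchor verification could work, sketched below with hypothetical document IDs: an AI-derived claim passes QA only if it cites a document in the canonical index and its observation date falls on or after that document's effective date.

```python
# Policy-anchor check: a claim passes only if it cites a document in the
# canonical index and was observed on/after that document's effective date.
# Document IDs and dates are hypothetical.
from datetime import date

POLICY_INDEX = {
    "PA-GRID-2024": date(2024, 1, 1),   # hypothetical PA criteria grid
    "FORMULARY-T3": date(2024, 3, 15),  # hypothetical formulary tier doc
}

def anchor_verified(claim: dict) -> bool:
    effective = POLICY_INDEX.get(claim.get("anchor_id"))
    if effective is None:
        return False  # no canonical anchor: route to hallucination review
    return claim["observed_on"] >= effective  # date validity check

claim = {"anchor_id": "PA-GRID-2024", "observed_on": date(2024, 6, 1)}
print(anchor_verified(claim))  # True
```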

Data Collection Architecture

  • Primary: stakeholder interviews, member focus mini-groups.
  • AI-gathered: sentiment from public reviews & forums; policy coverage crawl.
  • Benchmarks: Stars/HEDIS extracts, MLR & 10-K/K-1, CMS PA timeliness, directory audits.

Analytical Processing Framework

Qualitative Analysis Engine

[Figure: theme sentiment distribution (positive / neutral / negative)]
  • Top themes: “PA clarity”, “EOB transparency”, “Directory trust”.
  • Theme stability tested across groups; drift flagged for exec review.
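
A minimal sketch of the stability test, assuming theme prevalence is tracked as a share of coded segments per stakeholder group; the shares and tolerance below are hypothetical.

```python
# Flag a theme when its prevalence spread across stakeholder groups
# exceeds a tolerance. Shares and tolerance are hypothetical.

def theme_drift(shares_by_group: dict[str, float], tolerance: float = 0.15) -> bool:
    values = shares_by_group.values()
    return max(values) - min(values) > tolerance

pa_clarity = {"exec": 0.35, "ops": 0.52, "clinical": 0.48, "members": 0.61}
print(theme_drift(pa_clarity))  # True: route to exec review
```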

Quantitative Benchmark Analysis

[Figure: ABC percentile vs. peer median across benchmark measures]
  • Percentiles shown for Stars, HEDIS, MLR, PA timeliness, FCR (service).
  • Forecast models (ARIMAX) align with interview-derived risk/driver tags.
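
As a sketch of the forecasting step, statsmodels' SARIMAX can fit an ARIMAX model with an interview-derived driver tag as an exogenous regressor; the series below is synthetic and the (1, 0, 0) order is a placeholder, not the tuned specification.

```python
# ARIMAX sketch: forecast a benchmark series (hypothetical PA 95th-percentile
# turnaround, in hours) with an interview-derived risk tag as an exogenous input.
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(0)
pa_hours = 72 - 0.8 * np.arange(12) + rng.normal(0, 1.5, 12)  # synthetic monthly series
risk_tag = (np.arange(12) >= 6).astype(float).reshape(-1, 1)  # driver flag from coded interviews

fit = SARIMAX(pa_hours, exog=risk_tag, order=(1, 0, 0)).fit(disp=False)
print(fit.forecast(steps=2, exog=np.ones((2, 1))))  # two periods ahead, tag still active
```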

Cross-Validation Protocols

  • Answer engines must cite a canonical policy/FAQ to count toward “presence”.
  • Interview claims matched to benchmark metrics within ±1 quarter (matching rule sketched after this list).
  • Contradictions trigger human review and corrective sourcing tasks.
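
The ±1-quarter rule reduces to a simple index comparison once periods are encoded as (year, quarter) pairs; a minimal sketch:

```python
# +/- 1 quarter matching: encode periods as (year, quarter) and compare indices.

def within_one_quarter(claim_q: tuple[int, int], metric_q: tuple[int, int]) -> bool:
    claim_idx = claim_q[0] * 4 + claim_q[1]
    metric_idx = metric_q[0] * 4 + metric_q[1]
    return abs(claim_idx - metric_idx) <= 1

print(within_one_quarter((2024, 2), (2024, 3)))  # True: adjacent quarters match
print(within_one_quarter((2024, 1), (2024, 4)))  # False: triggers human review
```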

Correlation Explorer

Example: ↑ policy citation authority correlates with ↓ PA denials (r = -0.58).
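
The correlation itself is a plain Pearson r; the points below are synthetic, chosen to show the computation rather than to reproduce the reported -0.58.

```python
# Pearson r between a citation-authority score and PA denial rate.
# Synthetic points; the live explorer reports r = -0.58.
import numpy as np

citation_authority = np.array([0.42, 0.55, 0.61, 0.70, 0.78, 0.85])
pa_denial_rate = np.array([0.19, 0.17, 0.16, 0.14, 0.12, 0.11])

r = np.corrcoef(citation_authority, pa_denial_rate)[0, 1]
print(f"r = {r:.2f}")  # strongly negative on this toy sample
```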

Integrated Insight Generation

Pattern Recognition & Synthesis

  • Convergence: Interviews + AI agree “PA criteria discoverability” is primary friction.
  • Divergence: Executives rate service FCR higher than member interviews; reconcile via QA sample.
  • Mapped indicators: PA policy coverage → denials, directory accuracy → Stars access measures.

Gap Analysis Framework

Area               | Interview Perception | Benchmark Reality | Gap
PA Timeliness      | “Inconsistent”       | 95th pct = 72 hrs | Target ≤ 48 hrs
Directory Accuracy | “Unreliable”         | Audit pass 87%    | Goal ≥ 95%
EOB Clarity        | “Jargon heavy”       | CSAT 3.7/5        | Goal ≥ 4.2

Strategic Intelligence Outputs

Stakeholder Perspective Mapping

  • Executives: Focus on Stars and medical cost trend.
  • Operations: Prior auth rules and cross-team handoffs.
  • Clinical: Criteria transparency, exception workflows.
  • Members: PCP selection, cost clarity, digital claims status.

AI-Enhanced Competitive Analysis

  • Automated peer monitoring: PA pages, formulary tiers, turnaround SLAs (change detection sketched below).
  • Signals point to rising self-service in two markets (Perplexity coverage +22%).
  • Emerging risk: inconsistent state variants create answer drift in AI Overviews.
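
A hedged sketch of the change detection behind the peer monitor: hash each tracked page's text between runs and queue a review when it changes. A production monitor would first strip boilerplate and normalize markup; that step is omitted here, and the page text values are hypothetical.

```python
# Detect content drift on a tracked peer page by comparing stable hashes
# across monitoring runs.
import hashlib

def fingerprint(page_text: str) -> str:
    return hashlib.sha256(page_text.strip().lower().encode()).hexdigest()

last_run = fingerprint("Prior auth turnaround: 72 hours")
this_run = fingerprint("Prior auth turnaround: 48 hours")
if this_run != last_run:
    print("policy drift detected; queue for review")
```
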
Predictive & Prescriptive Analytics

Scenario Modeling

[Figure: forecast share of voice (SOV) vs. target band]

Action Priority Matrix

Initiatives are plotted on Impact × Feasibility axes, with bubble size proportional to cost; prioritize the top-right quadrant (high impact, high feasibility).
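
A minimal sketch of the ranking behind the matrix, with hypothetical scores: each initiative is rated on impact and feasibility in [0, 1] and sorted by the product, while cost drives bubble size only.

```python
# Rank initiatives by impact x feasibility; cost is carried along for bubble size.
initiatives = [
    {"name": "Canonical PA pages",     "impact": 0.9, "feasibility": 0.8, "cost": 3},
    {"name": "Directory verification", "impact": 0.8, "feasibility": 0.6, "cost": 5},
    {"name": "EOB simplification",     "impact": 0.7, "feasibility": 0.9, "cost": 2},
]

for item in sorted(initiatives, key=lambda i: i["impact"] * i["feasibility"], reverse=True):
    print(f"{item['name']}: priority = {item['impact'] * item['feasibility']:.2f}")
```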

Quality Assurance & Validation

Data Integrity Protocols

  • Transcript QC: completeness, speaker tagging, redaction audit.
  • AI output checks: policy anchor present, date validity, locale match.
  • Benchmark provenance: source IDs, refresh cadence, coverage notes.

Bias Detection & Mitigation

  • Interviewer/participant bias monitored via balance scores.
  • Model bias tests across question phrasing & locales.
  • Benchmark selection sensitivity analysis (exclude/replace tests).
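
A minimal sketch of the exclude test, assuming benchmark sources are blended into a simple average: recompute the blend with each source left out and flag any source whose removal moves the result beyond a margin. Scores and margin are hypothetical.

```python
# Exclude test: recompute the blended score with each source left out and
# flag sources whose removal moves the blend beyond the margin.

def sensitivity_flags(source_scores: dict[str, float], margin: float = 0.05) -> list[str]:
    baseline = sum(source_scores.values()) / len(source_scores)
    flags = []
    for name in source_scores:
        rest = [v for k, v in source_scores.items() if k != name]
        if abs(sum(rest) / len(rest) - baseline) > margin:
            flags.append(name)
    return flags

print(sensitivity_flags({"Stars": 0.62, "HEDIS": 0.58, "MLR": 0.41}))  # ['MLR']
```
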
Reporting Architecture

Executive Dashboard

  • Integrated KPIs, qualitative context, AI trend alerts.
  • Evidence map: each insight → sources (interview quote, policy URL, metric).
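
One plausible shape for the evidence map, with illustrative field names: each insight carries its supporting quote, policy URL, and metric so the chain stays traceable from dashboard claim to source.

```python
# Evidence map entry: every insight carries its quote, policy URL, and metric.
from dataclasses import dataclass, field

@dataclass
class Insight:
    claim: str
    interview_quotes: list[str] = field(default_factory=list)
    policy_urls: list[str] = field(default_factory=list)
    metrics: dict[str, float] = field(default_factory=dict)

insight = Insight(
    claim="PA criteria discoverability is the primary friction",
    interview_quotes=["'PA clarity' theme, ops cohort"],
    policy_urls=["https://example.com/pa-grid"],  # hypothetical URL
    metrics={"pa_95th_pct_hours": 72.0},          # from the benchmark extract
)
print(insight.claim)
```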

Detailed Analysis Sections

  • Interview findings with representative quotes and theme codes.
  • AI engine results & methodology; benchmark tables/visuals.
  • Joined conclusions with traceable evidence chains.

Implementation & Monitoring Framework

Continuous Intelligence System

  • Quarterly interviews, monthly AI re-runs, benchmark refresh.
  • Automated alerts: policy drift, directory integrity, PA SLA variance.
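
As one concrete alert, the PA SLA variance check could compare the latest 95th-percentile turnaround against the 48-hour target plus a tolerance band; the window and tolerance below are assumptions, not the configured thresholds.

```python
# PA SLA variance alert: fire when the latest 95th-percentile turnaround
# exceeds the 48-hour target plus a tolerance band. Data are hypothetical.

PA_SLA_HOURS = 48.0

def pa_sla_alert(recent_95th_pct: list[float], tolerance: float = 0.1) -> bool:
    return recent_95th_pct[-1] > PA_SLA_HOURS * (1 + tolerance)

print(pa_sla_alert([49.5, 51.0, 54.2]))  # True: 54.2h breaches the band
```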

Action Tracking

Initiative             | Owner        | Status    | Metric
Canonical PA pages     | Clinical Ops | In flight | PA 95th pct ≤ 48h
Directory verification | Provider Net | Pilot     | Accuracy ≥ 95%
EOB simplification     | Member Exp   | Design    | CSAT ≥ 4.2