OMES AI Partnership Proposal

Oklahoma State AI Strategy & Engineering Pod Projects

Prepared for: Oklahoma Office of Management and Enterprise Services

Prepared by: Thomas Smyth, Concourse

Date: January 19, 2026

Meeting With: Dan Cronin (State CIO), Tai Phan (Chief AI & Technology Officer)

Follow-up Meeting: January 23, 2026, 4:30pm @ Lincoln Data Center

Table of Contents

  1. Executive Summary
  2. Working Demos
  3. Project Portfolio
  4. Engagement Framework
  5. Pricing Model
  6. Security, Privacy & Compliance
  7. Next Steps
  8. Appendix A: Legislative Research Discovery Questions
  9. Appendix B: Statewide Chatbot Discovery Questions
  10. Appendix C: SNAP QA System Discovery Questions

Executive Summary

OMES seeks to accelerate AI adoption across Oklahoma's 125 state agencies by deploying self-contained "engineering pods" that deliver production-ready AI capabilities. Rather than staff augmentation, the state wants outcome-based partnerships that leave behind working software and upskilled internal teams.

We have proposed three potential starting points for an initial test of this AI pod model: (1) an AI-powered legislative research assistant, (2) a statewide chatbot framework, and (3) a SNAP QA system designed to begin reducing the state's payment error rate. We have also identified a number of Tier 2 projects that may be of interest. Our immediate next step is to work with you to prioritize and scope these projects - determining which are top priorities and which are achievable over a relatively short time horizon with a modest resource investment.

Key Themes from Discussion

Working Demos

Important Caveat: These demos are conceptual prototypes built before any discovery conversation. They are intended to illustrate technical capabilities and spark discussion - not to represent final solutions. Actual implementation would follow proper discovery, requirements gathering, and stakeholder input.

Demo 1: Legislative Research Assistant

Live Demo: oklahoma-legislative-research.demos.onconcourse.com
Estimated Timeline: 4-6 weeks

A natural language Q&A system for Oklahoma legislative staff to research statutes, bills, and legal documents.

Note: This demo uses publicly available sources; production will use authoritative sources with proper governance.

Demo for Discussion

Current Architecture

• User Browser → Application (Chat UI, API Routes, Model Router)
• Search Index: 87k+ chunks, hybrid search (BM25 + vector)
• LLM Providers: OpenAI / Anthropic / Google
• Data Sources: OK Statutes (86 titles), Bills (28 sessions), plus additional sources

Architectural Decisions

Why a Vector Database? Vector databases enable semantic search - finding documents by meaning rather than just keywords. When a user asks "What are the rules for charter schools?", the system finds relevant statutes even if they don't contain those exact words.

Why TurboPuffer? We have evaluated several vector database options including Pinecone, Qdrant, Weaviate, and Chroma. We selected TurboPuffer for its excellent query performance, hybrid search capabilities (combining vector and keyword search), and competitive pricing. The architecture is portable - state-approved alternatives (Pinecone, PostgreSQL pgvector, Azure AI Search, etc.) can be substituted based on procurement requirements.
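
To make the hybrid approach concrete, here is a minimal sketch of how keyword (BM25) and vector results can be fused, assuming each index returns a ranked list of document IDs. The function and the statute IDs below are illustrative, not the demo's actual code.

```python
# Minimal sketch of hybrid-search fusion (reciprocal rank fusion).
# Assumes the BM25 index and the vector index each return a ranked
# list of document IDs. Illustrative only - not the demo's code.

def reciprocal_rank_fusion(result_lists, k=60):
    """Merge ranked lists so documents ranked highly by either
    keyword (BM25) or semantic (vector) search float to the top."""
    scores = {}
    for results in result_lists:
        for rank, doc_id in enumerate(results):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical ranked results for "What are the rules for charter schools?"
bm25_hits = ["70-3-134", "70-3-136", "70-5-117"]      # keyword matches
vector_hits = ["70-3-136", "70-3-145.3", "70-3-134"]  # semantic matches
print(reciprocal_rank_fusion([bm25_hits, vector_hits]))
# ['70-3-136', '70-3-134', ...] - sections surfaced by both methods rank first
```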

Data Sources: Additional data sources (Oklahoma Constitution, AG opinions, administrative rules, committee transcripts) can be indexed as needed during the discovery and build phases.

Evaluation Methodology: We would maintain an evaluation set of 50-100 canonical questions with known correct answers to track accuracy and citation correctness over time. This enables regression testing when models or data sources change.
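
As a sketch of what that regression harness could look like (the `ask_assistant` client and the eval entries are hypothetical):

```python
# Sketch of a regression harness for the canonical question set.
# `ask_assistant` is a hypothetical client that returns the answer
# text plus the citations it produced. Eval entries are illustrative.

EVAL_SET = [
    {
        "question": "What are the rules for charter schools?",
        "required_citations": ["70 O.S. § 3-134"],
    },
    # ... 50-100 canonical questions with known correct answers
]

def run_evals(ask_assistant):
    passed = 0
    for case in EVAL_SET:
        answer = ask_assistant(case["question"])
        if all(c in answer["citations"] for c in case["required_citations"]):
            passed += 1
        else:
            print(f"REGRESSION: {case['question']!r}")
    print(f"Citation accuracy: {passed}/{len(EVAL_SET)}")
```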

What We'll Build (Production)

| Phase | Deliverable |
| --- | --- |
| Discovery | Conduct user interviews, validate data sources, define user personas, establish success metrics |
| Data Expansion | Add Oklahoma Constitution, bill summaries, fiscal impact statements |
| Authentication | SSO integration with state identity provider |
| UI Polish | Saved searches, collaboration features, export to Word |
| Production Hardening | Error handling, monitoring, audit logging |
| Training | User documentation, admin runbooks, knowledge transfer |

Sample Questions It Can Answer

Success Metrics

| Metric | Target | How Measured |
| --- | --- | --- |
| Citation Precision | >95% | Citations link to correct statute/bill sections |
| Answer Groundedness | >90% | Answers supported by retrieved documents (no hallucination) |
| Response Latency | <5 seconds | Time from query to complete answer |
| Evaluation Set Accuracy | >85% | Performance on 50-100 canonical test questions |
| User Adoption | Track by role | Weekly active users by staff category |

Demo 2: Statewide Chatbot Platform

Live Demo: oklahoma-chatbot.demos.onconcourse.com
Estimated Timeline: 3-6 months

A multi-tenant conversational AI platform that enables any Oklahoma state agency, board, commission, or municipality to deploy an AI-powered Q&A assistant and embed it directly on their website.

Why a Unified Platform?

Content Governance & Safety

Demo for Discussion

Current Architecture

• Citizens → Chat Widget; Agencies → Admin Portal; State → Dashboard
• Chatbot Platform: multi-tenant, per-agency isolation, analytics
• Knowledge Bases: per-agency documents
• Database: tenants, logs
• LLM Providers: OpenAI / Anthropic / Google
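
To illustrate the per-agency isolation the platform enforces, here is a hedged sketch assuming every stored chunk carries a tenant ID; the names and the in-memory list are stand-ins for the real vector store.

```python
# Sketch of tenant-scoped retrieval. Every stored chunk is tagged with
# the owning agency's tenant_id, and every query is filtered by it
# server-side, so one agency's chatbot can never surface another
# agency's documents. The in-memory list stands in for the vector store.

from dataclasses import dataclass

@dataclass
class Chunk:
    tenant_id: str   # e.g., "odot", "tax-commission"
    text: str

KNOWLEDGE_BASE: list[Chunk] = []

def search(tenant_id: str, query: str) -> list[str]:
    # Isolation is enforced by the platform, not by the embed widget.
    candidates = [c for c in KNOWLEDGE_BASE if c.tenant_id == tenant_id]
    return [c.text for c in candidates if query.lower() in c.text.lower()]

KNOWLEDGE_BASE.append(Chunk("odot", "REAL ID appointments can be booked online."))
print(search("odot", "real id"))            # ['REAL ID appointments ...']
print(search("tax-commission", "real id"))  # [] - isolated
```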

What We'll Build (Production)

| Phase | Deliverable |
| --- | --- |
| Discovery | Identify pilot agencies, map content sources, define governance model |
| Core Platform | Production-grade multi-tenant backend with proper auth and isolation |
| Agency Onboarding | Self-service portal for agencies to create tenants, upload content |
| Analytics | Cross-agency reporting, deflection tracking, CSAT measurement |
| Multi-Language | Spanish support, expandable to Vietnamese, Korean |
| Rollout | Phased onboarding of 5-10 agencies with training and support |

Key Screens

Future Capability: IVR Integration

Roadmap: The statewide chatbot platform is the foundation. IVR integration is the next step, leading toward a statewide contact center that serves as a centralized hub for citizen inquiries and requests - a unified help desk for all of Oklahoma state government.

Success Metrics

| Metric | Target | How Measured |
| --- | --- | --- |
| Deflection Rate | [40-60%] | % of inquiries resolved without human handoff (pilot target) |
| Citizen Satisfaction (CSAT) | >4.0/5.0 | Post-conversation feedback survey |
| Containment Rate | [>80%] | % of conversations completed within chatbot |
| Time-to-Publish | [<24 hours] | Time from document upload to searchable in chatbot |
| Answer Accuracy | [>90%] | Spot-check audits by agency content owners |
| Accessibility Compliance | WCAG 2.1 AA | Automated + manual accessibility testing |
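
As a sketch of how deflection and containment would be computed from conversation logs (the log fields shown are assumptions, not the platform's actual schema):

```python
# Sketch of deflection and containment measurement from conversation
# logs. "Contained" = the conversation ended in the chatbot;
# "deflected" = resolved without a human handoff. Fields are assumptions.

def chatbot_metrics(conversations: list[dict]) -> dict:
    total = len(conversations)
    contained = sum(1 for c in conversations if not c["escalated"])
    deflected = sum(1 for c in conversations
                    if not c["escalated"] and c["resolved"])
    return {
        "containment_rate": contained / total,
        "deflection_rate": deflected / total,
    }

logs = [
    {"escalated": False, "resolved": True},   # answered by the chatbot
    {"escalated": False, "resolved": False},  # user abandoned
    {"escalated": True,  "resolved": True},   # handed to a human agent
]
print(chatbot_metrics(logs))  # containment 0.67, deflection 0.33
```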

Demo 3: SNAP Quality Assurance System

Live Demo: oklahoma-snap.demos.onconcourse.com
Estimated Timeline: 6-12 months

An AI-powered quality assurance system that provides an additional layer of review for SNAP (food stamps) benefit cases. The system independently validates eligibility determinations and benefit calculations, helping to identify and reduce errors before they impact Oklahoma's Payment Error Rate.

Data Handling & Privacy

SNAP case data contains highly sensitive PII (SSNs, income, household composition). Our approach:

Demo for Discussion

Current Architecture

Processing flow:

  1. Upload: cases + verification documents
  2. Extract: AI document OCR
  3. Validate: rules engine
  4. Calculate: benefits
  5. Compare: Oklahoma's determination vs. the QA system's
  6. Findings & explanations: errors, dollar impact, recommendations
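
To make the Calculate and Compare steps concrete, the sketch below uses the standard federal SNAP baseline (benefit = maximum allotment minus 30% of net income). The allotment figures, field names, and simplified deduction handling are illustrative; actual Oklahoma rules and tables would come from the policy manual during discovery.

```python
# Simplified sketch of the Calculate and Compare steps. Real SNAP rules
# involve many more deductions, caps, and Oklahoma-specific policies;
# the allotment table and numbers here are illustrative only.

MAX_ALLOTMENT = {1: 292, 2: 536, 3: 768, 4: 975}  # hypothetical table

def calculate_benefit(household_size: int, gross_income: float,
                      deductions: float) -> int:
    net_income = max(gross_income - deductions, 0)
    # Federal baseline: benefit = max allotment - 30% of net income
    return max(MAX_ALLOTMENT[household_size] - round(0.30 * net_income), 0)

def compare(case_id: str, state_amount: int, qa_amount: int):
    if state_amount == qa_amount:
        return None
    return {
        "case": case_id,
        "finding": "benefit mismatch",
        "state": state_amount,
        "qa_system": qa_amount,
        "dollar_impact": state_amount - qa_amount,  # positive = overpayment
    }

qa = calculate_benefit(household_size=2, gross_income=1500, deductions=600)
print(compare("OK-2026-0142", state_amount=536, qa_amount=qa))
# Flags a $270 overpayment for reviewer confirmation
```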

What We'll Build (Production)

| Phase | Deliverable |
| --- | --- |
| V1: QA System | Production deployment processing real (anonymized) cases alongside current QA process |
| Integration | Secure connection to Oklahoma's eligibility system for case data |
| Document Pipeline | Integration with Oklahoma's document management system |
| Policy Updates | Process to incorporate policy changes as rules are updated |
| Training Data | Capture reviewer feedback to train custom Oklahoma model |
| V2: Custom Model | Fine-tune AI on Oklahoma's historical QA data for higher accuracy |
| V3: Current™ Replacement | Full task/workflow management (if desired) |

Why This Matters

Oklahoma's SNAP Payment Error Rate: 10.87% (FY2024) per USDA Food and Nutrition Service.

H.R. 1 (119th Congress, Budget Reconciliation): This federal legislation fundamentally changes the economics of SNAP administration:

The Opportunity: States that reduce their Payment Error Rate will avoid federal penalties and recoup costs. AI-powered QA can catch errors before benefits are issued, reducing both overpayments and underpayments while improving outcomes for Oklahoma citizens.

Product Roadmap

| Phase | Description | Outcome |
| --- | --- | --- |
| V1: QA System | Standalone QA tool processing batch uploads | Prove accuracy; demonstrate error detection; build trust |
| V2: Custom Oklahoma Model | Fine-tune AI on Oklahoma's historical QA data | Higher accuracy; Oklahoma-specific pattern recognition |
| V3: Replace Current™ | Full task/workflow management system | Complete modernization of SNAP case processing |

Future Vision: The SNAP QA system establishes the foundation for OK Benefits "Intelligent Intake" + Document Automation - extending the same AI-powered document extraction and eligibility validation to SoonerCare (Medicaid) and Child Care programs, creating a unified intelligent intake system across Oklahoma's major benefits programs.

Success Metrics

| Metric | Target | How Measured |
| --- | --- | --- |
| Detection Precision | >85% | % of flagged errors confirmed by QA reviewer |
| Detection Recall | >80% | % of actual errors caught by system (vs. ME/QC findings) |
| False Positive Rate | <20% | Flagged items that are not actual errors |
| Reviewer Time Saved | 30-50% | Average case review time vs. baseline |
| PER Impact (projected) | 1-2 point reduction | Estimated based on error detection rate and dollar impact |
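
For transparency on how these metrics would be computed, here is a minimal sketch with made-up case IDs; the false positive rate follows the table's definition - the share of flags that are not actual errors.

```python
# Sketch of the detection metrics. Per the table above, "false positive
# rate" means the share of flagged items that are not actual errors
# (i.e., 1 - precision). Case IDs are made up.

def detection_metrics(flagged: set[str], actual_errors: set[str]):
    true_positives = flagged & actual_errors
    precision = len(true_positives) / len(flagged)      # confirmed / flagged
    recall = len(true_positives) / len(actual_errors)   # caught / all errors
    return precision, recall, 1 - precision

# Example: system flags 100 cases; ME/QC reviews find 105 true errors,
# 88 of which overlap with the system's flags.
flagged = {f"case-{i}" for i in range(100)}
actual = {f"case-{i}" for i in range(12, 117)}
p, r, fpr = detection_metrics(flagged, actual)
print(f"precision={p:.0%} recall={r:.0%} false-positive rate={fpr:.0%}")
# precision=88% recall=84% false-positive rate=12%
```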

Project Portfolio

Alignment with Oklahoma IT Strategic Plan 2026-2028

These projects directly support the statewide strategic goals outlined by OMES:

| Strategic Goal | Supporting Projects |
| --- | --- |
| Promote Customer Centricity | Statewide Chatbot, Legislative Assistant, TravelOK UX Analytics |
| Make Complexity Invisible | SNAP Automation, Forms Modernization |
| Foster Data-Driven Decision Making | TravelOK Analytics, SNAP QA Metrics |
| Empower Future-Ready Workforce | IT Training Program, Knowledge Transfer |
| Modernize with Purpose | All Tier 1 Projects, API Governance |

Tier 1: Priority Projects (Demos Available)

| Project | Timeline | Description |
| --- | --- | --- |
| Legislative Research Assistant | 4-6 weeks | Natural language search over Oklahoma statutes and bills |
| Statewide Chatbot Platform | 3-6 months | Multi-tenant conversational AI for all agencies |
| SNAP QA System | 6-12 months | Payment error rate reduction through shadow QA |

Tier 2: Strategic Initiatives (Discovery Required)

| Project | Description |
| --- | --- |
| TravelOK.com Tourism Marketing | AI-powered personalization and content recommendations for 2028 Olympics |
| ServiceNow Ticket Automation | 5-10x productivity improvement for IT ticket handling |
| Legacy Forms Modernization | Modern forms platform to replace aging ok.gov sites |
| Occupational Licensing | Unified portal across 50+ license types |
| AI API Governance | Centralized gateway for statewide AI usage visibility |
| Digital Accessibility Compliance | AI-assisted accessibility auditing and remediation across state websites |
| IT Workforce Training Program | AI, cloud, and automation skill-building curriculum for state employees |
| Procurement AI Assistant | AI tools for drafting RFPs and solicitations for OMES and statewide procurement officials |
| IT Contractor Performance Assessment | AI-powered analysis of contractor deliverables, timelines, and outcomes |
| TravelOK Analytics Modernization | Server-side tracking, unified attribution dashboard, and UX analytics for Oklahoma Tourism |

Engagement Framework

How We Approach Every Project

Each engagement follows a structured process designed to minimize risk, maximize learning, and deliver working software quickly.

  1. Scoping Meetings (1 week)
    • Understand the problem and stakeholders
    • Review existing systems and data
    • Identify quick wins vs. complex dependencies
    • Align on success criteria
  2. Discovery Period (1-2 weeks)
    • Deep-dive into current processes and pain points
    • Data inventory and integration mapping
    • Technical architecture assessment
    • Security and compliance requirements
    • Detailed scope and timeline development
  2.5 Security Review (parallel with build)
    • Architecture review with state security team
    • Security artifacts delivered (SSP inputs, data flows)
    • ATO preparation begins during discovery
    • Penetration testing support if required
    • Security approval gate before production rollout
  3. Build & Iterate (varies by project)
    • Weekly (or more frequent) stakeholder meetings
    • Live demos of working software each session
    • Rapid iteration based on feedback
    • Continuous integration with state systems
    • OMES staff embedded for knowledge transfer
  4. Soft Launch / Beta (2-4 weeks)
    • Production deployment with limited users
    • Real usage without cutting over existing systems
    • Bug fixes and refinements based on live feedback
    • Performance monitoring and optimization
    • Training for broader rollout
  5. Launch / Rollout (staged)
    • Phased rollout by agency, region, or user group
    • Parallel operation with legacy systems if needed
    • Full monitoring and support
    • Success metrics tracking
    • Handoff to ongoing operations

Pricing Model

Annual Platform License (SaaS Model)

All projects will be priced as annual platform licenses rather than time-and-materials or fixed-price project fees. This approach:

Detailed pricing to be developed based on specific scope and requirements.

Engineering Pod Composition

Each project is delivered by a dedicated "engineering pod" - a small, focused team that works exclusively on your initiative:

  • 1 Product Lead / Architect
  • 2-3 Product Engineers
  • 1 AI/ML Engineer
  • 1 Product Manager
  • 1 OMES Liaison

The OMES liaison is an embedded state employee who works alongside the pod, ensuring knowledge transfer and building internal capability throughout the engagement.

Cadence: Weekly demos, backlog grooming, stakeholder reviews, and sprint planning ensure continuous alignment and visibility.

Artifacts Delivered: Code repository, Infrastructure as Code (IaC), operational runbooks, admin guides, training sessions, and knowledge transfer plan.

Security, Privacy & Compliance

Security and compliance are foundational to every engagement. We build systems that meet state security requirements and work alongside your security teams throughout the process.

Data Classification & Handling

| Classification | Examples | Handling |
| --- | --- | --- |
| Public | Published statutes, public website content, press releases | Standard cloud hosting; encrypted in transit and at rest |
| Internal | Agency documents, internal policies, staff directories | Role-based access controls; audit logging; encrypted storage |
| Confidential/PII | Citizen records, case files, benefits data | Strict access controls; field-level encryption; comprehensive audit trails; data minimization |
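
As one illustration of data minimization in practice (a sketch, not a committed design): direct identifiers can be stripped or tokenized before any case fields reach an AI model. Field names below are assumptions.

```python
# Sketch of field-level data minimization before AI processing. Only the
# fields a given check needs leave the secure boundary; direct identifiers
# are dropped, and a stable token lets findings be joined back to the
# case. Field names are assumptions.

import hashlib

DIRECT_IDENTIFIERS = {"ssn", "name", "address", "phone"}

def minimize(case: dict, needed_fields: set[str]) -> dict:
    out = {k: v for k, v in case.items()
           if k in needed_fields and k not in DIRECT_IDENTIFIERS}
    out["case_token"] = hashlib.sha256(case["case_id"].encode()).hexdigest()[:12]
    return out

record = {"case_id": "OK-77231", "ssn": "000-00-0000",
          "name": "Jane Doe", "gross_income": 1500}
print(minimize(record, {"gross_income"}))
# {'gross_income': 1500, 'case_token': '...'} - no direct identifiers sent
```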

Deployment Options

Our architecture is designed to be portable across cloud providers. Technology choices (databases, vector stores, hosting) are implementation details that can be adjusted based on state requirements and existing contracts.

Audit Logging & Retention

AI Model Data Handling

ATO Readiness

We provide security artifacts to support your Authority to Operate (ATO) process:

We work alongside state security teams throughout the engagement - security review is a collaborative process, not a gate at the end.

Next Steps

  1. January 23, 4:30pm: Whiteboard session at Lincoln Data Center
  2. Post-Meeting:

Appendix A: Legislative Research Assistant Discovery Questions

Data & Sources

  1. Is LegiScan data sufficient for Oklahoma statutes, or do you need authoritative oklegislature.gov text?
    • LegiScan provides structured data but may lag by days
    • oklegislature.gov is authoritative but requires scraping
  2. Are bill summaries and fiscal impact statements consistently available for all bills?
    • These are high-value for research; need to confirm coverage
  3. Do you need "Session in Review" documents or other staff publications indexed?
    • Referenced on okhouse.gov as valuable research resources
  4. Are there internal documents (memos, talking points, templates) that should be searchable?
    • Would require secure handling; affects architecture
  5. Do you need case law, AG opinions, or administrative rules in scope?
    • Significant corpus expansion; recommend Phase 2

Users & Workflows

  1. Who are the primary users for V1?
    • Drafting counsel? Committee staff? Fiscal analysts? All?
  2. What are the top 10 questions staff ask weekly?
    • Helps us tune retrieval and create evaluation set
  3. What does a "good" answer look like?
    • Full legal analysis vs. pointers to relevant sections?
    • How much context/explanation is expected?
  4. Is there an existing tool or process this replaces or augments?
    • BTOnline? Westlaw? Manual statute searches?
  5. What output formats do staff need?
    • On-screen only? Copy to Word? Email-ready summaries?

Technical & Security

  1. Are there approved AI model providers?
    • Some agencies restrict to specific vendors (e.g., Azure OpenAI only)
  2. What authentication is required for V1?
    • Open access? Simple password? SSO/SAML?
  3. What logging/retention requirements exist?
    • Open records requests? Audit trails? Data retention policies?
  4. Are there any data classification requirements?
    • All public data OK for V1, or mixed sensitivity?

Success Criteria

  1. What does a successful demo look like?
    • Specific scenarios to demonstrate?
  2. What latency is acceptable?
    • Sub-5 seconds? Sub-10 seconds? Longer OK for complex queries?
  3. What accuracy level is expected?
    • Must every citation be verifiable? Tolerance for "not found" responses?
  4. Who are the stakeholders for go/no-go decision?

Additional Data Questions

  1. Do you want committee hearing transcripts/videos indexed?
    • Available on oklegislature.gov but adds complexity (transcription, storage)
  2. Are enrolled/engrossed bill PDFs acceptable, or do you need clean text versions?
    • PDFs require OCR; clean text is faster to implement

User Experience

  1. Do you want saved searches or alerts?
    • "Notify me when bills mention charter schools"
  2. Collaboration features - can users share Q&A threads with colleagues?
    • Shareable links? Export to email?

Political & Operational

  1. Is there bipartisan support for this tool, or is it one chamber's initiative?
    • Affects rollout strategy and stakeholder management
  2. Who maintains this after launch?
    • OMES IT? Legislative Service Bureau? External vendor?
    • Affects training, documentation, SLA requirements

Future Scope (Inform Roadmap)

  1. Is drafting assistance a priority for V2?
    • Bill scaffolds, templates, "similar bills" suggestions
  2. Interest in amendment-in-context views?
    • Show how proposed changes would read in current statute
  3. Multi-state comparison features?
    • "How does Oklahoma's approach compare to Texas?"
  4. What drafting templates and style guides exist?
    • Helps inform future drafting assistant features

Appendix B: Statewide Chatbot Platform Discovery Questions

Authentication & Authorization

  1. Identity Provider: Will agencies use a shared IdP (e.g., Azure AD, Okta) or agency-specific?
  2. Role Definitions: What admin roles are needed? (Super admin, Agency admin, Content editor, Viewer)
  3. Citizen Auth: Any scenarios where citizens need to authenticate (e.g., case-specific info)?

Data & Content

  1. Sensitive Data Policy: What data classifications are allowed? How do we enforce "no PII/PHI" in the public tier?
  2. Content Freshness: How often should URLs be re-scraped? Who owns stale content alerts?
  3. Content Approval: Should document uploads require approval before going live?
  4. Source Systems: Are there APIs or databases to connect to, beyond document uploads?

Security & Compliance

  1. ATO Requirements: What's the timeline and process for OMES Cyber Command review?
  2. Data Residency: Any requirements for Oklahoma-based hosting?
  3. Audit Logging: What must be logged and for how long? Who has access to logs?
  4. Penetration Testing: Required before go-live?

Operations

  1. SLAs: What uptime and response time guarantees are expected?
  2. Support Model: Who handles citizen complaints about chatbot answers?
  3. Incident Response: Escalation path for safety issues or misinformation?
  4. Backup/Recovery: RPO and RTO requirements?

Scalability & Cost

  1. Usage Projections: Expected monthly conversations per agency?
  2. Budget Model: Chargeback to agencies or centralized funding?
  3. Rate Limits: Should agencies have usage quotas?

Integration

  1. Escalation Paths: What happens when chatbot can't answer? (Link, email, phone, live chat?)
  2. Analytics Export: Need to integrate with existing BI tools?
  3. CMS Integration: Do agencies want to pull content from existing CMS systems?

IVR Integration

  1. Current Phone Systems: What IVR/phone systems are agencies currently using?
  2. Call Volume: What is the monthly call volume for high-traffic agencies (DHS, Service Oklahoma)?
  3. Live Agent Handoff: How should escalation to live agents work?
  4. Language Support: Which languages are priority for phone interactions?

Rollout

  1. Pilot Agencies: Which 3-5 agencies should pilot first?
  2. High-Traffic Sites: What are the top 10 highest-traffic state websites?
  3. Timeline Drivers: Are there external deadlines driving urgency?

Multi-Channel Considerations

  1. Mobile Apps: Do any agencies have existing mobile apps that should integrate?
  2. SMS/Text: Is text-based interaction a priority?
  3. Accessibility: What accessibility standards must be met (WCAG 2.1 AA)?
  4. Multi-Language: Spanish as baseline - any other languages needed (Vietnamese, Korean)?

Success Metrics

  1. Deflection Target: What percentage of calls/contacts should chatbot handle?
  2. CSAT Goals: What citizen satisfaction score is acceptable?
  3. Cost Savings: How will ROI be measured?

Appendix C: SNAP QA System Discovery Questions

Case Data

  1. What eligibility system does Oklahoma use? (ARIES, custom, vendor solution?)
  2. Can we get a sample batch of anonymized/de-identified cases to develop against?
  3. What fields are available in a case export? (We need: household members, income sources, deductions, benefit determination)
  4. What format can data be exported in? (CSV, JSON, XML, other?)
  5. How frequently would Oklahoma want to upload cases for QA review? (Daily batch? Weekly? On-demand?)

Verification Documents

  1. Where are verification documents stored? (Document management system, eligibility system, file shares?)
  2. What format are documents typically in? (PDF, scanned images, photos?)
  3. What is the typical document quality? (Clean scans vs. phone photos - affects OCR accuracy)
  4. Are documents already classified by type? (e.g., tagged as "paystub" vs. generic upload)
  5. How are documents currently linked to cases? (Case number reference, document ID, manual matching?)

Oklahoma's Determinations

  1. Can we receive Oklahoma's eligibility determination alongside case data? (Eligible/Not Eligible, benefit amount, effective date)
  2. What system of record contains the "official" benefit calculation?
  3. Are calculation worksheets or audit trails available for how Oklahoma arrived at benefit amounts?

Oklahoma-Specific Policies

  1. Where is the authoritative Oklahoma SNAP policy manual? (Is there a digital version we can reference?)
  2. What state-specific rules differ from the federal baseline? (Any unique Oklahoma policies?)
  3. What waivers are currently in effect? (Pandemic-era flexibilities, ABAWD waivers, etc.)
  4. How frequently do policies change? (Monthly updates? Quarterly? Annual?)
  5. Who should we contact when we have policy interpretation questions?

Error Patterns

  1. What are the top 5 error categories in Oklahoma ME reviews? (Income, shelter, household composition, etc.)
  2. What dollar amounts are associated with each error category? (To prioritize high-impact areas)
  3. Are there known systemic issues we should prioritize? (Common worker errors, system bugs, etc.)
  4. Can we get access to recent ME review findings or corrective action plans?

Benefit Calculation

  1. Can we get the current Oklahoma benefit tables? (Max allotments by household size)
  2. What SUA (Standard Utility Allowance) amounts are in effect for FY2026?
  3. Are there any state-specific deduction rules or caps?
  4. How does Oklahoma handle income averaging for variable income?

Existing Workflow

  1. What does the current ME review process look like step-by-step?
  2. What tools do QA reviewers use today? (Spreadsheets, custom software, paper forms?)
  3. What is the current sample size for ME reviews? (We saw 450 baseline + 600 targeted in the FFY2025 plan)
  4. What is the average time per case for a manual QA review?
  5. How are QA review findings currently documented and tracked?

Users

  1. How many QA reviewers would use the V1 system?
  2. How many supervisors would need access?
  3. What are the technical skill levels of the users? (Comfortable with web apps? Need extensive training?)
  4. What training resources are available to support rollout?

Data Handling

  1. What approvals are needed to process case data? (Even for shadow QA with no production impact)
  2. Are there data residency requirements? (Must data stay in Oklahoma? In the US?)
  3. What PII handling protocols must we follow? (Encryption, access logging, retention policies?)
  4. Is there a Data Sharing Agreement (DSA) template we should use?
  5. What audit logging is required? (Who accessed what, when)

Hosting

  1. Can the V1 system be hosted on commercial cloud (AWS)?
  2. Or is state-hosted / FedRAMP / GovCloud required? (Even for a shadow/pilot system?)
  3. What security assessment or penetration testing is required before go-live?
  4. What authentication/SSO is used? (Okta, Azure AD, state-specific?)

Historical Data (for V2 Custom Model)

  1. How many years of QA-reviewed cases are available?
  2. Can we access both original case submissions AND corrected outputs? (This is critical for training)
  3. What is the approximate volume? (Cases per year reviewed by QA)
  4. What format is historical data in? (Database exports, spreadsheets, PDFs?)

Data Quality

  1. Are corrections/fixes documented in a structured way? (Or just free-text notes?)
  2. Is there a mapping between original errors and corrections made?
  3. What approvals are needed for model training on historical data?

Success Criteria

  1. What PER reduction target would be meaningful? (e.g., 10.87% → 9%? 8%?)
  2. What ROI calculation should we use? (Dollars saved per error prevented?)
  3. Are there intermediate milestones tied to funding or approval gates?
  4. How will Oklahoma measure the accuracy of the QA system itself? (Comparing its findings to actual QC results?)

Timeline

  1. Is there a deadline tied to federal compliance changes? (October 2026 cost-sharing increase?)
  2. Are there budget cycle constraints? (State fiscal year considerations?)
  3. What is the ideal pilot start date?
  4. What is the target production go-live date?

Stakeholders

  1. Who are the key decision makers? (Names and roles for QA, QC, eligibility operations, IT)
  2. Who owns QA vs. QC vs. eligibility operations? (Org chart would be helpful)
  3. What is the change management process for introducing new tools?
  4. Are there union or workforce considerations?

Current™ System (for V3 Roadmap)

  1. What are the main pain points with Current™? (Things that aren't working well)
  2. What works well that we should preserve? (Features users love)
  3. Who is the vendor? (Contact for technical discussions)
  4. Is there a contract end date or renewal coming up?

Integration

  1. What APIs does Current™ expose? (Task creation, status updates, reporting)
  2. What integrations does Current™ have with other systems?
  3. Is there documentation available for Current™ integration?

Thomas Smyth
Founder & CEO, Concourse
thomas@concoursetech.com
(646) 305-9964

Proposal draft prepared for January 23, 2026 meeting