Why This Landed on My Radar
While we’re all drowning in EHR clicks and documentation nightmares, there’s a quiet regulatory fight happening that could determine whether you’ll actually understand the AI tools being embedded into your workflow. Provider groups are pushing back against ONC’s plan to eliminate certain health IT certification requirements - specifically the transparency standards that force vendors to explain how their AI actually works. This isn’t theoretical anymore, colleagues. These tools are already making clinical suggestions in your inbox.
Here’s What’s Going On
The Office of the National Coordinator for Health Information Technology (ONC) has proposed a rule that would eliminate or revise dozens of health IT certification criteria in an effort to reduce vendor burden and streamline regulations. Sounds reasonable on the surface - less red tape, faster innovation, all that. But buried in those proposed cuts are the “model card” requirements for AI tools.
Model cards are basically transparency labels for artificial intelligence. They require vendors to disclose what data the AI was trained on, how accurate it is, what its limitations are, and where it might produce biased results. Think of them as nutrition labels for the clinical decision support tools showing up in your EHR. Provider groups - including medical associations and health systems - are now pushing back hard, arguing that eliminating these requirements shifts the compliance burden back onto us. Without vendor certification requirements, we’re left to figure out on our own whether the AI flagging potential sepsis or suggesting medication changes is actually reliable.
The timing matters because AI integration in health IT is accelerating. These aren’t future considerations - these are tools going live in practices right now, often with minimal transparency about their accuracy, training data, or failure modes.
What This Means for Your Practice
Here in Texas, where we’re already operating with thin margins and the nation’s highest uninsured rate, we don’t have the luxury of IT departments to vet every AI tool our EHR vendor pushes out. The big systems in Houston, Dallas, and Austin might have informaticists and data scientists who can evaluate these tools. Independent practices? We’re getting a software update notification and a two-page PDF if we’re lucky.
This matters more in Texas than in Medicaid expansion states because our patient mix is more precarious. When an AI tool trained primarily on commercially insured populations starts making recommendations for your uninsured or Medicaid patients, those model cards tell you whether the algorithm has even seen patients like yours. Without that transparency, you’re flying blind - and potentially inheriting liability for decisions influenced by algorithms you can’t evaluate.
The bigger picture: we’ve spent years fighting with BCBS Texas and United over prior authorizations and payment decisions that feel like black boxes. Now imagine that same lack of transparency baked into your clinical workflow. Some vendors are already pitching AI scribes, coding assistants, and clinical decision support as practice-savers. They might be. But without certification requirements forcing vendors to document accuracy rates, failure modes, and training data, you’re essentially beta testing on your patient panel.
The smart play is recognizing that transparency requirements protect independent practices more than they burden us. When vendors have to document how their AI works, we can make informed decisions about what to adopt. When they don’t, we’re back to trusting marketing materials and hoping for the best.
Key Takeaways
- ONC’s proposed rule would eliminate AI transparency requirements (“model cards”) that tell you how clinical algorithms actually work and where they fail
- Provider groups are fighting to keep these certification criteria because they shift the evaluation burden off practices and onto vendors, where it belongs
- Without model cards, you’re left guessing whether AI tools are accurate for your specific patient population - a particular risk in Texas’s high-uninsured, diverse market
- This affects tools already in your workflow: clinical decision support, documentation assistants, coding suggestions, and diagnostic flags
- Independent practices lack the IT infrastructure that health systems use to vet AI tools - certification standards level the playing field
What Smart Practices Are Doing
Forward-thinking independent physicians are starting to ask vendors direct questions before adoption: What data trained this? What’s the accuracy rate? How does it perform across different patient populations? They’re treating AI tools like any other clinical device - demanding evidence before integration. Some are also joining state and national advocacy efforts to preserve transparency requirements, recognizing that regulatory standards protect small practices more than they help big vendors.
Source
Provider groups push to preserve some IT certification criteria, including AI ‘model cards’ - Healthcare Dive
Primary Care Perspective delivers curated intelligence from trusted healthcare sources.
© 2026 Primary Care Perspective | Texas Edition