Why This Landed on My Radar

The HIMSS conference just wrapped, and while everyone’s talking about AI in healthcare (again), there’s something different this time. We’re moving past the “AI as a helpful tool” phase into autonomous agents that make decisions without us in the loop. That’s a fundamental shift in how our practices operate, and most of us don’t have the governance structures in place to manage it. If you’re already using AI scribes or prior auth tools, you need to understand where this is heading - because the compliance and liability landscape is about to get a lot more complex.

Here’s What’s Going On

Healthcare leaders at HIMSS26 are signaling that 2026 is the year autonomous AI agents move from pilot projects to production environments. We’re not just talking about tools that assist with documentation or flag abnormal labs anymore. These are systems that can independently schedule appointments, triage messages, submit prior authorizations, and make clinical recommendations without human intervention at every step.

The consensus from the conference? Healthcare organizations need to fundamentally rethink their governance models and cybersecurity postures as these agents proliferate. The current frameworks most practices use - including ours - were built for humans making decisions with computer assistance, not for algorithms making autonomous decisions that we review after the fact.

The urgency comes from a collision of two realities: the technology is advancing faster than our ability to regulate it, and the staffing crisis means we’re all desperate for solutions that reduce administrative burden. That makes it tempting to implement without fully thinking through the governance implications.

What This Means for Your Practice

Here in Texas, this matters more than you might think at first glance. We’re already operating in a high-stakes environment - largest uninsured population in the country, no Medicaid expansion, and payer contracts with BCBS Texas and United that nickel-and-dime us on every claim. Many of us have already adopted AI scribes or revenue cycle tools to stay afloat while our staff burns out managing prior auths and patient messages.

The shift to autonomous agents is the next logical step, and it promises real relief. Imagine a system that handles the entire prior authorization process from start to finish, or one that triages patient portal messages and escalates only what truly needs physician eyes. For practices barely keeping up with the 100+ messages a day hitting our inboxes, that's not a luxury - it's survival.

But here’s the catch: when an autonomous agent makes a mistake, who’s liable? If it denies a refill that should have been approved, or misses a critical symptom in triage, that’s on us. Our medical licenses, our malpractice coverage, our practice viability. The current governance structures at most independent practices - including mine, if I’m honest - aren’t built for this. We don’t have AI oversight committees. We don’t have protocols for auditing algorithmic decisions. We don’t even have clear lines of accountability when something goes wrong.

The cybersecurity piece is equally concerning. Every autonomous agent is another potential entry point for bad actors. Texas practices are already targets - we're managing high volumes of patient data with smaller IT budgets than the big health systems. As these agents integrate deeper into our EHRs and communicate with external systems, the attack surface grows with every new connection.

The TMA has been quiet on specific AI governance guidance so far, which means we’re largely on our own to figure this out. The practices that establish solid governance frameworks now will be positioned to adopt these efficiency-boosting tools safely. The ones that rush to implement without the proper guardrails risk regulatory action, liability exposure, or worse - actual patient harm.

Key Takeaways

  • Autonomous AI agents that make decisions without real-time human oversight are moving from pilot to production in 2026
  • Current governance structures at most independent practices aren’t designed to manage autonomous algorithmic decision-making
  • Liability for AI errors ultimately falls on the physician and practice, regardless of where the technology came from
  • Each new AI agent integration expands cybersecurity vulnerabilities, especially for smaller practices with limited IT resources
  • Early adopters who build proper governance frameworks now will gain efficiency advantages while managing risk appropriately

What Smart Practices Are Doing

The forward-thinking groups I’m talking to aren’t waiting for TMA guidance or regulatory clarity. They’re proactively building AI governance committees - even if it’s just two physicians and their practice manager meeting monthly - to review what tools they’re using, audit outcomes, and establish clear accountability lines. They’re also demanding transparency from vendors about how these agents make decisions and what audit trails exist when things go wrong.

Source

“Balancing AI innovation and risk: 5 takeaways from HIMSS26” - Healthcare Dive


Primary Care Perspective delivers curated intelligence from trusted healthcare sources.

© 2026 Primary Care Perspective | Texas Edition
