It's Tuesday afternoon. Your front desk coordinator has a tricky email to write — a balance-due follow-up to a patient who just lost a parent. She wants it to sound warm, not robotic. So she opens a new tab, pastes the patient's name, treatment history, and account balance into free ChatGPT, and asks it to "rewrite this more compassionately."
That 30-second shortcut is almost certainly a HIPAA breach. And a version of it is plausibly happening in dental practices right now.
This is "shadow AI" — staff using consumer AI tools your practice never approved, to do real work with real patient data. Cross-industry surveys in 2025 found that more than a third of employees had used generative AI at work without formal approval, and Cyberhaven's analysis of 1.6 million workers found roughly 11% of the data pasted into ChatGPT was classified as confidential. Healthcare is not exempt. If anything, it is more exposed.
What Happened
The rise of consumer AI in workplaces was quiet, fast, and organic. Front desk teams discovered that ChatGPT drafts polished insurance appeal letters. Hygienists used Gemini to translate post-op instructions into Spanish. Office managers fed production reports into free Copilot to summarize the month.
Most of it never went through IT. Most of it was never formally approved. And a meaningful slice of it touched PHI.
Meanwhile, HHS OCR has been tightening the rails. The proposed 2025 HIPAA Security Rule update explicitly calls out AI-driven cybersecurity risks, and OCR's recent cybersecurity guidance reinforces that any third-party system touching ePHI must be included in your risk analysis. That includes the chatbot tab your receptionist has open right now.
The rule HHS has held consistently: if a vendor processes PHI on your behalf, you need a signed Business Associate Agreement (BAA). No BAA, no PHI. Full stop.
Here is where the framing matters. The major AI vendors aren't the problem — they have built compliant tiers. The issue is that practices are defaulting to the wrong tier.
| Tool | Consumer tier (no BAA) | Enterprise tier (BAA available) |
|---|---|---|
| OpenAI | ChatGPT Free, Plus, Team, Business | ChatGPT Enterprise, ChatGPT Edu, API with zero-retention |
| Microsoft | Copilot in Windows/Edge, personal accounts | Copilot for Microsoft 365 (E3/E5 with executed BAA) |
| Google | Consumer Gemini (gemini.google.com) | Gemini in Google Workspace (Business/Enterprise with BAA addendum) |
OpenAI, Microsoft, and Google all sign BAAs — on their enterprise products, with the right license, with the agreement actually executed. The free tab your team uses does not qualify.
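To make the distinction concrete, here is a minimal sketch of what the compliant route can look like: the same drafting task from the opening scenario, sent through an API account the practice controls instead of a free browser tab. It assumes your practice has executed a BAA and zero-retention terms on that API account; the model name, helper function, and prompt are illustrative, not a product recommendation.

```python
# Minimal sketch: routing a drafting task through the API tier instead of a consumer chat tab.
# Assumes the practice has executed a BAA and zero-retention terms with OpenAI on its API account.
# The model name, prompt, and helper function are illustrative.
import os
from openai import OpenAI

# The key lives in practice-managed configuration, not in a browser tab someone opened on their own.
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

def draft_followup(first_name: str, balance: str) -> str:
    """Draft a compassionate balance-due note without pasting the full chart into a consumer tool."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[
            {"role": "system", "content": "You write warm, concise patient billing follow-ups for a dental office."},
            {"role": "user", "content": f"Draft a gentle balance-due note to {first_name} for {balance}."},
        ],
    )
    return response.choices[0].message.content

print(draft_followup("Maria", "$240"))
```

The point is not the code itself. It is that the request runs under an agreement your practice signed, on an account your practice controls and can log, rather than under consumer terms of service nobody read.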
Why It Matters for Your Practice
A BAA is not paperwork for the sake of paperwork. It is the legal instrument that says: this vendor has agreed to HIPAA obligations, won't train their models on your data, will honor retention rules, and will notify you if something goes wrong.
Without one, every prompt containing a patient's name, chart number, diagnosis, insurance info, or treatment plan is, by default, a disclosure to a vendor who owes your practice nothing. Some consumer tools retain prompts for extended periods. Some may use them for model improvement. Analyses of the August 2025 ChatGPT shared-chat leak found medical conditions and other sensitive information among the exposed content.
The breach math is straightforward. A single unauthorized disclosure can trigger OCR notification obligations, state-level breach laws, and — if a plaintiff's attorney finds it first — a civil claim. And OCR has been clear that "we didn't know the receptionist was doing that" is not a defense. Workforce training and sanctions are your responsibility as a covered entity.
TMR Take: Shadow AI isn't an "AI problem." It's a training and policy gap. Your team isn't trying to break HIPAA — they're trying to get their jobs done faster. The fix isn't to ban AI. It's to give them approved tools with BAAs in place, then tell them which tab to use. Practices that do this in the next 12 months will have a real operational edge over ones still pretending it isn't happening.
What to Do Now
Four moves, in order:
1. Write an acceptable-use AI policy. One page. It names which tools are approved, what can be pasted where, and what happens if someone uses an unapproved tool with PHI. Put it in your employee handbook and get signatures. This alone moves you from "accidental violator" to "covered entity that managed its workforce risk."
2. Pick an approved stack. Most dental practices already pay for Microsoft 365 or Google Workspace. Upgrade to a tier that includes the AI tool and execute the BAA. ChatGPT Enterprise is an option for larger DSOs. For clinical documentation, dental-specific vendors like Pearl Voice and a growing list of AI scribes sign BAAs as a matter of course — see our AI agents for dental practices rundown for the current landscape.
3. Train your team — specifically. Generic HIPAA training doesn't yet cover this. Spend 20 minutes in a huddle showing staff the difference between the approved tool and the consumer version, and what to do when a coworker suggests "just ask ChatGPT." Document it.
4. Turn on audit logs and review them. Enterprise AI tools log every prompt. That is a feature, not a burden — it is the evidence you need in a risk analysis and the backstop if something goes sideways. Pair this with your broader HIPAA compliance checklist for dental software and your pre-audit prep workflow.
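If that review is going to be more than a spot check, a short script over the exported log helps. The sketch below assumes your admin console can export prompt activity to a CSV with timestamp, user, tool, and prompt_text columns; the filename, column names, and patterns are assumptions about whatever your tool actually produces, and a few regexes are a screening aid, not a substitute for your formal risk analysis.

```python
# Minimal sketch: a monthly review pass over an exported AI audit log.
# The CSV filename and columns (timestamp, user, tool, prompt_text) are assumed; adjust to your export's real schema.
import csv
import re

# Crude, illustrative patterns only; real PHI detection takes far more than a regex.
SUSPECT_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                  # SSN-shaped strings
    re.compile(r"\bDOB\b", re.IGNORECASE),                  # date-of-birth mentions
    re.compile(r"\bchart\s*#?\s*\d+\b", re.IGNORECASE),     # chart numbers
]

def flag_entries(path: str) -> list[dict]:
    """Return audit-log rows whose prompt text trips any crude PHI-style pattern."""
    flagged = []
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            text = row.get("prompt_text", "")
            if any(p.search(text) for p in SUSPECT_PATTERNS):
                flagged.append(row)
    return flagged

if __name__ == "__main__":
    for row in flag_entries("ai_audit_export.csv"):
        print(row["timestamp"], row["user"], row["tool"])
```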
If ransomware has your attention, shadow AI should too — they share a root cause, which is workforce behavior outpacing written policy. Our ransomware guide for dental practices covers the other half of that picture.
The Bottom Line
AI will run meaningful parts of your practice within 24 months. The question isn't whether your team uses it — they already do. It's whether they use the version with your BAA, or the one without.
Pick the stack. Write the policy. Train the team. Point your staff at the compliant version.



