It’s no secret that since my COVID vaccine injury, I and others have had significant trust issues with the healthcare system in general. But what Meritus just announced makes me feel even more uneasy.
In a move framed as patient-centered progress, Meritus Health in Hagerstown, Maryland, has announced the adoption of Dragon Ambient eXperience (DAX) Copilot, an AI-driven documentation tool developed by Microsoft. While the technology promises to ease administrative burdens for doctors by transcribing patient conversations into clinical notes in real time, it also introduces troubling questions about accuracy, privacy, and the increasing dependence on unproven artificial intelligence systems in critical healthcare settings.
Microsoft touts DAX Copilot as a breakthrough for clinical efficiency, claiming it enables doctors to “look patients in the eye” while AI handles the paperwork. But that glowing narrative overlooks an essential risk: this tool, like all AI systems, is not immune to error. In fact, Microsoft’s broader Copilot suite, on which this medical version is presumably built, has been repeatedly criticized as unreliable and ineffective. In fields like word processing and data management, Copilot often misinterprets instructions and fails to perform basic tasks. That margin of error is tolerable in office software. In healthcare, where mistakes carry life-and-death consequences, it’s simply not acceptable. When a medical professional makes a negligent clinical mistake, that’s called malpractice. Does malpractice insurance cover clinical mistakes caused by AI misunderstanding or misinterpreting a visit with a practitioner? Who’s responsible if an AI error leads to misdiagnosis?
The assumption that this AI will flawlessly translate verbal exchanges into accurate clinical documentation rests on a dangerous level of trust in a system already proven to underperform in less consequential environments. Even if the provider reviews the notes afterward, the initial draft from Copilot could influence decisions, mislead other healthcare personnel, or simply go uncorrected in high-volume settings. Meritus’ strategy hinges on the idea that providers will maintain oversight—but in reality, overworked doctors may begin to rely too heavily on the AI’s output.
Moreover, this shift raises important privacy issues. Patients must now consent to being digitally recorded during sensitive medical discussions, trusting that these recordings and their transcriptions won’t be misused or breached. While Meritus assures patients that the data is “highly secure,” I’m sure Frederick Health made similar assurances prior to its recent data breach and ransomware incident. The very presence of big tech in the exam room should raise alarm bells among those who value individual privacy and limited corporate interference in personal health matters. Even more troubling, this puts us only one step away from AI making actual healthcare decisions. I know it’s coming, I know it’s going to happen, and I weep for the medical industry.
Ultimately, the adoption of DAX Copilot reflects a growing trend in healthcare: replacing human judgment with artificial systems under the guise of efficiency. While it’s true that doctors spend too much time on administrative work, the answer is not to outsource their responsibility to an AI that has already struggled in less complex environments. Real solutions involve reducing regulatory red tape and empowering providers—not turning patient care into just another software experiment.
Opinion article by Ken Buckler, President of Radio Free Hub City. All opinions are his own, and do not reflect those of our clients or sponsors.
Article based upon information from a Meritus Health press release and public comments regarding Microsoft Copilot.