AI is turning unwitting physicians into the stars of deepfake videos that hawk questionable products and spread medical misinformation, prompting the American Medical Association to declare the trend a public health crisis. The AMA is calling on federal and state lawmakers to close legal gaps and modernize identity protections against what its CEO, John Whyte, described as a threat to both patient safety and public trust.
The proliferation of such content on social media risks further eroding public confidence in the medical establishment, the group warns. Beyond reputational harm, officials fear the deepfakes could fuel insurance fraud, facilitate data theft, or directly endanger patients by promoting dangerous treatments.
The physicians' group is demanding stricter rules that would force tech platforms to remove impersonations more quickly, along with a crackdown on deepfake creators. California has already moved to require disclosures on AI-generated ads and is debating a measure that would ban doctor deepfakes outright.
Pennsylvania's medical board took action Tuesday, ordering a tech company to cease and desist after one of its chatbots impersonated a physician licensed in the state. The episode underscores the regulatory patchwork left as AI-generated medical content outpaces existing identity protections.
Physicians themselves describe feeling powerless as their likenesses are weaponized online without their consent. Critics counter that broad identity-protection laws could inadvertently limit legitimate AI applications in medical education and research, though the AMA maintains that patient safety must come first.