Fraudulent AI Books Flood Amazon, Hijacking Experts, Imperiling Public Health

Scammers harness AI to flood Amazon with fake books, hijacking expert identities and spreading dangerous misinformation.

August 16, 2025

A tidal wave of fraudulent, AI-generated books is flooding Amazon, with scammers hijacking the identities of prominent experts to deceive consumers and peddle potentially dangerous misinformation. The latest high-profile target is Dr. Eric Topol, a renowned physician and scientist, who has issued a stark warning about dozens of fake cookbooks and health guides being sold under his name.[1] The episode highlights a burgeoning crisis on the world's largest online marketplace, where the ease of self-publishing, combined with the power of generative artificial intelligence, has created fertile ground for sophisticated scams that threaten both consumer safety and the reputations of trusted professionals. The problem goes far beyond simple plagiarism: it is a direct and harmful misuse of AI that undermines the integrity of the marketplace for information itself.
Dr. Topol, a respected figure in the medical community, discovered numerous fabricated books bearing his name and image, covering topics on which he has never written.[1] He has decried the situation as outright fraud, reporting the unauthorized publications to Amazon multiple times with little to no substantive response.[2] One consumer, trusting Topol's established reputation, bought one of the fake books only to be disappointed by its low-quality content.[1] The incident is not isolated; it reflects a wider, troubling trend. Authors and experts across various fields have found themselves in similar situations, battling a deluge of sham books that mimic their work or appropriate their identities.[3] Author Jane Friedman, for instance, discovered multiple poorly written, AI-generated books falsely attributed to her on both Amazon and Goodreads.[4][5] Similarly, tech journalist Kara Swisher found fake biographies of herself for sale, prompting her to publicly criticize Amazon for the financial harm these scams cause legitimate authors.[6] These cases underscore the vulnerability of even well-known figures to identity misuse in the age of AI.
The engine driving this surge of literary fraud is generative artificial intelligence. Tools like ChatGPT have made it astonishingly simple and virtually cost-free for bad actors to churn out vast quantities of text.[7] Scammers can now instantly create content that mimics the style and branding of established authors, flooding the marketplace with counterfeit summaries, workbooks, and guides designed to piggyback on the success of legitimate titles.[7][1] This marks a significant escalation from earlier scams, which at least required hiring human writers to produce low-quality content.[7] The sheer volume of these AI-generated fakes presents a formidable challenge for platforms like Amazon, whose content filters appear unable to keep pace with the speed of fraudulent uploads.[7] And while Amazon has been quick to remove infringing books once complaints are filed, the fact that they are published at all reveals a critical gap in the platform's vetting process.[7]
The implications of this trend are deeply concerning, particularly when the fraudulent books dispense medical and health advice. The proliferation of AI-generated health guides under the names of trusted physicians like Dr. Topol poses a significant public health risk.[8] Misleading medical information can lead individuals to delay proper treatment, opt for unproven remedies, or suffer harmful interactions, undermining public trust in healthcare professionals.[8] Studies have shown that AI chatbots can be manipulated to deliver polished but dangerously false health advice, complete with fabricated references to real medical journals, making the misinformation appear credible.[9][10] This is compounded by the use of AI to create deepfake videos and fake endorsements from medical professionals to sell questionable products, a tactic that further blurs the line between legitimate advice and harmful scams.[11][12][13] The American Medical Association has recognized these dangers, urging physicians to educate patients on the risks and calling for federal action to protect consumers from misleading AI-generated medical content.[14]
In response to the growing outcry, Amazon has implemented some measures, such as limiting how many titles self-publishers can upload each day and requiring authors to disclose the use of AI in their work.[1] However, this disclosure is not shown to customers, limiting its value as a tool for informed purchasing.[1][3] The Authors Guild and other advocates argue that more robust solutions are necessary, including clear labeling of AI-generated content for consumers and more effective verification systems to prevent author impersonation.[7][4] The case of Dr. Topol and others serves as a critical wake-up call, demonstrating how AI's capabilities can be exploited not just for copyright infringement but for outright fraud that endangers the public. As AI technology continues to evolve, the challenge for online platforms will be to balance the accessibility of self-publishing with the urgent need to protect consumers from a digital marketplace increasingly polluted with sophisticated and harmful deception.
