Google Fuels Transparency Debate with Unlabeled AI-Generated Search Ad
Betting on viewer apathy, Google's unlabeled AI ad for Search sparks a critical debate over transparency and ethics.
October 31, 2025

In a significant move that has ignited debate across the advertising and technology sectors, Google recently launched a major advertising campaign for its AI-powered search features, created entirely with its advanced text-to-video AI model, Veo 3. The advertisement, a whimsical animation featuring a turkey named Tom seeking a pre-Thanksgiving getaway, was rolled out across television, social media, and in cinemas without a clear and conspicuous label indicating its AI origins. Google has defended this decision, citing internal research suggesting that viewers are largely apathetic to whether the content they watch is generated by artificial intelligence, a stance that has drawn both support and scrutiny from industry observers.
The ad at the center of the discussion, titled "Quick Getaway," was developed by Google's in-house marketing group, the Google Creative Lab.[1] It portrays a charmingly rendered turkey, Tom, who, facing the impending Thanksgiving holiday, uses Google's AI Mode in Search to plan his escape.[1][2] The entire visual and audio landscape of the ad was generated using Veo 3, one of the latest and most capable AI video generation tools, which can create high-definition, photorealistic scenes with synchronized audio from simple text prompts.[1] Robert Wong, a vice president at Google Creative Lab, stated that the ad's creative direction was intended to evoke a sense of seasonal nostalgia, reminiscent of classic animated holiday specials from viewers' childhoods.[1] This creative choice, paired with the decision to forgo an explicit "Made with AI" disclaimer, has pushed the conversation about transparency in AI-generated content to the forefront.
Google's justification for not labeling the ad hinges on the assertion of viewer indifference. While the specific internal study has not been made public, the company's position is bolstered by some external research. A survey from LG Ad Solutions, for instance, found that nearly half of consumers (49%) do not care whether an ad is made with AI, as long as the final product appears authentic.[3] This suggests a segment of the audience is more concerned with the quality and resonance of the content than with its method of creation. This perspective views AI as just another tool in the creative arsenal, akin to computer-generated imagery (CGI), which has long been used in advertising without explicit disclaimers. Proponents of this view argue that over-labeling could lead to consumer desensitization, diluting the impact of such disclosures when they are truly critical, for instance, in distinguishing deepfakes from reality.[4]
However, this view is not universally shared, and the lack of a disclaimer has raised concerns among ethicists and some industry professionals who advocate for greater transparency. Research from NielsenIQ presents a contrasting view of consumer perception, indicating that AI-generated ads are often perceived as more "annoying," "boring," and "confusing" than traditionally made commercials.[5] Critics of Google's move argue that failing to disclose the use of AI, particularly when promoting an AI product, is a missed opportunity for transparency and education. The core of the concern is that as AI-generated content becomes indistinguishable from reality, the absence of clear labeling could erode public trust. Federal regulators, such as the Federal Trade Commission (FTC), have emphasized that all advertising must be truthful and not misleading.[6] While there are no blanket laws in the U.S. or the U.K. mandating the disclosure of AI in commercial advertising, the FTC has made it clear that deceptive practices are prohibited, and omissions of material information can be actionable.[2][7][6] This situation is further complicated by Google's own policies, which require a "clear and conspicuous" disclosure for any AI-generated content in political advertising that realistically depicts people or events.[8][9] This has led to accusations of a double standard, where a higher bar for transparency is set for political speech than for commercial advertising from one of the world's most influential technology companies.
The implications of Google's decision are far-reaching for the AI industry and the future of creative production. As generative AI tools become more accessible and powerful, the debate over labeling and transparency is likely to intensify. Advertising industry bodies like the IAB are already working to establish guidelines for AI disclosure to create a consistent and trustworthy ecosystem before regulators are forced to intervene.[4] The controversy surrounding the "Tom the Turkey" ad, and previous backlashes against other Google AI-related commercials, highlights a growing tension.[10][11] On one hand, tech companies are eager to showcase the creative potential of their AI innovations. On the other, there is a rising public and regulatory concern about the ethical implications of this technology. The path forward will require a careful balancing act between fostering innovation and maintaining the trust of consumers who are increasingly navigating a media landscape where the lines between human and machine-generated content are becoming ever more blurred.