Tokyo Issues Stern Warning to OpenAI Over Sora 2 Anime Infringement

Japan takes a stand against OpenAI's Sora 2, defending its cultural treasures and leading the charge for global AI content regulations.

October 15, 2025

The Japanese government has issued a stern warning to OpenAI, the creator of the viral text-to-video generator Sora 2, demanding the company take measures to prevent its powerful new tool from infringing on the nation's deeply valued anime and manga copyrights.[1] The formal request follows an online firestorm ignited by a wave of AI-generated videos that flawlessly mimic the styles of iconic Japanese animations, raising profound questions about intellectual property, cultural preservation, and the unchecked advancement of generative AI. This direct intervention by Tokyo positions Japan at the forefront of a growing global conflict between technological innovation and the rights of creative industries, signaling a potential turning point in how AI companies will be held accountable for the outputs of their models.
The controversy erupted shortly after the October launch of Sora 2, a model capable of producing high-definition videos with audio from simple text prompts.[2][3][4][5][6] Social media platforms were quickly inundated with clips featuring unauthorized depictions of beloved characters from world-renowned franchises such as Pokémon, Dragon Ball, and One Piece.[7][2][4][6][8] The startling accuracy and stylistic fidelity of these videos sparked immediate backlash from fans and creators in Japan, who accused OpenAI of training its model on protected works without permission.[7][9] This public outcry prompted a swift and unequivocal response from the Japanese government. Minoru Kiuchi, the Minister of State for Intellectual Property and AI Strategy, publicly stated that the government had formally requested that OpenAI prohibit the generation of content that could constitute copyright infringement, describing anime and manga as "irreplaceable treasures that we can be proud of around the world."[2][10][11][12][6] This sentiment was echoed by other officials, who emphasized that Japan's cultural assets must be respected and that the government would take appropriate measures to protect them.[7][10]
The Japanese government's stance is backed by a broader strategy to navigate the complexities of artificial intelligence. While the country has been progressive in fostering AI development through its AI Promotion Act, which came into full effect in September 2025, the legislation also provides a framework for addressing problematic uses like copyright infringement.[2][4][6] Officials have indicated that if OpenAI does not voluntarily comply with the request, measures under this act could be invoked.[4][5][6] The law empowers the government to investigate cases where AI is used improperly or infringes on rights, though it currently leans on cooperation from businesses rather than imposing explicit penalties.[4] This episode has spurred calls from within Japan's ruling party for the nation to take a leading role in establishing international rules for AI and copyright.[2][12] Akihisa Shiozaki, a member of parliament, argued that, as a country that has captivated the world with its creative content, Japan has a responsibility to lead the way in crafting these new regulations.[2][12]
In response to the escalating pressure, OpenAI has acknowledged the issue and pledged to implement safeguards.[7] CEO Sam Altman issued an apology, stating that "the mistakes will be corrected swiftly," and confirmed that the company was updating its filters to block the unauthorized generation of anime content.[7] Following the initial backlash, Altman announced that OpenAI would provide rights holders with more granular controls, allowing them to decide how their characters can be used, including the ability to block them entirely.[7][4][5][6] This represents a significant shift from a potential "opt-out" system, which would have placed the burden on creators to police the AI model, to a more proactive "opt-in" framework.[13] The company has reportedly admitted the seriousness of the copyright misuse in meetings with Japanese officials and promised cooperation.[7] However, criticism remains over OpenAI's initial approach, with some Japanese lawmakers noting the company launched the service without consulting the government or rights holders, causing unnecessary conflict.[7]
This confrontation between a tech giant and a nation fiercely protective of its cultural heritage highlights a critical global flashpoint in the age of AI. The ease with which Sora 2 replicated distinct and protected art styles has laid bare the legal and ethical gray areas surrounding the data used to train these massive models. Legal experts note that while creating AI content for personal use might not breach copyright, sharing it online can lead to violations, and the sheer volume of infringing Sora 2 videos presents a significant legal risk for OpenAI.[9][5] The situation in Japan is mirrored by concerns from Hollywood studios and other global media companies, who are also grappling with the unauthorized use of their intellectual property by AI systems.[10][8][14] Japan's decisive action may serve as a blueprint for other nations and creators, pushing the AI industry towards greater transparency and respect for the intellectual property that fuels its models. The outcome of this standoff will likely have lasting implications, shaping the legal frameworks and ethical standards that govern the future of AI-driven content creation worldwide.
