India cracks down on deepfakes, demands platforms label AI media.
India's draft rules would require platforms to label and verify AI-generated content to combat digital misinformation, even as implementation hurdles loom.
October 24, 2025

In a significant move to combat the growing menace of digitally manipulated content, India has proposed a stringent regulatory framework targeting deepfakes and other AI-generated media. The draft amendments to the Information Technology (IT) Rules, 2021, signal a determined crackdown on synthetic content, placing substantial new obligations on social media platforms and content creators. The initiative aims to enhance transparency and accountability in the digital ecosystem, responding to a series of high-profile incidents and rising concern that such technology could spread misinformation, damage reputations, and influence democratic processes. The proposed regulations are among the Indian government's first formal steps to establish legal guardrails around artificial intelligence, and they signal a tougher stance on fake content with far-reaching implications for the technology industry.
The core of the government's proposal is a set of explicit mandates for identifying and labeling AI-generated content. The draft rules introduce a clear legal definition of "synthetically generated information," describing it as any content artificially or algorithmically created, modified, or altered in a way that makes it appear authentic.[1][2][3] Social media intermediaries, particularly significant platforms with over five million users, will be required to have users declare whether their uploaded content is synthetically generated.[4][5][6][7] To ensure compliance, these platforms must deploy "reasonable and appropriate technical measures" to verify these declarations.[8] Content identified as synthetic must be prominently marked: for visual media, the label must cover at least 10% of the surface area, while audio content must carry an audible disclosure within the first 10% of its duration.[9] Furthermore, platforms will be obligated to embed permanent metadata or unique identifiers within the content to ensure traceability.[2][3] Failure to adhere to these rules could cost platforms their "safe harbour" protection, which shields them from liability for user-generated content.[7][9]
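To make the arithmetic of those thresholds concrete, here is a minimal Python sketch of the two labeling checks described above. The function names, parameters, and the rectangular-label assumption are illustrative only; the draft rules prescribe outcomes, not code.

```python
def visual_label_compliant(frame_w: int, frame_h: int,
                           label_w: int, label_h: int) -> bool:
    """Draft-rule check: a visible label on synthetic media must cover
    at least 10% of the frame's surface area (rectangular label assumed)."""
    return label_w * label_h >= 0.10 * frame_w * frame_h


def audio_disclosure_compliant(clip_seconds: float,
                               disclosure_start_seconds: float) -> bool:
    """Draft-rule check: an audible disclosure must begin within the
    first 10% of the clip's duration."""
    return 0.0 <= disclosure_start_seconds <= 0.10 * clip_seconds


# A 160x90 badge on a 1280x720 frame covers only ~1.6% of the area.
print(visual_label_compliant(1280, 720, 160, 90))    # False
# A 720x360 banner on the same frame covers ~28% of the area.
print(visual_label_compliant(1280, 720, 720, 360))   # True
# A disclosure starting 5s into a 60s clip falls inside the first 6s.
print(audio_disclosure_compliant(60.0, 5.0))         # True
```

As the examples suggest, the area rule bites hard: on a standard 720p frame, a compliant label must occupy roughly the footprint of a lower-third banner, not a small corner watermark.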
Driving this regulatory push are escalating concerns over the weaponization of generative AI. An explanatory note from the Ministry of Electronics and Information Technology (MeitY) cited recent viral deepfake videos and audio clips as evidence of the technology's potential to create convincing falsehoods.[5][10] The government has explicitly warned that such content can be used to manipulate elections, commit financial fraud, and cause significant reputational harm.[5][10] The issue gained national prominence following a viral deepfake video of actor Rashmika Mandanna in 2023, an incident that Prime Minister Narendra Modi termed a new "crisis."[8] The proliferation of accessible and powerful AI tools, such as Google's Gemini and OpenAI's Sora, has further heightened these worries by making the creation of realistic synthetic content easier than ever before.[8][9] To streamline enforcement and increase government accountability, a related amendment to the IT Rules now specifies that only senior officials (at the level of Joint Secretary in the central government, or Deputy Inspector General of Police for law enforcement) are authorized to issue content takedown orders.[1][2][9][11] This change aims to ensure that such decisions are made with greater scrutiny and responsibility.[1]
While the move to regulate deepfakes has been acknowledged as a necessary step, it has also sparked debate among industry experts and civil society groups over its implementation and broader impact. A primary concern is the significant compliance burden, especially for smaller platforms that may lack the resources and advanced technology to detect and label all AI-generated content.[3][4][12][13] Experts caution that accurately identifying all synthetic media is a major technical hurdle, as AI detection tools are still evolving.[12][14] There are also concerns that ambiguity in the definition of "synthetically generated" could inadvertently capture benign edits, artistic filters, or satire, potentially chilling free speech and creativity.[12][15] Striking a balance between preventing harm and protecting legitimate expression will be crucial to the framework's success. This regulatory approach places India alongside other global powers such as the European Union and China, which are also implementing rules for AI-generated content, though with varying methods.[4][8][16][17]
In conclusion, India's proposed crackdown on deepfakes represents a pivotal moment in the country's approach to AI governance. By mandating clear labeling and placing the onus of verification on social media platforms, the government is taking a firm stand against the deceptive potential of synthetic media. The amendments to the IT Rules aim to create a more transparent online environment where users can distinguish between authentic and artificial content, thereby mitigating the risks of misinformation and malicious impersonation. However, the path to effective implementation is fraught with challenges, including the technical complexity of detection, the financial strain on smaller companies, and the delicate balance required to avoid stifling innovation and free expression. As the government gathers feedback from stakeholders, the final form of these regulations will be critical in shaping the future of AI in the world's largest democracy, determining whether the country can foster a safe and accountable digital space without hampering technological progress.