India’s tech regulator just changed the game for AI content. The Ministry of Electronics and Information Technology (MeitY) has been rolling out rules that affect everyone from big tech companies to everyday users.
Let me break down what’s happening and why it matters to you.

Understanding MeitY and Why They Care About AI
MeitY is India’s government body for electronics and IT policy. Think of them as the rule-makers for everything digital in India.
According to the MeitY website, they handle internet governance, IT industry growth, and e-governance. They’re basically keeping the digital world running smoothly.
But why the sudden focus on AI? The answer is deepfakes.
The Deepfake Problem That Started Everything
CyberPeace Foundation reported a troubling incident with cricketer Virat Kohli. Old footage from a festival was falsely presented as him attending a religious ceremony.
Major newspapers published this misleading content. It showed how easily fake AI content spreads and deceives people.
With India’s 2024 elections approaching, MeitY knew they had to act fast. AI-generated lies could threaten democracy itself.
The March 2024 Advisory: First Attempt
What MeitY Originally Proposed
On March 1, 2024, MeitY dropped their first AI advisory. According to ELP Law, it had strict requirements for platforms.
The rules said AI systems must prevent bias and electoral threats. Any “unreliable” AI models needed government permission before launch.
Platforms had just 15 days to submit compliance reports. The tech industry immediately panicked.
The Backlash Was Swift
Companies erupted in protest. Lexology noted the advisory’s scope was unclear and potentially overreaching.
Minister Rajeev Chandrasekhar clarified on X (formerly Twitter) that it targeted big tech, not startups. But this clarification wasn’t in the official document.
The biggest problem? Nobody knew what “unreliable” meant or how to get government approval.
MeitY Changed Course Quickly
Just two weeks later, on March 15, 2024, MeitY revised everything. JSA Law described this as effectively replacing the original.
The mandatory government permission was gone. Platforms only needed to label unreliable AI and inform users.
This was a huge relief for AI developers across India.
What the Revised 2024 Advisory Required
Core Rules for Platforms
The March 15, 2024 advisory kept some key requirements. PSA Legal outlined these obligations.
AI-generated content can’t violate Indian laws. Models must not enable bias, discrimination, or threats to electoral integrity.
When deploying under-tested AI, platforms must explicitly inform users, typically through consent popups that flag the model’s potential unreliability.
Labeling Became Key
Platforms must use labels or metadata for AI content. According to CyberPeace Foundation, this helps users identify synthetic information immediately.
The goal was transparency, not blocking AI entirely. Users deserve to know what’s real and what’s AI-generated.
The Enforceability Question
Many experts questioned whether advisories have legal teeth. Lexology noted advisories aren’t legally binding in India.
The advisory didn’t define “unreliable” or “under-tested.” That made compliance difficult even for companies acting in good faith.
February 2026: Rules Get Serious
From Suggestions to Law
On February 10, 2026, MeitY took it to the next level: they formally amended the IT Rules, giving the requirements legal enforcement power.
According to MediaNama, these amendments bring “synthetically generated information” under enforceable IT Rules. The rules took effect on February 20, 2026.
These aren’t suggestions anymore. They’re laws with real penalties.
Defining Synthetic Content
The 2026 rules precisely define what counts as synthetic. Internet Freedom Foundation analyzed this definition carefully.
Synthetically Generated Information (SGI) means AI-created audio, visual, or audio-visual content. It must appear “indistinguishable from a natural person or real-world event.”
Important exclusions exist though. According to MeitY’s FAQs, routine editing doesn’t count.
Brightening photos, compressing videos, or making presentations isn’t SGI. The rules target deepfakes specifically.
The Game-Changing 3-Hour Rule
The biggest shock? Platforms must remove unlawful synthetic content within 3 hours of receiving a government order.
Outlook Business reported this dramatic change from the previous 36-hour window. For certain harmful content, authorized police officers can issue removal orders.
MediaNama explained this reflects how viral deepfakes spread. Harmful content explodes within minutes.
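To make the tightening concrete, here is a trivial compliance-clock sketch. The timestamps are invented, and nothing in the rules prescribes how platforms track deadlines; only the 3-hour and 36-hour figures come from the article above.

```python
from datetime import datetime, timedelta, timezone

TAKEDOWN_WINDOW = timedelta(hours=3)   # 2026 rules
OLD_WINDOW = timedelta(hours=36)       # previous window

# Hypothetical moment a takedown order arrives.
order_received = datetime(2026, 2, 21, 10, 0, tzinfo=timezone.utc)
deadline = order_received + TAKEDOWN_WINDOW

print(deadline.isoformat())          # 2026-02-21T13:00:00+00:00
print(OLD_WINDOW / TAKEDOWN_WINDOW)  # 12.0 -- a twelfth of the time to act
```

In practice this means automated triage: a human-only review queue can’t reliably clear orders inside three hours.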
Mandatory Labels for Everything
All synthetic content must be “prominently labelled.” Business Today detailed these requirements.
Platforms must embed permanent metadata with unique identifiers that trace which computer resource created or modified the content.
Crucially, platforms can’t let users remove these labels. Features that strip watermarks or export without metadata are now prohibited.
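The rules leave the technical format open. One minimal way to picture a tamper-evident label is a record that binds a unique identifier and an origin to a hash of the exact content bytes. This Python sketch is illustrative only: the field names (`sgi_id`, `origin`) and the hash-based scheme are my assumptions, not anything MeitY prescribes.

```python
import hashlib
import uuid
from datetime import datetime, timezone

def make_sgi_record(content: bytes, origin: str) -> dict:
    """Build an illustrative provenance record for synthetic content.
    Field names are hypothetical, not taken from the IT Rules."""
    return {
        "sgi_id": str(uuid.uuid4()),                    # unique identifier
        "sha256": hashlib.sha256(content).hexdigest(),  # binds record to these exact bytes
        "origin": origin,                               # which computer resource produced it
        "created": datetime.now(timezone.utc).isoformat(),
        "label": "AI-generated",
    }

def verify(content: bytes, record: dict) -> bool:
    """True only if the bytes are unmodified since labelling."""
    return hashlib.sha256(content).hexdigest() == record["sha256"]

media = b"...synthetic media bytes..."
record = make_sgi_record(media, origin="example-genai-service")
print(verify(media, record))            # True
print(verify(media + b"edit", record))  # False: any change breaks the binding
```

Note the double edge: the same property that makes the label tamper-evident means any re-encode or screen-recording breaks the binding, which is exactly the traceability problem discussed later in this article.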
Who Gets Hit Hardest?
Big Social Media Platforms
Major platforms face extra obligations. Lexology outlined these requirements.
Before publishing, platforms must get user declarations about synthetic content. They must deploy technical measures to verify these claims.
Users must receive quarterly notifications about violation consequences. This includes immediate account suspension risks.
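The rules don’t specify what “technical measures to verify these claims” look like. A hedged sketch of one plausible policy: label content when either the uploader declares it synthetic or an automated detector is confident. The detector score and the threshold here are entirely hypothetical.

```python
def should_label_as_sgi(user_declared: bool,
                        detector_score: float,
                        threshold: float = 0.8) -> bool:
    """Decide whether to apply an SGI label.

    user_declared  -- the uploader's declaration at publish time
    detector_score -- confidence (0.0 to 1.0) from a hypothetical deepfake detector
    threshold      -- illustrative cut-off; a real system would tune this
    """
    return user_declared or detector_score >= threshold

# An honest uploader gets labelled regardless of the detector.
print(should_label_as_sgi(True, 0.1))    # True
# A bad actor who doesn't self-declare is caught only if detection works.
print(should_label_as_sgi(False, 0.92))  # True
print(should_label_as_sgi(False, 0.3))   # False -- the gap the rules can't close
```

That last case is why the article later notes that compliance leans on honest disclosure: detection alone can’t catch everything.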
What Content Is Banned
Obhan & Associates listed prohibited categories. Child sexual abuse material and non-consensual intimate imagery top the list.
Fake documents, misleading celebrity endorsements, and fabricated news all count as unlawful SGI. MediaNama’s FAQ gave examples like synthetic “undressing” and forged government IDs.
These are the deepfakes causing real harm.
The Implementation Challenge
Just 10 Days to Comply
Companies got barely any warning. Obhan & Associates called this a “watershed moment.”
Platforms must overhaul content moderation, deploy detection tools, and build labeling systems. All in 10 days.
That’s an incredibly tight timeline for major technical changes.
The Technology Isn’t Perfect
Detection tech lags behind creation speed. VARIndia highlighted this fundamental problem.
AI generates deepfakes in seconds. Reliable detection still struggles to keep pace.
Large platforms may manage with automated tools. Smaller startups face serious cost and capability challenges.
The Metadata Problem
Even embedded labels have issues. When content gets downloaded or screen-recorded, it loses traceability.
VARIndia noted metadata gets stripped when shared across apps. The provenance system has limits.
Plus, compliance assumes honest user disclosure. Bad actors won’t self-declare synthetic content.
What Experts Are Saying
Support for the Framework
Rohit Kumar from The Quantum Hub told APAC News Network the rules mark “a more calibrated approach.”
By narrowing SGI’s definition and exempting legitimate uses, the government responded to industry concerns. It balances accountability with practicality.
Concerns About Overreach
Internet Freedom Foundation raised serious concerns. The broad SGI definition could still capture low-risk domestic AI uses.
Labeling requirements may amount to compelled speech. The rules encourage proactive monitoring that could lead to over-censorship.
On X, user mithyā called aspects of the approach “arrogant.” Another post from The Analyzer called it a “big move” for labeling.
Social media shows mixed reactions from tech professionals.
Timeline of Key Changes
| Date | What Happened | Key Requirement |
|---|---|---|
| 2023 | Initial deepfake concerns | Advisory warnings issued |
| March 2024 | First AI advisory | Label untested AI, get permission |
| March 2024 | Revised advisory | Permission dropped, labels remain |
| February 2026 | IT Rules amended | All SGI labeled, 3-hour takedowns |
What This Means for You
If You’re a Regular User
You’ll start seeing more “AI-generated” labels on social media. Platforms will ask you to declare if your content is synthetic before posting.
According to MeitY’s FAQs, violating rules can mean immediate account suspension. Be honest about what’s AI-created.
If You Create Content
Legitimate AI use for education or creativity is still fine. Just don’t create fake documents or misleading content.
The Hindu’s Facebook post noted AI still has major opportunities in jobs and governance. The rules target harm, not innovation.
If You Run a Platform
The compliance burden is real. You need detection systems, labeling infrastructure, and 24/7 monitoring for takedown orders.
India Briefing noted foreign platforms must comply too. This affects global tech companies operating in India.
Looking Forward
MeitY evolved from the rocky March 2024 advisory to the more refined February 2026 rules, learning from feedback and adjusting along the way.
The shift from permission-based to disclosure-based regulation was smart. It maintains safety without blocking innovation entirely.
But challenges remain around technical feasibility and enforcement. According to VARIndia, success depends on “Compliance + Detection + Digital Literacy” working together.
India is attempting something ambitious. They want to stop deepfake harm without killing AI innovation.
Whether it works depends on execution, adaptation, and cooperation. For now, the message is clear: AI content must be transparent, traceable, and accountable.
The days of unchecked deepfakes are ending in India.