Published on October 25, 2025
Delhi, India
In a proactive move to preserve electoral integrity amid the rising tide of digital deception, the Election Commission of India (ECI) issued a stringent advisory on October 24, 2025, requiring all political parties and candidates to prominently label AI-generated or synthetic campaign materials. This directive, invoked under Article 324 of the Constitution, targets the “deep threat” posed by deepfakes and hyper-realistic synthetics that could mislead voters and erode trust in democracy. Timed just weeks before the Bihar Assembly elections (November 6 and 11, with results on November 14), the guidelines build on prior advisories from May 2024 and January 2025, emphasizing compliance with the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. By enforcing transparency, the ECI aims to level the playing field and foster responsible tech adoption in India’s vibrant electoral arena.
Advisory Overview and Legal Basis
The ECI’s latest advisory underscores the Commission’s plenary powers to superintend elections, addressing how AI tools can fabricate electorally sensitive messages that masquerade as authentic. It warns that unchecked synthetic content risks “contaminating the level playing field” and distorting public opinion.
- Issuance Date and Scope: October 24, 2025; applies immediately to all general and bye-elections until further notice.
- Constitutional Authority: Article 324 empowers ECI to regulate campaign practices for free and fair polls.
- Precedent Alignment: Reiterates May 6, 2024, guidelines on social media ethics and January 16, 2025, norms for labeling synthetic media.
- Core Rationale: Hyper-realistic deepfakes of leaders delivering false narratives could draw unwitting stakeholders into misinformation loops, undermining voter confidence.
This step reflects global concerns, with similar measures in the EU and US, but is tailored to India’s digital-savvy electorate, where 80% of campaigns now leverage social media.
Definition of AI-Generated and Synthetic Content
The advisory defines target materials broadly to capture evolving tech threats, focusing on content that alters reality through algorithms.
| Content Type | Description | Examples |
|---|---|---|
| AI-Generated Images/Videos | Fully created or significantly altered visuals using tools like deep learning models. | Fabricated photos of candidates at rallies; morphed videos showing leaders endorsing rivals. |
| Synthetic Audio | Algorithmically produced or edited sound clips mimicking voices. | Fake speeches inciting communal tensions or promising unfeasible policies. |
| Hyper-Realistic Synthetics | Blends of real and AI elements indistinguishable from originals. | Deepfakes blending archival footage with scripted dialogues. |
| Exclusions | Purely factual edits (e.g., cropping) without intent to deceive; must still comply if AI-assisted. | Standard photo enhancements for clarity. |
The ECI stresses that even consensual use requires disclosure if it risks misperception, urging parties to prioritize ethical innovation over viral sensationalism.
Disclosure and Labeling Requirements
To demystify synthetic media, the ECI mandates unmistakable markers, ensuring voters can discern fact from fiction at a glance.
- Mandatory Labels: Use phrases like “AI-Generated”, “Digitally Enhanced”, or “Synthetic Content”; labels must be legible and non-removable.
- Visual Standards: Labels must cover ≥10% of screen area; positioned at the top band for videos; persistent throughout playback.
- Audio Protocols: Disclosure in the first 10% of duration (e.g., voiceover or text-to-speech alert).
- Creator Attribution: Include entity responsible (party/candidate name) alongside the label.
- Platform Applicability: Enforced across social media, websites, OTT, and offline prints; integrates with IT Rules for intermediary takedowns.
Parties must train campaign teams on these specs, with non-compliance flagged as a breach of Model Code of Conduct.
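As an illustration only (the advisory states these rules in prose, not code), the visual and audio labeling specs above can be sketched as a simple pre-publication validator. The asset structure and field names here are hypothetical, not part of any ECI system:

```python
from dataclasses import dataclass

# Phrases the advisory lists as acceptable labels
APPROVED_LABELS = {"AI-Generated", "Digitally Enhanced", "Synthetic Content"}

@dataclass
class CampaignAsset:
    """Hypothetical metadata for one labeled campaign video/audio asset."""
    label_text: str
    label_screen_fraction: float      # fraction of screen area the label covers
    label_persistent: bool            # visible throughout playback
    audio_disclosure_position: float  # disclosure start, as fraction of duration
    attribution: str                  # responsible party/candidate name

def complies_with_advisory(asset: CampaignAsset) -> list[str]:
    """Return a list of violations; an empty list means the asset passes."""
    problems = []
    if asset.label_text not in APPROVED_LABELS:
        problems.append("label text not among approved phrases")
    if asset.label_screen_fraction < 0.10:      # labels must cover >= 10% of screen
        problems.append("label covers less than 10% of screen area")
    if not asset.label_persistent:
        problems.append("label not persistent throughout playback")
    if asset.audio_disclosure_position > 0.10:  # within first 10% of duration
        problems.append("audio disclosure not within first 10% of duration")
    if not asset.attribution.strip():
        problems.append("missing creator attribution")
    return problems

# A compliant asset produces no violations
ok = CampaignAsset("AI-Generated", 0.12, True, 0.0, "Example Party")
print(complies_with_advisory(ok))  # []
```

A campaign team could run such a check before each upload, treating any non-empty result as a publication blocker.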
Record-Keeping and Verification Obligations
Transparency extends beyond labels to auditable trails, enabling ECI scrutiny during or post-campaigns.
- Internal Logs: Maintain records for all AI content, detailing creators, timestamps, metadata, and consent proofs.
- Retention Period: At least until election results; available for ECI inspection on demand.
- Reporting Fake Assets: Parties urged to report impersonating accounts or deepfakes proactively via ECI portals.
- Verification Process: Random audits; parties bear burden of proof for compliance.
This framework mirrors corporate data governance, adapting it to politics for accountability without stifling creativity.
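The audit-trail obligations above resemble a simple append-only log. A minimal sketch follows; the record fields and helper are illustrative assumptions, not an ECI-prescribed schema:

```python
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class AIContentRecord:
    """One hypothetical audit-log entry for an AI-generated campaign asset."""
    asset_id: str
    creator: str
    tool_used: str
    created_at: str      # when the asset was produced
    consent_proof: str   # reference to a stored consent document
    metadata: dict = field(default_factory=dict)

def log_record(record: AIContentRecord, log: list) -> None:
    """Append the record, stamped with an audit timestamp, to the log."""
    entry = asdict(record)
    entry["logged_at"] = datetime.now(timezone.utc).isoformat()
    log.append(entry)

audit_log: list[dict] = []
log_record(AIContentRecord("vid-001", "Media Cell", "gen-model-x",
                           "2025-10-24T10:00:00Z", "consent/vid-001.pdf"),
           audit_log)
print(json.dumps(audit_log[0], indent=2))
```

In practice such a log would be retained at least until results day and serialized (e.g., to JSON) so it can be handed over on ECI demand.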
Enforcement Mechanisms and Penalties
The ECI blends persuasion with enforcement, leveraging existing laws to deter violations.
- Swift Removal: Misleading content on official channels must be deleted within 3 hours of detection/reporting.
- Legal Recourse: Violations invoke IT Rules 2021 (fines up to Rs 50 lakh for platforms; personal liability for creators); potential MCC breaches leading to campaign bans.
- Monitoring Tools: ECI’s Social Media War Room (active since 2019) will use AI itself to scan for unlabeled synthetics; collaborates with Meta, Google for rapid flags.
- Grievance Redressal: 24/7 cVIGIL app for voter complaints; parties notified for self-correction before escalation.
Past enforcement, like 2024 Lok Sabha deepfake takedowns, shows 90% compliance rates, but experts call for stiffer fines to match AI’s speed.
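The three-hour takedown window is simple deadline arithmetic; an internal alerting system might compute it as below (a sketch only, with assumed function names):

```python
from datetime import datetime, timedelta, timezone

REMOVAL_WINDOW = timedelta(hours=3)  # advisory's takedown deadline

def removal_deadline(detected_at: datetime) -> datetime:
    """Latest time by which flagged content must be removed."""
    return detected_at + REMOVAL_WINDOW

detected = datetime(2025, 11, 1, 9, 30, tzinfo=timezone.utc)
print(removal_deadline(detected).isoformat())  # 2025-11-01T12:30:00+00:00
```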
Context: Bihar Elections 2025 and Broader Implications
Issued amid Bihar’s high-stakes polls (243 seats, with the NDA and INDIA blocs as the principal rivals), the advisory preempts a surge in digital warfare: 2024 saw 1,200+ deepfake complaints nationwide.
- Election Timeline: Notification October 25; polling November 6 and 11; results November 14, leaving roughly two weeks for adaptation.
- Political Reactions: BJP hails it as “voter empowerment”; opposition seeks clearer tech guidelines to avoid selective enforcement.
- Societal Impact: Protects marginalized voters from targeted disinformation; aligns with Digital India Act drafts for AI ethics.
- Global Echoes: Parallels US FEC rules post-2024 elections; positions India as a leader in AI election safeguards.
Chief Election Commissioner Rajiv Kumar emphasized: “Technology must illuminate, not obscure, the democratic process.” Analysts predict a 20-30% drop in synthetic misinformation incidents if enforced rigorously.
Actionable Steps for Compliance
Parties and candidates can operationalize these rules swiftly with this checklist:
- Audit Existing Assets: Scan campaigns for AI elements; retro-label as needed.
- Tool Integration: Adopt watermarking software (e.g., Adobe Content Authenticity Initiative) for auto-labeling.
- Team Training: Conduct workshops by October 28; designate AI compliance officers.
- Vigilance Protocols: Set up internal alert systems for 3-hour removals; report fakes via ECI’s SUGAM portal.
- Voter Education: Share ECI infographics on spotting deepfakes to build collective resilience.
- Seek Clarifications: Contact state ECI units for edge cases, like meme edits.
Future Outlook: Evolving AI in Indian Elections
This advisory signals a paradigm shift from reactive takedowns to proactive disclosure, as AI evolves faster than regulations. With Bihar as a testbed, successful implementation could influence 2026 state polls and beyond, potentially inspiring a national AI Election Code. Yet challenges persist: enforcement in rural Bihar’s low-digital-literacy pockets and balancing innovation with oversight. As India strides toward 1 billion voters by 2030, such measures help ensure tech amplifies voices rather than silencing truth.