Context
The Government of India has initiated a reassessment of the Copyright Act, 1957 to address challenges posed by Generative AI (GenAI) and Artificial General Intelligence (AGI). Against this backdrop, the DPIIT working paper proposes a hybrid licensing framework for AI training and addresses the copyrightability of AI-generated content.
While the effort is timely, the proposed framework suffers from serious conceptual, legal, and institutional flaws that risk undermining creator rights, privacy, and transparency.
Key Problems In The Proposed Framework
1. Blanket Licence Without Opt-Out: A Threat To Consent & Privacy
The proposal introduces a mandatory blanket licence for AI training, without giving copyright holders the option to opt out.
- AI developers would gain access to vast quantities of lawfully available online content.
- Such content often includes personal data unintentionally published online, such as addresses, Aadhaar numbers, caste details, and phone numbers.
- The paper offers no substantive safeguards for personal data, apart from a cursory footnote referencing existing data protection laws.
This approach raises serious concerns under the right to privacy and informational self-determination.
2. Paywalled Content & Technological Protection Measures Ignored
The framework fails to address downstream copyright infringement caused by AI outputs.
- AI systems can reproduce or summarise paywalled content for free, even if end users never accessed it lawfully.
- Section 65A of the Copyright Act penalises circumvention of technological protection measures.
- Courts (e.g., Elsevier v. Sci-Hub) have treated functional access as infringement, not just formal circumvention.
The paper does not clarify whether AI-mediated access to paywalled content exposes users or developers to liability, effectively shifting losses onto publishers and creators.
3. Weak & Unclear Royalty Distribution Mechanism
The proposed Copyright Royalties Collective for AI Training (CRCAT) is structurally exclusionary.
- CRCAT will consist only of organised Collective Management Organisations (CMOs).
- Independent creators and non-members:
  - Have no say in licensing
  - Have no role in royalty distribution decisions
- The framework provides no objective formula for royalty calculation (usage, contribution, revenue linkage, etc.).
Small and unregistered creators — whom the proposal claims to protect — are effectively sidelined.
4. Government-Dominated Rate-Setting Undermines Creator Agency
Royalty rates will be fixed by a government-appointed committee dominated by officials and technical experts.
- Only one representative each from CRCAT and the AI industry.
- No meaningful representation of independent creators such as journalists, researchers, or artists.
This concentrates power in intermediaries and the State, marginalising actual rights holders.
5. “Revenue Sharing” Model Creates Illusion Of Compensation
Payment obligations arise only upon “commercialisation” of AI systems.
- Revenue generation is wrongly equated with commercialisation.
- AI companies may generate massive revenues while remaining unprofitable.
- Supreme Court precedent (CIT v. Surat Art Silk) distinguishes surplus from commercial intent.
Deferring payment legitimises uncompensated extraction of creative labour, offering creators only speculative future returns.
6. Faulty Broadcasting Analogy
The paper analogises AI training with broadcasting — a comparison that does not hold.
- Broadcasting: one-to-many, fixed expressive use.
- AI training: ingestion, abstraction, and reuse for indefinite downstream applications.
- Collective licensing works where use is identifiable and repetitive — not opaque and transformative.
Treating AI training like broadcasting obscures how value is actually generated in AI systems.
Conclusion: A Framework That Satisfies No One
The proposed approach:
- Fails creators by removing consent and fair compensation
- Fails citizens by ignoring personal data risks
- Fails industry by creating legal uncertainty
What Is Needed
✔ Explicit opt-out rights
✔ Strong personal data safeguards
✔ Transparent valuation & royalty formulas
✔ Meaningful creator representation
✔ Clear rules on downstream AI liability
India’s AI mission must not come at the cost of citizens’ rights, creator agency, and constitutional values.