In a significant intervention reflecting the judiciary’s growing engagement with technological shifts in legal practice, Chief Justice of India (CJI) Surya Kant has advised newly designated Advocates-on-Record (AoRs) of the Supreme Court to draft pleadings personally and to refrain from outsourcing this core professional responsibility to artificial intelligence tools. The remarks, delivered during an interaction with newly inducted AoRs, underscore an emerging judicial concern: while technology may assist, it cannot substitute for the intellectual rigour and accountability inherent in legal drafting.
Addressing the new AoRs, CJI Surya Kant emphasised that drafting is not a mechanical exercise but a deeply intellectual process requiring legal reasoning, contextual understanding, and ethical responsibility. He cautioned that excessive reliance on AI-generated drafts risks diluting the advocate’s independent application of mind, an element central to the credibility of pleadings before constitutional courts.
The advice is particularly significant given the nature of the AoR system itself, which imposes a heightened responsibility on designated advocates. As officers of the Court with exclusive rights to file and act in the Supreme Court, AoRs are expected to maintain the highest standards of accuracy, diligence, and professional integrity. Delegating drafting, whether to unsupervised juniors or to AI systems, thus raises concerns not merely of efficiency but of accountability.
CJI Surya Kant’s remarks do not reject technology per se but seek to delineate its boundaries. Artificial intelligence tools, increasingly used for research, summarisation, and preliminary drafting, are acknowledged as facilitators. However, the Court’s caution lies in the transformation of these tools into substitutes for legal reasoning.
The distinction is critical. AI operates on pattern recognition and data synthesis, whereas legal drafting requires interpretive judgment balancing facts, precedent, statutory interpretation, and strategy. The Chief Justice’s warning implicitly recognises that blind reliance on AI risks introducing inaccuracies, fabricated citations, or contextually inappropriate arguments, issues already flagged in jurisdictions globally.
The concern expressed also reflects an institutional anxiety about the sanctity of judicial records. Petitions filed before the Supreme Court are not merely procedural documents; they form the foundational narrative upon which constitutional adjudication proceeds.
If pleadings are generated without careful human scrutiny, the risks are manifold: misstatement of facts, incorrect legal propositions, or even misleading arguments. Such deficiencies not only weaken the litigant’s case but also burden the Court, which relies on counsel to present accurate and well-considered submissions.
In this context, the insistence on personal drafting is less about resisting innovation and more about preserving the epistemic integrity of the judicial process. Traditionally, legal practice has accommodated a degree of delegation—junior advocates assist seniors, law clerks aid research, and drafting is often collaborative. However, the entry of AI complicates this framework. Unlike human assistants, AI lacks professional accountability, ethical obligations, and contextual judgment.
CJI Surya Kant’s remarks draw a subtle but important line: while delegation within a supervised professional hierarchy is acceptable, outsourcing to an unregulated technological system raises ethical and professional concerns. The responsibility for every word in a petition ultimately rests with the AoR, and this responsibility cannot be diluted by attributing errors to automated tools.
The caution from the Supreme Court of India aligns with a broader global trend. Courts in jurisdictions such as the United States and the United Kingdom have already encountered instances where AI-generated submissions contained fictitious citations or erroneous legal propositions. These incidents have prompted judicial advisories emphasising verification and accountability. India’s Supreme Court, through this intervention, appears to be proactively addressing similar risks before they manifest at scale. The message is clear: technological adoption must be accompanied by professional vigilance.
The timing of the remark, addressed specifically to newly inducted AoRs, is particularly telling. The AoR designation is not merely a professional milestone but a recognition of competence and trust. AoRs act as the interface between litigants and the Supreme Court, and their filings carry institutional weight.
By urging them to engage personally in drafting, the Chief Justice reinforces the idea that excellence in advocacy begins with mastery over pleadings. The ability to structure arguments, frame issues, and present facts coherently is not ancillary; it is central to effective constitutional litigation.
CJI Surya Kant’s remarks represent a nuanced judicial stance, one that is neither technophobic nor uncritically embracing of innovation. Instead, they articulate a principle of disciplined adoption: technology may assist but cannot replace the human intellect at the heart of legal practice.
Three broader implications emerge. First, a reaffirmation of professional accountability: advocates remain personally responsible for every submission, regardless of technological assistance. Second, judicial protection of process integrity: courts seek to safeguard the accuracy and reliability of pleadings in an era of automated content generation. Third, an ethical framework for AI use in law: the legal profession must evolve standards governing the permissible use of AI, balancing efficiency with responsibility.
Ultimately, the message is less about AI and more about advocacy itself. In constitutional litigation, where rights, liberties, and governance structures are at stake, the Court demands not convenience but conscientiousness.

