Regulating AI: Navigating India’s challenging regime

By Harsh Kumar and Vishal Singh, Kaizen Law

Artificial intelligence (AI) applications have dramatically influenced our everyday lives through generative AI (GenAI), shaping activities including content creation, marketing, customer service through chatbots and voice assistants, medical diagnostics, drug discovery, personalised entertainment and autonomous driving systems.

AI applications span various industries including research and education, industrial applications, customer support and healthcare, and they promise increased efficiency, automation and refined decision-making processes.

According to Statista, the global AI market is expected to reach a value of USD305.9 billion in 2024, expanding at a compound annual growth rate (CAGR) of 15.83% until 2030. In India, the AI market is projected to surpass USD5.47 billion in 2024 and to grow at a CAGR of 17.94% through to 2030.

The Indian AI startup ecosystem is also thriving and, per an AIM Research report dated February 2024, AI entities have garnered more than USD560 million in funding in 2023. These figures underscore the growing significance of AI in India’s technological landscape.


Harsh Kumar
Founding Partner
Kaizen Law
Tel: +91 9999191620

Unlike the EU, which on 26 January 2024 released the final draft of its AI Act, classifying AI systems according to the level of risk they pose, India does not have overarching legislation addressing AI and its application.

However, existing Indian laws, such as the Information Technology Act of 2000 (IT Act) read with associated rules, and the Digital Personal Data Protection Act of 2023 (DPDP Act, as and when notified), establish the regulatory expectations for processing personal data and govern a few aspects to prevent misuse of AI.

For example, on 1 March 2024, the Ministry of Electronics and Information Technology (MeitY) issued an advisory under the IT Act outlining guidelines for AI models. The advisory requires adherence to the due diligence requirement specified under the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, prevention of bias or discrimination in AI training, obtaining government approval for deploying AI that is under testing or is deemed unreliable, and notifying users about the consequences of dealing with unlawful information. However, clarity is needed on the definition of “significant” platforms to which this advisory applies.


The lack of a central legislative framework to grade, evaluate and govern the implementation of various AI tools is one of the significant handicaps for responsible AI development in India. Existing Indian laws are also inadequate to fully govern the proliferation of fake news, deepfakes, and the bias and prejudices embedded in AI tools, as was recently flagged in Google’s GenAI model, Gemini.

Assigning liability to an AI tool that autonomously generates plagiarised or defamatory content, or is programmed to generate AI worms to steal data or deploy malware (for example, the Morris II worm), is also untested in India, given that under Indian law criminal liability attaches only to a human mind and not to an AI tool.

Also, no judicial precedents establish the tests for the responsible use of AI. Some additional limitations in governing AI are discussed below.

The Copyright Act, 1957. The Copyright Act makes a work eligible for copyright only if it is original and the product of human authorship. Hence, AI-generated content is not considered copyrightable. Recent cases in the US and India also highlight the complexities surrounding copyright in AI-generated content. For example, Ankit Sahni’s attempt to register the AI-generated artwork, Suryast, which was produced by the AI painting app Raghav, was denied.

Under the extant Indian framework, granting authorship rights to AI would also present significant practical challenges, including AI having perpetual copyright protection, given its unending existence. Consequently, comprehensive reforms are necessary before considering copyright ownership for AI-generated works.

Data protection. The DPDP Act permits entities to process digital personal data only with individual consent, or for specific legitimate purposes. It is unclear how autonomous AI tools sourcing users’ personal information from third-party apps, in contravention of the DPDP Act, would be punished under that act.

The ability of an AI tool to erase or edit personal data on the withdrawal of consent by an individual is also challenging, considering the difficulties involved in isolating such personal data from the pre-fed training parameters of an AI tool, and AI’s general inability to unlearn.


Vishal Singh
Kaizen Law
Tel: +91 7459961538

Ministry of Electronics and Information Technology. The MeitY has undertaken various initiatives to drive responsible innovation and AI development in India. These initiatives include establishing committees on AI to formulate policy frameworks, setting up centres of excellence for the internet of things (IoT) across multiple cities, and creating specialised centres for virtual and augmented reality, gaming, visual effects, computer vision and AI, and blockchain technology.

Recommendations from MeitY committees include developing the National Artificial Intelligence Resource Platform to make public data available for AI, identifying national missions for each sector, and amending/formulating regulations to enable AI usage. The MeitY has also launched projects like the National AI Portal, or INDIAai, and the AI Research Analytics and Knowledge Dissemination Platform.

Furthermore, the National Programme on AI aims to leverage transformative technologies for social impact, focusing on areas such as skilling, ethics, governance, research and development. These initiatives represent India’s efforts to regulate the AI landscape while fostering inclusive and innovative growth.

Niti Aayog. In 2018, Niti Aayog, the apex policy thinktank of the government of India, released a discussion paper titled “National Strategy for Artificial Intelligence”, focusing on AI’s role in critical sectors such as healthcare, agriculture, education, smart cities and transportation.

The paper proposed establishing research centres, a common cloud platform, and appropriate intellectual property frameworks for governing AI innovation. In February 2021, Niti Aayog published “Principles for Responsible AI”, emphasising safety, equity, inclusivity, privacy, transparency, accountability and values.

Subsequently, an approach paper titled “Operationalising Principles for Responsible AI”, published in August 2021, outlined roles for government, the private sector and research institutions, advocating a graded, risk-based regulatory approach akin to the EU AI Act.

The approach recommends stringent oversight and responsible AI practices for high-risk AI, ensuring alignment among stakeholders for technology-agnostic governance frameworks. These policy documents are initial steps in policy formulation in India towards offering a governance framework for promoting responsible AI systems.

Telecom Regulatory Authority of India (TRAI). On 20 July 2023, the TRAI issued recommendations for responsible use of AI in the telecoms industry. The TRAI highlighted the necessity of regulating AI because of its associated risks such as bias, accountability issues, the explainability challenges of AI tools for human comprehension, and the potential of AI for surveillance. In alignment with Niti Aayog’s approach, the TRAI proposed adopting a risk-based regulatory framework, suggesting stricter controls and scrutiny for high-risk AI applications.

The Reserve Bank of India (RBI). The RBI is contemplating a regulatory framework to oversee the use of AI in the banking and financial services sector, spurred by the widespread adoption of AI tools by leading Indian banks including HDFC Bank, ICICI Bank, the State Bank of India, and Kotak Mahindra Bank.

Recognising the complexity and potential biases in AI algorithms and concerns regarding data security, the RBI aims to introduce short-term measures to address these issues. These measures may include mandatory disclosures by banks to customers before AI use and deployment, stricter rules for outsourcing core banking functions including using AI for credit underwriting, and standardised storage formats for AI algorithms to promote transparency, accountability and auditability within the banking industry.

Bureau of Indian Standards (BIS). The draft Indian standard IS/ISO/IEC 42001:2023, released by the BIS in January 2024, outlines guidelines and prerequisites for establishing, implementing, maintaining and continually improving an AI management system within organisations. An AI management system would include any software platform designed to manage, oversee and optimise AI within organisations.


With the widespread deployment of AI, concerns about AI surveillance, data security, potential abuse of individuals’ sensitive information, amplification of social biases, security risks, fake news and lack of reliability have become increasingly prominent. There are also worries about AI driving unemployment and elevating national security risks, including interference in free and fair elections.

Recent controversies involving the proliferation of deepfake incidents and fake news in India, and the controversial use of facial recognition technology in the DigiYatra app for faster security clearance at Indian airports, highlight the importance of harnessing AI while proactively addressing its misuse and ethical concerns surrounding its use.

AI systems often operate as “black boxes” whose decision-making processes cannot be readily evaluated. This underscores the need for an appropriate policy framework to govern AI, ensuring that AI systems are transparent, safe, traceable and non-discriminatory. As Sundar Pichai pointed out: “AI is too important not to regulate.”

India requires specific regulations to ensure the responsible development of AI and protect users from security threats and harm. The Digital India Act (DIA), which is expected to replace the IT Act, will provide a more comprehensive governance framework for AI and other emerging technologies like blockchain. However, because of the dynamic nature of AI, a sector-specific approach and open, consultative governance frameworks are also essential.

Kaizen Law

4th Floor, Spring House, Plot No 2,

Golf Course Road, Sector 43,
Gurgaon, Haryana – 122011
Tel: +91 9999191620

