Artificial Intelligence (AI) is transforming our world, but concerns have grown over its ethical implications and risks. The European Union (EU) has taken a significant step by adopting the AI Act (act), a comprehensive regulatory framework for the development and use of AI technologies.
The act imposes obligations on providers and users of AI according to the level of risk and potential harm of their systems. Systems posing an unacceptable threat to individuals’ safety are prohibited. Such threats include subliminal or intentionally manipulative techniques, the exploitation of individuals’ vulnerabilities, and social scoring, which classifies individuals according to their social conduct, socioeconomic status or personal characteristics.
The act bans intrusive and discriminatory uses of AI, such as real-time remote biometric identification systems in public areas; post-incident remote biometric identification systems, permitted only for the investigation of serious crime and with judicial authorisation; and biometric categorisation using sensitive characteristics such as gender, race, ethnicity and religion.
Also prohibited are predictive policing systems based on profiling, location or past criminal behaviour; emotion-recognition systems in law enforcement, border control, workplaces and educational institutions; and the indiscriminate collection of biometric data from social media or CCTV footage to create facial recognition databases. Such uses violate human rights, particularly the right to privacy.
The act’s definition of high-risk domains covers potential harm to individuals’ well-being, safety and fundamental rights, and to the environment. AI systems that manipulate voters in political campaigns, and systems used by social media platforms, are added to the list of high-risk applications.
Providers of foundation models, a rapidly developing area of AI, are also regulated. They must protect fundamental rights, health, safety, the environment, democracy and the rule of law. They must assess and mitigate risks, comply with design, information and environmental requirements, and register in the EU database. Transparency requirements apply to generative foundation models such as ChatGPT. These include disclosing that content is AI-generated, preventing the generation of illegal content, and publishing summaries of the copyrighted data used in training.
To boost innovation, the act exempts research activities and AI components provided under open-source licences. It promotes regulatory sandboxes, controlled environments in which AI can be tested before release.
India does not have separate legislation governing AI, but the proposed Digital India Act (DIA) is expected to regulate AI and emerging technologies from the perspective of user harm. Intermediaries deploying high-risk AI will probably be subject to oversight. High-risk AI is likely to be defined and regulated through algorithmic accountability, threat identification and vulnerability assessment. AI-based ad targeting and content moderation will be scrutinised. The DIA should impose accountability for upholding citizens’ constitutional rights and require AI-based tools to be used ethically to protect users. Penalties should be effective, proportionate and dissuasive in order to deter offending.
The DIA will align technological advancement with ethical standards, safeguard individuals and ensure responsible implementation. Transparency will strengthen consumer protection by enabling customers to know when AI affects them. A robust regulatory framework will boost public trust in AI technologies. High standards in the DIA will position India as a global leader in responsible AI, attracting international investment and collaboration, enhancing its global competitiveness and generating economic growth.
Implementing such regulation will be challenging. A balance between innovation and regulation is crucial so that research and technological advancement are not stifled. AI’s rapid evolution demands flexible and adaptable governance, while compliance requires effective enforcement and collaboration among government, industry and academia.
By establishing clear guidelines, promoting transparency and encouraging the responsible use of AI, India can become a global leader. Careful regulation and continuous collaboration between stakeholders will overcome obstacles and lead to a prosperous AI-powered future for India.
Ashima Obhan is a senior partner and Aparna Amnerkar is an associate at Obhan & Associates.
Advocates and Patent Agents
N – 94, Second Floor
New Delhi 110017, India