Setting the line between innovation and privacy

By Seerat Bhutani and Ashima Obhan, Obhan & Associates

Popular movies envisage futures in which advanced artificial intelligence (AI) becomes too powerful, enslaving and controlling humans. While the idea of world domination by AI may be far-fetched, the increasing and unregulated use of AI certainly poses a risk to the fundamental rights of individuals.

Ashima Obhan
Partner
Obhan & Associates

The training of AI requires access to large datasets, a considerable privacy concern because these exercises often use the personal data of individuals without their consent. Facial recognition algorithms, for example, are trained by comparing sample faces against a database. Clearview AI, an American facial recognition company that holds a database of billions of images scraped from social media platforms, was held to be in breach of Australian privacy law for collecting sensitive information without the consent of individuals. Data protection laws across the world require the express consent of individuals for the collection, processing or use of personal data, and this requirement should be incorporated into AI training frameworks. Facial recognition technology can also be used for mass surveillance or profiling by law enforcement agencies; the European Parliament has adopted a non-binding resolution against such practices, encouraging strict democratic control and independent oversight.

Seerat Bhutani
Associate
Obhan & Associates

Besides the threat to the fundamental right of privacy of individuals, AI algorithms have ethical shortcomings. Much like human beings, these algorithms suffer from inherent bias, inherited from underlying datasets that may be unknowingly influenced by stereotypes and discrimination. As machine learning models grow more complex, it becomes harder to understand their decision-making processes and to predict their outputs. This opacity is known as the black box problem. Amazon stopped using a recruiting algorithm after realising that it preferred men, because most of the résumés submitted over the previous 10 years had come from men. Additional safeguards are therefore necessary to prevent the perpetuation of discrimination against certain sections of society, and mechanisms are required to ensure the transparency of these algorithms and their methodologies.

Other ethical concerns involve the use of deep learning AI for malicious purposes such as the creation of deepfakes. These are doctored images and videos in which the face of one person is superimposed on another’s body. They have been used to fabricate pornographic content as well as to spread fake news and misinformation. Deepfake technology can harm not only the dignity and privacy of persons but also social peace and harmony.

While there are risks in the adoption of AI technology, it may add immense value to many sectors of the economy, such as healthcare, agriculture, retail, manufacturing, energy and e-commerce, and drive growth in, and the advancement of, many countries. Leveraging such technologies and promoting innovation should, however, be balanced against the protection of the fundamental rights of individuals.

Legal systems should adapt to the unique challenges posed by AI through regulation. In April 2021, the European Union published a proposal for legislation on AI that follows a risk-based approach, classifying uses of AI as posing unacceptable, high or low risk and imposing requirements according to each class. Many countries have adopted the OECD’s ethical principles for AI and released their own proposals for the responsible use of AI.

Currently, India has no legislation that specifically regulates AI. Unfortunately, the Personal Data Protection Bill, 2019, which would provide a data protection framework for the country, has not yet been passed by the legislature. It is imperative that this legislative gap is closed and a legal framework developed for the regulation of AI. Since the decisions and actions of AI are shaped by such factors as training, datasets, algorithms and deployment, it may be challenging to apportion accountability where AI has caused harm. The involvement of multiple entities in the development process will make the assignment of liability more complex still. The development of sufficiently robust legislation is therefore likely to take time.

Ashima Obhan is a partner and Seerat Bhutani is an associate at Obhan & Associates

Obhan & Associates


N – 94, Second Floor, Panchshila Park,

New Delhi – 110017, India


www.obhanandassociates.com

