
Facial Recognition Technology: Why Should You Care?

Hiba Aftab Ansari - 3rd Year student, B.A. LL.B., Unity Post Graduate College
“If your password gets breached, you can change your password. If your credit card number gets breached, you can cancel your card. But you can’t change biometric information, like your facial characteristics.” — Edward Markey, US Senator. 
INTRODUCTION 

Most of us students remember the time when our attendance was marked manually by the teacher in a register. The process would easily take about five minutes, sometimes even ten amid the bustle. Today, Internet of Things (IoT) devices track and mark our attendance within seconds, saving time and effort. This means no more proxy attendance, piles of paper, or an “Attendance Day” off. Yet with the growing integration of advanced technology into everyday life, cybercriminals have never felt more powerful. This power is born of weak legislation and rapid developments in the Information Technology sector. In the absence of adequate privacy regulation, an individual’s personal data may be recorded without their knowledge or explicit consent. Some uses of facial recognition technologies (FRTs) are harmless, like a quick facial scan at airport check-in, keeping strangers out of your premises, or getting a menu recommendation at KFC. Other uses are concerning. Clearview AI, a US-based software company, has faced repeated controversies for offering law enforcement agencies a search engine built on facial data scraped from over 30 billion digital images across the internet, without the knowledge or consent of the people or platforms involved, in violation of the right to privacy. In this era of mass data explosion, it therefore becomes extremely important to protect your personal data, sometimes even from yourself.

1. DIGITAL INDIA IN THE PIPELINE 

Artificial Intelligence, the new topic of debate and discussion, is no longer an unfamiliar reality. Global conversations have shifted towards it, and people have begun both to praise and to criticize it: on one hand it promises a future run by an intelligence humans could once only imagine; on the other, it raises concerns ranging from bias and privacy to job security. AI can potentially enhance security and compromise it at the same time. India takes pride in ranking second among countries with the highest AI skill penetration from 2015 to 2024. The Central Government has pledged to invest USD 1.25 billion in AI development and launched the IndiaAI Mission to foster AI innovation. India’s consistent efforts to build a strong digital infrastructure have, however, posed some key challenges, particularly legal ones, that need to be addressed. Our data privacy regulations are not sufficiently stringent. For instance, the Digital Personal Data Protection Act, 2023 protects an individual’s personal data in the digital environment, but the Act is not comprehensive and suffers from an “enforcement deficit” against a rapid AI wave that poses new challenges every day. A comprehensive solution that balances the transformative role of AI with the potential threats of generative models is required to bridge the security gap.

2. FACIAL RECOGNITION TECHNOLOGY (FRT) SYSTEMS

A facial recognition system is designed and trained on large corpora of digital images of millions of people, curated from CCTV footage, media, social media, and other sources. The technology detects, identifies, or authenticates a person against a database of faces, and operates in two formats:

  1. 1:1 FRT System (Verification): A 1:1 system captures a person’s face live and authenticates it against that person’s specific, pre-enrolled facial data. This is the system that uses facial recognition to unlock your phone. The user, in this case, is aware that their facial data is being used for a specific purpose only. 
  2. 1:n FRT System (Identification): A 1:n system identifies a particular face among the many faces captured in an image or a video and generates a possible match. It is the format primarily used by Indian law enforcement agencies. (A minimal code sketch contrasting the two modes follows this list.) 
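
To make the 1:1 versus 1:n distinction concrete, here is a minimal sketch in Python. It assumes faces have already been converted into numeric “embeddings” by some face-recognition model; the cosine-similarity measure, the 0.6 threshold, and the function names are illustrative assumptions, not a description of any particular deployed system.

    import numpy as np

    def cosine_similarity(a, b):
        # Similarity between two face embeddings; closer to 1.0 means more alike.
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    THRESHOLD = 0.6  # hypothetical; real systems tune this to balance false accepts and rejects

    def verify_1_to_1(live_embedding, enrolled_embedding):
        # 1:1 verification (e.g. phone unlock): is this the one enrolled person?
        return cosine_similarity(live_embedding, enrolled_embedding) >= THRESHOLD

    def identify_1_to_n(probe_embedding, gallery):
        # 1:n identification: search a whole gallery {name: embedding} for the best match.
        best_name, best_score = None, THRESHOLD
        for name, embedding in gallery.items():
            score = cosine_similarity(probe_embedding, embedding)
            if score >= best_score:
                best_name, best_score = name, score
        return best_name  # None means no match above the threshold

The privacy stakes differ sharply between the two modes: 1:1 verification compares against a single template the user chose to enrol, while 1:n identification presupposes a gallery of many people’s faces, which is precisely where questions of consent and indiscriminate collection arise.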

3. LEGAL AND CONSTITUTIONAL INTERPRETATION 

In the Puttaswamy case, the Hon’ble Supreme Court laid down a three-fold test to determine whether a State action infringes the fundamental right to privacy: any restriction on privacy must have a legal basis, pursue a legitimate aim, and be proportionate. Justice S.K. Kaul added a fourth prong mandating “procedural guarantees against abuse of such interference.” In yet another chilling case, Manohar Lal Sharma v. Union of India, it was alleged that the Government of India used the Pegasus spyware, developed by the Israeli technology firm NSO Group, to conduct surveillance on a wide range of individuals, including journalists, activists, politicians and public officials, without their knowledge, thereby compromising their right to privacy. While hearing the case, the Hon’ble Supreme Court observed that even where the State invokes national security to justify the use of surveillance tools, any intrusion into an individual’s privacy rights must be legally justified and proportionate.

 4. BORDERING AI: AN OVERVIEW OF THE GLOBAL REGULATIONS 

  • As the world rapidly transforms digitally, more than 160 nations currently have data protection laws. The German state of Hesse was the first to enact such a law, the Hesse Data Protection Act of 1970. In the 2000s, the growing internet population sparked public debate on privacy and the need for proper protection of personal data. The EU’s GDPR, applicable since 2018, set the international standard for data protection by giving citizens far greater control over, and protection of, their data. 
  • India’s first legislation touching on data protection, the Information Technology Act, 2000, proved inadequate for the ever-growing social media, OTT, and e-commerce sectors. This changed when, in K.S. Puttaswamy v. Union of India, the Supreme Court recognized the Right to Privacy as a Fundamental Right, paving the way for the Digital Personal Data Protection Act, 2023. The Act, the first of its kind and built on the recommendations of the Justice B.N. Srikrishna Committee, is India’s most advanced privacy protection law, even though its enforcement remains lacking. 

5. INTERNATIONAL OUTLOOK  

  • EU AI Act (2024): The first comprehensive AI law, built on a risk-based approach: some uses are banned outright, high-risk systems such as biometric surveillance are strictly regulated, and general-purpose AI models are subject to transparency requirements. 
  • California Consumer Privacy Act (2018): Gives California residents the right to know what personal data is being collected and by whom, to have it deleted, and to opt out of its sale. 
  • HIPAA (USA): A law designed to protect the privacy and security of an individual’s health information. 
  • PIPEDA (Canada): Governs the collection, use and disclosure of personal data in the private sector, with a focus on consent and accountability. 

6. INDIA’S PERSPECTIVE  

India strives to integrate AI into its broader national development strategy and is piloting sector-specific approaches through ‘regulatory sandboxes’. Though the DPDPA 2023 is framed around a ‘privacy-first’ philosophy and seeks to foster innovation while protecting citizens’ rights, its implementation is still awaited.

6.1. Significant Roles Played by FRTs 

  • Thermal Screening (Kerala, during the pandemic): This system made it possible to take temperatures in a contactless manner. 
  • Maharashtra’s MARVEL (2025): An AI system scanned toll plaza CCTV footage within 36 hours to help trace a truck involved in a hit-and-run, showcasing AI’s prowess in police work. 

6.2. Problems with FRTs 

  • Algorithmic Bias: Buolamwini and Gebru’s study found an error rate of up to 34.7% for darker-skinned women, compared with under 1% for lighter-skinned men. The case of Nijeer Parks (USA, 2019), who was wrongfully arrested on the basis of a bad facial recognition match, highlights the real-world consequences of such bias. (A minimal audit sketch follows this list.) 
  • Privacy Infringement: The SAFE Kerala Project was challenged before the Kerala High Court over fears of indiscriminate data collection and misuse of citizens’ sensitive information by private companies. 
  • Data Misuse: The “Uighur Alert” system by Huawei is a case in point, demonstrating the use of FRT for state surveillance and ethnic targeting. 
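
The bias figures above come from disaggregating error rates by demographic group, which is the basic audit technique any FRT deployment can be subjected to. Below is a minimal Python sketch of that calculation; the group labels and the handful of records are purely hypothetical and stand in for a real evaluation dataset.

    from collections import defaultdict

    # Hypothetical audit records: (demographic_group, predicted_match, true_match)
    results = [
        ("darker_skinned_female", False, True),
        ("darker_skinned_female", True, True),
        ("lighter_skinned_male", True, True),
        ("lighter_skinned_male", True, True),
    ]

    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, predicted, actual in results:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1

    # Report per-group error rates; large gaps between groups indicate algorithmic bias.
    for group, total in totals.items():
        print(f"{group}: error rate {errors[group] / total:.1%} over {total} trials")

A deployment that reports only an aggregate accuracy figure can hide exactly the disparity Buolamwini and Gebru documented, which is why disaggregated reporting is a recurring demand in FRT governance.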

While global and Indian AI laws strive to balance innovation with accountability in the deployment of FRTs, the underlying problems of bias, misuse, and rights violations remain. There is a need to foster Trustworthy AI that preserves privacy and equality even as it enhances justice and governance. 

7. THE SOLUTION 

Who, then, should be given control over data? Should it remain the individual’s responsibility or the collective responsibility of the State? Franz Kafka’s famous work, The Trial, hints at an answer. A closer look at his depiction of individual powerlessness reveals a paradox in human nature: his characters readily submit to higher authority even when doing so causes them injury. They are disempowered. Merely giving people control does not work unless they possess the knowledge and understanding of what that power means and does. Sometimes people sign up for their own privacy breach, as with the adoption of smart devices (the “luxury surveillance”). In today’s era, controlling your data means standing up to powerful AI models, even as some AI deployments are heavily restricted or outright prohibited. AI does pose a threat to our privacy, but it also plays a significant role in arenas where privacy is not at stake. What we need are trustworthy parties, fiduciaries, and policies aimed at data protection and transparency.

CONCLUSION 

Contemporary frameworks, fragmented across sectoral policies and data protection statutes, lack a unified approach to foundational concerns such as bias, transparency, and algorithmic accountability. Notable incidents, including AI-driven recruitment and credit-scoring systems that exhibited measurably discriminatory outcomes, have underscored the inadequacy of existing ex-post compliance mechanisms and the need for ex-ante governance. The rapid adoption of generative AI technologies has intensified challenges related to misinformation and synthetic content, which remain only partially addressed by the Digital Personal Data Protection Act, 2023. Intelligent, technology-driven predictive policing and FRT can enhance public safety, but the unchecked expansion of technological surveillance poses serious threats to civil liberties, due process of law, and the very foundations of democratic governance.

REFERENCES 
  1. Cyber Space and Cyber Crime, available at: https://www.lawctopus.com/academike/cyber-space-and-cyber-crime/ (last visited on Aug 24, 2025). 
  2. KFC China is using facial recognition tech to serve customers – but are they buying it?, available at: https://www.theguardian.com/technology/2017/jan/11/china-beijing-first-smart-restaurant-kfc-facial-recognition (last visited on Aug 25, 2025). 
  3. Clearview AI Fined Yet Again For "Illegal" Face Recognition, available at: https://www.forbes.com/sites/roberthart/2024/09/03/clearview-ai-controversial-facial-recognition-firm-fined-33-million-for-illegal-database/  (last visited on Aug 25, 2025). 
  4. The 2025 AI Index Report, available at: https://hai.stanford.edu/ai-index/2025-ai-index-report (last visited on Aug 25, 2025). 
  5. Global AI Governance Law and Policy: India, available at: https://iapp.org/resources/article/global-ai-governance-india/ (last visited on Aug 25, 2025). 
  6. INDIAAI, available at: https://indiaai.gov.in/ (last visited on Aug 25, 2025). 
  7. Digital Personal Data Protection Act, 2023 (Act No. 22 of 2023). 
  8. Enforcement Gaps in India’s DPDP Act and the Case for Decentralized Protection Boards, available at: https://www.expresscomputer.in/guest-blogs/enforcement-gaps-in-indias-dpdp-act-and-the-case-for-decentralized-data-protection-boards/126140/ (last visited on Aug 25, 2025). 
  9. The Evolution of AI in Cybersecurity: From Rule-Based Systems to Generative AI, available at: https://www.researchgate.net/publication/388930668_The_Evolution_Of_Ai
  10. NITI AAYOG, RESPONSIBLE AI #AIFORALL 10-11 (2022). 
  11. Pegasus order W.P. (Crl.) No.314 of 2021. 
  12. Justice K.S. Puttaswamy & Anr. v. Union of India & Ors., available at: https://nluwebsite.s3.ap-south-1.amazonaws.com/uploads/justice-ks-puttaswamy-ors-vs-union-of-india-ors-5.pdf 
  13. AI helps police catch a hit-and-run suspect in 36 hours, available at: https://www.bhaskarenglish.in/tech-science/news/ai-helps-police-catch-a-hit-and-run-suspect-in-36-hours-indias-first-police-ai-system-marvel-helped-nagpur-cops-trace-truck-135703646.html (last visited on Aug 26, 2025). 
  14. Addressing Gender Bias in Facial Recognition Technology: An Urgent Need for Fairness and Inclusion, available at: https://www.cogentinfo.com/resources/addressing-gender-bias-in-facial-recognition-technology-an-urgent-need-for-fairness-and-inclusion (last visited on Aug 27, 2025). 
  15. Another Arrest, and Jail Time, due to a Bad Facial recognition Match, available at: https://www.nytimes.com/2020/12/29/technology/facial-recognition-misidentify-jail.html (last visited on Aug 28, 2025). 
  16. Hesse Data Protection Act, 1970 (Germany). 
  17. General Data Protection Regulation OJ L 119, 2016. 
  18. Information Technology Act, 2000 (Act No. 21 of 2000). 
  19. Huawei worked on a facial recognition system to identify Uighurs in China, says report, available at: https://www.indiatoday.in/technology/news/story/huawei-worked-on-a-facial-recognition-system-to-identify-uighurs-in-china-says-report-1747999-2020-12-09 (last visited on Aug 28, 2025). 
  20. Amazon and the Rise of ‘Luxury Surveillance’, The Atlantic (last visited on Aug 28, 2025). 

 

