

Ethics and Biometric Data


Author: Kathryn Glasgow


Biometric data collection has been developing for more than a century and a half, with the first recorded data collected in 1858, when William Herschel began using fingerprints for identification. Today, biometric data has expanded far beyond fingerprints to include technologies like AI facial recognition: the nature and scope of biometric data have changed drastically since its genesis. Biometric data traditionally falls into two categories: first generation and second generation. First-generation biometric data’s main purpose is to establish who a person is; second-generation biometric data focuses on how a person is. The combination of these two generations leads to a variety of innovations and concerns.


It is essential to consider the ways these advances have prompted ethical questions about the technology and the responsibility of its users. The rapid growth and change of technology, even within the last ten years, have created a gap between our ethical responsibilities and the capabilities of our innovations. In 2004, Martin and Freeman posed the question “Who should be accountable for the ethical implications of technologies?” and suggested a socio-technical systems approach: people BOTH shape and are shaped by technology (Martin and Freeman 2004). Although a clear power dynamic would be easier to understand, humanity and technology share power and influence each other. The relationship is therefore more complex than people may perceive, which is why a socio-technical systems context is needed when discussing the ethics of advances like biometric data. Without this context, it could seem as simple as “Well, people should just use it ethically.” But what if the technology itself leads to unethical choices? Or to imbalances of power? Humanity and technology are interwoven in a confusing, complicated way that requires an entirely new ethical framework.


Privacy

One of the biggest ethical concerns raised by biometric data is privacy. The nature of biometric data is simply different from that of statistical data – it feels personal, sensitive (Evans et al. 2017). The original privacy framework established for legal protection did not cover everything that biometric data can capture, and it has since been expanded to include things like privacy of thoughts and feelings. This is where, in my opinion, the ethical pitfalls of biometric data start to become clear. People who use and/or abuse these tools can attain a degree of omniscience without others’ consent. Most people want their thoughts and feelings to remain private. Therefore, a pillar of the ethical use of biometric data is AUTONOMY. With the choice to give informed consent, people can feel secure in their sense of self while still providing biometric data. One example of autonomous biometric data collection is Olympic athletes, who track heart rate, hydration levels, body fat, and more to inform their own training. If someone’s biometric data is taken without their knowledge, informed consent is impossible. The problem is that biometric data is becoming easier to collect without the subject knowing. Will that encourage nonconsensual collection of biometric data? How can we protect subjects from it?


Legal Protection

Legal protection of digital information, communication, monetization, and more has been rushing to catch up with the rapid changes in technology available for public consumption across the globe. Regarding data governance specifically, the European Union (EU) established the General Data Protection Regulation (GDPR), which came into effect in May 2018. It was written to protect individual rights to data privacy by establishing regulations for firms, such as requiring consent for data processing and granting individuals access to their personal data. The interesting thing about regulations like this is the “privacy paradox”: people share loads of private information on the web while simultaneously wanting to protect their privacy (Corcoran and Costache 2016). If the individual gives their consent, is that enough? Or does this require more regulation than individual choice can provide? The legal regulations around privacy have been built on PHYSICAL privacy (a search warrant is required to enter a home or compel fingerprinting): the boundaries are clearer, and violations are more objective and easier to prove. As we collect more biometric data and integrate it into our society, what are the ethical responsibilities of the developers, implementers, users, and subjects of biometric data collection?


Discrimination

Is privacy the only ethical drawback? Many concerns about biometric data center on its potential for discrimination. Biometric technology, like face trackers, can demographically classify people’s age, ethnicity, gender, and even sexual orientation. As it continues to improve, its classifications will become more accurate. If this sort of data is used by firms in hiring and firing, or by educational institutions in student enrollment, what governance will prevent discrimination (Campisi 2013; Naker and Greenbaum 2017)? When technology makes judgments, there can be an ‘illusion of objectivity,’ which makes the technology appealing as a supposedly bias-free aid to hiring and firing decisions. However, it is not that black and white. Who made the algorithm? Who, if anyone, will have ‘veto’ power to decide against the technology’s suggestion? Technology typically must be taught to make judgments and decisions, with some of the latest AI systems as partial exceptions. If those with digital literacy leverage biometric data to discriminate against certain populations, how will those who are not digitally literate be able to tell the difference? Isn’t the algorithm always right? Martin explores the firm’s responsibility for algorithms, arguing that it is determined by two factors: the role of the decision in society and the role of the algorithm in the decision (Martin 2019). It is important to balance these two factors when contextualizing the algorithm’s responsibility in decision-making. Does it matter if it is not the technology’s fault? Is the mere potential to use the technology in this way enough to make it unethical?


Government

The clearest illustration of these ethical concerns lies in government and political applications. The United States and China both use biometric data for national security under two very different government structures. In theory, the United States’ use of biometric data should be more just because of its democratic accountability. However, it is unrealistic to assume that the government acts only with justice. China’s use of biometric data is certainly unjust: as an authoritarian state, it exhibits more explicit control over its citizens. Biometric data has been deployed in China for some time, even for small matters like identifying jaywalkers with facial recognition from CCTV footage (Qiang 2019). Constant surveillance has become the expectation for Chinese citizens, who understand that they will never be asked for informed consent. In the United States, the use of biometric data is more ambiguous. Law enforcement has been working with a private firm called Clearview AI, which focuses on facial recognition technology and is employed by the government as well as private firms. Clearview AI’s website highlights its successes with law enforcement, such as identifying child victims of sexual exploitation (Ainsley 2023). The risk of advertising work in that realm, however, is that bad actors may use the technology for malicious ends or learn how to ‘trick’ the AI.


Conclusions

Innovations in biometric data collection are equally exciting and worrisome. While we can leverage them for promising projects like our eye-tracking study of climate change data visualizations, they can also be used to discriminate, oppress, and dehumanize. We have a responsibility to spend time on the ethical implications of biometric data for society, nationally and internationally, to protect each other from new manifestations of injustice.

 

References

  1. Ainsley, Julia. "Homeland Security Identifies 311 Child Victims of Sexual Exploitation in 'Cold Cases'." NBC News, 9 Aug. 2023, www.nbcnews.com/news/investigations/homeland-security-identifies-311-child-victims-sexual-exploitation-rcna97357. Accessed 13 Apr. 2025.

  2. Campisi, Patrizio. "Security and privacy in biometrics: towards a holistic approach." Security and Privacy in Biometrics. London: Springer London, 2013. 1-23.

  3. Corcoran, Peter, and Claudia Costache. "Smartphones, biometrics, and a brave new world." IEEE Technology and Society Magazine 35.3 (2016): 59-66.

  4. Evans, Robert, Michael McNamee, and Owen Guy. "Ethics, nanobiosensors and elite sport: The need for a new governance framework." Science and Engineering Ethics 23 (2017): 1487-1505.

  5. Martin, Kirsten E., and R. Edward Freeman. "The separation of technology and ethics in business ethics." Journal of Business Ethics 53 (2004): 353-364.

  6. Martin, Kirsten. "Ethical implications and accountability of algorithms." Journal of Business Ethics 160.4 (2019): 835-850.

  7. North-Samardzic, Andrea. "Biometric Technology and Ethics: Beyond Security Applications." Journal of Business Ethics 167.3 (2020): 433-450.

  8. Qiang, Xiao. "The road to digital unfreedom: President Xi's surveillance state." Journal of Democracy 30.1 (2019): 53-67.

  9. Smith, Marcus, and Seumas Miller. "The ethical application of biometric facial recognition technology." AI & Society 37.1 (2022): 167-175.
