AI strategist Lisa Palmer and privacy consultant Jodi Daniels discuss privacy concerns around the acquisition of biometric data. Facial recognition technology is increasingly being deployed to catch criminals, but experts express concern about its impact on personal privacy and data.
[Photo: Output of a Google Vision artificial intelligence system performing facial recognition on a photograph of a man, with facial features identified and bounding boxes drawn, San Ramon, California, November 22, 2019.]

Palmer also noted that predictive policing is often a source of tremendous bias in facial recognition technology. In these instances, some AI systems falsely identify people of color and people from underrepresented groups more frequently.
Critics of the company argue that police use of its software puts everyone into a "perpetual police lineup": whenever police have a photo of a suspect, they can compare it against your face. Many people find this invasive. It also raises questions about civil liberties and civil rights, and the software has falsely identified people despite a typically high accuracy rate.
One of the biggest challenges in AI training, according to Palmer, is identifying what is not there. Daniels highlighted two pivotal instances that have sparked debate about facial recognition and helped shape biometric laws.

[Photo: Passengers queue up to pass through the north security checkpoint Monday, Jan. 3, 2022, in the main terminal of Denver International Airport in Denver, Colorado.]

"So, if you think about just the idea of being tracked, most people don't love the idea of someone tracking you," Daniels said.