AI-equipped police body cameras in Canada's Edmonton spark ethical concerns | AP News
Edmonton, Canada, is conducting a live test of facial recognition technology, using AI-equipped police body cameras to detect the faces of approximately 7,000 individuals on a "high risk" watch list. The trial raises questions about the potential role of facial recognition in policing across North America, despite the concerns that Axon Enterprise, Inc., the leading body camera maker, raised about the technology six years ago.
The pilot project, which began last week, has sparked debate over the ethics of using facial recognition technology in law enforcement. Critics argue that the technology is being deployed without sufficient public debate, testing, or expert scrutiny of its societal risks and privacy implications.
Barry Friedman, a former chair of Axon's AI ethics board, expressed concerns about the company's decision to proceed without thorough evaluation. He emphasized the importance of clear benefits and public consensus before implementing technologies with significant risks and costs.
Axon's CEO, Rick Smith, defended the Edmonton pilot as early-stage field research, aiming to assess technology performance and identify necessary safeguards for responsible use. The project aims to enhance officer safety by identifying individuals with "flag or caution" status, such as those with violent histories or weapons.
The technology's potential impact on policing worldwide is significant. Axon, a publicly traded firm best known for the Taser, dominates the U.S. body camera market and has been expanding its presence in Canada. The company recently secured a contract with the Royal Canadian Mounted Police, beating out competitor Motorola Solutions.
Motorola, while capable of integrating facial recognition into its products, has chosen not to deploy it proactively, citing ethical considerations. When the Alberta government mandated body cameras in 2023, its stated aims were improved transparency, better evidence collection, and faster investigations.
Despite its potential benefits, real-time facial recognition has faced backlash from civil liberties advocates and political groups. Studies have found biased and inaccurate results, particularly for people with darker skin, and several U.S. states and cities have moved to limit police use of the technology.
The European Union has banned real-time face scanning in public spaces except in investigations of serious crimes. In the U.K., authorities have used the technology to make 1,300 arrests over the past two years and plan to expand its use. Key details of Edmonton's pilot remain undisclosed, including which AI model and third-party vendor are involved.
Critics, such as University of Alberta criminology professor Temitope Oriola, question the technology's impact on public interactions and police-community relationships. Axon has faced similar backlash before: its AI ethics board resigned over concerns about a Taser-equipped drone.
Despite past controversies, Axon CEO Smith says the company has made significant improvements in facial recognition accuracy and is ready for real-world trials. Even so, demands for transparency, accountability, and rigorous testing remain central to the debate.