The Edmonton Police Service in Canada has begun a pilot program testing body cameras equipped with artificial intelligence capable of real-time facial recognition. The technology, developed by Axon Enterprise, is designed to identify individuals on a police watch list of approximately 7,000 people considered high-risk.
This trial marks a significant step for Axon, which had previously suspended its work on facial recognition technology in 2019 following ethical concerns. The move is now reigniting debate among privacy advocates, technology ethicists, and law enforcement officials about the role of AI in public surveillance and its potential impact on civil liberties.
Key Takeaways
- Edmonton police are testing AI-powered body cameras that can identify individuals from a "high risk" watch list.
- The watch list contains around 7,000 names, including those flagged as violent, armed, or an escape risk.
- The technology is provided by Axon, a leading Taser and body camera supplier, which had previously paused facial recognition development due to ethical issues.
- Critics, including a former Axon ethics board chair, raise concerns about transparency, public debate, and potential biases in the technology.
- The pilot is described by Axon as "early-stage field research" to assess performance and necessary safeguards.
A Controversial Technology Returns to the Streets
Police in Edmonton, a major Canadian city with over one million residents, are now at the forefront of a contentious technological experiment. A group of approximately 50 officers is participating in a trial using body cameras that can actively scan faces and compare them against a curated police database.
According to Kurt Martin, acting superintendent of the Edmonton Police Service, the primary goal is to improve officer safety. The system is intended to alert officers to the presence of individuals who have been classified with a "flag or caution" for behaviors such as being "violent or assaultive" or an "escape risk."
The Watch List Breakdown
The database used in the pilot program is composed of two main lists, combined as sketched after the breakdown:
- 6,341 individuals with a "flag or caution" for high-risk behavior.
- 724 individuals with at least one outstanding serious criminal warrant.
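Together the two lists account for 7,065 entries (6,341 + 724), consistent with the roughly 7,000 figure cited at the program's announcement. As a minimal sketch of how such a combined list might be represented, the schema below is purely illustrative; the field names and category labels are assumptions, not Axon's or the Edmonton Police Service's actual data model.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class WatchlistEntry:
    """One record in the pilot's combined watch list (hypothetical schema)."""
    person_id: str   # hypothetical internal identifier
    category: str    # "flag_or_caution" or "outstanding_warrant" (assumed labels)
    reason: str      # e.g. "violent or assaultive", "escape risk"

# Reported composition of the pilot database:
FLAG_OR_CAUTION_COUNT = 6_341  # flagged for high-risk behavior
WARRANT_COUNT = 724            # at least one serious outstanding warrant

print(FLAG_OR_CAUTION_COUNT + WARRANT_COUNT)  # 7065 -- the "approximately 7,000"
```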
This initiative follows a 2023 mandate from the Alberta provincial government requiring all police agencies to adopt body cameras. The original intent was to increase transparency and improve evidence collection, but the addition of live facial recognition introduces a new layer of complexity and concern.
Axon's Cautious Re-entry into Facial Recognition
The company behind the technology, Arizona-based Axon Enterprise, is the dominant supplier of body cameras in the United States and is expanding its presence globally. In 2019, the company publicly stepped back from facial recognition after its own AI ethics board raised serious concerns about its reliability, potential for bias, and societal implications.
Rick Smith, Axon's founder and CEO, has framed the Edmonton pilot not as a product launch but as "early-stage field research." In a public statement, he explained that testing the system outside the U.S. allows the company to "gather independent insights, strengthen oversight frameworks, and apply those learnings to future evaluations."
"It’s essential not to use these technologies, which have very real costs and risks, unless there’s some clear indication of the benefits," stated Barry Friedman, a law professor at New York University and the former chair of Axon's AI ethics board. He expressed concern that the technology is moving forward without sufficient public debate or independent vetting.
Axon has acknowledged that environmental factors like lighting and camera angles can affect the accuracy of all facial recognition systems, potentially having a disproportionate impact on individuals with darker skin tones. The company insists that every potential match generated by the AI will require human review to mitigate these risks.
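That human-in-the-loop safeguard can be pictured as a gate between the model's raw comparisons and anything an officer ever sees. The sketch below is an assumption about how such a gate might be structured; the threshold value, class, and function names are invented for illustration and do not come from Axon.

```python
from dataclasses import dataclass

@dataclass
class CandidateMatch:
    """A single face-to-watchlist comparison from the recognition model."""
    person_id: str
    similarity: float  # model similarity score in [0.0, 1.0]

# Hypothetical threshold; real deployments tune this against false-match
# rates, which Axon acknowledges vary with lighting, angle, and skin tone.
REVIEW_THRESHOLD = 0.90

def route_match(match: CandidateMatch, review_queue: list) -> None:
    """Never auto-confirm: anything above threshold is queued for a human
    reviewer rather than treated as a positive identification."""
    if match.similarity >= REVIEW_THRESHOLD:
        review_queue.append(match)  # a person must confirm or reject
    # Below-threshold comparisons are discarded, never surfaced to officers.

queue: list = []
route_match(CandidateMatch("EPS-0001", 0.94), queue)  # lands in review, not an alert
```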
How the Edmonton Pilot Works
Many operational details of the trial remain undisclosed. The pilot is scheduled to run through the end of December and will operate only during daylight hours, a significant constraint in a city known for its long winter nights. Edmonton police have said that factors such as cold temperatures and lighting conditions will be key variables in evaluating the technology's performance.
During this initial phase, the participating officers will not receive real-time alerts on their devices if a match is detected. Instead, the data is being collected and will be analyzed later to assess the system's accuracy and effectiveness. The long-term vision, however, is for officers to receive live notifications.
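This logging-only arrangement is what engineers commonly call running a system in "shadow mode": detections are recorded for offline evaluation while the alerting path stays disabled. Below is a minimal sketch of that pattern, assuming a hypothetical mode flag and log format; nothing here reflects Axon's actual implementation.

```python
import json
import time

PILOT_SHADOW_MODE = True  # hypothetical flag: log only, never alert

def handle_detection(match: dict, log_path: str = "pilot_matches.jsonl") -> None:
    """In shadow mode, record each detection for later accuracy analysis.
    Live officer notification is the stated long-term goal, not the pilot."""
    record = {"timestamp": time.time(), **match}
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    if not PILOT_SHADOW_MODE:
        send_officer_alert(match)  # path not exercised during the pilot

def send_officer_alert(match: dict) -> None:
    """Placeholder for the future real-time alert path."""
    raise NotImplementedError("disabled during early-stage field research")
```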
A Global Divide on Facial Recognition
The use of real-time facial recognition by police is a divisive issue globally. The European Union has largely banned its use in public spaces, with exceptions for serious crimes like terrorism. In contrast, authorities in the United Kingdom have used the technology to make over 1,300 arrests in the last two years and are considering a nationwide expansion.
Edmonton police officials have said the system is designed to be activated only when an officer is responding to a call or has initiated an investigation, not during passive patrols through public crowds. This is meant to balance surveillance capabilities with individual privacy rights, though critics question how this distinction will be enforced in practice.
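In software terms, that design reads as a policy guard on activation rather than continuous scanning. The sketch below expresses the stated rule with invented context names; how the Edmonton system actually encodes or enforces this condition is not public.

```python
from enum import Enum, auto

class OfficerContext(Enum):
    PASSIVE_PATROL = auto()       # e.g. moving through a public crowd
    DISPATCHED_CALL = auto()      # responding to a call for service
    ACTIVE_INVESTIGATION = auto()

# Contexts in which, per Edmonton police, scanning may be activated.
SCAN_PERMITTED = {OfficerContext.DISPATCHED_CALL,
                  OfficerContext.ACTIVE_INVESTIGATION}

def may_activate_recognition(context: OfficerContext) -> bool:
    """Policy guard: facial recognition does not run during passive patrols,
    only when tied to a specific call or an initiated investigation."""
    return context in SCAN_PERMITTED
```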
Concerns Over Transparency and Community Trust
The rollout of the pilot program has raised questions about oversight. The office of Alberta's information and privacy commissioner confirmed it received a privacy impact assessment from the Edmonton police on December 2, the same day the program was publicly announced. The office is now reviewing the assessment.
Temitope Oriola, a criminology professor at the University of Alberta, described Edmonton as a "laboratory for this tool." He noted the police service has had a strained relationship with some communities, particularly its Indigenous and Black residents, and questioned whether the technology would improve public safety or further erode trust.
The central debate revolves around who should authorize the use of such powerful technology. Barry Friedman argues that these decisions should not be left to police agencies and technology vendors alone. "A pilot is a great idea. But there’s supposed to be transparency, accountability," he said. "None of that’s here. They’re just going ahead."