A discussion on the challenges and trends of this novel technology
In addition to being highly controversial, Facial Recognition Technology (FRT) presents our legal system with a host of complex, interrelated challenges. Currently, it leaves us with many questions—about the technical standards, regulatory laws, and democratic oversight required to ensure that our civil liberties are protected—and very few answers.
The recent panel organized by the David Asper Centre for Constitutional Rights, the International Human Rights Program, and the Future of Law Lab on February 15 sought to help break down this technical, ethical, and legal quagmire. The session made clear that tackling this issue requires a consideration of privacy, criminal, constitutional, and human rights law, and that it may be more productive to fully consider the nature of the problem before skipping to a solution.
Professor Vincent Chiao—an associate professor at the Faculty of Law, currently serving as a visiting professor at Harvard Law School—discussed the challenges of regulating FRT through the legal framework of privacy protection. He recognized that we might be inclined to argue that government use of this technology violates our right to privacy under the Charter. The problem is that it is not obvious how it does.
Although there is something inherently unsettling about software that can detect and recognize a face, our faces are not private in the ordinary sense of the term. In fact, they are arguably the most public part of our bodies. A pertinent example of this is that we are required to show our government-issued photo ID cards in a variety of contexts. The question becomes whether our privacy is infringed more by the software than it would be by an official manually comparing our photo to our face.
Facial recognition is not as obvious a violation of privacy as an invasive procedure or other form of bodily manipulation would be. It is perhaps more plausible to categorize it as an invasion of our right to anonymity. However, Professor Chiao is not sure whether such a right is available to us. It is these complications with the privacy argument that make it difficult for courts, lawyers, and civil rights advocates to challenge FRT where it leads to incipient authoritarianism or infringes civil liberties. Though the concerns about unconstrained use of FRT are reasonable, arguing privacy violations might not be the most effective avenue for change.
Kate Robertson, a criminal and regulatory litigator and Fellow at the Citizen Lab, echoed the sentiment that the FRT problem does not lend itself to a simple answer. She argued that part of the reason why is the complex nature of the technology itself. FRT’s consequences in real terms are much more nuanced than those of CCTV footage, for example. The human rights law issues FRT poses, as well as the complex systems of laws that would likely apply to its use, make it stand apart.
When considering the legality of facial recognition, Robertson emphasized looking at a number of important variables. What makes the difference between legal and illegal action in the privacy and human rights contexts is whether there has been compliance with a comprehensive system of checks and balances. This system informs what the appropriate boundaries of police and governmental conduct are in a free and democratic society.
Robertson disagreed with Professor Chiao that privacy law is as limited in its current application as he seemed to suggest. She pointed to the Supreme Court of Canada's (SCC) recognition, since as early as the 1990s, that privacy law must keep pace with technological developments—a recognition that what the law requires is a normatively defined set of rules, applied contextually and responsive to changes in technology over time. The legal oversight mechanisms that specifically govern wiretapping and GPS tracking provide examples of this.
While such examples can give us clues about which normative principles apply to FRT, privacy law still needs to respond and adapt to the particular issues that this technology will present. Currently, we are in a gap period where a lack of appellate review makes it unclear how things will progress from a regulatory standpoint. The problem with this gap is that AI technology such as facial recognition can be, and is being, experimented with without the necessary infrastructure of regulatory laws, training, and community notice. This is exacerbated by the range of available systems with different levels of accuracy and bias, and by the fact that these systems in general tend to misidentify racialized and female faces.
Both Professor Chiao and Robertson agreed that the future of FRT is unknown. Whether this calls for further police and governmental experimentation, or for the implementation of moratoriums until constitutional safeguards are put into place, was left open to debate. What is clear is that in making these calls, we need to weigh the usefulness of FRT against the dangers that its unchecked use and algorithmic bias pose to our civil liberties and to marginalized communities.