Bots at the Gate


An Update on Automated Decision-Making in Canada's Immigration and Refugee System

Image credit: Citizen Lab

Technology plays an increasingly important role in decisions that have a profound impact on individuals' lives. In the European Union (EU), border officials hope that artificial intelligence can assist in determining whether a traveller is admissible. They have invested in iBorderCtrl, a British startup that claims to pick up on a traveller's "micro gestures" to assess the truthfulness of their answers.

If the system perceives that an individual has lied, it can categorize them as "medium risk" or "high risk," jeopardizing their admissibility into the EU. However, recent research has shown that the system produces false positives, and there is concern that it may discriminate against people on the basis of their ethnic origin. This raises the question of how far technology should go in making decisions that affect us.

Two years ago, the University of Toronto's International Human Rights Program (IHRP), the Citizen Lab, and the Information Technology, Transparency, and Transformation Lab published "Bots at the Gate: A Human Rights Analysis of Automated Decision-Making in Canada's Immigration and Refugee System." Co-authored by the IHRP's interim director Petra Molnar and Citizen Lab research fellow Lex Gill, the report examined how automated decision-making is used in Canada's immigration and refugee system, and its human rights implications.

Since at least 2014, Canada has experimented with algorithmic and automated technologies to replace or assist administrative decision-making by immigration officials. Officials are hopeful that these tools will fulfill technology's promise of greater expediency and accuracy in decision-making. However, these systems can significantly impair the human rights of the vulnerable people subject to them.

The current system of decision-making in the immigration and refugee context is designed to allow immigration officials to exercise discretion. For example, in determining an individual's admissibility into the country, the Canada Border Services Agency (CBSA) must decide whether that individual is a "high risk traveller" or poses a risk to national security. These determinations are subjective, and biases can play a role in the outcome.

In 2019, CBC News reported that "border service officers did use their discretion to order secondary inspections for travellers from the Middle East, Africa and the Caribbean at far higher rates than for travellers from the U.S. or Western Europe." The same report also noted that facial recognition technology had a higher error rate for darker-skinned travellers and for women. Technology that aims to remove human intervention could exacerbate these existing biases and errors at the expense of many, especially the marginalized.

These examples demonstrate only some of the many human rights and legal challenges posed by automated decision-making identified by the “Bots at the Gate” report. Automated decision-making implicates several international human rights obligations under treaties such as the International Covenant on Civil and Political Rights and the International Convention on the Elimination of All Forms of Racial Discrimination. 

It can also infringe an individual's Charter rights to freedom of association and freedom of movement, their quasi-constitutional right to privacy, and their administrative law rights to procedural fairness, to be heard, and to a fair, impartial, and independent decision-maker. Additionally, automated decision-making raises issues of access to justice, confidence in the legal system, and private sector accountability. These legal issues are among the many highlighted in the report.

The report concludes with an extensive list of recommendations for the federal government. These include, among other things, greater transparency in the procurement and deployment of automated decision-making systems, a binding, government-wide directive that substantively addresses the legal issues these systems raise, and an independent oversight body. The recommendations balance the government's desire to streamline its decision-making processes with its human rights and other legal obligations. After the report was released, the IHRP and Citizen Lab met with government officials in Ottawa to present it.

In the two years since the report was published, both the technology itself and governments' use of automated decision-making at borders have advanced significantly. For example, the CBSA has been testing AVATAR, an emotion-sensing system similar to iBorderCtrl, to enhance Canada's border security.

Molnar has been following these developments closely. In addition to her leadership role with the IHRP, she serves as a Mozilla Fellow with European Digital Rights, an association of civil and human rights organisations across Europe. In a series of informative pieces, she discusses the pervasiveness of migration control technologies around the world. In November 2019, she participated in the United Nations Global Pulse meeting in Geneva. Her ongoing research and advocacy builds on the work begun with the "Bots at the Gate" report.

Since the outbreak of COVID-19, governments have been deploying technology that drastically infringes on human rights. Israel, Iran, and China, among other countries, have used cell phone tracking data and facial recognition software to track the virus's spread, at a huge cost to individual privacy. Work at the intersection of technology and human rights remains as relevant as ever.
