To:
The Honourable Marco E. L. Mendicino, P.C., M.P., Minister of Public Safety
The Honourable François-Philippe Champagne, P.C., M.P., Minister of Innovation, Science and Industry
Joël Lightbound, Chair of the Standing Committee on Industry and Technology
CC:
The Honourable Bill Blair, Minister of Emergency Preparedness
The Honourable Pierre Poilievre, P.C., M.P., Leader of the Opposition
Yves-François Blanchet, M.P., Bloc Québécois Leader
Jagmeet Singh, M.P., NDP Leader
Elizabeth May, M.P., Green Party Parliamentary Leader
The Honourable Michelle Rempel Garner, P.C., M.P., Parliamentary Caucus on Emerging Technology
The Honourable Colin Deacon, Senator, Parliamentary Caucus on Emerging Technology
Members of the Standing Committee on Access to Information, Privacy and Ethics
Members of the Standing Committee on Industry and Technology
Joint Letter of Concern regarding the government’s response to the ETHI Report on Facial Recognition Technology and the Growing Power of Artificial Intelligence
Dear Ministers,
We are writing to express our concerns with the government’s response to the recent publication of the Standing Committee on Access to Information, Privacy and Ethics (ETHI) report, “Facial Recognition Technology and the Growing Power of Artificial Intelligence.” After careful review, we find that it fails to address the severity of the challenges caused by facial recognition technology (FRT) and artificial intelligence (AI). Canada needs to take action now.
The ETHI Committee’s study confirmed that Canada’s current legislation does not adequately regulate facial recognition technology and artificial intelligence. While discussions concerning FRT often focus on security and surveillance, the report demonstrates how FRT and AI systems are increasingly being adopted across many Canadian sectors, including retail, e-commerce, and healthcare – quickly becoming ubiquitous in daily life.
Critically, these technologies threaten many human rights, equity principles, and fundamental freedoms, including the right to privacy, freedom of association, freedom of assembly, freedom of expression, and the right to non-discrimination. These harms are caused not only by the real-time use of FRT, but also by FRT’s connection to broader surveillance and AI-driven systems, such as its use in populating biometric databases and training AI algorithms. Without a robust legislative framework to govern this invasive technology, there is a pervasive and increasing risk of individual, collective, and social harm.
The ETHI Committee’s recommendations are generally strong and represent a meaningful step towards the responsible governance of FRT and AI in Canada. Through a participatory approach, the Committee listened to feedback and advice from a range of witnesses, whose recommendations were reflected in the report.
Significantly, the ETHI Committee acknowledges the disproportionate implications FRT has for historically racialized and marginalized communities, particularly because of biased and inaccurate algorithms. Consequently, the Committee calls on the government to invest in studying and disclosing such impacts. This is an important recommendation. However, even if increased accuracy is achieved within FRT applications, and technical bias is resolved, we note that the technology still raises serious concerns. If left unregulated, the use of more accurate facial recognition technologies will become even more detrimental to groups that already experience systemic discrimination. The use of FRT would further exacerbate inequalities through more precise targeting of those who are already disproportionately surveilled, such as unhoused communities, sex workers, and individuals who receive income assistance, among others.
The ETHI Committee’s report also considers some of the ways in which FRT could benefit society. This Coalition is of the view that even if there are positive uses for this technology, proper regulatory safeguards are necessary to ensure that any potentially socially beneficial purposes are fulfilled, and harmful uses are prohibited.
This Coalition would like to highlight what we believe are the key recommendations for government action proposed by the ETHI Committee:
- Imposing a federal moratorium on the use of facial recognition technology by federal policing services and Canadian industries until a robust regulatory framework is developed and implemented.
- Developing a regulatory framework that defines acceptable and unacceptable uses of facial recognition technology with a view to protecting individuals and communities against mass surveillance, with clear penalties for violations by police.
- Increasing transparency mechanisms for the disclosure of racial, age, and other biases that exist in FRT, and developing policy measures with participation frameworks that enable affected marginalized groups to address such issues.
- Restricting private sector entities from requiring biometric information as a condition of service.
- Amending the Privacy Act and PIPEDA to prohibit entities from capturing images of Canadians from the internet or public spaces for the purpose of populating FRT databases or AI algorithms.
- Strengthening the ability of the Privacy Commissioner to impose meaningful penalties on entities that break the law and to be engaged in regulatory reform concerning FRT.
- Increasing transparency and oversight mechanisms for the use of FRT in the context of national security and procurement.
In its reply, the government failed to address many of the key recommendations made by the ETHI Committee. The government instead relied upon nascent or outdated pieces of privacy legislation that, in their current form, are unable to address the serious risks and challenges caused by the adoption and deployment of facial recognition technologies. This Coalition would like to highlight some of the crucial areas where the government reply was inadequate.
Lack of Engagement with the Calls for a Federal Moratorium on the Use of FRT
The government has not adequately addressed the Committee’s Recommendation 18, which calls for a federal moratorium on the use of facial recognition technology by federal police services. In June 2022, in response to the Standing Committee’s Report on Systemic Racism in Policing in Canada, the Minister of Public Safety announced the government’s commitment to addressing racial bias within Canadian policing. As previously discussed, the ETHI Report acknowledges that FRT can exacerbate racial inequalities and can contribute to the over-policing of equity-deserving communities. Given the highly invasive nature of FRT and the fundamental rights at stake, Canada cannot continue to wait for legislative amendments to the Privacy Act and should enact a moratorium until adequate regulations are in place to protect against discrimination and prescribe appropriate standards for law enforcement.
Instructive lessons can be learned from other jurisdictions. In 2021, the European Union instituted a ban on FRT by law enforcement in public spaces. In 2022, the Italian government prohibited the use of FRT until adequate legislation could be passed to regulate the technology. Similarly, numerous American municipalities, including San Francisco, have banned the use of FRT by police and other public agencies. In Canada, a moratorium is possible and should be adopted.
The Federal Government Should Assume a Leadership Role in Responsible Tech Policy
In its reply, the government noted that jurisdictional issues preclude it from regulating the use of FRT by provincial police forces. However, the RCMP maintains over 700 detachments in 150 communities across Canada and provides policing services in over 600 Indigenous communities. The scope of the RCMP’s jurisdiction is significant.
Given Canada’s current patchwork of privacy legislation, the federal government should work with provincial and territorial leaders to promote the development of a comprehensive regulatory framework for FRT. The federal government can and should play a leadership role in developing and guiding responsible tech policy. Looking forward, concerted effort from the federal government is required.
The Treasury Board Directive is Limited in Scope and Lacks Transparency Mechanisms
In its response, the government relies heavily on the Treasury Board Directive on Automated Decision-Making to protect against the irresponsible use of FRT by federal departments. However, in its current form this instrument will not solve many of the issues related to the government’s use of FRT and AI. The scope of the Directive is limited: it covers only external-facing services, excluding internal ones such as automated decisions that affect federal employees. Moreover, national security systems are exempt, as are systems developed or procured before April 1, 2020.
The Directive does not promote transparency because it does not define or expand upon key reporting processes. For example, under the Directive individuals are entitled to receive a “meaningful explanation” for decisions made by an automated process employed by a federal department, but the Directive does not define or set a standard for what constitutes a meaningful explanation. Additionally, the instrument does not contain any mechanism to provide the public with a justification for the government’s decision to adopt an AI system in the first place.
Bill C-27 Will Not Protect Individuals’ Privacy Rights and Leaves Stakeholders Behind
The ETHI Committee’s report demonstrates that prompt and meaningful legislative action is required to protect the rights of individuals in Canada. Other jurisdictions have begun taking the necessary steps to ensure responsible governance of FRT, as evidenced by the European Union’s recently proposed Artificial Intelligence Act and biometric-specific legislation such as Illinois’s Biometric Information Privacy Act. In comparison, despite unified calls from Federal, Provincial, and Territorial Privacy Commissioners to establish a legal framework for FRT, Canada’s regulatory framework has remained largely stagnant.
While Bill C-27 is a necessary first step in regulating the use of artificial intelligence and updating Canada’s privacy legislation, it ultimately falls short of addressing the risks and recommendations identified in the ETHI Committee’s report. The legislation was largely developed without necessary input from key stakeholders including civil society groups, researchers, and historically marginalized communities as stated by the ETHI Committee in Recommendation 10. Bill C-27 fails to comprehensively consider the human rights implications of AI, particularly its potential for collective and diffuse harms, in addition to individual and targeted harms.
Bill C-27’s Consumer Privacy Protection Act, the new legislation to regulate industry, irresponsibly prioritizes the rights of businesses over individuals in Canada. Critically, it will permit businesses to collect and use individuals’ information without their consent for certain activities. Additionally, it is notably silent on special protections for sensitive personal information such as biometric data, including faces, fingerprints, and vocal patterns. Instead, almost all types of data are treated the same.
Similarly, Bill C-27’s Artificial Intelligence and Data Act (AIDA) is fraught with issues. It is limited in scope and leaves many important aspects of the proposed framework to future regulations, thereby undermining transparency. Key provisions, such as the definition of high-impact AI systems, have been omitted, leaving their formulation to a later date. While the AIDA companion document lists biometric information as being of “interest” to the government, biometric information, which goes to the core of individuals’ identity, has not been explicitly identified as sensitive information within the legislation itself. These details should be included in the core of the legislation rather than left, in keeping with the government’s pattern, to future regulations. AI systems developed by the private sector for use by national security agencies are also exempt from AIDA, creating a troubling exclusion for some of the highest-risk uses of these tools. Additionally, businesses have been granted far-reaching discretion to decide what lies within their own legitimate business interest, without recognition enshrined in the Act that what they are making decisions about is not an individual’s privacy “interest” but an individual’s fundamental human rights.
Facial recognition technology and artificial intelligence pose a serious risk to a range of fundamental human rights, including equity, privacy, non-discrimination, and freedom of expression. Despite the high risks associated with these technologies, absent clear legislative standards, they are increasingly being used in ways that are invasive, arbitrary, and irresponsible by law enforcement, private entities, and governments in Canada. The Canadian Government has a responsibility to ensure that individuals’ rights are not subverted with the emergence of AI-driven technologies.
We hope that the government will reconsider the recommendations made by the ETHI Committee as it seeks to amend and develop new legislation. As regulations to address the issues posed by FRT move through the legislative process, we look forward to working with you to craft a robust regulatory framework.
Sincerely,
Canadian Civil Liberties Association
BC Freedom of Information and Privacy Association
The Centre for Media, Technology and Democracy
Criminalization and Punishment Education Project
The Dais at Toronto Metropolitan University
Digital Democracies Institute
Digital Public
International Civil Liberties Monitoring Group
Privacy & Access Council of Canada
Refugee Law Lab
Tech Reset Canada
Women’s Legal Education and Action Fund
Brenda McPhail, Acting Executive Director, Master of Public Policy in Digital Society, McMaster University
Kanika Samuels-Wortley, Researcher & Professor, Toronto Metropolitan University
Christelle Tessono, Tech Policy Researcher
Alessandra Puopolo, Project Coordinator & Researcher, Canadian Civil Liberties Association
Prem Sylvester, Researcher, Digital Democracies Institute
Tim McSorley, National Coordinator, International Civil Liberties Monitoring Group
Aaron Tucker, Academic, University of Toronto
Adam Molnar, Assistant Professor, Sociology and Legal Studies, University of Waterloo
Alana Saulnier, Criminologist, Assistant Professor, Queen’s University
Dr. Kristen Thomasen, Legal Academic
Evan Light, Associate Professor, York University
Fenwick McKelvey, Concordia University
Joanna Redden, Associate Professor Western University
Joe Masoodi, The Dais, Toronto Metropolitan University
Jon Penney, Academic, Osgoode Hall Law School & Citizen Lab
Karim Benyekhlef, Ad.E., Professeur et directeur du Laboratoire de cyberjustice, Faculté de droit, Université de Montréal
Luke Stark, Assistant Professor, Faculty of Information and Media Studies, Western University
Matt Hatfield, Campaigns Director, OpenMedia
Mike Larsen, President, BC Freedom of Information and Privacy Association (FIPA)
Pam Hrick, Executive Director & General Counsel, Women’s Legal Education and Action Fund (LEAF)
Petra Molnar, Associate Director, Refugee Law Lab; Fellow at Harvard’s Berkman Klein Center for Internet and Society
Rosel Kim, Senior Staff Lawyer, Women’s Legal Education and Action Fund (LEAF)
Sam Andrey, The Dais, Toronto Metropolitan University
Sébastien Gambs, Professor at the Université du Québec à Montréal, Canada Research Chair in Privacy-preserving and Ethical Analysis of Big Data
Sharon Polsky, MAPP, President, Privacy & Access Council of Canada
Sonja Solomun, McGill University
Stéphane Leman-Langlois, Laval University
Thomas Linder, PhD, Researcher at Open North
Yuan Stevens, Human Rights Lawyer & Activist
Sent on June 21st, 2023 by the Canadian Civil Liberties Association (CCLA) on behalf of the above organizations and individuals.