Shaping global AI regulation to build a secure and resilient society
The emergence and rapid development of artificial intelligence (AI) has forced domestic and international policymakers to question whether existing normative frameworks still apply. The AI + Regulation stream examines the regulation of AI and automation around the world to help Canadian policymakers and other actors better respond to the evolution of technology. Research in this stream critically examines copyright, trade secrets, privacy law, and data governance with a view to deploying AI systems that are inclusive and less biased. Access to more information and more data is needed to better understand society and to include more diverse voices, opinions, and realities.
This research is conducted in collaboration with our global research network, as the ubiquity of the global digital economy calls for comparative studies to design adequate frameworks. Because most regulation is made at the national level and thereby reflects local values, conflict is bound to arise where states approach their policies and regulatory frameworks with competing interests and divergent societal values in mind.
The research in this stream aims to develop tools and frameworks to tackle these issues. Our initiative promotes collaborative work to share best practices across jurisdictions while making current legislative AI tools more accessible.
Canadian Law and Policy for Artificial Intelligence
Artificial intelligence (AI) creates significant new challenges for law and public policy. It is poised to transform the economy, the nature of work, entire fields of human endeavour such as medicine and engineering, and the nature of government and commercial decision-making. Many of these transformations are already underway, with technology advancing more quickly than we seem equipped to regulate it. Although there has been relatively little AI-specific litigation or legislation in Canada, or elsewhere for that matter, the rapid advancement of these technologies requires us to interrogate whether our existing legal frameworks apply and how they may need to adapt to this fundamentally disruptive technology. The project aims to address current legal uncertainties and to point to the questions yet to be resolved by the courts and the laws yet to be enacted by legislators.
Regulating AI Ethics
AI, blockchain, and other algorithmic systems are rapidly transforming various sectors of society, and policymakers and legislators are dramatically outpaced. Current discourse indicates that the law needs to adapt to respond to 21st-century challenges. Some seem convinced that the absence of sufficiently specific black-letter laws and corresponding regulations is rendering governments impotent against powerful corporate interests, leaving critical design choices to corporate actors. This research project analyzes various initiatives to regulate AI and to incorporate human rights and ethical values at every stage of the development and deployment of AI. Beyond the usual black-letter law, the project explores other approaches to regulating AI, such as rules of professional conduct, norms, and standards. The project is mapping ethical frameworks to govern three important automated technologies: machine learning algorithms, virtual assistants, and blockchain. On this basis, it will propose policy recommendations and frameworks to support ethical innovation.
Automated and Predictive Decision-Making
The Government of Canada Directive on Automated Decision-Making and its accompanying Algorithmic Impact Assessment tools are vital documents that are already having an impact nationally and globally, in both the public and private sectors. Using these documents as a departure point and grounding the work in concepts of administrative and natural justice, this project examines in detail issues of fairness in algorithmic decision-making. How well do our concepts of fairness translate to contexts of automated decisions? What adaptations are necessary? Are there gaps that existing principles are ill-adapted to address?
Copyright and AI
The intersection of copyright and artificial intelligence has social, economic, and cultural implications. Several avenues are currently being explored, including ensuring that Canadian policy is competitive with that of other countries and does not create barriers to AI innovation, the question of AI and creativity, and the bias concerns that arise from reliance on “low risk” data sets for machine learning purposes.
Data Governance for Data Sharing
AI depends on access to vast quantities of data, which has led to demands for greater access to data resources. Data sharing is facilitated by a wide range of mechanisms, from free or paid access through APIs to data trusts, data commons, open data portals, and research data infrastructure. Data governance for data sharing becomes much more complex when data contain personal information. For such datasets, de-identification techniques, license terms, and safe sharing sites are among the tools available, although each has its limitations. Data governance for data sharing in Canada may also require changes to law and policy to facilitate these practices and to provide needed protections for privacy and ethical reuse.
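The limits of de-identification mentioned above can be illustrated with a toy sketch. This is not any specific tool or standard; the field names and the generalization rule are hypothetical, chosen only to show why suppressing and generalizing identifiers reduces, but does not eliminate, re-identification risk:

```python
# Illustrative sketch only: a naive de-identification pass over a record.
# Field names ("name", "email", "postal_code") are hypothetical examples.

def deidentify(record, direct_ids=("name", "email"), quasi_ids=("postal_code",)):
    """Drop direct identifiers and coarsen quasi-identifiers."""
    out = {}
    for key, value in record.items():
        if key in direct_ids:
            continue  # suppress direct identifiers entirely
        elif key in quasi_ids:
            # Generalize: keep only the first three characters (e.g. the
            # forward sortation area of a Canadian postal code).
            out[key] = str(value)[:3]
        else:
            out[key] = value
    return out

record = {"name": "Jane Doe", "email": "jane@example.com",
          "postal_code": "K1N 6N5", "age": 34}
print(deidentify(record))  # {'postal_code': 'K1N', 'age': 34}
```

Even after this pass, remaining quasi-identifiers (an age plus a coarse region) can still single out individuals in small populations, which is one reason the text treats de-identification as a tool with limitations rather than a complete safeguard.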
AI and Justice
AI has been used to automate parts of bail, sentencing, and parole decisions in the criminal justice system. Critics of such tools have raised credible human rights concerns about their use. In particular, studies have demonstrated that, in some cases, “predictive” algorithms operate in discriminatory and disproportionately punitive ways against racialized communities. Similar tools are now being considered for use in Canada. It is essential that we develop a robust framework for assessing how such tools can be appropriately used here, if at all. This project will develop such a framework. The research will also look at how human rights codes and other legal tools can be used to promote greater diversity and inclusion in AI systems.
- Cristiano Therrien, Scotiabank Postdoctoral Fellow on AI and Regulation
- Courtney Doagoo, AI + Society Fellow
- Karni Chagal-Feferkorn, Scotiabank Postdoctoral Fellow on AI and Regulation (2020-2022)
- Book: Artificial Intelligence and the Law in Canada (13 March 2021)
- Report: AI Regulation in the World. A Quarterly Update (October-December 2020) (January 2021)
- Event Brief: The consequences of experimenting with AI and emerging technologies on migrant communities at the border. (13 January 2020)
- Event Brief: Challenges and concerns about the use of facial recognition by police forces. (15 March 2021)
- AI at the Border: A Conversation on Migration Management from the Ground Up (12 December 2020)
- Entretiens Jacques Cartier - Forum on the Regulation of Artificial Intelligence (22 November 2020)
- Algorithmic Policing and the Canadian Charter (8 October 2020)
- The Unimportance of Being Unintended: Digital Platform Harms and Reasonable Foreseeability (3 June 2020)
- Privacy Challenges with COVID-19 Contact-Tracing Apps (11 June 2020)
- Do the Next Billion Users Need More Innovation? (18 June 2020)
- Towards a Law of Artificial Intelligence? (14 January 2021)
- AI + Surveillance: The Case of Facial Recognition Used by Police (11 February 2021)
- The Death of the AI Author (29 March 2021)
- AI + Health: Thinking About Health Policy in the Digital Age (21 March 2021)
- When They Hear Us: Race, Criminal Reform, and the Democratizing Potential of Algorithms (14 April 2021)
- Regulating Artificial Intelligence: Recent Developments in Canada and the EU (27 May 2021)
- AI & Justice: The Semantic Representation of Law (18 June 2021)
- I’m Not Responsible, I’m Just An Algorithm: Locating Tort Liability for Algorithm-Driven Harms (8 February 2022)
- Karni Chagal-Feferkorn, “When AI Systems Are Negligent” in Leading Legal Disruption: Artificial Intelligence and a Toolkit for Lawyers and the Law (Thomson Reuters, 2021)
- Karni Chagal-Feferkorn, How Can I Tell if My Algorithm Was Reasonable?, (2021) 27 Mich. Tech. L. Rev. 213
- Karni Chagal-Feferkorn, Data Science Meets Law: Learning Responsible AI Together, (2022) Commun. ACM 65, 2
- Teresa Scassa, Pamela Robinson & Ryan Mosoff, The Datafication of Wastewater: Legal, Ethical and Civic Considerations, TechReg 2022, 23-35
- Florian Martin-Bariteau & Teresa Scassa, Artificial Intelligence and the Law in Canada (LexisNexis Canada, 2021)
- Florian Martin-Bariteau & Marina Pavlović, “AI and Contract Law” in Florian Martin-Bariteau & Teresa Scassa, Artificial Intelligence and the Law in Canada (LexisNexis Canada, 2021).
- Teresa Scassa, “AI and Data Protection Law” in Florian Martin-Bariteau & Teresa Scassa, Artificial Intelligence and the Law in Canada (LexisNexis Canada, 2021).
- Colleen M. Flood & Catherine Régis, “AI and Health Law” in Florian Martin-Bariteau & Teresa Scassa, Artificial Intelligence and the Law in Canada (LexisNexis Canada, 2021).
- Jane Bailey, Valerie Steeves, Jacquelyn Burkell, Chandell Gosse & Suzie Dunn, “AI and Technology-Facilitated Violence and Abuse” in Florian Martin-Bariteau & Teresa Scassa, Artificial Intelligence and the Law in Canada (LexisNexis Canada, 2021).
- Amy Salyzyn, “AI and Legal Ethics” in Florian Martin-Bariteau & Teresa Scassa, Artificial Intelligence and the Law in Canada (LexisNexis Canada, 2021).
- Michael Geist, “AI and International Regulation” in Florian Martin-Bariteau & Teresa Scassa, Artificial Intelligence and the Law in Canada (LexisNexis Canada, 2021).
This research stream is supported by the Scotiabank Fund for the AI + Society Initiative, the Canada Research Chairs Program, the University of Ottawa Research Chair Program, and the Social Sciences and Humanities Research Council of Canada.