Shaping global AI regulation to build a secure and resilient society.

The emergence and rapid development of artificial intelligence (AI) have forced domestic and international policymakers to question whether existing normative frameworks still apply. The AI + Regulation stream focuses on the regulation of AI and automation around the world to inform Canadian policymakers and actors so they can better respond to the evolution of technology. The research conducted in this stream critically examines copyright, trade secrets, privacy law, and data governance to ensure the deployment of inclusive and less biased AI systems. Broader access to information and data is needed to better understand society and to include more diverse voices, opinions, and realities.

This research is conducted in collaboration with our global research network, as the ubiquity of the global and digital economy calls for comparative studies to design adequate frameworks. Because most regulations are made at the national level and reflect local values, conflict is bound to arise where states approach their policies and regulatory frameworks with competing interests and societal values in mind, and those states may not share the same core societal values.

The research in this stream aims to develop tools and frameworks to tackle these issues. Our initiative promotes collaborative work to share best practices across jurisdictions while making current legislative AI tools more accessible.

Projects

Canadian Law and Policy for Artificial Intelligence

AI creates significant new challenges for law and public policy. It is poised to transform the economy, the nature of work, entire fields of human endeavour such as medicine and engineering, and the nature of government and commercial decision-making. Many of these transformations are already underway, with technology advancing more quickly than we seem equipped to regulate it. Although there has been relatively little AI-specific litigation or legislation in Canada, or elsewhere for that matter, the rapid advancement of these technologies requires us to ask whether our existing legal frameworks apply and how they may need to adapt to this fundamentally disruptive technology. The project aims to address current legal uncertainties and to point to the questions yet to be resolved by the courts and the laws yet to be enacted by legislators.

Regulating AI Ethics

AI, blockchain, and other algorithmic structures are rapidly transforming various sectors of society, and policymakers and legislators are being dramatically outpaced. Current discourse indicates that the law needs to adapt to respond to 21st-century challenges. Some seem convinced that the absence of sufficiently specific black letter laws and corresponding regulations is rendering governments impotent against powerful corporate interests, leaving critical design choices in the hands of corporate actors. This research project analyzes various initiatives to regulate AI and to incorporate human rights and ethical values at every stage of the development and deployment of AI. Beyond the usual black letter law, the project explores other approaches to regulating AI, such as rules of professional conduct, norms, and standards. The project is mapping ethical frameworks to govern three important automated technologies: machine learning algorithms, virtual assistants, and blockchain. It will propose policy recommendations and frameworks to support ethical innovation.

Automated and Predictive Decision-Making

The Government of Canada Directive on Automated Decision-Making and its accompanying Algorithmic Impact Assessment tools are vital documents that are already having an impact nationally and globally, in both the public and private sectors. Using these documents as a departure point and grounding the work in concepts of administrative and natural justice, this project examines in detail issues of fairness in algorithmic decision-making. How well do our concepts of fairness translate to contexts of automated decisions? What adaptations are necessary? Are there gaps that existing principles are ill-adapted to address?

Copyright and AI

The intersection of copyright and artificial intelligence carries social, economic, and cultural implications. Several avenues are currently being explored, including ensuring that Canadian policy is competitive with other countries and does not create barriers to AI innovation, the question of AI and creativity, and the bias concerns that arise from relying on “low risk” data sets for machine learning purposes.

Data Governance for Data Sharing

AI depends on access to vast quantities of data, which has led to demands for greater access to data resources. Data sharing is facilitated by a wide range of mechanisms, from free or paid access through APIs to data trusts, data commons, open data portals, and research data infrastructure. Data governance for data sharing becomes much more complex when data contains personal information. For such datasets, de-identification techniques, license terms, and safe sharing sites are among the tools available, although each has its limitations. Data governance for data sharing in Canada may also require changes to law and policy to facilitate these practices and to provide needed protections for privacy and ethical reuse.

AI and Justice

AI has been used to automate parts of bail, sentencing, and parole decisions in the criminal justice system. Critics of such tools have raised credible human rights concerns about their use. In particular, studies have demonstrated that, in some cases, “predictive” algorithms operate in discriminatory and disproportionately punitive ways against racialized communities. Similar tools are now being considered for use in Canada, and it is essential that we develop a robust framework for assessing how such tools can be appropriately used here, if at all. This project will develop such a framework. The research will also look at how human rights codes and other legal tools can be used to leverage more diversity and inclusion in AI systems.

Team

Stream leads

Florian Martin-Bariteau

Teresa Scassa

Faculty

Amy Salyzyn

Jane Bailey

Michael Geist

Affiliates

Alumni

Outputs

Publications

Conversations

Scholarship

Funding

This research stream is supported by the Scotiabank Fund for the AI + Society Initiative, the Canada Research Chairs Program, the University of Ottawa Research Chair Program, and the Social Sciences and Humanities Research Council of Canada.