Designing ethical and inclusive AI systems
for the benefit of Canada and the world
The AI + Inclusion stream promotes a research agenda around developing effective, inclusive, and participatory ethical design and engineering frameworks for AI systems, notably to avoid the disturbing risk that AI will amplify global digital injustices for women, youth, seniors, Indigenous Peoples, LGBTQIA2S+ people, racialized people, people with disabilities, and linguistic minorities (such as speakers of French and Indigenous languages), as well as those at the intersection of these identities. The research also considers specific concerns regarding people in the North and remote communities, as well as developing nations.
While promising important benefits, unchecked AI development introduces significant challenges, from uncertainty surrounding the future of work to shifts in power toward new structures outside the control of existing and understood governance and accountability frameworks. These challenges are particularly problematic because they can have disproportionate impacts on marginalized populations and, as a result, amplify existing injustices.
This stream aims to pioneer best practices while proposing tools and frameworks for the ethical engineering of AI systems. We notably investigate strategies to define effective ethical requirements for AI, implement those requirements, and critically verify that they have been effectively implemented.
Ethical AI Requirements: From Conception to Validation
Various industry groups have developed, and continue to develop, principles for the ethical development of AI. However, far fewer have demonstrated how to effectively turn those principles into concrete engineering and design requirements for AI, implement those requirements into actual AI systems, and validate their implementation. In order to move towards a future of fair, inclusive, and safe AI systems, we need to address this critical gap. This research leverages our expertise and leadership in value-based design to develop and validate effective methodologies, frameworks and tools for ethical AI requirements engineering practices.
Youth and AI
Young Canadians are heavily engaged in digital communications, further exposing them to potentially discriminatory impacts of AI and algorithmic sorting. Like other Canadians, however, young Canadians have limited knowledge and understanding of AI, how it functions, and what its long-term implications will be for their lives (e.g., family, academic, financial, health and well-being, professional, and other aspects). For this reason, it is essential to ensure that young Canadians and those who work to support them (including parents, teachers, and community organizers) have access to information in accessible language, common terminology, and informative and engaging educational and outreach materials relating to AI. To address this need, we will produce open-source multimedia resources such as videos, lesson plans, train-the-trainer sessions, web-based content, and hard-copy materials (for communities that require them), translated into multiple languages, with the mission of ensuring that young Canadians, their families, teachers, and other supporting adults have the information they need to navigate a digital environment increasingly dominated by AI.
Social, Ethical and Legal Requirements and Governance Structure for AI-Powered COVID-19 Contact-Tracing Apps
In the early days of the COVID-19 pandemic, debates and discussions began to emerge internationally and in Canada around the use of contact-tracing apps to support manual contact tracing, and notably the use of AI to enhance the functionality of contact-tracing apps. We are creating a database of emerging scholarly literature, government reports, and news media stories on contact tracing/exposure notification apps to support our ongoing research to better understand the impact of the global adoption of such apps. We also design algorithmic impact assessments for these apps, as well as governance recommendations and engineering design requirements.
- Courtney Doagoo, AI + Society Fellow
- Global Pandemic App Watch (7 October 2020)
- Event Brief: Designing ethical AI through Indigenous-centred approaches (18 December 2020)
- Event Brief: The consequences of experimenting with AI and emerging technologies on migrant communities at the border (13 January 2021)
- Report: Digital Ethics in Times of Crisis: COVID-19 and Access to Education and Learning Spaces (8 March 2021)
- The Future is Reaching for Us: Indigenous Protocol and AI (30 December 2020)
- Identity Manipulation: Responding to Advances in Artificial Intelligence and Robotics (28 May 2020)
- Artificial Intelligence, Data, and Inequality in Egypt (25 June 2020)
- Privacy Challenges with COVID-19 Contact-Tracing Apps (11 June 2020)
- Do the Next Billion Users Need More Innovation? (18 June 2020)
- AI at the Border: A Conversation on Migration Management from the Ground Up (12 December 2020)
- Coded Bias (3 March 2021)
- AI and “Equality by Design” (24 November 2021)
- #tresdancing: A virtual film launch (22 February 2022)
- Short film: #tresdancing, on AI, surveillance and educational technologies (24 February 2022)
- Exhibition: Calibrating Stretched Transparency, a multidisciplinary conversation on AI and climate change (19 November 2021)
- Exhibition: I’m Honoured To Serve, an artistic critique of digital assistants and the exploitation of users (15 June 2022)
- Jane Bailey, Valerie Steeves, Jacquelyn Burkell, Chandell Gosse & Suzie Dunn, “AI and Technology-Facilitated Violence and Abuse” in Florian Martin-Bariteau & Teresa Scassa, eds., Artificial Intelligence and the Law in Canada (LexisNexis Canada, 2021).
This research stream is supported by the Scotiabank Fund for the AI + Society Initiative, the Canada Research Chairs Program, and the Social Sciences and Humanities Research Council of Canada.