Designing ethical AI through Indigenous-centred approaches

Posted on Friday, December 18, 2020

Indigenous perspectives could shed new light on conversations about the design of ethical and inclusive artificial intelligence (AI). To amplify those perspectives, the AI + Society Initiative and CIFAR, in collaboration with the University of Ottawa Research Chair in Technology and Society, hosted a conversation on Indigenous Protocol and AI featuring Professor Jason Lewis, a leading expert in the field and University Research Chair in Computational Media and the Indigenous Future Imaginary at Concordia University.

How can Indigenous epistemologies and ontologies contribute to the global conversation regarding society and AI? How do we broaden discussions regarding the role of technology in society beyond the largely culturally homogeneous research labs and Silicon Valley startup culture? How do we imagine a future with AI that contributes to the flourishing of all humans and non-humans? These are some of the questions Prof. Lewis discussed as he reflected on the “Indigenous Protocol and Artificial Intelligence Position Paper,” published in the summer of 2020 following a series of workshops hosted in the winter and spring of 2019 with the support of CIFAR's AI and Society Program.


Key insights 

Providing a starting place for designing ethical AI through an Indigenous-centred approach, Prof. Lewis noted that the position paper is meant to reimagine a future rooted in Indigenous ways of knowing, language, and traditions. Keeping Indigenous knowledge and identity at the core of the design and development of artificial intelligence ensures that AI reflects the experiences of those who build it. Lewis captures this sentiment: "…use digital technologies to create future imaginaries from an Indigenous perspective founded in Indigenous knowledge and eventually with Indigenous created technology." The position paper explores the relationships among kinship, protocols, and AI, inspired by Lewis's earlier paper "Making Kin with Machines." 

Lewis presented a series of strategies for creating this new imaginary grounded in traditional knowledge and Indigenous identity. One is revising our sense of what is possible: Lewis notes that the pursuit of sovereignty can define the future without depending on changing the past; instead, the capabilities, knowledge, and strength of Indigenous communities will drive a new sovereign future. Another is asking our own questions: reflecting on Indigenous experiences means asking questions that are not prescribed by assumptions about Indigenous peoples. 

In his presentation, Prof. Lewis highlighted a few of the future imaginaries the co-authors of the protocol document developed, based on how each author's community and ways of knowing can be harnessed through AI. The differences among these future imaginaries depend on locality and community ways of knowing. Together, this diverse set of future imaginaries informed the position paper's guidelines for Indigenous-centred AI design. 

Prof. Lewis notes that he would like to take the learnings from the protocol document a step further by combining its strategies with the thinking behind the Anu'u Ōlelo programming language, working with computer scientists to build a Kānaka AI (an AI made from a Hawaiian understanding of how to live and be in the world). 

Prof. Lewis concluded his talk by discussing three significant provocations that he continues to grapple with as he engages with this topic, each of which has major implications for the design of AI systems:

  1. White supremacy is not a bug, it’s a feature: white supremacy shapes our understanding of the world and therefore creates biases, and those biases inevitably have a large impact on systems design. 

  2. Manifestos protect normative values and allow for the erasure of Indigenous identity: protecting and implementing manifestos such as the Montreal Declaration for Responsible AI entrenches western normative values and thereby "normalizes Indigenous erasure." 

  3. AI system builders: Prof. Lewis’s final provocation is that we are not creating the opportunity or space for engineers to become better. Engineers are often unable to explain how and why a system works (the "black box" of AI design), which removes any opportunity to create holistic, Indigenous-led and Indigenous-built designs. Extending this provocation, Prof. Lewis distinguishes between what we desire intelligence to be, which is rational and logical, and how it operates in the world, which is irrational and subject to change; systems design must translate between how we want the world to be and how the world actually functions. 


Following his presentation, Prof. Lewis engaged with the audience to explore the design elements and frameworks contributing to creating Kānaka AI. He discussed how we might define "intelligence" in artificial intelligence and what that would encompass, noting that both intelligence and the deployment of AI systems depend heavily on kinship and on how a given community would want to engage in such kinship with the AI system. Implementing a pan-Indigenous approach to technology would be a gross oversight, since elements of a community's identity dictate its behaviour toward and relationship with AI systems. Prof. Lewis concluded with a call to invest more heavily in Indigenous research. He notes that by investing in Indigenous researchers who interact with the computer scientists and others designing, developing, and deploying these systems, these future imaginaries can be further explored and become realities. 


Watch the event


Key resources to learn more


Our event summaries are provided to help amplify the conversation around the ethical, legal and societal implications of AI in a short and accessible way. We invite you to watch the video and read the additional resources for more information on this topic.

This summary was prepared by Muriam Fancy, Research Coordinator at the AI + Society Initiative. Opinions and errors are those of the authors, and not of the Initiative or the University of Ottawa.
