Call for an International Ban on the Weaponization of Artificial Intelligence

#BanKillerAI

Members of the Artificial Intelligence research community exhort the Prime Minister of Canada to join the international call to ban lethal autonomous weapons that remove meaningful human control from the deployment of lethal force.

An open letter authored by five Canadian experts in artificial intelligence research urges the Prime Minister to urgently address the challenge of lethal autonomous weapons (often called “killer robots”) and to take a leading position against Autonomous Weapon Systems on the international stage at the upcoming UN meetings in Geneva.

The authors of the letter are:

  • Ian Kerr, Canada Research Chair in Ethics, Law and Technology, University of Ottawa,
  • Yoshua Bengio, Canada Research Chair in Statistical Learning Algorithms, Université de Montréal,
  • Geoffrey Hinton, Engineering Fellow, Google and Chief Scientific Advisor, The Vector Institute,
  • Rich Sutton, AITF Chair in Reinforcement Learning and Artificial Intelligence, University of Alberta,
  • Doina Precup, Canada Research Chair in Machine Learning, McGill University.

 


Open Letter to the Prime Minister of Canada

November 2, 2017

The Right Honourable Justin Trudeau, P.C., M.P.
Prime Minister of Canada
Langevin Block, 80 Wellington Street
Ottawa, Ontario
K1A 0A2

 

Dear Prime Minister Trudeau:


RE: AN INTERNATIONAL BAN ON THE WEAPONIZATION OF AI
 

As members of the Canadian AI research community, we wish to thank you for your interest in the broad field of artificial intelligence and the remarkable investment that Canada is making in AI research and innovation. 

As you know, AI research—the attempt to build machines that can perform intelligent tasks—has made spectacular advances during the last decade. The evolution of classical AI, bolstered by rapid advances in machine learning, has revived the ambitions of the AI community to build machines that can carry out complex operations with or without human oversight or intervention. Proliferating applications already underpin a growing variety of products for consumers, for the improvement of infrastructure, transportation, education, health, the arts, the military, medicine, and for businesses. AI is of transformative significance. The transformations—actual and potential—demand our understanding and, increasingly, our heightened moral attention.

It is for these reasons that Canada’s AI research community is calling on you and your government to make Canada the 20th country in the world to take a firm global stand against weaponizing AI. Lethal autonomous weapons systems that remove meaningful human control from determining the legitimacy of targets and deploying lethal force sit on the wrong side of a clear moral line. To this end, we ask Canada to announce its support for the call to ban lethal autonomous weapons systems at the upcoming United Nations Conference on the Convention on Certain Conventional Weapons (CCW). Canada should also commit to working with other states to conclude a new international agreement that achieves this objective. By doing so, our government can reclaim its position of moral leadership on the world stage as demonstrated previously by the Ottawa Treaty—the international ban on landmines initiated in 1996 by our then Minister of Foreign Affairs, Lloyd Axworthy, who was originally appointed to the federal Cabinet by your father.

We warmly welcome the decision of CCW to establish a Group of Governmental Experts (GGE) on Autonomous Weapon Systems. Many members of our research community are eager to lend their expertise to the Government of Canada in this regard. As many of the world’s top AI and robotics corporations—including Canadian companies—have recently urged, autonomous weapon systems threaten to become the third revolution in warfare. If developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. The deadly consequence of this is that machines—not people—will determine who lives and dies. Canada’s AI community does not condone such uses of AI. We want to study, create and promote its beneficial uses.

Canada’s continued leadership in technology and innovation will secure our reputation as an international leader in the development of AI only if that leadership also takes into account the broader legal, ethical and social implications. We therefore urge you to take a strong and leading position against Autonomous Weapon Systems on the international stage at the upcoming November 2017 CCW meetings at the United Nations.


Sincerely,
 

Ian Kerr
Canada Research Chair in Ethics, Law and Technology, University of Ottawa

Yoshua Bengio
Canada Research Chair in Statistical Learning Algorithms, Université de Montréal

Geoffrey Hinton 
Engineering Fellow, Google and Chief Scientific Advisor, The Vector Institute

Rich Sutton
AITF Chair in Reinforcement Learning and Artificial Intelligence, University of Alberta

Doina Precup
Canada Research Chair in Machine Learning, McGill University

 

cc: Hon. Navdeep Bains, Minister of Innovation, Science and Economic Development
Hon. Chrystia Freeland, Minister of Foreign Affairs
Hon. Harjit S. Sajjan, Minister of National Defence
Hon. Kirsty Duncan, Minister of Science
Dr. Mona Nemer, Chief Science Advisor

 


Signatories

The letter has gathered more than 650 signatories.


Media Kit

Canada’s AI research community is calling on the government to make Canada the 20th country in the world to take a global stand against weaponizing AI—banning any AI systems that would remove meaningful human control in determining the legitimacy of targets and deploying lethal force.

 

The media kit includes background information, bios and quotes from the letter’s authors, and contacts for media commentary.

 


Background

 

  • AI research and development—the attempt to build machines that can perform intelligent tasks—has made spectacular advances during the last decade.
  • Prime Minister Trudeau’s government has shown great interest in the broad field of artificial intelligence and has made a remarkable $125 million investment in AI research and innovation.
  • Bolstered by rapid advances in machine learning, machines are now able to carry out many complex operations with or without human oversight or intervention.
  • Applications already underpin a growing variety of products for consumers, for the improvement of infrastructure, transportation, education, health, the arts, the military, medicine, and for businesses.
  • Recent years have witnessed a rapid increase in the development of lethal autonomous weapons systems that remove meaningful human control from determining the legitimacy of targets.
  • Many of the world’s top AI and robotics corporations—including Canadian companies—view autonomous weapon systems as a serious threat. If developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend.
  • The United Nations Convention on Certain Conventional Weapons (CCW) was established to ban or restrict the use of specific types of weapons that are considered to cause unnecessary or unjustifiable suffering to combatants or to affect civilians indiscriminately.
  • The CCW has established a Group of Governmental Experts (GGE) on Autonomous Weapon Systems, and will host a conference for this group in Geneva, Switzerland from November 13-17, 2017.
  • Canada previously demonstrated moral leadership on the world stage with the Ottawa Treaty—an international ban on landmines initiated in 1996 by our then Minister of Foreign Affairs, Lloyd Axworthy.
  • Canada’s AI research community is calling on Prime Minister Trudeau to re-assert Canadian moral leadership on the world stage by working with other states to conclude a new international agreement that bans lethal autonomous weapons.

 


Authors

 

Ian Kerr
Canada Research Chair in Ethics, Law and Technology, University of Ottawa

Ian Kerr is a Full Professor and holds the Canada Research Chair in Ethics, Law & Technology at the University of Ottawa, Faculty of Law, with cross appointments to the Faculty of Medicine, Department of Philosophy and School of Information Studies. He is a pioneer in the burgeoning field of AI and Robotics Law and Policy and a global leader in the field of privacy. His ongoing privacy work focuses on the interplay between emerging public and private sector surveillance technologies, civil liberties and human rights. His recent work, including his new book, Robot Law, studies the delegation of human tasks and decision making to machines with the aim of developing frameworks for the governance of robotics and artificial intelligence.

 

Yoshua Bengio
Canada Research Chair in Statistical Learning Algorithms, Université de Montréal

Yoshua Bengio is Full Professor in the Department of Computer Science and Operations Research at the Université de Montréal, and head of the Montreal Institute for Learning Algorithms (MILA). He is also the Program co-director of the CIFAR program on Learning in Machines and Brains and the Canada Research Chair in Statistical Learning Algorithms. His main research ambition is to understand principles of learning that yield intelligence. Yoshua Bengio is currently action editor for the Journal of Machine Learning Research, associate editor for the Neural Computation journal, editor for Foundations and Trends in Machine Learning, and has been associate editor for the Machine Learning Journal and the IEEE Transactions on Neural Networks.

 

Geoffrey Hinton
Engineering Fellow, Google and Chief Scientific Advisor, The Vector Institute

Geoffrey Hinton investigates how neural networks can be used for learning, memory, perception and symbol processing. He was one of the researchers who introduced the back-propagation algorithm that has been widely used for practical applications in deep learning. His other contributions to neural network research include Boltzmann machines, distributed representations, time-delay neural nets, mixtures of experts, Helmholtz machines and products of experts. His current main interest is in unsupervised learning procedures for neural networks with rich sensory input.

 

Rich Sutton
AITF Chair in Reinforcement Learning and Artificial Intelligence, University of Alberta

Richard Sutton's work focuses on augmenting our understanding of what it means to be intelligent through computational models of learning. He seeks to identify the general computing principles that underlie intelligence and goal-directed behaviour in order to improve the performance and reliability of learning systems. He explores ways of representing human knowledge in terms of experience, seeking to reduce the dependence on manually encoded knowledge.

 

Doina Precup
Canada Research Chair in Machine Learning, McGill University

Doina Precup's research interests lie mainly in the field of machine learning. She is especially interested in the learning problems that face a decision-maker interacting with a complex, uncertain environment. Doina uses the framework of reinforcement learning to tackle such problems. Her current research is focused on developing better knowledge representation methods for reinforcement learning agents. She is also more broadly interested in reasoning under uncertainty, and in the applications of machine learning techniques to real-world problems.

 


Quotes

 

Ian Kerr:

“Although engaged citizens sign petitions every day, it is not often that captains of industry, scientists and technologists call for prohibitions on innovation of any sort — let alone an outright ban. The ban is an important signifier. The Canadian AI research community is clear: we must not permit AI to target or kill without meaningful human control.”

“Delegating life-or-death decisions to machines crosses a fundamental moral line — no matter which side builds or uses them. Playing Russian roulette with the lives of others can never be justified merely on the basis of efficacy. This is not only a fundamental issue of human rights. The decision whether to ban or engage autonomous weapons goes to the core of our humanity.”

 

Yoshua Bengio:  

“Leading in AI also means acting responsibly about it.”

 

Geoff Hinton:

“Artificial Intelligence can improve people’s lives in so many ways, but researchers need to push for positive applications of technology by supporting a ban on autonomous weapons systems.”

 

Rich Sutton:  

“AI technology is not inherently either good or bad. It is up to us to see that it is used wisely.”

 

Doina Precup:  

“AI has tremendous potential to be a force for good in society. As part of the AI research community, I think it is our moral obligation to ensure it continues to develop in this direction, and prevent it from being mis-appropriated for harm.”

 


Media Contacts

Primary:

  • Ian Kerr
    Canada Research Chair in Ethics, Law and Technology
    University of Ottawa

    613-562-5800 ext. 3281
    Ian.Kerr@uOttawa.ca

 

For technical questions:

  • Yoshua Bengio
    Canada Research Chair in Statistical Learning Algorithms
    Université de Montréal

    514-343-6804
    Yoshua.Bengio@uMontreal.ca
  • Doina Precup
    Canada Research Chair in Machine Learning
    McGill University

    514-398-6443
    dprecup@cs.mcgill.ca
