
Media contact

Neil Martin
n.martin@unsw.edu.au

Lethal autonomous weapons need to be added to the UN's Convention on Certain Conventional Weapons, the open-ended treaty regulating new forms of weaponry.

That is the view of Scientia Professor Toby Walsh, chief scientist at UNSW's AI Institute, in discussion as part of UNSW's 'Engineering the Future' podcast series.

The rules of war, widely accepted under the Geneva Convention first established in 1864, dictate what can and cannot be done during armed conflict and aim to curb the most brutal aspects of war by setting limits on the weapons and tactics that can be employed.


Chemical and biological weapons have been banned for use in conflict since 1925, following the horrors of the First World War, and Prof. Walsh says AI-powered autonomous weapons should now also be prohibited.

The UNSW academic has been banned from Russia for questioning that country's claims to have developed an AI-powered anti-personnel landmine that was supposedly more humanitarian.

In addition to his concerns about the morality of such weapons, Prof. Walsh says other autonomous weapons that are starting to be used in the Ukraine conflict should be banned.

"AI is transforming all aspects of our life and so, not surprisingly, it's starting to transform warfare. I'm pretty sure historians will look back at the Ukrainian conflict and say how drones and autonomy and AI started to transform the way we fought war – and not in a good way," he says.

"I'm very concerned that we will completely change the character of war if we hand over the killing to machines.

"From a legal perspective, it violates international humanitarian law – in particular, various principles like distinction and proportionality. We can't build machines that can make those sorts of subtle distinctions.

"Law is about holding people accountable. But you notice I said the word 'people'. Only people are held accountable. You can't hold machines accountable."

Scientia Professor Toby Walsh, chief scientist at UNSW's Artificial Intelligence Institute. Image from UNSW

Prof. Walsh says that in the fog of war, the use of non-human-controlled weaponry is far from ideal.

"The battlefield is a contested, adversarial setting where people are trying to fool you and you have no control over a lot of things that are going on. So it's the worst possible place to put a robot," he says.

"And then the moral perspective is actually perhaps the most important and strongest argument against AI in warfare.

"War is sanctioned because it's one person's life against another. The fact that the other person may show empathy to you, that there is some dignity between soldiers: those features do not exist when you hand over the killing to machines that don't have empathy, don't have consciousness, and can't be held accountable for their decisions.

"I'm quite hopeful that we will, at some point, decide that autonomous weapons should also be added to the list of terrible ways to fight war, like chemical weapons and biological weapons. What worries me is that in most cases, we've only regulated various technologies for fighting after we've seen the horrors of them being misused in battle."

Responsible AI

Joining Prof. Walsh on the 'Engineering the Future of AI' podcast was Stela Solar, director of the National Artificial Intelligence Centre hosted by CSIRO's Data61, as they discussed the fascinating potential uses of AI in a wide variety of areas such as education, health and transportation.

Solar is involved in the Responsible AI Network, a world-first cross-ecosystem collaboration aimed at uplifting the practice of responsible AI across Australia's commercial sector.

Stela Solar, director of Australia's National AI Centre hosted by CSIRO. Image from Stela Solar

And she agrees it is important that the ever-accelerating development of AI is done in the right way.

"There is a need for us to really understand that AI is a tool that we're deciding how we use. So whether that's for positive impact or for negative consequences, it is very much about the human accountability of how we use the technology," she says.

"AI is only as good as we lead it, and that is why the area of responsible AI is so important right now.

"There is a need for governance of AI systems that we're just discovering. AI systems are generally more agile: they are continually updated, continually changing. And so we're just discovering what those governance models look like in order to ensure responsible use of AI tools and technologies.

"It's also one of the reasons why we've established the Responsible AI Network, to help more of Australia's industry take on some of those best practices for implementing AI responsibly."

* Professor Toby Walsh and Stela Solar were in conversation as part of the 'Engineering the Future' podcast series.