AI apocalypse: Ex-Google worker fears ‘killer robots’ could cause ‘mass atrocities’

The next generation of autonomous weapons, dubbed “killer robots”, could cause mass atrocities or even start a war, a former Google Project Maven engineer has warned. Laura Nolan left Google last year in protest at being assigned to Project Maven, a programme to improve US military drone technology. She said killer robots not controlled by humans should be prohibited under the same kind of treaty that bans chemical weapons, and has called for a ban on all AI killing machines not operated by humans.

Killer robots are distinct from conventional military drones, which are typically remotely controlled by operators thousands of miles from where the flying weapon is deployed.

The machine doesn’t have the discernment or common sense that the human touch has

Laura Nolan

Ms Nolan told The Guardian that autonomous weapons have the potential to do “calamitous things that they were not originally programmed for.”

She said: “The likelihood of a disaster is in proportion to how many of these machines will be in a particular area at once.

“What you are looking at are possible atrocities and unlawful killings even under laws of warfare, especially if hundreds or thousands of these machines are deployed.

“There could be large-scale accidents because these things will start to behave in unexpected ways.

“Which is why any advanced weapons systems should be subject to meaningful human control, otherwise they have to be banned because they are far too unpredictable and dangerous.”

There is no suggestion Silicon Valley tech giant Google is involved in the development of AI weapons systems.

A UN panel of AI experts, which debated autonomous weapons last month, found Google to be abstaining from weaponising artificial intelligence and engaging in best practice.

Ms Nolan, who had worked at Google for four years and become one of its top software engineers in Ireland, was recruited to Project Maven in 2017.

She said she became “increasingly ethically concerned” over her role in the Maven programme, which was set up to accelerate the US Department of Defense’s drone video recognition technology.

Ms Nolan’s team was tasked with helping to build a system in which AI could distinguish between people and inanimate objects in drone footage far faster than human analysts parsing through countless hours of video.

Google allowed the Project Maven contract to lapse in March this year after more than 3,000 of its employees protested against the company’s involvement.

Ms Nolan added: “As a site reliability engineer my expertise at Google was to ensure that our systems and infrastructures were kept running, and this is what I was supposed to help Maven with.

“Although I was not directly involved in speeding up the video footage recognition I realised that I was still part of the kill chain; that this would ultimately lead to more people being targeted and killed by the US military in places like Afghanistan.”

The former Google engineer believes that autonomous weapons currently in development pose a far greater risk to humanity than remote-controlled drones.

She outlined how external forces ranging from changing weather systems to machines being unable to work out complex human behaviour might throw killer robots off course, with potentially fatal consequences.

She told The Guardian: “You could have a scenario where autonomous weapons that have been sent out to do a job confront unexpected radar signals in an area they are searching; there could be weather that was not factored into its software or they come across a group of armed men who appear to be insurgent enemies but in fact are out with guns hunting for food.

“The machine doesn’t have the discernment or common sense that the human touch has.

“The other scary thing about these autonomous war systems is you can only really test them by deploying them in a real combat zone.

“Maybe that’s happening with the Russians at present in Syria, who knows?

“What we do know is that at the UN, Russia has opposed any treaty, let alone a ban, on these weapons, by the way.

“If you are testing a machine that is making its own decisions about the world around it then it has to be in real time.

“Besides, how do you train a system that runs solely on software how to detect subtle human behaviour or discern the difference between hunters and insurgents?

“How does the killing machine out there on its own flying about distinguish between the 18-year-old combatant and the 18-year-old who is hunting for rabbits?”
