AI ‘deepfakes’ tipped to destroy legal system and ‘incite conspiracy theorists’

Images created using artificial intelligence could “incite conspiracy theorists,” top law experts have warned.

Over the last few months, social media has been filled with deepfake images created by artificial intelligence.

From images of the Pope wearing a puffer jacket to Donald Trump being dragged out of his home in handcuffs by police, it is becoming harder than ever to discern fact from fiction.


And now experts are warning that these images, made using AI programmes, are going to destroy the legal system, especially in live court cases.

Burkhard Schafer, professor of computational legal theory at Edinburgh Law School, told the Daily Express: “Self-appointed, often anonymous Internet sleuths going over clips and images from (a) trial, but taken out of their context, and then 'refuted' with technically sounding but mostly idiotic analysis, often ignoring that the image they look at will be a copy of a copy of a copy, often having changed format, and not the one the jury saw.

“This I see increasingly often, and it can have the effect of undermining trust in the justice system and the law.

“While much of the public debate has focussed on the danger that deep fakes could introduce misleading evidence, the much greater danger in my view is ‘general deniability’ even of truthful digital images or clips.

“This plays into the hands of powerful figures who may wish to feed conspiracy theories and deflect blame away from themselves.”


The UK Government set up an AI taskforce last year, with an eye on protecting the country's legal system.

However, Matt Clifford, Rishi Sunak's key adviser on technology, warned that artificial intelligence could “kill many humans” in the next two years.

Speaking to TalkTV, he said: “You can have really very dangerous threats to humans that could kill many humans, not all humans, simply from where we’d expect models to be in two years’ time.

“There are lots of different types of risks with AI and often in the industry we talk about near-term and long-term risks, and the near-term risks are actually pretty scary.

“You can use AI today to create new recipes for bio weapons or to launch large-scale cyber attacks. These are bad things.”

