Artificial intelligence warning: AI deemed ‘too dangerous’ released into the world

Experts at OpenAI, the research group co-founded by Elon Musk, feared the AI, dubbed “GPT-2”, was so powerful it could be maliciously misused by everyone from corrupt politicians to criminals. GPT-2 was designed to accurately predict the words that follow when fed a piece of text.

By doing so, the artificial intelligence can create long strings of writing eerily indistinguishable from copy created by a human.
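
In practical terms, that prediction step is applied over and over to extend a prompt. The following is a minimal sketch, assuming the released weights are loaded through the open-source Hugging Face “transformers” library (a tooling choice not mentioned in the article):

import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Load the publicly released GPT-2 weights; "gpt2" is the small,
# 124-million-parameter checkpoint the article mentions.
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "Artificial intelligence will"
inputs = tokenizer(prompt, return_tensors="pt")

# One prediction step: the model scores every possible next token.
with torch.no_grad():
    logits = model(**inputs).logits
next_id = int(logits[0, -1].argmax())
print(tokenizer.decode([next_id]))  # the single most likely next word

# Repeating that step many times yields a long continuation.
output = model.generate(**inputs, max_length=40, do_sample=True,
                        top_k=50, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(output[0]))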

But it soon became clear the AI was too good at its job.

GPT-2 is so powerful it could be used to scam people and undermine trust in anything you read.

In addition, the artificial intelligence can be abused by extremist groups to create “synthetic propaganda”.

This could allow people to automatically generate articles promoting racist propaganda or adverts inciting religious violence.

OpenAI wrote in a statement in February: “Due to our concerns about malicious applications of the technology, we are not releasing the trained model.

“As an experiment in responsible disclosure, we are instead releasing a much smaller model for researchers to experiment with, as well as a technical paper.”

At that time, the organisation released only a scaled-back version of the AI tool, featuring 124 million parameters.

But OpenAI has since released increasingly complex versions and has now made the full version available.
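
Assuming the staged checkpoints are the ones published under the Hugging Face hub’s naming convention (an assumption; the article itself names only the 124-million-parameter and full releases), the progression can be inspected by parameter count:

from transformers import GPT2LMHeadModel

# Hypothetical identifiers following the Hugging Face hub convention;
# "gpt2" is the small release, "gpt2-xl" the full ~1.5B-parameter model.
for name in ["gpt2", "gpt2-medium", "gpt2-large", "gpt2-xl"]:
    model = GPT2LMHeadModel.from_pretrained(name)
    print(name, f"{sum(p.numel() for p in model.parameters()):,} parameters")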

The full version is more convincing than the early incarnation of the AI.

OpenAI announced that this relatively “marginal” increase in credibility over earlier releases is what encouraged the researchers to make it available.

The company, which is no longer associated with SpaceX CEO Elon Musk, hopes the release will in part help the public understand how such a tool could be misused.

OpenAI believes GPT-2 will help inform debate among AI experts about how to mitigate such dangers.

Scientists warned in February that malicious actors could misuse the programme in numerous ways.

The generated text could be used to create misleading news articles, impersonate other people, or automatically produce abusive or fake content for social media.

They noted there were likely a variety of other misuses that had not even been imagined yet.

Such misuses would require the public to become more critical of the text they consume, since it could have been generated by artificial intelligence, they said.

The researchers wrote: “These findings, combined with earlier results on synthetic imagery, audio, and video, imply that technologies are reducing the cost of generating fake content and waging disinformation campaigns.

“The public at large will need to become more skeptical of text they find online, just as the ‘deep fakes’ phenomenon calls for more skepticism about images.”
