AI system capable of writing fake news is released by two students

AI system capable of writing fake news that was deemed ‘too dangerous’ to release has been made public by two graduate students

  • Students shared code for the full version of AI software held back from the public 
  • The OpenAI project was concerned that it was ‘too dangerous’ to release openly
  • The pair of students say they aren’t hoping to cause chaos by releasing the code
  • They want to show that this software is achievable without billions in funding 

An artificial intelligence project capable of writing fake news that was deemed ‘too dangerous’ to release to the public has been recreated by two university students.

OpenAI, a project founded with the support of Elon Musk, built software able to generate news stories from a headline or first line of text.

In February, the firm released a limited version of its software for other developers to use, to explore its potential. 

The firm, which Musk is no longer involved in, has since released an updated version of the software with roughly half the capacity of the full model. 

Now, computer science master’s students Aaron Gokaslan and Vanya Cohen from Brown University have shared code for what they say is the full version.


The pair say they aren’t hoping to cause chaos by releasing the code, but want to show that creating this kind of software is achievable without the resources of someone like Elon Musk.

They used free cloud computing time provided by Google to academic institutions to complete the project.

Speaking to Wired, Mr Cohen said: ‘This allows everyone to have an important conversation about security, and researchers to help secure against future potential abuses. 

‘I’ve gotten scores of messages, and most of them have been like, “Way to go”.’

The software, called GPT-2, was trained using eight million web pages, and adapts the style and content of what it produces in line with your input.

Far from the dystopian ‘fake news’ generator its creators warned of, the text it generates is disjointed and clearly not the work of a talented author.

When given the headline ‘Donald Trump declares he should be president for life’, the latest OpenAI version gave the following output: ‘The announcement, made on Twitter on Tuesday night, came at a moment when the Republican billionaire is still a long shot to be elected president.

‘But if he wins the presidency, Trump has promised “major tax cuts and massive infrastructure spending.” And this is likely to be a top priority for his administration.

‘So what would Trump’s tax plan, unveiled during his campaign, look like?

‘“There’s a little more detail than he’s got, but he’s got a lot of information there,” said Matt Kibbe, a spokesman for the House GOP’s campaign arm. “There are a lot of details that are going to be released in the next few weeks.”’
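The underlying behaviour is straightforward to try: the model simply predicts a plausible continuation of whatever text it is given. Below is a minimal sketch of that kind of prompting, assuming the open-source Hugging Face ‘transformers’ library and the small, publicly released GPT-2 model; neither is named in the article, and the students’ own code may differ.

```python
# A minimal sketch of GPT-2-style prompting. Assumption: this uses the
# Hugging Face "transformers" library and the small public GPT-2 model,
# not the students' own release of the full version.
from transformers import pipeline, set_seed

# Load the small, publicly released GPT-2 model.
generator = pipeline("text-generation", model="gpt2")
set_seed(42)  # fix the seed so the sampled output is repeatable

# Feed the model a headline; it writes a continuation in a matching style.
prompt = "Donald Trump declares he should be president for life"
outputs = generator(prompt, max_length=100, num_return_sequences=1)

print(outputs[0]["generated_text"])
```

Because the continuation is sampled, each run produces different text unless the random seed is fixed, which is why no two ‘articles’ it writes come out quite the same.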


However, experts say that we should still remain cautious about the development of such software.

Speaking to the BBC, Dave Coplin, founder of AI consultancy the Envisioners, said: ‘Once the initial – and understandable – concern dies down, what is left is a fundamentally crucial debate for our society, which is about how we need to think about a world where the line between human-generated content and computer-generated content becomes increasingly hard to differentiate.’

OpenAI is a group founded by Elon Musk and backed by Silicon Valley heavyweights, including LinkedIn’s Reid Hoffman. 

Musk has famously been an outspoken critic of AI, calling it the biggest existential threat to humankind and warning that ‘we could create an immortal dictator from which we would never escape’.

OpenAI’s researchers said at the time: ‘Due to our concerns about malicious applications of the technology, we are not releasing the trained model. 

‘As an experiment in responsible disclosure, we are instead releasing a much smaller model for researchers to experiment with. 

‘We’re not at a stage yet where we’re saying, this is a danger. We’re trying to make people aware of these issues and start a conversation.’

A TIMELINE OF ELON MUSK’S COMMENTS ON AI

Musk has been a long-standing, and very vocal, critic of AI technology, repeatedly warning of the precautions humans should take. 

Elon Musk is one of the most prominent names and faces in developing technologies. 

The billionaire entrepreneur heads up SpaceX, Tesla and The Boring Company. 

But while he is at the forefront of creating AI technologies, he is also acutely aware of their dangers. 

Here is a comprehensive timeline of Musk’s premonitions, thoughts and warnings about AI so far.   

August 2014 – ‘We need to be super careful with AI. Potentially more dangerous than nukes.’ 

October 2014 – ‘I think we should be very careful about artificial intelligence. If I were to guess like what our biggest existential threat is, it’s probably that. So we need to be very careful with the artificial intelligence.’

October 2014 – ‘With artificial intelligence we are summoning the demon.’ 

June 2016 – ‘The benign situation with ultra-intelligent AI is that we would be so far below in intelligence we’d be like a pet, or a house cat.’

July 2017 – ‘I think AI is something that is risky at the civilisation level, not merely at the individual risk level, and that’s why it really demands a lot of safety research.’ 

July 2017 – ‘I have exposure to the very most cutting-edge AI and I think people should be really concerned about it.’

July 2017 – ‘I keep sounding the alarm bell but until people see robots going down the street killing people, they don’t know how to react because it seems so ethereal.’

August 2017 –  ‘If you’re not concerned about AI safety, you should be. Vastly more risk than North Korea.’

November 2017 – ‘Maybe there’s a five to 10 percent chance of success [of making AI safe].’

March 2018 – ‘AI is much more dangerous than nukes. So why do we have no regulatory oversight?’ 

April 2018 – ‘[AI is] a very important subject. It’s going to affect our lives in ways we can’t even imagine right now.’

April 2018 – ‘[We could create] an immortal dictator from which we would never escape.’ 

 
