AI tool could spell the end of fake news

AI tool that can spot text written by a machine could spell the end of fake news, false reviews and phony social media accounts

  • The Giant Language Model Test Room was devised by Harvard University and MIT 
  • It exploits the fact that AI-generated text relies on statistical word patterns, rather than deliberately chosen words and sentences
  • The program’s designers say it can detect whether articles have been written by humans or machines

A team of U.S. researchers has developed a program that weeds out fake news.

The Giant Language Model Test Room was devised by IT experts at Harvard University and the Massachusetts Institute of Technology (MIT) in a bid to counter inauthentic journalism.

Based around predictive language models – the same technology that allows computers and bots to write copy – the system aims to turn those algorithms against machine-generated text.

According to the results of their own research, GLTR helped to improve the detection rate of forged text from 54 percent to 72 percent – meaning the days of misinformation could potentially be numbered. 

Is it real? Due to their modeling power, automated language models have the potential to generate textual output that is indistinguishable from the real thing – in other words, convincing fake news

HOW DOES IT WORK? 

The Giant Language Model Test Room enables forensic analysis of how likely it is that a text was generated by an automated system. 

GLTR makes the assumption that computer-generated text fools humans by sticking to the most likely words. 

In contrast, human writing often selects unpredictable words.

It could also help to identify fake profiles on social media platforms, such as Twitter and Facebook, which are often used to spread misinformation.

The GLTR program makes the assumption that computer-generated text uses the most likely words in a sequence.

In contrast, human writing often selects unpredictable words, which still make sense and are entirely relevant even if unexpected in the rhythm of a sentence or clause.  

GLTR searches for these patterns over a sixty-word window, highlighting the most predictable sequences of words as suspect.
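As a rough illustration of the approach – not the team’s actual code – the Python sketch below ranks each word of a passage by how predictable a language model found it. It assumes the freely available Hugging Face transformers library and the public GPT-2 model; the helper name token_ranks, and the use of the full passage as context rather than a sixty-word window, are illustrative assumptions.

    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    # Load a small public language model to score predictability.
    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    def token_ranks(text: str) -> list[tuple[str, int]]:
        """For each token, return its rank in the model's predicted
        distribution given the preceding words (rank 0 = most likely)."""
        ids = tokenizer(text, return_tensors="pt").input_ids[0]
        with torch.no_grad():
            logits = model(ids.unsqueeze(0)).logits[0]  # (seq_len, vocab)
        ranks = []
        for pos in range(1, len(ids)):
            # The logits at position pos-1 are the model's prediction
            # for the token that actually appears at position pos.
            order = torch.argsort(logits[pos - 1], descending=True)
            rank = int((order == ids[pos]).nonzero())
            ranks.append((tokenizer.decode([int(ids[pos])]), rank))
        return ranks

Text written by a machine will tend to produce a run of very low ranks (the model keeps guessing the next word correctly), while human writing produces frequent high ranks.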

These results are colour-coded for ease of use.

Words that are statistically more predictable – and thus more likely to have been generated by a computer – are highlighted in green. More spontaneous word combinations are shown in yellow, purple and red.

Therefore, authentic copy should have a balanced combination of yellows, reds and purples. But suspicious text would be mostly green with flecks of yellow.  
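Continuing the sketch above, the colour assignment itself reduces to simple rank buckets. The cut-offs below (top 10, top 100, top 1,000) mirror those used in GLTR’s public demo, but should be read as illustrative assumptions rather than confirmed internals.

    # Map a token's predictability rank to the colour scheme described
    # above; the thresholds are illustrative assumptions.
    def colour(rank: int) -> str:
        if rank < 10:
            return "green"    # very predictable: reads as machine-like
        if rank < 100:
            return "yellow"
        if rank < 1000:
            return "red"
        return "purple"       # highly surprising: reads as human-like

    for token, rank in token_ranks("The suspect article goes here."):
        print(f"{token!r}: rank {rank} -> {colour(rank)}")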


‘Obviously, GLTR is not perfect,’ said its creators. 

‘Its main limitation is its limited scale. It won’t be able to automatically detect large-scale abuse, only individual cases. 

‘Moreover, it requires at least an advanced knowledge of the language to know whether an uncommon word does make sense at a position. 

‘However, we speculate that it can spark the development of similar ideas that work at greater scale.’ 

HALF OF CURRENT JOBS WILL BE LOST TO AI WITHIN 15 YEARS


Half of current jobs will be taken over by AI within 15 years, one of China’s leading AI experts has warned.

Kai-Fu Lee, the author of bestselling book AI Superpowers: China, Silicon Valley, and the New World Order, told Dailymail.com the world of employment was facing a crisis ‘akin to that faced by farmers during the industrial revolution.’

‘People aren’t really fully aware of the effect AI will have on their jobs,’ he said.

Lee, a venture capitalist in China who once headed up Google in the region, has over 30 years of experience in AI.

He is set to reiterate his views in a Scott Pelley report about AI on the next edition of 60 Minutes, Sunday, Jan. 13 at 7 p.m., ET/PT on CBS. 

He believes it is imperative to ‘warn people there is displacement coming, and to tell them how they can start retraining.’

Luckily, he said all is not lost for humanity.

‘AI is powerful and adaptable, but it can’t do everything that humans do.’ 

Lee believes AI cannot create, conceptualize or carry out complex strategic planning, nor undertake complex work that requires precise hand-eye coordination.

He also says it is poor at dealing with unknown and unstructured spaces.

Crucially, he says AI cannot interact with humans ‘exactly like humans’, with empathy, human-human connection, and compassion.
