It’s easy to imagine the artificial intelligence (AI) revolution as years off in the future, but it might be much closer than you think.
When Sharif Shameem posted an experiment he did with GPT-3, a closed-access artificial intelligence, to Twitter, thousands in the technology community were stunned.
‘But by ingesting terabytes and terabytes of data, it was able to understand the underlying patterns in how humans communicate,’ Shameem explained.
Multiple observers described the demo as ‘mind blowing’, while other commentators compared its potential to shape the coming decade as that of the iPhone for the 2010s.
Though there was argument about whether this would be positive or negative, there was little disagreement about the potential for GPT-3 to change the world.
OpenAI, the artificial intelligence research lab that created GPT-3, was founded in 2015 by Elon Musk and others; last year, Microsoft invested a billion dollars in the company.
The founders’ aim was to create an AI that ‘benefits humanity as a whole’, rather than one put to nefarious purposes.
When OpenAI created GPT-2, the second generation of its language model, the lab judged it so potentially dangerous, capable of pumping out fake news or flooding the internet with spam, that it was at first held back from the public.
But last month, OpenAI announced its successor, GPT-3, trained on a much larger set of data (including the Common Crawl, an archive of the web) and apparently much, much more powerful.
‘It blew me away the first time I used it,’ says Shameem.
So how does it work?
Without diving into the technical details, GPT-3 (which stands for Generative Pre-trained Transformer 3) uses a vast bank of text, most of it in English, and highly powerful computer models (called neural nets) to spot patterns and learn its own rules of how language operates (it has 175 billion parameters, internal values it tunes for itself during training).
It sounds complicated (and it is), but GPT-3’s size and flexibility make it adaptable to all sorts of tasks that involve any kind of language.
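The core idea of spotting patterns in text can be illustrated with a toy sketch. The short Python example below counts which word tends to follow which in a tiny sample sentence, then predicts the likeliest next word. GPT-3 does something vastly more sophisticated (a neural network with 175 billion parameters rather than simple counts), but the underlying principle, learning statistics over word sequences, is the same. The corpus and function names here are illustrative, not part of any real GPT-3 code.

```python
from collections import defaultdict

# Toy "language model": count which word follows which (bigram counts),
# then predict the most frequent follower of a given word.
corpus = "the cat sat on the mat and the cat slept"

def train_bigrams(text):
    counts = defaultdict(lambda: defaultdict(int))
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    followers = counts.get(word)
    if not followers:
        return None  # word never seen, or nothing ever followed it
    return max(followers, key=followers.get)

model = train_bigrams(corpus)
print(predict_next(model, "the"))  # "cat" follows "the" more often than "mat"
```

Scaled up from one sentence to terabytes of text, and from word counts to a deep neural network, this is the sense in which GPT-3 ‘understands the underlying patterns in how humans communicate’.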
‘I can see a future in which every doctor just asks GPT-3 what might be causing a certain set of symptoms in one of their patients, and it gives a reasonable response,’ says Shameem.
But, he adds, because it’s still such early days, there’s no way to tell which sorts of fields might benefit the most.
A few days after his first experiment, Shameem posted another video of something he’d been working on.
In it, he simply asks the machine, in English, for ‘the google logo, a search box, and two light grey buttons that say “Search Google” and “I’m Feeling Lucky”’.
Within seconds, GPT-3 appears to create code for what it “thinks” (GPT-3 does not think in the way a human brain does) this should look like.
When rendered, it appears to resemble something like the Google homepage from ten years ago.
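The exact code GPT-3 produced in the demo is not published, but markup of roughly this shape would render the page Shameem describes. The Python sketch below is purely hypothetical, a hand-written guess at the kind of HTML such a description might yield, not the model’s actual output:

```python
# Hypothetical sketch: HTML of this general shape would render a logo,
# a search box, and two light-grey buttons, as described in the demo.
page = """
<div style="text-align: center;">
  <img src="google-logo.png" alt="Google">
  <input type="text" name="q">
  <br>
  <button style="background: lightgrey;">Search Google</button>
  <button style="background: lightgrey;">I'm Feeling Lucky</button>
</div>
"""

with open("demo.html", "w") as f:  # open demo.html in a browser to view
    f.write(page)
```

The remarkable part of the demo is not the HTML itself, which any beginner could write, but that a plain-English sentence was enough to produce it.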
Though most of the comments expressed amazement and congratulations, there were murmurs of discontent.
What could this mean for entire swathes of industry?
Just as the power loom put thousands of weavers out of work during the industrial revolution, some worried that this could prove similarly destructive for thousands of coders.
But rather than destroy jobs, Shameem thinks this might improve jobs for coders everywhere.
‘I think it’ll lead to more productive coders,’ says Shameem.
‘With every level of abstraction that programming has brought us, it only increased the number of potential coders, because it decreased the skill level required to become a productive programmer.
‘I think it’s actually going to result in far more people becoming programmers.’
There are many valid concerns about powerful technologies being put to bad use, such as “deepfake” images undermining trust, or a flood of fake reviews powered by an AI like GPT-3.
However, Shameem remains characteristically optimistic.
‘With every tool there are dangers, but I think the pros outweigh the cons,’ he says.
‘I think GPT-3 is inherently a net positive for humanity, and that we’re better off having it than not.’