The artificial intelligence tool Generative Pre-trained Transformer 3 (GPT-3) has stunned experts with its remarkable ability to design websites, prescribe medication and even answer questions. GPT-3 is the third generation of the machine learning model, in which computers learn automatically from data rather than being explicitly programmed.
The AI’s predecessor, GPT-2, made headlines after being dubbed “too dangerous to release” due to its ability to create text apparently indistinguishable from that written by human beings.
We need to perform experimentation to find out what they can and can’t do.
OpenAI’s Jack Clark
While GPT-2 had 1.5 billion adjustable parameters, its successor has 175 billion.
A parameter is a value the model learns during training; together, the parameters determine how the tool weighs its input data, and changing them changes the output it produces.
At the time GPT-2 was deemed “too dangerous” to release, only a smaller version of the model, with 124 million parameters, had been made public.
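The role of a parameter can be illustrated with a toy model (a hypothetical sketch for illustration only, not OpenAI's code): even a single learned weight determines what the model outputs.

```python
# Toy illustration (not OpenAI's code): a "model" with one parameter.
# The parameter w is learned from example data; changing it changes
# every prediction the model makes.

def predict(x, w):
    """A minimal model with a single parameter w."""
    return w * x

# Fit w to example inputs and targets by least squares.
xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]
w = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

print(w)                 # learned parameter: 2.0
print(predict(5.0, w))   # prediction using that parameter: 10.0
```

GPT-3 works on the same principle, except that it adjusts 175 billion such weights instead of one.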
GPT-3 is currently in closed-access, with demonstrations of the AI’s incredible ability being shared on social media.
Coder Sharif Shameem has shown how a user can describe a layout in plain English and have the AI generate the corresponding website design, despite it never being explicitly trained to do so.
Designer Jordan Singer created a similar process for app designs, while Qasim Munye, a medical student at King’s College London, showed how the program can draw on stored information to answer medical questions.
Given an incomplete image, the cutting-edge artificial intelligence can also be used to ‘auto-complete’ it.
The AI does so by predicting which pixels ‘should’ appear next, based on the patterns it learned from its training data.
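The idea of completing an image pixel by pixel can be sketched with a toy example (a simplified illustration, not the real model, which uses a transformer network): learn which pixel value tends to follow which, then extend an incomplete row one prediction at a time.

```python
# Hypothetical sketch of auto-completion on a 1-D "image".
# We simply count which pixel value most often follows each
# previous value in the training rows, then predict greedily.
from collections import Counter, defaultdict

training_rows = [
    [0, 1, 0, 1, 0, 1],
    [1, 0, 1, 0, 1, 0],
]

# Learn: for each pixel value, which value usually comes next?
follows = defaultdict(Counter)
for row in training_rows:
    for prev, nxt in zip(row, row[1:]):
        follows[prev][nxt] += 1

def autocomplete(prefix, length):
    """Extend an incomplete row by repeatedly predicting the likeliest next pixel."""
    row = list(prefix)
    while len(row) < length:
        row.append(follows[row[-1]].most_common(1)[0][0])
    return row

print(autocomplete([0, 1], 6))  # [0, 1, 0, 1, 0, 1]
```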
The reason that GPT-3 can demonstrate such capabilities is because it has been trained on an archive of the internet called the Common Crawl, containing almost one trillion words of data.
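At heart, that training boils down to next-word prediction over an enormous body of text. A toy version of the same objective (a hedged sketch on a few sentences rather than Common Crawl's near-trillion words) looks like this:

```python
# Toy next-word predictor: count which word follows which in a tiny
# corpus, then predict the most frequent continuation. GPT-3 pursues
# the same objective at vastly greater scale and sophistication.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def next_word(word):
    """Return the word most likely to follow `word` in the training text."""
    return bigrams[word].most_common(1)[0][0]

print(next_word("sat"))  # "on"
```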
The tool comes from OpenAI, an artificial intelligence research lab split into two sections: a for-profit corporation called OpenAI LP, and its non-profit parent organisation OpenAI Inc.
Last month, the product was made commercially available, but work remains to investigate how the tool should be used.
Jack Clark, OpenAI’s head of policy, said: “We need to perform experimentation to find out what they can and can’t do.
“If you can’t anticipate all the abilities of a model, you have to prod it to see what it can do. There are many more people than us who are better at thinking what it can do maliciously.”
The achievements are undeniably impressive, with some going so far as to suggest the tool will be a threat to entire industries, or even that it is showing self-awareness.
However, OpenAI’s CEO Sam Altman has described the “hype” as “way too much”.
He said: “It’s impressive (thanks for the nice compliments!) but it still has serious weaknesses and sometimes makes very silly mistakes.
“AI is going to change the world, but GPT-3 is just a very early glimpse. We have a lot still to figure out.”
Moreover, questions have been raised regarding exactly what achievements are made by GPT-3.
Kevin Lacker, a computer scientist who formerly worked at Facebook and Google, showed that while the artificial intelligence can respond to ‘common sense’ questions, it fails at answers that would be obvious to a human, and it responds to ‘nonsense’ questions as if they were meaningful.