Illustration: Allie Carl/Axios
We've hit the science-fiction moment in the debate over generative AI, where people are warning about the human-like conversational skills of ChatGPT.
- Why it matters: ChatGPT, which ate the internet so it can spit out answers to human questions, isn't sentient — it's not self-aware. But even the early, imperfect, restrained version of the tech shows how easy human-like conversations and ideas are to replicate — and abuse.
The backstory: Jim VandeHei and I have spent the past week reading everything we can get our hands on about the tech, and talking to experts who understand it best.
Our biggest takeaway: This is the most important tech breakthrough since at least the iPhone — and perhaps the internet itself.
- The ability of machines to devour billions of words written on the internet — then predict what we want to know, say and even think — is uncanny, thrilling and scary.
- You've read about tech columnists prompting creepy, human-like conversations with Sydney, the code name of Microsoft's new chat version of Bing.
Zoom out: Right now we're getting only a small glimpse of the technology's full power. Google, for instance, has been hesitant to unveil and unleash its full generative AI because of its awesome and potentially dangerous capabilities.
- Even Microsoft and OpenAI are giving only some people limited access to a not-yet-fully-formed version of ChatGPT.
What's out there: An app called Replika bills itself as the "World's best AI friend — Need a friend? Create one now." A 24/7 friend for just $5.83/month! (The app is now trying to rein in erotic roleplay.)
- A host of paid AI image generators — including Midjourney and DALL·E 2 (which, like ChatGPT, is from OpenAI) — are now available.
- Many more services are on the way.
How it works: AI isn't sentient, but it sure seems like it. Here's why:
- The tools have devoured lots and lots of what sentient beings have written — and therefore can mimic human emotions, Axios' chief tech correspondent Ina Fried explains.
- Generative AI essentially scans previous writing on the internet to predict the most likely next words — infinitely.
The best article I've seen on the mechanics of ChatGPT is by Stephen Wolfram, who has studied neural nets for 43 years.
- The gist is that it's just adding one word at a time: "[What] ChatGPT is always fundamentally trying to do is to produce a 'reasonable continuation' of whatever text it’s got so far, where by 'reasonable' we mean 'what one might expect someone to write after seeing what people have written on billions of webpages, etc.'" (Go deeper.)
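The one-word-at-a-time idea can be sketched with a toy bigram model — a drastically simplified stand-in for what systems like ChatGPT do with neural nets trained on billions of webpages. The tiny corpus, the function names, and the parameters here are all illustrative assumptions, not anything from OpenAI's actual implementation:

```python
from collections import Counter, defaultdict

# Hypothetical toy corpus standing in for "billions of webpages."
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count which word follows each word (a bigram table).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def most_likely_next(word):
    """Return the statistically most likely next word."""
    return following[word].most_common(1)[0][0]

def continue_text(start, n_words=5):
    """Generate a 'reasonable continuation' one word at a time."""
    words = [start]
    for _ in range(n_words):
        words.append(most_likely_next(words[-1]))
    return " ".join(words)

print(continue_text("the"))
```

The model never "understands" the sentence; it just keeps asking "given the last word, what usually comes next?" — the same loop Wolfram describes, except real systems condition on far more context than one word and use learned probabilities rather than raw counts.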
What we're watching: The longer the Bing sessions went on, the more open the door became for creepy responses.
- Beginning last Friday, Microsoft said, "the chat experience will be capped at 50 chat turns [a user question + Bing reply] per day and 5 chat turns per session."
The bottom line: Computer science experts are much more concerned with how ChatGPT and its brethren will spread misinformation and perpetuate bias than with the AI being sentient or even superhuman.
- Bing isn't really happy or mad or in love. But it knows really well what humans sound like when we are.
Go deeper: "ChatGPT's edge: We want to believe," by Scott Rosenberg.