AI leader says field's new territory is promising but risky


AI is in new territory that demands we be both "bold" and responsible about analyzing its benefits and risks before any release, the CEO and co-founder of one of the world's AI powerhouses told Axios. "Every time."

Why it matters: Demis Hassabis helms DeepMind, a leading AI lab that advanced techniques underpinning much of the field's recent progress, including ChatGPT and the other generative AI tools that are saturating headlines.

The backstory: DeepMind — co-founded by Hassabis in 2010 and acquired by what was then Google in 2014 — is inspired by Hassabis' neuroscience background and aims to understand human intelligence in order to build more intelligent machines.

  • DeepMind first focused on building computer programs to play strategy games — an approach that some experts say could lead to more intelligent machines (and others argue may not).
  • The company's AlphaGo beat one of the world's top players at the ancient board game Go in 2016, years before researchers thought it was possible. Late last year, its DeepNash program learned to play Stratego, a complex game that involves bluffing and deception.

But Hassabis' "longstanding passion and motivation for doing AI" was to one day be able to "build learning systems that are able to help scientists accelerate scientific discovery," he told Axios.

  • In 2021, DeepMind reported that a version of its AlphaFold program could predict the 3D structures of some 350,000 proteins — information that is key to designing medicines and understanding disease but tedious and time-consuming to obtain with traditional methods.
  • AlphaFold is the "poster child for us of what can be done using AI to accelerate science," Hassabis says. The company is aiming its algorithms at other scientific challenges, like controlling the plasma fuel inside nuclear fusion reactors.

A version of AlphaFold is open source, which is a key way AI can bring "digital speed" to science, he says.

Yes, but: "It's not the case that open sourcing is a panacea," Hassabis says.

  • There's been a strong ethos in the AI community that its tools should be open source and freely available — look no further than the name of DeepMind's rival, OpenAI.
  • AI developers have to be "bold and responsible," he says. Decisions about when and how a system is released should involve using the scientific method to understand it beforehand, he adds. (DeepMind has an internal ethics review board.)

The big picture: The "deep learning" techniques used by DeepMind and others have given rise to generative AI — algorithms that can be prompted to make predictions, create images or write text. The field's best-known exemplar is OpenAI's ChatGPT, which scans massive amounts of text to learn patterns between words and then responds to text prompts from users.

  • DeepMind has its own chatbot called Sparrow, which Hassabis first told Time magazine will be released more widely sometime this year.
  • It's not just OpenAI and DeepMind: There's Claude (from Anthropic, which was started by former OpenAI employees), Bard (from Google), Ernie Bot (from Baidu) and more.
  • "It's not like a magical sort of technology," Hassabis says. Today's generative AIs are "pretty interesting and very useful" but are still "somewhat toy demonstrations" that aren't "fully formed yet."

Echoing other AI researchers, he warns they are "far from perfect."

  • ChatGPT, Microsoft's Bing chatbot, Meta's Galactica (a generative AI designed to help scientists with tasks like annotating proteins or writing code) and other systems have been taken down or reined in after generating unreliable or incorrect information, or spiraling into emotional-seeming responses and even threats.
  • Snapchat rolled out its AI-powered chatbot, My AI, this week with a warning: "Please be aware of its many deficiencies and sorry in advance!"

Mitigations could be put "in place beforehand rather than just fixing them after the fact," says Hassabis. DeepMind is developing Sparrow to be able to cite its references.

  • But pressure is mounting to keep up as companies rush to incorporate generative AI tools into their products — potentially putting safety second to speed.
  • After ChatGPT was released, Google scrambled to put out its own tools. It had them but held back, and some questioned whether that was out of fear of undermining its own search engine. Hassabis says "of course, you have to respond to the current dynamics." But he says it's not a question of fear: "it's about being responsible."

What's next: For Hassabis, the algorithms at the center of the latest AI frenzy are just one part of the larger goal of creating a general AI on par with human intelligence.

  • "It's not the wrong direction, but it's not the whole solution either," he says. "They're missing things, really big things, like planning, reasoning, memory."
  • "Generative AI is the fashion and what everyone is talking about, but if you've been in AI for a long time, you know that fashions come and go," Hassabis says, adding it could be combined with other learning techniques to advance the quest for a more general AI.
  • A general AI could ultimately have the impact of something like electricity, he says.
  • "Someone famously asked Michael Faraday, 'what use is electricity?" And he said, 'What use is a newborn baby?' I kind of feel like we're at that moment now where Faraday did his famous Royal Institution  demonstrations for the general public… maybe these large chatbots are kind of like that."
