Meta’s AI systems are actually ‘quite stupid’, Nick Clegg claims – and talk of AI posing a threat to humanity has ‘run ahead of the technology’

  • Nick Clegg, President of Global Affairs at Meta, called Meta’s AI ‘quite stupid’
  • He added that the hype about AI has ‘somewhat run ahead of the technology’ 

He’s the President of Global Affairs for Meta, but it appears that Sir Nick Clegg is unimpressed with the tech giant’s AI systems. 

In fact, the former Liberal Democrat leader and deputy prime minister went so far as to call Meta’s AI systems ‘quite stupid’.

Speaking on BBC Radio 4’s Today programme on Wednesday, Clegg said: ‘My view is that the hype has somewhat run ahead of the technology.

‘I think a lot of the existential warnings relate to models that don’t currently exist, so-called super-intelligent, super-powerful AI models – the vision where AI develops an autonomy and agency on its own, where it can think for itself and reproduce itself.

‘The models that we’re open-sourcing are far, far, far short of that. In fact, in many ways they’re quite stupid.’

Clegg said concerns around ‘open-source’ models, which are made freely available and can be modified by the public, were exaggerated, and the technology could offer solutions to problems such as hate speech.

It comes as Meta said that it was opening access to its new large language model, Llama 2, which will be free for research and commercial use.

Generative AI tools such as ChatGPT, a chatbot that can provide detailed prose responses and engage in human-like conversations, have become widely used in the public domain in the last year.

Clegg said a claim by Dame Wendy Hall, co-chair of the Government’s AI Review, that Meta’s model could not be regulated and was akin to ‘giving people a template to build a nuclear bomb’ was ‘complete hyperbole’. 

He added: ‘It’s not as if we’re at a T-junction where firms can choose to open source or not. Models are being open-sourced all the time already.’

Meta has had 350 people ‘stress-testing’ its models over several months to check for potential issues, according to Clegg, who added that Llama 2 was safer than any other large language model currently available on the internet.

Meta has previously faced questions around security and trust, with the company fined 1.2 billion euros (£1 billion) in May over the transfer of data from European users to US servers.

Clegg’s claims come as more than 1,300 experts signed an open letter saying AI is a ‘force for good, not a threat to humanity.’ 

The letter was organised by BCS, the Chartered Institute for IT, as a way to counter ‘AI doom.’

Richard Carter, an AI startup founder who signed the letter, called fears over AI ‘far-fetched.’

Speaking to the BBC, he said: ‘Frankly, this notion that AI is an existential threat to humanity is too far-fetched. We’re just not in any kind of a position where that’s even feasible.’

However, not all experts agree. 

Back in May, some of the biggest names in technology joined forces to warn that AI could spell the end of humanity. 

Signatories include dozens of academics, senior bosses at companies including Google DeepMind, the co-founder of Skype, and Sam Altman, chief executive of ChatGPT-maker OpenAI. 

Another signatory is Geoffrey Hinton, sometimes nicknamed the ‘Godfather of AI’, who recently resigned from his job at Google, saying that ‘bad actors’ will use new AI technologies to harm others and that the tools he helped to create could spell the end of humanity.

The short statement says: ‘Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.’

WILL YOUR JOB BE TAKEN BY A ROBOT? PHYSICAL JOBS ARE AT THE GREATEST RISK

Physical jobs in predictable environments, including machine-operators and fast-food workers, are the most likely to be replaced by robots.

Management consultancy firm McKinsey, based in New York, focused on the amount of jobs that would be lost to automation, and what professions were most at risk.

The report said collecting and processing data are two other categories of activities that increasingly can be done better and faster with machines. 

This could displace large amounts of labour – for instance, in mortgages, paralegal work, accounting, and back-office transaction processing.

Conversely, jobs in unpredictable environments are least at risk.

The report added: ‘Occupations such as gardeners, plumbers, or providers of child- and eldercare – will also generally see less automation by 2030, because they are technically difficult to automate and often command relatively lower wages, which makes automation a less attractive business proposition.’
