In a “watershed” moment, the US Air Force used a computer programme as the co-pilot aboard a U-2 spy plane. The AI algorithm, known as ARTUµ, flew a U-2 Dragon Lady and performed specific in-flight tasks that would otherwise be done by the pilot. Dr Will Roper of the US Air Force described it as a “giant leap for computerkind” in future military operations, adding that “algorithmic warfare has begun”.
But Elon Musk may not be as enthusiastic.
The technology used during the test was developed by the British AI research company DeepMind – the creator of AlphaGo, a computer programme designed to play the board game Go.
It was adapted by the U-2 Federal Laboratory for military use.
Mr Musk was an early investor in DeepMind, which was later acquired by Google, but not because he wanted to turn a profit.
He previously said: “I like to just keep an eye on what’s going on with artificial intelligence.
“I think there is potentially a dangerous outcome there.
“There have been movies about this, you know, like Terminator.
“There are some scary outcomes. And we should try to make sure the outcomes are good, not bad.”
And years later, his fears appeared not to have eased.
At the South by Southwest tech conference in Austin, Texas, in March 2018, the presenter stated: “A lot of experts don’t share the same level of concern you do [about AI].”
Mr Musk responded: “Famous last words.”
He added: “The biggest issue with so-called AI experts is that they think they know more than they do.
“They think they are smarter than they are.
“In general, we are all much smarter than we think we are, but much dumber than we think we are.
“So this tends to plague smart people, they define themselves by their intelligence and they don’t like the idea that a machine could be way smarter than them.”
He then made a direct reference to the technology being developed by DeepMind.
He added: “I’m really quite close to the cutting edge in AI and it scares the hell out of me.
“It’s capable of vastly more than almost anyone knows and the rate of improvement is exponential.
“You can see it through things like AlphaGo, which went from, in the span of six to nine months, being unable to beat even a reasonably good player, to then beating the European world champion.
“Then it beat the current world champion, then beat everyone simultaneously.
“Then there’s AlphaZero which crushed AlphaGo 100 to zero and that can play any game you give it.”
And it appears Mr Musk still feels the same way.
Speaking in July, he said DeepMind is his “top concern” when it comes to AI.
He added that his fears surrounded “the nature of the AI that they’re building” as it is “one that crushes all humans at all games”.
Mr Musk co-founded the OpenAI research lab in San Francisco in 2015, one year after Google acquired DeepMind.
OpenAI says its mission is to ensure AI benefits all of humanity.
In February 2018, Mr Musk left the OpenAI board, but he continues to donate to and advise the organisation.
Mr Musk and the co-founders of DeepMind have signed a pledge to not develop lethal autonomous weapons, along with thousands of other AI researchers, engineers, scientists, and entrepreneurs.
The pledge, organised by the Future of Life Institute (FLI), a Boston-based research organisation, was published at the 2018 International Joint Conference on Artificial Intelligence (IJCAI) in Stockholm.
It reads: “We the undersigned agree that the decision to take a human life should never be delegated to a machine.
“Lethal autonomous weapons, selecting and engaging targets without human intervention, would be dangerously destabilising for every country and individual.”
By signing the pledge, the signatories have promised to “neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons”.
In its recent flight, the AI algorithm successfully co-piloted the spy plane in a training mission above California, but the pilot on board still had overall control.
The AI system was “responsible for sensor employment and tactical navigation”.
The US Air Force explained: “Together, they flew a reconnaissance mission during a simulated missile strike.
“ARTUµ’s primary responsibility was finding enemy launchers while the pilot was on the lookout for threatening aircraft, both sharing the U-2’s radar.”
Dr Roper, who serves as the Assistant Secretary of the Air Force for Acquisition, Technology and Logistics, said that the mission was a demonstration of “how completely our military must embrace AI to maintain the battlefield decision advantage”.
Earlier this year, the US Department of Defense announced plans to adopt ethical principles in order to lay the foundation for artificial intelligence to be used in warfare.
The principles called for “appropriate levels of judgement and care” when deploying AI systems, while also making them “traceable” and “governable”.
Arms control advocates warned that more needed to be done to prevent AI from making “life-or-death decisions” on the battlefield, calling for stronger restrictions on the technology.
Lucy Suchman, an anthropologist who specialises in AI in warfare, added: “I worry that the principles are a bit of an ethics-washing project.
“The word ‘appropriate’ is open to a lot of interpretations.”
An open letter from leading AI and robotics researchers in 2015 warned that “a global arms race is virtually inevitable” if major military powers continue to push ahead with AI weapon development.
The prospect of building machines as smart as humans concerns scientists and entrepreneurs such as Mr Musk.
But last year, AI pioneer Yoshua Bengio told the BBC: “We are very far from super-intelligent AI systems and there may even be fundamental obstacles to get much beyond human intelligence.”