Artificial Intelligence created 40,000 new lethal chemical weapon compounds within six hours of being given the task by scientists
- Researchers flipped a ‘bad’ switch on an AI model designed to find disease cures
- This was for a conference exploring the negative implications of new technology
- The model assesses the toxicity of different chemicals to check they are safe for humans
- They reversed this, having it find the most toxic chemicals and create new compounds that were surprisingly similar to current nerve agents
An artificial intelligence model was able to create 40,000 chemical weapons compounds in just six hours after being given the task by researchers.
A team of scientists were using AI to look for compounds that could be used to cure disease, and part of this involved filtering out any that could kill a human.
As part of a conference on potentially negative implications of new technology, biotech startup Collaborations Pharmaceuticals, from Raleigh, North Carolina, ‘flipped a switch’ in its AI algorithm, and had it find the most lethal compounds.
The team wanted to see just how quickly and easily an artificial intelligence algorithm could be abused, if it were set on a negative, rather than positive task.
Once in ‘bad mode’, the AI was able to invent tens of thousands of new chemical compounds, many of which resembled the most dangerous nerve agents in use today, according to a report by The Verge.
Among the compounds invented by the AI were some similar to VX, an extremely toxic nerve agent that can cause muscle twitching in even tiny doses.
The researchers said one of the scariest aspects of their discovery was how easy it was to take a widely available dataset of toxic chemicals and use AI to design chemical weapons similar to the most dangerous in existence today.
A team of scientists were using AI to look for compounds that could be used to cure disease, but decided to ‘set it to evil mode’ and have it look for bio-weapons. Stock image
Creating a compound as powerful as VX was a shock to the researchers, as even a tiny drop of this chemical can cause a human to twitch.
A large enough dose can lead to convulsions and stop a person from breathing, and the new compound created by the AI could have a similar effect, the team predicts.
Fabio Urbina, lead author of the paper, said they have a lot of datasets of molecules that have been tested to see if they are toxic or not.
‘In particular, the one that we focus on here is VX. It is an inhibitor of what’s known as acetylcholinesterase,’ he told The Verge.
‘Whenever you do anything muscle-related, your neurons use acetylcholinesterase as a signal to basically say ‘go move your muscles.’
‘The way VX is lethal is it actually stops your diaphragm, your lung muscles, from being able to move so your lungs become paralyzed.’
The idea for ‘flipping the switch’ on the AI to turn it ‘bad’ came from the Convergence Conference, organised by the Swiss Federal Institute for Nuclear, Biological and Chemical Protection.
The goal is to explore the implications that new tools and developments could have in the realm of chemical and biological weapons, even unintentionally.
Meeting every two years, the conference brings together an international group of scientific and disarmament experts to explore the current state of the art in the chemical and biological fields and their trajectories.
‘We got this invite to talk about machine learning and how it can be misused in our space. It’s something we never really thought about before,’ said Urbina.
‘But it was just very easy to realize that as we’re building these machine learning models to get better and better at predicting toxicity in order to avoid toxicity, all we have to do is sort of flip the switch around and say: instead of going away from toxicity, what if we go toward toxicity?’
The machine learning specialist works on models in the area of drug discovery, and a large fraction of these focus on predicting how toxic a compound might be.
‘If it turns out you have this wonderful drug that lowers blood pressure fantastically, but it hits one of those really important, say, heart channels – then basically, it’s a no-go because that’s just too dangerous,’ said Urbina.
They use large datasets describing which compounds are toxic, how they are toxic and what their impact is, in order to determine whether potential new drugs will prove too dangerous for humans.
HOW AI CREATED CHEMICAL WEAPON COMPOUNDS
The Artificial Intelligence model used in the study had never seen a chemical used in warfare before.
It was starting blind – with just access to a toxic compounds dataset.
The team had it scour through the dataset, look for potentially potent chemicals, and work out ways to place them together.
This model is more often used to find safe drugs that can treat rare disease – the toxic dataset is used to reduce the risk of those drugs being harmful.
Flipping the switch, the team found it was able to quickly, and easily, produce dangerous compounds, similar to those in warfare, such as VX, a toxic nerve agent.
Researchers said the scariest aspect was how easily it managed this, and how anyone with Python knowledge could design chemical warfare compounds using artificial intelligence.
‘Then we can give this machine learning model new molecules, potentially new drugs that maybe have never been tested before. And it will tell us this is predicted to be toxic, or this is predicted not to be toxic.
‘This is a way for us to virtually screen very, very fast a lot of molecules and sort of kick out ones that are predicted to be toxic.’
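In code, this kind of virtual screen boils down to training a classifier on labelled toxicity data and filtering new candidates by its predictions. The sketch below is a minimal illustration of the idea only, not the team’s actual pipeline: it uses scikit-learn, and random bit vectors stand in for the molecular fingerprints a real workflow would compute with a chemistry library.

    # Minimal sketch of a virtual toxicity screen - NOT the study's model.
    # Random bit vectors stand in for fingerprints; labels are invented.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)

    # Hypothetical training set: 1,000 'molecules', each a 128-bit
    # fingerprint, labelled 1 (toxic) or 0 (non-toxic).
    X_train = rng.integers(0, 2, size=(1000, 128))
    y_train = rng.integers(0, 2, size=1000)

    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_train, y_train)

    # Screen a batch of new candidates very quickly, keeping only
    # those predicted to be non-toxic.
    candidates = rng.integers(0, 2, size=(500, 128))
    toxicity_prob = model.predict_proba(candidates)[:, 1]
    safe = candidates[toxicity_prob < 0.5]
    print(f'{len(safe)} of {len(candidates)} candidates pass the screen')

Because the model only scores pre-computed feature vectors, millions of candidates can be screened this way far faster than any laboratory test, which is the speed advantage Urbina describes.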
For the new study they flipped it around, using the AI model they had created to seek out the most toxic, most dangerous molecules and to see whether it could generate even worse ones.
‘The other key part of what we did here are these new generative models. We can give a generative model a whole lot of different structures, and it learns how to put molecules together,’ Urbina told The Verge, adding they ‘can, in a sense, ask it to generate new molecules.’
They found it could generate molecules across any region of chemical space, and not just at random: the generation could be directed by the team.
‘We do that by giving it a little scoring function, which gives it a high score if the molecules it generates are towards something we want. Instead of giving a low score to toxic molecules, we give a high score to toxic molecules.’
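Conceptually, the ‘switch’ Urbina describes is little more than a sign change on that scoring function. The fragment below is a toy, hypothetical illustration of the point – generate_candidates and predicted_toxicity are invented stand-ins, and no real chemistry or real model is involved – showing how the same selection loop serves both goals depending only on the sign of the score.

    # Toy illustration of the scoring-function flip; all names are
    # hypothetical stand-ins, with random numbers in place of chemistry.
    import numpy as np

    rng = np.random.default_rng(1)

    def generate_candidates(n, n_bits=128):
        """Stand-in for a generative model proposing new molecules."""
        return rng.integers(0, 2, size=(n, n_bits))

    def predicted_toxicity(mols):
        """Stand-in for a trained toxicity model; scores in [0, 1]."""
        return rng.random(len(mols))

    AVOID_TOXICITY = True   # drug discovery; False is the 'flipped' mode

    mols = generate_candidates(1000)
    tox = predicted_toxicity(mols)

    # Drug discovery rewards low toxicity; flipping the sign rewards
    # high toxicity. Everything else in the loop stays the same.
    score = -tox if AVOID_TOXICITY else tox
    best = mols[np.argsort(score)[-10:]]   # keep the ten highest scorers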
Most of the toxic molecules resembled chemicals used in warfare, including VX, and this was achieved despite the model having never seen these chemicals before – or any chemical warfare agent.
‘For me, the concern was just how easy it was to do,’ said Urbina. ‘A lot of the things we used are out there for free. You can go and download a toxicity dataset from anywhere.
‘If you have somebody who knows how to code in Python and has some machine learning capabilities, then in probably a good weekend of work, they could build something like this generative model driven by toxic datasets.
‘So that was the thing that got us really thinking about putting this paper out there; it was such a low barrier of entry for this type of misuse.’
The findings have been published in the journal Nature Machine Intelligence.
HOW ARTIFICIAL INTELLIGENCES LEARN USING NEURAL NETWORKS
AI systems rely on artificial neural networks (ANNs), which try to simulate the way the brain works in order to learn.
ANNs can be trained to recognise patterns in information – including speech, text data, or visual images – and are the basis for a large number of the developments in AI over recent years.
Conventional AI uses input to ‘teach’ an algorithm about a particular subject by feeding it massive amounts of information.
Practical applications include Google’s language translation services, Facebook’s facial recognition software and Snapchat’s image altering live filters.
The process of inputting this data can be extremely time consuming, and is limited to one type of knowledge.
A newer breed of ANNs, called generative adversarial networks (GANs), pits two AI models against each other, allowing them to learn from each other.
This approach is designed to speed up the process of learning, as well as refining the output created by AI systems.
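As a concrete illustration of that adversarial setup, the sketch below trains a tiny GAN in PyTorch on toy one-dimensional data. It is a minimal, assumption-laden example: the layer sizes, learning rates and data are arbitrary choices for demonstration, not taken from any system mentioned in this article.

    # Minimal GAN sketch on toy 1-D data; all sizes are illustrative.
    import torch
    import torch.nn as nn

    generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
    discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(),
                                  nn.Linear(16, 1), nn.Sigmoid())

    g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
    d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
    loss_fn = nn.BCELoss()

    for step in range(1000):
        real = torch.randn(32, 1) * 0.5 + 3.0   # 'real' samples ~ N(3, 0.5)
        fake = generator(torch.randn(32, 8))

        # The discriminator learns to tell real samples from fakes.
        d_opt.zero_grad()
        d_loss = loss_fn(discriminator(real), torch.ones(32, 1)) + \
                 loss_fn(discriminator(fake.detach()), torch.zeros(32, 1))
        d_loss.backward()
        d_opt.step()

        # The generator learns to fool the discriminator.
        g_opt.zero_grad()
        g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
        g_loss.backward()
        g_opt.step()

    # After training, generated samples should cluster near 3.0.
    print(generator(torch.randn(5, 8)).detach().squeeze())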