
First AI that sees like a human could lead to automated search and rescue robots, scientists say

  • Computer scientists have taught an agent to take snapshots of its surroundings
  • Most artificial intelligence systems are only trained for very specific tasks
  • The agent takes glimpses around a room it has never seen before to create a ‘full scene’
  • The team of computer scientists says the skill could be used for search-and-rescue missions and would equip robots for new perception tasks as they arise

Computer scientists have taught an artificial intelligence agent how to take in its whole environment by just taking a few snapshots.

The new technology gathers visual information that can be used for a wide range of tasks, including search and rescue.

Researchers have taught the computer system how to take quick glimpses around a room it has never seen before to create a ‘full scene’.

The scientists used deep learning, a type of machine learning inspired by the brain’s neural networks, to train their agent on thousands of 360-degree images of different environments. 
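As a rough illustration of that training idea, and not the team’s actual code, a completion network can be taught to fill in a whole panorama from a few masked-out glimpses. Everything below – the PyTorch framework, the network architecture, the image size and the random stand-in data – is an assumption made purely for the sketch.

```python
# Minimal, illustrative sketch (not the researchers' code) of training an agent
# to complete a 360-degree scene from a few glimpses. Sizes and data are assumed.
import torch
import torch.nn as nn

class GlimpseCompletionNet(nn.Module):
    """Maps a partially observed panorama to a prediction of the full panorama."""
    def __init__(self):
        super().__init__()
        # Simple encoder-decoder over an assumed 3x64x128 equirectangular image.
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, observed):
        return self.net(observed)

def mask_glimpses(panorama, num_glimpses=4, glimpse_width=16):
    """Keep only a few vertical strips ('glimpses') of the panorama, zeroing the rest."""
    masked = torch.zeros_like(panorama)
    width = panorama.shape[-1]
    starts = torch.randint(0, width - glimpse_width, (num_glimpses,))
    for s in starts.tolist():
        masked[..., s:s + glimpse_width] = panorama[..., s:s + glimpse_width]
    return masked

model = GlimpseCompletionNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Stand-in for "thousands of 360-degree images": random tensors of the assumed size.
for step in range(100):
    panorama = torch.rand(8, 3, 64, 128)   # batch of full panoramas
    observed = mask_glimpses(panorama)      # what the agent actually saw
    predicted = model(observed)             # its guess at the full scene
    loss = loss_fn(predicted, panorama)     # penalise errors everywhere, seen or not
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In a setup like this, the loss rewards the network for guessing the unseen parts of each panorama correctly – the kind of prior knowledge the agent later draws on when it looks around a room for the first time.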

They say their research could aid search-and-rescue missions by enabling robots to relay information to authorities.

Most computer systems are trained for very specific tasks – such as to recognise an object or estimate its volume – in an environment they have experienced before.


Computer scientists have taught an artificial intelligence agent to do something that usually only humans can do: take a few quick glimpses around and infer its whole environment

The tech, developed by a team of computer scientists from the University of Texas, gathers visual information that can then be used for a wide range of tasks.

The main aim is for it to quickly locate people, flames and hazardous materials and relay that information to firefighters, the researchers said.

After each glimpse, it chooses the next shot that it predicts will add the most new information about the whole scene.

They use the example of a person in a shopping centre they have never visited before: if you saw apples, you would expect to find oranges nearby, but to locate the milk, you might glance the other way.

Based on these glances, the agent infers what it would have seen if it had looked in all the other directions, reconstructing a full 360-degree image of its surroundings. 

When presented with a scene it has never seen before, the agent uses its experience to choose a few glimpses. 
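In code, that decision rule amounts to a greedy loop: score every direction the agent has not yet looked in and take the most promising one. The sketch below is purely illustrative – the list of directions and the placeholder scoring function stand in for the learned model the researchers describe.

```python
# Illustrative sketch (not the published method) of greedy glimpse selection:
# after each glimpse, pick the direction expected to reveal the most new information.
import random

CANDIDATE_DIRECTIONS = [f"azimuth_{deg}" for deg in range(0, 360, 45)]  # 8 look directions

def expected_new_information(direction, seen):
    """Placeholder for a learned estimate of how much an unseen direction would
    add to the reconstructed scene; a real system would query its completion model."""
    if direction in seen:
        return 0.0                      # nothing new from looking the same way twice
    return random.random()              # stand-in for a learned prediction

def choose_glimpses(budget=3):
    seen = []
    for _ in range(budget):
        # Greedily take the direction with the highest predicted information gain.
        best = max(CANDIDATE_DIRECTIONS, key=lambda d: expected_new_information(d, seen))
        seen.append(best)
        # In the full system, the agent would now update its 360-degree
        # reconstruction using the pixels actually observed in that direction.
    return seen

print(choose_glimpses())
```

The important piece is the greedy choice: the next glimpse is whichever candidate the agent predicts will add the most new information about the scene, which is the behaviour described above.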

Professor Kristen Grauman, who led the study, said: ‘Just as you bring in prior information about the regularities that exist in previously experienced environments – like all the grocery stores you have ever been to – this agent searches in a non-exhaustive way.’

‘We want an agent that’s generally equipped to enter environments and be ready for new perception tasks as they arise.

‘It behaves in a way that’s versatile and able to succeed at different tasks because it has learned useful patterns about the visual world.’ 

‘What makes this system so effective is that it’s not just taking pictures in random directions but, after each glimpse, choosing the next shot that it predicts will add the most new information about the whole scene,’ Professor Grauman said. 

The research was supported, in part, by the US Defense Advanced Research Projects Agency and the US Air Force Office of Scientific Research.

HOW DOES ARTIFICIAL INTELLIGENCE LEARN?

AI systems rely on artificial neural networks (ANNs), which try to simulate the way the brain works in order to learn.

ANNs can be trained to recognise patterns in information – including speech, text data, or visual images – and are the basis for a large number of the developments in AI over recent years.

Conventional AI uses input to ‘teach’ an algorithm about a particular subject by feeding it massive amounts of information.   


Practical applications include Google’s language translation services, Facebook’s facial recognition software and Snapchat’s image altering live filters.

The process of inputting this data can be extremely time-consuming, and is limited to one type of knowledge. 

A new breed of ANNs called Adversarial Neural Networks pits the wits of two AI bots against each other, which allows them to learn from each other. 

This approach is designed to speed up the process of learning, as well as refining the output created by AI systems. 
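A minimal sketch of that two-bot idea, written in the style of a generative adversarial network, is shown below; the tiny network sizes and random stand-in data are assumptions chosen only to keep the example short.

```python
# Illustrative sketch of two networks pitted against each other (GAN-style training).
# All sizes and the random "real" data are assumptions made for the example.
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))      # invents samples
discriminator = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))  # judges them

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(200):
    real = torch.randn(32, 2) + 3.0               # stand-in for real training data
    noise = torch.randn(32, 8)
    fake = generator(noise)

    # The discriminator learns to tell real samples from generated ones...
    d_loss = (loss_fn(discriminator(real), torch.ones(32, 1))
              + loss_fn(discriminator(fake.detach()), torch.zeros(32, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # ...while the generator learns to fool it, so each network improves the other.
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

Each network’s mistakes become the other’s training signal, which is why this kind of approach can speed up learning and refine the output an AI system produces.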
