AI developed by the US military that tracks changes in your behaviour online could one day be used by bosses to keep tabs on you

  • US defence department is testing a new system that monitors web activity 
  • It senses ‘micro changes’ in behaviour of people with high security clearances 
  • The project will sift through data to detect if employees are trustworthy
  • If the pilot proves successful, it could be a model for the future of corporate HR 

AI software being tested by the US defence department could one day lead to systems for keeping tabs on employees, experts have warned.

The Defense Security Service (DSS) project monitors all online activity of employees with top-secret clearance – including emails, social media use and websites visited.

It’s trained to detect ‘micro changes’ in the behaviour of employees, looking for evidence of untrustworthiness and the future risks they may pose.

The system would analyse employee data from their online activity and the information they provided through initial screening processes.  

If the pilot proves successful it could provide a model for the future of corporate AI, civil liberties groups suggest.

It raises questions about how closely companies should be tracking their employees’ digital lives in the future.

A new AI-enabled project being tested by the US defence department senses ‘micro changes’ in the behaviour of employees with top-secret clearances. By analysing online activity, the system would detect employees who have betrayed that trust 

The technology was brought to light through an in-depth report by Patrick Tucker for Defense One, a US specialist publication on developments in the military. 

Griff Ferris, legal and policy officer at Big Brother Watch, a British civil liberties and privacy campaigning organisation, says it sets a ‘worrying precedent’.

‘Using artificial intelligence to closely monitor employees’ every move at work as well as their personal lives in an attempt to predict what they will do in future is the epitome of total surveillance,’ he told MailOnline.

‘People shouldn’t be put under surveillance at work without suspicion as it erodes the principle that we are innocent until proven guilty.’ 

Military technology frequently becomes commercialised and is adopted by the private sector.

Past examples include GPS, developed in the 1970s for the military and now used the world over.

According to the DSS, the new pilot system stems from an urgent need to clear a security clearance backlog of more than 600,000 people.

The average prospective Defence Department employee waits a year for clearance because the current system involves mailing questionnaires to former places of employment, waiting for a response, and scanning the returned paper documents into a mainframe database.

Officials told Defense One that, in addition to being old-fashioned, this system only sheds light on an individual’s working past, whereas an indication of future behaviour is needed. 

The pilot involves collecting an individual’s digital footprint, or web activity, and matching it with other data the department holds on the person.

Since what we do online provides an insight into our behaviour, officials hope this will give a fuller snapshot of the person. 

Using machine learning algorithms to derive insights, the pilot seeks a much fuller spectrum of digital information and then combines it with other data within the Defense Department.

They say it’s a ‘tradeoff’ for the privilege and power that come with holding a secret or top-secret clearance.


The system would analyse data from employees’ online activity and the information they provided through initial screening processes. The Defense Security Service, or DSS, believes this will eliminate the chance of information leaks from those holding security clearances

When you are seeking a job handling highly sensitive national secrets, you agree to give up a lot of information about yourself, said the researchers leading the programme. 

‘Once constructed fully, it will look at the bulk of the cyber data you generate,’ said Mark Nehmer, the technical director of research and development and technology transfer at the DSS’ National Background Investigative Services. 

‘It’s IP-based with a date time stamp on it. There’s no name associated with it; you actually have to go to a different set of logs to marry those two things up.’ 
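
That description suggests a simple two-log join: activity records carry only an IP address and a timestamp, and a name emerges only when they are matched against a separate log of IP assignments. Below is a minimal sketch of that idea in Python; the log formats and field names are entirely hypothetical, as the DSS has not published its schema.

```python
from datetime import datetime

# Hypothetical anonymised activity log: IP and timestamp only, no name.
activity_log = [
    {"ip": "10.1.2.3", "time": datetime(2018, 6, 1, 9, 30), "event": "site_visit"},
    {"ip": "10.1.2.7", "time": datetime(2018, 6, 1, 9, 45), "event": "email_sent"},
]

# Hypothetical second log recording which user held an IP over a lease window.
ip_assignments = [
    {"ip": "10.1.2.3", "user": "employee_a",
     "start": datetime(2018, 6, 1, 8, 0), "end": datetime(2018, 6, 1, 18, 0)},
]

def marry_up(activity, assignments):
    """Attribute each activity record to a user by joining on IP address
    and checking that the timestamp falls inside the assignment window."""
    for record in activity:
        for lease in assignments:
            if (record["ip"] == lease["ip"]
                    and lease["start"] <= record["time"] <= lease["end"]):
                yield {**record, "user": lease["user"]}

for attributed in marry_up(activity_log, ip_assignments):
    print(attributed)
```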

The data from web activity will be joined with data from what’s called ‘continuous evaluation’, a system that monitors life events related to clearance holders.

Examples include getting divorced or married, taking on a lot of debt, tax returns, arrests and sudden foreign travel.

Mr Nehmer said that the eventual goal is a system that can sense not just impending insider crime but also far more intimate states of ‘pre-crime’.

‘We can begin to add whether or not the activity that the individual is producing is increasing, decreasing, or staying within a fairly normative range,’ he said. 

‘Fundamentally, we are there to look out for micro changes in behaviour that might indicate that a person is interested, or disinterested, in continuing their affiliation with the Department of Defense, or discontinuing their affiliation with life,’ he said. 
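
The DSS has not published how this range check works. One generic way to flag activity as increasing, decreasing or within a normative range is to compare a recent count against a per-person baseline, for instance with a z-score. The Python sketch below is an illustration only, with made-up numbers and a hypothetical threshold.

```python
from statistics import mean, stdev

def classify_activity(history, recent, threshold=2.0):
    """Compare a recent activity count against a personal baseline.

    history: past daily activity counts for one person (the baseline)
    recent: today's count
    Returns 'increasing', 'decreasing' or 'normal' depending on how many
    standard deviations the recent count sits from the baseline mean.
    """
    mu, sigma = mean(history), stdev(history)
    z = (recent - mu) / sigma if sigma else 0.0
    if z > threshold:
        return "increasing"
    if z < -threshold:
        return "decreasing"
    return "normal"

# Made-up example: a fairly steady baseline, then a sudden spike.
baseline = [42, 38, 45, 40, 44, 39, 41]
print(classify_activity(baseline, 43))   # -> normal
print(classify_activity(baseline, 90))   # -> increasing
```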

Mr Nehmer is adamant that the objective isn’t to slap the cuffs on people but to reveal changes before punishment becomes necessary.

The system is just a concept at the moment, but it poses questions about how much access our employers should have to our data. 

DSS experts say that as with a lot of military tech, companies in the future may want to implement something similar, creating a new norm for employee monitoring. 

When asked how likely it is that this type of AI surveillance will be used by corporate employers, Mr Ferris said that not enough is known about this type of AI employee surveillance being used in the UK. 

However, he did refer us to a Trades Union Congress report on workplace monitoring, which found that more than half of workers (56 per cent) think it’s likely they’re being monitored at work.

Around 70 per cent thought that surveillance is likely to become more common in the future and that the government should ‘ensure employers can only monitor their staff for legitimate reasons that protect the interests of workers’.

Recent reports from ex-Tesla employees claim that the company surveilled staff at its Gigafactory outside Reno, Nevada, eavesdropping on employees’ personal cellphones while at work.

Tesla has denied the claims. 

WHY ARE PEOPLE SO WORRIED ABOUT AI?

It is an issue troubling some of the greatest minds in the world at the moment, from Bill Gates to Elon Musk.

SpaceX and Tesla CEO Elon Musk described AI as our ‘biggest existential threat’ and likened its development to ‘summoning the demon’.

He believes super intelligent machines could use humans as pets.

Professor Stephen Hawking said it is a ‘near certainty’ that a major technological disaster will threaten humanity in the next 1,000 to 10,000 years.

They could steal jobs 

More than 60 per cent of people fear that robots will lead to fewer jobs in the next ten years, according to a 2016 YouGov survey.

And 27 per cent predict that it will decrease the number of jobs ‘a lot’, with previous research suggesting admin and service sector workers will be the hardest hit.

A quarter of the respondents predicted robots will become part of everyday life in just 11 to 20 years, with 18 per cent predicting this will happen within the next decade. 

As well as posing a threat to our jobs, AI could ‘go rogue’ and become too complex for scientists to understand, other experts believe.

They could ‘go rogue’ 

Computer scientist Professor Michael Wooldridge said AI machines could become so intricate that engineers don’t fully understand how they work.

If experts don’t understand how AI algorithms function, they won’t be able to predict when they fail.

This means driverless cars or intelligent robots could make unpredictable ‘out of character’ decisions during critical moments, which could put people in danger.

For instance, the AI behind a driverless car could choose to swerve into pedestrians or crash into barriers instead of deciding to drive sensibly.

They could wipe out humanity 

Some people believe AI will wipe out humans completely.

‘Eventually, I think human extinction will probably occur, and technology will likely play a part in this,’ DeepMind’s Shane Legg said in a recent interview.

He singled out artificial intelligence, or AI, as the ‘number one risk for this century’.

Musk warned that AI poses more of a threat to humanity than North Korea.

‘If you’re not concerned about AI safety, you should be. Vastly more risk than North Korea,’ the 46-year-old wrote on Twitter.

‘Nobody likes being regulated, but everything (cars, planes, food, drugs, etc) that’s a danger to the public is regulated. AI should be too.’

Musk has consistently advocated for governments and private institutions to apply regulations on AI technology.

He has argued that controls are necessary in order to prevent machines from advancing beyond human control.
