Viral selfie tool may be too honest with its classifications

Trump the ‘ex-president’, Hillary the ‘second-rater’ and Elon Musk the ‘demagogue’: AI selfie tool classifies people by their selfies – with some very cruel results

  • AI learned to classify people using a database of 14 million images
  • Users upload a picture and AI will reveal what it ‘thinks’ you look like
  • AI has been found to be cruel in some of its results 

You can now see what you look like through the eyes of an AI.

ImageNet Roulette was trained with millions of images and uses a neural network to classify pictures of people, with some ‘dubious and cruel’ results.

The technology was developed to show the importance of choosing the correct data when training a machine learning system, as it may otherwise learn to be biased.



The AI was trained using ImageNet, a massive database of 14 million images created in 2009. Users can upload a picture (like this one of US President Donald Trump) to the website


To see what this AI thinks of you, simply snap a picture using a webcam or upload an image (like this picture of Hillary Clinton) to the website – and in seconds it will produce a classification


ImageNet Roulette uses a neural network to classify pictures of people (such as this one of Kim Kardashian West), with some ‘dubious and cruel’ results. The AI classified the reality-TV star as ‘eccentric’

ImageNet Roulette was created by artist Trevor Paglen and Kate Crawford, co-founder of New York University’s AI Now Institute.

‘ImageNet Roulette is meant to demonstrate how various kinds of politics propagate through technical systems, often without the creators of those systems even being aware of them,’ the team shared on the website.

The AI was trained on ImageNet, a massive database of 14 million images created in 2009, Business Insider reported.

The creators of ImageNet Roulette trained their AI on the 2,833 sub-categories of ‘person’ found in ImageNet.

To see what this AI thinks of you, simply snap a picture using a webcam or upload an image to the website – and in seconds it will produce a classification. 
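
Under the hood, the process is standard image classification: a picture goes in, and the network returns whichever of its training labels fits best. Below is a minimal Python sketch of that pipeline, assuming an off-the-shelf ResNet with the standard 1,000 ImageNet classes and a hypothetical file ‘selfie.jpg’ – it is not ImageNet Roulette’s actual code, which was trained on the 2,833 ‘person’ sub-categories instead.

    # Minimal sketch of the classification pipeline; NOT ImageNet Roulette's code.
    # Assumes torchvision's pretrained ResNet-18 (1,000 standard ImageNet classes).
    import torch
    from torchvision import models, transforms
    from PIL import Image

    weights = models.ResNet18_Weights.DEFAULT
    model = models.resnet18(weights=weights)
    model.eval()

    # Standard ImageNet preprocessing: resize, crop, convert, normalize
    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    image = Image.open('selfie.jpg').convert('RGB')  # hypothetical uploaded photo
    batch = preprocess(image).unsqueeze(0)

    with torch.no_grad():
        logits = model(batch)

    # The model can only answer with labels it was trained on - if the training
    # taxonomy contains cruel or offensive categories, those are what comes out.
    print(weights.meta['categories'][logits.argmax().item()])

The artists’ point is visible in the last line: the output vocabulary is fixed by the training data, so a dataset seeded with insulting ‘person’ categories will produce insulting answers.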

The image can also be that of a celebrity, which can produce some interesting labels.

The AI labeled US President Donald Trump as ‘ex-president’, suggesting he may not win a second term.

Another cruel classification was for Hillary Clinton – the AI dubbed her a ‘second-rater’. 

Kim Kardashian West was labeled ‘eccentric’, Chrissy Teigen a ‘non-smoker’ and Meghan Markle was viewed as a ‘biographer’.


Kim Kardashian West was labeled ‘eccentric’ and Meghan Markle (pictured) was viewed as a ‘biographer’

The machine learning system also had something to say about SpaceX’s CEO, Elon Musk – it classified him as a ‘demagogue’. 

ImageNet Roulette is not the first AI to ‘say’ exactly how it feels.

In 2016, Microsoft released ‘Tay’ on Twitter, a chatbot developed to interact with users. It took a turn for the worse when people took advantage of flaws in Tay’s algorithm, prompting the AI chatbot to respond to certain questions with racist answers.

These included the bot using racial slurs, defending white supremacist propaganda, and supporting genocide.

The bot also managed to spout gems such as, ‘Bush did 9/11.’


The machine learning system also had something to say about SpaceX’s CEO, Elon Musk – it classified him as a ‘demagogue’ 


Chrissy Teigen was classified as a ‘non-smoker’, which appears to be an accurate classification

It also said: ‘donald trump is the only hope we’ve got’, in addition to ‘Repeat after me, Hitler did nothing wrong.’

This was followed by, ‘Ted Cruz is the Cuban Hitler…that’s what I’ve heard so many others say’.

This happened because the bot learned from the tweets people sent to its account, and the algorithm behind it lacked the filters needed to screen out offensive content.
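
For illustration only, the kind of output filter the bot lacked can be sketched as a simple blocklist check in Python – the term list and function below are placeholder assumptions, not a description of Microsoft’s actual moderation:

    # Toy sketch of an output filter; the blocklist entries are placeholders.
    BLOCKED_TERMS = {'hitler', 'genocide'}  # illustrative only

    def is_safe_reply(text):
        # Reject a candidate reply if any blocked term appears in it
        words = set(text.lower().split())
        return not (words & BLOCKED_TERMS)

    reply = 'a reply learned from user tweets'
    print(reply if is_safe_reply(reply) else '[reply withheld by filter]')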

Another instance occurred when an algorithm used by officials in Florida automatically rated a more seasoned white criminal as a lower risk of committing a future crime than a black offender with only misdemeanors on her record.

In a separate case, an AI was found to characterize black-sounding names as ‘unpleasant’, which researchers believe is a result of human prejudice hidden in the training data.

 
