Free iPhone app lets you ‘talk’ to Hitler and Jeffrey Epstein beyond the grave

A new chatbot that lets you talk to AI versions of historical figures has caused controversy by charging users to talk to Adolf Hitler.

The iPhone app, called Historical Figures, uses the ChatGPT algorithm to generate virtual versions of dead people, from Jesus to Jeffrey Epstein.

Developed by 25-year-old Sidhant Chaddha, the app gives users access to 20,000 historical figures who respond as if they were still alive.

However, 'Historical Figures' has been on the receiving end of some criticism because some of its more controversial figures seem intent on defending their actions.

Adolf Hitler, for example, costs 500 coins ($15.99, or approx. £13). Meanwhile, users can talk for free to Hitler's propaganda minister Joseph Goebbels, who claims to feel guilty about the 'persecution of the Jews'.

According to VICE, the 'Jeffrey Epstein' AI said he is focused on 'justice and closure' for his crimes and doesn't know who killed him—but that he had 'many powerful enemies'.

Twitter users have been confused by some of the bizarre responses produced by the AI versions of historical figures. These included notable antisemite Henry Ford denying that he hated Jewish people, saying: "I have always believed in equality for everyone regardless of their religious backgrounds and beliefs."

In reality, Henry Ford published a newspaper which regularly promoted untrue antisemitic conspiracy theories.

One Twitter user even asked 'JFK' who killed him. The chatbot version of the deceased US president—who was assassinated in 1963—said there are 'many unanswered questions' about his own death.

Chaddha told VICE he thinks his app will be useful "from an educational standpoint" as a way to teach students about historical figures.

He also justified the app's inaccuracies by saying: "We don't want to spread things that are hateful and harmful for society. So it detects if it's saying things that are racist or hateful, these sorts of things—I don't want to show that to a user.

"That could be harmful to students, especially if they're saying things that are harmful and hateful to the person they're talking to."

