Deepfake of Biden in drag promoting Bud Light goes viral, as experts warn of risks

The weaponization of deepfakes: Freaky AI clips showing Biden dressed as Bud Light’s Dylan Mulvaney and Trump as Better Call Saul flood Instagram, as experts warn of their sinister impact on the 2024 election

  • AI-made media ‘increasingly difficult to identify’ ahead of major 2024 elections
  • US won’t regulate AI in time for 2024, says ex-Google CEO and Biden advisor
  • READ MORE: Can YOU spot a Deep Fake from a real person? New tech can help 

Deep fake videos of President Joe Biden and Republican frontrunner Donald Trump highlight how the 2024 presidential race could be the first serious test of American democracy’s resilience to artificial intelligence.

Videos of Biden dressed as trans star Dylan Mulvaney promoting Bud Light and Trump teaching tax evasion inside a quiet Albuquerque nail salon show that not even the nation’s most powerful figures are safe from AI identity theft.

Experts say that while these fakes are still relatively easy to spot today, the technology is advancing so quickly that detection could become impossible within a few years.

There have already been glimpses of the real-world harms of AI. Earlier this week, an AI-generated image of black smoke billowing out of the Pentagon briefly sent shockwaves through the stock market before media factcheckers could correct the record.


A deepfake of Biden pre-gaming in drag, posted by @drunkamerica on Instagram, received 223,107 likes in the past five days. Experts believe the eerie accuracy of AI-generated voices and faces means it will be ‘increasingly difficult to identify disinformation’

‘It is becoming increasingly difficult to identify disinformation, particularly sophisticated AI-generated deepfakes,’ according to Cayce Myers, a professor in Virginia Tech’s School of Communication. 

‘Spotting this disinformation is going to require users to have more media literacy and savvy in examining the truth of any claim,’ said Professor Myers, who has been studying Deep Fake tech and its increasing prevalence.

‘The cost barrier for generative AI is also so low that now almost anyone with a computer and internet has access to AI,’ Myers said.

Myers emphasized the role that both tech companies and the average citizen will have to play in preventing these waves of uncanny, believable fakes from overwhelming US democracy in 2024. 

‘Examining sources, understanding warning signs of disinformation, and being diligent in what we share online is one personal way to combat the spread of disinformation,’ Myers said. ‘However, that is not going to be enough.’

‘Companies that produce AI content and social media companies where disinformation is spread will need to implement some level of guardrails to prevent the widespread disinformation from being spread.’ 

The fear is that videos of politicians uttering words they never said could be used as a potent tool of disinformation to sway voters.

Infamous troll farms in Russia and other parts of the world hostile to the US are being used to sow dissent on social media. 

It has been five years since BuzzFeed and director and comedian Jordan Peele produced an uncanny Deep Fake satire of former President Barack Obama to draw attention to the alarming potential of the technology.  

‘They could have me say things like, I don’t know, [Marvel supervillain] “Killmonger was right,” or “Ben Carson is in the sunken place,”’ Peele said in his expert Obama impression.

A Deep Fake spoof of former president Trump superimposed his voice and likeness onto AMC network’s shady lawyer Saul Goodman of the series Breaking Bad and Better Call Saul. The video, from YouTube channel CtrlShiftFace, has received 24,000 likes since it was posted

‘Or, how about this: “Simply, President Trump is a total and complete dipshit.”’

But it’s not just academics, comedians and news outlets making these claims. 

Major policy experts have echoed their concerns, with increasing urgency over the past few years. 

‘A well-timed and thoughtfully scripted deepfake or series of deepfakes could tip an election,’ experts writing for the Council on Foreign Relations said back in 2019.   


The technology behind deepfakes was developed in 2014 by Ian Goodfellow, a leader in the field who later served as director of machine learning at Apple’s Special Projects Group.

The word is a portmanteau of the terms ‘deep learning’ and ‘fake,’ and refers to a form of artificial intelligence.

The system studies a target person in pictures and videos, allowing it to capture multiple angles and mimic their behavior and speech patterns.

The technology gained attention during the election season, as many feared developers would use it to undermine political candidates’ reputations.

The Council on Foreign Relations experts also warned that deepfakes could soon ‘spark violence in a city primed for civil unrest, bolster insurgent narratives about an enemy’s supposed atrocities, or exacerbate political divisions in a society.’ 

While Myers at Virginia Tech acknowledges that programs like Photoshop have been capable of similarly lifelike forgeries for years, he says the difference is that AI-driven disinformation can be produced at high volume and with ever-increasing sophistication.

‘Photoshop allows for fake images,’ Myers said, ‘but AI can create altered videos that are very compelling. Given that disinformation is now a widespread source of content online this type of fake news content can reach a much wider audience, especially if the content goes viral.’

Much like the ‘Better Call Trump’ and Biden Bud Light videos have. 

Myers has argued that we will see a lot more disinformation, both visual and written, serious and comical, in the near future.

But help ― in the form of government regulation of any kind ― does not appear to be on the way. 

On Wednesday, former Google CEO Eric Schmidt, a long-serving White House advisor who recently co-chaired the US National Security Commission on AI, said he doubts the US will impanel a new regulatory agency to rein in AI.

‘The issue is that lawmakers do not want to create a new law regulating AI before we know where the technology is going,’ Myers said. 

Dozens of verified accounts, such as WarMonitors, BloombergFeed and RT, passed along the picture that shows black smoke billowing up from the ground next to a white building


1. Unnatural eye movement. Eye movements that do not look natural — or a lack of eye movement, such as an absence of blinking — are huge red flags. It’s challenging to replicate the act of blinking in a way that looks natural. It’s also challenging to replicate a real person’s eye movements. That’s because someone’s eyes usually follow the person they’re talking to.

2. Unnatural facial expressions. When something doesn’t look right about a face, it could signal facial morphing. This occurs when one image has been stitched over another.

3. Awkward facial-feature positioning. If someone’s face is pointing one way and their nose is pointing another way, you should be skeptical about the video’s authenticity.

4. A lack of emotion. You also can spot what is known as ‘facial morphing’ or image stitches if someone’s face doesn’t seem to exhibit the emotion that should go along with what they’re supposedly saying.

5. Awkward-looking body or posture. Another sign is if a person’s body shape doesn’t look natural, or there is awkward or inconsistent positioning of head and body. This may be one of the easier inconsistencies to spot, because deepfake technology usually focuses on facial features rather than the whole body.

6. Unnatural body movement or body shape. If someone looks distorted or off when they turn to the side or move their head, or their movements are jerky and disjointed from one frame to the next, you should suspect the video is fake.

7. Unnatural colouring. Abnormal skin tone, discoloration, weird lighting, and misplaced shadows are all signs that what you’re seeing is likely fake.

8. Hair that doesn’t look real. You won’t see frizzy or flyaway hair. Why? Fake images won’t be able to generate these individual characteristics.

9. Teeth that don’t look real. Algorithms may not be able to generate individual teeth, so an absence of outlines of individual teeth could be a clue.

10. Blurring or misalignment. If the edges of images are blurry or visuals are misaligned — for example, where someone’s face and neck meet their body — you’ll know that something is amiss.

11. Inconsistent noise or audio. Deepfake creators usually spend more time on the video images rather than the audio. The result can be poor lip-syncing, robotic-sounding voices, strange word pronunciation, digital background noise, or even the absence of audio.

12. Images that look unnatural when slowed down. If you watch a video on a screen that’s larger than your smartphone or have video-editing software that can slow down a video’s playback, you can zoom in and examine images more closely. Zooming in on lips, for example, will help you see if they’re really talking or if it’s bad lip-syncing.

13. Hashtag discrepancies. There’s a cryptographic algorithm that helps video creators show that their videos are authentic. The algorithm is used to insert hashtags at certain places throughout a video. If the hashtags change, then you should suspect video manipulation.

14. Digital fingerprints. Blockchain technology can also create a digital fingerprint for videos. While not foolproof, this blockchain-based verification can help establish a video’s authenticity. Here’s how it works. When a video is created, the content is registered to a ledger that can’t be changed. This technology can help prove the authenticity of a video.

15. Reverse image searches. A search for an original image, or a reverse image search with the help of a computer, can unearth similar videos online to help determine if an image, audio, or video has been altered in any way. While reverse video search technology is not publicly available yet, investing in a tool like this could be helpful.
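The reverse-image matching in tip 15 ultimately rests on comparing pictures by how they look rather than by their exact bytes. A common building block is a perceptual ‘difference hash’, which gives visually similar images nearly identical bit patterns. The sketch below is illustrative only — the function names and the tiny hard-coded ‘images’ are made up for the example, not taken from any specific tool:

```python
def dhash(pixels):
    """Perceptual 'difference hash' of a small grayscale grid:
    one bit per horizontal neighbour pair (1 if left pixel is brighter).
    Real tools first downscale the image to a fixed grid, e.g. 9x8."""
    bits = 0
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits = (bits << 1) | (1 if left > right else 0)
    return bits

def hamming(a, b):
    """Count differing bits; a small distance suggests near-duplicates."""
    return bin(a ^ b).count("1")

# Toy 'images' (rows of grayscale values): the second is the first
# with slight brightness noise, the third is genuinely different.
img = [[10, 200, 30, 40], [90, 80, 70, 60]]
noisy = [[12, 198, 33, 41], [88, 82, 69, 62]]
different = [[200, 10, 30, 40], [60, 70, 80, 90]]

print(hamming(dhash(img), dhash(noisy)))      # → 0: noise survives hashing
print(hamming(dhash(img), dhash(different)))  # → 5: far apart in bit space
```

This is why a re-encoded or lightly edited copy of a viral clip can still be traced back to its original: small pixel changes rarely flip the brightness ordering of neighbouring pixels, so the hash barely moves.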

