Worries grow that TikTok is new home for manipulated video and photos

The alligators of TikTok are not what they seem.

They appear in posts scattered across the video service, photoshopped into hurricane-flooded homes, blended into cheetah-pit bull hybrids or awaiting a wrestling match with a digitally engineered avatar of Tom Cruise.

And they are harmless, like much of the manipulated media on TikTok, warranting a few laughs and likes before slipping back into a relentless stream of content. But their existence worries people who study misinformation, because the same techniques are being applied to posts that sow political division, advance conspiracy theories and threaten the core tenets of democracy before the midterm elections.

“This kind of manipulation is only becoming more pervasive,” said Henry Ajder, an expert on manipulated and synthetic media. “When this volume of content can be created so quickly and at such scale, it completely changes the landscape.”

Edited or synthesized material also appears on other online platforms, such as Facebook, which has nearly 3 billion monthly active users. But experts said it was especially hard to catch on TikTok, which encourages its estimated 1.6 billion active users to put their own stamp on someone else’s content, and where reality, satire and outright deceit sometimes blend together in the fast-moving and occasionally livestreamed video feed.

The spread of potentially harmful manipulated media is hard to quantify, but researchers say they are seeing more examples emerge as the technologies that enable them become more widely accessible. Over time, experts said, they fear that the manipulations will become more common and difficult to detect.

In recent weeks, TikTok users have shared a fake screenshot of a nonexistent CNN story claiming that climate change is seasonal. One video was edited to imply that White House press secretary Karine Jean-Pierre ignored a question from Fox News reporter Peter Doocy. Another video, from 2021, resurfaced this fall with the audio altered so that Vice President Kamala Harris seemed to say virtually all people hospitalized with COVID-19 were vaccinated. (She had said “unvaccinated.”)

TikTok users have embraced even the most absurd altered posts, such as ones last month that portrayed President Joe Biden singing “Baby Shark” instead of the national anthem or that suggested a child at the White House lobbed an expletive at the first lady, Jill Biden.

But more than any single post, the danger of manipulated media lies in the way it risks further damaging the ability of many social media users to depend on concepts like truth and proof. The existence of deepfakes, which are usually created by grafting a digital face onto someone else’s body, is being used as an accusation and an excuse by those hoping to discredit reality and dodge accountability — a phenomenon known as the liar’s dividend.

Conspiracy theorists have posted official White House videos of the president on TikTok and offered debunked theories that he is a deepfake. Political consultant Roger Stone claimed on Telegram in September that footage showing him calling for violence ahead of the 2020 election, which CNN aired, was “fraudulent deepfake videos.” Lawyers for at least one person charged in the Jan. 6 riot at the U.S. Capitol in 2021 have tried to cast doubt on video evidence from the day by citing “widely available and insidious” deepfake-making technology.

“When we enter this kind of world, where things are being manipulated or can be manipulated, then we can simply dismiss inconvenient facts,” said Hany Farid, a computer science professor at the University of California, Berkeley, who sits on TikTok’s content advisory council.

Tech companies have spent years testing new tools to spot manipulations such as deepfakes. During the 2020 election season, TikTok, Facebook, Twitter and YouTube vowed to remove or label harmful manipulated content.

A 2019 California law made it illegal to create or share deceptive deepfakes of politicians within 60 days of an election, inspired in part by videos that year that were distorted to make Speaker Nancy Pelosi appear drunk.

TikTok said in a statement that it had removed videos, found by The New York Times, that breached its policies, which prohibit digital forgeries “that mislead users by distorting the truth of events and cause significant harm to the subject of the video, other persons or society.”

“TikTok is a place for authentic and entertaining content, which is why we prohibit and remove harmful misinformation, including synthetic or manipulated media, that is designed to mislead our community,” said Ben Rathe, a TikTok spokesperson.

But misinformation experts said individual examples were difficult to moderate and almost beside the point. Extended exposure to manipulated media can intensify polarization and whittle down viewers’ ability and willingness to distinguish truth from fiction.

Misinformation has already become a problem on the platform ahead of the midterm elections. In recent days, researchers from SumOfUs, a corporate accountability advocacy group, tested TikTok’s algorithm by creating an account, then searching for and watching 20 widely viewed videos that sowed doubt about the election system. Within an hour, the algorithm had switched from serving neutral content to pushing election disinformation, polarizing content, far-right extremism, QAnon conspiracy theories and false COVID-19 narratives, the researchers found.

TikTok said it had removed content cited in the report that violated its guidelines, and that it would update its systems to catch the search terms used to find the videos.

“Platforms like TikTok in particular, but really all of these social media feeds, are all about getting you through stuff quickly — they’re designed to be this fire hose barrage of content, and that’s a recipe for eliminating nuance,” said Halsey Burgund, a creative technologist in residence at the MIT Open Documentary Lab. “The vestiges of these quick, quick, quick emotional reactions just sit inside our brains and build up, and it’s kind of terrifying.”

In 2019, Burgund worked on a documentary project with multimedia artist and journalist Francesca Panetta that engineered a deepfake Richard Nixon announcing the failure of the 1969 Apollo 11 mission. (The actual expedition landed the first humans on the moon.) The project, “In Event of Moon Disaster,” won an Emmy last year.

The team used methods that are increasingly common in the online spread of misinformation: miscaptioning photos, cutting footage or changing its speed or sequence, splitting sound from images, cloning voices, fabricating text messages, creating synthetic accounts, automating lip syncs and text-to-speech, and even making deepfakes.

Most examples of manipulated content currently on social media are shoddily and obviously fabricated. But the technologies that can alter and synthesize with much more finesse are increasingly accessible and often easily learned, experts said.

“In the right hands, it’s quite creative, and there’s a lot of potential there,” Burgund said. “In the wrong hands, it’s all bad.”

Last month, several TikTok posts featuring manipulated video of Jill Biden promoting White House cancer initiatives at the Philadelphia Eagles’ home field were each viewed tens of thousands of times. In footage of the first lady singing alongside cancer patients and survivors, the sound from the crowd was replaced with loud booing and heckling, which fact checkers traced to older content from YouTube and TikTok.

Former President Donald Trump is a popular subject for parody on TikTok and other platforms. On TikTok, which offers tools for users to add extra audio or to “duet” with other users and “stitch” in their content, imitations of Trump have appeared in conversation with Harry Potter or performing as Marilyn Monroe.

“TikTok is literally designed so media can be mashed together — this is a whole platform designed for manipulation and remixing,” said Panetta, Burgund’s teammate. “What does fact-checking look like on a platform like this?”

Many TikTok users use labels and hashtags to disclose that they are experimenting with filters and edits. Sometimes, manipulated media is called out in the comments section. But such efforts are often overlooked in the TikTok speed-scroll.

Last year, the FBI warned that “malicious actors almost certainly will leverage synthetic content for cyber and foreign influence operations” through this fall. Media manipulation has already been weaponized abroad this year in the Russian invasion of Ukraine and in the Brazilian presidential election.

“We shouldn’t be playing Whac-a-Mole with every individual piece of content, because it feels like we’re playing a losing game, and there are much bigger battles to fight,” said Claire Wardle, a co-director of the Information Futures Lab at Brown University. “But this stuff is really dangerous, even though it feels like a fact checker or reverse image search would debunk it in two seconds. It’s fundamentally feeding into this constant drip, drip, drip of stuff that’s reinforcing your worldview.”

This article originally appeared in The New York Times.
