AI S'pore launches $700,000 contest to combat deepfakes

The drive to distinguish deepfakes from genuine content in audiovisual media has received a boost, with a $700,000 international competition organised by AI Singapore, a national artificial intelligence (AI) programme under the National Research Foundation.

The five-month-long Trusted Media Challenge aims to encourage AI enthusiasts and researchers around the world to design and test models and solutions that can detect modified audio and video, AI Singapore said in a statement yesterday.

By incentivising the involvement of international contributors and sourcing innovative ideas globally, the competition will also strengthen Singapore’s position as a global AI hub, it added.

The challenge is being conducted in partnership with news media outlets The Straits Times and Mediacorp’s CNA, which have together provided about 800 real video clips, including news reports and interviews.

Participants will be given access to data sets that include about 4,000 real clips and 8,000 fake ones to train and test their AI models on.

They will then need to build AI models that can estimate the probability that a given video is fake.
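To make the task concrete, a minimal sketch of such a model is shown below. It assumes PyTorch is available and uses an illustrative frame-level classifier whose per-frame scores are averaged into a clip-level fake probability; the architecture, frame-sampling scheme and all names are hypothetical and are not drawn from the challenge materials.

```python
# Minimal sketch (not part of the challenge's official tooling) of a
# frame-level deepfake classifier that outputs the probability a clip is fake.
# Assumes PyTorch is installed; the layer sizes, frame count and names are
# illustrative choices, not the organisers' specification.
import torch
import torch.nn as nn

class FrameClassifier(nn.Module):
    """Scores individual video frames; the clip score is the mean over sampled frames."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)  # single logit: "fake" vs "real"

    def forward(self, frames):          # frames: (num_frames, 3, H, W)
        x = self.features(frames).flatten(1)
        return self.head(x).squeeze(1)  # one logit per frame

def fake_probability(model, frames):
    """Estimated probability that the clip is fake, averaged over its sampled frames."""
    with torch.no_grad():
        logits = model(frames)
        return torch.sigmoid(logits).mean().item()

# Example with random tensors standing in for 8 sampled frames of one clip.
model = FrameClassifier().eval()
dummy_frames = torch.rand(8, 3, 224, 224)
print(f"Estimated probability of being fake: {fake_probability(model, dummy_frames):.3f}")
```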

Anyone with interest and experience in AI technologies such as machine learning, deep learning, and computer and media forensics can take part.

Mr Warren Fernandez, editor-in-chief of Singapore Press Holdings’ English/Malay/Tamil Media Group and editor of The Straits Times, said fake news is polluting the media landscape, making it harder for audiences to sift out the truth as it proliferates.

“This undermines our society’s ability to engage in meaningful discussions on the big issues of the day. Media organisations have a role to play in helping people grapple with this, and should employ all the technologies and tools available to do so,” he added.

“This AI challenge is one way to do so, and we are happy to be able to support this effort.”

Mr Willy Tan, who leads AI strategy and solutions at Mediacorp’s News Group, said fake media technology, also known as deepfake technology, is becoming more sophisticated and more widely available, making it easier to create fake content that is difficult for the human eye to distinguish from genuine footage.

“Maliciously doctored content can lead to public misinformation and social fissures, if left unchecked,” he added.

“CNA is excited to partner in the Trusted Media Challenge, collaborating in continued efforts to combat this impending threat, in our mission to provide timely and accurate news to Singapore and the region.”

Professor Ho Teck Hua, executive chairman of AI Singapore, noted that verification tools to identify and counter deepfakes are being developed, but they are still in the nascent stages.

“We are in a race between those who want to use deepfake technology for nefarious purposes and those who want to create AI-based tools to counter them,” he said.

“With this as context, we designed the Trusted Media Challenge to provide a platform for AI experts to design and improve machine-learning models to help organisations and individuals reliably identify media that has been manipulated, in the near future.”

Participating teams or individuals can submit their code and models from now till Dec 15 at trustedmedia.aisingapore.org, which will automatically rank the submissions on a leaderboard.

More details and the training data sets can also be accessed through the site.

The challenge will be divided into two phases, with the first lasting about four months. The top teams from the first phase will move on to the second phase, and prizes will be awarded based on the final ranking.

The winner will receive $100,000 in prize money and a start-up grant of $300,000 to develop the solution further, using Singapore as a development base. Those who finish in second and third place will also win prizes and start-up grants.

The top three winners will be announced in January next year.
