First ‘certified’ deepfake warns viewers not to trust everything they see online

The world’s first certified and transparent deepfake video has been created in a new bid to counter disinformation on the internet.

The video, of the AI expert Nina Schick warning about the dangers of the technology, contains the first industry watermark declaring it is a deepfake.

Deepfake videos are generated by artificial intelligence and can create lifelike representations of people, sometimes without their consent.

The technology is advancing at pace: a deepfake video of President Zelensky purporting to surrender to Russia went viral in 2022, and a pro-Chinese news channel featuring a pair of computer-generated presenters was discovered this year in the first known case of a state-aligned operation using deepfakes.

Concern over the spread of the technology has increased as new tools make deepfakes easier to produce, with consumers able to create videos from text commands alone.

Deepfake pictures are also becoming more sophisticated, with fabricated images of Donald Trump being arrested and of the Pope in a puffer jacket circulating widely last week. However, there is greater concern over the impact of fake videos, as the public is more inclined to believe them.

In the certified video, a lifelike representation of Schick made with her consent and using her voice asks: “What if we can no longer rely on the authenticity of what we see and hear? We are at the dawn of artificial intelligence and already the lines of what is real and fiction are becoming blurred.”

The video carries a cryptographic signature stating that it was created with AI, who made it and when. The signature remains attached to the video for its lifetime, and any subsequent edits are made transparent. Viewers can see it in the top right of the screen under an “i” symbol.
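A minimal sketch of how such a provenance signature can bind metadata to a video file: hash the content, record the claim (AI-generated, by whom, when), and sign it so any tampering with the video or its metadata is detectable. This is an illustrative simplification, not the actual Truepic or C2PA implementation — the real standard uses X.509 certificates and COSE signatures rather than the shared-secret HMAC assumed here.

```python
import hashlib
import hmac
import json

# Hypothetical signing key for illustration only; C2PA uses
# certificate-based signatures, not a shared secret.
SIGNING_KEY = b"demo-signing-key"

def make_manifest(video_bytes: bytes, creator: str, created: str) -> dict:
    """Build a provenance manifest binding metadata to the video's hash."""
    claim = {
        "content_hash": hashlib.sha256(video_bytes).hexdigest(),
        "ai_generated": True,
        "created_by": creator,
        "created_at": created,
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    signature = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "signature": signature}

def verify(video_bytes: bytes, manifest: dict) -> bool:
    """Check the signature, then check the video matches the signed hash."""
    payload = json.dumps(manifest["claim"], sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, manifest["signature"]):
        return False  # the claim metadata was altered after signing
    current_hash = hashlib.sha256(video_bytes).hexdigest()
    return manifest["claim"]["content_hash"] == current_hash
```

Because the signature covers a hash of the content itself, editing the video invalidates verification, which is what makes the edit history "transparent" to a compliant viewer.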

The video was created by Truepic, a digital authenticity company, and an AI studio used by Hollywood and the advertising industry.

Schick is hoping the video will “jolt” the tech and media industries into adopting the signature more widely.

“If you can’t trust digital media or any digital content then I think that’s an existential threat, not only to politicians or politics, but to business to society to everyone, because everyone has to exist in this ecosystem, we don’t really have a choice about it anymore,” she said.

“So part of this campaign is to show, to jolt, to force all the companies that are creating the models and creating AI-generated content, to start signing their content, to put transparency at the heart of generative AI. [It’s] also to start pushing platforms to adopt the open standard to ensure that we can see the authenticity of all content.”

Europe is looking to bring in tighter laws to regulate the technology, and its draft AI Act stipulates that synthetic media should be labelled. The UK is considering a more light-touch approach to regulation.

John Penrose, the Conservative MP who is campaigning for the Online Safety Bill to tackle disinformation, welcomed the push for AI watermarking.

“The Russians have already deepfaked a video of Zelensky urging Ukrainian troops to surrender, and the potential for disinformation undermining our elections and destroying trust in everything from the NHS to news reports and our justice system will only get worse,” he said.

“Telling real pictures and videos from ever-improving fakes is essential, which is why being able to check where any image comes from, whether it has been altered and by who is vital. Otherwise we’ll never know if someone is trying to spin us a yarn or not.”

Henry Ajder, an expert in generative AI, said widespread watermarking was the right strategy, adding: “Detection [of malicious deepfakes] is not going to be a viable approach at scale, whereas I feel that these approaches are viable at scale, or at least certainly much more. They have some challenges around implementation.

“But I think this is the best approach that’s available to us right now for verifying all content and providing a new standard, a new benchmark for how we understand trust in media.”

The AI watermarking technology has been developed to a new standard created by the Coalition for Content Provenance and Authenticity (C2PA), an industry body aiming to address the issue of digital transparency. Its steering members include the BBC, Adobe, Intel, Sony, Microsoft and Truepic.

Last week Adobe, which owns Photoshop, released Firefly, an AI image generator that tags its output with the C2PA information. Adobe is an investor in Truepic, to which Schick is an adviser.