Video Promoting Passive Earning Program From Israel’s Central Bank Is Fake

June 24, 2024
Screengrabs of the video sent to the DAU tipline

The Deepfakes Analysis Unit (DAU) analysed a video circulating as a purported news story about a passive income scheme supposedly introduced by Israel’s central bank. After examining the video using A.I. detection tools and escalating it to our expert partners for assessment, we concluded that the video was created by stitching together disparate video clips with an A.I.-generated audio track.

The 37-second video in English, sent to the DAU tipline for assessment, opens with an anchor in a studio setting apparently introducing a get-rich-quick scheme. The narration in a male voice continues over a string of visuals, including a logo that resembles the official symbol of the Bank of Israel, the country’s central bank; graphics of stock trading from various exchanges and of the potential returns from the supposed scheme; and visuals of a world heritage site in Haifa, Israel. Captions in the lower part of the frame provide additional context throughout the video.

The only face visible in the video is that of the anchor. During the seven seconds when he is apparently talking, his lips seem to somewhat synchronise with the spoken words. However, his delivery is flat and does not match the expressions on his face.

There is no logo of a news channel in the video even though it is packaged as a news story. The screen behind the anchor displays the logo of the European Central Bank, the central bank of the eurozone; however, only the word “European” and the first three letters of “Central” are clearly visible. This kind of editing is not standard practice in broadcasting.

We undertook a reverse image search using screenshots of the anchor’s face and identified him as Rob Watts of Deutsche Welle, a German public broadcaster. To locate the exact source of his visuals, we ran a keyword search on YouTube combining words visible on screen when he is in focus with a few related terms, such as “Eurosystem” and “DW News”. We tracked down the original video, which was published on the official YouTube channel of DW News on March 17, 2023.
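The keyword search described above was done manually. Purely as an illustration of how such a search could be scripted, the minimal sketch below queries the public YouTube Data API v3 with a combined query string; the API key, query text, and result handling are placeholder assumptions and not part of the DAU’s actual workflow.

```python
import requests

# Illustrative only: a scripted version of the manual keyword search described above.
# The API key below is a placeholder; this is not part of the DAU's workflow.
API_KEY = "YOUR_YOUTUBE_DATA_API_KEY"
SEARCH_URL = "https://www.googleapis.com/youtube/v3/search"

def search_youtube(query, max_results=10):
    """Return (publishedAt, videoId, title) tuples for a keyword query."""
    params = {
        "part": "snippet",
        "q": query,
        "type": "video",
        "maxResults": max_results,
        "key": API_KEY,
    }
    response = requests.get(SEARCH_URL, params=params, timeout=30)
    response.raise_for_status()
    items = response.json().get("items", [])
    return [
        (it["snippet"]["publishedAt"], it["id"]["videoId"], it["snippet"]["title"])
        for it in items
    ]

# Combine on-screen words with related terms, as described in the report.
for published, video_id, title in search_youtube("Eurosystem DW News"):
    print(published, video_id, title)
```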

In both the original and the manipulated video, Mr. Watts’s clothing and body language are identical; however, the backdrop has been cropped in the manipulated video. The audio tracks of the two videos also differ. None of the other visuals in the manipulated video could be found in the original.

To determine whether A.I. had been used to manipulate the video, we ran it through A.I. detection tools.

The voice detection tool of Loccus.ai, a company that specialises in artificial intelligence solutions for voice safety, returned results indicating a 2.99 percent probability that the audio is real, which suggests that the speech in the video is very likely synthetic.

Screenshot of the analysis from Loccus.ai's audio detection tool

Hive AI’s deepfake video detection tool detected A.I. manipulation in portions of the video featuring the anchor, while its audio tool picked up indicators of A.I.-generated audio in some other portions of the video as well.

Screenshot of the analysis from Hive AI's deepfake video detection tool

To better understand the A.I. manipulation in the video, we ran it through TrueMedia’s deepfake detector, which categorised the video overall as “highly suspicious”. The tool gave a 100 percent confidence score to the “AI-generated audio detection” subcategory, and confidence scores of more than 90 percent to both “face manipulation” and “generative convolutional vision transformer”; the latter analyses the video for visual artefacts and uses latent data distributions to detect A.I. manipulation.

Screenshot of the overall analysis from TrueMedia's deepfake detection tool
Screenshot of the analysis from TrueMedia's deepfake detection tool

To further analyse the audio track in the video received on the tipline, we used the A.I. speech classifier of ElevenLabs, a company specialising in voice A.I. research and deployment. The tool indicated a 58 percent probability that the audio had been generated using ElevenLabs’ software.

Screenshot of the analysis from the A.I. speech classifier of ElevenLabs

After analysing the audio track, ElevenLabs confirmed that it is A.I.-generated. They told the DAU that they had identified the user who broke their “terms of use” while generating the synthetic audio with their software. They noted that they would use the audio escalated to them by the DAU to further train their in-house automated moderation system so that it can better capture and block the generation of similar content in the future.

To get another expert view, we escalated the video to a lab run by the team of Dr. Hany Farid, a professor of computer science at the University of California, Berkeley, who specialises in digital forensics and A.I. detection. They noted that the video is a lip-sync deepfake. They also said that there were inconsistencies near the mouth in the portions of the video featuring the anchor.

They added that they ran the video through a technique in which they “lip read” the video and compare the result with the transcript; on that basis they were able to determine that the lip-sync was considerably off.
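The lab did not share its implementation, but the underlying idea can be illustrated with a simple sketch: comparing a lip-read transcript against the audio transcript using word-level edit distance, where a high error rate would flag a poor lip-sync. The transcripts below are invented examples for illustration, not taken from the video.

```python
def word_error_rate(reference, hypothesis):
    """Word-level edit distance between two transcripts, normalised by reference length."""
    ref, hyp = reference.lower().split(), hypothesis.lower().split()
    # Standard dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# Hypothetical example transcripts, not the actual ones from the video.
audio_transcript = "the central bank has launched a new passive income programme"
lip_read_transcript = "the eurosystem provides banking supervision across the euro area"

wer = word_error_rate(audio_transcript, lip_read_transcript)
print(f"word error rate: {wer:.2f}")  # a value near 1.0 suggests the lip-sync is far off
```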

On the basis of our findings and the analyses from experts, we have assessed that the fake video was produced by combining unrelated video clips with an A.I.-generated audio track.

(Written by Debraj Sarkar and Debopriya Bhattacharya, and edited by Pamposh Raina.)

Kindly Note: The manipulated audio/video files that we receive on our tipline are not embedded in our assessment reports because we do not intend to contribute to their virality.