Screengrabs of the clip forwarded to the DAU by a fact-checking partner

The Deepfakes Analysis Unit (DAU) analysed a video that appears to be a news story from a leading Indian news channel about a physical attack on Dr. Devi Prasad Shetty, a renowned cardiac surgeon; the supposed assailant is portrayed as a disgruntled representative of a pharmaceutical company. After examining the video using A.I. detection tools and escalating it to our expert partners for assessment, we concluded that the video was created by patching together disparate video clips and overlaying A.I.-generated audio tracks on them.

The nearly two-minute-long video was sent to the DAU by a fact-checking partner for verification. The entire clip is in Hindi and appears to carry a logo resembling that of a leading Hindi news channel. The video opens with an anchor setting the context, followed by footage of a brawl captured on what appears to be a television show, stitched together with visuals from an interview with Dr. Shetty and clips of him at a hospital and at public events; the video ends with generic footage of patients receiving treatment in a hospital setting.

In the portions of the video that feature close-ups of Shetty or the anchor, the audio is not synchronised with their lip movements at all. The audio track overlaid on Shetty’s interview makes it sound as if he is admitting to being assaulted, accusing pharmaceutical companies of selling placebos for hypertension and heart-related diseases, and claiming to have an alternative cure for those health conditions. The audio sounds robotic, with no variation in tone or pitch.

We ran a reverse image search using screenshots of the separate clips that feature in the video. A snippet between the 5-second and 22-second mark that supposedly features the assault on the doctor is actually a clip from a video posted in January 2017 on the official YouTube channel of News Nation, a Hindi television channel; it does not show Shetty in any frame.
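For readers interested in the mechanics, the sketch below illustrates one common way of pulling still frames from a video at regular intervals so they can be submitted to a reverse image search. The file name and interval are illustrative assumptions; this is not a description of the DAU’s internal tooling.

```python
# Illustrative sketch only: extract still frames at fixed intervals
# so they can be fed into a reverse image search.
# The file name and interval are hypothetical.
import cv2  # pip install opencv-python

VIDEO_PATH = "suspect_clip.mp4"   # hypothetical file name
INTERVAL_SECONDS = 5              # grab one frame every 5 seconds

cap = cv2.VideoCapture(VIDEO_PATH)
fps = cap.get(cv2.CAP_PROP_FPS) or 25  # fall back if FPS metadata is missing

frame_index = 0
saved = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Save one screenshot per interval.
    if frame_index % int(fps * INTERVAL_SECONDS) == 0:
        cv2.imwrite(f"frame_{saved:03d}.png", frame)
        saved += 1
    frame_index += 1

cap.release()
print(f"Saved {saved} frames for reverse image search")
```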

The interview with Shetty between the 40-second and 55-second mark in the video has been lifted from a longer interview published on the NDTV Profit website on November 24, 2022. The footage that spotlights his work was traced to a YouTube video from January 2021, and yet another segment from the manipulated video could be seen in this report on Indian hospitals by Al Jazeera.

The exact clip featuring the anchor could not be traced online; it was perhaps taken from a television broadcast whose recording was not posted online. However, the anchor featured in the video is a news presenter with a Hindi news channel.

Our investigation helped us establish that the narration in the video had nothing in common with the accompanying visuals. To discern if the fabricated audio was produced using generative A.I., we put the video through A.I. detection tools.

The voice detection tool of Loccus.ai, a company that specialises in artificial intelligence solutions for voice safety, returned results indicating that the probability of the audio track being real was 17.33 percent, suggesting that a significant portion of the speech in the video is synthetic.

Screenshot of the analysis from Loccus.ai's audio detection tool

TrueMedia’s deepfake detector returned an overall analysis of “highly suspicious”. It gave a 100 percent confidence score to the audio being A.I.-generated.

Screenshot of the audio analysis from TrueMedia's deepfake detection tool
Screenshot of the analysis from TrueMedia's deepfake detection tool

We also used Hive AI’s audio detection tool to get further clarity. It analysed the audio from the video in 20-second fragments and gave high probability scores to the audio being A.I.-generated.
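As an illustration of how an audio track can be broken into fixed-length fragments of the kind such detectors score individually, the sketch below splits a clip into 20-second chunks. The file names are hypothetical, and this does not use Hive AI’s actual API.

```python
# Illustrative sketch only: split a video's audio track into
# 20-second fragments, each of which could then be submitted to
# an audio deepfake detector that returns a score per fragment.
from pydub import AudioSegment  # pip install pydub (requires ffmpeg)

CHUNK_MS = 20 * 1000  # 20-second fragments

audio = AudioSegment.from_file("suspect_clip.mp4")  # hypothetical file name
for i, start in enumerate(range(0, len(audio), CHUNK_MS)):
    chunk = audio[start:start + CHUNK_MS]
    chunk.export(f"chunk_{i:02d}.wav", format="wav")
```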

To get more insight into the nature of manipulation in this video, we sought the expertise of our partners.

ConTrails AI, a Bangalore-based startup, used its audio-spoof detection A.I. model to check the authenticity of the audio track in the video. They ran 40-second chunks of the audio through the tool, which gave a very high confidence score to the audio being fake. They were confident that A.I. was used to produce the voices attributed to both the news anchor and Shetty in the video.

To get another expert view on the video under investigation, we escalated it to a lab run by the team of Dr. Hany Farid, a professor of computer science at the University of California, Berkeley, who specialises in digital forensics and A.I. detection. They noted that very limited effort had been made to match the audio with the visuals in this video. They added that the entire audio track appears to be A.I.-generated.

On the basis of our findings and analyses from experts, we can conclude that the audio being attributed to the news anchor and Shetty in this video is not real but A.I.-generated.

(Written by Debraj Sarkar and edited by Pamposh Raina.)

Kindly Note: The manipulated audio/video files that we receive on our tipline are not embedded in our assessment reports because we do not intend to contribute to their virality.

You can read below the fact-checks related to this piece published by our partners:

Fact Check: This Aaj Tak show about doctor being attacked on air is a DEEPFAKE

Dr Devi Shetty's AI Voice Clone Peddled To Claim He Was Attacked On Air