Video of Anjana Om Kashyap Promoting Nandan Nilekani’s Trading App Is Fake

June 24, 2024
Manipulated Media/Altered Media
Screengrabs of the video sent to the DAU tipline

The Deepfakes Analysis Unit (DAU) analysed a video that purportedly features an anchor from a leading Hindi television news channel presenting a story about a trading app being promoted by Nandan Nilekani, co-founder and non-executive chairman of the tech giant Infosys. After running the video through A.I. detection tools and escalating it to an expert, we concluded that the video is a patchwork of unrelated clips and A.I.-generated audio. 

The one-minute-45-second video, in English, was sent to the DAU tipline embedded in an Instagram link. It opens with visuals of Anjana Om Kashyap, a television news anchor with a Hindi news channel, the logo of which is visible on the laptop placed in front of her. A female voice accompanying her visuals introduces the app. A male voice then plays over visuals of Mr. Nilekani, apparently speaking to the camera, stitched together with random footage, including computers, people working together, and a logo of sorts resembling a lock.

Throughout the video, whenever the focus is on Kashyap or Nilekani, their lip movements are out of sync with the voice accompanying their visuals. Their lips vibrate oddly, and in Nilekani's close-up in particular, an unnatural extra set of teeth appears when his lips move to speak.

Both voices have a strange non-Indian accent, and each delivery is flat, with no change in pitch; none of these characteristics resembles the style in which Kashyap or Nilekani are known to speak.

We ran a reverse image search using screenshots of the separate clips that feature in the video. Nilekani’s close-up led us to this video, and others like it, published on his YouTube channel on Jan. 14 and Jan. 20, 2014, respectively. Nilekani’s backdrop, clothing, and posture are identical in the original and the manipulated video, but the accompanying audio differs in each. We were unable to trace the original clip featuring Kashyap that was used in the manipulated video.

To check whether the voice in the video that we were analysing is synthetic or not, we ran the video through a series of A.I. detection tools.

The voice detection tool of a company that specialises in artificial intelligence solutions for voice safety returned results indicating that the probability of synthetic speech in the audio was very high, with only a 0.09 percent probability of the audio being real.

Screenshot of the analysis from the company’s audio detection tool

Hive AI’s deepfake video detection tool recognised portions of A.I. manipulation for both speakers. Its audio detection tool also identified strong indicators of A.I.-driven audio manipulation throughout the length of the video.

Screenshot of Kashyap’s analysis from Hive AI’s deepfake video detection tool
Screenshot of Nilekani’s analysis from Hive AI’s deepfake video detection tool

We also used TrueMedia’s deepfake detector, which categorised the video overall as “highly suspicious”, calling attention to a high probability of A.I. use in the production of this video. As the tool broke the analysis down into subcategories, it gave a 100 percent confidence score to “AI generated audio detection”, a 95 percent score to “advanced foundational features”, and an 89 percent score to “audio analysis”, all indicators that the audio was produced or synthesised using A.I.

Screenshot of the audio analysis from TrueMedia’s deepfake detection tool

The tool also gave an 85 percent score to the subcategory of “face manipulation,” which is an indicator of A.I. manipulation in the visual elements as well.

Screenshot of the analysis from TrueMedia’s deepfake detection tool

We also ran the video through the A.I. speech classifier of ElevenLabs, a company specialising in voice A.I. research and deployment, to further analyse the audio. The results indicated a very high probability that the audio track of the video was generated using their software.

Screenshot of the analysis from A.I. speech classifier of ElevenLabs

We reached out to ElevenLabs for further analysis. They told the DAU that they had analysed the audio and could confirm that it is A.I.-generated. They were also able to identify the user who violated their “terms of use” while generating the synthetic audio with their software.

They noted that they would use the audio escalated to them by the DAU to further train their in-house automated moderation system, to better catch similar content in the future.

On the basis of our investigation and the confirmation from ElevenLabs, we can conclude that the video featuring Kashyap and Nilekani promoting a trading app is fabricated; it was produced by laying A.I.-generated voices over unrelated visuals.

(Written by Debraj Sarkar and Areeba Falak, and edited by Pamposh Raina.)

Kindly Note: The manipulated audio/video files that we receive on our tipline are not embedded in our assessment reports because we do not intend to contribute to their virality.

You can read below the fact-checks related to this piece published by our partners:

News anchor’s video with entrepreneur Nandan Nilekani promoting a trading app is a deepfake