The Deepfakes Analysis Unit (DAU) analysed a video that apparently shows Indian Finance Minister Nirmala Sitharaman, and Sweta Singh, a television journalist, promoting a financial investment project. On analysing the video, we noticed that most of the words in the audio track were identical to those used in another financial scam video that we recently debunked. After getting experts to weigh in, we were able to conclude that the video was manipulated using A.I.-generated audio.
A Facebook link to the three-minute-and-19-second video in English was sent to the DAU tipline for assessment. The video, embedded in a post, was published on April 2, 2025 and has since garnered more than 9,000 views. The audio script used in this video is similar to the one used in the manipulated video apparently featuring N.R. Narayana Murthy, co-founder of tech giant Infosys, and Shereen Bhan, managing editor of the CNBC TV18 business news channel.
The name on the Facebook profile of the video uploader is “Hilbert Towne” and their display picture, a photo of a woman, is the exact same one that was used by the account that had uploaded the doctored Bhan-Murthy video on Facebook.
Another similarity between the profile details of the two video uploaders is that each indicates that the account belongs to an “advertising agency” in California, U.S.A., although their street addresses are different. We do not know if the two accounts are linked, nor do we have evidence to suggest whether the suspicious video originated from the aforementioned account or another one.
The video has been packaged like a news segment. It opens with two insets of unequal size placed side-by-side in the video frame. Ms. Sitharaman is visible in the narrower inset and Ms. Singh is visible in the wider one. Bold, static text graphics in English can be seen at the bottom of the video frame throughout; the first line of the text reads: “My Financial Project For Indians”. The text that follows mentions that, “every citizen of India who invests 21,000 rupees” in the supposed “financial project will receive 1,500,000 rupees in the first month”.
The video tracks of Singh and Sitharaman play out in their respective insets; however, accompanying audio can only be heard with Singh’s visuals, even though Sitharaman’s lips are visibly moving. The female voice recorded over Singh’s video track introduces the project and credits Sitharaman for launching the purported project.
The following segment, which continues till the end, features only Sitharaman. The framing, backdrop, clothing, and body language of Sitharaman in this clip are identical to those in another financial scam video debunked by the DAU in February; the audio, though, is different.
Based on our previous investigation we were able to establish that the video track for Sitharaman’s segment was lifted from this video published on Feb. 2, 2025 from the official YouTube channel of India Today, an India-based English news channel.
The original video, also in English, does not mention any financial project. It does not carry text graphics. The India Today logo is fully visible in the top and bottom right corners of the video frame. However, in the manipulated segment the logo at the top is mostly cropped out and the one at the bottom has been hidden by text graphics.
While it was clear that the female voice used with Sitharaman's clip in the doctored video is not authentic, the visual oddities in that clip are additional signs of manipulation.
The lip movements and the accompanying audio are out of sync. The overall video quality is poor with dropped frames evident at various instances. The area below her nose all the way down to her chin is particularly pixellated with vertical jagged lines visible in some frames.
The movements of her mouth and chin do not always match those of the rest of her face. Her lips seem to quiver, move unnaturally fast, and change shape in some frames. Her teeth appear blurry. The upper set is barely visible, while the lower set sticks out as a shiny white patch in some frames and blends in with the lower lip in other frames.
We undertook a reverse image search using screenshots from the short clip of Singh seen at the beginning of the doctored video to locate its origin. We traced it to this video published on April 9, 2021 from the YouTube channel of “exchange4media Group”, which publishes content related to the media and advertising industry in India. This video is in Hindi and does not mention any financial project.
The clothing and body language of Singh in the video we traced and the doctored video are identical; however, the backdrop in the two videos appears slightly different. Zoomed-in frames have been used in the manipulated video, resulting in a portion of the background and the logo of “exchange4media Group” being cropped out. There are no text graphics in the original video, except for Singh’s name, which appears for a few seconds.
It seems that a few seconds of footage from the original video has been looped to create the manipulated clip as her head and hand movements repeat in the exact same way at least once in that clip. Her lip movements are completely out of sync with the audio. The shape of her chin seems to change throughout; her lip movements, like Sitharaman’s, seem unnaturally fast, and her teeth appear blurry.
We were able to establish that two unrelated clips of Singh and Sitharaman were spliced with fake audio to create the manipulated video. On comparing the voices attributed to Sitharaman and Singh in their respective segments with their natural voices heard in recorded videos, the differences became quite evident.
The voice with Sitharaman’s visuals sounds somewhat like hers but has a peculiar accent which does not match her natural accent. The voice in Singh’s clip has a foreign accent and does not sound like her at all. Both the female voices are devoid of natural intonation and characteristic pauses in speech; the overall delivery for both sounds robotic and hastened.
To check for elements of A.I. in the audio we ran it through the A.I. speech classifier of ElevenLabs, a company specialising in voice A.I. research and deployment. The results that returned indicated that it was “very likely” that the audio track used in the video was generated using their platform.
We reached out to ElevenLabs for a comment on the analysis. They told us that based on the technical signals they analysed, they were able to confirm that the audio track in the video was A.I.-generated. They added that, to hold them accountable, they have taken action against the individuals who misused their tools.
To get another expert analysis on the video, we escalated it to the Global Deepfake Detection System (GODDS), a detection system set up by Northwestern University’s Security & AI Lab (NSAIL).
The video was analysed by two human analysts and run through 22 deepfake detection algorithms. Of those, 19 predictive models gave a lower probability of the video being fake, while the remaining three indicated a higher probability of the video being fake.
In their report, the team pointed to several signs of A.I. manipulation that they observed in the video. They noted that the mouth movements of both Singh and Sitharaman are out of sync with their speech.
They pointed out that Sitharaman’s voice sounds monotone and lacks natural tonal inflections. They added that her voice has an unnaturally consistent cadence where there should likely be varied pacing.
They observed that there is a seemingly unnatural "static" sound as background noise, which can only be heard when Sitharaman is apparently speaking. They also highlighted a “glitch” in the video at a specific time code when Sitharaman’s head can be seen shifting positions, which gives the appearance that different clips have been edited together.
They pointed to a specific instance in the video where her blinking seems unnatural with one eyelid remaining folded and not opening fully after blinking, which according to them, is an indication of possible manipulation. They further stated that her nose and mouth frequently change shape as she appears to speak.
In the overall verdict, the GODDS team concluded that the video is likely fake and generated with artificial intelligence.
The last video that we debunked, which we also point to at the beginning, bears a stark similarity to the manipulated video being analysed through this report, not only in terms of the audio script but also in terms of the text graphics and overall packaging. The only difference is the subjects targeted. We have shared below some examples to draw out the similarities.
We have consistently been highlighting the patterns we notice in the scam videos that we debunk through our reports. We want to point to the copycat nature of such manipulated content only to inform our readers and not to give oxygen to bad actors behind such harmful and misleading content.


Based on our findings and analysis from experts, we can conclude that original footage featuring Sitharaman was used with an A.I.-generated audio track to fabricate yet another financial scam video. The reuse of similar scripts across scam videos points to a possible attempt by bad actors to expedite the process of generating such fake videos using synthetic voice.
(Written by Debraj Sarkar and Rahul Adhikari, edited by Pamposh Raina.)
Kindly Note: The manipulated audio/video files that we receive on our tipline are not embedded in our assessment reports because we do not intend to contribute to their virality.