The Deepfakes Analysis Unit (DAU) analysed a video that shows Amit Shah, India’s Home Minister, apparently promoting a supposed financial investment platform purportedly launched by Nirmala Sitharaman, India’s Finance Minister. After running the video through A.I. detection tools and getting our expert partners to weigh in, we concluded that separate video tracks were stitched together with A.I.-generated audio to fabricate the video.
A Facebook link to the three-minute-and-22-second video, which is mostly in Hindi with a small segment in English, was sent to the DAU tipline for assessment. The video, embedded in a post, was published on June 11, 2025 from an account with the display name of “Mizpah Ministries Inc. New Jerusalem Chu”, and a display picture with the words “sincere worship 2022” written in colourful font set against a black background.
The profile details mention that the account was created on July 17, 2015 with the name “The Joyous Voice Of Mizpah Church” and that it belongs to a “public figure”. The contact information indicates Monrovia, Liberia’s capital, as the location while specifying Vietnam as the primary location of the people who manage this account. So far, the video has garnered about six million views. We do not have any evidence to suggest that the video originated from the aforementioned account or any other.
The video has been packaged like a television news segment. It opens with Mr. Shah apparently speaking from the floor of the Indian Parliament as is evident from the seating arrangement and the people visible in the backdrop. The last 40 seconds of the video show a female anchor speaking to the camera from a studio-like setting.
Shah has been filmed in a medium close-up; his body language is animated and his gaze shifts in different directions. He seems to be holding a set of papers, which he alternates between his hands and refers to as he appears to be addressing people in front of him and those around. Bold, static text in English visible at the bottom of the video frame for the entire duration of Shah’s segment reads: “The queues at the banks have already begun - be the first!”.
A male voice in Hindi recorded in first person over Shah’s video track claims that a “digital platform” that is exclusively available to Indians can help them earn between “35,000 rupees to 3,50,000 rupees per month”. The same voice declares that this is not some “election promise” and reassures that it is not a “Ponzi scheme” but a “solution” that “can change the financial situation” of hundreds of thousands of Indian citizens. It promises that the income generation will be “automatic, passive, and permanent”.
The voice warns that the “platform” cannot onboard an unlimited number of people and that it will give preference to English speakers over Hindi speakers as “personal managers and customer support is available only in English”. It states that the platform is legal, transparent, and professionally developed and that Shah has tried it himself.
Ms. Sitharaman’s name is invoked by that voice as it claims that she greenlit the supposed initiative. The purported speaker’s official position is used as leverage to assure viewers of a money-back guarantee if the supposed investment amount of “21,500 rupees” does not yield the desired results within 30 days. The segment ends with the voice requesting the “members of the Parliament” not to oppose the initiative, and instead allow it to change the lives of Indian citizens.
The next segment opens with a split screen. In one window, a female anchor with a laptop in front of her is visible; a red sticker with the words “Parul University NAAC A++” can be seen on its lid. The other window shows random visuals of people queuing outside ATMs, withdrawing cash, and counting it. As the anchor appears to speak, the visuals play simultaneously, though without any accompanying audio.
The words “big story” are presented vertically in bold between the two windows while a text graphic at the top of the video frame bears the same message as is visible in the previous segment: “The queues at the banks have already begun - be the first!” The top right corner of the video frame carries a logo resembling that of India Today, an English news channel in India.
A female voice, recorded in English over the anchor’s video track, announces that some “law regarding the Quantum AI platform” has supposedly been “officially approved and recognised at the national level”. Calling the platform “legal and accessible” for “all citizens of India”, the same voice urges viewers to register on some “official website” and thereafter “expect a call from an English speaking manager.”
The same investment amount of “21,500 rupees” and time period of “30 days” heard in the previous segment are repeated by the female voice. The video ends with the voice conveying that “slots are limited” and mentioning that people across India are “already lining up at banks” and “everyone is rushing to make their deposit.”
The anchor’s segment has better video quality than Shah’s and, unlike his, has no jump cuts. Her lip-sync is imperfect, and her mouth seems unnaturally open throughout, without a natural break, as she appears to talk. Her upper set of teeth is not visible in most of the frames.
As Shah appears to talk his teeth are barely visible and the inside of his mouth looks unnaturally dark. In some frames, his lower set of teeth appear to blend with the lips. The audio track and his lip movements, which seem unnaturally fast, are imperfectly synced, especially in frames where his face turns to the side. In a few frames, his profile gets blurry and distorts the face of the person seated behind him.
On comparing the voices attributed to Shah and the news anchor with those heard in their recorded videos available online, similarities can be identified. However, Shah’s signature voice modulation is missing from the audio track, and the Hindi used there is very chaste rather than conversational. The pace of the female voice sounds slower than the pace of speech heard in real voice samples of the anchor. Both audio tracks lack the intonation and pauses characteristic of human speech; the overall delivery of each voice sounds scripted.
We undertook a reverse image search using screenshots from the video and traced Shah’s visuals to this video published on Dec. 6, 2023 from the official YouTube channel of ANI News, an Indian news agency. The female anchor’s clips were traced to this video published on May 26, 2025 from the official YouTube channel of India Today.
The clothing and body language of Shah and the female anchor in the video we reviewed and the videos we were able to locate are identical. However, their respective backgrounds and foregrounds look slightly different, as the visuals used in the manipulated version are more zoomed-in, with portions of the backdrop and foreground cropped out. The footage in the source videos features other subjects as well, but they are not part of the doctored video.
The language spoken by Shah and the anchor is Hindi and English, respectively, in the original videos as is the case in the doctored video. However, the content of the audio tracks is totally different; there is no mention of any financial scheme in the original audio tracks.
In Shah’s original video, text graphics with Shah’s name and title are visible in the lower part of the video frame and below that a news ticker continuously scrolls information. The official logo of Sansad TV, an Indian channel that broadcasts Indian Parliamentary proceedings, can be spotted in the top left corner of the video frame. In the anchor’s original video, the India Today logo is prominent at the top-right and bottom-right corner of the video frame.
The packaging, tone, and messaging, including the suggested “investment amount” mentioned in this video, are similar to those repeatedly used in the slew of financial scam videos that we have debunked.
To discern the extent of A.I. manipulation in the video under review, we put it through A.I. detection tools.
The voice tool of Hiya, a company that specialises in artificial intelligence solutions for voice safety, indicated that there is a 72 percent probability of the audio track in the video having been generated or modified using A.I.
Hive AI’s deepfake video detection tool highlighted several markers of A.I. manipulation in both Shah’s and the anchor’s segments. Their audio detection tool indicated that the bulk of the audio track in the video is “A.I.-generated”.
We also ran the audio track from the video through Deepfake-O-Meter, an open platform developed by the Media Forensics Lab (MDFL) at the University at Buffalo (UB) for detection of A.I.-generated image, video, and audio. The tool provides a selection of classifiers that can be used to analyse media files.
We chose six audio detectors, of which two indicated that it was highly likely that the audio track in the video is A.I.-generated. AASIST (2021) and RawNet2 (2021) are designed to detect audio impersonations, voice clones, replay attacks, and other forms of audio spoofs. The Linear Frequency Cepstral Coefficient (LFCC) - Light Convolutional Neural Network (LCNN) 2021 model classifies genuine versus synthetic speech to detect audio deepfakes.
RawNet3 (2023) allows for nuanced detection of synthetic audio while RawNet2-Vocoder (2025) is useful in identifying synthesised speech. Whisper (2023) is designed to analyse synthetic human voices.
To check for elements of A.I. in the audio, we ran it through the A.I. speech classifier of ElevenLabs, a company specialising in voice A.I. research and deployment. The results indicated that it was “very likely” that the audio track used in the video was generated using their platform.
We reached out to ElevenLabs for a comment on the analysis. They told us that, based on the technical signals they analysed, they were able to confirm that the audio track in the video is synthetic or A.I.-generated. They added that, to hold them accountable, they have taken action against the individuals who misused their tools.
For expert analysis on the video we shared it with our detection partner ConTrailsAI, a Bangalore-based startup with its own A.I. tools for detection of audio and video spoofs.
The team ran the video through audio as well as video detection models; the results indicated high confidence of A.I. generation and A.I. manipulation in the audio and video tracks, respectively.
In their report, they noted that in Shah’s segment there were signs of poor lip-sync, and that his lip region is blurrier than the rest of his face and looks animated. Referring to the audio track attributed to Shah, they observed that the voice sounds slightly like his; however, the tonality and pacing are unnatural.
For the frames featuring the anchor, they added that those too show clear signs of A.I. manipulation in the lip-sync, and that the audio was created using a voice cloning or voice conversion method.
To get further analysis on the video, we escalated it to the Global Deepfake Detection System (GODDS), a detection system set up by Northwestern University’s Security & AI Lab (NSAIL). The video was analysed by two human analysts and run through 22 deepfake detection algorithms for video analysis; of those, 10 gave a higher probability of the video being fake and the remaining 12 gave a lower probability.
In their report, they pointed to several indicators suggesting that the video may have been artificially manipulated. The team noted that at various instances in the video, the bottom half of Shah’s face blurs, making it difficult to distinguish the mouth features. They added that the blurring could be an editing tactic used to disguise media manipulations. They further observed, as we have above, that his audio and mouth movements align imperfectly.
They mentioned that the anchor’s mouth and nose frequently change shape while she appears to be speaking. As per their observation, her lip colour, which is dark, nearly disappears when her lips purse together.
Based on our findings and analyses from experts, we can conclude that separate video tracks of Shah and the anchor have been stitched together and used with synthetic audio to fabricate the video. This is yet another attempt to promote a dubious financial investment platform by falsely linking it to the government in a bid to scam the public.
(Written by Debraj Sarkar and Debopriya Bhattacharya, edited by Pamposh Raina.)
Kindly Note: The manipulated audio/video files that we receive on our tipline are not embedded in our assessment reports because we do not intend to contribute to their virality.