Meet Our Partners

June 24, 2024

It has been almost six weeks since we launched the Deepfakes Analysis Unit, or the DAU as we sometimes like to call it. We have been overwhelmed by the queries received on the DAU tipline. We are grateful to everyone who took the time to share an audio or video with us for analysis, and to those reading who haven't tried it yet, we encourage you to do so.

Our goal is to inform and educate anyone engaging with our tipline by sharing assessment reports and partner fact-checks; sometimes a simple bot response does the job, telling the sender whether a video or an audio they have shared was produced using generative A.I. or not.

The DAU, which is part of the Misinformation Combat Alliance (MCA), a cross-industry body of 16 members, is a resource for the public. Our endeavour is to mainstream the conversation about how A.I. can be weaponised to produce harmful and misleading content. But we want to do this with responsibility and caution: we do not want to alarm people; instead, we want to inoculate them.

Our efforts would be incomplete without the support of our seven detection and forensics partners, who are lending their expertise and knowledge to this project in the true spirit of solidarity.

“At IdentifAI, we’ve chosen to partner with the DAU to address the rapidly increasing threat of deepfakes, especially as elections worldwide frequently fall victim to these tactics,” said Paul Vann, co-founder of IdentifAI, a San Francisco-based deepfake security startup. 

“Our comprehensive approach to combating deepfakes aims to prevent their creation and use, and partnering with the DAU empowers us to implement our solutions for positive change,” Mr. Vann added.  

It is early days for us, but based on the audio and video content that the DAU has received through the tipline and reviewed, we have noticed multiple instances where videos of celebrities and politicians have been manipulated with A.I.-generated audio.  

One case was exceptionally challenging to assess, as the audio from an original video had been skillfully patched with the cloned voice of a Bollywood actor to create a false narrative. The manipulation in that video was easy to miss for anyone without a well-trained eye for the inconsistencies in lip movement or an understanding of the nuances that distinguish real speech from synthetic speech.

Contrails AI, a Bangalore-based startup with its own A.I. tools for detecting audio and video spoofs, is one of the DAU's partners. Digvijay Singh, co-founder of the company, said that the DAU's work comes at a time when media literacy among the public about misinformation produced using generative A.I. is very low.

“DAU has been pivotal in highlighting the urgency of investigating deepfake manipulation, bringing this new form of attack vectors to the forefront as it should be,” Mr. Singh said.  

As we at the DAU brace ourselves each day to identify and address the misinformation spread using generative A.I., our strength comes from the partnerships that we are developing with experts in India and abroad. From RIT and Digit ID to a digital forensics lab run by the team of Dr. Hany Farid, a professor of computer science at the University of California, Berkeley, to DeepTrust and KroopAI, we have a diverse and growing list of partners.

We are eager to learn and share our experiences. While we want to gain insights into new tools and techniques for identifying elements of A.I. in a piece of content, we are equally passionate about media literacy on deepfakes and synthetic media, as well as developing best practices to label content that is A.I.-generated; after all, not everything that's A.I.-generated is a deepfake.

If you are interested in being a partner on this project, please feel free to reach out. 
