Protecting Your Voice from Deepfakes

Deepfake defense means taking proactive steps to secure your digital identity before malicious actors can manufacture manipulated audio. AntiFake is a tool developed by computer scientists to safeguard voice recordings against unauthorized speech synthesis.

Recent advancements in generative artificial intelligence have accelerated progress in realistic speech synthesis. While this technology has the potential to improve lives through personalized voice assistants and accessibility-enhancing communication tools, it has also resulted in the creation of deepfakes, which employ synthesized speech to trick humans and machines for malicious purposes.

In response to this evolving threat, Ning Zhang, an assistant professor of computer science and engineering at Washington University in St. Louis’ McKelvey School of Engineering, created AntiFake, a novel defense mechanism designed to prevent unauthorized speech synthesis. Zhang presented AntiFake on November 27 at the Association for Computing Machinery’s Conference on Computer and Communications Security in Copenhagen, Denmark.

Unlike typical deepfake detection technologies, which review and flag synthetic audio as a post-attack mitigation measure, AntiFake takes a proactive approach. It uses adversarial techniques to prevent the synthesis of deceptive speech by making it harder for AI tools to extract the key characteristics of a voice from a recording. The code is freely available to users.

“AntiFake makes sure that when we put voice data out there, it’s hard for criminals to use that information to synthesize our voices and impersonate us,” Zhang said. “The tool uses a technique of adversarial AI that was originally part of the cybercriminals’ toolbox, but now we’re using it to defend against them. We mess up the recorded audio signal just a little bit, distort or perturb it just enough that it still sounds right to human listeners, but it’s completely different to AI.”
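To make the idea concrete, here is a minimal, illustrative sketch of this kind of adversarial perturbation in Python with PyTorch. It is not the AntiFake implementation: the speaker encoder below is a stand-in toy model, and names such as ToySpeakerEncoder and protect are hypothetical. The real tool optimizes against the feature extractors of actual speech synthesizers and applies perceptual constraints to keep the audio sounding natural.

```python
# Illustrative sketch only, not the actual AntiFake code.
# Idea: nudge a waveform within a tiny amplitude budget so that a
# speaker-embedding model "hears" a different voice, while a human does not.
import torch
import torch.nn as nn

torch.manual_seed(0)

class ToySpeakerEncoder(nn.Module):
    """Hypothetical stand-in for a speaker-embedding network."""
    def __init__(self, dim=128):
        super().__init__()
        self.conv = nn.Conv1d(1, 16, kernel_size=64, stride=16)
        self.head = nn.Linear(16, dim)

    def forward(self, wav):                              # wav: (batch, samples)
        feats = torch.relu(self.conv(wav.unsqueeze(1)))  # (batch, 16, frames)
        pooled = feats.mean(dim=-1)                      # average over time
        emb = self.head(pooled)
        return emb / emb.norm(dim=-1, keepdim=True)      # unit-length embedding

def protect(wav, encoder, eps=0.002, steps=100, lr=1e-3):
    """Perturb `wav` so its speaker embedding drifts away from the original,
    while clamping the change to an inaudibly small range [-eps, eps]."""
    original_emb = encoder(wav).detach()
    delta = torch.zeros_like(wav, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        emb = encoder(wav + delta)
        # Minimizing cosine similarity pushes the embedding away
        # from the speaker's original voice signature.
        loss = torch.cosine_similarity(emb, original_emb, dim=-1).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)  # keep the perturbation tiny
    return (wav + delta).detach()

encoder = ToySpeakerEncoder()
wav = torch.randn(1, 16000) * 0.1  # one second of placeholder 16 kHz audio
protected = protect(wav, encoder)
print("max sample change:", (protected - wav).abs().max().item())
print("embedding similarity before/after:",
      torch.cosine_similarity(encoder(wav), encoder(protected)).item())
```

The key design choice is the small amplitude budget eps: the perturbation must stay below what human listeners notice while still pushing the voice’s machine-readable features away from the original, which is exactly the trade-off Zhang describes.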

Defending your voice against deepfakes

To ensure AntiFake can stand up against an ever-changing landscape of potential attackers and unknown synthesis models, Zhang and first author Zhiyuan Yu, a graduate student in Zhang’s lab, built the tool to be generalizable and tested it against five state-of-the-art speech synthesizers. AntiFake achieved a protection rate of over 95%, even against unseen commercial synthesizers. They also tested AntiFake’s usability with 24 human participants to confirm the tool is accessible to diverse populations.

AntiFake can currently protect short clips of speech, targeting the most common type of voice impersonation. But according to Zhang, there is nothing to stop the method from being extended to protect longer recordings, or even music, in the ongoing fight against deception.

“Eventually, we want to be able to fully protect voice recordings,” Zhang said. “While I’m not sure what’s next in AI voice technology – new tools and features are being developed all the time – I believe our strategy of using adversaries’ techniques against them will remain effective. Even if the engineering specifics need to evolve to keep this a winning strategy, AI remains subject to adversarial perturbations.”