In a War Between AI and AI, Researchers Are Preparing for the Coming Surge of Deepfake Propaganda

An anonymous whistleblower sends a video to an investigative journalist. It shows a presidential candidate confessing to criminal behavior. But is the video real? If it is, it would be a tremendous scoop and could drastically alter the outcome of the upcoming election.

The journalist runs the footage through a specialized tool, which reveals that the video isn’t what it appears to be. In fact, it’s a “deepfake,” a video made using artificial intelligence with deep learning.

Journalists all over the world could soon be using a tool like this. In a few years, everyone might even be able to use one to weed out bogus content from their social media feeds.

As researchers who have been studying deepfake detection and developing a tool for journalists, we see a future for these technologies. They won’t be able to solve every problem, though, and they will be just one part of the larger arsenal in the fight against disinformation.

The problem with deepfakes

Most people know that you can’t believe everything you see. Over the last few decades, savvy news consumers have grown accustomed to seeing photographs that were manipulated with photo-editing software. Videos, though, are another story.

Hollywood directors can spend millions of dollars on special effects to create a realistic scene. With deepfakes, however, amateurs with a few thousand dollars of computer equipment and a few weeks to spare can create something almost as true to life.

Deepfakes make it possible to put actors into movie scenes they were never in, think Tom Cruise playing Iron Man, which makes for entertaining videos. Unfortunately, they also make it possible to create pornography without the consent of the people depicted. So far, those people, nearly all of them women, have been deepfake technology’s biggest victims.

Deepfakes can also be used to create videos of political leaders saying things they never said. The Belgian Socialist Party released a low-quality, non-deepfake but still fraudulent video of President Trump insulting Belgium, and it provoked enough outrage to show the potential dangers of higher-quality deepfakes.

Perhaps the scariest aspect of all is that deepfakes can be used to cast doubt on the veracity of real videos, by suggesting that they, too, might be fakes.

Given these dangers, it would be enormously valuable to be able to detect deepfakes and label them clearly. That would help keep the public from being duped by fake videos and allow genuine ones to be trusted.

Spotting fakes

Deepfake detection as a field of research began a little over three years ago. Early work focused on visible problems in the videos, such as deepfakes that didn’t blink. Over time, though, the fakes have gotten better at mimicking real videos and have become harder for both people and detection tools to spot.
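
For a sense of what that early blink cue looked like in practice, here is a minimal sketch, not any team’s actual detector: it assumes per-frame eye landmarks are already available (real systems would get them from a facial-landmark library such as dlib or MediaPipe) and counts blinks with the widely used eye-aspect-ratio heuristic.

```python
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: (6, 2) array of landmark (x, y) points around one eye."""
    v1 = np.linalg.norm(eye[1] - eye[5])  # first vertical distance
    v2 = np.linalg.norm(eye[2] - eye[4])  # second vertical distance
    h = np.linalg.norm(eye[0] - eye[3])   # horizontal distance
    return (v1 + v2) / (2.0 * h)          # small value = eye closed

def count_blinks(ears: np.ndarray, closed_thresh: float = 0.2) -> int:
    """Count open-to-closed transitions in a per-frame EAR series."""
    closed = ears < closed_thresh
    return int(np.sum(~closed[:-1] & closed[1:]))

# Simulated EAR series with one blink. A real speaker blinks every few
# seconds, so a near-zero blink count over a long clip was a red flag.
ears = np.array([0.30, 0.31, 0.15, 0.30, 0.29])
print(count_blinks(ears))  # -> 1
```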

There are two major categories of deepfake detection research. The first looks at the behavior of the people in the videos. Suppose you have a lot of video of someone famous, such as President Obama. That footage can be used to teach an artificial intelligence (AI) the person’s mannerisms, from hand gestures to pauses in speech. The AI can then watch a deepfake of him and flag the places where the video deviates from those patterns. This approach has the advantage of potentially working even when the video quality itself is essentially flawless.
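
As a rough illustration of this behavior-based idea, here is a hedged sketch, our assumption of how such a pipeline could be wired rather than any deployed system: a one-class model is trained on feature vectors from authentic clips of one person (the features themselves, such as head-pose angles or speech-pause statistics, are assumed to be extracted upstream) and then flags clips whose mannerisms deviate.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

# Stand-in feature vectors from authentic clips of one person.
authentic = rng.normal(0.0, 1.0, size=(500, 16))

# Learn the person's normal behavioral envelope.
model = OneClassSVM(nu=0.05, gamma="scale").fit(authentic)

# A clip whose mannerisms drift far from the learned patterns.
suspect = rng.normal(2.5, 1.0, size=(1, 16))
verdict = model.predict(suspect)[0]  # +1 = inlier, -1 = outlier
print("consistent with known behavior" if verdict == 1 else "possible deepfake")
```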

Our team and other researchers have been focusing instead on differences between deepfakes in general and real videos. Deepfake videos are often created by merging individually generated frames. With that in mind, our methods extract the essential data from the faces in individual frames of a video and then track the faces through sets of concurrent frames. This lets us detect inconsistencies in the flow of information from one frame to the next. We use a similar approach in our fake audio detection system as well.
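
A minimal sketch of that frame-consistency idea, simplified from the description above and assuming per-frame face embeddings have already been computed by some face-analysis model, might look like this: because fakes are often synthesized frame by frame, the face’s features can jitter between consecutive frames in ways a real recording’s do not.

```python
import numpy as np

def frame_inconsistency(embeddings: np.ndarray) -> np.ndarray:
    """Cosine distance between each pair of consecutive frame embeddings."""
    a, b = embeddings[:-1], embeddings[1:]
    cos = np.sum(a * b, axis=1) / (
        np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1)
    )
    return 1.0 - cos

def looks_fake(embeddings: np.ndarray, jitter_thresh: float = 0.1) -> bool:
    """Flag a clip whose frame-to-frame jitter spikes above a threshold."""
    return bool(np.max(frame_inconsistency(embeddings)) > jitter_thresh)

rng = np.random.default_rng(1)
smooth = 1.0 + np.cumsum(0.01 * rng.normal(size=(30, 16)), axis=0)  # real-like drift
jittery = rng.normal(size=(30, 16))                                 # independent frames
print(looks_fake(smooth), looks_fake(jittery))  # -> False True
```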

These subtle inconsistencies are hard for people to see, but they show that deepfakes aren’t quite perfect yet. Detectors like these can work for anyone, not just a handful of world leaders. In the end, both types of deepfake detectors may be needed.

Recent detection systems perform exceptionally well on videos specifically gathered for evaluating the tools. Unfortunately, even the best models do poorly on videos found online in the wild. Making these tools more robust and useful is the key next step.

Who should use deepfake detectors?

Ideally, a deepfake verification tool should be available to everyone. However, this technology is in the early stages of development. Before making the tools widely available, researchers must enhance them and add security measures against hackers.

Meanwhile, though, the tools for making deepfakes are available to anyone who wants to fool the public, and sitting on the sidelines is not an option. For our team, the right balance was to work with journalists, because they are the first line of defense against the spread of misinformation.

Before publishing stories, journalists need to verify the information. They already have tried-and-true methods, such as checking with reliable sources and having multiple people confirm key facts. By putting the tool in their hands, we give them more information, and we know they won’t rely on the technology alone, because it can make mistakes.

Can the detectors win the arms race?

It is encouraging to see teams at Facebook and Microsoft investing in research to better understand and detect deepfakes. This field needs more research to keep pace with the speed of advances in deepfake technology.

When deepfakes are discovered, journalists and social media platforms must also decide how best to warn the public. Research has shown that people remember the lie, but not the fact that it was a lie. Will the same be true for fake videos? Simply labeling a video as a deepfake might not be enough to counter some kinds of disinformation.

Deepfakes are here to stay. As artificial intelligence grows more powerful, managing disinformation and protecting the public will be more challenging than ever. We are part of a growing research community taking on this threat, and detection is only the first step.