A Computer Model Aims to Explain How Misinformation Spreads and Proposes Countermeasures

It begins with a superspreader and works its way through a web of interconnections, eventually leaving no one untouched. Those who have been exposed before may experience only mild effects.

It’s not a virus, to be sure. It’s the spread of viral misinformation and disinformation, the latter being misinformation spread deliberately to deceive.

Tufts University researchers have developed a computer model that closely mimics the way misinformation spreads in real life. The work, they say, could provide insight into how to protect people from the current wave of misinformation that is endangering public health and democracy.

“Our society has been grappling with widespread beliefs in conspiracies, increasing political polarization, and distrust in scientific findings,” said Nicholas Rabb, a Ph.D. computer science student at Tufts School of Engineering and lead author of the study, which came out January 7 in the journal PLOS ONE. “This model could help us get a handle on how misinformation and conspiracy theories are spread, to help come up with strategies to counter them.”

Scientists who study the dissemination of information often model its spread the way epidemiologists model a disease moving through a social network. Most of those models, however, treat the people in the network as equally accepting of whatever new belief their contacts pass along.

Instead, the Tufts researchers based their model on the idea that our pre-existing beliefs strongly influence whether we accept new information. Many people reject information backed by evidence if it contradicts what they already believe.

The power of this effect has been noted by healthcare providers, who have seen that some COVID patients die believing that COVID does not exist. To account for this in their model, the researchers gave each person in the artificial social network a “belief.”

To do so, the researchers assigned each person in the computer model a belief value from 0 to 6, with 0 signifying strong skepticism and 6 representing strong belief. The numbers can stand for a spectrum of views on any topic.

For example, 0 could represent strong skepticism that COVID vaccines are helpful and safe, whereas 6 could represent the strong belief that COVID vaccines are indeed safe and effective.
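Concretely, that scale can be encoded as a single integer per agent. The sketch below is illustrative only; the `Agent` class, its names, and the use of Python are assumptions for this article, not the study’s actual code.

```python
from dataclasses import dataclass

# Endpoints of the 7-point belief scale described above (hypothetical encoding).
STRONG_SKEPTICISM, STRONG_BELIEF = 0, 6

@dataclass
class Agent:
    belief: int  # 0 = strong skepticism ... 6 = strong belief

    def __post_init__(self):
        if not STRONG_SKEPTICISM <= self.belief <= STRONG_BELIEF:
            raise ValueError("belief must lie on the 0-6 scale")
```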

The model then constructs a vast network of virtual individuals, as well as virtual institutional sources that originate much of the information flowing through the network.

In real life, they might include news organizations, churches, governments, and social media influencers, all essentially information superspreaders. A run of the model begins with an institutional source injecting information into the network.
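A toy version of that setup might look like the following. The function names, the random wiring, and the audience sizes are illustrative assumptions; the paper’s actual network construction may differ.

```python
import random

def make_network(n_people=500, n_sources=5, avg_degree=8, seed=0):
    """Build a toy population: people hold beliefs on the 0-6 scale, and
    institutional sources each broadcast to a subset of them.
    Random wiring is an assumption, not the paper's topology."""
    rng = random.Random(seed)
    beliefs = {i: rng.randint(0, 6) for i in range(n_people)}
    neighbors = {i: set() for i in range(n_people)}
    while sum(len(s) for s in neighbors.values()) < n_people * avg_degree:
        a, b = rng.sample(range(n_people), 2)
        neighbors[a].add(b)
        neighbors[b].add(a)
    sources = {f"source_{k}": rng.sample(range(n_people), n_people // 10)
               for k in range(n_sources)}
    return beliefs, neighbors, sources

def inject(sources, name, message):
    """A run starts with a source pushing a message (a 0-6 value) to its
    audience; each recipient then decides whether to update (see below)."""
    return [(person, message) for person in sources[name]]
```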

If an individual receives information close to their current belief, say a 5 against their current 6, there is a higher chance they will update that belief to a 5. If the incoming information differs drastically from their current belief, say a 2 against a 6, they will almost certainly reject it and hold on to their 6.
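One plausible way to express that distance-dependent acceptance is a probability that falls off with the gap between the incoming message and the current belief. The specific functional form below is an assumption for illustration; the published model defines its own update rule.

```python
import random

def accept_probability(belief, message, sharpness=1.0):
    """Chance of adopting `message`, decaying with its distance from the
    agent's current belief. Illustrative form, not the paper's exact rule."""
    distance = abs(message - belief)
    return 1.0 / (1.0 + sharpness * distance ** 2)

def maybe_update(belief, message, rng=random):
    """With sharpness=1.0, a 5 arriving at a 6 is adopted half the time,
    while a 2 arriving at a 6 is adopted only about 6% of the time."""
    return message if rng.random() < accept_probability(belief, message) else belief
```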

Other factors can influence how people update their opinions, such as the percentage of their connections that send them the information (essentially, social pressure) or the level of confidence they have in the source. The dissemination and persistence of misinformation can then be tracked using a population-wide network model of these interactions.
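Those extra factors can be folded into the same kind of rule. In this hedged sketch, social pressure is the fraction of an agent’s contacts relaying the message and trust is a per-source weight; the weighting scheme and the synchronous update are assumptions, not the published model. Running `step` repeatedly from an initial injection and counting adopters per round yields the population-level spread-and-persistence curves described above.

```python
import random

def adoption_probability(belief, message, share_of_contacts, trust, sharpness=1.0):
    """Combine belief distance, social pressure (fraction of contacts relaying
    the message), and trust in the source. Illustrative weighting only."""
    closeness = 1.0 / (1.0 + sharpness * abs(message - belief) ** 2)
    return closeness * (0.5 + 0.5 * share_of_contacts) * trust

def step(beliefs, neighbors, message, trust, rng=random):
    """One synchronous pass over the population; repeating this and counting
    adopters tracks how far the message spreads and whether it persists."""
    adopters = {p for p, b in beliefs.items() if b == message}
    updated = dict(beliefs)
    for person, belief in beliefs.items():
        contacts = neighbors[person]
        if not contacts:
            continue
        share = sum(c in adopters for c in contacts) / len(contacts)
        if rng.random() < adoption_probability(belief, message, share, trust):
            updated[person] = message
    return updated
```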

Future iterations of the model will incorporate new findings from network science and psychology, as well as a comparison of the model’s results with real-world opinion surveys and network topologies over time.

While the current model shows beliefs shifting only incrementally, other scenarios could be modeled that produce larger swings, such as a jump from 3 to 6 when a dramatic event happens to an influencer who then implores their followers to change their minds.
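A what-if scenario like that could be bolted onto such a model as a one-off event. The function below is purely hypothetical, including its `persuasion` parameter, and is not part of the published study.

```python
import random

def influencer_shock(beliefs, followers, new_belief=6, persuasion=0.8, rng=random):
    """Hypothetical event: a trusted influencer urges followers to jump
    straight to `new_belief` (e.g., 3 -> 6), bypassing the usual
    small-step updates. A what-if scenario, not the published model."""
    for person in followers:
        if rng.random() < persuasion:
            beliefs[person] = new_belief
    return beliefs
```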

According to the researchers, who include Rabb’s faculty advisor Lenore Cowen, a professor of computer science; computer scientist Matthias Scheutz; and J.P. de Ruiter, a professor of both psychology and computer science, the computer model can grow more complex over time to accurately reflect what is happening on the ground.

“It’s becoming all too clear that simply broadcasting factual information may not be enough to make an impact on public mindset, particularly among those who are locked into a belief system that is not fact-based,” said Cowen.

“Our initial effort to incorporate that insight into our models of the mechanics of misinformation spread in society may teach us how to bring the public conversation back to facts and evidence.”