
The Impact of Artificial Intelligence on Trust in Human Interaction

The impact of AI on human interaction is a complex and multifaceted topic. On the one hand, artificial intelligence technologies have the potential to increase trust by improving the efficiency, reliability, and accuracy of various systems and processes. AI-powered security systems, for example, can help protect personal information and assets, and AI algorithms can aid in the detection of fraudulent activity.

On the other hand, as artificial intelligence becomes more realistic, our trust in those with whom we communicate may be jeopardized. Researchers at the University of Gothenburg have investigated how advanced AI systems affect our trust in the people we interact with.

In one scenario, a would-be scammer calls an elderly man, only to be connected to a computer system that communicates via pre-recorded loops. The scammer spends a significant amount of time attempting the con, patiently listening to the “man’s” somewhat confusing and repetitive stories. According to Oskar Lindwall, a professor of communication at the University of Gothenburg, it often takes a long time for people to realize they are interacting with a technical system.

He co-authored an article titled “Suspicious Minds: The Problem of Trust and Conversational Agents” with Professor of Informatics Jonas Ivarsson, which investigates how individuals interpret and relate to situations in which one of the parties is an AI agent. The article highlights the negative consequences of harboring suspicion toward others, such as the damage it can cause to relationships.

Ivarsson provides an example of a romantic relationship where trust issues arise, leading to jealousy and an increased tendency to search for evidence of deception. The authors argue that being unable to fully trust a conversational partner’s intentions and identity may result in excessive suspicion even when there is no reason for it.

Their research also showed that, during interactions between two humans, some behaviors were interpreted as signs that one of the parties was actually a robot.

According to the researchers, a pervasive design perspective is driving the development of AI with increasingly human-like characteristics. While this may be appealing in some circumstances, it can also be problematic, especially when it is unclear with whom you are communicating. Ivarsson wonders if AI should have such human-like voices because they create a sense of intimacy and lead people to form impressions based solely on the voice.

Fig: The influence of AI on trust in human interaction

In the case of the would-be fraudster calling the “elderly man,” the scam is only revealed after a long time, which Lindwall and Ivarsson attribute to the believability of the human voice and the assumption that the confused behavior is due to age. Once an AI has a voice, we infer attributes such as gender, age, and socioeconomic status, making it more difficult to tell that we are interacting with a computer.

The researchers propose developing AI with fully functional and eloquent voices that are still clearly synthetic, thereby increasing transparency.

Communication with others involves not only the risk of deception but also the formation of relationships and the creation of shared meaning. This aspect of communication is affected by uncertainty over whether one is speaking to a human or a computer. While it may not matter in some cases, such as cognitive behavioral therapy, other forms of therapy that require more human connection may suffer.

In their study, Jonas Ivarsson and Oskar Lindwall examined data from YouTube, looking at three types of conversations as well as audience reactions and comments. In the first type, a robot calls a person to book a hair appointment, unbeknownst to the person on the other end. In the second type, one person calls another for the same reason. In the third type, telemarketers are transferred to a computer system with pre-recorded speech.