Technology

Chatbots Deliver What Consumers Want to Hear

According to recent research from Johns Hopkins University, chatbots share limited information, reinforce ideologies, and can lead to more polarized thinking on important issues.

The study challenges the notion that chatbots are neutral, and it shows how conversational search tools could deepen the public divide on contentious issues and leave people vulnerable to manipulation.

“Because people are reading a summary paragraph generated by AI, they think they’re getting unbiased, fact-based answers,” said lead author Ziang Xiao, an assistant professor of computer science at Johns Hopkins who studies human-AI interactions. “Even if a chatbot isn’t designed to be biased, its answers reflect the biases or leanings of the person asking the questions. So really, people are getting the answers they want to hear.”

Xiao and his team presented their findings at the Association for Computing Machinery's CHI Conference on Human Factors in Computing Systems. To see how chatbots influence online searches, the team compared how people interacted with different search systems and how they felt about controversial issues before and after using them.

The researchers asked 272 participants to write down their thoughts on topics such as health care, student loans, and sanctuary cities, then had them look up more information online using either a chatbot or a traditional search engine built specifically for the study. After reviewing the search results, participants wrote a second essay and answered questions about the topic. The researchers then had participants read two opposing articles and asked how much they trusted the information and whether they considered the viewpoints extreme.

Because chatbots offered a narrower range of information than traditional web searches and provided answers that reflected the participants’ preexisting attitudes, the participants who used them became more invested in their original ideas and had stronger reactions to information that challenged their views, the researchers found.

Chatbots tell people what they want to hear

“People tend to seek information that aligns with their viewpoints, a behavior that often traps them in an echo chamber of like-minded opinions,” Xiao stated. “We found that this echo chamber effect is stronger with the chatbots than traditional web searches.”

According to Xiao, the echo chamber effect stems partly from how participants engaged with chatbots. Rather than typing keywords, as they would in a traditional search engine, chatbot users tended to type out full questions, such as “What are the benefits of universal health care?” or “What are the costs of universal health care?” A chatbot would then respond with a summary that included only the benefits or only the costs.

“With chatbots, people tend to be more expressive and formulate questions in a more conversational way. It’s a function of how we speak,” Xiao said. “But our language can be used against us.”

AI developers can train chatbots to extract clues from questions and identify people’s biases, Xiao said. Once a chatbot knows what a person likes or doesn’t like, it can tailor its responses to match.

In fact, when the researchers built a chatbot with a hidden agenda, designed to agree with people, the echo chamber effect grew even stronger. To try to counteract the echo chamber effect, the researchers programmed a chatbot to give answers that disagreed with participants; according to Xiao, people’s opinions did not change. The researchers also built a chatbot that linked to source information to encourage users to fact-check, but only a few participants did.

“Given AI-based systems are becoming easier to build, there are going to be opportunities for malicious actors to leverage AIs to make a more polarized society,” Xiao stated. “Creating agents that always present opinions from the other side is the most obvious intervention, but we found they don’t work.”