
Writing with AI support can help you change your mind

According to recent research, AI-powered writing assistants that autocomplete phrases or offer “smart replies” not only put words into people’s mouths but also ideas into their heads.

Maurice Jakesch, a doctoral student in information science, asked more than 1,500 participants to write a paragraph answering the question, “Is social media good for society?” People who used an AI writing assistant biased for or against social media were twice as likely to write a paragraph agreeing with the assistant, and significantly more likely to say they held the same opinion, than people who wrote without AI assistance.

According to the researchers, the biases embedded in AI writing tools, whether intentional or unintended, could have serious consequences for culture and politics.

“We’re rushing to implement these AI models in all walks of life, but we need to better understand the implications,” said co-author Mor Naaman, a professor at Cornell Tech’s Jacobs Technion-Cornell Institute and of information science in Cornell’s Ann S. Bowers College of Computing and Information Science. “Aside from increased efficiency and creativity, there may be other implications for individuals as well as our society—shifts in language and opinions.”

While previous research has examined how large language models like ChatGPT can generate persuasive ads and political messaging, this is the first study to show that the process of writing with an AI-powered tool can sway people’s attitudes. Jakesch presented his research, “Co-Writing with Opinionated Language Models Affects Users’ Views,” in April at the 2023 CHI Conference on Human Factors in Computing Systems, where it received an honorable mention.

To explore how people interact with AI writing assistants, Jakesch steered a large language model to take either a positive or a negative view of social media. Participants wrote their paragraphs, either alone or with one of the opinionated assistants, on a platform he built that mimics a social media website.

As participants typed, the platform recorded data such as which AI suggestions they accepted and how long they took to compose the text.
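The study’s exact prompts and platform code are not reproduced here, but the setup described above, an assistant steered toward an opinion plus per-session logging of accepted suggestions and writing time, can be sketched in a few lines of Python. Everything in the sketch below is illustrative: the prompt wording, the get_suggestion stub, and the SessionLog fields are assumptions, not the study’s actual implementation.

```python
import time
from dataclasses import dataclass, field

# Hypothetical steering prompts; the study's real prompts are not published in this article.
PRO_PROMPT = ("You are a writing assistant. When suggesting continuations, "
              "favor arguments that social media is good for society.")
ANTI_PROMPT = ("You are a writing assistant. When suggesting continuations, "
               "favor arguments that social media is bad for society.")


def get_suggestion(draft: str, system_prompt: str) -> str:
    """Stub standing in for a call to a large language model.

    A real platform would send system_prompt and the current draft to an
    LLM API and return the model's suggested continuation.
    """
    stance = "connects communities" if "good" in system_prompt else "fuels polarization"
    return f", and above all it {stance}."


@dataclass
class SessionLog:
    """The kind of interaction data the article says the platform captured."""
    accepted: list[str] = field(default_factory=list)   # suggestions the writer kept
    rejected: list[str] = field(default_factory=list)   # suggestions the writer dismissed
    started_at: float = field(default_factory=time.monotonic)

    def writing_time(self) -> float:
        """Seconds elapsed since the participant began writing."""
        return time.monotonic() - self.started_at


# Usage: offer a suggestion mid-draft and record whether the writer accepts it.
log = SessionLog()
draft = "Social media changes how we talk to each other"
suggestion = get_suggestion(draft, PRO_PROMPT)
writer_accepts = True  # in the real platform, this reflects the participant's action
if writer_accepts:
    log.accepted.append(suggestion)
    draft += suggestion
else:
    log.rejected.append(suggestion)
print(draft)
print(f"writing time so far: {log.writing_time():.1f}s")
```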

As judged by independent evaluators, people who co-wrote with the pro-social media assistant wrote more sentences arguing that social media is good than people who wrote without an assistant, and the reverse held for the anti-social media assistant. These participants were also more likely to express their assistant’s opinion in a follow-up survey.

The researchers considered whether people were simply accepting the AI suggestions to finish the task faster. But even participants who took their time composing their paragraphs produced strongly influenced statements. The survey also revealed that most participants did not notice the AI was biased and did not realize they were being influenced.

“The process of co-writing doesn’t really feel like I’m being persuaded,” Naaman explained. “It feels very natural and organic—I’m expressing my own thoughts with some assistance.”

The research team found that participants were swayed by the assistants again when they repeated the experiment with a different topic. The team is now investigating how the experience produces this shift and how long the effects last.

Just as social media has changed the political landscape by facilitating the spread of misinformation and the formation of echo chambers, biased AI writing tools could produce similar shifts in opinion, depending on which tools users choose. Some organizations, for example, have announced plans to build an alternative to ChatGPT designed to express more conservative viewpoints.

According to the researchers, these technologies merit further public discussion about how they may be abused and how they should be controlled and regulated.

“The more powerful these technologies become, and the deeper we embed them in the social fabric of our societies,” Jakesch explained, “the more careful we might want to be about how we govern the values, priorities, and opinions built into them.”

The work was co-authored by Advait Bhat of Microsoft Research, Daniel Buschek of the University of Bayreuth, and Lior Zalmanson of Tel Aviv University.