A Study Claims That Gen Z Is Most Likely to Be Deceived and That More Than HALF of People Can’t Tell the Difference Between Words Written by ChatGPT and by Humans

According to a new study, Generation Z is the worst at telling whether text was written by an AI chatbot such as ChatGPT or by a human.

According to the research, 53 percent of people could not tell the difference between content created by a human, by an AI, or by an AI whose output was then edited by a human.

Only four out of ten young adults (18 to 24) could correctly identify AI content, compared to more than half of individuals 65 and older.

The poll was carried out by Tooltester, whose CEO and founder, Robert Brandl, told DailyMail.com: “We were taken aback by the fact that younger readers had a harder time spotting AI content.”

It may indicate that older readers are currently less trusting of AI content, particularly given how frequently it has featured in the media.

With many more years of exposure to this kind of information, older readers also have a larger knowledge base to draw on, making them better than younger readers at comparing what they read with their own sense of how a question should be answered.

Being young and possibly more tech-savvy is no defense against being duped by online content: a University of Florida study found that younger audiences are just as susceptible to online fake news as older generations.

The study also found that people think it ought to be disclosed when material has been created with AI.

It involved 1,900 Americans who were tasked with deciding whether writing in a variety of areas, such as technology and health, was produced by a human or an AI.

People who were entirely unfamiliar with ChatGPT could accurately identify AI content only 40.8 percent of the time, suggesting that awareness of “generative AI” systems such as ChatGPT seemed to help.

Eighty-five percent of people think news or blog publishers ought to disclose when artificial intelligence (AI) has been used.

71.3 percent of respondents said they would have less faith in a business if it had used AI-generated content without disclosing it.

As people cannot distinguish between human and AI-generated content, the results “appear to demonstrate that the general public may need to rely on artificial intelligence disclosures online to know what has and has not been created by AI,” Brandl said.

“We were taken aback by how readily people accepted that work written by a person had been authored by an AI,” he added. “The data indicates that many people relied on guesswork because they simply couldn’t tell.”

Brandl also noted that many survey respondents appeared to presume that any copy they read was produced by AI, and suggested that this caution may be helpful.

Cybersecurity researchers have recently cautioned that tools like ChatGPT, already well known for introducing factual errors into documents, can also be used as tools for fraud.

The study comes as cybersecurity experts have forewarned of an impending wave of fraud and phishing attacks authored by AI.

Cybersecurity firm Norton has warned that criminals are using ChatGPT and other AI tools to build “lures” to defraud people.

According to a New Scientist article, cybercrime gangs could save up to 96% on expenses by using ChatGPT to create emails.

“We discovered that readers frequently assumed that any text, whether produced by a human or an AI, was produced by an AI, which may be indicative of how people currently view online material,” Brandl said.

Given that generative AI technology is imperfect and prone to many errors, a cautious reader may be less likely to take AI content at face value, so this wariness might not be such a bad thing.

The researchers found that people’s ability to recognize AI-generated content varied by industry. AI-generated health content fooled readers most often, with 56.1% believing it had been created or edited by a human.

People spotted AI-generated content most easily in the technology sector, where 51% identified it accurately.

The fastest-growing internet application ever, ChatGPT averaged 13 million users per day in January, according to analytics company SimilarWeb.

After its global launch, TikTok took about nine months to achieve 100 million users, while Instagram took more than two years.

In late November, OpenAI, a for-profit company backed by Microsoft Corp., made ChatGPT freely accessible to the general public.