Technology

Most people do not realize when a personal message they receive was written by AI, study finds

Published: April 20, 2026 1.33pm BST

When people know someone used AI to write a message, they see the person as lazy or insincere.

Cartoon image of a typewriter with text on a piece of paper in it. People tend to be offended when they get a personal note written by AI – if they know. Ekaterina Buravleva/iStock via Getty Images

Two new experiments show that most people do not even consider that a personal message they receive could be AI-generated, including people who themselves use artificial intelligence to write.

To see how people judge someone based on their writing in the age of ChatGPT, my colleague Jiaqi Zhu and I recruited more than 1,300 U.S.-based participants, ages 18 to 84, and showed them AI-generated messages like an apology sent in an email. We split our volunteers into four groups: Some people saw the messages with no information about who or what wrote them, as in everyday life. Others were told the messages were definitely written by a human, definitely AI-generated, or that the source could be either.

An AI-generated fictional apology sent via text message was one of the messages participants evaluated in a recent study. Zhu & Molnar (2026)

We found a clear “AI disclosure penalty.” When people knew a message was AI-generated, they rated the sender much more negatively – “lazy,” “insincere,” “lack of effort” – than when they believed that the same text was written by a person – “genuine,” “grateful,” “thoughtful.”

But here is the twist: The participants who were not told anything about authorship formed impressions that were just as positive as those from people who were told the messages were genuinely human.

This complete lack of skepticism surprised us – and it raises new questions. Maybe participants were not familiar enough with AI to realize that today’s models can produce detailed and personal messages. (They can.) Or perhaps participants have never used AI themselves. (They likely have.) So we also tested whether participants’ own AI use changed how they judged senders.

To our even bigger surprise, we found little to no effect. People who use generative AI quite frequently in their daily lives – at least every other day – did penalize AI use slightly less when AI authorship was disclosed, compared with people who never or rarely use AI. But participants were no more skeptical by default: When authorship was not disclosed, heavy AI users, light AI users and nonusers all tended to assume the text was written by a person and formed essentially the same impressions.

Word clouds depict participants’ first impressions of senders who wrote messages themselves, left, and those who used AI, right. Andras Molnar

Why it matters

This lack of skepticism, and the absence of negative impressions it allows, matters because people make social judgments from text all the time. Recipients treat the time and effort that goes into a written message as a window into the writer’s sincerity, authenticity or competence, and those impressions shape people’s decisions in friendships, dating and work.

Yet our main findings reveal a striking disconnect: People usually do not suspect AI use unless it is obvious. This unawareness creates a moral dilemma: People who use AI in secret can enjoy the benefits while facing almost no risk of detection. Meanwhile, paradoxically, people who are upfront and admit to using AI suffer a reputational hit.

Over time, lack of skepticism and awareness could reshape what writing means in everyday life. Readers might learn to treat writing as a less reliable signal of someone’s character or effort, and instead rely on other forms of communication. For example, widespread AI use has already prompted employers to discount the value of cover letters from job applicants. Instead, they are relying more on personal recommendations from an applicant’s current supervisor or connections made through in-person networking.

What other research is being done

Other researchers have documented a wide range of negative impressions about people who disclose their AI use. Studies show it makes job applicants seem less desirable and employees seem less competent. Readers of creative writing perceive AI users as less creative and inauthentic. People see personal apologies and corporate apologies that stem from AI as less effective. In general, disclosing AI use decreases trust and undermines legitimacy.

Yet without disclosure, there is clear evidence that most people cannot reliably detect AI-generated text, even with the help of detection tools, especially when the text is a mix of human-written and AI-generated content. Even when people feel confident about their ability to spot AI text, their confidence may be nothing more than a self-affirming illusion.

What’s next

Even though our experiments did not reveal suspicion of AI use, that doesn’t mean people never suspect it in the real world. In some settings, people may already be hypervigilant about AI use; academia is an obvious example. In our next studies, we want to understand when and why people naturally start to suspect AI use, and what flips the switch between trust and doubt.

Until then, if you want your personal message to be judged as heartfelt, the safest strategy may be to make a phone call, leave a voicemail or, better yet, say it in person.

The Research Brief is a short take on interesting academic work.

  • Artificial intelligence (AI)
  • Distrust
  • Overconfidence
  • Human communication
  • Quick reads
  • Artificial intelligence and jobs
  • Research Brief
  • Artificial Intelligence ethics
  • AI risk
  • AI writing
Author

Andras Molnar, University of Michigan

Disclosure statement

Andras Molnar does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

Partners

University of Michigan provides funding as a founding partner of The Conversation US.

DOI

https://doi.org/10.64628/AAI.xt4jtmchj
