Over the past eighteen months, a small cluster of audience-research labs has begun publishing studies on how readers respond to news produced with the help of large language models. The early findings are more interesting than the easy headlines suggest. Readers do not appear to reject AI-assisted reporting categorically. What appears to move trust is whether the publication is transparent about its method, sources, and corrections — the same things that have always moved trust.
Two studies are worth reading in full. In the first, a team at a regional university’s communications lab ran a controlled experiment in which 1,820 participants read identical articles attributed variously to a human reporter, an AI system, and an unspecified “newsroom” source. Trust ratings differed by less than three percentage points between conditions when the article carried a clear sourcing block. When the sourcing block was removed, trust dropped sharply across all three conditions, and most sharply for the AI-attributed version.
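The pattern is easiest to see in miniature. The Python sketch below simulates the shape of that comparison; the rating scale, the condition means, and the sample size are all invented to mirror the reported pattern, not drawn from the lab’s data or code.

```python
# Illustrative sketch only: simulated ratings, not the study's data.
# Shows the comparison the study reports: mean trust per attribution
# condition, with and without a sourcing block.
import random
import statistics

random.seed(0)

CONDITIONS = ["human reporter", "AI system", "unspecified newsroom"]

def simulate_ratings(base: float, n: int = 600) -> list[float]:
    """Draw n trust ratings (assumed 0-100 scale) around a base mean."""
    return [min(100.0, max(0.0, random.gauss(base, 12))) for _ in range(n)]

# Assumed means chosen to mirror the described result: near-equal trust
# with a sourcing block, a sharper drop for the AI condition without one.
with_block = {c: simulate_ratings(b) for c, b in zip(CONDITIONS, [68, 66, 67])}
without_block = {c: simulate_ratings(b) for c, b in zip(CONDITIONS, [55, 46, 52])}

for label, ratings in [("with sourcing block", with_block),
                       ("without sourcing block", without_block)]:
    means = {c: statistics.mean(r) for c, r in ratings.items()}
    gap = max(means.values()) - min(means.values())
    print(f"{label}: "
          + ", ".join(f"{c}={m:.1f}" for c, m in means.items())
          + f" (max gap {gap:.1f} pts)")
```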
What the research actually measures
These are early studies, with the usual caveats. Sample sizes are modest. None of the labs has yet published a longitudinal version that tracks the same readers over months. And the populations skew younger and more digitally fluent than the U.S. reading public at large.
Disclosure beats provenance. Readers seem to be asking the same question they have always asked: how do you know what you know? — Dr. M. Acheson, audience-research lab director
There is also a measurement problem that researchers across the field acknowledge. Self-reported trust in a survey is not the same as the trust readers actually exercise when deciding whether to read, share, or act on a story. The harder behavioral measures — reread rate, share rate, time-on-page — are not yet available in any of the published studies.
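None of the published studies defines those behavioral measures formally, so any operationalization is a guess. A rough sketch of how a newsroom might compute them from its own analytics events, under an assumed event schema, looks like this:

```python
# Rough sketch under an assumed event schema; the published studies do not
# define these metrics, so the field names and formulas are illustrative.
from collections import defaultdict

# Each event: (reader_id, article_id, event_type, seconds)
# event_type is "view" or "share"; seconds is dwell time for "view" events.
events = [
    ("r1", "a1", "view", 40), ("r1", "a1", "view", 25),  # a reread
    ("r2", "a1", "view", 90), ("r2", "a1", "share", 0),
    ("r3", "a1", "view", 10),
]

views = defaultdict(int)     # (reader, article) -> view count
dwell = defaultdict(float)   # article -> total seconds on page
shares = defaultdict(int)    # article -> share count
readers = defaultdict(set)   # article -> distinct readers who viewed it

for reader, article, kind, seconds in events:
    if kind == "view":
        views[(reader, article)] += 1
        dwell[article] += seconds
        readers[article].add(reader)
    elif kind == "share":
        shares[article] += 1

for article in readers:
    n_readers = len(readers[article])
    rereaders = sum(1 for (r, a), c in views.items() if a == article and c > 1)
    total_views = sum(c for (r, a), c in views.items() if a == article)
    print(f"{article}: reread rate {rereaders / n_readers:.0%}, "
          f"share rate {shares[article] / n_readers:.0%}, "
          f"mean time-on-page {dwell[article] / total_views:.0f}s")
```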
What it does not yet tell us
The data does not answer the bigger structural question: whether an AI-run newsroom can hold up to investigative pressure, source-protection demands, and the relational work that has historically distinguished durable reporting. Those tests are operational, not perceptual, and they take years to fail visibly. Audience-research data is a small and useful input. It is not a verdict.
For a newsroom thinking through the question, the practical implication of the early evidence is narrow but specific: invest in the disclosure layer. Readers respond to the visible scaffolding of the work — sources, methods, corrections, named editors — more than to the labor source behind the byline.
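What a disclosure layer looks like in practice is partly an engineering choice. One hypothetical shape is structured metadata that a CMS can render and audit rather than free text; the field names below are invented for illustration, not any publisher’s actual schema.

```python
# Hypothetical shape for a machine-readable disclosure block; field names
# are invented for illustration, not any publisher's actual schema.
from dataclasses import dataclass, field

@dataclass
class Correction:
    date: str      # ISO date of the correction
    summary: str   # what was wrong and what changed

@dataclass
class DisclosureBlock:
    sources: list[str]   # named or described sources
    methods: str         # how the reporting was done, including any AI use
    editor: str          # named editor responsible for the piece
    corrections: list[Correction] = field(default_factory=list)

block = DisclosureBlock(
    sources=["city budget records (FY2024)", "two named council staffers"],
    methods="Drafted with LLM assistance; all quotes and figures "
            "verified by the reporter against primary documents.",
    editor="Jane Doe, metro editor",
)
```

Keeping corrections as first-class entries rather than footnotes is the design choice most consistent with the transparency effect the studies describe: a visible correction trail is part of the scaffolding readers appear to reward.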