New York Takes Aim at AI in the Newsroom, Testing Where Automation Ends and Editorial Judgment Begins
When a reporter recently asked a newsroom chatbot to suggest the lead story for that day's news report, the software declined.
Instead of offering a headline, the system explained that its training ended in late 2024 and it did not have access to current wire services or internal databases. Choosing a 2026 lead, it said, would risk "fabricating developments, dates, or sources." It suggested that a human editor select a verified story and use the bot only to help with background and structure.
The exchange captured a tension now playing out across newsrooms, legislatures and living rooms: whether artificial intelligence should remain a humble assistant behind the scenes or be allowed to help drive what the public reads, watches and hears.
Generative AI tools are already embedded in much of the news business, from transcription and translation to summarizing government meetings. At the same time, lawmakers in New York are advancing one of the first bills in the United States aimed specifically at how news organizations deploy the technology, even as research suggests AI-written stories are spreading with little or no disclosure and audience trust remains fragile.
The result is a three-way struggle over who controls the "co-pilot" in the newsroom: journalists and their unions, technology companies building the tools, and state officials who say they want to protect the public.
A tool that knows when to say no
News organizations and AI vendors describe large language models as aids to speed and efficiency, not as replacements for reporters.
"These tools can help with accuracy, fairness and speed," Amanda Barrett, vice president for standards and inclusion at The Associated Press, said when the wire service updated its AI guidance. "But the central role of the AP journalist as the news gatherer, reporter, writer, and editor ... will not change. We view them as tools to be used with care, not a replacement of journalists in any way."
The technical limits help explain why. Most commercial systems are trained on large, static collections of text and then "frozen." They do not automatically see today's wire copy, court filings or local police reports. Unless they are tethered to live databases, they answer questions by predicting the most likely next word based on older patterns in their training data.
That makes them prone to "hallucinations," fluent but inaccurate statements, especially when asked about recent or obscure events. Media standards groups have warned that this behavior is particularly dangerous in news, where fabricated casualty counts or invented quotes can carry legal and reputational consequences.
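The behavior the reporter encountered can be approximated in a few lines of Python. This is a minimal illustrative sketch, not any newsroom's or vendor's actual product logic; the cutoff date, the live-sources flag and the deferral wording are assumptions chosen to mirror the exchange described above.

    from datetime import date

    # Illustrative only: the cutoff date, the live-sources flag and the deferral
    # message are assumptions, not any vendor's real configuration.
    TRAINING_CUTOFF = date(2024, 12, 31)

    def answer_or_defer(question_date, has_live_sources, draft_answer):
        """Return the model's draft only when it can be grounded; otherwise defer to an editor."""
        if question_date > TRAINING_CUTOFF and not has_live_sources:
            # Without wire copy or internal databases, "facts" about recent events would be
            # guesses predicted from older training patterns, i.e. potential hallucinations.
            return ("I can't verify developments after my training cutoff. "
                    "A human editor should choose the lead; I can help with background and structure.")
        return draft_answer

    # Asked for a 2026 lead story with no live feeds attached, the sketch defers.
    print(answer_or_defer(date(2026, 2, 5), False, "Proposed lead: ..."))

The point of such a rule is simply to make deferral the default whenever the question outruns the data the system can actually see.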
The AP instructs its journalists to treat generative AI output as "unvetted source material," subject to the same verification rules as any tip or email. Other outlets, including Wired and The Guardian, say they will not publish AI-generated stories under their bylines except in rare cases where the use of AI is itself the subject of the piece.
Yet an academic review of 186,000 articles from American newspapers in 2025 found that roughly 9% bore detectable signs of being at least partly generated by AI. The analysis reported that usage was highest in smaller local outlets and in lower-prestige beats such as routine weather, technology and market updates. Only a small fraction of those stories included any notice to readers that software had helped produce the text.
A skeptical public
Surveys suggest that the public is uneasy with that direction.
Nearly half of Americans say they do not want news written by generative AI at all, according to a 2025 study by Poynter and the University of Minnesota. About one in five respondents said publishers should not use the technology in news production under any circumstances. More than half reported having little or no confidence in news articles or photographs created with AI.
Those most engaged with journalism, people who follow news closely and can correctly answer basic questions about how it works, are also the most insistent on transparency. More than 9 in 10 in that group said outlets should clearly label text or images that relied on AI.
News organizations and training groups have responded by rolling out toolkits to help explain AI use in plain language. One such project urges editors to go beyond boilerplate and describe specifically where AI is used, for example to generate closed captions or translate stories, and where it is not.
"If we don't talk with our audiences about how we are using AI, we can very quickly lose trust and credibility," one of the toolkit's authors warned in launching the effort.
At the same time, some research indicates that mandatory AI labels can lower perceived credibility even for accurate work, raising questions about whether blanket disclosure rules might backfire.
New York moves to mandate labels and human review
In early February, legislators in New York introduced a proposal that would effectively try to codify the "assistant, not author" model in state law.
The bill, known as the New York Fundamental Artificial Intelligence Requirements in News Act, would require any news story "substantially composed, authored, or created" using generative AI to carry a visible disclaimer. It would mandate that AI-assisted content be reviewed and approved by a human with editorial authority before publication.
The measure would also require media employers to tell staff how and when AI systems are used in their work and would bar using employees' journalism to train AI tools without notice. It calls for safeguards to prevent confidential material, including the identities of protected sources, from being uploaded to external AI systems.
State Sen. Pat Fahy and Assemblywoman Nily Rozic, both Democrats, are sponsoring the bill. They argue it is needed to protect both the integrity of the news and the workers who produce it.
Unions including the Writers Guild of America East and the NewsGuild of New York have lined up in support, saying the law would help prevent publishers from quietly automating local coverage and would keep journalists' work from being repurposed to train systems that might later displace them.
First Amendment scholars and media lawyers have raised alarms. Some warn that forcing news outlets to label certain stories as AI-assisted amounts to compelled speech: government-mandated messaging about how journalism must be produced.
They also question whether the state should have a say in newsroom processes at all, beyond general labor and business regulation. If lawmakers can dictate when an article must carry an AI disclosure, critics ask, could they later require labels for data-driven stories, specific sourcing practices or other editorial choices?
The broader cost of AI-assisted news
The New York Legislature is weighing a second AI-related bill that reaches outside the newsroom and into the power grid.
A separate proposal would impose a three-year moratorium on new permits for large data centers, citing concerns that rapid growth in AI and cloud computing could add roughly 10 gigawatts of electricity demand in the state over the next five years. Utilities have warned of rising rates tied in part to energy-intensive computing workloads.
For journalism, that debate underscores that AI tools are not just software features but physical infrastructure. Transcribing a city hall meeting or summarizing a court docket with AI depends on sprawling server farms that draw significant amounts of energy and water. Communities near those sites can experience higher noise levels and environmental impacts without necessarily sharing in the economic benefits of highâtech jobs.
Choosing who chooses
For now, major outlets say their rules are clear: reporters remain responsible for every word under their bylines, even when AI has helped along the way.
In updated guidance last year, the AP said its journalists may use generative AI to suggest headlines, summarize their own stories or translate English-language reports into Spanish, but only with human editing and prominent labels noting that automation was involved. AI-generated images or video are allowed only when the artificial nature of the material is central to the story and clearly disclosed.
Local newsrooms under financial strain face stronger incentives to lean on automation. Industry trainers say that makes basic AI literacy increasingly essential for reporters and editors: understanding how systems work, where they fail and how to verify their output.
The chatbot that declined to pick a lead did so because its designers gave it a rule: when in doubt about recent facts, defer to a human. As legislatures and publishers wrestle with the scope of new laws and policies, the question is whether that instinct, to say no when the machine does not know, will remain a product choice, a professional standard, or a matter of statute.