The Guardian has updated its editorial code of practice to clarify how journalists may use generative artificial intelligence. The policy emphasises human oversight, transparency with readers and senior editorial approval for significant AI use, while introducing staff training and newsroom tools designed to support reporting without replacing journalists’ expertise.
The update establishes clearer standards for how generative artificial intelligence may be used in the newspaper's journalism, reinforcing the principle that AI should support reporters rather than replace them and must always operate under human oversight.
The revised guidelines, part of an editorial code amended in March 2026, set out how journalists across Guardian News & Media may incorporate emerging AI technologies into their reporting while maintaining the organisation's long-standing emphasis on accuracy, transparency and editorial accountability. The move comes as news organisations worldwide grapple with integrating generative AI into newsroom workflows without compromising public trust.
Central to the updated policy is the insistence that journalists remain fully responsible for the work published under their bylines. The document states that while AI tools can assist in certain stages of the reporting process, they must never substitute for the judgement, verification and ethical responsibility of human reporters. “Gen AI can be used to enhance our journalists’ expertise but not replace it; active human oversight and control is essential,” the code explains. It further stresses that any substantial use of generative AI in published journalism must be explicitly approved by a senior editor and clearly communicated to readers.
The update forms part of a broader effort by the Guardian to ensure its editorial practices evolve alongside rapidly developing technology while preserving the core values that underpin its journalism. The publication’s editorial code—first formalised to articulate professional and ethical standards—has long emphasised that trust between the organisation and its readers is its most valuable asset. The introduction to the updated document reiterates that maintaining this trust remains the overriding objective of the newsroom’s rules and guidance.
Guardian leadership says the latest revision reflects a careful balance between innovation and caution. In a joint statement included in the updated guidance, chief executive Anna Bateson and editor-in-chief Katherine Viner emphasised that the organisation’s approach to AI would remain deliberate and transparent. “If we wish to include significant elements generated by AI in a piece of work, we will only do so with clear evidence of a specific benefit, human oversight, and the explicit permission of a senior editor,” they wrote. “We will be open with our readers when we do this.”
Their comments echo concerns shared across the journalism industry about the reliability and ethical implications of generative AI. The Guardian’s guidance notes that such technologies can introduce inaccuracies, bias or copyright issues, and may produce misleading material if used without rigorous scrutiny. Because of these risks, the code stresses that the use of AI must be accompanied by “absolute rigour and responsibility,” and journalists remain accountable for verifying all material generated or assisted by AI systems.
Alongside the new rules, the Guardian has launched several initiatives aimed at helping its newsroom adapt responsibly to AI-driven tools. Chris Moran, the organisation’s head of editorial innovation and editorial lead on generative AI, outlined three major steps being taken to integrate the technology into the editorial workflow.
The first is a mandatory training programme for all staff members. The course is designed to ensure journalists understand both the capabilities and the limitations of generative AI systems. By making the training compulsory, the Guardian aims to equip reporters, editors and producers with the knowledge needed to use AI tools carefully and ethically.
The second measure focuses on transparency with readers. Any significant use of generative AI—whether in creating illustrative images, analysing large datasets or supporting other aspects of a story—must be clearly signalled within the published piece. The newsroom believes that openly acknowledging the role of AI in journalistic production is essential to maintaining credibility in an era when automated tools are becoming increasingly common.
The third element involves the development of internal AI tools tailored specifically for the newsroom. Rather than relying solely on publicly available systems, the Guardian has begun building its own tools aligned with its editorial standards and style guidelines. These tools are intended to streamline routine tasks without affecting the core reporting process.
Among the early examples is a suggestion tool designed to assist journalists in writing alt text, the descriptive text that makes images accessible to readers who rely on screen-reading technology. Other tools include internal research systems capable of searching the Guardian’s extensive archive, analysing parliamentary documents and transcribing audio recordings into text.
Such tools, Moran explained, are intended to free journalists from time-consuming administrative tasks so they can focus more fully on reporting, analysis and storytelling. By embedding editorial standards into the design of these systems, the Guardian hopes to avoid many of the pitfalls associated with the unchecked use of external AI services.
The organisation’s approach reflects a broader trend among media companies that are experimenting with AI while simultaneously setting guardrails for its use. Over the past two years, generative AI models capable of producing text, images and audio have rapidly entered mainstream professional environments. Newsrooms have been particularly cautious, aware that inaccuracies produced by automated systems could undermine credibility and damage public trust.
The Guardian’s editorial code explicitly warns that generative AI systems are “not reliable or consistent” and may introduce errors unpredictably, making human oversight indispensable.
In many ways, the updated guidelines extend principles that have long been part of the newspaper’s editorial philosophy. The code emphasises accuracy, fairness and accountability as core duties of journalists, and these standards apply equally when technology becomes part of the reporting process. Even when AI tools are used, the journalist responsible for the piece must ensure the information is verified and ethically obtained.
The Guardian has also embedded the AI guidance within a wider editorial framework covering issues such as privacy, harassment, the treatment of vulnerable individuals and the protection of confidential sources. By integrating AI rules into this established code rather than issuing them as separate guidelines, the organisation signals that the technology should be governed by the same ethical principles that apply to every other aspect of reporting.
Industry observers say the policy represents a pragmatic attempt to adapt journalism to a rapidly evolving technological landscape. While some newsrooms have embraced automation more aggressively, others have warned that generative AI could flood the information ecosystem with unreliable or misleading content. The Guardian’s stance positions the technology as a supportive tool rather than a replacement for editorial judgement.
Ultimately, the publication believes the relationship between journalists and readers must remain rooted in trust. As the updated code emphasises, audiences expect that work appearing under a journalist’s byline has genuinely been authored and verified by that individual. The use of AI may expand the toolkit available to reporters, but responsibility for the final product—and for maintaining the integrity of the journalism—remains firmly human.