1. Introduction
The evolution of AI in all its forms offers tremendous opportunities for creativity, innovation and improved productivity in every area of the ³ÉÈË¿ìÊÖ.
However, it can also pose significant risks. Wherever any form of AI - Generative AI or AI more broadly - is used to create, present or distribute content, it must comply with the ³ÉÈË¿ìÊÖ’s Editorial Guidelines and its editorial values.
The pace of change in artificial intelligence, the range of different uses and the potential absence of human oversight all raise serious challenges for its deployment by the ³ÉÈË¿ìÊÖ and how it manages its future development.
The ³ÉÈË¿ìÊÖ has made clear that its use of AI will always be in line with its public service values, that it will prioritise talent and creativity, and that it will be open and accountable.
³ÉÈË¿ìÊÖ use of AI must:
- never undermine the trust of audiences.
- always be transparent and accountable, with effective and informed human oversight.
- always be consistent with ³ÉÈË¿ìÊÖ editorial values, in particular accuracy, impartiality, fairness and privacy.
This Guidance outlines how AI can be used by the ³ÉÈË¿ìÊÖ, and by all those who supply content to it, in a manner consistent with the Editorial Guidelines and the ³ÉÈË¿ìÊÖ’s editorial values.
It explains where, when and how further advice should be obtained at any stage of any proposed editorial use of AI.
For staff and freelancers working for the ³ÉÈË¿ìÊÖ, it should be read alongside the ³ÉÈË¿ìÊÖ’s separate corporate guidance, which outlines key non-editorial issues to be considered and the processes which must be followed by all users of AI.
As experience, expertise and technology change, this guidance will be updated regularly.
2. What is Artificial Intelligence (AI)?
Artificial intelligence can be defined as a machine-based system that can perform tasks commonly associated with human intelligence, including making content, predictions and/or decisions.
Generative AI can be defined as a type of artificial intelligence capable of creating text, images, speech, music, video and code in response to prompts from a user.
3. Editorial issues in the use of AI
Whenever the use of AI is proposed, content creators, content curators or product teams must first consider whether both the deployment of AI in principle and the specific product or tool are appropriate for the task it is required to perform.
They should also be aware that AI may be integrated into tools provided by external suppliers or into tools that are openly available on the internet.
Any use of AI by the ³ÉÈË¿ìÊÖ in the creation, presentation or distribution of content must be consistent with the Editorial Guidelines, including the principles of impartiality, accuracy, fairness and privacy.
Any use of AI by the ³ÉÈË¿ìÊÖ in the creation, presentation or distribution of content must include active human editorial oversight and approval, appropriate to the nature of its use and consistent with the Editorial Guidelines.
For example, oversight of a recommendation engine may be at a high level to ensure that its output is consistent with the Editorial Guidelines. But where an AI is used in data analysis for a journalistic project, human oversight should engage with the detailed output.
In all cases, there must be a senior editorial figure who is responsible and accountable for overseeing its deployment and continuing use. Editorial line managers must also make sure they are aware of and effectively managing any use of AI by their teams.
Any use of AI by the ³ÉÈË¿ìÊÖ in the creation, presentation or distribution of content must be transparent and clear to the audience. The audience should be informed in a manner appropriate to the context and it may be helpful to explain not just that AI has been used but how and why it has been used.
Particular care should be taken around the use of AI-generated content intended for audiences under 18.
4. Algorithmic bias and training data
The outcomes produced by AI are determined by both the algorithm behind it and the data that it has been trained on. Both the algorithm and the training data may introduce biases or inaccuracies into the outcomes of the AI.
For example, an early use of facial recognition software in a UK passport renewal system found it difficult to identify some skin tones, which made it impossible for those individuals to renew their passport online.
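To make the mechanism concrete, here is a minimal, hypothetical Python sketch (all names and numbers are invented; no real system is implied) showing how an imbalance in training data alone can skew outcomes for an under-represented group:

```python
import random

random.seed(0)

def make_samples(group, n):
    """Synthetic 'sensor readings' for two groups: group B sits in a range
    the model rarely sees because the training set under-represents it."""
    centre = 0.7 if group == "A" else 0.3
    return [centre + random.uniform(-0.1, 0.1) for _ in range(n)]

# Training data is 95% group A: the imbalance, not the algorithm,
# is what skews the learned threshold.
train = make_samples("A", 950) + make_samples("B", 50)

# "Training": accept anything close to the overall training mean.
mean = sum(train) / len(train)

def is_recognised(reading, tolerance=0.15):
    return abs(reading - mean) <= tolerance

# A balanced test set exposes the disparity the skewed training hid.
for group in ("A", "B"):
    test = make_samples(group, 1000)
    rate = sum(is_recognised(r) for r in test) / len(test)
    print(f"group {group}: recognised {rate:.0%}")
```

However sophisticated the model, the disparity persists until the training data itself is addressed.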
Any proposed use of AI must consider whether inherent biases affect its deployment by the ³ÉÈË¿ìÊÖ and therefore whether it is an appropriate tool.
5. Hallucinations
Generative AI operates by predicting likely responses to queries or instructions, based on the nature of its algorithm and training data, rather than providing content or answers that are necessarily factually accurate.
Any proposed use of generative AI must take into account the potential that content presented as accurate may in reality be a creation of the algorithm: a ‘hallucination’ or fabrication with no basis in fact.
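As a toy illustration of why likelihood is not accuracy, the sketch below uses a deliberately simple bigram model over an invented corpus, far simpler than any real generative AI; it can assemble a grammatical, plausible sentence that the corpus never states:

```python
from collections import Counter, defaultdict
import random

corpus = (
    "the report was published in 2019 . "
    "the report was written by the committee . "
    "the committee was chaired by dr smith ."
).split()

# Count which word follows which in the training text.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(word, length=8):
    """Repeatedly sample a statistically likely next word."""
    out = [word]
    for _ in range(length):
        candidates = following.get(out[-1])
        if not candidates:
            break
        out.append(random.choices(list(candidates), candidates.values())[0])
    return " ".join(out)

random.seed(1)
print(generate("the"))
# Can print e.g. "the report was chaired by dr smith", which is fluent
# and statistically plausible but asserted nowhere in the corpus.
```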
6. Plagiarism and mimicry
Similarly, generative AI may simply adapt content from a web search or from a database of trusted content and present it as original.
Any proposed use of AI must take into account the potential that content presented as original may in reality be plagiarised or mimicked.
In its use of AI, the ³ÉÈË¿ìÊÖ also has a responsibility both to consider the rights of creators and artists and to avoid jeopardising the role they play in the wider creative community. Any use of AI must consider the rights of talent and contributors, while also allowing for the creative use of new forms of expression.
7. Non-editorial issues in the use of AI
There are also important non-editorial issues that must be taken into account in any proposed use of AI.
There may be legal and commercial rights issues affecting whether the user or the developer of the AI owns, or is liable for, any output it creates.
Similarly any information input into an AI may be used by the developer of that AI to train it further or be shared with third parties.
This may raise significant data protection or information security concerns around, for example, confidential or commercial information, copyright-protected content, or personal data that is input into a tool, produced by it, or present in its outputs.
8. Seeking guidance
For staff and freelancers working for the ³ÉÈË¿ìÊÖ, a proposal to use AI must first be referred to a senior editorial figure, who should consult Editorial Policy. Editorial Policy may refer proposed uses and questions, particularly those raising non-editorial issues, to the AI Risk Advisory Group (AIRA).
AIRA includes subject matter experts on AI risk from across the ³ÉÈË¿ìÊÖ, including legal, data protection, commercial and business affairs, and information security, as well as Editorial Policy. This multi-disciplinary approach reflects the range of different issues that inform many deployments of AI. AIRA is able to give detailed advice on both the editorial and non-editorial risks in the use of AI.
9. Use cases
9.1 Using Generative AI to create content
Generative AI should not be used to directly create news content published or broadcast by ³ÉÈË¿ìÊÖ News/Nations, current affairs or factual journalism unless it is the subject of the content and its use is illustrative. Exemptions, such as in the creation of graphics, may be considered subject to a piloting process.
Using AI tools to create media may otherwise be considered where their use does not challenge the editorial meaning of the content, distort the meaning of events, alter the impact of genuine material or otherwise materially mislead audiences.
Examples of acceptable use might include creating a synthesised voice to deliver text-based content, where it does not seek to replicate the voice of another individual, or a ‘deepfake’ face used to preserve anonymity in a documentary.
News, current affairs and factual journalism video and still images must not be manipulated beyond a sympathetic crop and minor adjustments to brightness and contrast.
Any generative AI options provided in editing software, such as (but not only) ‘generative fill’, which allow the addition or removal of content in images or video, should only be employed where their use would not materially mislead audiences.
Whenever these techniques are used they should be signalled to the audience in an appropriate manner.
9.2 Using AI to support editorial production or research
Generative AI or AI-driven tools may be considered for use as part of the production process where they do not directly create content for publication but provide information, insight or analysis that might aid that process.
However, it is important to consider the level of automation relative to the editorial significance of the task being performed. Using generative AI to spark creativity, such as through storyboarding or ideation techniques, may present a low risk.
On the other hand, using it to analyse data may produce different or potentially inappropriate editorial outputs depending on the specific algorithm and the data it was trained on.
Similarly, any use of transcription technologies requires careful human editorial oversight, as sketched below.
Any use of generative AI or AI-driven tools must be actively monitored, and outcomes must be assessed by human editorial oversight before they are employed in ³ÉÈË¿ìÊÖ content.
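As a purely hypothetical sketch of what that human gate could look like for transcription, the snippet below (the segment format and threshold are invented assumptions, not any real transcription API) flags low-confidence segments for an editor rather than passing them straight through:

```python
# Route low-confidence transcript segments to a human editor instead of
# publishing them automatically.
segments = [
    {"text": "The minister announced the policy", "confidence": 0.97},
    {"text": "at nine thirty this morning", "confidence": 0.91},
    {"text": "in the [unclear] committee room", "confidence": 0.41},
]

REVIEW_THRESHOLD = 0.85  # illustrative; a real value needs editorial sign-off

for seg in segments:
    needs_review = seg["confidence"] < REVIEW_THRESHOLD
    status = "NEEDS HUMAN REVIEW" if needs_review else "ok"
    print(f"{status:>18}: {seg['text']} ({seg['confidence']:.2f})")
```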
Editorial Policy, and potentially the AI Risk Advisory Group, should be consulted whenever it is proposed to use AI tools in this way, whether they have been developed internally or acquired from external sources.
Any external AI tools must be authorised in line with existing ³ÉÈË¿ìÊÖ software authorisation or procurement processes before use.
9.3 Using AI to distribute or curate content
The ³ÉÈË¿ìÊÖ is already using AI in personalisation and recommendation engines to curate content for audiences on platforms like iPlayer and Sounds.
Products that distribute content in this way are considered to be an editorial experience and are therefore subject to editorial approval and human oversight.
As more, and more sophisticated, products are developed, they must be consistent with ³ÉÈË¿ìÊÖ editorial values including impartiality, fairness, and harm and offence.
There are particular considerations around reporting crime and court cases, where contempt issues may be a risk if content is recommended that may be prejudicial to a fair trial. Advice should be sought from the programme legal team.
There are also heightened risks to impartiality during election periods.
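As one hypothetical illustration of high-level oversight of such engines (field names and rules are invented; nothing here reflects actual ³ÉÈË¿ìÊÖ systems), the sketch below applies human-maintained editorial deny rules, covering, for example, contempt and election sensitivities, to a recommender's ranked output before it is served:

```python
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    tags: set

# Owned and reviewed by editorial staff; the code only enforces decisions
# a responsible senior editorial figure has already made.
EDITORIAL_BLOCKS = {
    "active-court-case",   # contempt risk while proceedings are live
    "election-sensitive",  # heightened impartiality risk in election periods
}

def editorially_safe(item: Item) -> bool:
    return not (item.tags & EDITORIAL_BLOCKS)

def recommend(ranked_candidates: list, limit: int = 3) -> list:
    """Keep the engine's ranking, but drop editorially blocked items."""
    return [i for i in ranked_candidates if editorially_safe(i)][:limit]

ranked = [
    Item("Drama boxset", {"entertainment"}),
    Item("Trial coverage", {"news", "active-court-case"}),
    Item("Election debate", {"news", "election-sensitive"}),
    Item("Nature documentary", {"factual"}),
]
for item in recommend(ranked):
    print(item.title)  # prints: Drama boxset, Nature documentary
```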
9.4 Use of AI by third parties, including independent producers
The ³ÉÈË¿ìÊÖ must take particular care about how AI may have been used in content acquired from or supplied to it by third parties, whether that use was deliberate or inadvertent.
Independent production companies or anyone commissioned to make content for the ³ÉÈË¿ìÊÖ that involves the use of AI must do so in a manner that is consistent with the Editorial Guidelines and the relevant processes outlined in this guidance, including the principles of impartiality, accuracy, fairness and privacy.
A senior editorial figure who is responsible for compliance within the production team should be responsible and accountable for its use of AI.
Any proposed use of AI, where there may be a material impact on content for ³ÉÈË¿ìÊÖ audiences, should be discussed as part of the commissioning process. Independent production companies should contact their commissioning executive, who may in turn consult Editorial Policy, whenever they need guidance.
The ³ÉÈË¿ìÊÖ should also be aware of the potential use of AI in acquired content and ensure that its broadcast or publication is in line with this guidance.
³ÉÈË¿ìÊÖ producers must also be mindful of the use of AI or synthetic media in material from external sources being used as part of ³ÉÈË¿ìÊÖ content, for example in user-generated content. They must always authenticate user-generated content carefully and should consult experts in the UGC Hub/³ÉÈË¿ìÊÖ Verify.