UK regulator Ofcom has published a discussion paper exploring the different tools and techniques that tech firms can use to help users identify AI-generated deepfake videos.
The paper explores the merits of four ‘attribution measures’: watermarking, provenance metadata, AI labels, and context annotations.
These four measures are designed to provide information about how AI-generated content has been created, and – in some cases – can indicate whether the content is accurate or misleading.
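To illustrate what provenance metadata can carry, here is a simplified, hypothetical record loosely modelled on standards such as C2PA, which is widely used for content provenance. The field names below are illustrative and are not the exact C2PA schema.

```python
# Hypothetical, simplified provenance record (illustrative field names,
# not the real C2PA schema). The point is that provenance metadata can
# record *how* AI was used, not merely whether it was used.
provenance = {
    "claim_generator": "ExampleAI/1.0",  # tool that produced the asset (hypothetical name)
    "actions": [
        {"action": "created", "source": "trainedAlgorithmicMedia"},  # fully synthetic origin
        {"action": "edited", "tool": "ExamplePhotoEditor"},          # subsequent human edit
    ],
    "signature": "<base64-encoded signature>",  # cryptographic signature binding the record to the file
}
```

A record like this travels with the file, so a platform can surface it as a label or context annotation without having to detect AI use itself.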
This comes as new Ofcom research reveals that 85% of adults support online platforms attaching AI labels to content, although only one in three (34%) have ever seen one. Deepfakes have been used for financial scams, to depict people in non-consensual sexual imagery and to spread disinformation about politicians.
The discussion paper is a follow-up to Ofcom’s first Deepfake Defences paper, published last July.
The paper includes eight key takeaways to guide industry, government and researchers:
- Evidence shows that attribution measures, when deployed with care and proper testing, can help users engage with content more critically.
- Users should not be left to identify deepfakes on their own, and platforms should avoid placing the full burden on individuals to detect misleading content.
- Striking the right balance between simplicity and detail is crucial when communicating information about AI to users.
- Attribution measures need to accommodate content that is neither wholly real nor entirely synthetic, communicating how AI has been used to create content and not just whether it has been used.
- Attribution measures can be susceptible to removal and manipulation. Ofcom’s technical tests show that watermarks can often be stripped from content following basic edits (see the sketch after this list).
- Greater standardisation across individual attribution measures could boost their efficacy and take-up.
- The pace of change means it would be unwise to make sweeping claims about attribution measures.
- Attribution measures should be used in combination with other interventions, such as AI classifiers and reporting mechanisms, to tackle the greatest range of deepfakes.
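The watermark fragility finding is easy to reproduce with a toy example. The Python sketch below is an illustration only, not Ofcom's test methodology: it embeds a naive least-significant-bit (LSB) watermark in an image, then shows that a single basic edit, a lossy JPEG re-encode, reduces the recovered mark to roughly chance level. All function names here are ours.

```python
# Toy demonstration (not Ofcom's methodology): a naive LSB watermark
# survives a lossless round trip but is destroyed by one lossy re-encode.
import io
import numpy as np
from PIL import Image

def embed_lsb(img: Image.Image, bits: np.ndarray) -> Image.Image:
    """Write watermark bits into the least significant bit of each pixel."""
    pixels = np.array(img.convert("L"), dtype=np.uint8)
    flat = pixels.flatten()
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return Image.fromarray(flat.reshape(pixels.shape))

def extract_lsb(img: Image.Image, n: int) -> np.ndarray:
    """Read back the first n least-significant bits."""
    pixels = np.array(img.convert("L"), dtype=np.uint8)
    return pixels.flatten()[:n] & 1

rng = np.random.default_rng(0)
watermark = rng.integers(0, 2, 1024, dtype=np.uint8)

original = Image.new("L", (128, 128), color=128)
marked = embed_lsb(original, watermark)

# Lossless round trip: the watermark survives intact.
assert np.array_equal(extract_lsb(marked, watermark.size), watermark)

# A "basic edit": re-save as lossy JPEG and reload.
buf = io.BytesIO()
marked.save(buf, format="JPEG", quality=85)
edited = Image.open(buf)

recovered = extract_lsb(edited, watermark.size)
match = (recovered == watermark).mean()
print(f"Bits surviving JPEG re-encode: {match:.0%}")  # roughly 50%, i.e. chance level
```

Production watermarking schemes embed signals in frequency or perceptual domains precisely to resist edits like this, though Ofcom's tests suggest robustness remains a live problem even for stronger designs.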
Ofcom said the research will also inform its policy development and supervision of regulated services under the Online Safety Act.
Alex Mahon and Charlotte Moore receive New Year Honours
Former Channel 4 Chief Executive Alex Mahon and ex-BBC Chief Content Officer Charlotte Moore are among the recipients in the 2026 New Year Honours list.
Women directed 8% of top 100 movies in 2025
The representation of women directors of the top films at the North American box office dropped significantly in 2025, according to the latest study from the University of Southern California (USC) Annenberg Inclusion Initiative.
WBD likely to reject Paramount's latest hostile bid
Warner Bros. Discovery (WBD) is likely to reject Paramount Skydance's $108.4bn hostile bid, according to reports.
FACT and UK police warn illegal streamers
The Federation Against Copyright Theft (FACT) has contacted over a thousand individuals across the UK, warning them to immediately cease using illegal TV streaming services or face the risk of prosecution.
UK actors vote to refuse being digitally scanned on set
Members of the UK performer union Equity working in film and TV have voted by a landslide 99.6% to refuse to be digitally scanned on set, in an effort to secure artificial intelligence protections.