UK regulator Ofcom has published a discussion paper exploring the different tools and techniques that tech firms can use to help users identify deepfake AI-generated videos.
The paper explores the merits of four ‘attribution measures’: watermarking, provenance metadata, AI labels, and context annotations.
These four measures are designed to provide information about how AI-generated content has been created, and – in some cases – can indicate whether the content is accurate or misleading.
This comes as new Ofcom research reveals that 85% of adults support online platforms attaching AI labels to content, although only one in three (34%) have ever seen one. Deepfakes have been used for financial scams, to depict people in non-consensual sexual imagery and to spread disinformation about politicians.
The discussion paper is a follow-up to Ofcom’s first Deepfake Defences paper, published last July.
The paper includes eight key takeaways to guide industry, government and researchers:
- Evidence shows that, when deployed with care and proper testing, attribution measures can help users engage with content more critically.
- Users should not be left to identify deepfakes on their own, and platforms should avoid placing the full burden on individuals to detect misleading content.
- Striking the right balance between simplicity and detail is crucial when communicating information about AI to users.
- Attribution measures need to accommodate content that is neither wholly real nor entirely synthetic, communicating how AI has been used to create content and not just whether it has been used.
- Attribution measures can be susceptible to removal and manipulation. Ofcom’s technical tests show that watermarks can often be stripped from content following basic edits (see the illustrative sketch after this list).
- Greater standardisation across individual attribution measures could boost the efficacy and take-up of these measures.
- The pace of change means it would be unwise to make sweeping claims about attribution measures.
- Attribution measures should be used in combination with other interventions, such as AI classifiers and reporting mechanisms, to tackle the greatest range of deepfakes.
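To illustrate how fragile a naive watermark can be, the sketch below is a hypothetical example (it is not Ofcom’s test methodology): it embeds a watermark in the least-significant bits of an image and shows that a basic edit, simply re-saving the file as a JPEG, is enough to destroy it.

```python
# Hypothetical illustration of watermark fragility, not Ofcom's actual tests.
# A watermark is written into the least-significant bit (LSB) of each pixel's
# red channel, then the image is re-encoded as JPEG (a "basic edit").
from PIL import Image
import numpy as np

def embed_lsb(img: Image.Image, bits: np.ndarray) -> Image.Image:
    """Write watermark bits into the LSB of the red channel."""
    arr = np.array(img.convert("RGB"))
    red = arr[..., 0].ravel()                      # copy of the red channel
    red[: bits.size] = (red[: bits.size] & 0xFE) | bits
    arr[..., 0] = red.reshape(arr.shape[:2])
    return Image.fromarray(arr)

def extract_lsb(img: Image.Image, n_bits: int) -> np.ndarray:
    """Read the first n_bits watermark bits back out of the red channel."""
    arr = np.array(img.convert("RGB"))
    return arr[..., 0].ravel()[:n_bits] & 1

rng = np.random.default_rng(0)
watermark = rng.integers(0, 2, 1024, dtype=np.uint8)

original = Image.new("RGB", (256, 256), (128, 128, 128))
marked = embed_lsb(original, watermark)

# Lossless round-trip (PNG): the watermark survives intact.
marked.save("marked.png")
assert (extract_lsb(Image.open("marked.png"), watermark.size) == watermark).all()

# A basic edit -- re-saving as JPEG -- quantises away the LSBs.
marked.save("edited.jpg", quality=90)
recovered = extract_lsb(Image.open("edited.jpg"), watermark.size)
match = (recovered == watermark).mean()
print(f"bits recovered after JPEG re-encode: {match:.0%}")  # typically near chance (~50%)
```

Robust watermarking schemes embed signals in ways designed to survive compression and cropping, but as the paper notes, determined editing can still remove or degrade them.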
Ofcom said the research will also inform its policy development and supervision of regulated services under the Online Safety Act.
Tim Davie on “national asset” BBC World Service: “We should be doubling the funding”
The BBC World Service is a “UK national asset”, “important to its national defence and reputation”, for which the government “should be doubling the funding”, according to the organisation’s outgoing Director General, Tim Davie.
Canal+ launches AI-powered content search with OpenAI
The Canal+ app will roll out a search function powered by OpenAI technology in June 2026, enabling users to find content through natural language queries.
Documentary Film Council appoints Mandy Chang as CEO
The UK’s Documentary Film Council has named Mandy Chang as its first Chief Executive.
Head of Eurovision broadcaster ORF resigns
The Director General of Austrian national broadcaster ORF has resigned over allegations of sexual harassment, two months before the network is due to host the Eurovision Song Contest.
Sound body AMPS calls out impact of noisy LED film lighting
The Association of Motion Picture Sound (AMPS) has called on manufacturers and productions to consider the impact of noisy high-output LED film lighting on capturing performances on set.