UK regulator Ofcom has published a discussion paper exploring the tools and techniques that tech firms can use to help users identify AI-generated deepfake videos.
The paper explores the merits of four ‘attribution measures’: watermarking, provenance metadata, AI labels, and context annotations.
These four measures are designed to provide information about how AI-generated content has been created, and – in some cases – can indicate whether the content is accurate or misleading.
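Provenance metadata of the kind the paper discusses typically travels as a signed manifest attached to the media file, recording how the content was made and edited. A minimal sketch, loosely modelled on C2PA-style manifests (the field values here are illustrative and not a complete or valid C2PA record):

```python
import json

# Illustrative provenance record, loosely modelled on a C2PA-style manifest.
# Field values are simplified for illustration; a real manifest is a signed
# binary structure embedded in or referenced by the media file.
provenance = {
    "claim_generator": "ExampleVideoTool/2.1",  # hypothetical tool name
    "assertions": [
        {
            "label": "c2pa.actions",
            "data": {
                "actions": [
                    # Records that the asset was generated by a trained model...
                    {"action": "created",
                     "digitalSourceType": "trainedAlgorithmicMedia"},
                    # ...and subsequently edited with another tool.
                    {"action": "edited",
                     "softwareAgent": "ExampleEditor/1.0"},
                ]
            },
        }
    ],
    # In practice a cryptographic signature binds the record to the file,
    # so tampering with either invalidates the claim.
    "signature": "<signature over manifest and asset hash>",
}

print(json.dumps(provenance, indent=2))
```

A platform reading such a manifest could surface an AI label or context annotation automatically, rather than relying on uploaders to self-declare.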
This comes as new Ofcom research reveals that 85% of adults support online platforms attaching AI labels to content, although only one in three (34%) have ever seen one. Deepfakes have been used for financial scams, to depict people in non-consensual sexual imagery and to spread disinformation about politicians.
The discussion paper is a follow-up to Ofcom’s first Deepfake Defences paper, published last July.
The paper includes eight key takeaways to guide industry, government and researchers:
- Evidence shows that, when deployed with care and proper testing, attribution measures can help users engage with content more critically.
- Users should not be left to identify deepfakes on their own, and platforms should avoid placing the full burden on individuals to detect misleading content.
- Striking the right balance between simplicity and detail is crucial when communicating information about AI to users.
- Attribution measures need to accommodate content that is neither wholly real nor entirely synthetic, communicating how AI has been used to create content and not just whether it has been used.
- Attribution measures can be susceptible to removal and manipulation. Ofcom’s technical tests show that watermarks can often be stripped from content following basic edits.
- Greater standardisation across individual attribution measures could boost the efficacy and take-up of these measures.
- The pace of change means it would be unwise to make sweeping claims about attribution measures.
- Attribution measures should be used in combination with other interventions, such as AI classifiers and reporting mechanisms, to tackle the greatest range of deepfakes.
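Ofcom's finding that watermarks can often be stripped by basic edits is easy to reproduce with a deliberately naive scheme. The sketch below (a toy least-significant-bit watermark, not a scheme Ofcom tested; robust watermarks embed signals in frequency or feature domains instead) shows the mark surviving embedding intact but collapsing to chance level after a simple re-quantisation of the image:

```python
import numpy as np

def embed_lsb_watermark(image, bits):
    """Hide watermark bits in the least significant bit of the first pixels."""
    wm = image.copy()
    flat = wm.reshape(-1)
    flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | bits
    return wm

def extract_lsb_watermark(image, n):
    """Read back n watermark bits from the least significant bits."""
    return image.reshape(-1)[:n] & 1

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # toy greyscale image
bits = rng.integers(0, 2, size=256, dtype=np.uint8)          # 256-bit watermark

marked = embed_lsb_watermark(image, bits)
# Before any edit, the watermark is recovered perfectly.
assert np.array_equal(extract_lsb_watermark(marked, 256), bits)

# A "basic edit": coarse re-quantisation, similar in effect to lossy
# re-compression. This zeroes the low bits and destroys the mark.
edited = (marked // 8) * 8
recovered = extract_lsb_watermark(edited, 256)
survival = float(np.mean(recovered == bits))
print(f"watermark bits matching after edit: {survival:.0%}")
```

Around half the bits match after the edit, which is exactly what random guessing would give: the watermark carries no information once the low bits are disturbed.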
Ofcom said the research will also inform its policy development and supervision of regulated services under the Online Safety Act.
BBC to cut 2,000 jobs: "Put simply, the gap between our costs and our income is growing"
In an internal all-staff call held today, Rhodri Talfan Davies, Interim Director General of the BBC, revealed that the organisation is planning to cut between 1,800 and 2,000 jobs.
AJA to acquire video encoding company Comprimato
AJA Video Systems has agreed to acquire Comprimato, a live video encoding and processing software provider for virtualised and cloud productions and broadcasts.
Spain’s LaLiga teams with Fastly to target streaming piracy
LaLiga is collaborating with San Francisco-based edge cloud platform provider Fastly to develop technical solutions to address illegal streaming of live sports, with a special focus on the Spanish league’s football matches.
Women's elite sports revenues to reach $3bn in 2026
Global revenues in women’s elite sports will reach at least $3bn (£2.2bn) for the first time in 2026, according to new research by consultancy Deloitte.
SVOD market entering a ‘more disciplined phase’ – report
Global SVOD subscriptions have reached 2.2 billion worldwide and are on track to achieve 2.6 billion by 2030, according to Futuresource Consulting.