YouTube expands AI deepfake detection tool to journalists

YouTube is expanding its AI-generated likeness detection tool to include journalists, government officials and political candidates, as the platform responds to growing concerns about deepfakes and impersonation online.

Copyright enforcement

The feature, first introduced last year to creators in the YouTube Partner Programme, allows individuals to identify and flag AI-generated videos that use their face or likeness without permission. The company says the tool functions in a similar way to its copyright enforcement system, Content ID, but focuses specifically on detecting synthetic media that replicates a person’s appearance.

If the system detects a possible match in AI-generated content, the individual is notified and can review the video. They may then request its removal if it violates the platform’s privacy policies.

However, detection does not automatically lead to takedown. YouTube said it will continue balancing identity protection with freedom of expression, noting that parody, satire and content created in the public interest may still be allowed on the platform—even when it depicts public figures.

The expansion comes amid increasing concerns about the misuse of generative AI to create convincing deepfakes of public figures, particularly in political and news contexts.

Pilot group

Initially, the programme will be rolled out to a pilot group of journalists, political candidates and government officials while YouTube refines the tool. The company says it plans to expand access more broadly in the coming months.

To prevent misuse, participants must verify their identity before enrolling. According to YouTube, the data used for verification will only support the safety feature and will not be used to train generative AI models developed by its parent company, Google.

The platform also reiterated its support for stronger legal protections against AI impersonation, citing the proposed NO FAKES Act, which would establish a federal right of publicity and create clearer rules around the use of a person's likeness in AI-generated content.
