Meta To Start Labelling AI-Generated Content On Its Platforms

Meta, the parent company of Facebook, has unveiled significant updates to its policies on digitally created and altered media. This development arrives as the US gears up for elections, highlighting the growing concern over the dissemination of deceptive content created by advanced artificial intelligence (AI) technologies.

The tech giant announced that starting in May, it would begin tagging AI-generated videos, images, and audio on its platforms with "Made with AI" labels. This move, detailed in a blog post by Monika Bickert, Vice President of Content Policy at Meta, marks a broadening of the company's policy, which previously focused on a limited category of altered videos.

Additionally, Meta plans to introduce more noticeable labels for digitally altered content that carries a "particularly high risk of materially deceiving the public on a matter of importance," irrespective of whether AI or other methods were used in its creation.

This new direction represents a shift away from Meta's earlier approach, which primarily concentrated on removing specific misleading posts. The revised strategy aims to retain the controversial content on the platform while providing users with insights into the creation process of such media. Earlier, Meta had also discussed a strategy to identify images created using generative AI tools from other companies through invisible markers embedded in the files, though a specific commencement date was not provided.

A Meta spokesperson informed Reuters that the updated labeling policy would be applied to content shared across Facebook, Instagram, and Threads. The company's other services, including WhatsApp and Quest virtual reality headsets, are covered by different rules. The more prominent "high-risk" labels are set to be implemented immediately, according to the spokesperson.

The policy updates precede the US presidential election in November, a period that tech researchers predict could see a significant impact from new generative AI technologies. Political campaigns, not just in the US but globally, have begun to utilize AI tools, testing the limits of guidelines established by entities like Meta and OpenAI, the leading name in generative AI technology.

In a notable instance from February, Meta’s oversight board criticized the company's then-current rules on manipulated media as "incoherent". This critique came in the wake of a review of a manipulated video of US President Joe Biden, which remained on Facebook. The video falsely suggested inappropriate behavior by altering real footage. Under Meta’s prevailing "manipulated media" policy at that time, misleadingly altered videos were prohibited only if they were AI-generated or depicted individuals uttering words they did not actually say.

The oversight board advocated for the policy to encompass non-AI content, which could be "not necessarily any less misleading" than AI-generated material, and to extend to audio-only content as well as videos showing individuals engaging in actions they never performed. This feedback from the oversight board has evidently played a role in guiding Meta's latest policy revisions, marking a significant step in the company's efforts to combat misinformation and maintain the integrity of content on its platforms.
