Users of Facebook and Instagram will soon see labels on AI-generated images that show up in their feeds, part of a broader tech industry effort to distinguish between what’s real and what’s not.
On Tuesday, Meta announced that it is working with industry partners on technical standards that will make it easier to identify images, and eventually audio and video, produced by artificial intelligence systems.
How well this will work remains to be seen, at a time when it is easier than ever to create and spread AI-generated imagery that can cause harm, from nonconsensual fake nudes of celebrities to election propaganda.
Gili Vidan, an assistant professor of information science at Cornell University, said, “It’s kind of a signal that they’re taking seriously the fact that generation of fake content online is an issue for their platforms.” It probably won’t catch everything, she said, but it could be “quite effective” in flagging a large share of AI-generated content made with commercial tools.
Nick Clegg, Meta’s president of global affairs, did not say Tuesday exactly when the labels would arrive, only that it would be “in the coming months” and in multiple languages, noting that “a number of important elections are taking place around the world.”
“As the difference between human and synthetic content gets blurred, people want to know where the boundary lies,” he said on his blog.
Meta already labels photorealistic images created with its own AI tool as “Imagined with AI,” but most of the AI-generated content flooding its social media services comes from elsewhere.
A number of tech industry collaborations, including the Adobe-led Content Authenticity Initiative, have been working to set standards. A push for digital watermarking and labeling of AI-generated content was also part of an executive order that U.S. President Joe Biden signed in October.
Clegg said Meta will be working to label “images from Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock as they implement their plans for adding metadata to images created by their tools.”
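The announcement doesn’t spell out the mechanics, but the approach Clegg describes depends on generators embedding standard provenance metadata, such as the IPTC “Digital Source Type” field, which platforms can then read back. As a rough illustration of the idea only, and not Meta’s actual detection pipeline, the Python sketch below naively scans an image file’s raw bytes for the standard IPTC vocabulary term for AI-generated media; the function name and command-line interface are invented for this example.

```python
import sys

# IPTC's "Digital Source Type" vocabulary uses the term below to mark
# media created entirely by a generative AI model. Tools that follow
# the standard embed it in the image's XMP metadata packet.
AI_SOURCE_TYPE = b"trainedAlgorithmicMedia"


def has_ai_metadata_marker(path: str) -> bool:
    """Naive check: scan the file's raw bytes for the IPTC term.

    A real detector would parse the XMP packet (and any C2PA manifest)
    properly and verify signatures; this sketch only asks whether the
    standard vocabulary term appears anywhere in the embedded metadata,
    and it says nothing about images whose metadata has been stripped.
    """
    with open(path, "rb") as f:
        data = f.read()
    return AI_SOURCE_TYPE in data


if __name__ == "__main__":
    for image_path in sys.argv[1:]:
        verdict = ("AI marker found" if has_ai_metadata_marker(image_path)
                   else "no AI marker found")
        print(f"{image_path}: {verdict}")
```

Because such markers live in file metadata, they can be stripped by re-encoding or screenshotting an image, which is one reason the announcement pairs metadata labeling with broader watermarking efforts.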
Google said last year that AI labels were coming to YouTube and its other services.
“In the coming months, we’ll introduce labels that inform viewers when the realistic content they’re seeing is synthetic,” YouTube CEO Neal Mohan reiterated Tuesday in a blog post laying out the platform’s plans for the year ahead.
One potential concern is that if tech platforms get better at identifying AI-generated content from a handful of major commercial providers while missing what is made with other tools, users could be left with a false sense of security.
“A lot would depend on how this is communicated by platforms to users,” Vidan said. “What does this mark mean? How much confidence should I have in it? What is its absence supposed to tell me?”