At the World Economic Forum in Davos, Switzerland, Nick Clegg, Meta's president of global affairs, called a nascent effort to detect artificially generated content “the most pressing task” facing the tech industry today.
On Tuesday, Mr. Clegg proposed a solution: Meta said it would promote technological standards that companies across the industry could use to recognize markers in photo, video and audio material signaling that the content had been generated with artificial intelligence.
The standards could allow social media companies to quickly identify A.I.-generated content that has been posted to their platforms and add labels to it. If widely adopted, the standards could help flag A.I.-generated content from companies like Google, OpenAI, Microsoft, Adobe, Midjourney and others that offer tools for quickly and easily creating synthetic material.
“Although this is not a perfect solution, we did not want to make perfection the deterrent to the good,” Mr. Clegg said in an interview.
He added that he hoped the effort would serve as a rallying cry for companies across the industry to adopt standards for detecting and labeling artificial content, making it easier for all of them to recognize it.
With the United States entering a presidential election year, industry observers expect A.I. tools to be widely used to post fake content. Over the past year, people have used A.I. to create and spread fake videos of President Biden making false or inflammatory statements. The attorney general's office in New Hampshire is also investigating a series of robocalls that appeared to use an A.I.-generated voice of Mr. Biden urging people not to vote in a recent primary.
Senators Brian Schatz, Democrat of Hawaii, and John Kennedy, Republican of Louisiana, introduced legislation last October that would require companies to disclose and label artificially generated content, and to cooperate to create or use standards like the ones Meta is backing.
Meta, which owns Facebook, Instagram, WhatsApp and Messenger, is in a unique position: It is developing technology to spur widespread consumer adoption of A.I. tools, while also being the world's largest social network, capable of distributing A.I.-generated content. Mr. Clegg said Meta's position gave it particular insight into both the generation and distribution sides of the problem.
Meta is focusing on a set of technical specifications known as the IPTC and C2PA standards, which record in a piece of digital media's metadata whether it is authentic. Metadata is the underlying information embedded in digital content that provides a technical description of it. Both standards are already widely used by news organizations and photographers to describe photos and videos.
Adobe, which makes the Photoshop editing software, and a host of other tech and media companies have spent years lobbying their peers to adopt the C2PA standard and have formed the Content Authenticity Initiative. The initiative is a partnership among dozens of companies, including The New York Times, to combat misinformation and “add a layer of tamper-evident provenance to all types of digital content, starting with photos, video, and documents,” according to the organization.
Companies that offer A.I. generation tools could embed these markers in the metadata of the videos, photos or audio files their tools helped create. That would signal to social networks like Facebook, X (formerly Twitter) and YouTube that such content was synthetic when it was uploaded to their platforms. Those companies, in turn, could add labels noting that the content was A.I.-generated, to inform users who saw it on the social networks.
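The upload-time check described above might look something like the following sketch. It is illustrative only: real C2PA manifests are cryptographically signed structures that must be read with dedicated libraries, and the field names below (`digital_source_type`, `provenance_actions`, `generative`) are stand-ins rather than the standards' actual schema, though IPTC's vocabulary does define a `trainedAlgorithmicMedia` source type for synthetic media.

```python
def needs_ai_label(metadata: dict) -> bool:
    """Decide whether an uploaded file should carry an 'A.I.-generated' label.

    `metadata` stands in for fields already parsed out of the file's
    embedded metadata; the key names are illustrative, not the actual
    IPTC or C2PA schema.
    """
    # IPTC defines a digital-source-type value for media produced by a
    # trained generative model.
    if metadata.get("digital_source_type") == "trainedAlgorithmicMedia":
        return True
    # A C2PA-style provenance record lists the actions that produced the
    # asset; a creation action flagged as generative is another signal.
    for action in metadata.get("provenance_actions", []):
        if action.get("action") == "c2pa.created" and action.get("generative"):
            return True
    return False


# Example: a file whose metadata declares a synthetic source type.
upload = {"digital_source_type": "trainedAlgorithmicMedia"}
print(needs_ai_label(upload))  # True
```

A platform running such a check at upload time could attach its label automatically, without relying on users to disclose that the content is synthetic.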
Meta and others will also require users who post A.I.-generated content to say whether they have done so when uploading it to the companies' apps. Failing to do so carries penalties, though the companies have not detailed what those penalties might be.
Mr. Clegg also said that if the company determined that a digitally created or altered post “poses a particularly high risk of materially deceiving the public on an important matter,” Meta could add a more prominent label to the post to give the public more information and context about its origin.
A.I. technology is advancing rapidly, and researchers have been racing to develop tools that can spot fake content online. Though companies like Meta, TikTok and OpenAI have built ways to detect such content, technologists have quickly found ways to circumvent those tools. Artificially generated video and audio have proved even harder to detect than A.I.-generated photos.
(The New York Times Company is suing OpenAI and Microsoft for copyright infringement over the use of Times articles to train artificial intelligence systems.)
“Bad actors will always seek to bypass any standards we create,” Mr. Clegg said. He described the technology as both a tool and a shield for the industry.
Part of the difficulty stems from the fragmented way tech companies are approaching the problem. Last fall, TikTok announced a new policy requiring its users to add labels to videos or photos they upload that were created with A.I. YouTube announced a similar initiative in November.
Meta’s new proposal would try to tie some of those efforts together. Other industry efforts, like the Partnership on A.I., have brought together dozens of companies to discuss similar solutions.
Mr. Clegg said he hoped more companies would agree to participate in the standard, especially heading into the presidential election.
“We strongly felt that during this election year, waiting for all the pieces of the puzzle to fall into place before acting would not be justified,” he said.