- Meta is asking political advertisers to “self-disclose” when they want to run digitally created or altered ads.
- The policy lets Meta label or pull such ads, including “deep fakes,” but it gives advertisers no incentive to disclose.
- Deep fakes are intended to mislead, so admitting that content is fake runs contrary to its purpose.
Meta has released a new way to fend off political ads created or manipulated by generative AI ahead of the 2024 election cycle. The trouble is, it’s up to advertisers to admit how their ads are made.
Starting in January, advertisers must “self-disclose” when political or social-issue ads running on a Meta platform were made or meaningfully altered by a digital tool such as generative AI. The policy will be enacted globally, the company said in a Wednesday blog post, and will apply to “photorealistic image or video, or realistic sounding audio.”
The aim is for advertisers to willingly reveal when an ad created with generative AI or a similar tool does any of the following:
- Depicts “a real person as saying or doing something they did not say or do”
- Shows “a realistic-looking person that does not exist or a realistic-looking event that did not happen”
- Alters “footage of a real event that happened; or a realistic event that allegedly occurred, but that is not a true image, video, or audio recording of the event”
If an advertiser makes such a disclosure, Meta will mark the ad in some way that lets people know it was “digitally created or altered.”
Meta also just barred political advertisers from using its own generative AI tools for ads.
The new policy is similar to one Google and YouTube announced in September. 2024 is an election year in the US and 39 other countries, and generative AI has become wildly popular and powerful over the last year. There is growing concern that synthetic media will be a major problem for the public and the voting process.
Such altered pictures and videos are often known as “deep fakes,” a type of media intended to mislead, sometimes for comedy or as a viral marketing ploy, and sometimes for public revenge or underhanded political reasons.
The creation of a deep fake is typically intentional, so willingly disclosing to a platform like Meta that you’ve created a deep fake of, say, a political actor seems contrary to the goal of the content.
Nick Clegg, Meta’s president of global affairs, added in a post on Threads that this new policy asking political advertisers to disclose when they’ve made a deep fake “builds on Meta’s industry leading transparency measures for political ads.”
“These advertisers are required to complete an authorization process & include a ‘Paid for by’ disclaimer on their ads,” Clegg said.
Should an advertiser manage to get past this new self-disclosure rule, Meta has a plan: If the content starts to go viral, it will take a look.
“Our independent fact-checking partners review and rate viral misinformation and we do not allow an ad to run if it’s rated as False, Altered, Partly False, or Missing Context,” Meta said in the blog post. “For example, fact-checking partners can rate content as ‘Altered’ if they determine it was created or edited in ways that could mislead people, including through the use of AI or other digital tools.”
After that, Meta could decide to remove a misleading ad, the company said. If it happens to catch an advertiser trying to publish an improperly altered ad without disclosing it, it can also reject the ad. And “repeated failure to disclose may result in penalties against the advertiser,” the company said.
Are you a Meta employee or someone with a tip or insight to share? Contact Kali Hays at [email protected], on secure messaging app Signal at 949-280-0267, or through Twitter DM at @hayskali. Reach out using a non-work device.