(CTN News) – Meta Platforms (META.O) announced on Wednesday that starting in 2024, advertisers will be required to disclose when artificial intelligence (AI) or other digital methods are used to alter or create advertisements related to politics, social issues, or elections on Facebook and Instagram.
In a blog post, Meta, the world's second-largest digital advertising platform, said it would require advertisers to disclose whether their altered or created ads portray real people doing or saying things they did not do, or depict a person who looks real but does not exist.
The company will also ask advertisers to disclose whether these ads depict events that did not actually occur, alter footage of a real event, or portray a real event without using authentic image, video, or audio recordings of it.
Meta announced earlier this year that political advertisers would not be allowed to use generative AI ad tools.
The policy updates come a month after the Facebook parent company announced that it was expanding advertisers' access to AI-powered advertising tools that can automatically create backgrounds, adjust images, and rewrite ad copy in response to simple text prompts.
Last week, Alphabet's (GOOGL.O) Google, one of the biggest digital advertising companies in the world, announced similar image-customizing generative AI ad tools and said it would keep politics out of its products by blocking a list of "political keywords" from being used as prompts.
With a slew of new generative AI tools making it cheap and easy to create convincing deepfakes, lawmakers have grown increasingly concerned that AI could be used to falsely depict candidates in political advertisements and influence federal elections.
Meta already blocks its user-facing Meta AI virtual assistant from creating photorealistic images of public figures. Nick Clegg, the company's chief policy executive, said last month that the use of generative AI in political advertising was "clearly an area where our rules need to be updated."
Under the new policy, the company will not require disclosures when the digital alteration is "insignificant or irrelevant to the claim, assertion, or issue raised in the ad." This includes adjustments such as resizing, cropping, color correcting, or sharpening an image.