(CTN News) – Meta Platforms Inc. has removed hundreds of Facebook accounts tied to covert influence operations originating in China, Israel, Iran, Russia, and other countries, according to the quarterly threat report the company released publicly. Some of these operations used AI-powered tools to spread misleading information.
Threat actors have been using artificial intelligence to generate fake text, images, and videos on the platforms of Meta, the parent company of Facebook, Instagram, and WhatsApp, in an effort to steer users away from legitimate content.
Even so, Meta said in the report, released Wednesday, that the rise of generative AI has not hampered its ability to disrupt such networks.
Disinformation Operations Identified by Meta
Among the networks identified were one based in China and one in Israel. The Chinese network published AI-generated images of a fictitious pro-Sikh movement, while the Israeli network posted AI-generated comments praising the Israel Defense Forces.
Both networks were taken down before they gained significant traction.
“Right now, we are not seeing machine learning being used in terribly sophisticated ways,” David Agranovich, Meta’s policy director for threat disruption, said at a press event on Tuesday. Attempts to use AI to mass-produce spam or generate profile images, he said, have not yet proven effective.
Still, Agranovich cautioned that “these networks are inherently antagonistic” and will develop new tactics as the technology improves. As the company prepares for the 2024 election season, Nick Clegg, Meta’s President of Global Affairs, has argued that detecting and labeling AI-generated content is essential. Elections will take place in more than thirty countries this year.
Those countries include the United States, India, and Brazil, all of which rely heavily on the company’s apps.
On watermarking, Clegg stressed the urgent need for an industry standard. Meta is developing tools to detect and label images created by AI systems from companies such as Google and OpenAI, and has begun marking certain images with both visible and invisible tags. Under Meta’s updated policy, deceptive AI-generated content is now labeled rather than removed.
And although Facebook and Instagram require advertisers to disclose the use of AI in ads about social issues, elections, or politics, the company does not fact-check political advertisements from politicians.