

Fears Rise as AI Threatens UK Elections: Senior Politicians and Security Services on Alert




(CTN News) – There is concern among high-ranking politicians and security officials that the United Kingdom could be the next country to have its elections impacted by the use of artificial intelligence (AI).

Sir Robert Buckland, the former justice secretary, has called on the government to take stronger action against a “clear and present danger” to democracy in the United Kingdom.

Especially worrying to the Conservative MP, who now chairs the Northern Ireland select committee, is the proliferation of deepfakes: convincing fabricated audio and video recordings of politicians appearing to say things they never said.

He contends that the danger AI-generated disinformation poses to democracies is not confined to some dismal future: “Here we are in the future. It is taking place.”

“Unless the policymakers [in the UK] are showing some leadership on the need for a strong and effective domestic set of guardrails – plus international work – then we are going to be behind the curve.”

The next general election must be held by January 2025, and he is worried that it could be disrupted in the way the 2017 campaign was, when campaigning was halted only days before polling day in the aftermath of the bombing at Manchester Arena.

Tom Tugendhat, the security minister, heads the Defending Democracy Taskforce, which the UK government established last year to counter foreign meddling in elections.

Not every danger is new. Election campaigns around the globe have a long history of false information and underhand tactics. Photos and memes have long been doctored with tools like Photoshop, and even leaders’ voices have been manipulated.

In its annual report, the National Cyber Security Centre (NCSC), a division of GCHQ, highlighted a new development: the widespread availability of powerful generative AI tools capable of producing convincing fakes.

Some see the proliferation of large language models like ChatGPT, along with text-to-speech and text-to-video technologies, as a boon to anyone determined to disrupt elections, whether hostile state actors or bedroom miscreants.

“Large language models will almost certainly be used to generate fabricated content, AI-created hyper-realistic bots will make the spread of disinformation easier and the manipulation of media for use in deepfake campaigns will likely become more advanced,” the NCSC states in its research.

The Labour Party got a taste of what may be to come when an audio clip that appeared to capture its leader, Sir Keir Starmer, angrily abusing aides surfaced on social media during the party conference in September. The clip drew 1.5 million views but was swiftly debunked as fake.

In November, a phony audio recording purporting to be from London Mayor Sadiq Khan requesting the rescheduling of Armistice Day in light of a pro-Palestinian demonstration went viral on social media.

Although the Met Police found no crime had been committed, Mr. Khan warned that deepfakes could be a “slippery slope” for democracy if they were not adequately controlled.

For Sir Robert Buckland and others concerned about this issue, the worst-case scenario is a deepfake video of a party leader surfacing in the final moments of an intensely contested election.

Slovakia’s general election in September coincided with the emergence of a fake audio clip that appeared to capture Michal Šimečka, leader of the liberal Progressive Slovakia party, outlining ways to manipulate the vote.

Mr. Šimečka was ultimately defeated in the election by the pro-Moscow populist Smer-SD party.

“Who knows how many votes it changed – or how many were convinced not to vote at all?” Tom Tugendhat said in a recent speech.

In Argentina, too, AI-generated visuals and audio played a role in the recent election, won by the right-wing libertarian Javier Milei.

According to Sir Robert Buckland, these elections demonstrate the consequences of insufficient legislation. He has urged the government to move forward with measures to enhance Ofcom’s oversight of disinformation.

In a letter to Science Secretary Michelle Donelan, he asks for more specific guidance for social media companies on complying with new national security rules designed to prevent foreign interference; he is among a number of Tory MPs to have made this request.

Ms. Donelan assured a gathering of Labour, Tory, and SNP lawmakers last week that the government was treating the AI danger with the utmost seriousness.

Ms. Donelan, a member of the Defending Democracy Taskforce, dismissed the idea of additional legislation while stating that the United Kingdom is cooperating with social media platforms and foreign partners, including the United States, to counter the danger.

She assured the science and technology committee, “I expect that by the next general election we will have robust mechanisms in place that will be able to tackle these topics.”

The question then becomes, how can we prevent deepfakes from eroding democracy?

Sharing pornographic deepfakes is already unlawful in England and Wales, and some think deepfakes should be outlawed altogether.

Ms. Donelan is among many who have suggested that anti-fake technology should be a component of the solution.

Is there a foolproof way to determine if a video is fake?

It is a “cat and mouse game”, says Jan Nicola Beyer, a research coordinator at Democracy Reporting International.

“The detection mechanisms get better, but in the moment they get better, the generative AI models get better in order to generate even more convincing and even harder to detect content.”

He went on to say that audio was especially difficult to debunk.

Stopping suspected fakes from going viral, he said, was just as crucial as having fact-checkers and news outlets identify them and publish the evidence behind their judgments.

Most major technology companies are already hard at work on safeguards ahead of the global elections due in 2024.

However, according to Mr. Beyer, platforms also need to stop recommending content from questionable sources and “demonetize” them.

Perhaps deepfakes aren’t the main issue, though.

In October, Ken McCallum, director general of MI5, one of the agencies helping the government counter foreign election interference, warned of a “slight risk” of “fixating” on one sort of threat.

“And then if you’ve got creative adversaries, they decide not to play that card and do something quite different,” he informed the media.

“So I wouldn’t want to make some sort of strong prediction that [deepfakes] will feature in the forthcoming election, but we would be not doing our jobs properly if we didn’t really think through the possibility.”

An anonymous security expert told the BBC that, although deepfakes pose the greater danger in the long run, the more pressing concern is the use of AI to craft more convincing “spearphishing” emails: messages that trick victims into clicking links that compromise their computers.

Russian intelligence employed this method in 2016 to obtain the emails of Hillary Clinton’s campaign chair, which were subsequently leaked online in the run-up to her narrow loss in the presidential election.

Given that the US election is likely to be just as fiercely contested, some UK security officials privately hope that foreign spies and their helpers will be too preoccupied with events in the US next November to interfere in a UK election that could happen around the same time.

High-ranking national security officials voice another concern: that fixating on the possibility of deepfakes and AI meddling in politics could itself cause people to lose faith in the democratic process.

Even if outright fakes can be detected, generative AI content will remain a concern.

Some experts are concerned that voters can become confused about what is real and what is not if social media is inundated with synthetic content, even if it is clearly labeled.

A phenomenon known as the “liar’s dividend” might emerge in such a setting, making it easier for dishonest politicians to pass off false claims as true.

At the next election, it will fall to the authorities, the media, the digital platforms, and the political parties to stop that from happening.
