Despite an influx of digital disinformation around the event, governments and technology companies have been urged not to ban the use of all generative artificial intelligence technology and instead introduce mandatory disclosures to arm voters with information.
The call comes after an interim report from the Adopting Artificial Intelligence inquiry recommended Australia draft laws to restrict deepfake political ads before the 2029 election but stopped short of fast-tracking rules for 2025.
ChatGPT, whose app is pictured, is a tool commonly used to create deepfake images. (Bianca De Marchi/AAP PHOTOS)
The US presidential election has seen many examples of deepfake content being used to deceive and amuse audiences, AI expert and Adobe advisor Henry Ajder said, and similar technology has been used in campaigns in India, Indonesia and the UK.
“It’s no longer a question of if deepfakes are going to be used in a political context – it’s how effective are they going to be and how persuasive are they going to be at actually changing people’s minds,” he said.
“(In the UK), we saw certain candidates in hotly contested seats having fake audio released claiming they didn’t care about Palestinians and the crisis in Gaza.
“We’ve seen numerous cases of deepfakes of (US Democratic presidential candidate) Kamala Harris dressed like a sex worker, claiming that this was her back in the day.”
The 2024 US election is yet to deliver a “smoking gun” deepfake proven to have changed voters’ minds, Mr Ajder said, but he warned that fake audio clips were coming close and could pose a more serious risk in future.
Can AI be trusted? Only if safety and transparency are prioritised for those who interact with it. (Bianca De Marchi/AAP PHOTOS)
“We heard fake audio of (US President Joe) Biden telling people not to vote in primaries earlier this year in New Hampshire,” Mr Ajder told AAP.
“And we’ve heard similar examples, a lot of them targeting Elon Musk, who is now a political figure in his own right.”
During the campaign, deepfakes have mostly been used for memes and satire.
However, the present risk doesn’t require laws banning the use of AI tools in election materials, Mr Ajder said.
Mandatory disclosures – including watermarks and digital nutrition labels, similar to photographic metadata – would be a more effective way to tackle the issue, he added.
Some technology companies have gone further to counter the threat, with GitHub chief legal officer Shelley McKinley revealing the company strengthened its user rules before the US election to limit risks.
Coders on the expo floor at GitHub Universe 2024, a world’s fair for software, in San Francisco. (Marion Rae/AAP PHOTOS)
The open-source software developer platform is a member of a tech accord designed to safeguard against malicious and abusive online activity.
“As a result of that, over the past month, we have updated our terms of use to manage non-consensual sexual imagery – simulated media deepfakes essentially,” Ms McKinley told AAP in San Francisco.
“If we see things on the platform getting created that are intended to be nefarious then we can take them down.
“We already had the right to do that but we’ve made it exceptionally clear.”
An increase in tech-driven attacks and harassment of women had already resulted in a steep decline in women wanting to be candidates for political office, she said.
*AAP travelled to the US with the assistance of GitHub.