Google has announced in a blog post that it will “restrict the types of election-related queries for which Bard and SGE will return responses”. Bard is Google’s AI chatbot, while SGE (Search Generative Experience) is its AI-powered search tool.
And Google’s not alone in tweaking its approach ahead of a big election year: Meta says it’ll require political advertisers to disclose whether their content was created or altered with AI, while OpenAI is actually paying Politico and other brands for the right to summarise their articles in ChatGPT responses.
Zooming out, there are two main areas of AI-related election risk:
1) Chatbots: according to Wired, Microsoft’s Copilot chatbot responded to election queries with conspiracy theories, misinformation, and just good ol’ fashioned BS.
2) Deepfakes: we’ve already seen authorised fakes appear for Russia’s Vladimir Putin and Pakistan’s jailed Imran Khan, while unauthorised fakes have caused confusion in Slovakia’s election as well as on Bangladeshi social media ahead of next month’s elections.
And as the tech keeps evolving, the vulnerabilities will shift: eg, chatbot mistakes should become easier to fix, while deepfakes could get tougher to detect.
Either way, the core risk is that voter trust gets eroded, and ultimately this places more of an obligation on companies to safeguard their tech.
Of course, that opens up a whole other conversation about freedom of speech and safeguarding the safeguards, but the common thread through any effective response seems clear: transparency.
And that brings us back to these latest announcements from AI companies, which add a dash of transparency just as we enter the world’s biggest-ever election year, including:
- 🇹🇼 Taiwan’s presidential election on 13 January
- 🇮🇩 Indonesia’s elections on 14 February
- 🇮🇳 India’s general election sometime between April and May
- 🇲🇽 Mexico’s election on 2 June
- 🇪🇺 The EU’s parliamentary elections on 6-9 June
- 🇺🇸 The USA’s presidential election on 5 November
- And 🇿🇦 South African and 🇬🇧 UK elections with dates still tbc.
So, if there was ever a year to reflect on how AI can shape our elections, this is it.
There are three overlapping time cycles at play here. We mentioned the first above (the world’s epic 2024 election cycle).
The second is the tech lifecycle: we’re currently sitting in what cyber policy guru Kat Duffy describes as a “post-market, pre-norms” stage. I.e., the industry has already released some powerful generative AI tools to the market, but we as societies haven’t really figured out our response yet.
The third is the broader business cycle: i.e., this is all happening right after widespread tech sector layoffs, meaning the tech world’s policy teams aren’t exactly flush with resources to handle these electoral challenges right now.
And all three of these time cycles are colliding in 2024.
Honestly, we’re optimistic we’ll find a healthy equilibrium eventually – there are plenty of good, smart folks (including friends of ours) thinking it all through. But this confluence of time cycles does increase the likelihood of us seeing some white-knuckle moments along the way.
Also worth noting:
- In its provisional AI Act, the EU has classified all AI systems that are “used to influence the outcome of elections and voter behaviour” as high-risk, meaning they’ll be more heavily regulated.
- Albania announced earlier this month that it’s using ChatGPT to help speed up its EU membership process by automating translation and legal processes.