This week I moderated a superstar panel at CogX to discuss elections in the age of AI. With elections happening in 80+ countries this year amidst a generative AI revolution that puts powerful creation tools in the hands of bad actors worldwide, humanity is running a global social experiment in real time: can we curb the harms that AI could inflict on democracy and societal trust while leveraging those same tools to ensure that self-government can thrive in the 21st century?
Some of my takeaways:
💡The threats are real. Awareness is half the battle.💡
Deepfakes, massive waves of misinformation, AI-enabled micro-targeted "October surprises"—imagine your uncle's local pastor "calling" him to warn of an active shooter at his specific polling location, suggesting that he stay home and rest assured that he can vote tomorrow instead. We've seen early versions of that already; expect much more. Mara Abrams spoke about the need to prepare voters for what's coming—and that starts with every one of us preparing our friends and family.
💡This is a shared responsibility.💡
There's no government body coming to save us here, and we don't (yet?) have meaningful policies in place to guide ethical AI development (Europe's AI Act is a good start). AI developers must accept some responsibility for what's created using their models (including the open-source models they release). Craig Forman cited the voice cloning company Respeecher as an example of a company with strong ethical principles that help ensure its tech doesn't fall into the wrong hands (link in comments).
💡A glimmer of hope: have we reached peak bulls*t?💡
Karen Mcluskie highlighted that despite significant attempts at AI-driven persuasion—including in Taiwan's recent elections—these tactics have yet to decisively sway electoral outcomes. Is it *possible* that we’ve nearly reached peak information insanity, and the coming misinformation storms won’t move the needle all that much? We also discussed the difference between efforts at *persuasion* vs. those aimed at *sowing distrust and disengagement in the democratic process itself* (the latter scares me far more).
💡We need tech solutions.💡
The coming AI wave isn’t slowing down, so we need equally powerful AI tools to confront the worst threats to our discourse and elections. Sedale Turbovsky told us about what he's building at SHFT to track and combat misinformation as it’s spreading, and shared his optimism that we can build solutions to “de-inflame” our citizen base and promote healthier dialogue. As usual, though, it’s going to come down to business models and incentives...how can we get those right moving forward?
This is an all-hands-on-deck moment to prove that democracy can thrive in the 21st century; let's rise to the occasion. The worst thing we can do is disengage. Big thanks to Jeremy Boxer, Charlie Muirhead, and CogX for featuring this critical subject at their inaugural conference in LA.