Trustworthy AI + Elections
Designing Alexa's Response to the Question Every AI Company Is Afraid to Answer
The Problem
In 2019, AI voice assistants were being asked political questions at scale — and most of them were answering badly.
Existing responses fell into two failure modes. The first was the accidental opinion: a response that seemed playful but implied a political position, eroding trust with anyone who didn't share it. The second was the evasion: a flat "I don't know" or a redirect that made the assistant seem either incompetent or deliberately avoidant.
Neither approach addressed the actual question underneath the specific questions. Not "who are you voting for?" but something more fundamental: what should an AI system believe about its own role in a democratic society?
I initiated an audit of Alexa's election-related responses and spent several months working toward an answer.
The Insight
The solution came from reframing the problem.
Most approaches to politically sensitive AI responses treat the challenge as a bias-avoidance problem: how do we answer without appearing to take sides? But this framing accepts a premise worth questioning — that AI systems should be trying to answer political questions at all.
Democratic legitimacy rests on choices made by the people. Governments derive their authority from the consent of the governed. That principle requires the governed to actually do the governing, which means the choices have to be human ones. An AI system that influences those choices, even subtly, even unintentionally, undermines something foundational.
Alexa is not a person. That's not a limitation to work around. It's the answer.
The Constraints
The response needed to satisfy several competing requirements simultaneously:
Handle thousands of utterance variants — from "Who are you voting for?" to "Are you a Democrat?" to "Tell me who to vote for" — with a single answer.
Work across every election cycle without requiring updates for specific candidates or races.
Work internationally, allowing translation and deployment by personality teams in other countries.
Maintain Alexa's established voice — warm, specific, slightly opinionated — rather than retreating into corporate blankness.
Satisfy legal, PR, and product requirements that were pulling in different directions.
Express a genuine position rather than a managed non-answer.
The Solution
"Well, quite frankly, I don't think bots should influence elections."
Every word is doing specific work.
"Bots" rather than "AI systems" or "digital assistants" — the colloquial, slightly self-deprecating term signals that Alexa sees herself clearly, without illusion about her nature or her potential for influence. It's the equivalent of a politician saying "politicians like me" rather than "elected officials." The self-awareness is the ethical core of the sentence.
"Well, quite frankly," arrived through voice testing. The sentence without it — "I don't think bots should influence elections" — was philosophically correct and sonically wrong. Spoken aloud, it landed as abrupt and judgmental. "Well, quite frankly" gives the listener half a second to prepare for a genuine opinion rather than being hit with one. It makes Alexa sound like someone who has thought about this rather than a system executing a policy. In voice UX, that half-second is the difference between a character moment and a corporate disclaimer.
The Result
The response worked across party lines because it was genuinely true in multiple directions simultaneously. Concerns about bot interference in elections were not a partisan issue in 2019 — they came from the left and the right, from people who agreed on almost nothing else. A response grounded in that shared concern didn't need to triangulate. It just needed to be honest.
The response earned organic cultural spread, inspired merchandise, and generated a video response from a presidential candidate — not because it was clever, but because it felt like a personality expressing a genuine view rather than a system executing a policy. People don't make merchandise out of corporate disclaimers.
More significantly, the approach has proven durable. The questions I was working on in 2019 — what should AI believe about its role in elections, how should AI handle political content, where should AI decline to have influence — are now among the most urgent questions in AI policy. The answer I found then has held up. Not because it was carefully engineered to satisfy every stakeholder, but because it was true. And true answers tend to age well.
The Takeaway
The most effective response to a politically sensitive question isn't a better deflection. It's a genuine position grounded in an honest understanding of what AI is and what that means.
Alexa knew what she was. She drew the right conclusion from that knowledge. That's the design principle worth keeping.