AI Politics: Candidates, Deepfakes, and Regulation
Key Points
- A candidate in Wyoming is campaigning with an LLM‑driven “virtual citizen” that would make policy decisions, prompting legal challenges over OpenAI’s terms of use and election eligibility.
- President Trump posted a deep‑fake image claiming a Taylor Swift endorsement, raising potential defamation claims and likely violations of Nashville’s new AI‑specific law.
- Elon Musk shared a deep‑fake video of Kamala Harris, initially presented as real and later labeled satire, illustrating how political figures are being misrepresented with AI.
- These incidents signal a broader shift in political discourse where AI‑generated content is assumed false until verified, demanding new standards for fact‑checking.
- Because large language models lack a reliable factual world model, there is a growing opportunity—and need—for tools that can demonstrably prove content has been fact‑checked.
**Source:** [https://www.youtube.com/watch?v=9V1zkFryY-I](https://www.youtube.com/watch?v=9V1zkFryY-I)
**Duration:** 00:04:59

Sections
- [00:00:00](https://www.youtube.com/watch?v=9V1zkFryY-I&t=0s) **AI‑Driven Politics and Legal Battles** - The segment highlights two emerging AI‑related political controversies: a Wyoming candidate proposing an LLM‑run office that may violate OpenAI's terms and face disqualification, and President Trump's deep‑fake endorsement of Taylor Swift that could trigger defamation claims under Nashville's new AI law.

Full Transcript
AI and politics are coming together, whether we like it or not, and I want to call out a bunch of different stories that highlight some of the trends in play.

Number one: we have an LLM running for government in Wyoming. Victor Miller is the candidate on the ballot, but the chatbot he built with ChatGPT‑4, called VIC (Virtual Integrated Citizen), is what he claims will actually be making the decisions should he be elected. He hasn't asked OpenAI whether that's within their terms of use (I doubt it is), the Wyoming Secretary of State is not amused and wants him disqualified, and it is not at all clear whether he will make it to election day. But I would not expect this to be the last such candidacy. He's very serious about it: he has talked at length with reporters about how LLMs are actually better than many elected officials at reading documents and understanding their meaning, and he thinks it will be a net improvement for the citizens in his jurisdiction if VIC is elected. Stay tuned on that one; I'm really curious to see what
happens.

Number two: over the weekend, President Trump claimed an endorsement from Taylor Swift using deep‑faked AI images. What's interesting is not just that this is likely actionable under traditional defamation law, but that it is likely actionable in Nashville, where Taylor Swift has a residence, under Nashville's new AI law. Nashville has a law specifically for AI because the entertainment industry is so big there, and this likely violates that law as well as some traditional defamation laws. We'll see, but I would expect Taylor to sue Trump in the next day or two; this will be fun.

And then, in addition,
Elon over the weekend tweeted a deep‑faked video of Kamala Harris, or her voice in particular, saying things she did not say. He did not label it as satire at first; only later did he come back and label it as satire.

So what's the takeaway? Here
we have these three stories: someone running for office with an LLM, and two deep‑fake incidents. We are getting to a point where we need to assume that AI is in the political discourse and that it is generating disinformation, and that therefore the overall information mix has shifted. In the days of Walter Cronkite, we could expect that the information we got was limited but at least somewhat fact‑checked; now we should assume it hasn't been fact‑checked and is probably false unless it is proven otherwise.

If you're building in this space, I think one of the most interesting opportunities for AI right now is thinking about how you can prove that something has been fact‑checked. Even large‑language‑model architects agree that LLMs don't have a factual world model; they don't have a model that allows them to accept new facts, at least not yet. If that's the case, how can we expect them to be factual in situations like this? So I think there's a huge opportunity for folks building in the space: something like the verified check mark that Twitter used to have, which actually meant you were verified as a celebrity. It doesn't mean that anymore; it just means you can pay $4 a month, maybe $8 a month.
Anyway, you want something like that for content, for what has been produced: a way to verify that this has not been AI‑faked, that this is actual content by an actual person. I think there's going to be an enormous market for that.
And I think until that's figured out, people are going to be willing to go back to a world that has more friction, that is more in‑person. Maybe we were always headed that way, but right now the only way you can really tell that something is from a person is meeting them for coffee and talking to them; increasingly, there's a lot of question even about videos. Conveniently, I am not an AI fake. Hello, hello! I'm sure that's exactly what an AI fake would say, but you get my point. At the end of the day, until we build something in the space that solves for this, we need to start assuming that content we see is potentially false unless it's proven otherwise. I think that is a billion‑dollar opportunity for someone, so if you want to build in the space, think about that one.
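The "verified mark for content" idea described above resembles existing content‑provenance schemes (e.g. C2PA), where a publisher attests to a specific piece of content and anyone can later check whether what they received matches that attestation. A minimal sketch of the core check, using a SHA‑256 fingerprint and a hypothetical in‑memory registry standing in for real signing infrastructure; all names here are illustrative, not from the video:

```python
import hashlib

# Hypothetical in-memory registry mapping content fingerprints to verified
# publishers. A real provenance system (e.g. C2PA) would use cryptographic
# signatures and a chain of trust, not a plain lookup table.
VERIFIED_REGISTRY: dict[str, str] = {}

def publish(content: bytes, publisher: str) -> str:
    """Record a SHA-256 fingerprint of the content under a verified publisher."""
    digest = hashlib.sha256(content).hexdigest()
    VERIFIED_REGISTRY[digest] = publisher
    return digest

def verify(content: bytes):
    """Return the publisher if the content matches a verified fingerprint, else None."""
    return VERIFIED_REGISTRY.get(hashlib.sha256(content).hexdigest())

original = b"Official statement from the campaign."
publish(original, "campaign-press-office")

print(verify(original))                     # -> campaign-press-office
print(verify(b"Altered deep-faked text."))  # -> None (any edit changes the hash)
```

The key property the sketch illustrates is that any alteration, including an AI‑generated fake, produces a different fingerprint and therefore fails verification; the hard unsolved part, which this sketch deliberately omits, is distributing the registry and establishing who counts as a trusted publisher.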