Alex Bores, a New York Assembly member and Democrat currently running for Congress in Manhattan’s 12th District, says one of the most alarming uses of artificial intelligence — highly realistic deepfakes — is less an intractable crisis than a failure to deploy an existing solution.
“Can we look at deepfakes? Because it’s a solvable problem that I think most people are missing the boat on,” Bores said in a recent episode of Bloomberg’s Odd Lots podcast, hosted by Joe Weisenthal and Tracy Alloway.
Rather than training people to spot visual flaws in fake images or audio, Bores said policymakers and the tech industry should rely on a well-established cryptographic approach, similar to the one that made online banking possible in the 1990s. At the time, skeptics doubted that consumers would ever trust financial transactions over the Internet; the widespread adoption of HTTPS, which uses digital certificates to verify a website’s authenticity, changed that.
“It was a solvable problem,” Bores said. “This technique works primarily for images, video and audio.”
Bores highlighted a “free open source metadata standard” known as C2PA, short for Coalition for Content Provenance and Authenticity, which allows creators and platforms to attach tamper-proof identifying information to files. The standard can cryptographically record whether a piece of content was captured on a real device or generated by AI, and how it was modified over time.
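The provenance idea can be illustrated with a minimal sketch. This is not the real C2PA manifest format: real C2PA uses X.509 certificate chains and a standardized JSON/CBOR structure, whereas this toy uses Python's stdlib HMAC as a stand-in signature and hypothetical field names. It shows only the core mechanic — a creator binds a content hash and an edit history into a signed manifest, and any later modification to the bytes breaks verification.

```python
# Toy sketch of cryptographic content provenance (NOT the real C2PA
# format): the creator signs a manifest containing the content hash
# and edit history; a verifier recomputes the hash and checks the
# signature. HMAC stands in for real certificate-based signatures.
import hashlib
import hmac
import json

CREATOR_KEY = b"demo-signing-key"  # hypothetical; C2PA uses X.509 certs


def sign_manifest(content: bytes, history: list) -> dict:
    manifest = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "history": history,  # e.g. ["captured_on_device", "cropped"]
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(CREATOR_KEY, payload, "sha256").hexdigest()
    return manifest


def verify(content: bytes, manifest: dict) -> bool:
    claimed = dict(manifest)
    sig = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(CREATOR_KEY, payload, "sha256").hexdigest()
    # Signature must match AND the bytes must hash to the recorded value.
    return (hmac.compare_digest(sig, expected)
            and claimed["content_sha256"] == hashlib.sha256(content).hexdigest())


photo = b"raw image bytes"
m = sign_manifest(photo, ["captured_on_device"])
print(verify(photo, m))                # True: untouched content verifies
print(verify(photo + b"edit", m))      # False: any tampering is detected
```

The asymmetry Bores describes falls out of this design: content without a valid manifest can still exist, but it arrives without proof, which is exactly the cue for skepticism.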
“The challenge is that the creator has to tie it in and so you have to get to a place where that’s the default option,” Bores said.
The goal, he says, is a world in which most legitimate media contains this type of provenance data, and if “you see an image and it doesn’t have that cryptographic proof, you should be skeptical.”
Bores said that thanks to the move from HTTP to HTTPS, consumers now instinctively know to be wary of a banking site that lacks a secure connection. “It would be like going to your bank’s website and loading HTTP only, right? You’d be instantly suspicious.” Images without provenance could still be produced, he noted, but they would invite the same suspicion.
AI has become a central political and economic issue, with deepfakes a particular concern in elections, financial fraud, and online harassment. Bores said some of the most damaging cases involve nonconsensual sexual images, including those targeting school-aged girls, where even a clearly labeled fake can have real-world consequences. He argued that state laws banning deepfake pornography, including New York’s, are now at risk from a federal push to preempt state rules on AI.
Bores’ broader AI agenda has already attracted industry attention. He is the author of the RAISE Act, a bill that imposes safety and reporting requirements on a small group of so-called “frontier” AI laboratories, including Meta, Google, OpenAI, Anthropic, and xAI, and which was signed into law last Friday. The RAISE Act requires these companies to publish safety plans, disclose “critical safety incidents,” and refrain from releasing models that fail their own internal tests.
The measure passed the New York State Assembly with bipartisan support, but it also sparked a backlash from a pro-AI super PAC, reportedly backed by prominent investors and tech executives, that has pledged millions of dollars to defeat Bores in the 2026 primary.
Bores, who previously worked as a data scientist and head of federal and civil affairs at Palantir, says his position is not anti-industry but rather an attempt to systematize protections that major AI labs have already endorsed in voluntary commitments with the White House and at international AI summits. He said that complying with the RAISE Act, for a company like Google or Meta, would be the equivalent of hiring “an additional full-time employee.”
On Odd Lots, Bores said cryptographic content authentication should anchor any policy response to deepfakes. But he also emphasized that technical labels are only one piece of the puzzle. Laws that explicitly prohibit harmful uses, such as deepfake sexual imagery of minors, are still vital, he said, especially while Congress has yet to pass comprehensive federal standards.
“AI is already integrated into [voters’] lives,” Bores said, citing examples such as AI toys for children or robots that mimic human conversation.
You can watch Odd Lots’ full interview with Bores below:
This story was originally featured on Fortune.com