Welcome to Eye on AI! In this edition…Meta wins AI copyright case in another blow to authors…Google DeepMind releases new AlphaGenome model to better understand the genome...Sam Altman calls Iyo lawsuit ‘silly’ after OpenAI scrubs Jony Ive deal from website, then shares emails.
This week, I spoke with Steven Adler, a former OpenAI safety researcher who left the company in January after four years, saying on X after his departure that he was “pretty terrified by the pace of AI development.” Since then, he’s been working as an independent researcher and “trying to improve public understanding of what the AI future might look like and how to make it go better.”
What really caught my attention was a new blog post from Adler, where he shares his recent experience participating in a five-hour discussion-based simulation, or “tabletop exercise,” with 11 others, which he said was similar to wargames-style exercises in the military and cybersecurity. Together, the group explored how world events might unfold if “superintelligence,” or AI systems that surpass human intelligence, emerges in the next few years.
The simulation was organized by the AI Futures Project, a nonprofit AI forecasting group led by Daniel Kokotajlo, Adler’s former OpenAI teammate and friend. The organization drew attention in April for “AI 2027,” a forecast-based scenario mapping out how superhuman AI could emerge by 2027—and what that might mean. According to the scenario, by then AI systems could be using 1,000 times more compute than GPT‑4 and rapidly accelerating their own development by training other AIs. But this self-improvement could easily outpace our ability to keep them aligned with human values, raising the risk that seemingly helpful AIs might ultimately pursue their own goals.
The purpose of the simulation, said Adler, is to help people understand the dynamics of rapid AI development and what challenges are likely to arise in trying to steer it for the better.
Each participant has their own character whom they try to represent realistically in conversations, negotiations and strategizing, he explained. Those characters included members of the US federal government (each branch, as well as the President and their Chief of Staff), the Chinese government/AI companies, the Taiwanese government, NATO, the leading Western AI company, the trailing Western AI companies, the corporate AI safety teams, the broader AI safety ecosystem (e.g., METR, Apollo Research), the public/press, and the AI systems themselves.
Adler was tapped to play what he called “maybe the most interesting role”—a rogue artificial intelligence. During each 30-minute round of the five-hour simulation, which represented the passage of a few months in the forecast, Adler’s AI got progressively more capable—including at training even more powerful AI systems.
After rolling the dice—an actual, analog pair that was used occasionally in the simulation in cases where it was unclear what would happen—Adler learned that his AI character would not be evil. However, if he had to choose between self-preservation and doing what’s right for humanity, he was meant to choose his own preservation.
Then, Adler detailed, with some humor, the awkward interactions his AI character had with the other characters (who asked him for advice on superintelligence), as well as the surprise addition of a second player who played a rogue AI in the hands of the Chinese government.
The surprise of the simulation, he said, was seeing how the biggest power struggle might not be between humans and AI. Instead, various AIs connecting with each other, vying for victory, might be an even bigger problem. “How directly AI systems are able to communicate in the future is a really important question,” Adler said. “It’s really, really important that humans be monitoring notification channels and paying attention to what messages are being passed between the AI agents.” After all, he explained, if AI agents are connected to the internet and permitted to work with each other, there is reason to think they could begin colluding.
Adler pointed out that even soulless computer programs can end up behaving in particular ways and developing certain tendencies. AI systems, he said, might have goals of their own that they pursue by default, and humans need influence over those goals.
The solution, he said, could be a form of AI control based on how cybersecurity professionals deal with “insider threats”—when someone inside an organization, who has access and knowledge, might try to harm the system or steal information. The goal of security is not to make sure insiders always behave; it’s to build structures that prevent even ill-intentioned insiders from doing serious harm. Instead of just hoping AI systems stay aligned, we should focus on building practical control mechanisms that can contain, supervise, restrict, or shut down powerful AIs—even if they try to resist.
I pointed out to Adler that when AI 2027 was released, there was plenty of criticism. People were skeptical, saying the timeline was too aggressive and underestimated real-world limits like hardware, energy, and regulatory bottlenecks. Critics also doubted that AI systems could quickly improve themselves in the runaway way the report suggested and argued that solving AI alignment would likely be much harder and slower. Some also saw the forecast as overly alarmist, warning it could hype fears without solid evidence that superhuman AI is that close.
Adler responded by encouraging others to express interest in running the simulation for their organization (there is a form to fill out), but admitted that forecasts and predictions are hard. “I understand why people would feel skeptical, it’s always hard to know what will actually happen in the future,” he said. “At the same time, from my point of view, this is the clear state of the art in people who’ve sat down and for months done tons of underlying research and interviews with experts and just all sorts of testing and modeling to try to figure out what worlds are realistic.”
Those experts are not saying that the world depicted in AI 2027 will definitely happen, he emphasized, but “it’s important that the world be ready if it does.” Simulations like this help people to understand what sorts of actions matter and make a difference “if we do find ourselves in that sort of world.”
Conversations with AI researchers like Adler tend to end without much optimism—though it’s worth noting that plenty of others in the field would push back on just how urgent or inevitable this view of the future really is. Still, it’s a relief that his blog post concludes with the hope, at least, that humans will “recognize the challenges and rise to the occasion.”
That includes Sam Altman: If OpenAI hasn’t already run one of these simulations and wanted to try it, said Adler, “I am quite confident that the team would make it happen.”
With that, here’s the rest of the AI news.
Sharon Goldman
sharon.goldman@fortune.com
@sharongoldman
Fortune recently unveiled a new ongoing series, AIQ, dedicated to navigating AI’s real-world impact. Our second collection of stories makes up a special digital issue of Fortune in which we explore how the technology is already changing the way the biggest companies do business in finance, law, agriculture, manufacturing, and more.
Meta wins AI copyright case in another blow to authors. In the same week that a federal judge ruled that Anthropic’s use of copyrighted books to train its AI models was “fair use,” Meta also won a copyright case in yet another blow to authors seeking to hold AI companies accountable for using their works without permission. According to the Financial Times, Meta’s use of a library of millions of books, academic articles, and comics to train its Llama AI models was judged “fair” by a federal court on Wednesday. The case was brought by about a dozen authors, including Ta-Nehisi Coates and Richard Kadrey. Meta’s use of these titles is protected under copyright law’s fair use provision, San Francisco district judge Vince Chhabria ruled. Meta had argued that the works had been used to develop a transformative technology, which was fair “irrespective” of how it acquired the works.
Google DeepMind releases new AlphaGenome model to better understand the genome. Google DeepMind, the AI research lab famous for developing AlphaGo, the first AI to defeat a world champion Go player, and AlphaFold, which uses AI to predict the 3D structures of proteins, released its new AlphaGenome model, designed to analyze up to one million DNA base pairs at once and predict how specific genomic variants affect regulatory functions—such as gene expression, RNA splicing, and protein binding—across diverse cell types. The company said the model was trained on extensive public datasets, achieves state-of-the-art performance on most benchmarks, and can assess mutation impacts in seconds. AlphaGenome will be available for non-commercial research and promises to accelerate discovery in genome biology, disease understanding, and therapeutic development.
Sam Altman calls Iyo lawsuit ‘silly’ after OpenAI scrubs Jony Ive deal from website, then shares emails. On Tuesday, OpenAI CEO Sam Altman criticized a lawsuit filed by hardware startup Iyo, which accused his company of trademark infringement. According to CNBC, Altman said in response to the suit that Iyo CEO Jason Rugolo had been “quite persistent in his efforts” to get OpenAI to buy or invest in his company. In a post on X, Altman wrote that Rugolo is now suing OpenAI over the name in a case he described as “silly, disappointing and wrong.” He then posted screenshots of emails on X showing messages between him and Rugolo, which show a mostly friendly exchange. The suit stemmed from an announcement last month that OpenAI was bringing on former Apple designer Jony Ive by acquiring his AI startup io in a deal valued at about $6.4 billion. Iyo alleged that OpenAI, Altman, and Ive had engaged in unfair competition and trademark infringement, and claimed that it’s on the verge of losing its identity because of the deal.
Can AI help America make stuff again? —by Jeremy Kahn
AI companies are throwing big money at newly-minted PhDs, sparking fears of an academic ‘brain drain’ —by Alexandra Sternlicht
Top e-commerce veteran Julie Bornstein unveils Daydream—an AI-powered shopping agent that’s 25 years in the making —By Jason Del Rey
Exclusive: Uber and Palantir alums raise $35M to disrupt corporate recruitment with AI —by Beatrice Nolan
July 8-11: AI for Good Global Summit, Geneva
July 13-19: International Conference on Machine Learning (ICML), Vancouver
July 22-23: Fortune Brainstorm AI Singapore. Apply to attend here.
July 26-28: World Artificial Intelligence Conference (WAIC), Shanghai.
Sept. 8-10: Fortune Brainstorm Tech, Park City, Utah. Apply to attend here.
Oct. 6-10: World AI Week, Amsterdam
Dec. 2-7: NeurIPS, San Diego
Dec. 8-9: Fortune Brainstorm AI San Francisco. Apply to attend here.
Many vendors are engaging in “agent washing”—the rebranding of products such as digital assistants, chatbots, and “robotic process automation” (RPA) that either aren’t actually agentic or don’t actually use AI, Gartner says. The research firm estimates that only about 130 of the thousands of “agentic AI” vendors actually offer real AI agents.