From the moment OpenAI CEO Sam Altman walked on stage, it was clear this was not going to be a normal interview.
Altman and his chief operating officer, Brad Lightcap, awkwardly shuffled onto the stage of a packed San Francisco venue that typically hosts jazz concerts. Hundreds of people filled the theater's steep seats on Tuesday night to watch Kevin Roose, a columnist with The New York Times, and Platformer's Casey Newton record a live episode of their popular technology podcast, Hard Fork.
Altman and Lightcap were the main event, but they came out too early. Roose explained that he and Newton had planned to run through several headlines that had been written about OpenAI in the weeks leading up to the event, ideally before OpenAI's leaders were scheduled to come out.
"This is more fun that we're out here for this," said Altman. Moments later, the OpenAI CEO asked: "Are you going to talk about where you sue us because you don't like user privacy?"
Within minutes of the show starting, Altman steered the conversation toward The New York Times' lawsuit against OpenAI and its largest investor, Microsoft, in which the publisher alleges that Altman's company improperly used its articles to train large language models. Altman was particularly irritated by a recent development in the lawsuit, in which lawyers representing The New York Times asked OpenAI to retain data from consumer ChatGPT and API customers.
"The New York Times, one of the great institutions, truly, for a long time, is taking a position that we should have to preserve our users' logs even if they're chatting in private mode, even if they've asked us to delete them," said Altman. "Still love The New York Times, but that one we feel strongly about."
For a few minutes, OpenAI's CEO pressed the podcasters to share their personal opinions on the New York Times lawsuit; they demurred, noting that as journalists whose work appears in The New York Times, they are not involved in the lawsuit.
Altman and Lightcap's brash entrance lasted only a few minutes, and the rest of the interview proceeded, seemingly, as planned. But the flare-up felt indicative of an inflection point Silicon Valley seems to be approaching in its relationship with the media industry.
In recent years, several publishers have filed lawsuits against OpenAI, Anthropic, Google, and Meta for training their AI models on copyrighted works. At a high level, these lawsuits argue that AI models have the potential to devalue, and even replace, the copyrighted works produced by media institutions.
But the tides may be turning in favor of the tech companies. Earlier this week, OpenAI competitor Anthropic received a major win in its legal battle with publishers. A federal judge ruled that Anthropic's use of books to train its AI models was legal in some circumstances, which could have broad implications for other publishers' lawsuits against OpenAI, Google, and Meta.
Perhaps Altman and Lightcap felt emboldened by the industry's win heading into their live interview with journalists from The New York Times. But these days, OpenAI is fending off threats from every direction, and that became clear throughout the night.
Mark Zuckerberg has recently been trying to recruit OpenAI's top talent with $100 million compensation packages to join Meta's AI superintelligence lab, Altman revealed weeks ago on his brother's podcast.
When asked whether the Meta CEO really believes in superintelligent AI systems, or whether it's just a recruiting strategy, Lightcap quipped: "I think [Zuckerberg] believes he is superintelligent."
Later, Roose asked Altman about OpenAI's relationship with Microsoft, which has reportedly been pushed to a boiling point in recent months as the partners negotiate a new contract. While Microsoft was once a major accelerant for OpenAI, the two now compete in enterprise software and other areas.
"In any deep partnership, there are points of tension, and we certainly have those," said Altman. "We're both ambitious companies, so we do find flashpoints, but I would expect that it is something that we find deep value in for both sides for a very long time."
Today, OpenAI's leadership seems to spend a lot of time fighting off competitors and lawsuits. That could hinder OpenAI's ability to solve broader problems around AI, such as how to safely deploy highly intelligent AI systems at scale.
At one point, Newton asked OpenAI's leaders how they were thinking about recent stories of mentally unstable people using ChatGPT to traverse dangerous rabbit holes, including discussing conspiracy theories or suicide with the chatbot.
Altman said OpenAI takes a number of steps to prevent these conversations, such as by cutting them short early or directing users to professional services where they can get help.
"We don't want to slide into the mistakes that I think the previous generation of tech companies made by not reacting quickly enough," said Altman. To a follow-up question, the OpenAI CEO added: "However, to users that are in a fragile enough mental place, that are on the edge of a psychotic break, we haven't yet figured out how a warning gets through."