“It will be a stressful job and you will dive into the deep end almost immediately,” wrote Sam Altman, CEO of OpenAI, in his announcement of the “head of readiness” position at OpenAI on Saturday.
In exchange for $555,000 per year, according to OpenAI’s job posting, the Readiness Manager is expected to “expand, strengthen, and guide” the existing readiness program within OpenAI’s Safety Systems department. This side of OpenAI builds the safeguards that, in theory, allow OpenAI models to “behave as expected in real-world contexts.”
But hey, wait a minute: are they saying that OpenAI’s models now behave as expected in real-world settings? In 2025, ChatGPT continued to hallucinate in legal filings, attracted hundreds of FTC complaints (including complaints that it was triggering mental health crises in users), and infamously transformed photos of women into bikini-clad deepfakes. Sora had its ability to make videos of figures like Martin Luther King, Jr. revoked because users were abusing the privilege, making revered historical figures say practically anything.
When cases related to problems with OpenAI products reach court, as in the wrongful death lawsuit brought by the family of Adam Raine, who allegedly received advice and encouragement from ChatGPT that led to his death, there is a legal argument to be made that users were abusing OpenAI products. In November, a filing from OpenAI’s lawyers cited rule violations as a potential cause of Raine’s death.
Whether you accept the abuse argument or not, it clearly plays an important role in how OpenAI makes sense of what its products do in society. In his X post announcing the Readiness Manager role, Altman acknowledges that the company’s models can impact people’s mental health and detect security vulnerabilities. We are heading, he says, “into a world where we need a more nuanced understanding and measurement of how these capabilities could be misused, and how we can limit these downsides both in our products and in the world, so that we can all reap the enormous benefits.”
After all, if the goal were simply to never cause harm, the quickest way to ensure that would be to remove ChatGPT and Sora from the market.
So the head of readiness at OpenAI is someone who will thread that needle and “[o]wn OpenAI’s end-to-end readiness strategy,” determining how to assess models for undesirable capabilities and designing ways to mitigate them. The announcement states that this person will be expected to “evolve the readiness framework as new external risks, capabilities or expectations emerge.” This can only mean finding potential new ways OpenAI products could harm people or society, and providing a rubric that allows the products to exist while demonstrating, presumably, that the risks have been sufficiently mitigated that OpenAI is not legally liable for seemingly inevitable future “downsides.”
Doing all of this would be hard enough at a company that is treading water, but OpenAI needs to take drastic measures to generate revenue and launch cutting-edge products as quickly as possible. In an interview last month, Altman strongly implied that he would take the company’s revenue from where it currently stands (apparently somewhere north of $13 billion a year) to $100 billion in less than two years. Altman said his company’s “consumer devices business will be a significant and important thing” and that “AI that can automate science will create enormous value.”
So if you want to oversee the “mitigation design” of new versions of OpenAI’s existing products, as well as new physical gadgets and platforms that don’t exist yet but are supposed to do things like “automate science,” all while the CEO breathes down your neck about the need to bring in roughly the same annual revenue as Walt Disney within the next two years, enjoy being OpenAI’s head of readiness. Try not to ruin the whole world with your new job.