Only a few hours after announcing a significant price drop for its o3 reasoning model, OpenAI made o3-pro, an even more powerful version, available to developers.
O3-pro is “designed to think longer and provide the most reliable responses” and has access to more software tool integrations than its predecessor, which makes it potentially attractive for enterprises and developers looking for high levels of detail and accuracy.
However, the model will also be slower than what many developers are used to, precisely because of its access to the tools that OpenAI says make it more accurate.
“Because o3-pro has access to tools, responses typically take longer than o1-pro to complete. We recommend using it for challenging questions where reliability matters more than speed, and waiting a few minutes is worth the tradeoff,” the company told reporters.
But how long? We have asked OpenAI how much slower o3-pro is than o3, on average, at producing responses, and will update this story when we receive an answer from the company.
On X, Hyperbolic Labs co-founder and CTO Yuchen Jin published several screenshots from his o3-pro usage showing that it took 3 minutes and $80 worth of tokens to respond to the prompt: “Hi, I’m Sam Altman.”
Bindu Reddy, CEO of Abacus.AI, also said that o3-pro took 2 minutes to answer “Hey there.”
https://twitter.com/bindureddy/status/1932562799772971295
Developers can access o3-pro via the OpenAI API, and it is also available to ChatGPT Pro and Team users. The new model replaces o1-pro in the model selector for paying ChatGPT users.
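For developers who want to try it, the request pattern looks the same as for other OpenAI models. The sketch below is a minimal example, assuming the standard openai Python package is installed, that the model identifier is “o3-pro”, and that the Responses API is used; the generous timeout reflects the longer thinking times reported above.

```python
# Minimal sketch of calling o3-pro through the OpenAI Python SDK.
# Assumptions: the "openai" package is installed, OPENAI_API_KEY is set,
# and "o3-pro" is the model identifier exposed to your account.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.responses.create(
    model="o3-pro",  # assumed model identifier
    input="Explain the tradeoff between response speed and reliability in reasoning models.",
    timeout=600,     # o3-pro can take minutes to respond, so allow a long timeout
)

print(response.output_text)
```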
OpenAI said that o3-pro “has access to tools that make ChatGPT useful,” such as web search, file analysis, reasoning over visual inputs, Python use, and response personalization.
The model, however, is expensive, which may give some enterprise developers pause. Based on OpenAI’s pricing page, o3-pro costs $20 per million input tokens and $80 per million output tokens, compared to o3 itself, which has now dropped to $2 and $8, a tenth of the price.
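For a rough sense of what that pricing means in practice, the short Python sketch below estimates per-request cost at those per-million-token rates; the token counts are hypothetical examples, and real bills also depend on the reasoning tokens the model generates as output.

```python
# Rough cost comparison at the quoted per-million-token rates.
# Token counts below are hypothetical; actual costs depend on real usage,
# including reasoning tokens billed as output.
RATES = {
    "o3-pro": {"input": 20.00, "output": 80.00},  # $ per 1M tokens
    "o3":     {"input": 2.00,  "output": 8.00},
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    rate = RATES[model]
    return (input_tokens / 1_000_000) * rate["input"] + \
           (output_tokens / 1_000_000) * rate["output"]

# Example: 5,000 input tokens and 20,000 output tokens per request.
for model in RATES:
    print(f"{model}: ${estimate_cost(model, 5_000, 20_000):.2f}")
# o3-pro: $1.70 vs o3: $0.17 -- a tenth of the price, as noted above.
```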
OpenAI launched o3 and o4-mini in April, expanding its reasoning-focused “o-series” of models that can “think with images.” The new model, o3-pro, uses the same underlying model as o3.
OpenAI’s evaluations showed that o3-pro can often outperform the base model. Expert reviewers ranked o3-pro higher in domains such as science, education, programming, business, and writing help. The company said o3-pro is more effective, more comprehensive, and better at following instructions.
Reasoning models have become a new battleground for model providers, with competitors like Google, Anthropic, and xAI, as well as rivals from China such as DeepSeek, coming out with their own models designed to think through answers.
Currently, o3-pro cannot generate images, and OpenAI has disabled temporary chats while it resolves a technical issue. Canvas, ChatGPT’s expanded workspace feature, is not yet accessible with o3-pro.
Some early users claim that o3-pro has performed remarkably well, but it is still early days, and the high cost of running it may dissuade some developers from experimenting.
See some initial reactions below:
As Ben Hylak, a former Apple Vision Pro interface designer and co-founder of AI observability startup Raindrop, wrote in a blog post about his early-access use of o3-pro: “It is noticeably better at discerning what its environment is; communicating precisely what tools it has access to; knowing when to ask questions about the outside world (rather than pretending it has the information or access); and choosing the right tool for the job.” OpenAI co-founder and CEO Sam Altman highlighted Hylak’s blog post in a post on X.
The launch also comes at a time when OpenAI says it has reached three million paying business users, up 50% since February.