OpenAI has recently unveiled a more powerful version of its AI model, the o1-pro, offering enhanced performance for developers using its API. This new version is designed to provide "more reliable responses" for complex tasks, but comes at a premium price.
The cost of o1-pro is notably high: $150 per million input tokens (roughly 750,000 words) and $600 per million output tokens. That is double the price of OpenAI's GPT-4.5 and ten times the cost of the regular o1. Developers will need to weigh whether the benefits justify these increased costs.
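To make those rates concrete, here is a minimal sketch of how a single request's cost adds up at the published prices. The helper function and example token counts are illustrative assumptions, not part of OpenAI's API.

```python
# Rates from the article: $150 per 1M input tokens, $600 per 1M output tokens.
INPUT_RATE_PER_M = 150.00   # USD per million input tokens
OUTPUT_RATE_PER_M = 600.00  # USD per million output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of one o1-pro API call (hypothetical helper)."""
    return (input_tokens / 1_000_000) * INPUT_RATE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_RATE_PER_M

# Example: a 10,000-token prompt producing a 2,000-token response
cost = estimate_cost(10_000, 2_000)
print(f"${cost:.2f}")  # $1.50 input + $1.20 output = $2.70
```

Even a modest request like this costs several dollars, which is why the pricing draws attention.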
OpenAI justifies this pricing by promising that o1-pro will deliver better answers to the toughest challenges. An OpenAI spokesperson explained, “o1-pro in the API is a version of o1 that uses more computing to think harder and provide even better answers to the hardest problems.” The model is currently available to developers who have already spent at least $5 on OpenAI’s API services.
However, feedback on o1-pro has been somewhat mixed. While the model’s enhanced capabilities should theoretically make it more effective, some users have found that it struggles with simple tasks, like solving Sudoku puzzles or understanding optical illusion jokes. This has raised questions about whether the extra cost is truly justified for every developer.
OpenAI's own internal tests, conducted late last year, found that o1-pro performed only slightly better than the regular o1 on coding and math problems. The gains were mostly in reliability: o1-pro answered more consistently, but the improvements were not dramatic.
All things considered, the o1-pro model offers a powerful tool for developers willing to pay a premium for more dependable AI performance. However, given the mixed feedback and steep pricing, it remains to be seen whether this new model will be a must-have for developers or whether its performance can justify its high costs in the long run.
This article was created with AI assistance.