Concerns about OpenAI’s transparency and model evaluation practices have arisen from a disparity between first- and third-party benchmark results for its o3 AI model. When OpenAI introduced o3 in December, the company claimed the model could correctly answer just over 25% of the questions on FrontierMath, a challenging set of math problems, far outpacing the competition; the next-best model answered only about 2% correctly.
During a livestream, Mark Chen, OpenAI’s chief research officer, said, “Currently, all other products are below 2% on FrontierMath. Internally, we see o3 achieving over 25% under intensive testing conditions.” However, that figure appears to be an upper bound, achieved by a more capable version of o3 than the one OpenAI released publicly last week.
On Friday, Epoch AI, the organization behind FrontierMath, disclosed the results of its independent benchmarking of o3, which put the model’s score at around 10%, well below OpenAI’s highest claimed result.
This discrepancy does not necessarily mean OpenAI was dishonest. The benchmark results the company published in December are consistent with the lower score Epoch observed. Epoch also noted that its testing setup likely differs from OpenAI’s, and that it used a more recent release of FrontierMath for its evaluation: “The variance between our results and OpenAI’s may stem from their use of a stronger internal framework, more test-time computing resources, or variations in the subsets of FrontierMath used for evaluation.”
A post on X from the ARC Prize Foundation, which tested a pre-release version of o3, supports Epoch’s findings: the publicly available o3 model “is a different model…designed for chat/product applications.” ARC Prize also noted that “all released o3 compute tiers are smaller than the version we benchmarked,” and that larger compute tiers generally achieve higher benchmark scores.
Wenda Zhou, a technical staff member at OpenAI, mentioned during a livestream that the production version of o3 is “more optimized for practical applications” and faster than the version presented in December, which may result in benchmark “disparities.” He remarked, “[W]e’ve made adjustments to enhance cost-efficiency and functionality. We still believe this is a superior model…answers will come faster, addressing a genuine issue with these models.”
The fact that o3’s public release falls short of OpenAI’s earlier testing claims is somewhat beside the point, given that the company’s o3-mini-high and o4-mini models already outperform o3 on FrontierMath. OpenAI also plans to launch a more powerful variant, o3-pro, in the near future.
This situation underscores the importance of not taking AI benchmark claims at face value, especially from companies with vested interests in the outcomes. Benchmarking controversies are increasingly common within the AI sector as vendors strive to attract attention and recognition with their latest models.
In January, Epoch faced criticism for delaying the announcement of its funding from OpenAI until after o3’s release, leaving many academic contributors unaware of OpenAI’s involvement until it was publicly disclosed. More recently, Elon Musk’s xAI was accused of sharing misleading benchmark data for its latest AI model, Grok 3, and just this month, Meta acknowledged promoting benchmark scores from a version of a model that differed from what was ultimately made available to developers.