OpenAI Unveils o3 and o4-mini: The Pinnacle of Its Model Lineup with Complete Tool Functionality
In Brief
The latest additions to OpenAI's suite, o3 and o4-mini, showcase a major leap forward in the potential of its reasoning technology.

AI research organization OpenAI has announced the launch of its latest models, o3 and o4-mini, which mark a notable improvement in its reasoning capabilities.
For the first time, these models allow ChatGPT to autonomously and intelligently utilize and combine all available tools on the platform. This includes searching online, analyzing uploaded data with Python, processing visual content, and creating images. The o3 and o4-mini models are engineered to assess when and how to use each tool effectively, ensuring prompt and structured answers—often in less than a minute—making it easier to manage complex, multi-step questions. This development signals a shift towards a more self-sufficient assistant that's capable of performing tasks with minimal user input.
OpenAI claims that the o3 model is its most sophisticated reasoning engine to date. It has set new benchmarks across fields such as software development, mathematics, the sciences, and visual comprehension, and has outperformed previous records on coding evaluations such as Codeforces and SWE-bench without additional fine-tuning. The model performs exceptionally well on visually demanding tasks, such as interpreting graphs and charts. Independent evaluators have noted that o3 makes 20% fewer major reasoning errors than its predecessor, particularly in applied areas such as programming, business consulting, and creative ideation in scientific and technical fields. Feedback from early users highlighted the model's robust analytical skills and its utility as a collaborative thought partner, especially in biology, engineering, and mathematics.
On the other hand, the o4-mini model presents a streamlined option that balances robust performance with lower computational demand. Despite its reduced size, it has achieved stellar scores in competitive evaluations like AIME 2024 and 2025, performing better than the previous o3-mini in disciplines outside of STEM, such as data analytics and general knowledge. Its operational efficiency makes it ideal for large-scale tasks that still require sound reasoning.
Both models have been examined by third-party analysts, who noted significant improvements in their ability to follow instructions and deliver useful, verifiable outputs, attributed in part to better integration of online sources and a stronger grasp of context. Compared with prior versions, o3 and o4-mini also feel more conversational, drawing on memory and the context of past exchanges to offer responses that are both tailored and coherent.
Significantly, these models can now incorporate visual elements directly into their problem-solving processes. Rather than just identifying images, they integrate these visuals into their reasoning, facilitating a more refined approach to tackling problems that combines visual insights with textual reasoning. They have achieved cutting-edge results in multimodal tasks that require an understanding of both images and language.
Users are now empowered to upload a variety of visual inputs—from textbook illustrations and handwritten pages to photos of whiteboards—and the models can interpret these visuals, even under less-than-ideal conditions like poor resolution or flipped orientations. Coupled with tool access, the models can modify these images during their analysis, making adjustments to angles or zoom levels to extract the necessary details.
This enhancement in visual reasoning has broadened the scope of tasks that these models can tackle, delivering high precision in areas that have historically posed challenges for AI systems.
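For readers who want a concrete picture of that pipeline, here is a minimal sketch. The models perform these image corrections internally as part of their reasoning; the snippet below only approximates the same steps client-side with Pillow before handing the image to a vision-capable model through OpenAI's Python SDK. The file name, rotation angle, and model identifier are illustrative assumptions, not details from the announcement.

```python
import base64
from io import BytesIO

from PIL import Image
from openai import OpenAI

# Hypothetical input: a sideways, low-resolution photo of a whiteboard.
img = Image.open("whiteboard.jpg")
img = img.rotate(-90, expand=True)                 # correct the flipped orientation
img = img.resize((img.width * 2, img.height * 2))  # crude zoom for a low-res shot

# Re-encode the corrected image as a base64 data URL.
buf = BytesIO()
img.save(buf, format="JPEG")
data_url = "data:image/jpeg;base64," + base64.b64encode(buf.getvalue()).decode()

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="o3",  # assumption: substitute any vision-capable model you have API access to
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Summarize the derivation on this whiteboard."},
            {"type": "image_url", "image_url": {"url": data_url}},
        ],
    }],
)
print(response.choices[0].message.content)
```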
OpenAI Launches o3 and o4-mini Models Across Various ChatGPT Plans, Available Now for Plus, Pro, Team, and Free Users
OpenAI's newest models, o3 and o4-mini, come fully equipped with access to ChatGPT’s suite of internal tools and can also utilize user-defined tools through API function calling. These models are honed not only to perform tasks but to judiciously decide which tools to deploy and when, enabling them to craft responses to intricate prompts with well-organized and pertinent outputs, generally within a minute.
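To illustrate the function-calling side, the sketch below registers a user-defined tool with OpenAI's Python SDK and lets the model decide whether, and with what arguments, to invoke it. The tool itself (lookup_demand), its schema, and its return value are hypothetical stand-ins, and API availability of o4-mini to a given account is an assumption.

```python
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def lookup_demand(state: str, year: int) -> dict:
    """Hypothetical stand-in; a real tool would query a utility data source."""
    return {"state": state, "year": year, "demand_gwh": 285_000}  # made-up figure

# A user-defined tool, described with a JSON schema so the model can
# decide on its own whether and how to call it.
tools = [{
    "type": "function",
    "function": {
        "name": "lookup_demand",
        "description": "Return historical electricity demand (GWh) for a US state and year.",
        "parameters": {
            "type": "object",
            "properties": {
                "state": {"type": "string", "description": "Two-letter code, e.g. CA"},
                "year": {"type": "integer"},
            },
            "required": ["state", "year"],
        },
    },
}]

messages = [{"role": "user", "content":
             "How did California's 2024 summer electricity demand compare to 2023?"}]

response = client.chat.completions.create(model="o4-mini", messages=messages, tools=tools)
msg = response.choices[0].message

# If the model chose to call the tool, run it and hand the result back.
if msg.tool_calls:
    messages.append(msg)
    for call in msg.tool_calls:
        args = json.loads(call.function.arguments)
        result = lookup_demand(**args)
        messages.append({"role": "tool", "tool_call_id": call.id,
                         "content": json.dumps(result)})
    final = client.chat.completions.create(model="o4-mini", messages=messages, tools=tools)
    print(final.choices[0].message.content)
else:
    print(msg.content)
```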
To exemplify their capabilities, imagine someone requesting a prediction for California's summer energy consumption compared to the prior year. The model can independently scour the internet for the latest utility statistics, use Python to build a forecasting model, create visuals to display trends, and explain the variables that affect the outcome. What makes this possible is its ability to chain multiple actions fluidly, adapting in real time as it retrieves information. If the initial search doesn't yield sufficient data, the model can refine its query and continue searching, showcasing a flexible, iterative reasoning style that mirrors human thought. This adaptable reasoning logic enables o3 and o4-mini to tackle requests that depend on current data or require a combination of real-time information, thorough analysis, and multifaceted outputs, something earlier models struggled to do without explicit guidance.
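The forecasting step in that walkthrough is exactly the kind of script such a model might write and run in its Python tool. Below is a minimal stand-in: it fits a linear trend to past summer peaks and extrapolates one year ahead. The demand figures are synthetic, invented purely for illustration, not real California data.

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic summer peak demand (GW) for illustration only -- not real data.
years = np.array([2019, 2020, 2021, 2022, 2023, 2024])
peak_gw = np.array([44.3, 47.1, 43.9, 52.0, 44.5, 48.0])

# Fit a linear trend and extrapolate to next summer.
slope, intercept = np.polyfit(years, peak_gw, 1)
forecast_year = 2025
forecast = slope * forecast_year + intercept
print(f"Trend: {slope:+.2f} GW/year; {forecast_year} forecast: {forecast:.1f} GW")

# Visualize the history alongside the forecast point.
plt.plot(years, peak_gw, "o-", label="observed summer peak")
plt.plot(forecast_year, forecast, "r*", markersize=12, label="trend forecast")
plt.xlabel("Year")
plt.ylabel("Peak demand (GW)")
plt.legend()
plt.savefig("demand_forecast.png")
```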
Starting today, these models are being systematically rolled out across several ChatGPT subscription tiers. Users subscribed to Plus, Pro, and Team will see o3, o4-mini, and o4-mini-high in their model selection menus, phasing out earlier versions like o1 and o3-mini. Users in the Enterprise and Education sectors will gain access within a week. For those on the complimentary version of ChatGPT, o4-mini can be tried by selecting the “Think” option prior to submitting a prompt. It’s worth noting that existing rate restrictions will remain unchanged despite the model upgrades.