OpenAI unveils GPT-4o mini, a smaller and cheaper AI model
OpenAI's GPT-4o mini is here to revolutionize affordable AI development.
OpenAI has just unveiled GPT-4o mini, a smaller, faster, and more cost-effective counterpart to its cutting-edge models, aimed at redefining what developers can achieve on a budget.
Released on Thursday, GPT-4o mini offers developers the speed and efficiency needed for high-volume tasks at a fraction of the cost of larger models like GPT-4. Priced at just 15 cents per million input tokens and 60 cents per million output tokens, it is roughly 98% cheaper than GPT-4 Turbo, which costs $10.00 per million input tokens and $30.00 per million output tokens.
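To put those prices in perspective, here is a back-of-the-envelope cost comparison in Python using the per-million-token figures quoted above; the workload of one million input and one million output tokens is a hypothetical example, not a published benchmark.

```python
# Back-of-the-envelope cost comparison using the published per-million-token prices.
PRICES = {
    "gpt-4o-mini": {"input": 0.15, "output": 0.60},    # USD per 1M tokens
    "gpt-4-turbo": {"input": 10.00, "output": 30.00},  # USD per 1M tokens
}

def cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of a workload for a given model."""
    p = PRICES[model]
    return (input_tokens / 1_000_000) * p["input"] + (output_tokens / 1_000_000) * p["output"]

# Hypothetical workload: 1M input tokens and 1M output tokens.
mini = cost("gpt-4o-mini", 1_000_000, 1_000_000)    # $0.75
turbo = cost("gpt-4-turbo", 1_000_000, 1_000_000)   # $40.00
print(f"GPT-4o mini: ${mini:.2f}, GPT-4 Turbo: ${turbo:.2f}")
print(f"Savings: {1 - mini / turbo:.1%}")            # ~98.1%
```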
Despite being significantly cheaper to run than OpenAI's frontier models, GPT-4o mini's capabilities come close to those of GPT-4o. It supports both text and vision inputs, and OpenAI plans to add video and audio support in the future. The model offers a context window of 128,000 tokens, the same as GPT-4 Turbo, allowing it to handle extensive data inputs efficiently.
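As a concrete illustration, the sketch below shows how a developer might send a combined text-and-image request to the model through OpenAI's Python client; the prompt, image URL, and max_tokens value are illustrative placeholders, not part of OpenAI's announcement.

```python
# Minimal sketch of a text + vision request to GPT-4o mini via the openai Python client.
# The prompt, image URL, and token limit below are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",
    max_tokens=300,
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe what is shown in this chart."},
                {"type": "image_url", "image_url": {"url": "https://example.com/chart.png"}},
            ],
        }
    ],
)

print(response.choices[0].message.content)
```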
Additionally, GPT-4o mini is very fast relative to comparable models, with a median output speed of 202 tokens per second, more than twice as fast as GPT-4o and GPT-3.5 Turbo.
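For a rough sense of what that throughput means in practice, the calculation below converts the reported median speed into wall-clock time; the response length is a hypothetical example.

```python
# Rough latency estimate from the reported median output speed for GPT-4o mini.
TOKENS_PER_SECOND = 202   # median output speed cited above
response_length = 1_000   # hypothetical response size in tokens

print(f"~{response_length / TOKENS_PER_SECOND:.1f} s to stream a {response_length}-token reply")
# ~5.0 s; a model generating at half that speed would take roughly twice as long
```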
According to benchmarks from Artificial Analysis, GPT-4o mini scores 82% on MMLU, a reasoning benchmark, outperforming comparable small models from competitors: Gemini 1.5 Flash and Claude 3 Haiku score 79% and 75%, respectively.
With ChatGPT drawing over 180 million monthly users and about 100 million weekly active users, OpenAI's user base is projected to grow further with the introduction of the more affordable GPT-4o mini.
Looking ahead, OpenAI plans to continue enhancing GPT-4o mini, with future updates promising to expand its multimodal capabilities further.