OpenAI unveils GPT-4o, a faster and free iteration of its GPT-4 model
Photo by Growtika / Unsplash

GPT-4o brings a pivotal evolution to the GPT models, transforming ChatGPT into a digital personal assistant that responds in real time and observes the world around you.

by Henry Chikwem

Over the past week, there have been conflicting reports about what OpenAI planned to announce: an AI search engine to rival Google and Perplexity, a voice assistant baked into GPT-4, or the launch of GPT-5 ahead of the Google I/O event.

Well, OpenAI has put those speculations to bed. At its event yesterday, the company instead launched an improved iteration of its GPT-4 model called GPT-4o ("o" for omni), which is faster, free, and improves capabilities across text, vision, and audio.

Notably, the voice mode of ChatGPT receives a substantial enhancement as part of the GPT-4o rollout. Evolving beyond its previous limitations of responding to one prompt at a time and working only with what it can hear, the app now takes on characteristics akin to the intelligent voice assistant in the 2013 film "Her," offering real-time responsiveness and environmental awareness.

Its multimodal abilities will also allow it to interact seamlessly via text and vision, enabling it to interpret and engage in real-time spoken conversations about screenshots, images, documents, and charts uploaded by users.
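
For developers, this same text-and-image capability is also exposed through OpenAI's API. Here is a minimal sketch using the official openai Python SDK (v1.x); the prompt text and image URL are placeholder values, not part of OpenAI's announcement:

```python
# Minimal sketch: asking GPT-4o about an image via OpenAI's chat completions API.
# Assumes the `openai` Python SDK (v1.x) is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            # A single message can mix text and image parts.
            "content": [
                {"type": "text", "text": "What trend does this chart show?"},
                # Placeholder URL standing in for a user-uploaded chart.
                {"type": "image_url", "image_url": {"url": "https://example.com/chart.png"}},
            ],
        }
    ],
)

print(response.choices[0].message.content)
```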

The updated version is also getting memory capabilities, allowing it to learn from past interactions and provide more contextually relevant responses. Furthermore, GPT-4o facilitates real-time translation.

Meanwhile, the full potential of GPT-4o is still unfolding. For now, only its text and image capabilities are available, with the remaining features rolling out in the coming days.

While it is nice to know that this model iteration is free and does not require any form of subscription, it may still be worth keeping your subscription if you already have one.

This is because ChatGPT Plus subscribers get a higher usage limit on GPT-4o than non-subscribers: they can send it five times as many prompts before having to wait or switch to a less powerful model.

Interestingly, this is the first time OpenAI is unveiling a new language model without a subscription fee, unlike GPT-4, which was introduced in March last year, and GPT-4 Turbo, both of which were made available only to ChatGPT Plus subscribers.

Going by what we know, this new voice mode feature will first be made available to ChatGPT Plus subscribers, who currently number about 250,000 globally according to data from Nerdynav, before it rolls out to non-subscribers.
