OpenAI introduces GPT-4o, its faster new AI model that is free for all users
OpenAI on Monday introduced a new AI model and a desktop version of ChatGPT. GPT-4o offers enhanced speed, multilingual support, and omnimodal functions, promising a new era in AI interaction and accessibility.
On Monday, OpenAI unveiled its latest flagship AI model, GPT-4o, alongside a new desktop app and enhancements to its voice assistant capabilities. Mira Murati, the company's Chief Technology Officer, took the stage at OpenAI's headquarters to present the new model as a significant advance in AI. GPT-4o will now be available to free users, offering the faster and more accurate AI experience previously exclusive to paid customers.
“This is the first time that we are really making a huge step forward when it comes to the ease of use,” said Murati during the live demo. “This interaction becomes much more natural and far, far easier.”
The San Francisco start-up showcased a series of improvements to its GPT-4 model, including enhancements in its ability to interpret voice, video, images, and code within a unified interface. The update “provides GPT-4 level intelligence, but it’s much faster and improves on capabilities across text, vision, and audio”, stated Murati before demonstrating live voice translation across languages.
Key features of GPT-4o
The “o” in GPT-4o stands for omni, indicating its versatility. According to Murati, the new model enables ChatGPT to handle 50 different languages with enhanced speed and quality. Moreover, it is accessible through OpenAI’s API, allowing developers to start building applications with the new model immediately. Murati said that GPT-4o is twice as fast as GPT-4 Turbo and costs half as much.
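For developers, that API access works like any other chat model. As a minimal sketch (assuming the official `openai` Python SDK, v1 or later, and an `OPENAI_API_KEY` set in the environment; the prompt text is purely illustrative), a request simply names `gpt-4o` as the model:

```python
# Minimal sketch: calling GPT-4o via the OpenAI Chat Completions API.
# Assumes the `openai` Python package (v1+) is installed and
# OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # the new model name exposed in the API
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Translate 'good morning' into Italian."},
    ],
)

print(response.choices[0].message.content)
```

In principle, swapping `model="gpt-4o"` in place of an existing model name is all a current integration needs to pick up the speed and cost improvements Murati described.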
During the presentation, OpenAI team members showcased the model’s audio capabilities by using it to help calm someone before a public speech. Mark Chen, an OpenAI researcher, highlighted the model’s ability to perceive emotions and handle interruptions from users. The team also demonstrated its capability to analyse facial expressions to discern the emotions of users.
In terms of interaction, ChatGPT’s audio mode greeted users with a cheerful message. OpenAI plans to test Voice Mode in the coming weeks, providing early access to paid subscribers of ChatGPT Plus. The company claimed that the new model can respond to audio prompts at latencies comparable to human conversational response times.
Chen demonstrated the model’s versatility by asking it to tell a bedtime story, adjust its voice tone to be dramatic or robotic, and even sing the story. Additionally, OpenAI stated that the new model can function as a translator, including in audio mode, as demonstrated by Chen conversing with Murati in different languages.
OpenAI GPT-4o launch and impact