
OpenAI Unveils GPT-4o mini: A Faster, More Affordable Small AI Model for Developers


OpenAI has introduced its latest small artificial intelligence model, GPT-4o mini, which promises faster performance and lower costs than its previous state-of-the-art AI models. The company announced that users can now access GPT-4o mini through the ChatGPT mobile and web applications, with enterprise users gaining access the following week.

GPT-4o mini is said to outperform other small AI models on reasoning tasks involving text and vision. This matters because developers are increasingly turning to smaller AI models for their speed and affordability relative to larger models like Claude 3.5 Sonnet or GPT-4o. For simple, high-volume tasks that call for an AI model, GPT-4o mini offers a viable alternative.

In terms of performance, OpenAI claims that GPT-4o mini scores 82% on the MMLU reasoning benchmark, surpassing Claude 3 Haiku (75%) and Gemini 1.5 Flash (79%). It also scores 87% on the MGSM benchmark for mathematical reasoning, outperforming Flash (78%) and Haiku (72%). These results point to GPT-4o mini's stronger capabilities on complex tasks relative to its small-model peers.

One of GPT-4o mini's key advantages is its affordability. OpenAI states that it is over sixty percent cheaper than GPT-3.5 Turbo and more cost-effective to operate than earlier versions, making it a more accessible option for developers and organizations with budget constraints. GPT-4o mini's application programming interface (API) currently supports both vision and text, with audio and video capabilities planned for the future.
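As a rough illustration of the text-plus-vision input the article describes, the sketch below assembles a request payload in the format used by OpenAI's Chat Completions API. The model identifier `gpt-4o-mini` and the message structure follow OpenAI's published API documentation; the prompt and image URL are placeholders.

```python
def build_vision_request(prompt: str, image_url: str) -> dict:
    """Assemble a chat-completion payload combining text and an image."""
    return {
        "model": "gpt-4o-mini",
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }

# With the official `openai` client installed and an API key configured,
# the payload could be sent as:
#   from openai import OpenAI
#   client = OpenAI()
#   resp = client.chat.completions.create(**build_vision_request(
#       "Describe this chart.", "https://example.com/chart.png"))
#   print(resp.choices[0].message.content)
```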

In an interview with TechCrunch, Olivier Godement, head of Product API at OpenAI, emphasized the importance of making AI models more economical to enable widespread adoption, and said he believes GPT-4o mini represents a significant step toward that goal. At 15 cents per million input tokens and 60 cents per million output tokens, developers using OpenAI's API can leverage the model's capabilities at a low price.
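The quoted rates make cost estimation simple arithmetic. A minimal sketch, using the article's figures of $0.15 per million input tokens and $0.60 per million output tokens (token counts in the example are arbitrary):

```python
INPUT_PRICE_PER_M = 0.15   # USD per 1M input tokens, per the quoted rate
OUTPUT_PRICE_PER_M = 0.60  # USD per 1M output tokens, per the quoted rate

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated API cost in USD for a given token usage."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M + \
           (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# e.g. 2M input tokens and 500k output tokens:
# 2 * $0.15 + 0.5 * $0.60 = $0.60 total
```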

GPT-4o mini has also demonstrated impressive speed compared to similar models. According to George Cameron, co-founder of Artificial Analysis, GPT-4o mini generates a median of 202 tokens per second, making it remarkably fast. That makes it attractive for use cases where speed is critical, such as consumer applications and agent-based workflows built on LLMs.
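To put that throughput figure in practical terms, the sketch below estimates how long a completion of a given length would take to stream at the reported 202 tokens per second (ignoring network and queuing latency, which this simple division does not capture):

```python
MEDIAN_TOKENS_PER_SECOND = 202  # throughput reported by Artificial Analysis

def generation_time_seconds(output_tokens: int,
                            tokens_per_second: float = MEDIAN_TOKENS_PER_SECOND) -> float:
    """Time to stream a completion of the given length, excluding network latency."""
    return output_tokens / tokens_per_second

# A 500-token answer would stream in roughly 500 / 202 ≈ 2.5 seconds.
```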

In addition to the GPT-4o Tiny, OpenAI has introduced new tools for ChatGPT Enterprise. These tools aim to assist businesses operating in highly regulated sectors, such as government, healthcare, legal services, and finance, in complying with recording and auditing standards. The Enterprise Compliance API enables administrators to audit and take action on their ChatGPT Enterprise data, including workspace interactions and contributed files. Administrators also have increased control over workspace GPTs, allowing them to create an authorized list of domains for communication.

Overall, the introduction of GPT-4o mini and the new tools for ChatGPT Enterprise highlight OpenAI's commitment to advancing AI technology while addressing the needs of different user segments. With its strong performance, affordability, and speed, GPT-4o mini offers developers and businesses a capable option for a wide range of AI applications.
