
OpenAI Launches GPT-4o Long Output Model with 16X Increase in Token Outputs

OpenAI, despite facing financial difficulties, continues to release new models and updates. Recently, the company announced a new variation of its GPT-4o model called GPT-4o Long Output. The new model offers a significant increase in token output, allowing users to receive responses up to 64,000 tokens long, compared with the original GPT-4o's 4,000-token limit. OpenAI made this change based on customer feedback indicating a need for longer output contexts.
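As a rough illustration, requesting one of these longer responses might look like the sketch below, built on the OpenAI Python SDK's standard chat completions call. The model identifier used here is a placeholder assumption, not a confirmed name, since access is currently limited to alpha partners.

```python
# Minimal sketch of requesting a long response, assuming alpha access.
# "gpt-4o-long-output" is a placeholder identifier, not a confirmed model name.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-long-output",   # placeholder identifier
    max_tokens=64_000,            # request up to the new output ceiling
    messages=[
        {"role": "system", "content": "You are a thorough code reviewer."},
        {"role": "user", "content": "Refactor this module and explain every change in detail: ..."},
    ],
)

print(response.choices[0].message.content)
```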

The extended output capability of GPT-4o Long Output is especially beneficial for applications that require detailed and extensive responses, such as code editing and writing improvement. By providing more comprehensive and nuanced answers, the model can significantly enhance these use cases.

It’s important to note the distinction between the context window and the output limit. The context window, which is shared between input and output tokens, remains capped at 128,000 tokens for both GPT-4o and GPT-4o Long Output; only the maximum number of output tokens has grown, from 4,000 to 64,000. In practice, this means a GPT-4o user requesting the full 4,000-token output can supply up to 124,000 tokens of input, while a GPT-4o Long Output user requesting the full 64,000-token output can supply up to 64,000 tokens of input.
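Because the window is shared, the maximum prompt size shrinks as the requested output grows. A small back-of-the-envelope helper, using only the figures from this article, makes the trade-off concrete:

```python
# Token budgets from the article; the context window is shared by input and output.
CONTEXT_WINDOW = 128_000

def max_input_tokens(requested_output: int, context_window: int = CONTEXT_WINDOW) -> int:
    """Largest prompt that still leaves room for the requested output."""
    return context_window - requested_output

# GPT-4o: asking for the full 4,000-token output leaves 124,000 tokens for input.
print(max_input_tokens(4_000))    # 124000

# GPT-4o Long Output: the full 64,000-token output leaves 64,000 tokens for input.
print(max_input_tokens(64_000))   # 64000
```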

OpenAI has priced the GPT-4o Long Output model aggressively, at $6 per 1 million input tokens and $18 per 1 million output tokens. This pricing strategy aligns with OpenAI’s goal of making powerful AI accessible to a wide range of developers.
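At those rates, estimating the cost of a single long request is a simple multiplication; the sketch below just applies the published per-million prices to the token counts of one call.

```python
# Published alpha pricing: $6 per 1M input tokens, $18 per 1M output tokens.
INPUT_PRICE_PER_M = 6.00
OUTPUT_PRICE_PER_M = 18.00

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Approximate USD cost of one request at the GPT-4o Long Output alpha rates."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Example: a 64,000-token prompt that returns a full 64,000-token response.
print(f"${request_cost(64_000, 64_000):.2f}")  # $1.54
```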

Currently, access to the GPT-4o Long Output model is limited to a small group of trusted partners for alpha testing. OpenAI aims to gather feedback on how the extended output meets user needs. Depending on the outcomes of this testing phase, OpenAI may expand access to a broader customer base.

The ongoing alpha test will provide valuable insight into the practical applications and benefits of the extended output, and positive feedback from the initial partners would pave the way for wider availability. With the GPT-4o Long Output model, OpenAI aims to address a broader range of customer requests and power applications that require detailed, long-form responses.