AMD Takes on NVIDIA in AI Chip Market with New Processors
Advanced Micro Devices (AMD) is gearing up to compete with NVIDIA in the artificial intelligence (AI) chip market over the next two years. During a presentation at the Computex technology trade show in Taipei, AMD CEO Lisa Su unveiled the company’s latest AI processors and outlined its goals for the industry.
The demand for powerful AI processors in data centers has been on the rise due to the growth of generative AI systems. Currently, NVIDIA holds an 80% market share in this space, but AMD aims to challenge its dominance with its new offerings.
To match NVIDIA’s annual release cadence, AMD is shortening its own product cycle, meaning both companies will introduce new chips and capabilities more frequently in response to market demand. Su emphasized that AI is AMD’s top priority and that the company is committed to developing state-of-the-art technology in this field.
During the presentation, AMD showcased the MI350 series, expected to deliver 35 times better AI inference performance than the current MI300 series. The MI350 series is projected to be released in 2025. Additionally, AMD announced the upcoming MI400 series, which will feature the “Next” architecture and is scheduled for delivery in 2026.
Industry experts, like Bob O’Donnell, Chief Analyst at Technalysis Research, believe that companies seeking alternatives to NVIDIA will be intrigued by AMD’s new solutions. O’Donnell noted that AMD is taking on NVIDIA head-on in this competitive market.
Su also forecast that AMD’s revenue from AI chip sales will reach around $4 billion in 2024, a $500 million increase over the company’s previous estimate.
With its latest AI processors and ambitious plans for the future, AMD is positioning itself as a formidable competitor to NVIDIA in the AI chip space. As the demand for AI technologies continues to grow, both companies are striving to deliver innovative solutions to meet market needs and stay ahead of the curve.