
Introducing Thunder: Lightning AI’s Next-Generation AI Compiler for Enhanced Model Training Speed

Lightning AI, in collaboration with Nvidia, has announced the release of Thunder, a source-to-source compiler designed to accelerate the training of AI models. The compiler improves efficiency across multiple GPUs and is reported to deliver a 40% speed-up when training large language models (LLMs) compared to unoptimized code in real-world scenarios.
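For readers curious what using a source-to-source compiler for PyTorch looks like in practice, below is a minimal sketch of wrapping an ordinary model with Thunder. It assumes the open-source lightning-thunder package and its thunder.jit entry point; the announcement does not cover installation or API details, so treat the snippet as illustrative rather than definitive.

```python
# Minimal sketch: compiling a PyTorch module with Thunder.
# Assumes the lightning-thunder package ("pip install lightning-thunder")
# exposes a thunder.jit entry point; the exact API may vary by version.
import torch
import thunder

# A small stand-in model; any torch.nn.Module would do.
model = torch.nn.Sequential(
    torch.nn.Linear(1024, 4096),
    torch.nn.GELU(),
    torch.nn.Linear(4096, 1024),
)

# thunder.jit traces the module and returns an optimized, drop-in callable.
compiled_model = thunder.jit(model)

x = torch.randn(8, 1024)
y = compiled_model(x)  # runs the Thunder-generated program
print(y.shape)         # torch.Size([8, 1024])
```

Because the compiled object is a drop-in replacement for the original module, an existing training loop can, in principle, adopt it without restructuring the rest of the code.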

The Thunder compiler was unveiled at Nvidia GTC, where Lightning AI showcased its commitment to building next-generation deep learning software for PyTorch that is compatible with Nvidia's suite of products. By optimizing how existing GPUs are used, Thunder addresses the challenge of extracting more performance from them without simply adding more hardware.

Led by PyTorch core developer Thomas Viehmann, Lightning AI is confident that the Thunder compiler will help scale generative AI models across multiple GPUs. The company's CEO and founder, William Falcon, expressed his excitement about working with Viehmann, citing his PyTorch expertise and his contribution to upcoming performance breakthroughs.

Training a model is often time-consuming and costly: it involves data collection, model configuration, and supervised fine-tuning, and it demands technical expertise, management, and ongoing optimization. Lightning AI's Thunder compiler addresses these challenges for organizations looking to expedite their workflows.

One of the key issues Thunder tackles is the underutilization of available GPUs: many customers throw more GPUs at the problem rather than fully using the capacity of the ones they already have. Luca Antiga, Lightning's Chief Technology Officer, emphasizes the importance of performance optimization and profiling tools in scaling model training. By using Thunder in conjunction with Lightning Studios and its profiling tools, customers can use GPUs effectively and train LLMs faster and at larger scale.

Thunder is now available for use following the release of Lightning 2.2 in February. Lightning Studios offers different pricing levels, catering to individual developers, engineers, researchers, scientists, startups, teams, and larger organizations.

In conclusion, Thunder, Lightning AI's new source-to-source compiler, promises to speed up AI model training by making better use of multiple GPUs. Compatible with Nvidia's suite of products and backed by the expertise of PyTorch core developer Thomas Viehmann, Thunder tackles the challenge of maximizing GPU potential, helping organizations expedite their workflows and save time and resources in the process.
