Shobhit Varshney, VP and senior partner at IBM Consulting, explains that enterprise technology leaders understand the strengths and limitations of different AI models for specific use cases. Other C-suite leaders, however, are still catching up and often picture a single large AI model handling every task. Varshney advises enterprises to optimize AI cost efficiency by reserving large models for complex tasks and deploying smaller niche models for specialized applications. Before embarking on a gen AI use case, the business impact and incremental cost of a large language model (LLM) should be quantified.

Varshney also emphasizes the value of open models for enterprise AI deployment: they allow a wider community to review and fortify AI systems, and enterprises can adapt them to their own domain, data, and use cases.

Companies should nevertheless prioritize an AI strategy before focusing on the models themselves. IBM Consulting helps clients identify the areas where AI can have the biggest impact, such as customer service, IT operations, HR, and supply chain. Once a use case is prioritized, the right technology, whether automation, traditional AI, or generative AI, can be chosen based on factors such as task complexity, cost, accuracy requirements, latency, auditability, and context. This approach ensures the right mix of models for an effective AI strategy.
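The cost side of that quantification can be done as a back-of-envelope estimate. The sketch below compares monthly inference spend for a large model versus a niche model on the same workload; all prices, request volumes, and token counts are illustrative assumptions, not vendor or IBM figures.

```python
# Hypothetical back-of-envelope comparison of incremental LLM cost.
# Prices and token counts below are assumptions for illustration only.

def monthly_llm_cost(requests_per_month: int,
                     tokens_per_request: int,
                     price_per_1k_tokens: float) -> float:
    """Estimate monthly inference spend for one use case."""
    return requests_per_month * (tokens_per_request / 1000) * price_per_1k_tokens

# Assumed workload: 200k customer-service requests/month, ~1.5k tokens each.
large_model = monthly_llm_cost(200_000, 1_500, 0.03)   # frontier-scale model
niche_model = monthly_llm_cost(200_000, 1_500, 0.002)  # small domain-tuned model

print(f"large model: ${large_model:,.0f}/month")
print(f"niche model: ${niche_model:,.0f}/month")
print(f"difference:  ${large_model - niche_model:,.0f}/month")
```

The point of the exercise is the comparison, not the absolute numbers: if the incremental spend on the large model exceeds the extra business impact it delivers, the niche model wins.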
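The technology-selection step can be pictured as a simple routing rule over the factors listed above. The thresholds, factor names, and tier labels in this sketch are assumptions made for illustration; they are not IBM's actual rubric.

```python
# Illustrative routing sketch: score a task against a few of the factors
# (complexity, auditability, latency) and pick a technology tier.
# Thresholds and labels are assumptions, not an actual IBM framework.

from dataclasses import dataclass

@dataclass
class Task:
    complexity: int         # 1 (simple lookup) .. 5 (open-ended reasoning)
    needs_audit: bool       # regulated workflows favor explainable approaches
    latency_ms_budget: int  # tight budgets push toward smaller models

def choose_technology(task: Task) -> str:
    if task.complexity <= 1:
        return "automation"            # deterministic rules suffice
    if task.needs_audit or task.complexity <= 3:
        return "traditional AI"        # explainable, cheaper, auditable
    if task.latency_ms_budget < 200:
        return "niche model"           # small specialized gen AI model
    return "large language model"      # complex, open-ended tasks

# Hardest tasks with a generous latency budget route to the large model.
print(choose_technology(Task(complexity=5, needs_audit=False,
                             latency_ms_budget=2000)))
```

A real rubric would weigh more factors (cost, accuracy requirements, context), but the structure is the same: each use case is matched to the cheapest tier that meets its constraints.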