Microsoft is introducing a new feature called “Correction” as part of its Azure AI Studio. This tool is designed to detect and fix inaccuracies in AI-generated content by cross-referencing outputs with customer-provided source material. Available in preview, the correction system aims to improve the reliability of AI by identifying, explaining, and rewriting incorrect information before it reaches the user.
How Correction Works
The “Correction” feature operates by scanning AI outputs for potential errors, using both small and large language models to compare the generated content with grounding documents. If an inaccuracy is detected, the system flags the mistake, provides an explanation, and rewrites the content accordingly. The entire process runs automatically, with the goal of catching incorrect information before it reaches the user.
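To make the flag-explain-rewrite flow more concrete, here is a minimal sketch of what a call to such a grounding-and-correction service might look like. The endpoint path, payload fields, and response shape below are illustrative assumptions based on the description above, not Microsoft’s published API; developers should consult the Azure AI Studio documentation for the actual contract.

```python
import requests

# Hypothetical endpoint and payload for a groundedness check with correction.
# The URL path, field names, and response keys are illustrative assumptions,
# not Microsoft's documented API.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
API_KEY = "<your-api-key>"

payload = {
    # Text produced by the generative model that should be verified.
    "text": "The patient should take 500 mg of the drug twice daily.",
    # Customer-provided grounding documents the output is checked against.
    "groundingSources": [
        "The recommended dose is 250 mg once daily, taken with food."
    ],
    # Ask the service to rewrite ungrounded claims rather than only flag them.
    "correction": True,
}

response = requests.post(
    f"{ENDPOINT}/contentsafety/text:detectGroundedness",  # assumed path
    headers={"Ocp-Apim-Subscription-Key": API_KEY},
    json=payload,
    timeout=30,
)
result = response.json()

# Assumed response shape: whether ungrounded text was found, which spans
# were flagged (with explanations), and the rewritten output.
if result.get("ungroundedDetected"):
    print("Flagged spans:", result.get("ungroundedDetails"))
    print("Corrected text:", result.get("correctionText"))
else:
    print("Output is consistent with the grounding sources.")
```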
This tool is part of Azure AI Studio’s suite of safety measures, which also includes mechanisms to detect vulnerabilities, identify AI “hallucinations,” and block harmful prompts. The ultimate goal is to improve the accuracy of generative AI models in critical areas, such as medicine and business, where precision is vital.
Comparison with Google’s Vertex AI
Microsoft is not the first to introduce such a feature. Google’s Vertex AI, a cloud platform for AI development, also offers a grounding tool that checks AI outputs against sources like Google Search and customer datasets. However, while Google’s system identifies errors, it does not automatically correct them. Microsoft’s Correction tool takes this extra step by revising the inaccurate content, enhancing the trustworthiness of its outputs.
Despite its promise, Microsoft’s Correction feature is not flawless. A spokesperson for Microsoft noted that the system aims to align AI-generated text with grounding documents, but does not guarantee 100% accuracy. Industry experts have also raised concerns about the potential limitations of this tool. Some argue that while it may catch certain mistakes, it could give users a false sense of security by making them believe that AI outputs are more reliable than they truly are.