Microsoft claims this feature can fix AI’s mistakes
Microsoft on Tuesday launched a new artificial intelligence (AI) capability that identifies and corrects instances when an AI model generates incorrect information. The feature, called “Correction,” will be integrated into Azure AI Content Safety’s groundedness detection system. Since the feature is only available through Azure, it’s likely aimed at the tech giant’s enterprise customers. The company is also working on other methods to reduce instances of AI hallucination. Notably, the feature can also provide an explanation of why a segment of text was flagged as incorrect.
Microsoft ‘Correction’ feature launched
In a blog post, the Redmond-based tech giant detailed the new feature, which reportedly combats AI hallucination, a phenomenon in which an AI model responds to a query with false information and fails to recognize its falsity.
The feature is available through Microsoft’s Azure services. The Azure AI Content Safety system includes a tool called groundedness detection, which checks whether a generated response is grounded in its source material. While the tool detects instances of hallucination in several different ways, the Correction feature operates in one specific way.
For Correction to work, users must connect the grounding documents that Azure uses in document summarization and Retrieval-Augmented Generation (RAG) Q&A scenarios. Once connected, users can enable the feature. After that, whenever an ungrounded or incorrect sentence is generated, the feature triggers a correction request.
Simply put, the source documents can be thought of as a guideline that the AI system must follow when generating a response. It can be the source material for the query or a larger database.
The feature then evaluates the statement against the grounding document, and if it is found to be misinformation, it is filtered out. However, if the content is consistent with the grounding document, the feature may rewrite the sentence to ensure it cannot be misinterpreted.
Users also have the option to enable reasoning when they first set up the capability. With reasoning enabled, the feature adds an explanation of why it judged the information to be incorrect and in need of correction.
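The detect-and-filter loop described above can be sketched roughly as follows. This is a minimal illustration, not the real Azure AI Content Safety API: `check_groundedness`, `correct_response`, the `reasoning` flag, and the word-overlap heuristic are all hypothetical stand-ins for the model-based check Microsoft describes, and only the filtering step is shown (the rewrite step is omitted).

```python
# Illustrative sketch only: hypothetical helper names, not the
# Azure AI Content Safety SDK.

from dataclasses import dataclass

@dataclass
class GroundednessResult:
    sentence: str
    grounded: bool   # is the sentence supported by the grounding documents?
    reason: str = "" # populated only when reasoning is enabled

def check_groundedness(sentence: str, grounding_docs: list[str],
                       reasoning: bool = False) -> GroundednessResult:
    # Toy heuristic standing in for a model-based check: a sentence is
    # "grounded" if every word in it appears in the grounding documents.
    words = {w.strip(".,").lower() for w in sentence.split()}
    corpus = " ".join(grounding_docs).lower()
    missing = [w for w in words if w and w not in corpus]
    grounded = not missing
    reason = ""
    if reasoning and not grounded:
        reason = f"terms not found in grounding documents: {missing}"
    return GroundednessResult(sentence, grounded, reason)

def correct_response(response: str, grounding_docs: list[str],
                     reasoning: bool = False) -> list[str]:
    """Keep grounded sentences; filter out ungrounded ones.
    With reasoning on, replace removed sentences with an explanation."""
    kept = []
    for sentence in response.split(". "):
        result = check_groundedness(sentence, grounding_docs, reasoning)
        if result.grounded:
            kept.append(result.sentence)
        elif reasoning:
            kept.append(f"[removed: {result.reason}]")
    return kept
```

For example, given a grounding document stating only that Correction belongs to Azure AI Content Safety, a generated claim about a launch date would be filtered out, and with `reasoning=True` the reader sees why it was removed instead of the claim itself.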
A spokesperson for the company told The Verge that the Correction feature uses small language models (SLMs) and large language models (LLMs) to align outputs with grounding documents. “It’s important to note that groundedness detection does not solve for ‘accuracy,’ but helps align generative AI outputs with grounded documents,” the publication quoted the spokesperson as saying.