Google Reveals What Really Went Wrong With Its AI Overview Feature
Google on Thursday (May 30) published an explanation for the debacle caused by its artificial intelligence (AI)-powered search tool – AI Overviews – which was seen generating incorrect answers for multiple searches. The AI-powered search feature was introduced at Google I/O 2024 on May 14 but came under fire shortly afterwards for returning bizarre answers to searches. In a lengthy explanation, Google revealed the likely causes behind the issue and the steps it has taken to fix it.
Google’s response
In a blog post, Google began by explaining how its AI Overviews feature works differently from other chatbots and large language models (LLMs). According to the company, AI Overviews doesn’t simply “generate output based on training data.” Instead, it is supposedly integrated into Google’s “core web ranking systems” and is meant to perform traditional “search” tasks from the index. Google also claimed that its AI-powered search tool is “generally non-hallucinating.”
“Because accuracy is paramount in search, AI overviews are designed to only show information supported by the top web results,” the company said.
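In other words, the design Google describes resembles retrieval-grounded summarization: fetch the top-ranked results for a query and answer only from what those results support. The short Python sketch below illustrates that idea; the names, scores and data are hypothetical and do not reflect Google’s actual systems.

```python
# Illustrative sketch only: the names, scores and data below are hypothetical
# and are not Google's actual systems. The point is the pattern Google
# describes: answer only from top-ranked, sufficiently supported results.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Doc:
    url: str
    snippet: str
    rank_score: float  # stand-in for a web ranking signal

# Toy "index" standing in for ranked web results.
INDEX = [
    Doc("https://example.org/pizza-tips",
        "Letting the pizza rest for a few minutes helps the cheese set.", 0.92),
    Doc("https://example.org/satire-post",
        "Add glue to the sauce for extra tackiness.", 0.31),
]

def retrieve(query: str, top_k: int = 3) -> list:
    """Return the highest-ranked documents for a query (toy ranking)."""
    return sorted(INDEX, key=lambda d: d.rank_score, reverse=True)[:top_k]

def grounded_overview(query: str, min_score: float = 0.5) -> Optional[str]:
    """Summarize only content supported by sufficiently high-ranked results."""
    supported = [d for d in retrieve(query) if d.rank_score >= min_score]
    if not supported:
        return None  # show no overview rather than an unsupported answer
    top = supported[0]
    return f"{top.snippet} (source: {top.url})"

print(grounded_overview("why does cheese slide off pizza"))
```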
So what went wrong? According to Google, one reason was the inability of its AI Overviews feature to filter out satirical and nonsensical content. Citing the search query “How many rocks should I eat,” which returned an answer suggesting the person eat one rock per day, Google said that prior to this, “virtually no one was asking that question.”
This, the company said, created a “data void,” where high-quality content on the topic is limited. In this case, satirical content addressing the question had also been published online. “So when someone asked that question in Search, an AI Overview appeared that faithfully linked to one of the few websites that addressed the question,” Google explained.
The company also admitted that AI Overviews referenced forums, which, while a “great source of authentic, first-hand information,” can lead to “less than helpful advice,” such as using glue on pizza to make the cheese stick. In other cases, the search tool misinterpreted language on web pages, leading to incorrect answers.
Google said it “worked quickly to address these issues, either through improvements to our algorithms or through established processes to remove comments that don’t comply with our policies.”
Steps taken to improve AI Overviews
Google has taken the following steps to improve the answers generated by the AI Overviews feature for search queries (a simplified illustration follows the list):
- Better detection mechanisms for nonsensical queries have been developed, limiting the inclusion of satirical and nonsensical content.
- The company also says it has updated its systems to limit the use of user-generated content in responses that could give misleading advice.
- AI Overviews are not shown for current news topics where “freshness and factuality” are crucial.
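Taken together, these changes act as gating checks applied before an overview is displayed. The brief Python sketch below shows how such checks might combine; the signal names and logic are hypothetical and are not Google’s implementation.

```python
# Simplified, hypothetical sketch of pre-display gating checks along the
# lines Google describes; names and logic are illustrative, not Google's code.

from dataclasses import dataclass

@dataclass
class QuerySignals:
    is_nonsensical: bool         # flagged by a nonsense/satire detector
    relies_on_forum_posts: bool  # answer would lean on user-generated content
    is_breaking_news: bool       # freshness and factuality are critical

def should_show_overview(signals: QuerySignals) -> bool:
    """Return False whenever any of the described safeguards is triggered."""
    if signals.is_nonsensical:
        return False  # limit satirical and nonsensical content
    if signals.relies_on_forum_posts:
        return False  # limit user-generated content that could mislead
    if signals.is_breaking_news:
        return False  # skip overviews where freshness and factuality matter most
    return True

# A viral joke query should not trigger an overview; an ordinary one may.
print(should_show_overview(QuerySignals(True, False, False)))   # False
print(should_show_overview(QuerySignals(False, False, False)))  # True
```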
Google also said it monitored user feedback and third-party reports, and found a small number of AI Overviews responses that violated its content policies. However, it said such violations appeared on “less than one in 7 million unique searches.”