Microsoft follows Google with generative search, but with a twist
Microsoft’s Bing search engine is increasingly leaning on artificial intelligence, with a new test feature that summarizes search results for users. The move comes months after Google slammed the brakes on its AI summaries after the feature spread conspiracy theories and dangerous health advice.
The new feature, which Microsoft Bing calls Generative Search, uses generative AI technology to create new results pages with summarized answers that are “easy to read and understand,” with links and resources listed alongside the generated text.
Read more: OpenAI Just Announced a Search Engine, SearchGPT, But There’s a Catch
“We’ve refined our methods to optimize accuracy in Bing,” Microsoft said in its announcement of the new feature on Wednesday, emphasizing that it is still in the testing phase. “We are rolling this out slowly and taking our time, gathering feedback, testing and learning, and working on creating a great experience before we make this more widely available.”
There is controversy surrounding AI search
Microsoft’s new Bing AI search feature comes months after Google faced criticism for the way it tried to integrate similar technology into its search results. Summarizing answers to search queries makes sense in theory, but it doesn’t always work as expected. After Google’s AI summaries launched in May, some users were quick to notice that the feature couldn’t distinguish between facts and racist conspiracy theories about Barack Obama’s religion and birthplace. Google’s summaries also treated satirical articles as legitimate sources of health advice.
Microsoft appears to have taken these concerns into account with its new Bing AI feature, emphasizing that each of its summaries includes a source link that people can click through to confirm and learn more. Microsoft has also said that it will initially show the generative results for only a “small percentage” of users’ search queries.
While the future of Microsoft search remains uncertain, the stakes for getting it right are clear. AI-powered tools and content are flooding the internet, finding their way into everything from emails and text messages to documents and presentations. Social networks including Meta’s Facebook and Instagram, ByteDance’s TikTok and Google’s YouTube have begun building systems to detect and label posts created using AI tools. These labels don’t just force transparency on creators; they are also becoming an increasingly important anchor for helping the rest of us maintain our shared understanding of reality.
There’s still a long way to go. In the week after the assassination attempt on former President Donald Trump, NewsGuard found that AI chatbots failed to provide “accurate information” nearly 57% of the time. Ultimately, NewsGuard’s AI Disinformation Tracker found that AIs “fell far short” in the face of the wave of conspiracy theories quickly launched not only by Trump critics and supporters but also by hostile foreign state actors.