Google explains why AI Overviews immediately got weird

Earlier this month, Google unveiled generative AI search results, which were met by dozens of viral (and mocking) memes.

That’s because the results were often erroneous or just plain strange. The search results told users all sorts of things: according to Google’s AI Overviews, a dog has played in the NBA, no country in Africa starts with the letter “K,” and you can add glue to pizza sauce “to give it more tackiness.” And those are just a handful of the many examples that took over social media posts and meme pages over the past week.

Now, Google has responded to the skeptics of its AI product. In a blog post published yesterday, Head of Google Search Liz Reid acknowledged the feedback and said that Google is taking further steps to address substandard results.

Reid explained that the feature was tested “extensively” before launch, but admitted that “some odd, inaccurate or unhelpful AI Overviews certainly did show up,” typically for queries people would not ordinarily search (such as “How many rocks should I eat?”). The blog post describes these as “nonsensical new searches, seemingly aimed at producing erroneous results.” It also underscored that a number of the wacky results that went viral were “faked screenshots.”

The blog post also said that some results drew on content sourced from “discussion forums,” and Reid added that “in a small number of cases,” AI Overviews misinterpreted language on webpages and therefore presented inaccurate information.

“We worked quickly to address these issues, either through improvements to our algorithms or through established processes to remove responses that don’t comply with our policies,” she wrote.

The company says it is working on long-term improvements to the feature, namely building “better detection mechanisms for nonsensical queries,” limiting satirical or sarcastic content in responses, and restricting user-generated content that could be misleading.

“At the scale of the web, with billions of queries coming in every day, there are bound to be some oddities and errors,” writes Reid. “We’ll keep improving when and how we show AI Overviews and strengthening our protections, including for edge cases, and we’re very grateful for the ongoing feedback.”

There may be a long road ahead. In the meantime, you can turn off Google AI Overviews. Or you can simply enjoy the memes while they last, before the results are truly refined.