Google AI Overviews were announced a couple of weeks ago at Google I/O, and they’ve already proven to be rather controversial. The feature aims to provide high-quality answers to your questions, summarized from across the web, but a series of recent X (formerly Twitter) threads shows just how badly it can fail.
The response that went viral involves a very dubious pizza recipe. As reported, when asked about “cheese not sticking to pizza,” the AI Overview suggested adding nontoxic glue to the sauce to keep the cheese from sliding off. The exact words the AI Overview gave are as follows: “You can also add about 1/8 cup of non-toxic glue to the sauce to give it more tackiness.” And where did the AI Overview source that information? An 11-year-old Reddit comment from this thread, which was clearly a joke.
The query “cheese not sticking to pizza” generated this unexpected and funny response, and the internet is having a field day with it. The AI Overview has since gone viral, with someone even making a glue pizza just to prove the point.
It’s worth noting that we’ve seen a massive uptick in Reddit and other forum posts ranking higher in Google searches. Reddit also recently signed a $60 million deal letting Google train its models on Reddit content. It’s not hard to connect the dots on how this might have happened.
It’s not just Reddit, though. Another AI Overview was posted online answering the question “how many rocks should I eat each day,” with information pulled directly from The Onion.
Part of the problem is the absolute conviction with which AI Overviews delivers its answers. It doesn’t just surface a link to an Onion article and let you judge it for yourself. Instead, it treats every source as if it were Wikipedia and presents the information with complete confidence.
Google claims that its AI Overviews give users high-quality information and that such errors are uncommon. Here is the official response provided to Digital Trends by Google: “The vast majority of AI Overviews provide high quality information, with links to dig deeper on the web. Many of the examples we’ve seen have been uncommon queries, and we’ve also seen examples that were doctored or that we couldn’t reproduce. We conducted extensive testing before launching this new experience, and as with other features we’ve launched in Search, we appreciate the feedback. We’re taking swift action where appropriate under our content policies, and using these examples to develop broader improvements to our systems, some of which have already started to roll out.”
It seems as if some of these AI Overviews will be hard to correct if they can’t be reproduced. The “broader improvements” will have to be the solution, and since Google says they’re already in the works, hopefully we’ll start to see better results soon. We’ll have to see whether Google responds to the situation beyond that statement. After receiving negative feedback about Gemini’s image generation earlier this year, Google apologized and pulled the feature to fix its issues.
For now, though, these are both good reminders of how careful we need to be when trusting AI engines for information. Google AI Overviews started rolling out to everyone in the U.S. earlier this month, with more countries coming soon. But with answers like these, there may be more people reaching for a way to turn it off than Google expected.