How Many Rocks Should I Eat a Day? Google AI’s Bizarre Advice and Search Quality Fixes

Google’s foray into AI-powered search overviews, designed to provide direct answers rather than just links, has encountered some rocky patches. Concerns have arisen, particularly among news publishers who rely on Google for traffic, as the Gemini-powered feature occasionally serves up responses that are more bewildering than helpful. One such instance, highlighted across social media, involved the question: “How Many Rocks Should I Eat A Day?”

The AI’s reported answer, circulating widely online, suggested that “Geologists recommend one rock per day.” This, along with other questionable advice like using non-toxic glue to improve pizza cheese adhesion, quickly became a focal point for criticism regarding the reliability of AI-generated information in search.

News outlets are already worried about a potential decrease in referral traffic as AI overviews become more prevalent, potentially impacting their advertising revenue. The fear is that users will get their answers directly from the AI snippets, reducing the need to click through to news websites.

However, Google maintains that AI Overviews are intended to enhance the user experience by acting as a starting point for deeper exploration of web content, ultimately driving higher-quality clicks to websites. According to Liz Reid, VP and Head of Google Search, the goal is to better connect users with relevant information and helpful webpages, increasing the likelihood that they stay engaged on those pages.

Despite these aspirations, the emergence of bizarre and inaccurate AI responses, especially to unusual or nonsensical queries, has prompted Google to take action. Acknowledging these shortcomings, Reid admitted that while such instances were rare, they exposed areas needing improvement, particularly in how the AI interprets nonsensical queries and satirical content.

In response to these issues, Google has implemented more than a dozen technical enhancements. These focus on better identifying and filtering out nonsensical queries that shouldn’t trigger AI Overviews in the first place. Google is also refining its systems to limit the inclusion of satirical or humorous content in AI-generated responses, aiming for factual and reliable information, and restricting reliance on user-generated content that might offer misleading advice.

Google emphasizes its ongoing vigilance in monitoring user feedback and external reports so it can quickly address AI Overviews that violate content policies, including overviews containing harmful, obscene, or otherwise inappropriate information. Reid stated that content policy violations were found in fewer than one in every seven million unique queries where AI Overviews appeared. The figure underscores how rare such occurrences are, while also signaling Google’s commitment to maintaining search quality and user trust in the evolving landscape of AI-driven search.
