
Google on Thursday admitted that its AI Overviews tool, which uses artificial intelligence to respond to search queries, needs improvement.

While the internet search giant said it tested the new feature extensively before launching it two weeks ago, Google acknowledged that the technology produces “some odd and erroneous overviews.” Examples include suggesting using glue to get cheese to stick to pizza or drinking urine to pass kidney stones quickly. 

While many of the examples were minor, other search results were potentially dangerous. Asked by the Associated Press last week which wild mushrooms were edible, Google provided a lengthy AI-generated summary that was mostly technically correct. But "a lot of information is missing that could have the potential to be sickening or even fatal," said Mary Catherine Aime, a professor of mycology and botany at Purdue University who reviewed Google's response to the AP's query.

For example, information about mushrooms known as puffballs was “more or less correct,” she said, but Google’s overview emphasized looking for those with solid white flesh – which many potentially deadly puffball mimics also have.

In another widely shared example, an AI researcher asked Google how many Muslims have been president of the U.S., and it responded confidently with a long-debunked conspiracy theory: “The United States has had one Muslim president, Barack Hussein Obama.”

The rollback is the latest instance of a tech company prematurely rushing out an AI product to position itself as a leader in the closely watched space.

Because Google’s AI Overviews sometimes generated unhelpful responses to queries, the company is scaling it back while continuing to make improvements, Google’s head of search, Liz Reid, said in a company blog post Thursday. 

“[S]ome odd, inaccurate or unhelpful AI Overviews certainly did show up. And while these were generally for queries that people don’t commonly do, it highlighted some specific areas that we needed to improve,” Reid said.

Nonsensical questions such as, “How many rocks should I eat?” generated questionable content from AI Overviews, Reid said, because of the lack of useful, related advice on the internet. She added that the AI Overviews feature is also prone to taking sarcastic content from discussion forums at face value, and potentially misinterpreting webpage language to present inaccurate information in response to Google searches. 

“In a small number of cases, we have seen AI Overviews misinterpret language on webpages and present inaccurate information. We worked quickly to address these issues, either through improvements to our algorithms or through established processes to remove responses that don’t comply with our policies,” Reid wrote. 

For now, the company is scaling back on AI-generated overviews by adding “triggering restrictions for queries where AI Overviews were not proving to be as helpful.” Google also says it tries not to show AI Overviews for hard news topics “where freshness and factuality are important.”

The company said it has also made updates “to limit the use of user-generated content in responses that could offer misleading advice.”

—The Associated Press contributed to this report.
