Google’s reputation is built on its algorithms, which are increasingly being used to answer questions directly rather than simply return a list of links from the search engine’s index. What used to be just that list — the first page of results — has changed over the past couple of years: Sometimes, Google extracts the one result it thinks will best answer whatever you are asking and places that answer in a featured box right at the top of your results. That box is called a “featured snippet,” and it doesn’t always get the answer right.
Here are a few of the questions that the Outline’s weekend report on Google snippets identified, along with the answers Google provided at the time:
Is Obama Planning a Coup? “According to details exposed in Western Center for Journalism’s exclusive video, not only could Obama be in bed with communist Chinese, but Obama may in fact be planning a communist coup d’etat at the end of his term in 2016!”
Who is the King of the United States? “Barack Obama.” Which, interestingly, was sourced to an article criticizing Google for initially pulling this answer from Breitbart.
Presidents in the Klan. Google listed four presidents — citing a dubious article whose headline indicated there were five — for this snippet. To be clear, there is no credible evidence to suggest that any of the listed presidents were Klan members.
The first of these three examples has since been corrected. On Tuesday, however, we were able to replicate the result for “Who is king of the United States?”, and “Presidents in the Klan” pulled a similar answer, though from a different source.
These snippets are Google’s attempt to directly answer whatever question you might be asking, without making you dig through the results yourself. That works well for, say, the date of a holiday or other basic facts. And because Google’s algorithms have a reputation for being reliable, it feels especially jarring when a would-be researcher discovers that Google is capable of being very wrong.
Google has particular trouble when bad information on a topic is more widely discussed and shared than good information. In the last weeks of the 2016 election, the idea that Hillary Clinton was secretly a drunk became a popular right-wing meme. If you ask Google, “is Hillary Clinton an alcoholic,” the answer is a link to a Gateway Pundit article that quotes a tweet reading, “Sick Hillary Clinton is an Alcoholic. Source of health problems and falls?”
The claim that Clinton has a drinking problem is completely unproven. The evidence cited in support of the meme merely indicates that Clinton likes to have a drink, which is a very different thing.

Google’s search results, however, are full of articles about Clinton’s relationship to alcohol that either nod to or explicitly spread the unproven claims, and there are few articles that discuss the question critically. So when Google looks for answers to this question, it gets a lot of bad ones.
When a BBC journalist asked Google’s voice assistant, “Is Obama planning a coup?”, it read out the inaccurate snippet quoted above, verbatim.
Like Facebook’s long-standing hoax problem, Google’s role in spreading bad info became a bigger story after the 2016 election, based on speculation that “fake news” played a role in influencing how Americans voted.
The company told news outlet Quartz that it removes incorrect snippet results manually, when they “feature a site with inappropriate or misleading content.” But the process of selecting these snippets in the first place is “automatic and algorithmic.”
Algorithms are written by humans, and they depend on human input to get better. In other words, they can be biased, too.