Used as reference material by Google’s thousands of human quality raters, the Quality Rater Guidelines (QRG) is a (currently) 167-page document whose revisions have typically preceded algorithmic updates, making each new version an important event in the industry.
Historically, changes to Google’s QRG have correlated with eventual shifts in how Google algorithmically sorts search results. While we can’t assume the relationship is causal, it’s reasonable to assume that the way Google briefs its human raters indicates how it wants its ideal page one to look. That makes these changes among the most studied events in search (Search Engine Land has turned to the well-regarded Lily Ray to discuss the changes, if you want some excellent further reading).
What are the changes?
The main change has been the complete removal of the named categories to which the acronym YMYL (your money or your life) should be applied. Rather than focusing on specific categories – such as finance, health and safety, or content about groups of people – the guidelines have shifted to harm reduction as a broader target. Google phrases it like this:
Pages on the World Wide Web are about a vast variety of topics. Some topics have a high risk of harm because content about these topics could significantly impact the health, financial stability, or safety of people, or the welfare or well-being of society. We call these topics “Your Money or Your Life” or YMYL.
While the new definition retains ‘health, financial stability, or safety of people, or the welfare or well-being of society’ in its list of impacts, it also provides more detailed guidance on how to judge the potential for harm and how that harm could occur.
Experience, Expertise, Authoritativeness, Trust
Google has also significantly expanded its detail on EEAT, which it says should be assessed at three levels – the author, the content and the website. Examples of how this applies are given in the guidelines.
There are obvious relationships here between the two changes and the news and current affairs of the last few years (during which Google has taken some deserved criticism), but the changes also represent a potentially huge expansion of algorithmic gatekeeping over which content will rank well in search engines.
While this will likely invite legal challenges and complaints from some sections of the media and politics, it also signals what could be a substantial leap forward in Google’s ability to assess these factors (or its belief that such a leap is imminent).
What this means for Search
The most obvious implication for search is that it will remain difficult for brands to compete in certain sectors without paying attention to their EEAT signals. A more interesting implication of the continued expansion of YMYL as a definition, and of EEAT as a group of signals, is that it ties in with another increasingly important aspect of search – entity detection.
In each case, the QRG asks its human users to assess the author, the content and the website for their capacity to cover topics without causing harm. We know that the various algorithms making up Google’s core search algorithm can judge websites by their link distance from trusted seed sites (among other things), and can judge content against an assessment of consensus – but how will those authors be judged?
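The link-distance idea mentioned above can be illustrated with a toy sketch: treat the web as a directed graph, pick a handful of trusted seed sites, and score every other site by its shortest link path from any seed. The sites and scoring here are entirely hypothetical – Google’s actual seed sets and weighting are not public – but the breadth-first search is the core of the concept.

```python
from collections import deque

def link_distance_from_seeds(link_graph, seeds):
    """Breadth-first search: fewest link hops from any trusted seed site."""
    distances = {seed: 0 for seed in seeds}
    queue = deque(seeds)
    while queue:
        site = queue.popleft()
        for linked_site in link_graph.get(site, []):
            if linked_site not in distances:
                distances[linked_site] = distances[site] + 1
                queue.append(linked_site)
    return distances

# Hypothetical link graph: each site lists the sites it links out to.
toy_web = {
    "trusted-medical-journal.example": ["hospital.example", "health-blog.example"],
    "hospital.example": ["health-blog.example"],
    "health-blog.example": ["affiliate-site.example"],
    "affiliate-site.example": [],
}

scores = link_distance_from_seeds(toy_web, ["trusted-medical-journal.example"])
# Under this model, sites fewer hops from a trusted seed would be treated as
# more trustworthy; sites unreachable from any seed receive no score at all.
```

In this toy graph the affiliate site sits two hops from the seed, while the hospital and health blog sit one hop away – the kind of relative judgement a link-distance signal could capture.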
Brands operating within YMYL sectors, or dealing with any of the topics more broadly, will need to start building expert entities for their authors – or at least factor in the need for expert input into, and review of, their online content if they want it to outperform the competition.
While I can’t claim to be entirely impartial here – this formed part of my 2019 Benchmark talk, which included some discussion of the algorithm’s possible future – I think it’s becoming increasingly reasonable to infer that entity creation around contributors to onsite content will grow in importance over the next couple of years.
While brands can’t be expected to employ a rocket scientist for every article they feature about rocket science (it’s not brain surgery), they will need to start building the perceived expertise of their writers, creating machine-readable entities that accurately convey expertise in a subject. For more complex subjects, it may then be necessary to add a secondary author or contributor to articles that require more extensive, formal expertise.
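One common way to make an author machine-readable is schema.org structured data embedded as JSON-LD. The sketch below builds a minimal `Article` with an attached author `Person` entity – the names, credentials and profile URLs are hypothetical, and this is one widely used approach rather than anything the guidelines mandate.

```python
import json

def article_jsonld(headline, author_name, credentials, same_as):
    """Build a minimal schema.org Article with an author Person entity."""
    return {
        "@context": "https://schema.org",
        "@type": "Article",
        "headline": headline,
        "author": {
            "@type": "Person",
            "name": author_name,
            # Credentials and external profiles help tie the byline
            # to a known, verifiable entity.
            "honorificSuffix": credentials,
            "sameAs": same_as,
        },
    }

# Hypothetical author and article.
markup = article_jsonld(
    headline="How Rocket Engines Are Throttled",
    author_name="Jane Doe",
    credentials="PhD",
    same_as=["https://example.org/profiles/jane-doe"],
)

# The JSON-LD would normally be embedded in the page's HTML head.
snippet = '<script type="application/ld+json">' + json.dumps(markup) + "</script>"
```

The `sameAs` links are doing the heavy lifting here: pointing to consistent, authoritative profiles is what lets a machine connect an onsite byline to an entity with demonstrable expertise.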
This poses huge problems for the use of AI content at scale – an increasingly common ‘hack’ in the industry – but also for how Google may judge linking content in future, making it increasingly important for content marketing to create assets that attract links predominantly from relevant sources (this should already be the case, but often isn’t).
Overall, these changes are indicative of what Google sees as the current or near-future abilities of its ranking algorithm – and that should prompt brands to take a hard look at how they handle their onsite content.