Many SEOs agree that showing expertise, authority, and trustworthiness in your site content is important to ranking well. But why is that, exactly? Is it because Google E-A-T is an actual ranking factor, or is it something else? In this episode of Whiteboard Friday, Cyrus Shepard explores whether it can be considered a true ranking factor, making your E-A-T goals SMART, and how to communicate it all to curious stakeholders.
Click on the whiteboard image above to open a high-resolution version in a new tab!
Howdy, Moz fans. Welcome to another edition of Whiteboard Friday, coming to you from my home where I am wearing a tuxedo, wearing a tuxedo in hope that it exudes a little bit of expertise, perhaps authority, maybe even trust.
Yes, today we are talking about Google E-A-T, expertise, authority, trust, specifically asking the question, “Is Google E-A-T actually a ranking factor?”
Now surprisingly this is a controversial subject in the world of SEO. There are very smart SEOs on both sides of the debate. Some SEOs dismiss E-A-T. Others embrace it fully. Even Googlers have different opinions about how it should be communicated. I want to talk about this today not because it’s a debate that only SEOs care about, but because it’s important how we talk to stakeholders about E-A-T and SEO recommendations. Stakeholders being clients, website owners, webmasters.
Anybody that we give an SEO recommendation to — how we talk about these things is important. So I don’t want to judge, and I don’t want to be the final say about whether Google E-A-T is an actual ranking factor — that’s not what I’m attempting. But I do want to explore the different viewpoints. I talked to dozens of SEOs, listened to Googlers, and read Google patents, and I found that a lot of the disagreement comes not from what Google E-A-T is — we have a pretty good understanding of what Google E-A-T actually does — but from how we define ranking factors.
Three ways to define “ranking factors”
I found that how we define ranking factors falls into roughly three different schools of thought.
1. Level 1: Directly measurable, direct impact on rankings
Now the first school of thought, this is the traditional view of ranking factors. People in this camp say that ranking factors are things that are directly measurable and they directly impact rankings, or they can directly impact rankings.
These are signals that we’re very familiar with, such as PageRank, URLs, canonicalization, things that we can see and measure and influence and directly impact Google’s algorithm. Now, in this case, we can say Google E-A-T probably isn’t a ranking factor under this definition. There is no E-A-T score. There’s no single E-A-T algorithm. As Gary Illyes of Google says, it’s millions of little algorithms. So in this school or camp, where things are directly measurable and impactful, Google E-A-T is not a ranking factor.
2. Level 2: Modeled or rewarded, indirect effects
Then there’s a second school of thought, almost as popular as the first, that says Google’s algorithm is sufficiently complex that we don’t really know all the direct measurements, and that these days it’s more useful to think of ranking factors in terms of what is modeled or rewarded: things with effects that are possibly indirect.
Now this really came about during the days of the Panda algorithm in 2012, when Google started using much more machine learning and eventual neural networks in its algorithm. To give you a brief overview and to grossly oversimplify, Panda was an algorithm designed to reduce low-quality and spammy results in Google search results.
To do this, instead of using directly measurable signals, they used machine learning. Again, to grossly oversimplify (Britney Muller has a great post on machine learning; I’m going to link to it if you’re interested): what they did is take sites they wanted to see more of in Google search results, sites like The New York Times, and evaluate them against certain qualifications. Did they think the site was well-designed? Would you trust it with your credit card? Does it seem to be updated regularly and written by real authors? And they put these sites in a bucket.
Instead of giving the algorithm direct signals, they told the machine learning program, “Find us more sites like this. We want to reward these sites.” So in this bucket, ranking factors are things that are modeled or rewarded. People in this school of thought say, “Let’s just go after the same thing Googlers are going after because we know those things tend to work.”
Algorithms that fall in this bucket are like Panda, site quality, and E-A-T. In this school of thought, yes, E-A-T can be considered a ranking factor.
3. Level 3: Any quality or action, direct or indirect effects
Then there’s even a third school of thought that goes further than these two, and this school of thought says any quality or action that could increase rankings should be considered a ranking factor, even if Google doesn’t use it in its algorithm, direct or indirect.
An example of this might be social media shares. We know that Google does not use social media shares directly in its algorithm. But getting your content out in front of a large number of people can lead to links and shares and eventually more traffic and rankings as those signals roll downhill.
Now it may seem kind of crazy to think that anyone would consider something a ranking factor if Google actually didn’t consider it a ranking factor directly in its algorithms. But if you think about it, this is often the way real-world business scenarios work. If you’re the executive of a company, you don’t necessarily care if Google uses it directly or not. You just like seeing the end result.
Other examples, aside from social media, might be bounce rate or long clicks. TV commercials are an excellent example. If you’re the CEO of a Fortune 500 company with a Super Bowl commercial and you know that it’s going to lead to increased rankings and traffic, you don’t necessarily care whether the impact is direct or indirect.
So those are the schools of thought, and I’m not here to judge any of them. But what I think is important is how we communicate recommendations to stakeholders.
Use SMART goals to communicate SEO recommendations to stakeholders
When we give SEO recommendations in our audits or whatnot, the standard I like to use is I like to think of it in terms of goals.
A framework for goals that I like to use is the SMART system of goal setting, meaning goals should be specific, measurable, actionable, relevant, and time-based. Now in the traditional view of ranking signals, specific and measurable are easy to deliver because we’re dealing with direct impacts.
But with E-A-T, the signals get a little squishier, and it’s hard to translate them into specific, measurable signals. I think that’s why people in the first camp don’t like considering E-A-T a ranking factor. To illustrate this, Bill Slawski, the Google patent expert, recently shared a patent that he thought might possibly be related to E-A-T.
We don’t know if Google uses it or not. But the patent used website representation vectors to classify sites. That’s a mouthful, but basically the patent’s goal was to determine the actual expertise of websites based on those vectors. For example, through machine learning and neural networks, it could determine whether a website was written by actual experts, say medical doctors, or by medical students, laypeople, or somebody else.
It can do that for any type of site, whether medical, law, or finance, and classify its expertise. In this sense, if Google wants sites within the medical sphere to be like the Mayo Clinic and is rewarding sites that are like the Mayo Clinic, that is really hard to fix and almost impossible to fake with these kinds of sophisticated algorithms. So it’s very hard to give SEO recommendations based on something like this.
What you really have to do, if you want to dive in, is start finding where the differences are between your site and the sites that are actually ranking. Marie Haynes, another SEO who thinks a lot about E-A-T, says in an interview with Aleyda Solis (an excellent video that I’m also going to link to; thank you, Aleyda, for doing it) that it’s about finding the gaps. That brings me to Lily Ray, one of the few SEOs who has done really good research into E-A-T by comparing sites and seeing what the differences are between sites that have been rewarded and sites that have fallen in rankings. Some of her research has found really interesting things.
For example, for medical queries, sites that lost rankings had 433% more CTAs (calls to action), typically because they were selling something or trying to sign you up (a bit of mixed intent), while the expert sites had fewer CTAs. The winning sites were 258% more likely to be written by real experts, as opposed to laypeople or people without advanced degrees.
The losing sites also had 117% more affiliate links, which could be something like this patented algorithm at work. So we can start to identify what’s actually being rewarded. Again, this is hard to fix or fake, but we can start to fill in the gaps. The question, though, is how do we make these findings specific, measurable, and actionable?
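For a rough sense of how comparison numbers like these are produced, here’s a minimal sketch: average a feature (say, CTAs per page) across a losing group and a winning group of sites, then express the gap as a percentage. The per-site counts below are hypothetical illustration data, not Lily Ray’s actual dataset.

```python
# Minimal sketch of a winners-vs-losers feature comparison.
# The CTA counts per page below are hypothetical illustration data.

def percent_more(group_a, group_b):
    """How much larger group_a's average is than group_b's, as a percentage."""
    avg_a = sum(group_a) / len(group_a)
    avg_b = sum(group_b) / len(group_b)
    return round((avg_a - avg_b) / avg_b * 100)

losing_sites_ctas = [16, 12, 20]   # CTAs per page on sites that lost rankings
winning_sites_ctas = [2, 3, 4]     # CTAs per page on sites that gained

print(f"Losing sites had {percent_more(losing_sites_ctas, winning_sites_ctas)}% more CTAs")
# → Losing sites had 433% more CTAs
```

The same comparison works for any feature you can count: affiliate links, author bylines with credentials, outbound citations, and so on.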
Measurable is especially hard when we’re talking about things like expertise and authority. Fortunately, a lot of these problems were already solved when Panda was released back in 2012. If you want to make these more nebulous, squishy things measurable and actionable, you have to start measuring them the same way Google does: with panels of people, like the thousands of quality raters Google employs across the globe to look at sites and rate them.
Those ratings aren’t used directly in Google’s algorithm; they’re used to test the algorithm. But you can start to score sites on a deliberate scale using things like the Quality Rater Guidelines or the E-A-T questions Google has released, a list of questions such as: Is this site written by an expert?
Would you cite this site if you were writing an academic paper? Questions like that. You get a group of people, maybe 5 or 10 or more, ask those questions about your client’s site, and compare the answers to the expert sites that are winning, and you can start to see where the gaps are. Maybe your site only scored a 5 out of 10 on appearing to be written by experts.
By assigning values, using panels of questions, and scoring, you can make your recommendations specific, measurable, and actionable. That’s how you do it. It doesn’t pay to give nebulous recommendations such as “improve your E-A-T.” I know of one SEO consultant who says E-A-T is meaningless, and he is definitely in the first camp, the one that says signals should be directly measurable.
E-A-T is meaningless, in his view, because it could mean anything you want. If you tell your clients to improve E-A-T, you could mean anything: improve your links, write better content, hire some experts. Instead, you’ve got to make it measurable, and you’ve got to make it actionable. No matter what camp you’re in, I think that’s the way to go. All right.
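To make the panel-scoring approach concrete, here’s a minimal sketch, assuming a small rater panel scoring each Quality Rater-style question from 1 to 10. The questions, scores, and site labels are purely illustrative assumptions, not Google’s actual rater rubric.

```python
# A sketch of turning panel ratings into measurable gaps. The questions
# and 1-10 scores below are hypothetical, not Google's actual rubric.

from statistics import mean

def question_gaps(client_scores, benchmark_scores):
    """Average rating gap per question between a benchmark site and yours."""
    return {
        q: round(mean(benchmark_scores[q]) - mean(client_scores[q]), 1)
        for q in client_scores
    }

# Panel ratings (three raters) for the client's site...
client = {
    "Written by an expert?":           [4, 5, 5],
    "Would you cite it in a paper?":   [3, 4, 4],
    "Trust it with your credit card?": [6, 7, 6],
}
# ...and for a winning "expert" site used as the benchmark.
benchmark = {
    "Written by an expert?":           [9, 8, 9],
    "Would you cite it in a paper?":   [8, 9, 8],
    "Trust it with your credit card?": [8, 8, 9],
}

for question, gap in question_gaps(client, benchmark).items():
    print(f"{question} gap: {gap}")
```

The questions with the largest gaps become your specific, measurable recommendations, rather than a vague “improve your E-A-T.”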
I hope you enjoyed this Whiteboard Friday. Hopefully, it sparks some conversation. If you enjoyed it, please share. Thanks, everybody. Bye-bye.
Want more Whiteboard Friday-esque goodness? MozCon Virtual is where it’s at!
If you can’t get enough of Cyrus on Whiteboard Friday, don’t miss his top-notch emcee skills introducing our fantastic speakers at this year’s MozCon Virtual! Chock full of the SEO industry’s top thought leadership, for the first time ever MozCon will be completely remote-friendly. It’s like 20+ of your favorite Whiteboard Fridays on vitamins and doubled in size, plus interactive Q&A, virtual networking, and full access to the video bundle:
We can’t wait to see you there!