Demystifying SEO With Experiments

Julie Ahn, for Pinterest’s engineering blog:

Our SEO goal is to help billions of internet users discover Pinterest and find value in it as a visual bookmarking tool. Over time we’ve found the only way to verify if a change affects the user behavior positively is to run an A/B test. Unfortunately, we didn’t have similar tools to test search engine behavior, so we built an experiment framework and used it to turn “magic” into deterministic science.

The A/B testing methodology seems reasonably sound.
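For context, page-level SEO experiments generally work by bucketing pages, not users, into control and treatment groups, usually by hashing their URLs so that a given page always serves the same variant and crawler behavior can be compared between the groups over time. Here's a minimal sketch of that general idea; it is not Pinterest's actual framework, and the names and parameters are made up for illustration.

```typescript
import { createHash } from "crypto";

// Deterministically bucket a page (not a user) into an experiment group by
// hashing its URL. The same page always gets the same variant, so crawler
// behavior and search traffic can be compared between groups over time.
// assignVariant and treatmentShare are illustrative names, not Pinterest's API.
function assignVariant(
  url: string,
  experiment: string,
  treatmentShare: number = 0.5
): "control" | "treatment" {
  const digest = createHash("sha256").update(`${experiment}:${url}`).digest();
  // Map the first four bytes of the hash to a fraction in [0, 1].
  const bucket = digest.readUInt32BE(0) / 0xffffffff;
  return bucket < treatmentShare ? "treatment" : "control";
}

// Half of the pages get the candidate change; the other half stay as-is.
console.log(assignVariant("https://example.com/board/ceramic-mugs", "title-rewrite"));
```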

By their own admission, however:

The best strategy for successful SEO can differ by product, by page and even by season.

Other factors include content age, the number of links, and the authority of those links.

I’m curious about one point, though. When you run a test, your working assumption is that all other things are equal. But are they really?

Are you measuring the results of the experiments you’re running, the results of the experiments your competitors are running, the results of your site being updated or structured in some more or less unusual way, the results of Google tweaking its ranking algorithm, or, as is most likely, a combination of all of these? And if it’s a combination, how do you separate the wheat from the chaff?

Maybe it’s just me, but I’ve always been skeptical of SEO experiments and of data-driven proof that this or that SEO trick works. You can certainly validate the well-known tactics: use a descriptive title, structure your pages so a crawler can easily tell which parts of the document are actual content, structure your site so your most important content gets plenty of links, earn in-content links from the most authoritative sources you can, make serving and rendering pages as fast as is economically viable, and prefer serving pre-rendered content over rendering it on the fly with JavaScript. There are others.
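To make that last tactic concrete, here is a small sketch of the difference between serving pre-rendered content and rendering it client-side. The URLs, markup, and endpoint are hypothetical; the point is only that a crawler fetching the first response sees the actual content immediately, while the second requires executing JavaScript and waiting on another request before any content exists.

```typescript
import { createServer } from "http";

// Two responses for the same page: one pre-rendered on the server, one an
// empty shell that fills itself in with client-side JavaScript.
const preRendered = `<!doctype html>
<html>
  <head><title>Hand-thrown ceramic mugs</title></head>
  <body>
    <main>
      <h1>Hand-thrown ceramic mugs</h1>
      <p>Twelve stoneware mugs, each glazed by hand.</p>
    </main>
  </body>
</html>`;

const clientRendered = `<!doctype html>
<html>
  <head><title>Loading…</title></head>
  <body>
    <div id="root"></div>
    <script>
      // The crawler sees nothing useful unless it runs this and waits for the fetch.
      fetch("/api/pin/12345")
        .then((r) => r.json())
        .then((pin) => {
          document.title = pin.title;
          document.getElementById("root").textContent = pin.description;
        });
    </script>
  </body>
</html>`;

createServer((req, res) => {
  res.setHeader("content-type", "text/html; charset=utf-8");
  res.end(req.url === "/client" ? clientRendered : preRendered);
}).listen(3000);
```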

In the end, though, I’m suspicious of how much you can learn beyond the usual suspects by running SEO A/B tests.