Julie Ahn, for Pinterest’s engineering blog:
Our SEO goal is to help billions of internet users discover Pinterest and find value in it as a visual bookmarking tool. Over time we’ve found the only way to verify if a change affects the user behavior positively is to run an A/B test. Unfortunately, we didn’t have similar tools to test search engine behavior, so we built an experiment framework and used it to turn “magic” into deterministic science.
The A/B testing methodology seems reasonably sound.
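For readers unfamiliar with how an SEO A/B test differs from a normal one: since a search engine crawler carries no cookie, you can't split *users* into groups, so the split has to be over *pages*. The Pinterest post doesn't publish their implementation, but a minimal sketch of the idea — deterministic, hash-based bucketing of page URLs, with a hypothetical experiment salt — might look like this:

```python
import hashlib

def bucket(url: str, salt: str = "seo-exp-1") -> str:
    """Deterministically assign a page to a group by hashing its URL.

    SEO experiments must split pages rather than users, because the
    "visitor" being tested is a search engine crawler. Hashing keeps
    the assignment stable across crawls; the salt is a hypothetical
    per-experiment identifier, not anything from Pinterest's framework.
    """
    digest = hashlib.sha256((salt + url).encode()).hexdigest()
    return "treatment" if int(digest, 16) % 2 == 0 else "control"

# Roughly half the pages land in each group, and a given URL always
# lands in the same one.
urls = [f"https://example.com/pin/{i}" for i in range(10_000)]
groups = [bucket(u) for u in urls]
print(groups.count("treatment"), groups.count("control"))
```

You then apply the change only to treatment pages and compare organic traffic between the two groups over the same window.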
By their own admission, however:
The best strategy for successful SEO can differ by product, by page and even by season.
Other factors include content age, number of links, and link authority.
I’m curious about one point, though. When running a test, your working assumption is that it’s being run with all other things equal. But are they really?
Are you testing the results of the experiments you’re running, the results of the experiments being run by your competitors, the results of your site being updated or structured in some more or less usual way, the results of Google tweaking its ranking algorithm, or — as is most likely — a combination of all of these? If the latter, how do you separate the wheat from the chaff?
In the end, though, I’m suspicious of how much you can learn beyond the usual suspects by running SEO A/B tests.