GoogleGuy Dumps, Google Leaks

The recent Google update, dubbed “update Bourbon,” has created a bit of a stir, due to the length of the rollout and the significant shifts that have taken place in some search results. Google’s webmaster rep “GoogleGuy” posted some lengthy comments at WebmasterWorld, including the helpful note that using “id” as a variable name in URLs can discourage Googlebot from crawling a site. Apparently GG has had a lot on his mind lately, and decided to dump it all out at once.
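GoogleGuy didn’t spell out what a crawler-friendlier URL should look like, so take the following as a rough sketch only: the function name and the rewritten path format are my own invention, not anything Google has published. It just illustrates the general idea of getting an “id” query parameter out of the URL.

```typescript
// Hypothetical helper: if a URL carries an "id" query parameter, rewrite it
// into a path-style form. Illustration only -- the server would still need a
// matching rewrite rule to actually serve URLs in this shape.
function rewriteIdUrl(rawUrl: string): string {
  const url = new URL(rawUrl);
  const id = url.searchParams.get("id");
  if (id === null) {
    return rawUrl; // no "id" parameter, nothing to change
  }
  url.searchParams.delete("id");
  // e.g. /product.php?id=42&cat=7  ->  /product/42?cat=7
  const basePath = url.pathname.replace(/\.\w+$/, "");
  url.pathname = `${basePath}/${encodeURIComponent(id)}`;
  return url.toString();
}

console.log(rewriteIdUrl("http://www.example.com/product.php?id=42&cat=7"));
// -> http://www.example.com/product/42?cat=7
```

Whether the rest of the query string should go too is a separate question; GoogleGuy’s comment was specifically about the “id” name.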

File this under interesting but not really something you can act on: Google’s “secret” web interface for their quality control team has been revealed to the world by Henk van Ess’s Search Bistro. GoogleGuy confirms that it’s true.

Interesting, and maybe worth acting on: for what may be a limited time, you can actually download Google’s spam recognition guide in MS Word format: http://www.searchbistro.com/spamguide.doc. Plenty of comments can already be found on the Search Bistro blog, and there aren’t really any surprises, but it’s interesting reading. Mr. van Ess also snagged another document, a guide to rating searches.

While we’re crawling all over Henk’s blog anyway, he’s reporting that META description tags are still being used to generate the descriptions displayed on SERPs. This isn’t really news, but it’s interesting to see a post on his blog that doesn’t need to be slashdotted. I’m sure it’s been quite a week for him…

  • http://www.johnconde.net stymiee

    Basically everything that guy said is what we’ve been preaching here for quite some time. Thanks for the backup, GoogleGuy. ;)

  • http://www.seoresearchlabs.com DanThies

    No kidding! I’ve encountered a lot of skepticism over the years about using absolute links instead of relative links; now we have a rep from the biggest SE telling us, “yep, sometimes the crawler misses.” I guess nobody liked my little joke in the title.

  • Bob Gladstein

    That “spamguide” is interesting. There’s a section on screen scraper sites that very clearly describes what these sites do and why they’re “offensive.” But the document, according to its properties, is a little over a year old. How many scraper sites has Google banned in the past year?

  • http://www.seoresearchlabs.com DanThies

    They’ve probably dumped a few, but they really don’t try to fight hand-to-hand. Google’s strategy appears to be more long-term: finding ways to eliminate spam from the search results with changes to their algorithms. Yes, that’s algorithms, plural.

    At the Consumer Reports Webwatch conference last Thursday, we saw at least one search result that was a wee bit questionable. The #2 Google result for “refrigerator” (www.healthyfridge.org/mainmenu.html) leads to a redirect, so that users cannot escape the site by clicking the “back” button (there’s a sketch of how that kind of redirect works at the end of the comments).

    Matt Cutts, director of search quality at Google, was on the panel. If he had wanted to execute a ban on that site, or remove that specific page (a more appropriate response), he could no doubt have done so with a phone call. That SERP remains the same today. I’m sure someone at Google will look at how that page makes it into the rankings, and what can be done to fix the problem.

    I should note that the ‘healthy fridge’ website does not trap users who visit their home page, so it’s probable that their intention with the redirect is simply to get visitors to the right page.

    I should also note that the apparent purpose of eval.google.com is to allow Google to test user satisfaction with new algorithms or tweaks, not to catch spammers.

  • Bob Gladstein

    I guess I misunderstood the purpose of the eval subdomain. I assumed that it was for those rare cases that would involve a manual edit.

  • http://www.seoresearchlabs.com DanThies

    That’s certainly the way it was portrayed in a lot of places, Bob. It’s interesting and encouraging to see that they don’t just measure user satisfaction, but actually have a process to identify what’s wrong with SERPs.
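A quick footnote on the redirect I mentioned in the comments: I haven’t looked at how www.healthyfridge.org actually implements it, so the TypeScript sketch below (function names mine, target URL made up) only illustrates the general reason one style of client-side redirect appears to break the back button while another doesn’t.

```typescript
// Illustration only -- not the actual healthyfridge.org code.

// Variant 1: plain assignment. The redirecting page keeps its entry in the
// browser history, so pressing "back" returns to it, the script runs again,
// and the visitor is bounced forward once more -- the back button seems dead.
function redirectAndTrapBack(target: string): void {
  window.location.href = target;
}

// Variant 2: replace(). The redirecting page is swapped out of its history
// entry, so "back" skips it and the visitor can leave normally.
function redirectWithoutTrapping(target: string): void {
  window.location.replace(target);
}

// A page that only wants to send visitors "to the right page" would normally
// use the second form, or better yet a server-side 301 redirect.
redirectWithoutTrapping("http://www.example.org/");
```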