Matt Cutts Interviewed by Eric Enge

This is a pretty interesting interview. A few really crucial issues get a mention:

  1. “… the number of pages that we crawl is roughly proportional to your PageRank”

  2. (on duplicate content:) “What we try to do is merge pages, rather than dropping them completely”

  3. Eric Enge: “Can you talk a little bit about Session IDs?” Matt Cutts: “Don’t use them.” (a minimal sketch of what this means in practice follows this list)

  4. (on affiliate links:) “… we usually would not count those as an endorsement”

  5. “We do have the ability to execute a large fraction of JavaScript when we need or want to.” :eek:
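
On point 3, since the answer is so terse: in practice, “don’t use them” means stripping session-ID parameters out of the URLs you expose, so every page has one stable address instead of an endless set of crawlable variants. A minimal sketch follows; the parameter names are common examples I’ve picked, not an exhaustive list.

```ts
// Minimal sketch: canonicalize a URL by dropping common session-ID
// parameters so crawlers see one stable address per page.
// SESSION_PARAMS is an illustrative list, not an exhaustive one;
// Java's path-style ";jsessionid=..." would need separate handling.
const SESSION_PARAMS = ["phpsessid", "jsessionid", "sid", "sessionid"];

function stripSessionIds(rawUrl: string): string {
  const url = new URL(rawUrl);
  for (const key of [...url.searchParams.keys()]) {
    if (SESSION_PARAMS.includes(key.toLowerCase())) {
      url.searchParams.delete(key);
    }
  }
  return url.toString();
}

// stripSessionIds("https://example.com/page?id=42&PHPSESSID=abc123")
//   -> "https://example.com/page?id=42"
```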

YOUR ATTENTION PLEASE

This is the most significant interview Cutts has given in a while, and it contains a lot of very useful SEO information. Please don’t clutter up this thread with useless ‘thankx for the info’ type replies.

Let’s talk about the interview itself.

Were you shocked or surprised by any of these clarifications?

It was good to see things explained with a bit more clarity and examples, rather than leaving recommendations open to interpretation. Really, this pulled together a lot of what Google has already published in different corners of the web into a more succinct summary.

“We do have the ability to execute a large fraction of JavaScript when we need or want to.”

I find this particularly interesting… does this mean that Google now actively processes JavaScript on the page, and therefore could perhaps take advantage of AJAX, content swapping, or injection? It certainly makes sense that they might attempt it as more sites implement JavaScript, but it’s a very interesting idea to think about. :slight_smile:
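
To make that concrete, here’s a hypothetical sketch of the kind of client-side injection I mean. The `/api/reviews` endpoint and the `reviews` container are invented names for illustration. A crawler that only fetches the raw HTML never sees this text; one that actually executes JS, as Cutts suggests Google can, might.

```ts
// Hypothetical example: content that exists only after JavaScript runs.
// "/api/reviews" and the "reviews" element are invented for this sketch.
interface Review {
  author: string;
  text: string;
}

async function injectReviews(): Promise<void> {
  const res = await fetch("/api/reviews"); // hypothetical endpoint
  const reviews: Review[] = await res.json();
  const container = document.getElementById("reviews"); // assumed element
  if (!container) return;
  for (const r of reviews) {
    const p = document.createElement("p");
    p.textContent = `${r.author}: ${r.text}`;
    container.appendChild(p); // this text never appears in the raw HTML
  }
}

document.addEventListener("DOMContentLoaded", () => {
  void injectReviews();
});
```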

Yes. In particular, the ‘crawl budget’ associated with PR. It’s just going to re-invigorate the PR-chasing crowd.

Also, as Alex said, the JS thing was a bit of a surprise. I knew they could extract URLs from JS, but I didn’t know they could execute most types of JS.

The depth and speed of crawling/indexation have been closely associated with PR for some time, and Matt has said as much a few times in his Webmaster videos. I agree that this close association may re-invigorate the PR chasing.
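
Google hasn’t published how the allocation actually works, so treat the following purely as a toy reading of “roughly proportional”, under the over-simplified assumption that sites split a fixed fetch capacity in proportion to their PR:

```ts
// Toy model only -- Google's real scheduler is not public.
// Literal reading of "pages crawled roughly proportional to PageRank":
// each site's share of a fixed fetch capacity is its PR over the total.
interface Site {
  host: string;
  pageRank: number;
}

function allocateCrawlBudget(sites: Site[], totalFetches: number): Map<string, number> {
  const totalPr = sites.reduce((sum, s) => sum + s.pageRank, 0);
  const budget = new Map<string, number>();
  for (const s of sites) {
    budget.set(s.host, Math.floor((s.pageRank / totalPr) * totalFetches));
  }
  return budget;
}

// A PR 6 site gets twice the fetches of a PR 3 site under this naive
// linear reading (real PR is not a linear 0-10 scale, so take it loosely).
console.log(allocateCrawlBudget(
  [{ host: "big.example", pageRank: 6 }, { host: "small.example", pageRank: 3 }],
  9000,
));
```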

We (my company) have a couple of large sites that use a lot of JavaScript/Ajax, and for some time we’ve been seeing both attempts at crawling links inside that JS and attempts at executing on-page JS.

I like the clarification on duplicate content, even though I don’t really understand the methodology of merging pages.

I’m also not following how you can merge together pages that have duplicate content. What about sites like Zimbio that always seem to scrape our blog, creating duplicate content? I don’t want to be merged with anyone!
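
The interview doesn’t spell out the mechanics, but my guess at the rough idea (an assumption, not a description of Google’s actual pipeline) is fingerprint-based grouping: hash a normalized version of each page’s text, cluster the pages whose fingerprints match, then pick one representative URL and fold the others’ signals into it rather than dropping them. That would also be why a scraped copy usually loses out: the original tends to be chosen as the representative. A sketch of the grouping step, assuming exact-duplicate hashing (Google would surely use something fuzzier):

```ts
import { createHash } from "node:crypto";

// Assumption, not Google's documented method: fingerprint the
// normalized body text and bucket pages whose fingerprints match.
// Each bucket would then be collapsed to one representative URL.
function fingerprint(text: string): string {
  const normalized = text.toLowerCase().replace(/\s+/g, " ").trim();
  return createHash("sha1").update(normalized).digest("hex");
}

function groupDuplicates(pages: { url: string; body: string }[]): Map<string, string[]> {
  const groups = new Map<string, string[]>();
  for (const page of pages) {
    const key = fingerprint(page.body);
    const bucket = groups.get(key) ?? [];
    bucket.push(page.url);
    groups.set(key, bucket);
  }
  return groups;
}

// Pages with identical (normalized) bodies land in one group; a real
// system would use fuzzier fingerprints (shingles, simhash) to catch
// near-duplicates rather than only exact copies.
```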