This blog post might interest all you link-building fanatics
There’s a problem with this thesis. His idea is that traffic and links acquired should be correlated: that Google would notice if a site is suddenly being linked to very often but hasn’t had a corresponding increase in traffic.
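To make the hypothesised mechanism concrete before picking it apart, here’s a toy sketch of the kind of check he seems to be describing. Every function name, number and threshold below is my own invention; this is just the hypothesis written as code, not anything Google is known to do.

[CODE]
# Toy illustration of the blog post's hypothesis: flag pages whose
# link-acquisition rate jumps without a matching rise in traffic.
# All names and numbers here are invented for the example.

def looks_unnatural(weekly_links, weekly_traffic, spike_ratio=5.0):
    """Compare this week's figures against the trailing average.

    weekly_links / weekly_traffic: per-week counts, most recent last.
    """
    if len(weekly_links) < 2 or len(weekly_traffic) < 2:
        return False  # not enough history to judge

    avg_links = sum(weekly_links[:-1]) / (len(weekly_links) - 1)
    avg_traffic = sum(weekly_traffic[:-1]) / (len(weekly_traffic) - 1)

    link_spike = weekly_links[-1] > spike_ratio * max(avg_links, 1)
    traffic_spike = weekly_traffic[-1] > spike_ratio * max(avg_traffic, 1)

    # Suspicious only if links spiked but traffic did not.
    return link_spike and not traffic_spike

# Links jump 10x while traffic stays flat -> flagged.
print(looks_unnatural([10, 12, 11, 110], [500, 520, 510, 505]))   # True
# Links and traffic spike together -> looks like genuine popularity.
print(looks_unnatural([10, 12, 11, 110], [500, 520, 510, 5000]))  # False
[/CODE]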
The problem is that Google does not know how much traffic an arbitrary website is receiving. They aren’t omniscient; this isn’t public information, and they have no more idea about it than you or I. Nobody knows exactly how much traffic my blog receives except me. It’s only recorded on my server, and Google has no access to my server.
They can’t be looking at Alexa/Quantcast/etc. because those figures are domain-wide while Google operates on a page basis, and those services have little or no useful data on all but the top 20,000 sites or so.
So how can Google be using that as an indicator?
I would imagine that the huge number of sites using Google Analytics has given them some very useful indicators/relationships between site type, traffic volumes (and sources) and link acquisition patterns.
It’s entirely possible that they’re using some kind of model extrapolated from that GA data in an algo.
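Purely as speculation on my part, a model “extrapolated from GA data” might look something like this: per site category, learn a links-per-visit ratio from opted-in Analytics sites, then predict an expected link count for any site you can categorise and roughly size. All the names and figures below are mine, not Google’s.

[CODE]
# Speculative sketch: fit links-per-visit per site category from
# Analytics-style samples, then predict an expected link count.
# Entirely hypothetical; categories and numbers are made up.

from collections import defaultdict

def fit_links_per_visit(samples):
    """samples: list of (category, monthly_visits, new_links) tuples."""
    totals = defaultdict(lambda: [0, 0])  # category -> [visits, links]
    for category, visits, links in samples:
        totals[category][0] += visits
        totals[category][1] += links
    return {c: links / visits for c, (visits, links) in totals.items()}

def expected_links(model, category, est_visits):
    # Sites with no matching category get no expectation at all,
    # which is exactly the "arbitrary site" problem raised below.
    return model.get(category, 0.0) * est_visits

ga_samples = [
    ("blog", 10_000, 40), ("blog", 50_000, 180),
    ("news", 100_000, 900), ("news", 20_000, 200),
]
model = fit_links_per_visit(ga_samples)
print(expected_links(model, "news", 30_000))  # rough expectation, ~275
[/CODE]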
How can you apply a model to know whether an arbitrary site or page is seeing a spike in traffic, though? It may let them estimate something about the normal traffic to a site, but gaining a large number of links in a short time isn’t unusual if a site has a sudden surge in traffic: more people reading the content means more people potentially linking to it.
I can see that they might find a way to use the Analytics data without breaking their privacy policy (still shady), but it seems difficult to apply to this particular task.
Since Google are experts at identifying types of site, I don’t see how anything can be considered ‘arbitrary’ to them. Everything will fall into one category or another; otherwise they wouldn’t be able to immediately rank fresh content from sites where that’s justified, for example.
I’ve read the Analytics privacy policy. I’m no lawyer, but I didn’t see anything where they promise not to draw their own conclusions from what they’re seeing in our data, or not to use it to devise profiles that would help them police the SERPs. I notice that neither Analytics nor AdWords is included in the list of Google privacy policies, although that probably means nothing because Analytics has [URL=“http://www.google.com/analytics/tos.html”]its own policy.
Sure, it would be difficult to apply, but that’s why they employ so many PhDs; if anyone can do it, Google can. This is pure speculation on my part, of course.
I’m pretty sure they can’t even get close to estimating the traffic on a random site; there’s simply no way of doing it.
That’s a good link you provided. Getting backlinks fast is also important for achieving keyword rankings quickly.
But in some ebook I read, getting links too fast may make your website look suspicious or spammy to Google.
It can’t be that simple. Some types of website get backlinks very fast, like blogs or news sites, and they don’t get penalised; they actually rank fast. So it must depend on what kind of site you have.
Substitute the word ‘given’ for the word ‘arbitrary’ and you’ll have your answer. Unless you’ve plugged your site into Google Analytics, they have no way of knowing how much traffic a given site receives. It might be possible for them to extrapolate (from their pool of Analytics subscribers) a ‘normal’ amount of traffic a site ‘should’ receive based on the amount of content, length of time in operation, etc., but this is an unlikely scenario given how much time it would take to build such a model and how little return they would get from such a feature (not to mention how inaccurate it could potentially be).
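For what it’s worth, here’s a back-of-the-envelope version of the model described above, just to show how crude it would be. The coefficients are completely made up, which is rather the point about its potential inaccuracy.

[CODE]
# Naive "normal traffic" model: predict monthly visits from page count
# and site age, then measure how far a real figure deviates from it.
# Coefficients are invented for illustration only.

def predicted_monthly_visits(pages, age_months,
                             visits_per_page=30, visits_per_month_age=50):
    # Crude linear guess: more content and more history -> more traffic.
    return pages * visits_per_page + age_months * visits_per_month_age

def anomaly_ratio(actual_visits, pages, age_months):
    expected = predicted_monthly_visits(pages, age_months)
    return actual_visits / max(expected, 1)

# A 200-page, 3-year-old site "should" see ~7,800 visits/month under
# these made-up coefficients. Real sites vary so wildly around any
# such line that the estimate tells you very little.
print(predicted_monthly_visits(200, 36))  # 7800
print(anomaly_ratio(780, 200, 36))        # 0.1 -> far below "normal"
[/CODE]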