Web crawlers

I don’t really understand the concept of crawlers. What are they exactly? Can anyone shed some light on this topic?

A web crawler (or spider) is a program that reads web pages and follows the links on them. That’s how search engines like Google build up their index of the web: these programs spend their whole time reading pages, following their links to other pages, reading those, and so on. For a more detailed answer, as always, Wikipedia has everything you could possibly want to know: http://en.wikipedia.org/wiki/Web_crawler
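To make that concrete, here is a minimal sketch of the read-page-then-follow-links loop in Python, using only the standard library. All the names (`extract_links`, `crawl`, the `fetch` parameter) are made up for illustration; a real crawler would add HTTP fetching, politeness delays, and robots.txt handling on top of this.

```python
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin


class LinkExtractor(HTMLParser):
    """Collects href targets from <a> tags, resolved against a base URL."""

    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    # Turn relative links like "/about" into absolute URLs.
                    self.links.append(urljoin(self.base_url, value))


def extract_links(html, base_url):
    parser = LinkExtractor(base_url)
    parser.feed(html)
    return parser.links


def crawl(start_url, fetch, max_pages=10):
    """Breadth-first crawl: read a page, queue its links, repeat.

    `fetch` is any function mapping a URL to its HTML, so a real HTTP
    client (e.g. urllib.request) can be plugged in; a dict lookup
    works for offline testing.
    """
    seen = {start_url}
    queue = deque([start_url])
    visited = []
    while queue and len(visited) < max_pages:
        url = queue.popleft()
        visited.append(url)
        for link in extract_links(fetch(url), url):
            if link not in seen:  # never revisit a page
                seen.add(link)
                queue.append(link)
    return visited
```

Run against a tiny in-memory "web", `crawl` visits the start page, discovers the page it links to, and stops once every reachable page has been seen. A search engine is doing essentially this, just at enormous scale and feeding each fetched page into its index.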