I see there’s a snippet of Google code I can insert between my Head tags so the Googlebots can crawl my site.
My question is… is it really necessary to do this, seeing as how sophisticated the Internet is set up these days? That is, don’t the bots crawl all over the Internet (and my site) anyway?
What code is it you are referring to? I have never heard of any code that does what you describe.
Googlebot will follow every link it can get its tentacles on, you don’t need to add anything specific to allow or encourage it to do so. What’s the code that you’ve seen?
It’s something like (writing it from memory):
<meta name="robots" content="index,follow">
I came upon it when setting up a site many years ago. So my question is, is it really necessary to insert something like this in one’s code, or is it now a moot point, what with the level of sophistication the Internet has reached?
No, you don’t need it; by default Google will crawl your site. It’s only if you don’t want Google to index a certain page that you’d use a noindex meta tag to keep it out of the results.
That isn’t specific to Google. It has applied to most search engines since search engines were first invented.
You only need to use that if you want to prevent a page being indexed and are not able to do it via the robots.txt file.
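To illustrate the two options mentioned above, here is a sketch of how you might block a single page either way. The page name and directory are hypothetical examples, not anything from your site.

```
<!-- Option 1: meta tag in the page's <head> to keep it out of the index -->
<meta name="robots" content="noindex">

# Option 2: robots.txt at the site root to stop crawlers fetching a path
User-agent: *
Disallow: /private-page.html
```

One caveat worth knowing: robots.txt only blocks crawling, not indexing, so a page disallowed there can still appear in results if other sites link to it. If you need a page kept out of the index, the noindex meta tag (on a page crawlers are allowed to fetch) is the more reliable choice.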
You don’t need it. Google’s default behaviour is to crawl and index everything it can find on your site anyway, you only need to use Tags like that if you want to limit bots really.
As always… great answers to my queries.
Thanks to all.
That’s not Google crawler code. It’s your instruction to crawlers telling them how they should treat your website.