Google Webmaster Tools crawl error "Restricted by robots.txt (473)"

Hi everyone, I am new here and want to ask something about Google Webmaster Tools.

I am getting this crawl error, "Restricted by robots.txt (473)", and I think it is slowing down my traffic.

Here is my robots.txt file. Is there anything wrong with it?

User-agent: Mediapartners-Google

User-agent: *
Disallow: /search
Allow: /

Sitemap:

Hi waleed and welcome to the forums.

User-agent: Mediapartners-Google

You can delete the whole section, as it’s not necessary. It’s saying “Disallow nothing”, which is the same as “Allow everything”, which is the default position.
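In other words, that section behaves the same as this explicit version (a `Disallow` with no value means "disallow nothing"), which is why it adds nothing:

```text
User-agent: Mediapartners-Google
Disallow:
```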

User-agent: *
Disallow: /search

If you want to tell all user-agents to ignore a directory called “search”, this should end in a /:

Disallow: /search/
Allow: / 

is not part of the original robots.txt standard and won't be recognised by all bots. By default, bots will spider the whole site, so you only need to specify directories or files you want to disallow. You should delete this line.
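Putting those suggestions together, a minimal robots.txt for this setup might look like the following. The sitemap URL here is just a placeholder; you would use your own:

```text
User-agent: *
Disallow: /search/

Sitemap: http://www.example.com/sitemap.xml
```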

The link you have given does not go to a sitemap.

You can find more detailed information about robots.txt files here.

Hope that helps.

Thanks for your quick response, sir, but I am a Blogger user and I cannot access my robots.txt file.

But I searched for this issue and found this code on a weblog:

If you want every new page to be crawled by the bots, include the following code in the head section of your Blogger template:

<meta name="robots" content="index, follow">

Is that it? Should I include this? And if I include it, should I mention the URL of the post which I want to be followed or indexed,
like this:

<meta name="robots" content="index, follow" [url of the post]>

Search engines will crawl every page by default, so you don’t need to include that line. It’s only telling them to do what they would do anyway. You only need to include the tag if you don’t want the page indexed, or the links followed. You can find more information from Google here.
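For example, the one variant of the tag that actually changes behaviour is the restrictive one. Placed in a page's head section, this tells search engines to leave that particular page out of the index while still following the links on it:

```html
<meta name="robots" content="noindex, follow">
```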


Thanks for starting this thread, as I am facing the same issues. I have compared my robots.txt code with the link you provided and it appears correct. Still, Webmaster Tools is showing errors. What should be done in this case? I can't remove these URLs, as they are opening as they should. Please suggest.

Welcome to the forum, charletteaus.

If Google has already crawled your site, and then you block part of it with a robots.txt file, Google will show an error message for those pages that it has already indexed and which you are now blocking. It’s really doing this to warn you, in case you’ve blocked these pages by accident. If everything is correct as you want it to be, and the only pages showing the error are pages you don’t want indexed, then that’s fine. You don’t need to do anything. Google will eventually stop showing the error message.

Do you have any idea how much time Google Webmaster Tools will take to remove these errors? I used the Remove URL feature of Webmaster Tools four months back for one of my websites, and those URLs are still shown in Webmaster Tools as "not found" errors. Do they ever get updated? Things are even worse with my e-commerce website: errors keep on increasing, and the robots.txt error count has crossed the 3,000 mark.