Google Webmaster Says Blocking Via Robots.txt… But I'm Not?

So here is my robots.txt:
User-agent: *
Allow: /
Disallow: /cgi-bin/
Disallow: /readme.html
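(For anyone who wants to sanity-check rules like these locally, here is a short sketch using Python's standard-library parser. One caveat: `urllib.robotparser` applies the first matching rule, while Google's crawler prefers the most specific, i.e. longest, matching path, so the specific Disallow lines are listed before the broad `Allow: /` here.)

```python
# Local sanity check of robots.txt rules with Python's stdlib parser.
# Note: urllib.robotparser uses first-match semantics, so the specific
# Disallow rules must come before the broad "Allow: /" in this test.
from urllib import robotparser

rules = """\
User-agent: *
Disallow: /cgi-bin/
Disallow: /readme.html
Allow: /
"""

rp = robotparser.RobotFileParser()
rp.parse(rules.splitlines())

for path in ("/", "/a-normal-post/", "/cgi-bin/script.cgi", "/readme.html"):
    print(path, "->", "crawlable" if rp.can_fetch("*", path) else "blocked")
```

With these rules, `/cgi-bin/…` and `/readme.html` come back blocked and everything else crawlable, which is what the file above intends.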

I recently added the “Allow: /” line and got a boost of a couple more pages, but it is still showing many that can't be crawled.
It's WordPress, and I have the “Discourage search engines from indexing this site” box unchecked.

Not sure what to do from here.

Thanks so much for your help!

The number of files blocked by robots.txt is falling. Keep in mind that Google does not re-index everything at once; it will recrawl your content, and it will likely get re-indexed. An alternative explanation is that you have other robots.txt files higher or lower in the directory structure. Check that no other robots.txt files exist anywhere.
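A quick way to do that check is to list every robots.txt under the web root from a shell on the server. A minimal sketch, assuming a hypothetical document root of /var/www/html (adjust for your host):

```python
# List every robots.txt under the web root, to confirm only the root copy
# exists. The document-root path below is an assumption; adjust for your host.
from pathlib import Path

docroot = Path("/var/www/html")  # hypothetical document root
for hit in sorted(docroot.rglob("robots.txt")):
    print(hit)
```

Also worth remembering that WordPress can serve a virtual robots.txt when no physical file exists, so what Google fetches at /robots.txt may not be a file on disk at all.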


OK, I checked, and yeah, just the one robots.txt in the files… what you said makes sense, though.

So it's just a waiting game, after you accidentally block them, until you're 100% back in business?

You can request re-indexing to speed things up… you can do it through your webmaster tools console.

This topic was automatically closed 91 days after the last reply. New replies are no longer allowed.