So here is my robots.txt:
User-agent: *
Allow: /
Disallow: /cgi-bin/
Disallow: /readme.html
I recently added the "Allow: /" line and got a boost of a couple more pages, but it is still showing many pages that cannot be crawled.
It's WordPress, and I have the "disallow search engines from crawling" option unchecked.
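For what it's worth, one quick way to sanity-check how crawlers should interpret the rules above is Python's built-in robots.txt parser. This is just a sketch, and the sample paths are made-up examples; substitute real URLs from the site.

```python
# Sanity-check the posted robots.txt rules with Python's built-in parser.
# The paths tested below are hypothetical examples.
import urllib.robotparser

rules = """\
User-agent: *
Allow: /
Disallow: /cgi-bin/
Disallow: /readme.html
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(rules.splitlines())

for path in ("/", "/some-post/", "/cgi-bin/test", "/readme.html"):
    print(path, "->", "allowed" if rp.can_fetch("*", path) else "blocked")
```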
The number of files blocked by robots.txt is falling. Keep in mind that Google does not re-index everything at once; it will recrawl your content over time and it will likely get re-indexed. An alternative explanation is that you have other robots.txt files higher or lower in the directory structure. Check that no other robots.txt files exist anywhere.
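If you have shell access, a minimal way to check that last point is to walk the document root and list every robots.txt on disk. The root path below is an assumption; adjust it to your actual install location.

```python
# Minimal sketch: list every robots.txt file found under the web root.
# The web_root path is a hypothetical example; change it to your setup.
import os

web_root = "/var/www/html"  # assumed document root

for dirpath, dirnames, filenames in os.walk(web_root):
    if "robots.txt" in filenames:
        print(os.path.join(dirpath, "robots.txt"))
```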