Robots.txt Block Subdomain

Hey everyone,
I have a robots.txt file and I need to block the search engine spiders from crawling the “dev” subdomain.

Here is the code I am using in my robots.txt file in the /public_html/ folder:


User-agent: *
Disallow: /dev/
Allow: /

Does this look right? Or does it need tweaking to disallow the dev subdomain? (The “dev” subdomain is served from the “dev” directory.)

Thanks!

Are there any links to the dev subdomain? If not, then the search engines will not find it even without a robots.txt, and the only effect of the Disallow line will be to tell spambots that the sub-directory exists.

I have not gone out of my way to link anything to the “dev” subdomain, but Google seems to have found it anyway, and I have stumbled across a few dev subdomain pages in the SERPs…

If I use the code I posted above, will it block the “dev” subdomain from Google? And will that mean the “dev” subdomain pages will eventually be removed from Google’s index too?

Here it is again; I just want to make sure I have the robots.txt code correct.

Here is how my files are structured on my server:
/public_html/robots.txt
/public_html/dev/

And here is my robots.txt code:


User-agent: *
Disallow: /dev/
Allow: /

Thanks,

The code seems OK, but you can test it in Google Webmaster Tools under Crawler Access.
Bear in mind that Disallow only stops crawling: pages that are already indexed can hang around for a while, so you may also want to use the URL removal tool in Webmaster Tools to get them out faster.
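
If you want a quick sanity check outside of Webmaster Tools, Python’s standard-library robot parser can evaluate the same rules locally. This is just a minimal sketch, and example.com is a placeholder for your real hostname:


from urllib.robotparser import RobotFileParser

# The rules from the root robots.txt posted above.
rules = """\
User-agent: *
Disallow: /dev/
Allow: /
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())

# Paths under /dev/ on the main host are blocked; everything else is allowed.
print(rp.can_fetch("*", "http://example.com/dev/page.html"))  # False
print(rp.can_fetch("*", "http://example.com/index.html"))     # True

Note that this parser only looks at the URL path, not the hostname, so it checks the Allow/Disallow logic but not the per-host behaviour discussed further down.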

It seems to work according to Google Webmaster Tools when I have this robots.txt file in the /public_html/dev/ folder:


User-agent: *
Disallow: /

Not even sure I need the robots.txt in the /public_html/ root directory at all to block access to the dev subdomain…
Oh well, I guess I shall just have to wait and see how it goes.
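
One way to confirm what a crawler actually sees is to fetch the file the same way a crawler does. Another small sketch, with dev.example.com standing in for the real subdomain:


from urllib.request import urlopen

# This should print the Disallow: / rules from /public_html/dev/robots.txt,
# because that folder is the subdomain's document root.
print(urlopen("http://dev.example.com/robots.txt").read().decode())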

In the root robots.txt, “dev” is just a subfolder of the main site, not the subdomain. Crawlers request robots.txt separately for each hostname, so the dev subdomain is governed only by the robots.txt in its own document root (/public_html/dev/); the root file only blocks the /dev/ path on the main domain.
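
So, spelled out, the working setup from this thread looks like this (example.com is a placeholder hostname):

/public_html/robots.txt, served as http://example.com/robots.txt (blocks the /dev/ folder on the main domain only):

User-agent: *
Disallow: /dev/
Allow: /

/public_html/dev/robots.txt, served as http://dev.example.com/robots.txt (blocks everything on the subdomain):

User-agent: *
Disallow: /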