My website is example.com. When I checked the indexing status of example.com in Google, I found that a sub-domain, xyz.example.com, has been indexed under my domain name. I don’t have any domain/folder/pages related to xyz.example.com on my server, but that sub-domain still got indexed in Google.
Can anyone in this group help me solve this problem? How can I remove/block that sub-domain from search engines?
As far as I’m aware, the only ways to do this are to add <meta name="robots" content="noindex"> or <meta name="robots" content="noindex, nofollow"> to every page in your sub-domain, or to password-protect those directories/files.
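For example, assuming an Apache server, the password-protection option could be sketched in an `.htaccess` file like this (the `AuthUserFile` path here is hypothetical — it should point at an `.htpasswd` file outside your web root):

```apache
# Sketch: protect a directory with HTTP Basic auth
# (assumes Apache with mod_auth_basic enabled)
AuthType Basic
AuthName "Restricted area"
AuthUserFile /home/youruser/.htpasswd
Require valid-user
```

Search engines can’t crawl what they can’t fetch, so password-protected content won’t get indexed.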
There is no way that I know of to exclude the sub-domain using a robots.txt file. In any case, Google recommends using one of the above methods to be certain of preventing indexing of particular files or directories.
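To illustrate why robots.txt isn’t reliable for this: even a blanket rule like the one below, served at http://xyz.example.com/robots.txt, only tells compliant crawlers not to *fetch* the pages — Google can still index the bare URLs if they’re linked from elsewhere:

```txt
User-agent: *
Disallow: /
```

So at best this hides the page content, not the URLs themselves.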
[quote=“mikemiller7, post:3, topic:266771”]
I think the best way will be to redirect that through your .htaccess file
[/quote]Redirect what to where? I don’t see how a redirect is supposed to help here.
What rule do you suggest adding which will block that sub-domain? I’m not aware of one which will work in this situation. In any case, as I’ve already said, Google makes it clear that you should not rely on robots.txt to prevent indexing of a page or pages.
Thanks for the reply. Here’s my issue: I don’t have any files / pages / folders related to my sub-domain on the server. If we had physical pages, we could add the meta noindex tag, but all those pages are virtually created.
So the pages are there, available to read and hence being indexed by search engines. I’m not sure what he means by “virtually created”, but if it were simply a case of removing them altogether, I have no doubt he could do that.
What he wants is a solution which will prevent search engines from indexing them, but still permit others to access the pages as required.
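If the “virtual” pages come from a wildcard DNS/vhost setup served by the same Apache config, one possible sketch (an assumption on my part — it requires mod_setenvif and mod_headers to be enabled) is to send a noindex header only when the request’s Host is the unwanted sub-domain, which keeps the pages readable while telling search engines not to index them:

```apache
# Sketch: mark responses for the unwanted sub-domain as noindex
# without touching requests for the main domain.
SetEnvIfNoCase Host ^xyz\.example\.com$ NOINDEX_SUB
Header set X-Robots-Tag "noindex, nofollow" env=NOINDEX_SUB
```

The X-Robots-Tag header works like the meta robots tag, but since it’s set at the server level it doesn’t require any physical pages to edit.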