Hi all
When it comes to duplicate content problems between web pages on the same site, would using a robots meta tag (like below) on the pages I don’t mind being left out of the index be an effective way to solve the problem?
The noindex tag is one possibility; you can also use the rel=canonical tag to resolve potential duplicate content issues. The rel=canonical tag is a better solution. If you use noindex,nofollow then you are also blocking all links on the page.
That’s all very well, but where are you putting it? Because in a lot of cases, the problem comes about when multiple URLs resolve to effectively the same page. If you noindex/nofollow them, you’ll be cutting out a lot of links and content from your site, which is a pretty drastic way to resolve the problem of duplicate content. The canonical tag is a much better way to do it.
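For reference, the two tags being discussed look like this in the page's `<head>` (the URL is just a placeholder — substitute the preferred version of your own page):

```html
<!-- Tells search engines not to index this page AND not to follow its links -->
<meta name="robots" content="noindex, nofollow">

<!-- Tells search engines which URL is the preferred (canonical) version,
     so duplicate URLs consolidate to it instead of being dropped entirely -->
<link rel="canonical" href="https://www.example.com/preferred-page/">
```

Note that with the canonical tag the duplicate pages can still be crawled and their links still count, which is why it is the gentler fix.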
Hi,
It really matters if you are using other sites’ content. Sooner or later every search engine will detect that, and they may ban your site or put it in the sandbox, meaning your site will not appear in search result pages…
If it’s Google you are trying to impress then it’s Google’s rules you should play by. They recommend the canonical tag, so use that. Otherwise you are in the wrong playground.