Oct 14, 2011, 21:08 #1
Comparing solutions to make AJAX crawlable
I'm about to develop a heavily AJAX-based website (the site should look and feel more like a brochure/leaflet).
All of this should happen without harming its SEO performance.
I've seen a few solutions, but most of the articles were quite outdated.
Please correct me where necessary; this is how I see it:
- One way to do it is with a hash URL, something like www.website.com/#!/category/product
-- With jQuery you can change the hash however you want.
-- Because of the '!', Google will request an HTML snapshot instead, so everyone is happy.
-- Question: will these clean URLs (the /#!/... ones) carry the same value for Google as traditional .../category/product URLs?
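To make the first approach concrete: under Google's AJAX crawling scheme, the crawler rewrites a #! URL into a request with an `_escaped_fragment_` query parameter, and your server must answer that request with the HTML snapshot. A minimal sketch of that rewrite (the function name is my own, not part of any library):

```javascript
// Sketch of how the crawler rewrites a #! URL into the request it actually
// sends. The server must respond to the rewritten URL with an HTML snapshot.
function escapedFragmentUrl(prettyUrl) {
  var parts = prettyUrl.split('#!');
  if (parts.length < 2) return prettyUrl; // no hashbang, nothing to rewrite
  var base = parts[0];
  var joiner = base.indexOf('?') === -1 ? '?' : '&';
  return base + joiner + '_escaped_fragment_=' + encodeURIComponent(parts[1]);
}

// www.website.com/#!/category/product
// → www.website.com/?_escaped_fragment_=%2Fcategory%2Fproduct
```

So with this approach you effectively maintain two views: the AJAX one behind the #!, and the snapshot the server renders for the `_escaped_fragment_` request.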
- Another way is to have standard links, like www.website.com/category/product
-- Googlebot can just visit the page directly; to it, this is a normal site.
-- Only one page has to be created, not two views as with the previous solution.
-- Question: I haven't seen how this is technically possible. So far I've only seen /#/ websites.
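A minimal sketch of how the second approach can work, assuming jQuery: the links are real URLs the server can render on its own, and JavaScript intercepts clicks to load content via AJAX instead. The `?fragment=1` parameter and the `#content` selector are made-up conventions for this example, not a real API:

```javascript
// Progressive enhancement: every link is a real, crawlable URL, and
// JavaScript upgrades clicks to AJAX loads when it is available.
// `?fragment=1` is an assumed convention for asking the server for the
// page body only; use whatever your backend actually supports.
function fragmentUrl(path) {
  return path + (path.indexOf('?') === -1 ? '?' : '&') + 'fragment=1';
}

if (typeof jQuery !== 'undefined') {
  jQuery(document).on('click', 'a[href^="/"]', function (e) {
    e.preventDefault(); // stop the full page load
    jQuery('#content').load(fragmentUrl(this.getAttribute('href')));
  });
}
```

Googlebot (and anyone without JavaScript) follows the plain links and gets full pages, while normal visitors get the AJAX behaviour.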
What do you think of these two solutions? Which one is preferable? Do you have experience with either of them?
Oct 15, 2011, 10:42 #2
Most browsers these days support the history API. The history API is an "HTML5" feature that makes it possible to change the URL and maintain history, including manipulating the back button. There is also a wrapper for it out there that provides backward compatibility with the #! technique you described.
The whole URL-manipulation thing is really just a decorative touch. By itself it does not produce physical pages that can be crawled. It needs to be combined with what I mentioned above: build a site that works properly with refreshes, then add the AJAX layer and the URL manipulation on top.
edit: Decorative isn't the right word to use, considering it does make the state of the application restorable in terms of bookmarking and navigating to the URL… when done right.
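A minimal sketch of layering the history API on top of links that already work with refreshes. `loadInto()` stands in for whatever AJAX call renders a path into the page, and `isInternal()` is a helper I made up for the example; neither is part of a real library:

```javascript
// Decide whether a link points at our own site (assumed helper, not an API).
function isInternal(href, origin) {
  return href.charAt(0) === '/' || href.indexOf(origin + '/') === 0;
}

// Stand-in for the AJAX call that renders a path into the page.
function loadInto(path) {
  // e.g. jQuery('#content').load(path); // your actual rendering goes here
}

if (typeof window !== 'undefined' && window.history && window.history.pushState) {
  document.addEventListener('click', function (e) {
    var a = e.target;
    while (a && a.tagName !== 'A') a = a.parentNode; // find the clicked link
    if (!a || !isInternal(a.getAttribute('href') || '', window.location.origin)) return;
    e.preventDefault();
    history.pushState({ path: a.pathname }, '', a.pathname); // URL changes, no reload
    loadInto(a.pathname);
  });

  // Back/forward fire popstate; restore the content for the stored path.
  window.addEventListener('popstate', function (e) {
    if (e.state && e.state.path) loadInto(e.state.path);
  });
}
```

Because every pushed URL is a real page the server can render, a refresh, a bookmark, or a crawler visit all work without the JavaScript layer.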