Making sure JavaScript doesn't change location

Hey guys,

Is there any way to have JavaScript enabled, but not allow it to change the location? I only ever want <a> links to be my source of navigation, and I'd like to disable things like onclick handlers that change the location manually when clicked. I hate when sites use <a onclick="location=blah">. What is the point, when what you want it to do doesn't even require JavaScript? (You don't need JavaScript to direct someone to a new page; that's the <a> tag's job.) So if you have a string of HTML text, how can you make sure JavaScript does not redirect via .location? I guess either PHP or JavaScript could be used to do this.

Thanks guys.

It's almost never a good idea to try to modify or disable default JavaScript functionality unless you have a really good, specific reason for it, not just that it's annoying. (And there are good reasons why a programmer might want to use a JavaScript link instead of a plain HTML one.)


Are you scraping html content from other websites and displaying it as your own then?

Do you want to strip all javascript from this html?

It does sound like you are scraping content, in which case you could run a preg_match_all on the content you gather each time to filter things out, e.g. those onclick actions.
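The same filtering idea works in JavaScript too. Here's a minimal sketch that strips inline event-handler attributes (onclick, onmouseover, etc.) from a scraped HTML string with a regular expression; the function name and sample markup are illustrative, and for production a real HTML sanitizer is safer than a regex:

```javascript
// Minimal sketch: remove inline event-handler attributes like
// onclick="..." or onmouseover='...' from a string of scraped HTML.
// A regex is fine for an illustration, but a proper sanitizer library
// is the safer choice on untrusted input.
function stripInlineHandlers(html) {
  // Matches a space, "on" followed by word characters, "=", and a
  // quoted value (double- or single-quoted).
  return html.replace(/\son\w+\s*=\s*("[^"]*"|'[^']*')/gi, "");
}

const dirty = '<a href="/page" onclick="location=\'somewhere\'">link</a>';
console.log(stripInlineHandlers(dirty));
// → <a href="/page">link</a>
```

This leaves the href-based navigation intact while dropping the scripted redirect, which is exactly the behaviour asked for in the original question.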

Is scraping bad? I am not doing it, per se. Thanks for the tips.

What are some reasons to use onclick on <a> tags instead of the standard functionality using href?

You will realize that it is, but only when you’re responsible for creating your own content, and others have scraped it away from you.

To help prevent scraping.

Also, some onclick techniques are used to report back to the company about which links have been clicked.
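As a rough sketch of how that click reporting works: the onclick handler fires before the browser follows the href, so the handler can send a report and then return true to let the normal navigation proceed. The /track endpoint and the reportFn parameter here are hypothetical stand-ins for whatever transport a site actually uses (e.g. navigator.sendBeacon):

```javascript
// Sketch of onclick-based click tracking. reportFn stands in for the
// real transport (e.g. navigator.sendBeacon); "/track" is a made-up
// endpoint for illustration only.
function trackClick(href, reportFn) {
  reportFn("/track", href); // report which link was clicked
  return true;              // true lets the default <a> navigation proceed
}

// Usage in markup (hypothetical):
// <a href="/pricing"
//    onclick="return trackClick(this.href, navigator.sendBeacon.bind(navigator))">
//   Pricing
// </a>
```

Returning false instead would cancel the navigation entirely, which is the pattern the original poster was complaining about when sites use it for ordinary links.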

Ah I see. Thanks Paul!

P.S. Love the quote “Ham is to Hamster as Java is to JavaScript”

I got one! Sand is to Sandwich as Java is to JavaScript :smiley:

Scraping is not always bad; it depends on what you are doing it for. If you are doing it to indirectly show off other people's hard work and content, then yes, it is bad. But if you are writing a search engine crawler, for example, then no, it can be a good thing if used correctly. So it all depends on your end use.

That’s why one is called scraping, and the other is called crawling or spidering.

Whenever someone mentions scraping, there's a good chance that it's not going to be useful for those being scraped. If the person doing the scraping is doing something appropriate, they should be able to defend their case themselves.


Sometimes scraping goes on in order to collect public data and display it in a better fashion (e.g. without needing JavaScript for links).

In fact a lot of this goes on in the UK because the authorities holding the data will not make it available to the public as data streams, even though the public pays for it.

Hence it gets scraped and redisplayed properly, or mashed/meshed with other data for the greater good.