Easy URL Parsing With Isomorphic JavaScript
Most web applications require URL parsing, whether it's to extract the domain name, implement a REST API, or find an image path. A typical URL structure is described by the image below:
You can break a URL string into its constituent parts using regular expressions, but it's complicated and unnecessary…
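To illustrate the point, here's a rough sketch of the regular-expression route. The pattern is illustrative only and won't cope with every valid URL (credentials, IPv6 hosts, protocol-relative addresses and so on):

// regular expression parsing - illustrative only, not robust
var urlPattern = /^(https?:)\/\/([^:\/?#]+)(?::(\d+))?([^?#]*)(\?[^#]*)?(#.*)?$/,
    parts = 'http://site.com:81/path/page?a=1&b=2#hash'.match(urlPattern);

console.log(
  parts[1] + '\n' + // http:
  parts[2] + '\n' + // site.com
  parts[3] + '\n' + // 81
  parts[4] + '\n' + // /path/page
  parts[5] + '\n' + // ?a=1&b=2
  parts[6]          // #hash
);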
Server-side URL Parsing
Node.js (and forks such as io.js) provides a URL API:
// Server-side JavaScript
var urlapi = require('url'),
    url = urlapi.parse('http://site.com:81/path/page?a=1&b=2#hash');

console.log(
  url.href + '\n' +     // the full URL
  url.protocol + '\n' + // http:
  url.hostname + '\n' + // site.com
  url.port + '\n' +     // 81
  url.pathname + '\n' + // /path/page
  url.search + '\n' +   // ?a=1&b=2
  url.hash              // #hash
);
As you can see in the snippet above, the parse() method returns an object containing the data you need, such as the protocol, the hostname, the port, and so on.
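As a side note, parse() also accepts a second argument: pass true and the query string is returned as an object of key/value pairs rather than a raw string. A quick sketch:

// Server-side JavaScript
// pass true as the second argument to parse the query string
var urlapi = require('url'),
    url = urlapi.parse('http://site.com:81/path/page?a=1&b=2#hash', true);

console.log(url.query.a); // 1
console.log(url.query.b); // 2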
Client-side URL Parsing
There’s no equivalent API in the browser. But if there’s one thing browsers do well, it’s URL parsing, and all links in the DOM implement a similar Location interface, e.g.:
// Client-side JavaScript
// find the first link in the DOM
var url = document.getElementsByTagName('a')[0];

console.log(
  url.href + '\n' +     // the full URL
  url.protocol + '\n' + // http:
  url.hostname + '\n' + // site.com
  url.port + '\n' +     // 81
  url.pathname + '\n' + // /path/page
  url.search + '\n' +   // ?a=1&b=2
  url.hash              // #hash
);
If we have a URL string, we can use it on an in-memory anchor element (a) so it can be parsed without regular expressions, e.g.:
// Client-side JavaScript
// create dummy link
var url = document.createElement('a');
url.href = 'http://site.com:81/path/page?a=1&b=2#hash';
console.log(url.hostname); // site.com
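Following on from the snippet above, the dummy link's search property makes it easy to split a query string into key/value pairs. A minimal sketch, which assumes a simple query and doesn't decode values or handle repeated keys:

// Client-side JavaScript
// split the dummy link's query string into an object
var query = {};

url.search.substring(1).split('&').forEach(function(pair) {
  var parts = pair.split('=');
  query[parts[0]] = parts[1];
});

console.log(query.a); // 1
console.log(query.b); // 2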
Isomorphic URL Parsing
Aurelio recently discussed isomorphic JavaScript applications. In essence, it’s progressive enhancement taken to an extreme level where an application will happily run on either the client or server. A user with a modern browser would use a single-page application. Older browsers and search engine bots would see a server-rendered alternative. In theory, an application could implement varying levels of client/server processing depending on the speed and bandwidth capabilities of the device.
Isomorphic JavaScript has been discussed for many years, but it’s complex. Few projects go further than implementing sharable views, and there aren’t many situations where standard progressive enhancement wouldn’t work just as well (if not better, given that most “isomorphic” frameworks appear to fail without client-side JavaScript). That said, it’s possible to create environment-agnostic micro libraries which offer a tentative first step into isomorphic concepts.
Let’s consider how we could write a URL parsing library in a lib.js file. First we’ll detect where the code is running:
// running on Node.js?
var isNode = (typeof module === 'object' && module.exports);
This isn’t particularly robust, since you could have a module.exports object defined client-side, but I don’t know of a better way (suggestions welcome). A similar approach used by other developers is to test whether the window object is present:
// running on Node.js?
var isNode = typeof window === 'undefined';
Let’s now complete our lib.js code with a URLparse function:
// lib.js library functions

// running on Node.js?
var isNode = (typeof module === 'object' && module.exports);

(function(lib) {

  "use strict";

  // require Node URL API
  var url = (isNode ? require('url') : null);

  // parse URL
  lib.URLparse = function(str) {

    if (isNode) {
      return url.parse(str);
    }
    else {
      url = document.createElement('a');
      url.href = str;
      return url;
    }

  };

})(isNode ? module.exports : this.lib = {});
In this code I’ve used an isNode variable for clarity. However, you can avoid it by placing the test directly inside the last parenthesis of the snippet.
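In other words, only the final line of lib.js would change, with the test written inline. A sketch of just that line (the isNode checks inside the function would still need their own test):

// export target determined without a separate isNode variable
})(typeof module === 'object' && module.exports ? module.exports : this.lib = {});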
Server-side, URLparse is exported as a CommonJS module. To use it:
// include lib.js module
var lib = require('./lib.js');

var url = lib.URLparse('http://site.com:81/path/page?a=1&b=2#hash');

console.log(
  url.href + '\n' +     // the full URL
  url.protocol + '\n' + // http:
  url.hostname + '\n' + // site.com
  url.port + '\n' +     // 81
  url.pathname + '\n' + // /path/page
  url.search + '\n' +   // ?a=1&b=2
  url.hash              // #hash
);
Client-side, URLparse is added as a method to the global lib object:
<script src="./lib.js"></script>
<script>
var url = lib.URLparse('http://site.com:81/path/page?a=1&b=2#hash');

console.log(
  url.href + '\n' +     // the full URL
  url.protocol + '\n' + // http:
  url.hostname + '\n' + // site.com
  url.port + '\n' +     // 81
  url.pathname + '\n' + // /path/page
  url.search + '\n' +   // ?a=1&b=2
  url.hash              // #hash
);
</script>
Other than the library inclusion method, the client and server API is identical.
Admittedly, this is a simple example and URLparse runs (mostly) separate code on the client and server. But we have implemented a consistent API and it illustrates how JavaScript code can be written to run anywhere. We could extend the library to offer further client/server utility functions such as field validation, cookie parsing, date handling, currency formatting, etc.
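As a rough sketch of how such an extension might look, here's a hypothetical queryParse function which could be added inside the same IIFE in lib.js. The name and implementation are illustrative only; it reuses URLparse so the same code runs in both environments:

// illustrative addition to lib.js (inside the same IIFE)
// parse a URL's query string into a key/value object
lib.queryParse = function(str) {
  var query = {},
      search = lib.URLparse(str).search || '';

  search.replace(/^\?/, '').split('&').forEach(function(pair) {
    var parts = pair.split('=');
    if (parts[0]) query[decodeURIComponent(parts[0])] = decodeURIComponent(parts[1] || '');
  });

  return query;
};

// usage (client or server)
console.log(lib.queryParse('http://site.com:81/path/page?a=1&b=2#hash').b); // 2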
I’m not convinced full isomorphic applications are practical or possible given the differing types of logic required on the client and server. However, environment-agnostic libraries could ease the pain of having to write two sets of code to do the same thing.