Cross-browser JSON Serialization in JavaScript


In this article we will examine the benefits of object serialization, look at the current browser implementations, and develop some code that could help your Ajax-based projects.

Assume we have a fairly complex JavaScript object defined using literal notation:


var obj1 = {
	b1: true,
	s1: "text string",
	n1: 12345,
	n2: null,
	n3: undefined,
	a1: [ 1,1,2,3,5,8, [13, 21, 34] ],
	o1: {
		a: [3, 2, 1],
		b: {
			c: 42,
			d: [ 3.14, 1.618 ]
		}
	}
};

We can access any of the object properties in a variety of ways:


obj1.s1;				// returns "text string"
obj1["n1"];				// returns 12345
obj1.a1[6][1];			// returns 21
obj1["o1"]["b"]["c"];	// returns 42

This object can also be passed to JavaScript functions and methods rather than specifying individual arguments. Useful stuff.
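
For example, rather than passing several individual arguments, we can pass the whole object and read the named properties inside the function (a minimal illustration; the showAnswer function is our own):

function showAnswer(settings) {
	alert(settings.o1.b.c);		// shows 42
}
showAnswer(obj1);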

However, what if we need to store this object in a cookie? What if we need to pass the object to a web service via an Ajax request? What if that web service wants to return a modified version of the object? The answer is serialization:

  • Serialization is the process of turning any object into a string.
  • De-serialization turns that string back into a native object.

Perhaps the best string notation we can use in JavaScript is JSON — JavaScript Object Notation. JSON is a lightweight data-interchange format inspired by JavaScript object literal notation as shown above. JSON is supported by PHP and many other server-side languages (refer to json.org).

There are two JSON methods in JavaScript:

  1. JSON.stringify(obj) — converts a JavaScript object to a JSON string
  2. JSON.parse(str) — converts a JSON string back to a JavaScript object
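
Where native support is available, a round trip looks like this (a sketch; the exact string returned by stringify may vary slightly between implementations):

var str = JSON.stringify(obj1);
// str is something like '{"b1":true,"s1":"text string","n1":12345, ... }'

var obj2 = JSON.parse(str);
obj2.o1.b.c;		// returns 42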

Unfortunately, very few browsers provide these methods. To date, only Firefox 3.5, Internet Explorer 8.0 and Chrome 3 beta offer native support. Some JavaScript libraries offer their own JSON tools (such as YUI) but many do not (including jQuery).

However, all is not lost — JavaScript is flexible and we can implement the JSON stringify and parse methods whenever a browser requires them.

At the top of our code, we will create a JSON variable that points to the native JSON object or an empty object if it is unavailable:


var JSON = JSON || {};

The JSON.stringify code is a little more complex:


// implement JSON.stringify serialization
JSON.stringify = JSON.stringify || function (obj) {

	var t = typeof (obj);
	if (t != "object" || obj === null) {

		// simple data type
		if (t == "string") obj = '"'+obj+'"';
		return String(obj);

	}
	else {

		// recurse array or object
		var n, v, json = [], arr = (obj && obj.constructor == Array);

		for (n in obj) {
			v = obj[n]; t = typeof(v);

			if (t == "string") v = '"'+v+'"';
			else if (t == "object" && v !== null) v = JSON.stringify(v);

			json.push((arr ? "" : '"' + n + '":') + String(v));
		}

		return (arr ? "[" : "{") + String(json) + (arr ? "]" : "}");
	}
};

If JSON.stringify is not available, we define a new function that accepts a single obj parameter. The parameter can be a single value, an array, or a complex object such as obj1 above.

The code examines the type of the argument. Simple values are returned immediately; only strings are modified, with quotes placed around the value.

If an array or object is passed, the code iterates through every property:

  1. String values have quotes added.
  2. Child arrays or objects are recursively passed to the JSON.stringify function.
  3. The resulting values are added to the end of a json[] array as "name":value strings, or just the value itself for array items.
  4. Finally, the json array is converted to a comma-delimited list and returned within array [] or object {} brackets as necessary.
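
As a quick check, here is the output the function above produces for a small object (our own example data):

JSON.stringify({ a: [1, 2, 3], b: "text", c: { d: null } });
// returns '{"a":[1,2,3],"b":"text","c":{"d":null}}'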

If your brain’s aching, you’ll be pleased to know that the JSON.parse code is much simpler:


// implement JSON.parse de-serialization
JSON.parse = JSON.parse || function (str) {
	if (str === "") str = '""';	// an empty string would break eval(), so treat it as ""
	eval("var p=" + str + ";");
	return p;
};

This converts a JSON string to an object using eval().
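
With both methods in place, we can tackle the cookie scenario mentioned earlier (a minimal sketch; the "data" cookie name is our own and expiry and error handling are omitted):

// store the serialized object in a session cookie
document.cookie = "data=" + encodeURIComponent(JSON.stringify(obj1));

// ...later, read the cookie and rebuild the object
var match = document.cookie.match(/(?:^|; )data=([^;]*)/);
var restored = match ? JSON.parse(decodeURIComponent(match[1])) : null;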

Before you rush off to implement JSON serialization functions in all your projects, there are a few gotchas:

  • This code has been intentionally kept short. It will work in most situations, but there are subtle differences with the native JSON.stringify and JSON.parse methods.
  • Not every JavaScript object is supported. For example, a Date() will return an empty object, whereas native JSON methods will encode it to a date/time string.
  • The code will serialize functions, e.g. var obj1 = { myfunc: function(x) {} }; whereas native JSON methods will not.
  • Very large, deeply-nested objects can cause recursion errors.
  • The use of eval() in JSON.parse is inherently risky. It will not be a problem if you are calling your own web services, but calls to third-party applications could accidentally or intentionally break your page and cause security issues. If necessary, a safer (but longer and slower) JavaScript parser is available from json.org.
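
For instance, the Date and function gotchas can be seen directly in a browser without native JSON support (the output shown is indicative):

JSON.stringify({ d: new Date(), f: function (x) { return x; } });
// the code above returns '{"d":{},"f":function (x) { return x; }}' -- not valid JSON
// native methods return something like '{"d":"2009-08-21T14:00:00Z"}'
// (the Date becomes a string and the function property is dropped)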

I hope you find the code useful. Feel free to use it in your own projects.

Coming soon: a useful application of JSON serialization…

  • James Sumners

    Ack! I wrote this exact thing a couple of weeks ago. I ran into the deep recursion problem you note at the end of the article. When I saw the article title I had hoped you were going to provide a solution :(

  • http://www.optimalworks.net/ Craig Buckler

    @James Sumners
    You’ve got to have a seriously large object for that to happen. For example, passing a DOM node would also serialize all the properties and methods – that would certainly cause a recursion problem. But why would you need to do that?

  • Breton

    Using eval could be a problem even if you’re calling your own webservices. What kind of webservice nowadays doesn’t have some kind of user generated content? Not only are you trusting user input without sanitizing it, but you are re-inventing the wheel, except you’ve made it square.

    The new browser APIs are based exactly on http://www.json.org/json2.js and designed to be compatible with that. Conversely, json2.js is designed to fall back to native browser implementations if they exist. What json2.js does, that your poor re-implementation doesn’t do, is sanitise the inputs using four carefully constructed REGEXES that were developed over the course of ~5 years of hard experience. Please, don’t follow the advice of this article. Just use json2.js, it will save you time and anguish.

  • Breton

    I just noticed this in the article “a safer (but longer and slower) JavaScript parser is available from json.org.”

    The parser at json.org uses the eval technique exactly as you do, but it sanitises the inputs, which is actually necessary in most situations. So it’s exactly as fast as it needs to be. Err on the side of caution on this one, even if calling your own webservices.

    What bothers me a bit about the statement is the lack of evidence to back up the claim. Have you run benchmarks comparing your method to the json2.js? I would expect json2.js to be slightly slower, but in your tests, have you found it to be so significantly slower to throw security out the window?

  • http://www.optimalworks.net/ Craig Buckler

    @Breton
    I don’t agree that ‘most’ web service calls have user-generated content. Many do, for sure, but that content should be sanitized before it’s passed around. Handling it after an Ajax call is not efficient and far too late!

    Remember that JavaScript Ajax calls must (normally) occur via a web server on the same domain. Calling a remote service is possible using a proxy — and sanitization would be better handled by that proxy than JS. json2.js will not help if you’re using DOM <script> injections to get remote data.

    With regard to benchmarking, it would depend on the call frequency, data size, the connection speed, the browser, and the JS engine. Using json2.js will always take longer, but it should have a negligible effect on small, infrequent Ajax calls. Large, frequent calls on slower browsers would be another matter.

    Finally, don’t assume that json2.js provides a security guarantee. It’s safer than the solution above, but grabbing data from a foreign web service will always have a risk. If you’re erring on the side of caution, you’d be better off forgetting Ajax altogether!

  • James Sumners

    I am using it to print to a custom log window when console.log() is not available. It’s not really important that I solve the recursion problem, but it would be nice.

  • http://www.optimalworks.net/ Craig Buckler

    @James Sumners
    The recursion issue will only occur with deeply-nested data. Long lists of data should be fine.

    If you want to limit the recursion, create a variable outside the function that is initialized to 0 (e.g. var rlimit = 0;). The first line of stringify should have “rlimit++;” and then you need “rlimit--;” just before BOTH return statements.

    Now change the recursion line to:

    else if (t == "object" && v !== null && rlimit < 20) v = JSON.stringify(v);

    The recursion will be limited to 20 internal calls (you might need to adjust that number). Some data will be missed, but the JS won’t stop.
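
    Putting those changes together (a quick sketch; adjust the limit as required):

    var rlimit = 0;		// current recursion depth

    JSON.stringify = JSON.stringify || function (obj) {
    	rlimit++;
    	var t = typeof (obj);
    	if (t != "object" || obj === null) {
    		if (t == "string") obj = '"'+obj+'"';
    		rlimit--;
    		return String(obj);
    	}
    	else {
    		var n, v, json = [], arr = (obj && obj.constructor == Array);
    		// rlimit < 20 below stops the recursion at the depth limit
    		for (n in obj) {
    			v = obj[n]; t = typeof(v);
    			if (t == "string") v = '"'+v+'"';
    			else if (t == "object" && v !== null && rlimit < 20) v = JSON.stringify(v);
    			json.push((arr ? "" : '"' + n + '":') + String(v));
    		}
    		rlimit--;
    		return (arr ? "[" : "{") + String(json) + (arr ? "]" : "}");
    	}
    };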

  • http://www.brothercake.com/ brothercake

    Strictly speaking, you can’t even trust your own input – you can’t be absolutely sure that it wasn’t modified between your request and its delivery. Even if that modification was accidental, caused by major packet loss, it could still be enough to create problems, and especially so if it was maliciously intercepted.

    The rule of thumb is one that PHP coders all live by – don’t trust *any* input, sanitize *everything*.

  • http://www.brothercake.com/ brothercake

    http://xkcd.com/327/ as a case in point :)

  • Anonymous

    @Craig Buckler
    Brothercake already handled your other points quite well, so I won’t bother with those.

    I will take issue with this:

    “With regard to benchmarking, it would depend on the call frequency, data size, the connection speed, the browser, and the JS engine. ”

    Exactly! Which is why I think it’s irresponsible that you’re giving advice that one should choose the more dangerous option “most” of the time. It’s an optimization. Optimizations should be reserved for when you absolutely need them, not used as a default choice. Especially considering that it seems to be having quite a bit of trouble with deep recursion. (I’ve never heard of this problem with the standard json2 lib.)

    “Using json2.js will always take longer, but it should have a negligible effect on small, infrequent Ajax calls. Large, frequent calls on slower browsers would be another matter.”

    Based on what evidence, exactly? Do you have a specific project where this was a real problem?

    “Finally, don’t assume that json2.js provides a security guarantee. It’s safer than the solution above, but grabbing data from a foreign web service will always have a risk. If you’re erring on the site of caution, you’d be better off forgetting Ajax altogether!”

    So our options are either complete and total recklessness, or not doing anything at all. Nice logic there. Thanks.

  • http://www.optimalworks.net/ Craig Buckler

    @Breton (Anonymous?)
    Er, hold on … I have not said use this solution ‘most’ of the time. I specifically mentioned json2 and said that it should be used if security is an issue. In a post coming shortly, I will provide a situation where security is not a problem so the code above is fine.

    Of course json2 parsing is slower — it’s doing far more work and ultimately runs the same eval() function. Whether that produces a problem for your web app is another matter. Every situation is different and a benchmark won’t prove or disprove anything either way.

    With regard to deep recursion, you are obviously unaware that json2 uses it. Even the comments tell you that! Recursion is the most efficient solution for traversing tree-like structures and json2 will have exactly the same issues with large, deeply-nested data sets. However, serializing objects like that is rarely efficient or necessary — you should never encounter a problem.

    However, what concerns me is that you’re willing to blindly use a library you don’t understand for the sake of “security”? How reckless is that?! ;^)

    If security is your overriding concern, use XML or another data format that does not rely on eval(). Alternatively, sanitize the JSON data according to known data rules rather than using a generic parser.

  • ZenPsycho

    (I am Breton/anonymous, and pretty inept at this comment system, obviously)
    “Whether that produces a problem for your web app is another matter. Every situation is different and a benchmark won’t prove or disprove anything either way.” I’m not asking for a benchmark, I’m asking for evidence for this claim:

    “Using json2.js will always take longer, but it should have a negligible effect on small, infrequent Ajax calls. Large, frequent calls on slower browsers would be another matter.”

    It doesn’t really matter to me whether this evidence is a benchmark or a case study or whatever. The main thing that bothers me is this offering of “truths” without really anything at all to substantiate them. So to begin with, I don’t think large frequent ajax calls are a very good idea, regardless of which library you’re using. There are issues with the fact that the browser can only have 2 request threads going at once, and if one or both of them take longer than the request frequency, you can either hang the application, or, if you’re more proactive, rapidly time them out and end up with dropped messages.

    So it strikes me as a bit of a red herring, because the solution to that problem is not to switch to using a bare eval, It’s to figure out some way to accomplish what you’re doing without large and frequent calls. Perhaps small and frequent calls, or large and infrequent calls?

    However, if I forget about that little problem for a moment, it seems like you’re saying that if you had large frequent calls, json2.js would be a poor choice, while a bare eval would be a good choice. I’m simply asking… where’s the evidence for this claim?

    “With regard to deep recursion, you are obviously unaware that json2 uses it. Even the comments tell you that!”

    Of course I’m aware that json2.js uses recursion to stringify. Perhaps I insult you prematurely, I was just baffled by James’ complaint because according to my tests, you’d have to nest something at least ~100 levels deep before you cause a stack overflow in JS. I’d assumed that there must have been some ridiculous error in your code to cause this. I apologise. You’d probably only get up to 100 with a circular reference. As a sidenote, the proprietary mozilla “toSource” method handles circular references using a rather intriguing (and proprietary) notation. JSON.stringify just throws an error.

    “However, what concerns me is that you’re willing to blindly use a library you don’t understand for the sake of “security”? How reckless is that?! ;^)”

    Have you read the source code to the operating system you use lately? How can you possibly trust it if you don’t understand how it works? How can you possibly trust your own mind if you don’t understand how that works? The answer is, the world is too complicated to really understand fully, but we get by the best we can anyway. We trust that operating system authors know what they’re doing, because the operating system seems to perform well, and doesn’t crash too much, and we receive timely security patches if something goes wrong. We trust our own minds because well, we really don’t have much of a choice! Though, science can help tell us where we’re likely to make mistakes.

    json2.js isn’t all that complicated, I understand well enough how it works despite your accusations. I might not know in detail what’s in those regular expressions, and I don’t really want to know. That doesn’t really matter. I don’t really use it for “Security”. It’s more like how I don’t have a habit of jumping in front of speeding traffic. It doesn’t guarantee I won’t get hit by a car, but at least I’m not actively encouraging it.

    When you say, basically “It’s okay to run out in front of traffic if you’re just really really in a hurry!” it might sound like I’m being a security knob if I express skepticism, but really I think I’m just being kind of reasonable. Perhaps I’m wrong.

    I trust Douglas Crockford (and there are many good reasons to do so, I think). If he says that the extra stuff that he put into json2.js is important, I tend to believe it. However, even if he was just some random shmoe programmer, he’s been debugging his json routine since 2001. How long have you been working on yours?

    So, no, I can’t say I agree with your assessment that I’m using json2.js blindly, and without understanding how it works.

    “In a post coming shortly, I will provide a situation where security is not a problem so the code above is fine.”

    I look forward to it. However, you must understand you have earned my skepticism.

  • http://www.optimalworks.net/ Craig Buckler

    @Breton/anonymous/ZenPsycho
    json2 analyses the JSON for ‘correctness’ then does an eval. My code just does the eval. Do you really need proof that json2 will be slower?!? Whether it’s noticeable will depend on your application.

    My apologies if you do understand json2/recursion, but you did say: “…it seems to be having quite a bit of trouble … i’ve never heard of this problem…” The recursion also occurs within stringify rather than the parse, which appears to be your main problem.

    My (purposely brief) solution above is insecure — I explicitly mention that and recommend json2 if security is an issue. But if you’re really worried about security, don’t use JSON/eval — XML or a verifiable string format would be better than relying on json2.js to protect you.

    However, just for you, I’ve re-written the article. Here you go …

    JSON can be used for object serialization. Visit json.org for instructions and code. (Please ignore everything above!)

  • http://mrclay.org/ mrclay

    Can y’all hug and make up? The article gives a fine intro to serialization and rolling your own JSON object. And once you understand it, just use json2.js instead.

    Do you want to revisit the security and native compatibility implications of your JSON use every time you choose between two implementations (and risk creating a vulnerability when changing that code later)? I’d rather not.

  • ZenPsycho

    “json2 analyses the JSON for ‘correctness’ then does an eval. My code just does the eval. Do you really need proof that json2 will be slower?!? Whether it’s noticeable will depend on your application.”

    I don’t debate that it will be slower. What I’m questioning is whether it will ever be so significantly slower that the only solution is to roll your own version of (the closest JavaScript has to) a standard library. And the related question of whether that would actually be a particularly useful solution, if it ever gets that bad.

    However, I’ll admit I’ve been a bit hard on you. Sorry for that. Consider it good practice for the really tough critics!

  • Flash

    I have put together a little JSON sample that iterates over a JavaScript object and posts the property values to a cross-domain server hosted by a .NET .aspx page, which then converts a C# object to a JSON string that is posted back to the browser and converted back to a JavaScript object without having to use window.eval().

    The resultant JavaScript object is then finally passed back to a callback function, ready to use. The code does not need third-party libraries, works in .NET Framework 2.0 and upwards, and has been tested with IE6-IE9 and Firefox, plus it’s lightweight.

    See http://www.flashinvader.com/developers_corner/simple_json_objects_using_javascript_and_aspx.html

  • http://madeforall.org nicolae

    Thank you for these very useful and elegant implementations of these functions.