By Kevin Yank

Google Closure: How not to write JavaScript

At the Edge of the Web conference in Perth last week I got to catch up with Dmitry Baranovskiy, the creator of the Raphaël and gRaphaël JavaScript libraries. Perhaps the most important thing these libraries do is make sophisticated vector graphics possible in Internet Explorer, where JavaScript performance is relatively poor. Dmitry, therefore, has little patience for poorly-written JavaScript like the code he found in Google’s just-released Closure Library.

Having delivered a talk on how to write your own JavaScript library (detailed notes) at the conference, Dmitry shared his thoughts on the new library over breakfast the next morning. “Just what the world needs—another sucky JavaScript library,” he said. When I asked him what made it ‘sucky’, he elaborated. “It’s a JavaScript library written by Java developers who clearly don’t get JavaScript.”

For the rest of the day, to anyone who would listen, Dmitry cited example after example of the terrible code he had found when he went digging through Closure. His biggest fear, he told me, was that people would switch from truly excellent JavaScript libraries like jQuery to Closure on the strength of the Google name.

“I’ll make you a deal,” I told him. “Send me some examples of this terrible code and I’ll publish it on SitePoint.”

The Slow Loop

From array.js, line 63:

for (var i = fromIndex; i < arr.length; i++) {

This for loop looks up the .length property of the array (arr) each time through the loop. Simply by setting a variable to store this number at the start of the loop, you can make the loop run much faster:

for (var i = fromIndex, ii = arr.length; i < ii; i++) {

Google’s developers seem to have figured this trick out later on in the same file. From array.js, line 153:

var l = arr.length;  // must be fixed during loop... see docs
for (var i = l - 1; i >= 0; --i) {

This loop is better in that it avoids a property lookup each time through the loop, but this particular for loop is so simple that it could be further simplified into a while loop, which will run much faster again:

var i = arr.length;
while (i--) {

But not all of Closure Library’s performance woes are due to poorly optimized loops. From dom.js, line 797:

switch (node.tagName) {
  case goog.dom.TagName.APPLET:
  case goog.dom.TagName.AREA:
  case goog.dom.TagName.BR:
  case goog.dom.TagName.COL:
  case goog.dom.TagName.FRAME:
  case goog.dom.TagName.HR:
  case goog.dom.TagName.IMG:
  case goog.dom.TagName.INPUT:
  case goog.dom.TagName.IFRAME:
  case goog.dom.TagName.ISINDEX:
  case goog.dom.TagName.LINK:
  case goog.dom.TagName.NOFRAMES:
  case goog.dom.TagName.NOSCRIPT:
  case goog.dom.TagName.META:
  case goog.dom.TagName.OBJECT:
  case goog.dom.TagName.PARAM:
  case goog.dom.TagName.SCRIPT:
  case goog.dom.TagName.STYLE:
    return false;
}
return true;

This kind of code is actually pretty common in Java, and will perform just fine there. In JavaScript, however, this switch statement will perform like a dog each and every time a developer checks if a particular HTML element is allowed to have children.

Experienced JavaScript developers know that it’s much quicker to create an object to encapsulate this logic:

var takesChildren = {};
takesChildren[goog.dom.TagName.APPLET] = 1;
takesChildren[goog.dom.TagName.AREA] = 1;
// ...and so on for each tag in the switch above

With that object set up, the function to check if a tag accepts children can run much quicker:

return !takesChildren[node.tagName];

This code can be further bulletproofed against outside interference using hasOwnProperty (see below for a full explanation of this).

return !takesChildren.hasOwnProperty(node.tagName);

If there’s one thing we expect from Google it’s a focus on performance. Heck, Google released its own browser, Google Chrome, primarily to take JavaScript performance to the next level!

Seeing code like this, one has to wonder if Google could have achieved the same thing by teaching its engineers to write better JavaScript code.

Six Months in a Leaky Boat

It would be unfair to suggest that Google has ignored performance in building Closure. In fact, the library provides a generic method for caching the results of functions that run slowly, but which will always return the same result for a given set of arguments. From memoize.js, line 39:

goog.memoize = function(f, opt_serializer) {
  var functionHash = goog.getHashCode(f);
  var serializer = opt_serializer || goog.memoize.simpleSerializer;
  return function() {
    // Maps the serialized list of args to the corresponding return value.
    var cache = this[goog.memoize.CACHE_PROPERTY_];
    if (!cache) {
      cache = this[goog.memoize.CACHE_PROPERTY_] = {};
    }
    var key = serializer(functionHash, arguments);
    if (!(key in cache)) {
      cache[key] = f.apply(this, arguments);
    }
    return cache[key];
  };
};

This is a clever performance trick employed in a number of major JavaScript libraries; the problem is, Google has not provided any means of limiting the size of the cache! This is fine if a cached function is only ever called with a small collection of different arguments, but this is a dangerous assumption to make in general.

Used to cache a function’s results based on, say, the coordinates of the mouse pointer, this code’s memory footprint will rapidly grow out of control, and slow the browser to a crawl.

In Dmitry’s words, “I’m not sure what this pattern is called in Java, but in JavaScript it’s called a ‘memory leak’.”

Code in a Vacuum

In his talk on building JavaScript libraries, Dmitry compared JavaScript’s global scope to a public toilet. “You can’t avoid going in there,” he said. “But try to limit your contact with surfaces when you do.”

For a general-purpose JavaScript library to be reliable, it must not only avoid interfering with any other JavaScript code that might be running alongside it, but it must also protect itself from other scripts that aren’t so polite.

From object.js, line 31:

goog.object.forEach = function(obj, f, opt_obj) {
  for (var key in obj) {, obj[key], key, obj);
  }
};

for…in loops like this one are inherently dangerous in JavaScript libraries, because you never know what other JavaScript code might be running in the page, and what it might have added to JavaScript’s standard Object.prototype.

Object.prototype is the JavaScript object that contains the properties shared by all JavaScript objects. Add a new function to Object.prototype, and every JavaScript object running in the page will have that function added to it—even if it was created beforehand! Early JavaScript libraries like Prototype made a big deal of adding all sorts of convenience features to Object.prototype.

Unfortunately, unlike the built-in properties supplied by Object.prototype, custom properties added to Object.prototype will show up in any for…in loop in the page.

In short, Closure Library cannot coexist with any JavaScript code that adds features to Object.prototype.
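A few lines are enough to see the breakage for yourself (the sneaky property name here is invented for the demonstration):

```javascript
// One stray addition to Object.prototype...
Object.prototype.sneaky = "surprise!";

// ...shows up in every loop on the page,
// even over objects created beforehand.
var config = { width: 100, height: 50 };
var keys = [];
for (var key in config) {
  keys.push(key);
}
// keys now contains "width", "height" AND "sneaky".

// Guarding with hasOwnProperty filters the intruder back out.
var ownKeys = [];
for (var key in config) {
  if (config.hasOwnProperty(key)) {
    ownKeys.push(key);
  }
}
// ownKeys contains only "width" and "height".

delete Object.prototype.sneaky; // clean up after ourselves
```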

Google could have made its code more robust by using hasOwnProperty to check each item in the for…in loop to be sure it belongs to the object itself:

goog.object.forEach = function(obj, f, opt_obj) {
  for (var key in obj) {
    if (obj.hasOwnProperty(key)) {, obj[key], key, obj);
    }
  }
};

Here’s another especially fragile bit of Closure Library. From base.js, line 677:

goog.isDef = function(val) {
  return val !== undefined;
};

This function checks whether a particular variable has a value defined. Or at least it does, unless a third-party script assigns a new value to the global undefined variable. This single line of code anywhere in the page will bring Closure Library crashing down:

var undefined = 5;

Relying on the global undefined variable is another rookie mistake for JavaScript library authors.

You might think that anyone who assigns a value to undefined deserves what they get, but the fix in this case is trivial: simply declare a local undefined variable for use within the function!

goog.isDef = function(val) {
  var undefined;
  return val !== undefined;
};

Typical Confusion

One of the most confusing aspects of JavaScript for developers coming from other languages is its system of data types. Closure Library contains plenty of bloopers that further reveal that its authors lack extensive experience with the finer points of JavaScript.

From string.js, line 97:

// We cast to String in case an argument is a Function. …
var replacement = String(arguments[i]).replace(…);

This code converts arguments[i] to a string object using the String conversion function. This is possibly the slowest way to perform such a conversion, although it would be the most obvious to many developers coming from other languages.

Much quicker is to add an empty string ("") to the value you wish to convert:

var replacement = (arguments[i] + "").replace(…);

Here’s some more string-related type confusion. From base.js, line 742:

goog.isString = function(val) {
  return typeof val == 'string';
};

JavaScript actually represents text strings in two different ways—as primitive string values, and as string objects:

var a = "I am a string!";
alert(typeof a); // Will output "string"
var b = new String("I am also a string!");
alert(typeof b); // Will output "object"

Most of the time strings are efficiently represented as primitive values (a above), but to call any of the built-in methods on a string (e.g. toLowerCase) it must first be converted to a string object (b above). JavaScript converts strings back and forth between these two representations automatically as needed. This feature is called “autoboxing”, and appears in many other languages.

Unfortunately for Google’s Java-savvy developers, Java only ever represents strings as objects. That’s my best guess for why Closure Library overlooks the second type of string in JavaScript:

var b = new String("I am also a string!");
alert(goog.isString(b)); // Will output FALSE

Here’s another example of Java-inspired type confusion. From color.js, line 633:

return [
  Math.round(factor * rgb1[0] + (1.0 - factor) * rgb2[0]),
  Math.round(factor * rgb1[1] + (1.0 - factor) * rgb2[1]),
  Math.round(factor * rgb1[2] + (1.0 - factor) * rgb2[2])
];

Those 1.0s are telling. Languages like Java represent integers (1) differently from floating point numbers (1.0). In JavaScript, however, numbers are numbers. (1 - factor) would have worked just as well.

Yet another example of JavaScript code with a whiff of Java about it can be seen in fx.js, line 465:

goog.fx.Animation.prototype.updateCoords_ = function(t) {
  this.coords = new Array(this.startPoint.length);
  for (var i = 0; i < this.startPoint.length; i++) {

See how they create an array on the second line?

this.coords = new Array(this.startPoint.length);

Although it is necessary in Java, it is entirely pointless to specify the length of an array ahead of time in JavaScript. It would make just as much sense to create a new variable for storing numbers with var i = new Number(0); instead of var i = 0;.

Rather, you can just set up an empty array and allow it to grow as you fill it in. Not only is the code shorter, but it runs faster too:

this.coords = [];

Oh, and did you spot yet another inefficient for loop in that function?

API Design

If all the low-level code quality nitpicks above don’t convince you, I defy you to try using some of the APIs Google has built into Closure Library.

Closure’s graphics classes, for example, are modeled around the HTML5 canvas API, which is about what you’d expect from a JavaScript API designed by an HTML standards body. In short, it’s repetitive, inefficient, and downright unpleasant to code against.

As the author of Raphaël and gRaphaël, Dmitry has plenty of experience designing usable JavaScript APIs. If you want to grasp the full horror of the canvas API (and by extension, Closure’s graphics API), check out the audio and slides from Dmitry’s Web Directions South 2009 talk on the subject.

Google’s Responsibility to Code Quality

By this point I hope you’re convinced that Closure Library is not a shining example of the best JavaScript code the Web has to offer. If you’re looking for that, might I recommend more established players like jQuery?

But you might be thinking “So what? Google can release crappy code if it wants to—nobody’s forcing you to use it.” And if this were a personal project released by some googler on the side under his or her own name, I’d agree with you, but Google has endorsed Closure Library by stamping it with the Google brand.

The truth is, developers will switch to Closure because it bears the Google name, and that’s the real tragedy here. Like it or not, Google is a trusted name in the development community, and it has a responsibility to that community to do a little homework before deciding a library like Closure deserves public exposure.

  • Paul Annesley

    You ought to reformat this blog post as a patch and submit it :)

    On one hand, it’s great that they’ve open sourced it, warts and all. On the other hand, it’s true that their powerful brand demands greater responsibility than from most.

  • PCSpectra

    I couldn’t agree more, thank you for this article. I hate to admit but I “might” have switched from jQuery to Closure strictly because of the Google brand. One would assume everything from Google is as good as it gets, but clearly this is not the case.


  • Ben

    “You ought to reformat this blog post as a patch and submit it :)

    On one hand, it’s great that they’ve open sourced it, warts and all. On the other hand, it’s true that their powerful brand demands greater responsibility than from most.”

    Being that it’s open-source — yes, this should be a patch. Instead of using it as an opportunity to dis someone else’s just-released-to-the-public library based on a few examples.

    There’s some real meat to the above and it’s worth discussing — or, again, patching. But railing against them checking “undefined”? That’s just petty. Let them save a line of code at the expense of someone who felt the need to redeclare a global property.

  • Cog

    The problem here is not Google’s implementation of a JavaScript library, but the baroque nature of JavaScript itself. It’s a language foisted on developers due to the prevalence of the browser as UI, not by any virtues of the language itself.

  • Anonymous

    “Rather, you can just set up an empty array and allow it to grow as you fill it in. Not only is the code shorter, but it runs faster too.”

    How is that possible? It undoubtedly uses some sort of amortized doubling technique for growing the array. That’s quite fast… But I don’t see how it could be faster than just initializing the array to the correct size from the start. Unless perhaps it always grows in powers of 2 and so allocating the memory is faster… At a certain point though, initializing it at the start has got to overtake it…

  • Mark

    I don’t understand why you sit here and complain like a child instead of fixing these errors up and submitting them back into it so everyone can benefit. Closed source software is never going to be perfect because the company needs it to work and spends time adding features, not performance tuning it. Thus it will still have some of these issues when open sourcing it, but then others will come along and fix it, helping everyone.

  • david

    Google is full of shmucks.

  • Andy Kish

    It’s easy to nitpick style and find places to apply micro-optimizations, but where’s the beef?

    I agree that low level style issues tend to indicate larger issues, but that’s only touched on in the short blurb on the canvas API–and there’s no further explanation! “…it’s repetitive, inefficient, and downright unpleasant to code against.” How? Explain! Give examples!

  • Josh

    Not only is

    var arr = new Array(5);

    less efficient then

    var arr = [];

    but new Array(5) actually fills the first 5 members of the array with undefined. In other words,

    var arr = new Array(5);

    gives you

    var arr = [undefined, undefined, undefined, undefined, undefined];

  • krdr

    Google bears responsibility for the quality of their products. We all expect that from MS, Apple, IBM… The bigger the company is, the bigger is their responsibility to publish quality products. Yeah, Closure is open-source, but it drives a number of Google sites. Shaggy experience of some Google services is caused by Closure, for sure. It is not fair that a multi-billion business relies on the work of independent developers to fix their code. Maybe Closure is written in Java, then recompiled by GWT.

  • Leo

    Good points, I admit. I learnt something from you.

    However, you could increase your credibility by fixing a JavaScript error on this page (shows up in IE 7 which I’m currently being forced to use):

    Line: 19
    Char: 5
    Error: Unknown Runtime Error
    Code: 0

  • Aaron Brethorst

    What exactly does ‘performs like a dog’ even mean? Don’t get me wrong, Dmitry’s graphics libraries are awesome, and I love using them, but qualitative measurements of performance aren’t particularly meaningful. Real benchmarks on real browsers would be far more interesting to me than a snarky examination of loop construction.

  • Anonymous

    “This for loop looks up the .length property of the array (arr) each time through the loop. Simply by setting a variable to store this number at the start of the loop, you can make the loop run much faster.”

    It seems that most JavaScript interpreters and compilers are junk, and can’t figure a basic optimization out. Does it mean that developers have to write shitty, explicit and verbose code? If performance is key, yes, but in the long term, a better JavaScript engine will yield more benefits. I’m assuming that’s the Google approach.

  • Dan

    I see nothing wrong with bringing these issues up in a public forum. I think this is far more valuable to the community than simply submitting a patch to Google. This post is probably going to be seen by hundreds if not thousands of developers, who can learn something from these mistakes. Only submitting a patch would be seen by far fewer people — probably mostly Google’s employees — so the benefits aren’t as great.

    I do a lot of open source development, and one of my primary motivations is to get peer review of my code so I can further improve my craft. You develop a thick skin pretty quickly, so I can’t see Google’s developers being anything but happy to see this discussed in public.

  • seablue

    This is becoming typical of many Google products. A bunch of Java engineers, without focus and a clear understanding of the problem domain and best practices, pushing out yet another half-baked product.

    When all you have is a hammer, every problem looks like a nail.

  • Julien Royer

    Why would String(arguments[i]) be slower than arguments[i] + ""? The former is a built-in conversion function, the latter is a hack… Have you got a benchmark?

  • Andreas

    I would have liked to see some numbers of how much your proposed changes improve performance instead of just bashing on some missed micro-optimizations, e.g. how much faster is that while (i--) than the equivalent for-loop really?

    I admit, I prefer easily understandable code to code that runs 0.1ms faster when the moon turns red.

  • Andreas

    PS: You may find this rebuttal on ycombinator interesting.

  • ll

    Dmitry seems to be unaware of how modern JS JITs work when he points at the supposed performance bugs: if a loop is really small enough to be slowed by such an access, it’s probably traceable.

    Furthermore, these optimizations only matter when they’re the bottleneck — that sort of thing is revealed after usage; optimizing them beforehand is one of the worst things you can do when designing the initial lib. In reality, typically well over 90% of your CPU time is in native C++ code. Except for hotpaths, focusing on Dmitry-style optimizations mostly just makes your code illegible.

    Finally.. I was under the impression this is just the public release and it was already being used internally (and therefore potentially in deployed code). If so… its utility has already been demonstrated.

  • Petr Stanicek

    Nice article, thanks. Rather than a Google criticism I would understand it as a useful list of possible leaks a Javascript programmer should be aware of. Thanks.

    Anyway, the loop performance is not as big a deal as you present. Of course, it’s nicer and more professional to preassign the array length instead of testing it again in every step. But the performance is not such a problem. The JavaScript basic operations are very fast, making the difference quite unnoticeable. Those two loops’ performance will differ only in milliseconds for a million-iteration loop on today’s average computer. This means the same loop traversing an array containing a million items will take about 15 ms if the array.length is tested every time, and about 10 ms if the length is preassigned. So, we are talking about saving nanoseconds per thousand-iteration loop. This makes no sense from the performance point of view.

    Additionally, the while loop as shown above is in fact slower than the for loop. Yes, it’s faster than the for loop with array.length testing, but slower than a simple for loop. It can be verified by a simple test.

  • will

    @Cog: Couldn’t agree more.

    It’s funny to see this kind of complaint since JavaScript itself is the root cause of “inefficient” code. This is gonna show up again and again and again until JavaScript itself gets _fixed_.

  • GS

    Incidentally, Java does support primitive string types via autoboxing, and has done for years.

  • Matteo Caprari

    Google indeed has a (perceived) responsibility to release quality code.

    There is also a developer responsibility to know what he is using, and why. This is especially true if said developer is nitpicky about performance and implementation details.

    Also, I think it’s misleading to suggest jQuery as an alternative to Google’s Closure. I love jQuery as much as the next guy, but it’s not the right tool to implement and maintain a ‘complex’ app (say gmail-like). More ‘structured’ libs like YUI, Closure and ExtJS are better tools for that job.


  • Joe

    Good catches!

    And I don’t agree with the comments of some colleagues above: Nobody is obliged to contribute. If Dmitry were obliged to patch all the shitty code out there, he’d never have time to work on his own projects again.

    It’s the project starter/maintainer who sets the standards. I don’t say Closure is bad, but the original author obviously lacks some experience, knowledge or time. Or all of it.

  • Matthew W

    Although he spots some genuine issues, on the whole this seems to be a bunch of relatively minor (and slightly snobby) style nitpicks.

    I was hoping for a nuanced critique of their API design.

  • Dawn

    Kevin, (and Mark) running just a couple of the library’s files through JSLint will give you some idea of the quantity of the problems. A bit more than just a patch required here.

  • danieljames

    He’s wrong about switch statements. A switch statement with integer cases (remembering that the Closure compiler replaces constants with their values) performs well. It’s also clearer code, and doesn’t risk problems with members added to the object prototype. Maybe he could try learning something from other languages instead of just being snooty about them?

  • Kristoffer Lawson

    To be honest, some of these seem like issues that possibly could be corrected by optimising and tweaking the implementation of Javascript itself. The faster code is not always clearer and I prefer to keep code clear and concise and to work on techniques which make it run faster behind the scenes.

    When the complaint was made about how Google wasn’t using JS in a way that understood the language, I was hoping for a more philosophical argument. JavaScript is an incredibly flexible and powerful language, but almost nobody uses that power in interesting ways: objects tend to be designed in a very typical class-based Java-like manner.

  • danieljames

    Oh, and it looks like presizing an array is marginally faster on some browsers.

  • Anomynomly

    I’m pretty sure the compiler will optimize the loops.

  • jdolan

    You’ve got the premature optimization bug! Did you do any profiling to validate these optimizations you identified? Anyone who attempts to optimize a body of code without actually profiling it to find hot spots doesn’t know what they’re doing, sorry. I’d be absolutely astounded if some of these “performance enhancements” you’ve called out can actually even be seen in a profiler or benchmark.

    Here’s an example of how to actually optimize a body of code. Hint: it’s usually a bit more involved than changing the style of a loop.

  • Joe Blow

    I really would like to see a patch containing all your suggestions and a performance benchmark. I doubt it will make a big difference for a regular web application. Most of this stuff you are complaining about is just personal preference. It is not all about performance; you may want to create a framework that is easy to maintain instead of using obscure JavaScript features to get an insignificant performance improvement.

  • combray

    When most people get excited about Closure, they aren’t talking about the JavaScript library. The main thing that is cool about it is the optimizing compiler. The third part, the templating language which can render the same templates on the server in Java and on the client in JavaScript, is also pretty cool. While it sounds like the JavaScript code of the library can be cleaned up (again, the library is not the main point, not what’s even especially interesting about the project), this is by far the most sophisticated JavaScript tool-chain that we’ve seen.

  • Paul Smith

    I found some of your claims questionable, so I cherry-picked one to test for myself.

    “This code converts arguments[i] to a string object using the String conversion function. This is possibly the slowest way to perform such a conversion, although it would be the most obvious to many developers coming from other languages.

    Much quicker is to add an empty string (“”) to the value you wish to convert.”

    I compared the time it took to run 100,000 of each of String(fn) and fn + "", and found that the results were more ambiguous and less definitive than your assertion. In fact, fn + "" was 6.6% faster on Firefox 3.5, but 15.8% slower on a recent Chromium build. In any case, the difference was, even over 100,000 runs, a few milliseconds, so it hardly seems like this particular case would be an area to focus on for optimization.

    I wonder how many of your other strong claims would stand up to similar testing.

    My test code and results:

  • Matt Todd

    Hey, great writeup! I’m a competent JavaScript developer, but definitely learned some things here. Thanks for enumerating these issues and explaining them so simply. I will definitely refer to this in the future!


  • wonder

    Yeah and when you shorten function names and strip all whitespaces and newlines it even loads faster…….. sigh

  • mikes

    Most of your complaints about Google Closure are about micro-optimizations, and your suggestions would in most cases make the code more ugly. Since when did it become good coding practice to prioritize micro-optimizations over clean code?

    The problem here is that the most commonly used JavaScript implementations are terribly inefficient, and thus in need of those silly micro-optimizations. Similar micro-optimizations have been unnecessary in Java and other serious programming languages for at least 10 years due to reasonably efficient compilers and/or JIT-powered interpreters.

    Google is actually trying to do something about this with its Chrome browser and V8 Javascript engine.

  • Mark T. Tomczak

    Assuming, of course, that the accusation is true, I think it’s obvious why the author hasn’t submitted these issues as bugs to be resolved in the library: he has no vested interest in using the library, as he’s already happy with jQuery. But of course, that doesn’t stop those who are interested in improving Closure’s quality from submitting these issues.

  • Adam

    I love your for(var i = l - 1; ...) (spoken: for var eye equals ell minus one), please don’t do that again.

  • Arnold

    I seem to remember them saying something like Closure is beta-quality and not ready for production…

  • Getify

    @Cog — that has to be one of the most close-minded statements I’ve ever heard. You clearly have no clue what JS offers to suggest that it’s only around because of the browser, and that it’s been “foisted” unwillingly upon otherwise pure developers. Why are you even commenting on a JS related blog post with that kind of delusion?


    Let’s all remember that Google has reportedly been working on (at least parts of) Closure for a long time, maybe even 3 years. This is not a quick little library they threw out to the open-source community for help. They positioned it as their best effort and christened it with Google brand AND said “oh, btw, we use it on a bunch of our sites/apps”. This means they are representing it as a pretty strong option (not perfect, but certainly not amateur/alpha).

    I think we’re all dancing around something kind of obvious. This JavaScript code LOOKS so much like Java because it IS Java. Google simply used some undisclosed tool to convert their Java to JS.

  • John Farrell

    Great article.

    I love how every piece of text on the internet which criticizes something inevitably has a comment criticizing that criticism as being unhelpful or whiny.

  • Mojo

    “I recommend more established players like jQuery?”

    Why not just use the father of Closure? Dojo

  • anon

    A brilliant list of some of the gotchas you can fall into unless you really get to know javascript. This shouldn’t just be folded into the code base and is not a whinge. The more these articles are published the better the quality JS will eventually be.

  • Sean

    Many of these points are so minor, it hardly matters. Honestly, optimizing a for loop by caching the length is minuscule. The only benefit would be when you’re getting the length from node lists, since they’re “live” lists, and calculate length every time.

    Plus, if we’re talking about performance, why in the world would you suggest jQuery as a shining star? Pop open its source, and you’ll find similar warts. More shocking is that the designers of jQuery work mostly in JavaScript, not Java.

  • Chevalric

    Can we please note that prototype.js doesn’t extend the Object.prototype anymore, and hasn’t done so for several years now. The way it’s written in this article suggests this is still the case.

    Having said that, I agree with the previous posters that complaining about this seems a bit “meh” when you could just apply those fixes and create a patch, helping them out. But I do agree with the notion that a company like Google should put out better code than this.

  • John Reeves

    Maybe Dmitry Baranovskiy writes a mean JS library, but this article is full of myths and nit picky complaints.

    for(i=0; i < someArray.length; i++)...
    // vs
    for(i=0; i < len; i++)...

    These are nearly *Identical* in performance. Even if the loop does nothing, so that the time spent in the loop is all spent incrementing i and checking its value, it is STILL hard to detect the performance difference. And if you do anything of substance in this loop, the time spent checking against someArray.length is dwarfed by the loop body. This is a stupid complaint.

    Also, the same goes for the difference between x = String(foo) and x = foo + "". They are nearly the same because, regardless of whether the String function is called, foo still needs to be converted, which can be time consuming.

    The complaints about the for/in loop are valid, but this is likely just a bug! You know, like all code has. More eyes on the code finds more of the bugs. So like others have said, fix the problem instead of just complaining.

    I love Javascript and I don’t like seeing Javascript written like Java, because prototype inheritance and closures are awesome. But complaining just to complain doesn’t help anyone.
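For readers who would rather measure than argue the point above, here is a rough, Node-runnable sketch of the two loop styles being compared. The array size and variable names are arbitrary, and the timings are illustrative only: results vary enormously by engine, and modern JITs often hoist the `.length` lookup themselves.

```javascript
// Rough timing of an uncached .length lookup vs caching it in a local.
// Treat the numbers as a sketch, not a verdict.
var arr = [];
for (var i = 0; i < 1000000; i++) arr.push(i);

var t0 = Date.now();
var sum1 = 0;
for (var j = 0; j < arr.length; j++) sum1 += arr[j]; // .length read each pass
var uncached = Date.now() - t0;

t0 = Date.now();
var sum2 = 0;
for (var k = 0, len = arr.length; k < len; k++) sum2 += arr[k]; // cached
var cached = Date.now() - t0;

console.log('uncached: ' + uncached + 'ms, cached: ' + cached + 'ms');
```

Both loops do identical work; only the condition differs, which is exactly why the difference tends to vanish inside any loop with a non-trivial body.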

  • Evan

    This, like Google Go, is a bit underwhelming. The weight of the Google name should mean more. Right now “Google” means a smart person worked on it. If it were done right, “Google” would mean it revolutionized the way we do things. This seems like more of the same.

  • Bob

    How is that possible? It undoubtedly uses some sort of amortized doubling technique for growing the array. That’s quite fast… But I don’t see how it could be faster than just initializing the array to the correct size from the start.

    As it turns out, arrays in Javascript are just objects with numbers for member names. And objects are hashtables.

  • thathaloguy

    It’s faster to create an empty array than to create a pre-sized array, but if you’re going to fill it up anyways, you’re probably (marginally) better off pre-sizing it, like Google.

    To me, it seems like this article brings up a lot of micro-optimizations that likely have a negligible effect on performance, or no effect at all, and complains about them instead of fixing them.

    If you found a major security hole, or you found errors, that’s something to complain about.
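A minimal, Node-runnable sketch of the two allocation strategies being debated in this thread (the size `N` and the fill values are arbitrary). It only demonstrates that both approaches end up with identical contents; which one is faster is entirely engine-dependent and should be measured, not assumed.

```javascript
// Pre-sized array vs. an array grown with push().
// Functionally the results are identical; performance depends on the engine.
var N = 100000;

var presized = new Array(N);          // length allocated up front
for (var i = 0; i < N; i++) presized[i] = i * 2;

var grown = [];                       // starts empty, grows as needed
for (var j = 0; j < N; j++) grown.push(j * 2);

console.log(presized.length, grown.length);    // 100000 100000
console.log(presized[N - 1] === grown[N - 1]); // true
```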

  • uhh

    You criticize Closure for not doing hasOwnProperty checks, and claim jQuery to be “excellent”, though it fails just as violently in the presence of augmented Object.prototype stuff. I’d say most libraries currently skip the hasOwnProperty check, as it is hugely expensive and 9/10 times unneeded.
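To make that trade-off concrete, here is a small sketch of what an augmented `Object.prototype` does to a bare `for-in` loop, and the `hasOwnProperty` guard that the comment above says most libraries skip for speed (the property name `extra` is made up for illustration):

```javascript
// What an augmented Object.prototype does to a bare for-in loop.
Object.prototype.extra = 'surprise'; // the kind of augmentation at issue

var obj = { a: 1, b: 2 };

var bare = [];
for (var key in obj) bare.push(key);           // picks up 'extra' too

var guarded = [];
for (var k in obj) {
  if (obj.hasOwnProperty(k)) guarded.push(k);  // own properties only
}

console.log(bare.sort());    // [ 'a', 'b', 'extra' ]
console.log(guarded.sort()); // [ 'a', 'b' ]

delete Object.prototype.extra; // undo the damage
```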

  • acidboy

    Someone said, “Incidentally, Java does support primitive string types via autoboxing, and has done for years.”

    For the record, this is completely false. Strings have always been represented as objects in Java. Forever. Perhaps you are confusing string interning with the concept of primitives.

  • onerob

    “Since when did it became a good coding practice to prioritize micro-optimizations over clean code?”

    Who cares about clean code in a third-party library that you don’t need to pry into?

  • Gytis

    Ok, so could anyone provide me a link to a library that is
    1) So modular
    2) Has a consistent approach
    3) Has unit tests for its code, and an API browser
    4) Has so many advanced functions (e.g. BrowserChannel)

    jQuery is a superb library… but only for a fraction of things. It is good for DOM manipulation, effects, and stuff. But if you go into real enterprise development, and want maintainable code and OO principles, it sucks at that. It provides no “guidelines”, no “path to follow” for how to build large JS applications.

    I have been using jQuery in my project for a long time, and the more code is added, the harder it gets. The Yahoo YUI video presentations and slides help. Recently I’ve plugged in low-pro for jQuery – it helps a lot in separating pieces and building it like Lego, just the way it is supposed to be.

    This is a nice article, but let’s not forget some facts:
    1) jQuery was first released in January 2006. It has come a long way since.
    2) Maybe jQuery code quality is superb… until we get down to plugins (and we must, because there is not much functionality other than DOM/Ajax). Then the nightmare begins… :)

    What Google is missing is an extensive wiki and documentation. But let’s give it a chance, aside from bashing it for performance reasons, which I’m sure will be fixed. And… maybe Google is going the other way – writing a good JavaScript engine that does its own optimizations, so that you are not required to write unreadable JS code for it to be optimal :)

    It’s nice to hear that there is such a buzz around Google Closure; hopefully Google will hear the good thoughts :)

  • John Reeves

    Who cares about clean code in a third-party library that you don’t need to pry into?

    I’ll tell you who cares. The people who expect updates and bug fixes to come fast and reliably. Clean code is way more important than micro optimizations even if the average user of the library won’t read the code.

  • Wyatt

    Isn’t the proper way to check for an undefined var `typeof v === ‘undefined’;`? Defining `var undefined;` in a local scope seems like a hack.

    • Defining ‘var undefined;’ in a local scope is a safety measure to assure that the variable is actually ‘undefined’, as ‘undefined’ could have been redefined in the global scope.
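A small sketch of the point made in this exchange: why the `typeof` test is robust, and what a local `var undefined;` (or an assignment to it) actually does. The function name `check` is made up for illustration; note that in pre-ES5 engines even the *global* `undefined` was writable, which is what the article's `var undefined = 5;` example exploits.

```javascript
// `typeof` never consults a shadowed `undefined` binding,
// so it stays correct even when someone has redefined the name.
function check(v) {
  var undefined = 5; // a legal local shadow, even in modern engines
  var naive = (v === undefined);           // true only when v === 5!
  var robust = (typeof v === 'undefined'); // unaffected by the shadow
  return { naive: naive, robust: robust };
}

console.log(check(5));      // { naive: true, robust: false }
console.log(check(void 0)); // { naive: false, robust: true }
```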

  • sherod

    If the arcane rules remarked upon here are truly necessary in order to write or use JavaScript for the average developer, then this language needs to die.

    This post is a rant about how people don’t understand how crap JavaScript is and how many hacks you need to know to get it to behave.

    No wonder it keeps getting abstracted away in library after library.

    Why is this ‘platform’ around… because it’s ‘everywhere’? Is that the measure of how things are successful? *Sigh*

  • The world doesn’t really need another javascript library.

  • Ren

    At least it has a seemingly complete unit test suite, so various issues can be refactored with some confidence.

    Far better to be accurate than fast, though whether the so-called optimizations provide any meaningful improvement is in serious doubt.

    Reads more like Google has peed on someone’s parade than a serious article on Closure.

  • Google always has many different ideas, but some look nice.
    I like this code style!

  • SamGoody

    Saying that the above should be applied as a patch misses the point.

    He never said that these are the only problems with the library – he said these are examples. Dmitry has no reason to spend weeks going through the code only to have the patch rejected. If someone else on Sitepoint wishes to, they surely could.

  • Name

    Thanks a lot, through your criticism I’ve learned a lot, and I don’t consider myself a beginner.

  • DaveC

    Really… I expect better of a site as large and as popular as SitePoint…
    Do Kevin or Dmitry really think that Google (who use this library for Gmail, Docs, etc.) wouldn’t have spent a significant amount of time “profiling” the code the compiler generates and optimising any bottlenecks?
    As many have pointed out, micro-optimising code in isolation (because you read it somewhere) without profiling/testing is just plain stupid.

    I can only hope that this rubbish appeared on Sitepoint to do exactly what it appears to have done – create lots of interest/traffic – and that it wasn’t intended as an informative article.

  • Pete B

    I think you’ll find that this Closure library makes the same kind of assumptions that most of the JavaScript libraries do.

    For the sake of code size and performance, most libraries will assume you don’t use the String constructor (because it’s pretty pointless), overwrite the undefined variable or mess with the core Object prototype.

    There is only so much you can shelter your library’s users from the incompetence of other code. I mean, if someone wanted to they could destroy all the native methods and you wouldn’t be able to protect against it:

    for (var meth in Array.prototype) {
        Array.prototype[meth] = null;
    }

  • Anonymous

    Don’t be sad because nobody uses your library, Dmitry. This sort of nitpicking just makes you sound like a sore loser. And as many people have already mentioned, I very much doubt that implementing your suggestions would make any real difference to execution time. It’s easy to pick holes in other people’s code.


  • Anonymous

    “This single line of code anywhere in the page will bring Closure Library crashing down:”

    var undefined = 5;

    Well, this single line of code will bring *your* library crashing down:

    Raphael = null;

    Don’t really see what you are trying to achieve here, unless it’s just to garner some publicity for your own library with strawman arguments.

  • TJ Holowaychuk

    I’m not impressed with the library. I could do much better myself.

  • rv0 soft

    Thank you very much for this article.. Seriously, I don’t consider myself a novice javascript programmer, but there were some real eye-openers in your article.
    In fact, I’m gonna be optimising loops this weekend because of this. I feel a bit like “how come nobody told me this earlier” on some of the points you made. Thanks, I bow in respect!

  • jsnoob

    This was enlightening for me, I wouldn’t have learned these valuable javascript tricks without reading this blog. And Dmitry’s work looks outstanding.

  • Graham Bradley

    Some fair points here, although I don’t agree with your problems with Object.prototype or undefined. Anyone writing or including JS that modifies the Object prototype or predefined globals deserves to get those errors.

    Oh, and Wyatt:

    No wonder it keeps getting abstracted away in library after library.

    JS libraries are about abstracting other layers away, such as the DOM – not the language itself. Some flaws aside, JS is a fantastic lightweight, flexible language. Most of the complaints on this page either deal with micro-optimisation or really are simple things that a JS developer – Google employee or not – should be aware of.

  • Juan Mendes

    Sorry I didn’t read all the posts (so I’m probably repeating something). I think all the points you make are valid. They seem to need better code reviews. I initially loved the article, when it showed some carelessness like the for loop checking the length every time and adding props to Object.prototype. However, I got tired when you started nit-picking.

    As a Googler, though, I would suggest you offer this as a patch (like many others here). I was not considering moving to this library myself; I already love Ext. I write web apps (as opposed to dynamic websites), and Closure is not a framework to support components, widgets, layouts. But I do plan to use their templating tool instead of Ext’s XTemplates, since the generation of the templates happens on the server.

  • Anonymous

    I am not a sufficiently strong programmer to fairly judge the technical criticisms of Closure Library or their rebuttals. Nevertheless, I’ve decided to use Closure in my project, for the following reasons (amongst others):
    – Google is doing a great deal to improve JavaScript performance across the board: participating in Web standards; releasing a browser whose primary advantage is speed; including a compiler with Closure.
    – Google has been eating this dogfood for some time
    – I am confident that if there should emerge demonstrable performance improvements, these will be included in subsequent releases of Closure
    – The structure that Closure enforces is intuitive to me

    On the paranoid side, what concerns me is whether all this might constitute, for Google, the middle E in Microsoft’s classic Embrace, Extend, Extinguish strategy.

  • Fredrik

    Oh dear… Dmitry & co have only embarrassed themselves with this article. For loops? 1.0?! C’mon… I dare you to read this post in a couple of months’ time without blushing… ;)

  • Thanks for bringing Closure into my line of sight. I’ve been using jQuery for heavy lifting for a while and have no intention of stopping, but I’m going to look at Closure to see what I can do with it. The great thing about these libraries is that they just get better over time.

    Kev, maybe you didn’t intend it but I don’t understand the Google bashing attitude of the article and some of the comments. It’s not the style I expect.

  • Felix Pleșoianu

    “In short, it’s repetitive, inefficient, and downright unpleasant to code against.”

    Dunno about the Closure graphics API, but I’m using <canvas> and I fail to see these problems. Sure, it could be better, but it’s definitely usable as it is. And I’d like to see a 2D graphics API that’s significantly different in concept.

  • netsi1964

    Very interesting article! I did one test related to the conversion to string under “Typical Confusion” and found that I cannot prove that conversion to string by adding an empty string is faster…

    var a = new Date();
    var c = function() {};
    for (var i = 0; i < 100000; i++) {
        var b = c + '';
    }
    alert(new Date() - a);
    RESULT: 129 ms

    var a = new Date();
    var c = function() {};
    for (var i = 0; i < 100000; i++) {
        var b = String(c);
    }
    alert(new Date() - a);
    RESULT: 110 ms.
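netsi1964's test above uses `alert()`, so it only runs in a browser. For anyone who wants to re-run it, here is a roughly equivalent Node-runnable version. The timings are illustrative and engine-dependent; the one thing that *is* certain is that the two conversions produce identical strings.

```javascript
// Node-runnable rerun of the c + '' vs String(c) comparison.
var c = function() {};

var t0 = Date.now();
var viaConcat;
for (var i = 0; i < 100000; i++) {
  viaConcat = c + '';        // string conversion via concatenation
}
var concatMs = Date.now() - t0;

t0 = Date.now();
var viaString;
for (var j = 0; j < 100000; j++) {
  viaString = String(c);     // string conversion via the String function
}
var stringMs = Date.now() - t0;

console.log("c + '': " + concatMs + 'ms, String(c): ' + stringMs + 'ms');
console.log(viaConcat === viaString); // true: same resulting string
```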

  • Anonymous

    I’m ashamed to admit it but I’m guilty of slow looping! Thanks for that, we are all listening!

  • chasbeen

    Guilty of the “slow loop”. Thanks.

  • Nate

    He lost me at: “His biggest fear, he told me, was that people would switch from truly excellent JavaScript libraries like jQuery”.

    Seriously, has he even bothered to look at the jQuery source code? You want to talk about a place in need of serious revamp, take a look at the jQuery constructor. Every time you call $(…) it reinstantiates itself multiple times, creating a glut of jQuery objects that are ultimately not used.

    So while Dmitry is nitpicking minor details, the “excellent library” that he holds up in comparison is riddled with areas that have real performance tradeoffs.

    Not only that, but it’s written to mask JavaScript, to make it easier to use – thereby keeping people from “getting” JavaScript.
    And in the end, who cares? For all of its flaws, it just goes to show you that poorly written code and imperfect libraries can get the job done and help solve people’s problems.
    The problem with this, and it’s been mentioned above, is that Dmitry doesn’t bother to back up any of these claims with quantifiable tests. Sure it sounds good to criticize (and it’s super easy), but let’s do the same to one of his examples of “better” code.

    He claims that doing String(arguments[i]) is real slow performance-wise and they should instead do: arguments[i]+””.
    Let me put on my Dmitry hat for a minute and say obviously he doesn’t get javascript OR how it’s implemented by the most popular engine, JScript in IE.
    IE will assign a block of memory to every string created, and he’s recreating a new string every time he converts his variable.
    It’s soooo much faster to instead do the standard unary operator and instead do: (+arguments[i]) since that avoids creating an unneeded memory allocation.
    That sounds all good (and is technically accurate), but does it pan out for this case? I have no idea, I didn’t bother testing it. The point is that it’s really easy to throw around theoretical nitpicking about edge cases without actually investing any time into seeing if it’s accurate.

    And I’m sure Dmitry has heard the expression “Premature optimization is the root of all evil”. Yet he has micro-optimized where there was no proven need for it, claims it is superior because it’s different, and then tries to slam Google engineers over it.
    Seems a bit self-serving to try to tear down a project using ad-hominem attacks on the authors, especially when those attacks have so little to bear on the important aspects of a library.

    It’s not hard to find micro-optimized code around that on the macro level runs like a dog and is hard to maintain. It’s a pattern that shows up with very junior developers who want to impress the world (or their boss) with their cleverness and optimizations. These optimizations usually show up in the form of saving space (“Look, I saved 0.56k by only using ternary operators and no curly braces”) or in the form of supposed speed enhancements (“When you instantiate this variable 3,000,000 times it runs 40ms quicker, and I only spent 4 hours setting up and running the tests in all of the environments”).

    Just a word for anyone who reads this and is enticed by the promise of these micro-optimizations, there is an expression: “Make it run, make it right, make it fast”.
    Do that, in that exact order, and you’ll be fine. If you change the order of those priorities, you’ll pay for it.

    And, no, I don’t work for Google, nor do I use Closure, nor am I a Java developer. It has some interesting ideas and lots to absorb, but what gets my goat is Dmitry’s snobbish superiority to ideas and methods different from his own (especially when there is plenty to criticize in what he advocates).

  • In my opinion the usage of browsersniffing instead of feature detection is the biggest flaw in Closure.

  • jqueeery

    Open source? Truly Google is not as open source as they make everyone want to believe. Just look at the whole Cyanogen cease and desist thing. Why doesn’t Google hire some excellent programmers rather than releasing badly coded software in hopes that the open source community will fix it for them?

  • bobince

    Yeah, OK, the loops are not quite optimal. Not ideal, but a trivial change to make and very very little real-world impact. But complaining about isString not detecting the String wrapper almost no-one knows exists, and is most likely to be a mistake if ever actually used? Really, not a big deal.
    And ‘undefined’ might have been redefined? OK, I personally would use the ‘in’ operator in preference to anything involving ‘undefined’, but this *is* JavaScript, you can redefine anything at all. Must Google avoid using ‘String’ or ‘Math’ too, just because someone might have shadowed them? Absurd.
    Whilst I don’t doubt that when I start reading the Closure source I will find stuff I really don’t like, a lot of this article is just staggeringly petty. There is much, *much* worse than anything talked about here in all the currently-popular JS frameworks. I mean, look at jQuery’s abuses of regex-over-HTML and the hilarious brokenness of its ‘remove’.

  • Larry Battle

    The Google programmers need to read “Object-Oriented JavaScript” by Stoyan Stefanov. He’s a Yahoo web developer and does a damn good job of teaching JavaScript.
    Well, at least they tried. jQuery 1.4 should be out by Jan 2010. Can’t waiittt…

    – Larry

    Simple javascript quiz “Here”

  • Jose Fernandez

    There’s nothing in this article that would put me off using Closure. It’s valid to complain about a library because of its lack of features or complexity or poor overall performance, not because the last 2% of optimizations haven’t been done. Optimizations are easy compared to writing a useful interface.

  • Simon

    @ Gytis:

    Possibly the least *cool* library out there. But it is the answer to your question, and has been for a good two or three years.

  • Paul Smith

    It’s hard to believe that this garbage was put up. I hope that these bad arguments don’t prevent anyone from using the closure tools. The Closure Compiler and Closure Templates are great ideas (maybe not completely original, but potentially useful anyway). The Closure js library isn’t perfect, but the points in this article are bogus and should not dissuade someone from using it. Is Dmitry jealous? Afraid his libraries will become obsolete? I don’t know…

    The Slow Loop
    – This is not going to be what makes your js application run slowly. It’s people that aren’t conscious of performance and don’t do performance testing. Most likely it’ll be poor usage of the dom or bad high-level algorithms.

    Six Months in a Leaky Boat
    – Same thing. Anyone who’s thinking about using goog.memoize enough to cause memory issues should probably review the code and most likely write something that’s more efficient in their specific use-case.

    Code in a Vacuum
    – This is the most ridiculous point yet. I can do any number of stupid, arbitrary things that will completely break any javascript library. Redefining undefined? The only reasons anyone would do that are: trying to break stuff, stupidity, accident.
    – There are some common modifications to the global namespace. When using a js library that does modify object’s prototype, you need to be aware of it. There may be incompatibilities between google closure and other libraries because of this. It’s not really google closure’s fault that the other libraries muck with the global namespace. If it really offends you, submit a patch.

    Typical Confusion
    – Don’t use the js String. It’s pointless.
    – Of all of the nitpicks, the 1.0 thing is the worst.

    API Design
    – I agree with google’s design. Even the playing field with a standard api. If there are other libraries built on the HTML5 canvas API, then they can run on top of that.

    Maybe I’ve been trolled and just wasted my time… No, I don’t work for google, but I’d hate to see bad reasoning mar a js library.

  • loba

    A dense array (using the Array constructor and specifying the size) is internally optimized and faster. So:

    var x = [] is much slower than var x = new Array();

  • Ralph Holzmann

    Couldn’t your fourth example better protect variable scope by doing something like:

    for (var i = arr.length; i--;) {
        // Do stuff
    }

    This would encapsulate the i variable and bind it to the scope of the loop.

    You can leave the last argument of a for loop blank ;)

  • ZenPsycho

    “Closure’s graphics classes, for example, are modeled around the HTML5 canvas API, which is about what you’d expect from a JavaScript API designed by an HTML standards body. In short, it’s repetitive, inefficient, and downright unpleasant to code against.”
    How easily we forget that the Canvas API was designed by Apple, not a standards body. Though “designed” here is a bit of a strong word. The reason it looks kind of weird is because it’s a simple JavaScript wrapper around Quartz 2D, which in turn is a graphics API designed to be compatible with Display PostScript, which in turn is a stack-based programming language with a graphics model based around a global state object. In essence, using the canvas API is a little like calling out to some other programming language, with its own hidden global variables and state. Despite its awkwardness though, it’s a design that has withstood the test of time, lasting over 25 years. It’s a design that holds a particularly huge advantage that I see only Google has taken advantage of: it’s supremely easy to implement the canvas API as a front end to ANY kind of graphics drawing API.
    The excanvas library is only one instance of this, but nobody has yet realised, I see, that you could write a context for generating PDFs, SVG, or PS. Nobody I see but myself has noticed that it would be nearly trivial to implement the Canvas API outside of the browser, and build a really fast game engine using OpenGL as a back end, with programming in JavaScript. Nobody has noticed that we could tap the vast resources built up around PostScript and port it 1:1 over to JavaScript with little effort (since the API calls are all exactly the same) – for instance by writing a canvas backend to the Ghostscript project. If someone did that, we’d have an efficient way of displaying PDFs on the web without plugins.
    But no, I guess we’d all rather whine about how ugly it looks.

  • ZenPsycho

    “This would encapsulate the i variable and bind it to the scope of the loop.”

    JavaScript doesn’t have block scope. Only function scope.
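A two-line illustration of this point (the function name `demo` is made up for the example): a `var` declared in a `for` loop is hoisted to the enclosing function, so the counter is still visible after the loop's closing brace.

```javascript
// `var` has function scope, not block scope: the loop counter
// survives past the loop.
function demo() {
  for (var i = 0; i < 3; i++) {
    // loop body
  }
  return i; // still in scope here
}

console.log(demo()); // 3
```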

  • digital-ether

    I’m in disbelief. Why don’t we just do away with all the constructs in modern day programming and start writing machine code. For goodness sake, the for construct is there for a purpose. Why would you worry about using for() over while() or any other construct which does not affect the overall performance of your code.

    If you’re a web developer and start thinking that you should optimize every single construct in your code, you obviously have not learned about the dreaded “premature optimization”. What you are doing is essentially sanding your chair while it is still a tree. Everyone else will have a rough but usable chair while you’re still sanding away wasting sandpaper and time. In the end you have a very crooked piece of wood to sit on. It may look smooth up close, but overall it’s a piece of shit that hurts your ass.

  • Nice to read an article like this for a change, though jQuery also shows the signs of having been written by people whose first language is not JavaScript. For those who are arguing over the merits of the optimisations that are discussed, the point of these was mainly to point out that the library has clearly been written by people who are not experts in JavaScript – and that is rather worrying from as large a group as Google.

  • @omnicity: It seems to me that this is an article damning closure based on micro-optimizations… The optimizations are not JavaScript oriented. Any code Java, JavaScript, C#, PHP, ASP, etc… can benefit from the adjustments to slow looping or optimizing the use of strings but this won’t make or break a library and as pointed out by some comments, what works well in one browser’s JavaScript engine, doesn’t necessarily work in all.

    I suggest that rather than slagging Closure because Google’s all “big and bad”, enlightened developers take it for a spin. The source is available via SVN. It includes unit testing and all sorts of documentation. More is available at Google Tools. I don’t know if I’ll use it in place of jQuery, but it won’t hurt to see what it can offer.

  • Kelzer

    I’ve always hated the idea of writing highly optimized (and typically far less readable and obvious) code as a workaround for poor performance of a language or compiled code. This has been the norm for decades. In the 80’s Intel assembly coders were writing XOR AX,AX instead of the more obvious MOV AX,0 just to save a couple of clock ticks. Twenty five years later and we’re still writing less readable code rather than fixing the root cause of performance problems.

    Sorry, but this:

    var replacement = String(arguments[i]).replace(…);

    is far more obvious than:

    var replacement = (arguments[i] + “”).replace(…);

    If it’s less efficient, then fix the freaking JavaScript engine(s)!

    Oh, by the way, when the 80286 came out, the MOV and XOR operations both ran in 1 clock tick, so all the less readable XOR code no longer had any performance advantage over the more obvious MOV instruction.

    • Keef

      These examples produce different results. You must have missed the part about “String” and “string” being different types in JavaScript.

      Most web programmers don’t have the option of fixing or even choosing JavaScript engines.

  • Daniel

    Like a few who have commented, I think this premature optimisation is bored nitpicking at best. Yeah, the guy knows his stuff, but really, some actual data would have been the best, and most respectable, way to do this.

  • Pacoup

    Obviously Google will not turn a blind eye to this, and although their developers might be going “OMG JavaScript really *****” (I mean, this + “” is faster than String(this)? Ugh), clearly they will fix their code based on the really great comments from the community.

    If they don’t, it’d be almost as if they were purposely bringing down slower browsers like Internet Explorer with slow JavaScript in order to get market share for Chrome.

  • Mike

    Surely any tests of optimisations should be made against a very slow JavaScript browser… like IE6.

    The tests people made here in Firefox or Chromium are bound to show negligible differences, because the JavaScript engines in these browsers optimise the code for you. They are designed to find items like slow for loops and compile them in an optimal way. The browser compensates for bad coding, if you like… which doesn’t mean it wasn’t bad coding.

    If the optimisations make minimal difference in IE6, then fair enough – they’re a waste of time (the point people made about hot paths and premature optimisation). But if they make a big difference in IE6, they are potentially well worth doing, since those browsers are so slow already and don’t do this same smart compiling!

    Isn’t one of the reasons people use a JavaScript library to make canvas features available on older browsers that have poorer support for such technologies (as well as making things easier to code)? The library writers should bear this in mind, and make the slowest browsers run faster if they can…

  • Justen

    While these are entirely fair criticisms (and I have a huge amount of respect for Dmitry Baranovskiy, besides) most of them come down to performance issues, not capability issues. To be honest, very few framework users *understand*, let alone care about, how fast a for loop executes. It is important to some of us in some circumstances – an instance that comes to mind for me is a simulation I wrote using Raphael (:D).

    In standard cases however javascript performance issues are trumped by quality and quantity of support (plugins, community), ease of use, and personal preference. Performance is something that improves with iteration – compare any library today with its ancestors.

    That said, I’ll be sticking with jQuery (or whatever my client is already using). Thanks to Dmitry for his insight. :)

  • Justen

    @ZenPsycho: if canvas API is an ugly pain in the ass to use, in what way are all these “cool things” better than equivalent functionality in something that is not an ugly pain in the ass to use? Maybe a facade is in order. Don’t get me wrong, I’m not personally invested in the argument, I’m just not convinced on yours alone.

  • ZenPsycho

    @Justen, I don’t feel that the canvas API *is* “a pain in the ass to use” – only ugly. The C programming language is ugly AND it’s a pain in the ass to use, and so is JavaScript itself in a lot of ways. Yet here they are, two of the most used programming languages on the planet. How could that possibly be? The features I pointed out trump beauty, because it’s trivial to make a “pretty” front end to it if that’s what you want, but it’s not trivial to go the other way round and get a “pretty” API to have the advantages and features I pointed out. New and pretty APIs are not easy to create, not easy to implement, do not have a time-proven design, and do not have years of training and experience and resources available to them. They are not quite so trivial to just implement on top of Cairo (another PostScript work-alike graphics library for C; since Firefox already used Cairo, the canvas tag showed up nearly instantaneously in Firefox after Apple added it to Safari).
    Just as an example, I can’t create a clipping path, or even a clipping rectangle in Raphael, and yet this is a standard part of any Postscript based api. It would not be possible to draw a lot of PDFs using raphael. (to be fair, I don’t think excanvas supports it either, but I’d have to double check)
    Aside from that, Postscript evolved over the years to encompass a reasonably large set of requirements. When you throw away something like that, and decide to start from scratch, you’re effectively dedicating yourself to a new 10 year long effort to perfect a new api. Why not actually learn from the past instead of dooming yourself to repeat it?
    Have a look at some postscript files, like, for example, an illustrator v9.0 file. Straight at the head of most, you will find a bunch of function definitions, which are designed at making the rest of the file more compact, and more like a data format than a scripting language. This practice started out as a timesaver in handwritten postscript files, and evolved into a kind of postscript standard library.
    What canvas has inherited is the base postscript api, not the “libraries”. We don’t yet know what kind of standard graphics library really suits javascript, and it’s impossible to design something like that in advance of actually trialing it in the big bad wide world over the course of many years.
    jQuery wasn’t the first javascript library, it wasn’t the last, and it wasn’t created by Brendan Eich as a core part of javascript. When jQuery was first designed, its creators did not really know that it was going to be extremely popular, or that it was going to succeed at all, really. That happened a long time after it was designed and made, and there was no guarantee that it was going to happen at all. Making the next jQuery, but for graphics, is not something that you can just decide to do.
    Being beautiful and popular is not what core standardised apis are for. They are there to just work, using designs that have been proven to work in the past, not experimental new “pretty” designs whose weaknesses haven’t been discovered yet, like Raphael and its non-existent clipping paths. Standards bodies should not try to innovate. When they try, we get crap like CSS3, and XHTML2, and the DOM api. The canvas api is a supermodel compared to var temp = document.getElementById("myelement"); temp.parentNode.removeChild(temp); At least we know that the api canvas is based on is capable of rendering the Mac OS X desktop without breaking a sweat. And it gives all Mac OS X applications PDF output and printing virtually for free (just the Cocoa equivalent of canvasel.getContext("print")). I think that’s a genius design, personally. Utterly beautiful. But maybe you’re just interested in putting pie charts on websites.

  • ZenPsycho

    It kind of looks like I’ve contradicted myself up there in the first paragraph. What I meant was, if you just want to make a pretty front end to canvas for your own usage, go for it, it’s easy. On the other hand, making that pretty front end good enough for a vast number of people on a variety of platforms, trying to do a myriad of different things from simple pie charts, to realtime video filtering, 3d rendering, document display, interactive interfaces, etc… then getting all the browser vendors to agree on it… Well, that’s actually pretty hard, and it’s not something you want to start from scratch if you can help it. You have to be able to prove to a lot of people that it’s a great idea, and that’s really hard if it’s something nobody’s ever seen before, and there’s no evidence that it works.

  • chill

    Don’t you have a database to optimize or something? There is only one reason anyone would spend this much time and effort publicly denouncing code they could just fix. (OS code is always eventually the best because a whole lot of people pore over the source and find and fix all of these little things until they’re gone, not because of some inherent superiority of the project initiators, remember.) The reason must be that the writer works for Microsoft or their ilk. If I’m wrong and he’s not getting paid but just doing this to soothe his fragile ego, then I just wonder why mommy didn’t love him.

  • Sameeer

    Hello, I have realized lately that most of Google’s sites, including Groups, YouTube, et al., crash my browsers: Opera and IE 8. I am quite sure Google is not writing proper JS code these days, just to push its Chrome. I find slow pages on Google sites very annoying. I never see this happening on any other website, even if I load a 5MB Flex app or a 1.5MB Ext JS framework page.

    Is anyone else facing this issue?

  • langsor

    I see valid arguments in both perspectives…
    Mainly that experienced JavaScript programmers should know how to write optimized code from their previous experience (e.g. use a static length variable on ‘for’ loops, as when working with DOM structures; of course, when working with dynamic data, sometimes you must re-evaluate the length), so that these so-called “nit picks” are actually very telling and could point to more serious issues upon perusal of the actual code base. On the other hand, we are always students of the ever-fluctuating programming language arts, or we are dinosaurs within a year.
    Also that it is important to have readable code, above all else, especially when working with code libs, and anyone who doesn’t know why this is so has no right posting on this topic. For edification purposes, I’ll end with this thought, obviously not my own.
    “Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it.” — Brian Kernighan

  • Sreekanth Choudry

    Too Good:

    “This single line of code anywhere in the page will bring Closure Library crashing down:”

    var undefined = 5;

    Well, this single line of code will bring *your* library crashing down:

    Raphael = null;
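    Both jabs land for the same reason: in pre-ES5 engines, global identifiers like `undefined` (and any library global, like `Raphael`) are plain writable properties. A minimal sketch, not from either library, of the two classic defenses against a redefined `undefined`:

```javascript
// Defense 1: typeof never consults the global `undefined` at all.
function isUndefined(value) {
  return typeof value === "undefined";
}

// Defense 2: `void 0` always evaluates to the real undefined value,
// regardless of what the global `undefined` has been reassigned to.
// (A third classic trick, not shown, is an IIFE with an unfilled
// parameter named `undefined`.)
function isReallyUndefined(value) {
  return value === void 0;
}

console.log(isUndefined(void 0));      // true
console.log(isReallyUndefined(5));     // false
```

    Neither check can be broken by `var undefined = 5;`, which is why well-defended libraries favor them over a bare `value === undefined` comparison.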

  • Fabrice

    The first example (loop optimization) is inappropriate for two reasons:

    1) Don’t optimize early
    2) They have their own JavaScript compiler (read COMPILER, not minifier) which can optimize these loops very easily.

    Why on earth would you optimize early when tools can do that for you? You are making the job of the compiler more difficult by obfuscating simple code, and making it harder for other programmers to read it.

  • Juan Mendes

    Here’s a for loop that doesn’t use length, so it’s fast. It’s also got some syntactic sugar: you don’t need an extra line to assign the value being iterated. It’s not suitable for arrays that contain 0s, booleans, nulls, undefined, or empty strings, but it’s very useful for most cases.

    var list = ["dog", "cat", "bull", "frog"];
    for (var i = 0, animal; animal = list[i]; i++) {
      // Here you can use the animal variable without the need for a separate assignment
      console.log("Animal #", i, animal);
    }
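    The caveat Juan mentions is easy to demonstrate: the loop condition tests truthiness, so it stops at the first falsy element rather than at the end of the array. A small sketch of the pitfall:

```javascript
// The truthiness condition halts the loop at the first falsy element,
// not at the end of the array. Here the empty string at index 2 ends
// the loop early, so "frog" is never visited.
var list = ["dog", "cat", "", "frog"];
var seen = [];
for (var i = 0, animal; animal = list[i]; i++) {
  seen.push(animal);
}
console.log(seen); // ["dog", "cat"]
```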

  • Gary Mawdsley

    Maybe I’m reading too much into the name Google have given it, but it strikes me that Google could be trying to treat JavaScript as a DSL, and clearly from the numerous comments the pattern has some teething troubles. Assuming my guess is correct, it would be a nice idea if it worked. And while we are on the subject, why not enshrine the principles of jQuery in the next release of the language itself? Or maybe Google could do that via its DSL pattern, if that’s the intention.

  • saunders0

    For a hacker like me JavaScript is the easiest language to write. In the past, when 16KB was a lot of memory, I played with several machine codes, Assemblers, FORTRAN, Cobol, RPG, Basic, REXX etc. Later, ‘C’, Perl and Java proved to be beyond my comprehension.

    My initial reaction to this article was to give up trying to write my own rubbish JavaScript. Having read the above comments I will happily use Closure, YUI or whatever; the code works and it is better than I could write. I manage only seven, fully tested, lines per day.

    JavaScript is (too?) flexible; a hacker can easily write working code, whilst the expert may produce concise, ‘OO’, bomb-proof code, just as readable as ‘C’. The libraries/frameworks are for users like me, not for programmers who knock out hundreds of lines of tested code per hour ;-).

    Client-side code is unimportant when we are expecting most computing to be server-side. It is just a stop-gap solution that compensates for the inadequacy of the browsers; it is not worth the effort to fully optimise client-side code.

    Dmitry’s Raphaël is nice, but even I, a mere hacker, can do images, graphs etc. server-side with PHP.


  • Fabrice

    Another bad example:
    “(…) Java-inspired type confusion. From color.js, line 633:”

    A simple 1 min test on the Closure Compiler service ( ) will show you that the compiler turns those 1.0s into 1s; even better, it will rewrite the expressions.


    var f = 4 - 1.0;

    compiles to:

    var f = 3;

    and:

    var f = foobar + 1.0;

    compiles to:

    var f = foobar + 1;

    I love Dmitry’s work and have made good use of Raphael myself. But perhaps he spoke a little too soon; maybe he just didn’t like the Java-inspired code style. Nevertheless, the article completely overlooks the fact that Google provided their own JavaScript COMPILER.

  • Anonymous

    Thanks for the insightful review of Closure Library code quality.

  • Merf

    The article is short on actual mistakes, but long on optimizations-that-ought-to-be-handled-by-the-engine and this-looks-like-Java-go-back-to-your-own-language.

  • Davide Zanotti

    Dmitry Baranovskiy is a JavaScript genius, but he (and most people here) doesn’t understand that the power of Closure is not represented by its algorithms but instead by the EXCELLENT framework which allows us to write JavaScript as never before: really object-oriented, type-checked, organized, clear, and under-control code! We can avoid dead code and a lot of errors by using the compiler and the Python script provided! Please, read my article on insideRIA:

    …I’m not saying Closure is better than jQuery or other libraries, the point is that it’s unique and its goal is not a fast DOM selector or the latest cool animation effect, but something more ambitious and concrete!

  • David Mark

    It’s crap. But then so is jQuery. See the recent write-ups on CLJ. There’s a lot more wrong with it than inefficient and unfiltered loops.

    And no, JS is not inherently broken. The specific JS in these scripts is though.

  • jabbaugh

    The first statement regarding using the Array.length property is false. I have benchmarked both the array.length and cached-variable methods and found no increase in speed. I did the test using 20,000 iterations in the for loop. To view the test and run it for yourself go to

  • langsor

    Not to be a poop, but using 20,000 iterations referencing only seven nodes of an HTML document via node-collection.length, vs. pre-referencing that same array, makes an observable difference in the numeric data (though not a subjective one). This is observable both in Firefox 3.5.5 and IE 7 (I haven’t bothered with 8 yet, or any other).

    You can look for yourself here: benchmark

    Check out the source, it’s very simple.

    In most situations, however, this difference would be negligible at best.

  • jabbaugh

    I ran your test multiple times and I got the same results as my test. Sometimes using the Array.length property was slower and sometimes it was faster (using Firefox 3.5.5). In my test I display the results from running the code 10 times. On my initial benchmark test I ran the test 100 times, which still showed no significant difference from the results of 10 tests.

    I do agree with you that the difference is negligible. This contradicts the author’s statement that this method “[makes] the loop run much faster.”

  • JS Expert.

    Hmm, referencing .length on an HTML DOM collection (e.g. one returned from getElementsByTagName) is very slow. Painfully so. You don’t want to do it in a loop. Because HTML DOM collections are ‘live’, referencing .length will go away and inspect the DOM every time; you could even be doing DOM manipulation in your loop (nasty, usually).

    If the ‘array’ in your loop is not an HTML DOM collection, the performance difference is negligible, and readability might be more important. Otherwise the performance difference is significant. Perhaps you would never worry if your documents were small, but a library has to be scalable, right?
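    The cost of a live collection can be made visible without a browser. Below is a hedged sketch using a made-up object (not a real DOM type) whose .length is recomputed on every access, the way getElementsByTagName results behave; the counter shows how often the "DOM" gets inspected:

```javascript
// Stand-in for a live DOM collection: .length is recomputed on
// every read, and we count the reads.
var lengthReads = 0;
var liveList = {
  items: ["a", "b", "c", "d"],
  get length() { lengthReads++; return this.items.length; }
};

// Naive loop: .length is read on every iteration, plus the final check.
for (var i = 0; i < liveList.length; i++) { /* work */ }
var naiveReads = lengthReads;   // 5 reads for a 4-item list

// Cached loop, as the article recommends: a single read up front.
lengthReads = 0;
for (var j = 0, n = liveList.length; j < n; j++) { /* work */ }
var cachedReads = lengthReads;  // 1 read

console.log(naiveReads, cachedReads); // 5 1
```

    With a plain array the saved lookups are cheap, but when each .length read re-inspects the live DOM, caching it is the difference between O(1) and O(n) collection inspections per loop.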

  • tomg

    the switch statement is not slow:

  • Anonymous

    the while loop is actually the slowest:

  • tang also

    I think this is a brilliant and funny article, but it seems so hard for me, as a Chinese reader, to read…

  • Some people stick to their mantras and do not like any change (namely if they have learned something for years, and are now obsolete experts).

    Change is inevitable. It hurts, but it is and always will be here. For me, Closure is the first library that could change my opinion on JavaScript as a whole.

  • Anonymous

    To me this is way too much over-analysis, of issues that should not bother 95–99% of developers using Closure.
    Screaming-fast performance is useless to developers without flexibility, maintainability, supportability, and an abundance of features. Bigger-picture stuff. Not everyone wants to ride a two-wheeler that goes 300 MPH.
    I’m also not convinced that these minor transgressions will have an overwhelming impact on Closure’s usability, or that these types of misses are restricted only to Closure. I would like to take a look at MS Windows code someday, and that hasn’t stopped it from being the most successful piece of software ever.

    Finally, I’m sure the Closure department at Google will continue to optimize it over iterations.
    My biggest concern with Google is that they choose to dump or stop supporting products without much warning. That’s why I’m leery of adopting their stuff. It doesn’t seem like this will happen with Closure, though.

  • philip georgiev

    Coming from AS3 and Obj-C, JavaScript is a very disappointing language. Is there any way to make a function that checks if a variable is null/undefined etc.? Using typeof === "undefined" is not very good…
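    There is a common idiom for exactly this. A small helper of the kind Philip asks for (the name `isNil` is my own): the loose comparison `value == null` is true for exactly null and undefined, and nothing else.

```javascript
// `value == null` matches only null and undefined; other falsy
// values (0, "", false, NaN) do not loosely equal null.
function isNil(value) {
  return value == null;
}

console.log(isNil(null));      // true
console.log(isNil(void 0));    // true
console.log(isNil(0));         // false
console.log(isNil(""));        // false
```

    This is one of the few places where loose equality is arguably clearer than the strict `value === null || value === undefined` spelled out in full.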

  • bindesh singh

    It’s been one month since I moved into web games from native-code-based games. I am not hating JS, but it’s really a joke to learn multi-threading and high-performance coding in a native language like C/C++, and then come to HTML5 gaming and get comments from JS experts that this or that coding method is poor and slow :p.

    What I have noticed is that performance is mostly not affected by these loops until they cross a huge number of elements, as happens in emulator development, where we have to handle 100s of opcodes in switches and if-elses. In practice, 99.0% of the performance loss comes from drawing bitmaps, communications (HTTP requests), DOM changes, and other data handlers.

    But every optimization should be welcome, because JS is a scripting language, 10-to-100 times slower than native, and even a single statement requires a lot of native code to run underneath. However, optimizing post-processors are there to handle such situations.
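    The switch-heavy opcode dispatch bindesh describes is often rewritten as a lookup table of handler functions, which keeps dispatch cost flat no matter how many opcodes there are. A hedged sketch, with made-up opcode numbers and handlers rather than any real instruction set:

```javascript
// Dispatch via an array indexed by opcode instead of a giant switch.
var handlers = [];
handlers[0x01] = function (cpu) { cpu.a += 1; }; // INC A (hypothetical)
handlers[0x02] = function (cpu) { cpu.a -= 1; }; // DEC A (hypothetical)
handlers[0x03] = function (cpu) { cpu.a = 0;  }; // CLR A (hypothetical)

function step(cpu, opcode) {
  var fn = handlers[opcode];
  if (fn) fn(cpu); // unknown opcodes fall through as no-ops here
}

var cpu = { a: 5 };
step(cpu, 0x01);
step(cpu, 0x01);
step(cpu, 0x02);
console.log(cpu.a); // 6
```

    Whether this beats a switch depends on the engine, which is exactly why, as bindesh says, measuring the real bottlenecks (drawing, I/O, DOM) should come before micro-optimizing the dispatch.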
