Unifying Touch and Mouse with Pointer Events


I often get questions from developers like, “with so many touch-enabled phones and tablets, where do I start?” and “what is the easiest way to build for touch input?” Short answer: “It’s complex.” Surely there’s a more unified way to handle multi-touch input on the web – in modern, touch-enabled browsers or as a fallback for older browsers. In this article, I’d like to show you some browser experiments using MSPointers – an emerging multi-touch technology – and polyfills that make cross-browser support, well … less complex. It’s the kind of code you can experiment with and easily use on your own site.

First of all, many touch technologies are evolving on the web: to support all browsers, you need to look at the iOS touch event model and the W3C mouse event model in addition to MSPointers. Yet there is growing support (and willingness) to standardize. In September, Microsoft submitted MSPointers to the W3C for standardization, and the specification has already reached Last Call Working Draft status: https://www.w3.org/TR/pointerevents. The MS Open Tech team also recently released an initial Pointer Events prototype for WebKit.

The reason I experiment with MSPointers is not based on device share – it’s because Microsoft’s approach to basic input handling is quite different from what’s currently available on the web, and it deserves a look for what it could become. The difference is that developers can write to a more abstract form of input, called a “Pointer.” A Pointer can be any point of contact on the screen made by a mouse cursor, pen, finger, or multiple fingers. So you don’t waste time coding for every type of input separately.

The Concepts

We will begin by reviewing apps running inside Internet Explorer 10, which exposes the MSPointer events API, and then look at solutions to support all browsers. After that, we will see how you can take advantage of the IE10 gesture services that help you handle touch in your JavaScript code in an easy way. As Windows 8 and Windows Phone 8 share the same browser engine, the code and concepts are identical for both platforms. Moreover, everything you’ll learn about touch in this article will help you do the very same tasks in Windows Store apps built with HTML5/JS, as this is again the same engine being used.

The idea behind MSPointer is to let you address mouse, pen & touch devices via a single code base, using a pattern that matches the classical mouse events you already know. Indeed, mouse, pen & touch have some properties in common: you can move a pointer with them and you can click on an element with them, for instance. Let’s then address these scenarios via the very same piece of code! Pointers aggregate those common properties and expose them in a way similar to mouse events.

The most obvious common events are MSPointerDown, MSPointerMove & MSPointerUp, which map directly to their mouse event equivalents. You get the X & Y screen coordinates as output. You also have more specific events like MSPointerOver, MSPointerOut, MSPointerHover and MSPointerCancel.

But of course, there are also cases where you want to handle touch differently from the default mouse behavior, to provide a different UX. Moreover, thanks to multi-touch screens, you can easily let the user rotate, zoom or pan some elements. A pen/stylus can even provide pressure information a mouse can’t. The Pointer Events aggregate those differences and let you build custom code for each device’s specifics.
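To make the abstraction concrete, here is a minimal sketch of a single handler serving every input type. The helper name readPointer is mine, not part of any API, and the event shape simply mimics the fields an MSPointerEvent carries:

```javascript
// Hypothetical helper: extracts the fields common to every pointer,
// whatever the input device (mouse, pen or finger).
function readPointer(evt) {
    return {
        id: evt.pointerId, // unique per contact point
        x: evt.clientX,
        y: evt.clientY
    };
}

// In a real page you would wire it up like this (guarded so the
// sketch also runs outside a browser):
if (typeof document !== "undefined") {
    document.addEventListener("MSPointerMove", function (evt) {
        var p = readPointer(evt);
        console.log("pointer " + p.id + " at " + p.x + "," + p.y);
    }, false);
}
```

The same handler fires whether the contact comes from a mouse, a pen or a finger, which is the whole point of the model.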
Note: the following embedded samples are best tested with a touch screen (of course) on a Windows 8/RT device, or on a Windows Phone 8. Otherwise, you do have some options:

1. Get a first level of experience by using the Windows 8 Simulator that ships with the free Visual Studio 2012 Express development tools. For more details on how this works, please read this article: Using the Windows 8 Simulator & VS 2012 to debug the IE10 touch events & your responsive design.
2. Have a look at this video, which demonstrates all the samples below on a Windows 8 tablet supporting touch, pen & mouse.
3. Use a virtual cross-browser testing service like BrowserStack to test interactively if you don’t have access to a Windows 8 device. You can use BrowserStack for free for 3 months, courtesy of the Internet Explorer team on modern.IE.

Handling simple touch events

Step 1: do nothing in JS but add a line of CSS

Let’s start with the basics. You can take any of your existing JavaScript code that handles mouse events and it will just work as-is with a pen or a touch device in Internet Explorer 10. Indeed, IE10 fires mouse events as a last resort if you’re not handling Pointer Events in your code. That’s why you can “click” on a button, or on any element of any web page, using your finger, even if the developer never imagined that one day someone would do it that way. So any code registering mousedown and/or mouseup handlers will work with no modification at all. But what about mousemove? Let’s review the default behavior to answer that question. For instance, take this piece of code:
<!DOCTYPE html>
<html>
<head>
    <title>Touch article sample 1</title>
</head>
<body>
    <canvas id="drawSurface" width="400px" height="400px" style="border: 1px dashed black;">
    </canvas>
    <script>
        var canvas = document.getElementById("drawSurface");
        var context = canvas.getContext("2d");
        context.fillStyle = "rgba(0, 0, 255, 0.5)";

        canvas.addEventListener("mousemove", paint, false);

        function paint(event) {
            context.fillRect(event.clientX, event.clientY, 10, 10);
        }
    </script>
</body>
</html>
It simply draws blue 10px by 10px squares inside an HTML5 canvas element by tracking the movements of the mouse. To test it, move your mouse inside the box. If you have a touch screen, try to interact with the canvas to check the current behavior for yourself:

Default Sample : Default behavior if you do nothing

Result: only mousedown/up/click works with touch, i.e. you can only draw blue squares with touch when you tap the screen, not when you move your finger across it.

You’ll see that when you move the mouse inside the canvas element, it draws a series of blue squares. Using touch instead, it only draws a single square at the exact position where you tap the canvas element. As soon as you try to move your finger inside the canvas element, the browser tries to pan the page, as that’s the defined default behavior. You therefore need to specify that you’d like to override the browser’s default behavior and tell it to redirect touch events to your JavaScript code rather than trying to interpret them itself. For that, target the elements of your page that should no longer react to the default behavior and apply this CSS rule to them:
-ms-touch-action: auto | none | manipulation | double-tap-zoom | inherit;
Various values are available, depending on what you’d like to filter. You’ll find the values described in this article: Guidelines for Building Touch-friendly Sites. The classic use case is a map control in your page: you want to let the user pan & zoom inside the map area but keep the default behavior for the rest of the page. In that case, you apply the CSS rule -ms-touch-action: manipulation only to the HTML container exposing the map. In our case, add this block of CSS:
<style>
    #drawSurface
    {
        -ms-touch-action: none; /* Disable touch behaviors, like pan and zoom */
    }
</style>
Which now generates this result:

Step 1: just after adding -ms-touch-action: none

Result: default browser panning disabled and MouseMove works but with 1 finger only

Now, when you move your finger inside the canvas element, it behaves like a mouse pointer. That’s cool! But you will quickly ask yourself: “why does this code only track one finger?” Well, this is because we’re relying on the last thing IE10 does to provide a very basic touch experience: mapping one of your fingers to simulate a mouse. And as far as I know, we only use one mouse at a time. So one mouse means one finger maximum with this approach. How, then, do we handle multi-touch events?
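As a side note, the same override can be applied from script instead of a stylesheet. A minimal sketch (disableDefaultTouch is a hypothetical helper; the unprefixed touchAction property is the later standard name) could be:

```javascript
// Hypothetical helper: disables the browser's default touch behaviors
// (pan, zoom) on an element from JavaScript, trying the IE10 prefixed
// style property first, then the later standard one.
function disableDefaultTouch(element) {
    if ("msTouchAction" in element.style) {
        element.style.msTouchAction = "none"; // IE10 prefixed property
    }
    if ("touchAction" in element.style) {
        element.style.touchAction = "none";   // standard property
    }
    return element;
}
```

You would call it once on the canvas, e.g. `disableDefaultTouch(document.getElementById("drawSurface"))`, instead of adding the CSS rule.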

Step 2: use MSPointer Events instead of mouse events

Take any of your existing code and replace your registration to “mousedown/up/move” by “MSPointerDown/Up/Move” and your code will directly support a multi-touch experience inside IE10! For instance, in the previous sample, change this line of code:
canvas.addEventListener("mousemove", paint, false);
to this one:
canvas.addEventListener("MSPointerMove", paint, false);
And you will get this result:

Step 2: using MSPointerMove instead of mousemove

Result: multi-touch works

You can now draw as many series of squares as your screen supports touch points! Even better, the same code works for touch, mouse & pen. This means, for instance, that you can use your mouse to draw some lines at the same time as you use your fingers to draw others. If you’d like to change the behavior of your code based on the type of input, you can test the pointerType property value. For instance, let’s imagine that we want to draw 10px by 10px red squares for fingers, 5px by 5px green squares for pens and 2px by 2px blue squares for mice. You then need to replace the previous handler (the paint function) with this one:
function paint(event) {
    if (event.pointerType) {
        switch (event.pointerType) {
            case event.MSPOINTER_TYPE_TOUCH:
                // A touchscreen was used
                // Drawing in red with a square of 10
                context.fillStyle = "rgba(255, 0, 0, 0.5)";
                squaresize = 10;
                break;
            case event.MSPOINTER_TYPE_PEN:
                // A pen was used
                // Drawing in green with a square of 5
                context.fillStyle = "rgba(0, 255, 0, 0.5)";
                squaresize = 5;
                break;
            case event.MSPOINTER_TYPE_MOUSE:
                // A mouse was used
                // Drawing in blue with a square of 2
                context.fillStyle = "rgba(0, 0, 255, 0.5)";
                squaresize = 2;
                break;
        }

        context.fillRect(event.clientX, event.clientY, squaresize, squaresize);
    }
}
And you can test the result here:

Step 2b: testing pointerType to distinguish touch, pen and mouse

Result: you can change the behavior for mouse/pen/touch, but the code now works only in IE10+

If you’re lucky enough to have a device supporting all three types of input (like the Sony Duo 11, the Microsoft Surface Pro or the Samsung tablet some of you received during BUILD 2011), you will be able to see three kinds of drawing based on the input type. Great, isn’t it? Still, there is a problem with this code. It now handles all types of input properly in IE10, but doesn’t work at all in browsers that don’t support MSPointer Events, like IE9, Chrome, Firefox, Opera & Safari.

Step 3: do feature detection to provide a fallback code

As you’re probably already aware, the best approach to multi-browser support is feature detection. In our case, you need to test this:
window.navigator.msPointerEnabled
Be aware that this only tells you whether the current browser supports MSPointer; it doesn’t tell you whether touch is supported. To test for touch support, you need to check msMaxTouchPoints. In conclusion, to have code that supports MSPointer in IE10 and falls back properly to mouse events in other browsers, you need code like this:
var canvas = document.getElementById("drawSurface");
var context = canvas.getContext("2d");
context.fillStyle = "rgba(0, 0, 255, 0.5)";
if (window.navigator.msPointerEnabled) {
    // Pointer events are supported.
    canvas.addEventListener("MSPointerMove", paint, false);
}
else {
    canvas.addEventListener("mousemove", paint, false);
}

function paint(event) {
    // Default behavior for mouse on non-IE10 devices
    var squaresize = 2;
    context.fillStyle = "rgba(0, 0, 255, 0.5)";
    // Check for pointer type on IE10
    if (event.pointerType) {
        switch (event.pointerType) {
            case event.MSPOINTER_TYPE_TOUCH:
                // A touchscreen was used
                // Drawing in red with a square of 10
                context.fillStyle = "rgba(255, 0, 0, 0.5)";
                squaresize = 10;
                break;
            case event.MSPOINTER_TYPE_PEN:
                // A pen was used
                // Drawing in green with a square of 5
                context.fillStyle = "rgba(0, 255, 0, 0.5)";
                squaresize = 5;
                break;
            case event.MSPOINTER_TYPE_MOUSE:
                // A mouse was used
                // Drawing in blue with a square of 2
                context.fillStyle = "rgba(0, 0, 255, 0.5)";
                squaresize = 2;
                break;
        }
    }

    context.fillRect(event.clientX, event.clientY, squaresize, squaresize);
}
And again you can test the result here:

Sample 3: feature detecting msPointerEnabled to provide a fallback

Result: full experience in IE10 and default mouse events in other browsers
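The detection logic in the snippet above can also be isolated into small helpers. This sketch (the function names are mine) takes a navigator-like object as a parameter, which makes the logic easy to unit test; in a real page you would pass window.navigator:

```javascript
// Hypothetical helper: picks which move event to register, based on
// a navigator-like object (pass window.navigator in a real page).
function pickMoveEvent(nav) {
    if (nav.msPointerEnabled) {
        return "MSPointerMove"; // IE10 prefixed Pointer Events
    }
    return "mousemove";         // everything else falls back to mouse
}

// Touch capability is a separate question: check the maximum number
// of simultaneous contact points the device reports.
function supportsTouch(nav) {
    return (nav.msMaxTouchPoints || 0) > 0;
}
```

Usage would then be `canvas.addEventListener(pickMoveEvent(window.navigator), paint, false);`.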

Step 4: support all touch implementations

If you’d like to go even further and support all browsers & all touch implementations, you have two choices:

1. Write code that addresses both event models in parallel, as described in this article: Handling Multi-touch and Mouse Input in All Browsers
2. Just add a reference to HandJS, the awesome JavaScript polyfill library written by my friend David Catuhe, as described in his article: HandJS a polyfill for supporting pointer events on every browser

As I mentioned in the introduction of this article, Microsoft recently submitted the MSPointer Events specification to the W3C for standardization. The W3C created a new Working Group, and it has already published a Last Call Working Draft based on Microsoft’s proposal. The MS Open Tech team also recently released an initial Pointer Events prototype for WebKit that you might be interested in. While the Pointer Events specification is not yet a standard, you can already implement code that supports it by leveraging David’s polyfill, and be ready for when Pointer Events becomes a standard implemented in all modern browsers. With David’s library, the events will be propagated to MSPointer on IE10, to Touch Events on WebKit-based browsers and finally to mouse events as a last resort. It’s damn cool! Check out his article to discover and understand how it works. Note that this polyfill will also be very useful for supporting older browsers with elegant fallbacks to mouse events. To get an idea of how to use this library, please have a look at this article: Creating an universal virtual touch joystick working for all Touch models thanks to Hand.JS, which shows you how to write a virtual touch joystick using pointer events. Thanks to HandJS, it will work in IE10 on Windows 8/RT, Windows Phone 8, iPad/iPhone & Android devices with the very same code base!
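The fallback cascade such a polyfill applies can be sketched roughly like this. Note this is an illustration of the idea, not HandJS’s actual API; resolveDownEvent is a hypothetical helper, and win stands in for the browser window object:

```javascript
// Hypothetical sketch of a polyfill's cascade: prefer Pointer Events,
// then the WebKit touch model, then plain mouse events as a last resort.
function resolveDownEvent(win) {
    if (win.navigator && win.navigator.msPointerEnabled) {
        return "MSPointerDown"; // IE10 prefixed Pointer Events
    }
    if ("ontouchstart" in win) {
        return "touchstart";    // iOS/WebKit touch event model
    }
    return "mousedown";         // classic mouse fallback
}
```

A polyfill like HandJS does the same kind of resolution internally, then re-dispatches whatever native event fires as a unified pointer event to your handlers.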

Recognizing simple gestures

Now that we’ve seen how to handle multi-touch, let’s see how to recognize simple gestures like tapping or holding an element, and then some more advanced gestures like translating or scaling an element. IE10 provides an MSGesture object that’s going to help us. Note that this object is currently specific to IE10 and not part of the W3C submission. Combined with the MSCSSMatrix element (our equivalent of WebKitCSSMatrix), you’ll see that you can build very interesting multi-touch experiences in a very simple way. MSCSSMatrix represents a 4×4 homogeneous matrix that enables Document Object Model (DOM) scripting access to CSS 2-D and 3-D Transforms functionality.

But before playing with that, let’s start with the basics. The base concept is to first register an event handler for MSPointerDown. Inside the MSPointerDown handler, you choose which pointers you’d like to send to the MSGesture object to let it detect a specific gesture. It will then trigger one of these events: MSGestureTap, MSGestureHold, MSGestureStart, MSGestureChange, MSGestureEnd, MSInertiaStart. The MSGesture object takes all the pointers submitted as input, applies a gesture recognizer on top of them, and provides formatted data as output. The only thing you need to do is choose/filter which pointers should be part of the gesture (based on their ID, coordinates on the screen, whatever…). The MSGesture object does all the magic for you after that.

Sample 1: handling the hold gesture

We’re going to see how to hold an element (a simple div containing an image as a background). Once the element is held, we’ll add some corners to indicate to the user that this element is currently selected. The corners are generated by dynamically creating four divs positioned on top of each corner of the image. Finally, some CSS tricks using transformations and linear gradients in a smart way produce the selected look.

The sequence is the following:

1. Register the MSPointerDown & MSGestureHold events on the HTML element you’re interested in
2. Create an MSGesture object that targets this very same HTML element
3. Inside the MSPointerDown handler, add to the MSGesture object the various pointerIds you’d like to monitor (all of them, or a subset, based on what you’d like to achieve)
4. Inside the MSGestureHold event handler, check in the details whether the user has just started the hold gesture (MSGESTURE_FLAG_BEGIN flag). If so, add the corners. If not, remove them.

This leads to the following code:
<!DOCTYPE html>
<html>
<head>
    <title>Touch article sample 5: simple gesture handler</title>
    <link rel="stylesheet" type="text/css" href="toucharticle.css" />
    <script src="Corners.js"></script>
</head>
<body>
    <div id="myGreatPicture" class="container"></div>
    <script>
        var myGreatPic = document.getElementById("myGreatPicture");
        // Creating a new MSGesture that will monitor the myGreatPic DOM Element
        var myGreatPicAssociatedGesture = new MSGesture();
        myGreatPicAssociatedGesture.target = myGreatPic;

        // You need to first register to MSPointerDown to be able to
        // have access to more complex Gesture events
        myGreatPic.addEventListener("MSPointerDown", pointerdown, false);
        myGreatPic.addEventListener("MSGestureHold", holded, false);

        // Once pointer down raised, we're sending all pointers to the MSGesture object
        function pointerdown(event) {
            myGreatPicAssociatedGesture.addPointer(event.pointerId);
        }

        // This event will be triggered by the MSGesture object
        // based on the pointers provided during the MSPointerDown event
        function holded(event) {
            // The gesture begins, we're adding the corners
            if (event.detail === event.MSGESTURE_FLAG_BEGIN) {
                Corners.append(myGreatPic);
            }
            else {
                // The user has released his finger, the gesture ends
                // We're removing the corners
                Corners.remove(myGreatPic);
            }
        }

        // To avoid having the equivalent of the contextual  
        // "right click" menu being displayed on the MSPointerUp event, 
        // we're preventing the default behavior
        myGreatPic.addEventListener("contextmenu", function (e) {
            e.preventDefault();    // Disables system menu
        }, false);
    </script>
</body>
</html>
And here is the result. Try to just tap or mouse-click the element: nothing occurs. Touch & hold a single finger on the image, or do a long mouse click on it: the corners appear. Release your finger: the corners disappear. Touch & hold two or more fingers on the image: nothing happens, as the Hold gesture is triggered only when a single finger holds the element.

Note: the white border, the corners & the background image are set via CSS defined in toucharticle.css. Corners.js simply creates four divs (in its append function) and places them on top of the main element in each corner, with the appropriate CSS classes.

Still, there is something I’m not happy with in the current result. Once you’re holding the picture, as soon as you slightly move your finger, the MSGESTURE_FLAG_CANCEL flag is raised and caught by the handler, which removes the corners. I would prefer to remove the corners only once the user releases their finger anywhere above the picture, or as soon as they move their finger out of the box delimited by the picture. To do that, we’re going to remove the corners only on MSPointerUp or MSPointerOut. This gives this code instead:
var myGreatPic = document.getElementById("myGreatPicture");
// Creating a new MSGesture that will monitor the myGreatPic DOM Element
var myGreatPicAssociatedGesture = new MSGesture();
myGreatPicAssociatedGesture.target = myGreatPic;

// You need to first register to MSPointerDown to be able to
// have access to more complex Gesture events
myGreatPic.addEventListener("MSPointerDown", pointerdown, false);
myGreatPic.addEventListener("MSGestureHold", holded, false);
myGreatPic.addEventListener("MSPointerUp", removecorners, false);
myGreatPic.addEventListener("MSPointerOut", removecorners, false);

// Once touched, we're sending all pointers to the MSGesture object
function pointerdown(event) {
    myGreatPicAssociatedGesture.addPointer(event.pointerId);
}

// This event will be triggered by the MSGesture object
// based on the pointers provided during the MSPointerDown event
function holded(event) {
    // The gesture begins, we're adding the corners
    if (event.detail === event.MSGESTURE_FLAG_BEGIN) {
        Corners.append(myGreatPic);
    }
}

// We're removing the corners on pointer Up or Out
function removecorners(event) {
    Corners.remove(myGreatPic);
}

// To avoid having the equivalent of the contextual  
// "right click" menu being displayed on the MSPointerUp event, 
// we're preventing the default behavior
myGreatPic.addEventListener("contextmenu", function (e) {
    e.preventDefault();    // Disables system menu
}, false);
which now provides the behavior I was looking for:

Sample 2: handling scale, translation & rotation

Finally, if you want to scale, translate or rotate an element, you only need to write a very few lines of code. You first need to register the MSGestureChange event. This event exposes, via the attributes described in the MSGestureEvent object documentation (rotation, scale, translationX, translationY), the transformation currently applied to your HTML element. Even better, by default the MSGesture object provides an inertia algorithm for free. This means that you can throw the HTML element across the screen with your fingers and the animation is handled for you. Lastly, to reflect the changes sent by MSGesture, you need to move the element accordingly. The easiest way to do that is to apply a CSS transformation mapping the rotation, scale and translation details of your fingers’ gesture. For that, use the MSCSSMatrix element. In conclusion, if you’d like to add all these cool gestures to the previous samples, register the event like this:
myGreatPic.addEventListener("MSGestureChange", manipulateElement, false);
And use the following handler:
function manipulateElement(e) {
    // Uncomment the following code if you want to disable the built-in inertia 
    // provided by dynamic gesture recognition
    // if (e.detail == e.MSGESTURE_FLAG_INERTIA)
    // return;

    // Get the latest CSS transform on the element
    var m = new MSCSSMatrix(e.target.currentStyle.transform); 
    e.target.style.transform = m
    .translate(e.offsetX, e.offsetY) // Move the transform origin under the center of the gesture
    .rotate(e.rotation * 180 / Math.PI) // Apply Rotation
    .scale(e.scale) // Apply Scale
    .translate(e.translationX, e.translationY) // Apply Translation
    .translate(-e.offsetX, -e.offsetY); // Move the transform origin back
}
which gives you this final sample. Try to move and throw the image inside the black area with one or more fingers. Try also to scale or rotate the element with two or more fingers. The result is awesome, and the code is very simple, as all the complexity is handled natively by IE10.
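To see why the handler above translates to the gesture center before rotating and scaling, here is a small illustration with a hand-rolled 2-D affine matrix (this mimics the MSCSSMatrix chaining style but is not the real API): the translate/rotate/scale/translate-back sandwich keeps the center of the gesture fixed while everything rotates and scales around it.

```javascript
// A tiny 2-D affine matrix in column form:
// | a c e |
// | b d f |
function Matrix(a, b, c, d, e, f) {
    this.a = a; this.b = b; this.c = c;
    this.d = d; this.e = e; this.f = f;
}
Matrix.identity = function () { return new Matrix(1, 0, 0, 1, 0, 0); };
// Post-multiply, like CSS matrix chaining: m.translate(t) == m * T
Matrix.prototype.multiply = function (m) {
    return new Matrix(
        this.a * m.a + this.c * m.b,
        this.b * m.a + this.d * m.b,
        this.a * m.c + this.c * m.d,
        this.b * m.c + this.d * m.d,
        this.a * m.e + this.c * m.f + this.e,
        this.b * m.e + this.d * m.f + this.f
    );
};
Matrix.prototype.translate = function (tx, ty) {
    return this.multiply(new Matrix(1, 0, 0, 1, tx, ty));
};
Matrix.prototype.rotate = function (deg) {
    var r = deg * Math.PI / 180;
    return this.multiply(new Matrix(Math.cos(r), Math.sin(r),
                                    -Math.sin(r), Math.cos(r), 0, 0));
};
Matrix.prototype.scale = function (s) {
    return this.multiply(new Matrix(s, 0, 0, s, 0, 0));
};
Matrix.prototype.apply = function (x, y) {
    return { x: this.a * x + this.c * y + this.e,
             y: this.b * x + this.d * y + this.f };
};

// Rotate 90° "around" (50, 50), same sandwich as in manipulateElement:
var m = Matrix.identity().translate(50, 50).rotate(90).translate(-50, -50);
var center = m.apply(50, 50); // the center stays (numerically) at (50, 50)
```

Without the surrounding translate pair, the rotation would pivot around the element’s transform origin instead of around the fingers.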

Direct link to all samples

If you don’t have a touch-screen IE10 experience available and you’re wondering how the samples on this page work, you can have a look at each of them individually here:

– Simple touch default sample with nothing done
– Simple touch sample step 1 with CSS -ms-touch-action
– Simple touch sample step 2a with basic MSPointerMove implementation
– Simple touch sample step 2b with pointerType differentiation
– Simple touch sample step 3 with MSPointers and mouse fallback
– MSGesture sample 1: MSGestureHold handler
– MSGesture sample 1b: MSGestureHold handler
– MSGesture sample 2: MSGestureChange

Associated resources:

– W3C Pointer Events Specification
– Handling Multi-touch and Mouse Input in All Browsers: the polyfill library that should help a lot of developers in the future
– Pointer and gesture events
– Go Beyond Pan, Zoom, and Tap Using Gesture Events
– IE Test Drive Browser Surface, which greatly inspired a lot of the embedded demos
– Try some awesome games in IE10 with touch: Contre Jour (and read a very interesting Behind The Scenes article) and Atari Arcade Games (and read also this very informative article: Building Atari with CreateJS, which details the choices made to support touch on all platforms)
– Recording of the BUILD session 3-140: Touchscreen and stylus and mouse, oh my!

Logically, with all the details shared in this article and the associated links to other resources, you’re now ready to implement the MSPointer Events model in your websites & Windows Store applications. You then have the opportunity to easily enhance the experience of your users in Internet Explorer 10.

This article is part of the HTML5 tech series from the Internet Explorer team. Try out the concepts in this article with three months of free BrowserStack cross-browser testing @ http://modern.IE

Frequently Asked Questions (FAQs) on Unifying Touch and Mouse with Pointer Events

What are Pointer Events?

Pointer Events are a unified API that handles input from various devices such as a mouse, touch, or pen/stylus. They are designed to simplify the process of handling different input types in a web application. Instead of writing separate code for mouse events and touch events, you can use Pointer Events to handle both. This makes your code cleaner and easier to maintain.

How do Pointer Events differ from Mouse Events?

Mouse Events are specific to mouse input, while Pointer Events can handle input from various devices including mouse, touch, and pen. This means that with Pointer Events, you can write a single set of event handlers for all input types. Pointer Events also provide additional information such as pressure, tilt, and width/height of the contact area, which are not available with Mouse Events.

How can I use Pointer Events in my web application?

To use Pointer Events, you need to add event listeners for the specific Pointer Events you want to handle, such as ‘pointerdown’, ‘pointermove’, and ‘pointerup’. These event listeners can be added using the addEventListener method. The event object passed to the event handler provides information about the event, such as the position of the pointer and the type of input device.

Are Pointer Events supported in all browsers?

Pointer Events are supported in most modern browsers, including Chrome, Firefox, Edge, and Internet Explorer. Safari was a long-time holdout, only adding support in version 13. For browsers without native support, you can use a polyfill like PEP (Pointer Events Polyfill).

What is the benefit of using Pointer Events over Touch Events?

Touch Events are specific to touch input, while Pointer Events can handle input from various devices including touch, mouse, and pen. This means that with Pointer Events, you can write a single set of event handlers for all input types, making your code cleaner and easier to maintain. Pointer Events also provide additional information such as pressure, tilt, and width/height of the contact area, which are not available with Touch Events.

How can I detect the type of input device in a Pointer Event?

The event object passed to the event handler of a Pointer Event includes a property called ‘pointerType’. This property can be used to determine the type of input device. It can have values like ‘mouse’, ‘pen’, or ‘touch’.
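Note that IE10’s prefixed implementation reports pointerType as a number to be compared against the MSPOINTER_TYPE_* constants (touch = 2, pen = 3, mouse = 4), while the standard reports the strings above. A small normalizer, sketched here as a hypothetical helper, can bridge the two shapes:

```javascript
// Hypothetical normalizer: maps IE10's numeric pointerType values to
// the standard string form, and passes standard strings through as-is.
function normalizePointerType(pointerType) {
    switch (pointerType) {
        case 2: return "touch"; // MSPOINTER_TYPE_TOUCH
        case 3: return "pen";   // MSPOINTER_TYPE_PEN
        case 4: return "mouse"; // MSPOINTER_TYPE_MOUSE
        default: return pointerType; // already a string in standard browsers
    }
}
```

With this in place, a single `switch (normalizePointerType(event.pointerType))` covers both the prefixed and the standard implementations.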

Can I use Pointer Events to handle multi-touch input?

Yes, Pointer Events can handle multi-touch input. Each touch point is represented as a separate Pointer Event with a unique pointerId. You can use this pointerId to track individual touch points.
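A minimal sketch of such tracking (PointerTracker is a hypothetical helper, not a standard API) can keep a lookup table keyed by pointerId; each down/move/up event carries the id of the contact that fired it, so each finger can be followed independently:

```javascript
// Hypothetical multi-touch tracker: one entry per active contact point,
// keyed by the pointerId carried on every pointer event.
function PointerTracker() {
    this.active = {};
}
PointerTracker.prototype.down = function (evt) {
    this.active[evt.pointerId] = { x: evt.clientX, y: evt.clientY };
};
PointerTracker.prototype.move = function (evt) {
    if (this.active[evt.pointerId]) {
        this.active[evt.pointerId].x = evt.clientX;
        this.active[evt.pointerId].y = evt.clientY;
    }
};
PointerTracker.prototype.up = function (evt) {
    delete this.active[evt.pointerId];
};
PointerTracker.prototype.count = function () {
    return Object.keys(this.active).length;
};
```

In a page you would call tracker.down/move/up from the corresponding pointerdown/pointermove/pointerup handlers.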

How can I handle hover events with Pointer Events?

Pointer Events include two events for handling hover: ‘pointerover’ and ‘pointerout’. These events are fired when the pointer enters or leaves the bounding box of an element, similar to the ‘mouseover’ and ‘mouseout’ Mouse Events.

Can I prevent the default action of a Pointer Event?

Yes, you can prevent the default action of a Pointer Event by calling the preventDefault method on the event object. This can be useful in scenarios where you want to implement custom behavior for a specific event.

How can I test my web application’s support for Pointer Events?

You can check whether the ‘onpointerdown’, ‘onpointermove’, and ‘onpointerup’ properties exist (for example, ‘onpointerdown’ in window), or test for the window.PointerEvent constructor. If these are undefined, the browser does not natively support Pointer Events.

David Rousset

David Rousset is a Senior Program Manager at Microsoft, in charge of driving adoption of HTML5 standards. He has been a speaker at several well-known web conferences such as Paris Web, CodeMotion, ReasonsTo or jQuery UK. He’s the co-author of the Babylon.js open-source WebGL engine. Read his blog on MSDN or follow him on Twitter.
