Interview: Cameron Adams & DrawPad

    Paul Bridgestock

    At the Ignite Sydney event earlier this year, attendees were asked to use an iPad app to draw “What inspires you?”. Cameron Adams, aka The Man in Blue, is the developer of the DrawPad iPad app and has kindly agreed to talk to BuildMobile about it.

    Hello Cameron, for the benefit of the few people out there who don’t already know you and your work, could you please tell us who you are and what you do?

    Hi Paul,

    I’ve been running my website, The Man in Blue, for a little under 10 years now, and for most of that time I’ve been a freelance designer & coder. I normally find it hard to decide whether I’m more of a designer or a coder, so I’ve carved out a niche for myself by working on projects that combine the two into unique experiences – things you couldn’t achieve with a designer or a coder alone.

    This has also helped me stay at the forefront of web technologies, and I was lucky enough to ride the incoming wave of JavaScript popularity around 2005, ending up writing two books on JavaScript and contributing to a couple of CSS ones as well.

    Today, I work in whatever technology is going to help me realise my imagination the best, whether that’s JavaScript, Flash, Processing, or something more emergent, like WebGL.

    Sounds like a cool niche. So which side of this niche played the greater role in DrawPad, designer or coder? Although we could watch the video or read your post, perhaps you could answer that question by also telling us what DrawPad is, exactly?

    DrawPad was definitely more from the coding side, as it drew on some existing projects created by other people, so the idea was already fairly well formed. It was mainly a matter of coding towards the outcome and using my aesthetic judgement to guide me along the way. That, plus the always-tight deadlines on these sorts of projects, means you just have to code hell for leather to get it ready on the night.

    Ever since seeing the work by the Graffiti Analysis team, I’d wanted to create a version with more readily available technologies – i.e., something you could run in a web browser – and see how people would use it at a live event. The form factor of the iPad was also attractive: people could reach out and touch the “canvas” and draw with their fingers, so it’s easy to get people to interact with it at a nightclub/bar-type venue.

    I’ve got a good relationship with the people who run Ignite Sydney, and they use my Visible Tweets application to display realtime tweets throughout the night via a projector. After talking to the organisers and deciding we wanted to get the crowd even more involved on the night, it seemed like the perfect venue for me to deploy DrawPad, so I got cracking on it about two weeks before the event.

    There are really two parts to DrawPad – the drawing interface and the display interface. For the drawing interface we basically had two really friendly people wandering through the crowd with iPads, talking to people and getting them to draw what inspires them (as Ignite is quite an inspiring night). The data from the iPads was fed to a web server, and a laptop connected to a projector took that 2D drawing data and turned it into a funky-looking 3D visualisation, with the Z-axis being time. This way you get a representation of what people drew, but extruded over time. When you move a virtual camera through this 3D object you get some nice abstract shapes that resolve into what people drew.
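
    To make that data feed concrete, here’s a minimal sketch of how a finished stroke might be shipped from an iPad to the server. The /strokes endpoint, the payload shape, and the function name are assumptions for illustration, not DrawPad’s actual API.

        // Hypothetical sketch: serialise a finished stroke as JSON and POST it
        // to the web server that the display laptop reads from. The endpoint
        // and payload shape are assumptions, not DrawPad's real API.
        function sendStroke(stroke) {
          var xhr = new XMLHttpRequest();
          xhr.open('POST', '/strokes', true); // assumed endpoint
          xhr.setRequestHeader('Content-Type', 'application/json');
          // Each point carries its coordinates plus a timestamp, so the display
          // side can later use elapsed time as the third dimension.
          xhr.send(JSON.stringify({ points: stroke.points }));
        }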

    In short, it’s a great way to get people involved and have some eye candy on the wall at a big event.

    Wow, lots of inspirational things to go and look at there, thanks. Diving into the code a little deeper, then: most of the action goes down in the draw.js file, which contains the touchStart, touchMove and touchEnd methods. Can you tell us a little about how it all hangs together?

    Sure. Pretty much the only HTML element on the page is a <canvas> that fills the entire screen. I track people’s finger movements across the page and then use those co-ordinates to draw a path.
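
    As a rough sketch of that setup – a single full-screen <canvas> with the three touch listeners attached – it might look something like this. The variable and handler names are illustrative, not lifted from draw.js.

        // A minimal sketch of the setup described above; names are
        // illustrative rather than DrawPad's actual code.
        var canvas = document.querySelector('canvas');
        var ctx = canvas.getContext('2d');

        // Size the canvas to fill the screen
        canvas.width = window.innerWidth;
        canvas.height = window.innerHeight;

        canvas.addEventListener('touchstart', touchStart, false);
        canvas.addEventListener('touchmove', touchMove, false);
        canvas.addEventListener('touchend', touchEnd, false);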

    As the user places their finger on the screen, the canvas element receives a touchStart event. This is the signal for DrawPad to start a new path. What’s most interesting about touch events in a browser is that normally we’re used to receiving an event that represents one point of interaction – say, a mouse click or a keystroke – but with multi-touch devices like the iPad there’s actually an infinite number of points that the user could be interacting with (well, 20, if you count fingers and toes), and the event object in touchStart/touchMove/touchEnd contains information for all of them.

    That means you have to be careful about which component of the touch event you’re dealing with. You don’t want to get them mixed up and start thinking that someone’s little finger is their index finger. A few canvas drawing apps that I’ve seen for iPad just discard any touches after the first one, so you don’t get a true multi-touch drawing experience. I wanted to make sure that DrawPad gave you that.

    In the event object passed to touchStart there is an event.touches property that lists all the currently tracked touch points (as an array-like list). If you want to handle multi-touch you have to iterate through that list and handle each point individually. In the touchStart handler I do just this, creating a new path object for every new touch point that I see. Luckily, each touch point has a unique identifier attached to it, so as that point/finger moves around you can keep track of it and not get it confused with any other points.
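
    A minimal sketch of that touchStart logic, assuming a paths object keyed by touch identifier (again, the names are mine, not draw.js’s):

        // One path per finger: paths is keyed by each touch's unique identifier.
        var paths = {};

        function touchStart(event) {
          event.preventDefault(); // stop the page from scrolling or zooming
          for (var i = 0; i < event.touches.length; i++) {
            var touch = event.touches[i];
            // Only start a new path for identifiers we haven't seen before
            if (!paths[touch.identifier]) {
              paths[touch.identifier] = [
                { x: touch.pageX, y: touch.pageY, t: Date.now() }
              ];
            }
          }
        }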

    Once the start of the path has been created I then listen out for any touchMove events. When those are triggered I check the identifier of the touch and the new location of the point, then add it to the appropriate path.

    Again, for touchEnd, I identify the touch and then close the path for it. One thing you have to look out for with touchEnd is that event.touches will contain no data (because the touch has finished) so you instead have to look in event.changedTouches.
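
    Continuing the same sketch, the move and end handlers might look like this, with touchEnd reading event.changedTouches as described. The ctx and paths variables, and the sendStroke hand-off, come from the earlier hypothetical snippets.

        function touchMove(event) {
          event.preventDefault();
          for (var i = 0; i < event.changedTouches.length; i++) {
            var touch = event.changedTouches[i];
            var path = paths[touch.identifier];
            if (!path) { continue; }
            var last = path[path.length - 1];
            // Draw the new segment straight away for immediate feedback
            ctx.beginPath();
            ctx.moveTo(last.x, last.y);
            ctx.lineTo(touch.pageX, touch.pageY);
            ctx.stroke();
            path.push({ x: touch.pageX, y: touch.pageY, t: Date.now() });
          }
        }

        function touchEnd(event) {
          // The finished finger appears in changedTouches, not touches
          for (var i = 0; i < event.changedTouches.length; i++) {
            var touch = event.changedTouches[i];
            var path = paths[touch.identifier];
            if (path) {
              sendStroke({ points: path }); // hand off the completed stroke
              delete paths[touch.identifier];
            }
          }
        }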

    Very cool, and I can almost keep up. The second part of the experience, wherein the data feeds into a 3D visualisation with the Z-axis being time, is mind-boggling. Please tell us something about that, and what it means in the context of the question: in which direction are web technologies heading?

    The 3D visualisation that is produced from people’s iPad scribblings serves a number of purposes: A) it looks cool; B) it produces something unique from each person’s drawing (adding something that they themselves cannot do); and C) it provides a sense of mystery & curiosity to the whole experience – these flowing 3D forms are unrecognisable until the last few seconds, when they coalesce into someone’s drawing.

    In public projections A is always mandatory. For me, B is also necessary, as that’s one of the key parts I play as a digital artist: I am both a facilitator of people’s creations and a creator of my own. It’s kind of like a dance between me and the participants – they give something to me and I respond with my own moves.

    People are drawing on the iPads in 2D, so to turn their drawings into 3-dimensional objects I use the time taken to draw as the third dimension. This way each of their strokes has a depth to it, corresponding to how long it took to draw. By giving each drawing a 3D structure I can then do a series of camera effects that make it look like you’re flying through each drawing.

    To do the 3D I used a library called Three.js, which uses WebGL to draw 3D objects in the browser. I didn’t quite have enough time to nail down the exact geometry required to draw strokes in 3D, so I faked it using spheres – each 3D stroke consists of a bunch of spheres, positioned according to the x, y & time of a particular stroke. Kind of like 3D pixels.
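
    As a rough illustration of that “spheres as 3D pixels” idea in Three.js: x and y come straight from the drawing, and z is derived from elapsed time. The scaling factors, the assumed 1024×768 drawing surface, and the styling are guesses rather than DrawPad’s actual values.

        // Standard Three.js boilerplate: scene, camera, WebGL renderer
        var scene = new THREE.Scene();
        var camera = new THREE.PerspectiveCamera(
          45, window.innerWidth / window.innerHeight, 1, 10000
        );
        var renderer = new THREE.WebGLRenderer();
        renderer.setSize(window.innerWidth, window.innerHeight);
        document.body.appendChild(renderer.domElement);

        // One shared geometry/material for every "3D pixel"
        var sphereGeometry = new THREE.SphereGeometry(4, 8, 8);
        var sphereMaterial = new THREE.MeshBasicMaterial({ color: 0xffffff });

        // Place one sphere per recorded point: x/y come from the drawing
        // (centred on an assumed 1024x768 surface), z from elapsed time.
        function addStroke(points) {
          var start = points[0].t;
          points.forEach(function (p) {
            var sphere = new THREE.Mesh(sphereGeometry, sphereMaterial);
            sphere.position.set(
              p.x - 512,
              384 - p.y,
              -(p.t - start) * 0.1 // time scale is an arbitrary choice
            );
            scene.add(sphere);
          });
        }

        // Fly the camera along the z-axis so the abstract shapes
        // eventually resolve into the original drawing
        function animate() {
          requestAnimationFrame(animate);
          camera.position.z -= 2;
          renderer.render(scene, camera);
        }
        animate();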

    With a project like this, you can quickly see web technologies becoming the de facto means by which any digital experience can be delivered. Traditionally, installations like this have required native frameworks like Processing, Cinder or OpenFrameworks. Now we’re beginning to see the same sorts of things being done in the browser. This is great for portability and accessibility – anyone can see it on the web without downloading a special program, and I can simply co-opt whatever computer is onsite with an Internet connection and use it to display my visualisation. Laptop, desktop, Windows, Mac; it doesn’t matter.

    There are also a lot of people out there already familiar with JavaScript and browser technologies, so when this type of interaction and graphics capability becomes available to them it creates a huge and entirely open community of digital artists. That generally leads to really exciting results!

    This has been awesome, thanks so very much, Cameron. I could go on and on asking questions, but one last quick one. Music is a big part of your life, no doubt, and that seems to be true for most people who write code. So: most played track since we’ve been doing this interview?

    My most played track over the past month has been a little skewed, because I was working on the opening titles for Web Directions 2011, which is set to Magnetic Man’s “I Need Air”. I think I must have listened to it literally one hundred times in the space of three weeks. Thankfully, I’m not totally sick of it, as it’s a great song.

    For my actual listening pleasure, I’ve been listening to Foster The People’s album “Torches”, Nero’s “Welcome Reality” and Bonobo’s “Black Sands”.
