Web Essentials 2005: Day Two

By Kevin Yank

If you haven’t already, check out my summary of day one (with many 3AM-ish typos now fixed).

The closing day of Web Essentials 2005 (WE05) in Sydney meant more of everything… except perhaps sleep. Having not had the foresight to register for Tantek Çelik’s breakfast session, I was able to sleep in till the blissful hour of 8AM, but then it was back to it:

Web Standards Workflow, Molly Holzschlag

Molly wanted to start out her second presentation by answering a few of the audience’s questions about the Web Standards Project (WaSP), Microsoft, and the future of Internet Explorer.

Among the items covered:

  • The next major release of the Web Content Accessibility Guidelines (WCAG 2.0) is a project in crisis. People are leaving for all sorts of reasons, hobbling any progress on the spec, and parts of what has been completed are highly controversial.
  • Microsoft may be seeing a revolution from within, now that its employees are in the position to blog and thereby communicate directly with users. Witness the IEBlog.
  • The acceptance of standards in the Web development industry is a similar tale of an educated minority fuelling change from within reluctant organizations.
  • XHTML 2.0: when will it be finished, and—more importantly—can it serve skilled Web developers without adding significant barriers to entry for casual content publishers?

Of course, by the time she got through all this, she had a mere 15 minutes left to introduce the concept of CSS design prototyping and its strengths. Thankfully, the next speaker had that covered…

Rapid Design Prototyping with Standards, Eric Meyer

What we missed in Eric’s Day One talk was the opportunity to see him massage some real life CSS code into submission, and this session made up for it. He demonstrated how design decisions like page layout and colour choices can be made right in a meeting with clients by hacking CSS code to produce on-the-fly previews of design ideas. To do this, he took the audience through this very process with the design of the old Netscape DevEdge site, asking for ideas about what should be moved where and mocking it up on the fly with a few quick CSS edits.

If you feel confident enough with your CSS skills to do this in a real client meeting, there are a few procedural tidbits that Eric has learned from experience:

  • Use CSS positioning—not fancy float methods, which can be very temperamental—to lay out your prototype.
  • Pick a prototyping platform—a single browser to display your prototype—and stick to it.
  • Leave any cross-browser testing, or indeed any thoughts of cross-browser issues, out of the prototyping process.
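
The first tip above might translate into something like this quick positioned layout (a minimal sketch of my own; the IDs, dimensions, and colours are illustrative, not from Eric's demo):

```html
<!-- A throwaway prototype layout built with CSS positioning (no floats).
     Tweak the pixel values live in a client meeting to try new layouts. -->
<style type="text/css">
  #masthead { position: absolute; top: 0; left: 0; right: 0;
              height: 80px; background: #336; color: #fff; }
  #nav      { position: absolute; top: 80px; left: 0;
              width: 180px; background: #eef; }
  #content  { position: absolute; top: 80px; left: 200px; right: 0; }
</style>
<div id="masthead">Site name</div>
<div id="nav">Navigation links</div>
<div id="content">Main content goes here.</div>
```

Because it's a prototype for one browser only, there's no need to worry about how older browsers handle `right` offsets or absolute positioning quirks—that's exactly the kind of cross-browser thinking Eric says to leave out at this stage.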

Designing For Accessibility: Beyond The Basics, Derek Featherstone

Derek showed us accessibility taken to the next level, solving non-obvious issues so that sites can work well for people with disabilities without sacrificing the visual appeal that designers cling to desperately.

In many cases, the solution really was “know your CSS”. If you know what you’re doing, it doesn’t matter that a required field icon or validation error message must come before an input field for accessibility—you can position it to the right of that field for visual browsers using CSS. But CSS isn’t a blanket solution.
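
As a rough sketch of that idea (class names and wording are my own, not Derek's): the error message sits before the field in the source, where a screen reader encounters it first, but CSS positions it to the field's right for sighted users.

```html
<style type="text/css">
  .field { position: relative; padding-right: 14em; }
  .field .error { position: absolute; right: 0; top: 0;
                  width: 13em; color: #900; }
</style>
<p class="field">
  <em class="error">Required — please enter your name.</em>
  <label for="name">Name</label>
  <input type="text" id="name" name="name">
</p>
```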

Sometimes it comes down to thoughtful markup: why not put all required fields in a “required” fieldset and all optional fields in an “optional” fieldset? Sometimes you really have to play with a screen reader to come up with brilliantly simple ideas, like submitting a form to a #results fragment identifier, which points straight to a link to the search results (#details) on the resulting page, thereby allowing a screen reader to skip right over your “search again” field at the top of the page.
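
The fieldset idea might look like this (the form and its fields are invented for illustration):

```html
<form action="/signup" method="post">
  <fieldset>
    <legend>Required details</legend>
    <p><label for="email">Email</label>
       <input type="text" id="email" name="email"></p>
  </fieldset>
  <fieldset>
    <legend>Optional details</legend>
    <p><label for="phone">Phone</label>
       <input type="text" id="phone" name="phone"></p>
  </fieldset>
</form>
```

A screen reader announces the legend along with each field, so users always know whether the field they're in is required—no asterisk decoding needed.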

Derek delivered the coup de grâce when he showed a fully accessible DHTML crossword puzzle. It’s worth tracking down the slides and source code from this session just to see this one example with and without stylesheets enabled.

Understanding Ajax: Taking a peek under the covers, Tim Lucas

Tim gave a reasonably detailed and pragmatic introduction to AJAX (which, despite his defense of the alternative, SitePoint still spells with all caps)—asynchronous JavaScript and XML. Cameron Adams covered this pretty darned well here on SitePoint not long ago with AJAX: Usable Interactivity with Remote Scripting, so I won’t go into the basics of this technology. Tim did touch on a couple of particularly useful nuggets, however:

  • Guarding against cross-site request forgery in AJAX applications is usually straightforward: protect destructive/irreversible operations like logging out users and deleting records by requiring a POST request, and check the credentials for that request by requiring that a valid session ID be passed as a parameter (not just a cookie!).
  • All the AJAX applications that are getting big press right now are designed from the ground up to require that AJAX work (i.e. JavaScript and, for some browsers, ActiveX turned on). But best practice is to build a “dumb” site and then enhance it with AJAX in a gracefully degradable manner.
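
That second point can be sketched in a few lines (a hypothetical example; the URLs and IDs are mine): a plain search form that works everywhere, with a script that hijacks the submission only when the browser can handle it.

```html
<!-- Works as an ordinary form when JavaScript is off. -->
<form id="search" action="/search" method="get">
  <input type="text" name="q">
  <input type="submit" value="Search">
</form>
<div id="results"></div>

<script type="text/javascript">
var form = document.getElementById('search');
// Enhance only if the browser supports XMLHttpRequest.
if (form && window.XMLHttpRequest) {
  form.onsubmit = function () {
    var xhr = new XMLHttpRequest();
    xhr.open('GET', '/search?q=' + encodeURIComponent(form.q.value), true);
    xhr.onreadystatechange = function () {
      if (xhr.readyState == 4 && xhr.status == 200) {
        document.getElementById('results').innerHTML = xhr.responseText;
      }
    };
    xhr.send(null);
    return false; // cancel the normal page-reloading submission
  };
}
</script>
```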

After showing a few of the classic examples of AJAX in use (updating results from the server on the fly, dragging and dropping list items to record their new order on the server, etc.), Tim talked a bit about some of the toolkits that are out there today: DOJO (tackles big problems, 4MB of JavaScript pre-compression, kinda scary), Rico (no Safari support, so not worth your time), SAJAX (solves small problems well, PHP-centric for now, but growing), and Prototype (bundled with Ruby on Rails, produces sexy JavaScript code).

He stressed that none of these toolkits have solved the accessibility problems with AJAX yet, but that’s something for the innovators to figure out. If you want to use AJAX today—accessibility headaches haven’t stopped Google Maps!—a toolkit might be the best way to do it.

The Title Attribute: What is it good for?, Steve Faulkner

The first of two sessions that I had the pleasure of introducing, Steve talked about the surprisingly complex accessibility issues surrounding an attribute that—quite ironically—was introduced to improve accessibility: the title attribute.

Off the bat, some limitations of this attribute: Mozilla browsers truncate long title values when displayed as tooltips, the tooltip is only displayed for 5 seconds in most browsers, and in most cases a mouse is required to access the content in the title attribute.

In particular, modern screen readers (e.g. JAWS) generally ignore titles on text links by default (though with tweaking they can be set to read the title attribute value instead of the link text). They do seem to read the title attribute on text fields pretty reliably, however. Taking the example of a recent A List Apart article that made heavy use of the title attribute on its inline links, he demonstrated how links could become confusing or downright misleading if critical information were placed in the title attribute.

Some examples of the practical advice that Steve derived from his testing:

  • Titles for acronyms and abbreviations: use them, but include the expanded form in the main text at the first mention for those who can’t access the title content.
  • For annotating links: use the title for non-critical information only, and include a copy of the link text before the additional content.
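
Both tips might look something like this in practice (a sketch of my own, not Steve's markup):

```html
<!-- Expand the abbreviation in the text at first mention;
     use the title attribute only on later occurrences. -->
<p>The Web Standards Project (WaSP) was founded in 1998.
   <abbr title="Web Standards Project">WaSP</abbr> campaigns continue today.</p>

<!-- Link title repeats the link text, then adds non-critical extras. -->
<a href="/report.pdf"
   title="Annual report (PDF, approx. 1MB)">Annual report</a>
```

If the link would be meaningless without the title's content, that content belongs in the link text itself, where everyone can read it.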

Not great for something that was supposed to improve accessibility, right? In order to bring some of the promise of this attribute to fruition, Steve proposed the use of a server-side XSLT transformation to convert title attributes into inline content following the annotated element for those users who could benefit from this information in an accessible form.

JavaScript and the DOM, Cameron Adams

Cameron’s co-authoring a book on JavaScript for us—The JavaScript Anthology—so it was immensely gratifying to be there as he shared some of the ideas and slick examples he is currently playing with. Gratifying, and a little vexing—with a book to write, when did he have time to clone the Mac OS X interface in DHTML?

Cameron gave a great overview of the Document Object Model (DOM), backed up by a raft of cool examples, from an absurd animated punch-up between Eric Meyer and Doug Bowman (who apparently has the body of a criminal) to slick, modular form validation, all with a keen wit and plenty of hilarity to be spotted if you eyed his screenshots closely enough (“Sexy Standards Chicks”, anyone?).

The moment of the day for me was when Cameron realized why his S5 presentation slides had stopped advancing after he had disabled JavaScript in his browser to show off how gracefully one of his examples degraded. “Can anyone give me a definition of irony?”

Cameron echoed Tim Lucas’s thoughts that while coding entire apps that rely on JavaScript is making headlines right now, selective enhancement of functionality that works without JavaScript enabled is the best practice we should be striving for.

Microformats: Evolving the Web, Tantek Çelik

Put simply, microformats are simple standards for using the extensible bits of HTML—the meta tag and the class, rel, and rev attributes—to add meaningful functionality to the Web today without having to wait for new standards to evolve to support that functionality. Tantek did a masterful job in this session of explaining the motivation for microformats and the fundamental design decisions that are being made about them to make them useful and successful in a very short time.

The non-obvious motivator for microformats is that users should control their own data. There shouldn’t be the barriers to moving your email from one email program to another that there are today, for example. Simple, focused, and most importantly open standards can make it possible for users to jump ship if a given application/service isn’t up to scratch, taking their data to a better service.

Great for users, but why would any company want this? The game so far has always been to lock users into a piece of software by storing that data in a proprietary format and throwing up barriers to moving that data elsewhere. The argument for open standards is that knowing that a company will make it easy for you to take your business elsewhere will lead you to trust that company with your data much more readily. Furthermore, the benefit of potentially using the same data format as related services means that your application can benefit from the work of others that will popularize and enhance the potential of that data format.

Tantek then took us through a brief history of microformats, demonstrating those that are available today in the order they arose. Particularly captivating were the hCard and hCalendar formats, which let you embed contact and event data in your Web pages in a standardized format that can be read by vCard and iCalendar compatible applications through a simple bookmarklet.
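
For a taste, here is a minimal hCard (the class names—vcard, fn, org, url, tel—come from the hCard format itself; the person and details are fictional):

```html
<div class="vcard">
  <a class="url fn" href="http://example.com/">Jane Citizen</a>
  <div class="org">Example Pty Ltd</div>
  <div class="tel">+61 2 5551 2345</div>
</div>
```

To a browser it's just a styled block of contact details; to an hCard-aware bookmarklet or parser, it's a vCard waiting to be imported into your address book.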

Microformats seem to be going from success to success in terms of adoption in the wild (e.g. 40,000+ Avon representatives now have hCard contact details online—the first personal data format in history to be adopted more quickly by women than by men?), and Tantek outlined some of the design decisions that were made for microformats that he believes are fuelling that adoption:

  • Microformats are stylable and visible on the Web, not hidden away where only search engines can see them.
  • Microformats work today.
  • Microformats can be implemented with little or no cost.
  • Microformats can be coded by hand.
  • Microformats can be embedded in existing types of content without disturbing them.

And best of all, microformats are integral to the Web 2.0 culture of freeing up data to be accessed and used by interoperable services without locking that data into a centralized repository.

Designing the Next Web: The principles of user-centered design translated for Web 2.0, Jeff Veen

This was the mandatory Web 2.0 session of the conference, and thankfully Jeff brought to the stage all the energy that this new movement/technology/architecture/whatever deserves, while simultaneously dispelling all of the hype. Jeff put forward several ideas that are central to Web 2.0, then suggested that for developers in the trenches, Web 2.0 means tackling old problems using new platforms that embody these ideas.

So yes, podcasting is really just a new name for streaming audio, but it’s an open, standard platform for doing that, which has engendered all sorts of tools, communities, and possibilities that have never been available before now—as are wikis and blogs to content management systems, and as is anticipatory navigation (AJAX à la Google Maps) to previous Web mapping technology.

Web 2.0 means treating your users as participants in building the value of your offerings. Web 2.0 means being open and relinquishing control over data (yours and your users’). Web 2.0 means giving the easy stuff away to the community and charging for the hard stuff. Web 2.0 is exciting, and I can’t wait to find my place in it.

Tomorrow, a summary of my experience at WE05 and of what I took away from it. Until then, I’m off to get some sleep!
