Jetty 6.0 to provide new architecture for AJAX apps

AJAX applications, websites that communicate with the server in the background to update displayed pages on the fly, present a growing challenge to Web servers that were not designed with AJAX in mind. The lightweight Jetty Java Web server is set to offer a new AJAX-friendly architecture so that such applications can be supported without overwhelming the server.

Most Web servers, and the standards they comply with, were designed to handle a simple request-response cycle, where Web browsers will issue requests for content and the server will return responses as quickly as possible. AJAX applications break this mold, often requiring the Web server to send notifications of events when they happen, without the browser issuing a specific request.

To do this within the current capabilities of the Web, AJAX applications will often send a request to the server with the expectation that the request will not receive a response until the server wishes to notify the browser of some event (e.g. receiving a new message in a chat application). Because Web browsers will give up waiting for a response after a certain amount of time, the AJAX application reissues the request whenever it times out.
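To make the pattern concrete, here is a rough sketch of the client side of that loop, written in Java with HttpURLConnection purely for illustration; a real AJAX client would use XMLHttpRequest in the browser, and the /events URL and 30-second timeout are invented for the example.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.SocketTimeoutException;
import java.net.URL;

public class LongPollClient {
    public static void main(String[] args) throws Exception {
        URL events = new URL("http://localhost:8080/events"); // hypothetical endpoint

        while (true) {
            HttpURLConnection conn = (HttpURLConnection) events.openConnection();
            conn.setReadTimeout(30000); // give up after 30 seconds, as a browser would

            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream()))) {
                // The server holds this request open until it has an event to report.
                String line;
                while ((line = in.readLine()) != null) {
                    System.out.println("event: " + line);
                }
            } catch (SocketTimeoutException timedOut) {
                // No event arrived in time; fall through and reissue the request.
            } finally {
                conn.disconnect();
            }
        }
    }
}
```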

On the browser side of things, this is a very elegant solution. Each user of the system will have a “standing request” with the server to be notified of events. On the server side, however, this can be a big problem.

Web servers up until now have generally been designed around a “one thread per request” model, so that the server will allocate resources for each active request in order to generate the response to that request. In a non-AJAX Web application, this model can support a practically unlimited number of active users, so long as only a manageable number of those users are issuing requests at any given time.

With the AJAX processing model of every active user having a “standing request” with the server, this architecture falls apart. A thousand logged-in users means the server is burdened with a thousand active requests, for which it will allocate a thousand times the resources required to process a single request. This sort of load can quickly overwhelm even a powerful Web server.
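To see why, here is a minimal sketch of a long-poll handler written against the standard Servlet API; the class name and the chat-style event plumbing are invented for illustration. Every waiting client pins one server thread for the full duration of its “standing request”.

```java
import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Naive thread-per-request long polling: the handler blocks until an event
// arrives, so every "standing request" holds a server thread while it waits.
public class BlockingEventServlet extends HttpServlet {
    private final Object eventLock = new Object();
    private volatile String latestEvent;

    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        synchronized (eventLock) {
            if (latestEvent == null) {
                try {
                    // The thread sits here, idle but allocated, until notifyClients()
                    // fires or the 30-second timeout expires.
                    eventLock.wait(30000);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        }
        response.setContentType("text/plain");
        response.getWriter().println(latestEvent != null ? latestEvent : "no event");
    }

    // Called by the application when something happens, e.g. a new chat message.
    public void notifyClients(String event) {
        synchronized (eventLock) {
            latestEvent = event;
            eventLock.notifyAll();
        }
    }
}
```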

An innovative solution to this quandary is being tested in the current Alpha 3 version of Jetty 6.0. A new feature called Continuations allows a Web application to put a request “on hold”, so that it doesn’t consume any resources until the application is prepared to produce a response for it.

At first, I was a little wary of this admittedly cool idea. The official Java Servlet API with which Jetty so meticulously complies does not provide for breaking the thread-per-request model in this way. Because Java Web development is founded on principles of cross-server compatibility, I was afraid that adoption of a nonstandard feature like this one could be very destructive indeed.

As it turns out, the Jetty team deserves more credit than I was giving it. The way the Continuation API is designed, any standards-compliant Java Web server (that is, all of them) will simply block the thread handling a “standing request” until an event occurs, as usual. On Jetty 6.0, however, the Continuation causes the request to fail with an exception that Jetty catches internally, putting the request into a queue. This frees up the thread responsible for the request (thus solving the load problem) but keeps the request on file. When the Web application wishes to notify clients of an event, any “standing requests” on file are brought back to life and processed from scratch, as if they had just been received.
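For comparison, here is a rough sketch of the same handler using Continuations. I'm assuming the API shape of the org.mortbay.util.ajax.Continuation class as documented for Jetty 6 (method names and semantics may differ in the current alpha), and the timeout bookkeeping a real application would need is omitted for brevity.

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.mortbay.util.ajax.Continuation;
import org.mortbay.util.ajax.ContinuationSupport;

public class ContinuationEventServlet extends HttpServlet {
    private final Object lock = new Object();
    private final List<Continuation> waiting = new ArrayList<Continuation>();
    private String latestEvent;

    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        synchronized (lock) {
            if (latestEvent == null) {
                Continuation continuation =
                        ContinuationSupport.getContinuation(request, lock);
                waiting.add(continuation);
                // On Jetty 6 this throws an exception that the server catches,
                // parking the request and freeing the thread. On any other
                // servlet container it simply blocks here, preserving the
                // standard thread-per-request behaviour.
                continuation.suspend(30000);
            }
        }
        response.setContentType("text/plain");
        response.getWriter().println(latestEvent != null ? latestEvent : "no event");
    }

    // Resuming a parked Continuation replays its request from the top of
    // doGet(), as if it had just been received.
    public void notifyClients(String event) {
        synchronized (lock) {
            latestEvent = event;
            for (Continuation continuation : waiting) {
                continuation.resume();
            }
            waiting.clear();
        }
    }
}
```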

From the browser’s point of view, the whole thing is transparent. The “standing request” is sent, and the server responds to it when an event occurs on the server.

Greg Wilkins, lead developer of Jetty, details the new Continuations API in a blog post, and mentions that he will submit the feature for inclusion in the official 3.0 Servlets API if developer feedback is favourable.

  • Mr. Niceguy

    This is an idea whose time is overdue. Props to Jetty for wading into it.

    “AJAX applications will often send a request to the server with the expectation that that request will not receive a response until the server wishes to notify the browser of some event”

    I find that a little hard to believe since developers have to know they are hammering the server when they do that. There is a general understanding that the client app will need to initiate each request and that the server is not to be tied up.

    Of course this also introduces problems since the server keeps getting tiny requests to check if there is an update. Theoretically the “continuation” idea could free up a lot of resources.

    If this ever takes off I would expect to start seeing it implemented in a broad range of servers and not just for Java.

  • http://www.crystalcleardesigns.com ccdesigns

    I am glad to see you posting on AJAX. Reminds me to go ahead and start getting myself waist deep in the technology. I absolutely love what google has done with it – and your article here does shed some light on the worries I had being part of a small web dev firm with limited server resources.

    Thanks Kevin.

  • sent2null

    Nice article on Jetty. I’ve been using it since somewhere around version 3 and it is very fast. I think the api could use some major help in that the class and method names sometimes don’t make sense and are barely documented in the javadocs but you can’t beat the price!

    As for the AJAX news, just the other day I was looking at the javadocs for the alpha version of the Jetty 6 API and saw the continuations class. It does look like a very interesting way to save on resource utilization at the server while providing a near real-time feel to actions occurring on the browser.

    However, the article mentions only ONE use of AJAX techniques: namely, the browser making requests to the server that don’t require an immediate response, or that require a response only if the status of some server attribute changes. The more popular (and easier, in the sense that server modifications are not required) way to use AJAX is to significantly reduce the data size of requests. This is how Google and other sites are using it: making periodic requests to the server for small data packets which are then incorporated into a standing page by some JavaScript. The overhead of the periodic requests is lessened by the significantly reduced data size, which speeds up the overall browser experience significantly.

    Continuations would seem to be more useful where remote object calls are made, having a browser request trigger notification of the client when a server action is performed and possibly sending object data (not just snippets of XML destined for formatting in HTML at the browser) in the process. Still, continuations are not required to facilitate AJAX in Jetty or any other web server platform if the data requests are small, the data delivered is small and the update periodicity is not too frequent.

  • anonymous

    This capability has been available in .NET for quite a while. http://msdn.microsoft.com/msdnmag/issues/03/06/Threading/default.aspx
