JSON vs PHP to handle returned data via ajax

I’m not sure if I’m describing this properly. Currently, the way I handle an ajax request is as follows:

  1. An ajax request is made, say: /shop/cart/:productid/add
  2. The product is added to the user’s cart via MySQL
  3. The new cart data is returned, wrapped in an HTML view
  4. JavaScript receives it and updates the correct node on the page (roughly the sketch below).
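In code I take that to be roughly this - using prototype.js’s Ajax.Updater as an example; the ‘cart’ id and URL are made up:

// fetch the server-rendered HTML fragment and drop it straight into #cart
new Ajax.Updater('cart', '/shop/cart/42/add', { method: 'post' });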

What’s wrong with this methodology? It seems pretty simple and effective. My partner is telling me that it’s considered “old school”, that I’m “2 years behind”, and that it’s very slow compared to letting JSON handle it.

His methodology, if I understand it correctly, works as follows:

  1. An ajax request is made, say: /shop/cart/:productid/add
  2. The product is added to the user’s cart via MySQL
  3. The new cart data is returned, encoded as JSON
  4. JS does all the rendering of the HTML and data manipulation before updating the correct node (again, sketched below).
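If I follow him, the client side ends up looking something like this (prototype.js again; the field names are my guesses at what such a response might carry):

new Ajax.Request('/shop/cart/42/add', {
  method: 'post',
  onSuccess: function (transport) {
    // decode the JSON and build the markup client-side
    var cart = transport.responseText.evalJSON();
    $('cart-count').update(cart.count);
    $('cart-items').insert('<tr><td>' + cart.lastItem.name.escapeHTML() + '</td></tr>');
  }
});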

I don’t have a lot of experience with his method, so I don’t really have an opinion on it, other than that it looks like a lot of unnecessary work that PHP could do just as fast and with half the code. Does anybody recognize or have experience with his way of doing this?

If my way is indeed considered bad practice, then I’d like to know, because I’ve been doing it like that for some time now… Thanks!

I admit that I use method 1 to this very day; however, I have seen the light and will endeavour to use method 2 where possible in the future.

If I understand correctly, method 2 returns a usable object, as opposed to a slab of HTML text.

What this means is that you can then break up the response and use it in more than one way. For example, you might want to update two sections of the page, or you might want to do different things conditionally depending on the result - things you can’t do in PHP.
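As a contrived sketch, one JSON response could drive several updates plus a branch (assume the decoded response sits in data; the node ids and fields are invented):

// inside the ajax success handler, with the decoded response in data
if (data.ok) {
  $('cart-count').update(data.count);   // one section of the page...
  $('subtotal').update(data.subtotal);  // ...and a second, from the same response
} else {
  $('messages').update(data.error.escapeHTML());
}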

You are probably going to be returning a more optimised result as well, considering that you can remove the excess HTML from the response and handle that client side. This is particularly significant when you work with different types of views, where you might want to output a language other than HTML.

This is more sophisticated than the true/false (all or nothing) method 1. I’m still of the opinion that method 1 is preferable for very simple ajax requests.

You make an interesting point about being able to update more than one node on the page. Are there any good articles that stick out in your head that would be worth reading?

Edit:
Never mind… I’m getting some good results for articles on Google when I search “ajax response html or json”.

Mostly I’d say it depends what you’re doing. If you are returning data which needs to be processed on the client then JSON is the way to go.

If all you’re doing is requesting something and putting it back into the page, it makes far more sense just to return a block of xml/html. There’s no point generating the HTML in javascript when you likely have PHP doing that already the first time the page loads.

Actually, if you pull the necessary data from the database and send it JSON-encoded, without any sort of markup, you’re letting the user’s computer do all the rendering.
And welcome to the world of parasitic computing, where you share the workload with the client.
I favour the second option over the first. It might be trivial in terms of performance, but if you just use PHP to pull the data out of the database, JSON-encode it and send it to the user, you’re alleviating the server’s workload.
It might not be much, but across 10,000 users it does add up.

The problem is that you already need the code on the server for generating the HTML for both non-js clients and presumably the first page load (unless you’re firing off an ajax request on page load… which is probably actually a performance loss due to the second request). Repeating HTML generation code in javascript is a maintainability nightmare.

I feel my method works best (doesn’t every programmer, though?)

  1. An ajax request is made.
  2. EventDispatcher maps the request to ProductUpdateEvent
  3. ProductUpdateEvent updates cart state in database, queues SummaryEvent
  4. EventDispatcher fires the SummaryEvent, handing it the relevant models.
  5. The SummaryEvent returns a JavaScriptResponse object.
  6. EventDispatcher sends the response back to the browser. It carries a Content-Type: application/javascript header and so is eval’ed by the prototype.js framework immediately on reception.

That means the javascript that initiated the call does not need to concern itself with what happens to the view state. This structure creates a transparency where PHP can update the whole page or only sections of it, depending on the nature of the request. Importantly, the javascript section of the callback lives alongside the php section of the callback, so they can be reviewed together.
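On the browser side nothing special is needed for step 6 - prototype.js evals the response body itself when the Content-Type is a JavaScript MIME type (its evalJS option defaults to true), so the initiating code can be as bare as this (the URL is made up):

new Ajax.Request('/shop/cart/42/add', {
  // no onSuccess handler: prototype evals the
  // application/javascript body as soon as it arrives
  method: 'post'
});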

Anyway, XML vs. JSON really depends on how much you are asking of the XML. A large HTML response escaped into a JSON string is going to be larger than an xml tag with a CDATA section - not to mention a hell of a lot harder to read when debugging. JSON is useful for a great many things - transferring large HTML blocks is not one of them, in my opinion.

I’ve yet to see any json html response be anything more than straight html that has had addslashes passed over it. Further, I don’t think it’s possible to make the encoding any smaller without creating two entirely separate template forms.

Agreed. The only way around it would be to use a smarty-like template manager to copy the html blocks to json blocks, but at that point you’ve reintroduced the workload you tried to dodge, Blue - and then some, as I guarantee any parser beyond addslashes is going to have to do a hell of a lot more work to reduce the html to a smaller json copy.

I use json for most responses, but if large html blocks are along for the ride I use XML instead, with the structure:


<r>
  <j> (javascript callback here)</j>
  <h><![CDATA[  (html content here ) ]]></h>
</r>

(The lack of an XML declaration is intentional - prototype.js doesn’t need it to properly parse the response, so it is omitted to save space)

The only code I need to persist client-side is a handler: if PHP responds with XML, it evals the code in the j tag. That code does the heavy lifting of placing the html snippet.
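A minimal sketch of that handler, assuming the <r>/<j>/<h> structure above and that the browser hands back a parsed responseXML (the URL is made up):

new Ajax.Request('/shop/cart/42/add', {
  onSuccess: function (transport) {
    var xml = transport.responseXML;
    // the CDATA contents of <h> become the html variable the callback expects
    var html = xml.getElementsByTagName('h')[0].firstChild.nodeValue;
    // eval the code shipped in <j>; it decides where html goes
    eval(xml.getElementsByTagName('j')[0].firstChild.nodeValue);
  }
});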

Unless the HTML is very short, json with addslashes is not going to win the fewest-characters arms race here. Even if it came close, we’re talking bytes - which would require millions of hits a day to start becoming a factor.

The advantage is I only have 1 template to manage and the template doesn’t have to care about how it goes out (nor should it). The only code that cares is the dispatcher, which chooses between javascript and html mode before the responder is started and determines the responder based on that choice.

I like that idea a lot. I’ve been storing the callbacks in various js files and including them on page load. This is a neat alternative. Doesn’t it become a performance loss when someone does the same action multiple times though? Presumably the callback is returned each time so if it was just defined as a standard function in a .js file it would be cached across different page loads and never need to be returned in an ajax request. From a program flow perspective it’s great though.

I don’t use this method for callbacks over, say, 50 lines. Those get independent scripts as you’re doing now.

Most of the time, though, the callback consists of a single update call to the node to be updated. If multiple nodes need updating, the xml carries multiple html nodes with tag names h1, h2 and so on. The permanent callback that handles the XML only cares about the j tag contents; the script in that tag is responsible for everything else. If the callback is lengthy and needs its own file for performance, then the shipped callback merely invokes the callback function of that file appropriate to the response - so frequently used callback libraries can be used with this method as well. It’s very flexible.
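Continuing the handler sketch above, multiple blocks just mean pulling each one out by tag name before handing control to the <j> script:

// collect every html block first; the <j> code knows which goes where
var h1 = xml.getElementsByTagName('h1')[0].firstChild.nodeValue;
var h2 = xml.getElementsByTagName('h2')[0].firstChild.nodeValue;
eval(xml.getElementsByTagName('j')[0].firstChild.nodeValue);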

Footnote: I’ve toyed with the idea of using <d> tags containing json-encoded data, so that the data and the html structure of its display arrive separately. This combines the best of both worlds.


<r>
  <j> javascript callback here. </j>
  <d> javascript data in json form here.</d>
  <h> <![CDATA[  (html template for that data here ) ]]></h>
</r>

That might be what Blue is driving at when he speaks of having the javascript do the rendering, especially on a lot of table rows. HTML, after all, is just a heartbeat away from being an XML tree. That said, for this to be effective, html rendering methods would need to be implemented in both PHP and javascript, and they wouldn’t necessarily be compatible with each other. Also, only the simplest template renders could be done this way - anything complex would ramp up the javascript until it became as heavy as smarty, which is not ideal.
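The simplest render I can picture for the <d> + <h> pairing is plain token substitution - all of this is hypothetical, since I’ve never built it; assume data holds the decoded <d> rows and template the <h> markup:

// toy renderer: fill {name}-style tokens in the template from each data row
function render(template, row) {
  return template.replace(/\{(\w+)\}/g, function (match, key) {
    return String(row[key]).escapeHTML(); // prototype's String#escapeHTML
  });
}

var markup = data.map(function (row) { return render(template, row); }).join('');
$('dataTable').update(markup);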

Ok, now you’ve lost me. Sending data + html template + callback has just put the repeated display logic back into javascript, hasn’t it?

I thought you were doing something like (as a trivial example)


<r>
	<j>function callback(htmlNode) {
		$('somediv').appendChild(htmlNode);
	};</j>

	<h><![CDATA[  <h1>Foo</h1> ]]></h>
</r>

(I’d probably just put the HTML as XML elements inside the <h> rather than using text but that’s irrelevant)

Where the first parameter to the callback is the HTML from <h></h>

Unless I’m getting the wrong end of the stick here.

Yes, that is what I’m doing now. The second method I mused about hasn’t ever been done, but I’ve thought about doing it occasionally.

I think the requirements outweigh the benefit.

Sometimes I return data (XML), other times HTML. It just depends on the situation. Splitting them up just to follow some purist ideal is normally more trouble than it’s worth, not to mention replicated display logic. There are times I have split up the data/display, but it’s not something I do often. The majority of the time I just return HTML and inject it into the DOM. If the application degrades gracefully, then it’s much simpler to return the HTML you’re after without the wrapper than it is to fetch it as a data structure, pre-process it and add it to the DOM.

The only place it can really bite is if you have obtrusive JavaScript, although using a library such as jQuery can alleviate that issue. The other gotcha is events assigned directly to elements that will be replaced. I always keep my JavaScript outside of the body and use event bubbling, so I rarely run into issues with injecting straight HTML. By attaching event handlers to elements that you’re not refreshing with AJAX, you can replace the HTML inside as many times as needed, keeping the event handling intact without updating it.

For that reason, I’m a supporter of not attaching listeners to exact elements, but rather to a known “static” ancestor that doesn’t get changed or replaced, then using event bubbling to delegate via the target. That approach seems to serve persistent, AJAX-driven paradigms much better than attaching listeners directly to items that will change, in which case every time they change the event handlers need to be updated, and it just becomes a mess.

Triggering 15 event handlers on a constant parent (say, the form tag itself) is not exactly efficient, bubble up or no. Also, the handlers typically need to be done away with when the element is gone. If you have enough handlers running around there will be noticeable slowdown on older hardware, or in IE.

I think you’re misunderstanding the technique. The technique is to place the event handler on an ancestor known to never be replaced. By doing so you can have a single event handler managing what could amount to several events. There will only be a single resource, and it will never be removed, since the known ancestor will never be replaced. A good example is placing an event listener on an ordered list to manage events for all its anchors. By analyzing the nodeName you can then detect the actual target and respond appropriately, without adding separate event handlers to every link in the ordered list. Also, when elements are added to or removed from the list, so long as the base ol element isn’t replaced, all event handling will be kept intact. However, if event listeners were assigned to anchors directly, then when anchors are removed or added one would also need to run clean-up and reassign new handlers each time. That entire process is eliminated by the above technique.
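In prototype.js terms, the ordered-list example might look like this (the 'cart-list' id is invented):

// one listener on the <ol>; it survives any amount of HTML replacement inside it
$('cart-list').observe('click', function (event) {
  var target = event.element(); // the node that was actually clicked
  if (target.nodeName === 'A') {
    event.stop(); // we handle the anchor ourselves
    // respond based on which anchor was hit, e.g. via target.href or an id
  }
});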

I suppose it could work if the elements are related to keep the function size down.

Sorry in advance if I’m going to come off as a grumpy old man here, but …

The overhead of an HTTP transaction is massive compared to a dozen bytes. I would strongly suggest obeying the standards in this case. In fact, since the network traffic is transferred in blocks, chances are that it makes absolutely no difference at all. It might in fact be slower, because the parser has to compensate for the malformed xml.

That aside, I’m afraid I don’t like your idea of combining multiple documents into the same response. Presumably the html template would be fairly static, whereas the json part would be variable for most responses, yes? If this is the case, transmitting them separately would allow the http infrastructure to cache at various places. When you’re gluing them together, this isn’t possible, and the result is that you need to transfer the template on each and every response.

On top of that, there is already a standard way to serve multiple documents in the same response. You can use multipart for this. I’m not sure how well XmlHttpRequest handles that - You might have to do some manual parsing here. I would suggest that you investigate it, if you insist on going down this route. Standard protocols are usually the better choice.

On a side note - if you’re transmitting over HTTP 1.1, then you can use Connection: keep-alive to pipeline multiple transactions. You still have a slight overhead of a separate request for each of the documents, but the network activity is much less than it would be to make each of these a separate transaction.

While I agree that omitting the xml declaration is rather frivolous, there is merit in what Michael is trying to achieve. As I understand it, he’s trying to keep all the display logic in one place. This, at least in my opinion, is something to strive for.

Essentially he’s transmitting two things: the processed HTML and a small amount of display logic about what the page should do with the HTML when it receives it. This is akin to dynamically adding a button to a form in a windows GUI application, where you would specify details of where the button should go along with the button itself.

edit: Basically the equivalent of

Button button = new Button();
window.addButton(button);

rather than

Button button = new Button();
return button;

The implementation details of this (E.g. xml vs multipart data) are somewhat irrelevant. The idea that he’s presenting is more important.

Talk to Google. They still don’t close their <body> or <html> tags on their main page. Those bytes seem to matter to them, and certainly not to the browser.

In any event, the XMLHttpRequest object will try to interpret the document as an XML tree regardless of its actual type. I think that’s because it is, well, an XMLHttpRequest object. Why tell the parser what it already knows and ignores anyway? Further, I’ve never seen any other site tack an XML declaration onto the response.

> That aside, I’m afraid I don’t like your idea of combining multiple documents into the same response. Presumably the html template would be fairly static, whereas the json part would be variable for most responses, yes?

If you’re referring to the first use case, you’re way off the mark. The response consists of an HTML fragment and enough js to figure out what to do with it - rarely more than 3 lines. A typical response:

<r>
<j>$('dataTable').update( html );</j>
<h><![CDATA[<tr><td>A new row</td></tr>]]></h>
</r>

Normally the HTML is much larger, but the javascript is more or less the same, except for a different target element for the node update. The contents of the h tag are moved to var html by the controlling javascript, which makes the eval call.

Without the js the HTML doesn't know where to go. Without the HTML the js has nothing to do.  Hence they go together without incurring the overhead of multiple transactions.

The second situation I mused about, sending a json object, an html object and eval js, is something I've never done and for various reasons including some that you gave I'm unlikely to do.


> If this is the case, transmitting them separately, would allow the http-infrastructure to cache at various places.


Pointless. The html changes with every request, since PHP binds a different dataset each time. And the javascript is always very tiny by design and has no use outside the context of the call.


> On top of that, there is already a standard way to serve multiple documents in the same response. You can use multipart for this. I'm not sure how well XmlHttpRequest handles that


It doesn't, but thanks for playing.


>  - You might have to do some manual parsing here.


I am. In PHP. With one block of code, regardless of whether the response is a javascript callback or a new page.


> On a side note - If you're transmitting over HTTP 1.1, then you can use Connection: keep-alive to pipeline multiple transactions.


I'm well aware of that, but XMLHttpRequest doesn't support it evenly across all browsers either.

> Without the js the HTML doesn’t know where to go. Without the HTML the js has nothing to do. Hence they go together without incurring the overhead of multiple transactions.

I’ll try and clarify this, because I didn’t understand it at first either. The biggest advantage of what you’re suggesting is actually in the code you didn’t demonstrate: the top layer of javascript.

Rather than assigning an event to an element and at the same time defining how to deal with the response (before even knowing what the response is), the response can dictate what action should be taken. This allows you to avoid display logic such as if (result.success == 1) {} else if (result.success == 0); in the javascript. All this logic can be handled by PHP, where it is also dealing with the HTML generation code and is probably already checking that condition anyway.

Consider deleting a record. There are multiple possible outcomes: success and failure (which could also have a reason). On success you may want to remove a table row. On failure, display the returned message. Since the check is already being done in PHP, why replicate it in javascript?
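In the XML scheme above that means PHP simply ships a different <j> payload per outcome, and the client-side code never branches (the ids are invented):

// success response: the record is gone, so the callback drops its table row
$('row_42').remove();

// failure response: the callback displays the reason PHP sent along in <h>
$('messages').update(html);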

Michael and I don’t always agree ( :wink: ) but what he’s describing here is an incredibly elegant solution.