
Thread: REST opinions

  1. #76
    SitePoint Zealot
    Join Date
    Mar 2004
    Location
    Netherlands
    Posts
    138
    It wouldn't surprise me to find that this is a widespread objection to the strict implementation of REST.
    Now that browser support for XMLHttpRequest is widespread, it's not hard to make RESTful web application interfaces. The main reason not everyone follows REST practices is plain ignorance, or a poor understanding of the benefits.

    And yes, in a closed controlled environment, where the application author controls both the server and all (types of) clients connecting to it, REST doesn't give much of an advantage.

    But this is almost never the case. Who uses custom web browsers on their intranet? The usability of an application improves if you use GET for search forms, because of the 'RESTful' behaviour of web browsers (they refuse to reissue a POST request without warning).
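    To make this concrete, here's a rough sketch of a search form that uses GET. The file name search.php and the parameter name q are made up for illustration, and the "search" itself is just an echo:
    Code:
      <?php
      // search.php -- illustrative only: a search form submitted via GET.
      // The query ends up in the URL, so the result page can be bookmarked,
      // cached, and reloaded freely; the browser never needs to show its
      // "resubmit POST data?" warning, because GET is safe by definition.
      $q = isset($_GET['q']) ? $_GET['q'] : '';
      ?>
      <form action="search.php" method="get">
        <input type="text" name="q" value="<?php echo htmlspecialchars($q); ?>" />
        <input type="submit" value="Search" />
      </form>
      <?php
      if ($q != '') {
          // read-only work only: a search never changes server state
          echo '<p>You searched for: ' . htmlspecialchars($q) . '</p>';
      }
      ?>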

  2. #77
    SitePoint Zealot
    Join Date
    Mar 2004
    Location
    Netherlands
    Posts
    138
    You are right about following standards. But I don't think it's fair to say "WWW should stay that way". Like all things, WWW must evolve.
    I don't agree with "The web must evolve". I think it would be better to say "The web WILL evolve." Any regulations (self-regulation by interested parties or enforced by the government) are just a part of that evolution.

    Road traffic has also evolved from chaotic to quite predictable. Think back in history, when there were only horses and horse-drawn carts, and no paved roads. Would the first road users in history have automatically kept to the right? I think this behaviour evolved from practical considerations. And it doesn't even matter much which side of the road you choose, as long as you choose one. The British do quite well on the left side of the road.

    Protocols are there for a reason...

    "Follow HTTP on port 80" is just like "keep right on the public road".
    There will be fewer accidents.

  3. #78
    throw me a bone ... now bonefry's Avatar
    Join Date
    Nov 2004
    Location
    Romania
    Posts
    848
    Quote Originally Posted by meryn
    Now that browser support for XMLHttpRequest is widespread, it's not hard to make RESTful web application interfaces.
    I don't agree with your claim. XmlHttpRequest was not part of the original WWW design, and it's not even a standard. And yes, wide browser support actually means only Internet Explorer + Mozilla + Opera (only partially). I have made AJAX components, and it's not a piece of cake.

    Quote Originally Posted by meryn
    Road traffic has also evolved from chaotic to quite predictable. Think back in history, when there were only horses and horse-drawn carts, and no paved roads. Would the first road users in history have automatically kept to the right?
    Your example is not correct. As cars evolved and engines grew in horsepower, so grew the necessity for bigger and better roads. And that's what HTTP is: a road with limited rules for the times we are living in.

  4. #79
    SitePoint Wizard DougBTX's Avatar
    Join Date
    Nov 2001
    Location
    Bath, UK
    Posts
    2,498
    Quote Originally Posted by bonefry
    Your example is not correct. As cars evolved and engines grew in horsepower, so grew the necessity for bigger and better roads. And that's what HTTP is: a road with limited rules for the times we are living in.
    Rather than the width of the road, think of GET and POST the same way you think about driving on the left or the right. And as the saying goes, "If all the traffic is heading towards you, you're in the wrong lane".

    Just because there are roads doesn't mean you can't use an aeroplane or a boat.

    At the same time, arguing about using PUT and DELETE goes in the "would be nice but sorry" box. Just send those two as POST data and be happy. And no one is stopping you from putting a query string in a POST request URI.

    Douglas
    Hello World

  5. #80
    SitePoint Zealot
    Join Date
    Mar 2004
    Location
    Netherlands
    Posts
    138
    At the same time, arguing about using PUT and DELETE goes in the "would be nice but sorry" box.
    I more or less agree with you.
    The advantages of using such specific methods are currently mostly theoretical.
    In terms of safety, you're 100% safe if you make a clear distinction between GET and POST. PUT and DELETE only make things 'a little nicer, a little smoother, a little faster'.

    There aren't many clients that can issue PUT or DELETE requests by themselves. Personally, I would love to see native browser support for this. I only know about the Mozile editor (a Firefox plugin), which can PUT a document you've edited back to the server.

    And no one is stopping you from putting a query string in a POST request URI.
    There isn't even a problem with that. Just don't respond to a GET request for that URI in the same way as if it had been a POST.

    I don't agree with your claim. XmlHttpRequest was not part of the original www design, and it's not even a standard.
    With "web application interfaces" I was referring to the kind where some application developer wants to issue POST, PUT or DELETE requests in a response to link being clicked. That's now possible, so there isn't a good technical excuse to use GET for actions like updating a table or deleting a row.

    As I said in my earlier post, you can always fall back on a form post. Then your HTTP API should understand a PUT or DELETE request 'tunneled' inside a POST. That's not hard to do. Some people suggest using 'method=put' or 'method=delete' inside the POST data (probably as a hidden field in your form), as in the sketch below.
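    A rough sketch of that tunneling idea (the hidden field name 'method' and the file name article.php are made up, and a real app would do real work instead of echoing):
    Code:
      <?php
      // article.php -- illustrative PUT/DELETE 'tunnel' over POST.
      // A plain HTML form can only POST, so a hidden field carries the verb:
      //   <input type="hidden" name="method" value="delete" />
      $method = $_SERVER['REQUEST_METHOD'];
      if ($method == 'POST' && isset($_POST['method'])) {
          $method = strtoupper($_POST['method']);  // honour the tunneled verb
      }
      $id = isset($_REQUEST['id']) ? (int) $_REQUEST['id'] : 0;
      switch ($method) {
          case 'GET':    echo "Showing article $id";    break; // safe, idempotent
          case 'PUT':    echo "Updating article $id";   break; // idempotent
          case 'DELETE': echo "Deleting article $id";   break; // idempotent
          case 'POST':   echo 'Creating a new article'; break; // neither
          default:       header('HTTP/1.1 405 Method Not Allowed');
      }
      ?>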

    Maybe it would be better if I (as a sort of REST advocate) stuck to stressing the GET and POST distinction, because my main interest lies in a 'safe' web (just like 'safe' roads). Adding PUT and DELETE to the mix may give you a little extra mileage in certain circumstances (which don't occur if you're mainly targeting browsers), so maybe it's not even worth the time to talk about it.

  6. #81
    SitePoint Zealot
    Join Date
    Mar 2004
    Location
    Netherlands
    Posts
    138
    I'll say it differently. Forget about REST altogether, it's pure theory.

    Just read the HTTP specification and stick to it, as if the rules were traffic rules. I hope everyone understands the advantages of driving on one side of the road. That's a protocol. It makes things better for all of us.

  7. #82
    throw me a bone ... now bonefry's Avatar
    Join Date
    Nov 2004
    Location
    Romania
    Posts
    848
    Quote Originally Posted by meryn
    I'll say it differently. Forget about REST altogether, it's pure theory.

    Just read the HTTP specification and stick to it, as if the rules were traffic rules. I hope everyone understands the advantages of driving on one side of the road. That's a protocol. It makes things better for all of us.
    You are right here.

  8. #83
    SitePoint Wizard DougBTX's Avatar
    Join Date
    Nov 2001
    Location
    Bath, UK
    Posts
    2,498
    Quote Originally Posted by meryn
    PUT and DELETE only make things 'a little nicer, a little smoother, a little faster'.
    'a little nicer, a little harder, a little slower'
    Hello World

  9. #84
    SitePoint Enthusiast
    Join Date
    Jan 2003
    Posts
    36
    @meryn:
    From my point of view, your interpretation of the HTTP spec is too literal and strict. I'll try to explain why I think so by offering some counter-arguments.

    1. The HTTP spec (http://www.w3.org/Protocols/rfc2616/rfc2616.txt) is primarily concerned with laying out the protocol with which clients and servers exchange messages in a request-response cycle. It speaks in rather abstract terms of the roles a client or server shall play to request resources. It is not a blueprint for how all web applications must behave. An application may sit on top of the server and process incoming requests as it sees fit, and delegate the response via the server back to the client.

    2. The spec differentiates, in the rules it lays out, between may, should and must. Because specs often evolve from a collaborative effort and are influenced by the ways people already use a technology in certain contexts, it is understandable that the spec leaves more room at some points to interpret a rule. If we now look at the definition of "Safe Methods" on page 50, we find

    In particular, the convention has been established that the GET and HEAD methods SHOULD NOT have the significance of taking an action other than retrieval.
    Please notice that the words "SHOULD NOT" have been used, as opposed to "MUST NOT". Quite a difference. The use of "SHOULD" implies that it's the recommended strategy for GET requests, but that there may be situations where it's okay to deviate from the spec. Having insufficient form/link controls in common browsers (i.e. you can't issue a POST request through a link) can be seen as a warranted case for implementing things a little differently. After all, should is not the same as must.

    3. Further down the spec another property of safe methods is discussed: They have to be idempotent. The spec understands idempotence as

    Methods can also have the property of "idempotence" in that (aside from error or expiration issues) the side-effects of N > 0 identical requests is the same as for a single request.
    (quoted from page 50) So if I now issue a GET request like this

    Code:
      GET /app/article.php?action=delete&id=23 HTTP/1.1
    and the result on the server would be that article no. 23 gets deleted and a result page gets delivered to the client, wouldn't that fulfill the requirement of idempotence? Because when I repeatedly issue this request, the result (and side-effect) is always the same: An article gets deleted, and a response page is delivered which tells me this fact.

    4. Being adamantly strict about GET having no side effects on the server is not very practical. I suppose that every correctly configured web server already produces a side effect on each GET request: namely a log entry in a file, database table or logger daemon. Which, strictly speaking (and perhaps a bit braindead), is actually the changing of a resource and has nothing to do with the retrieval of information.

    5. Arguing that we can now use AJAX to work around the inability of links to create POST requests astonishes me. I can remember some years ago, when JavaScript was not nearly as popular as it is now in the current AJAX hype. The prevalent opinion then was that JavaScript should in no case be used for critical parts of an application/website, and if at all, only to augment its functionality. The reasons given were varying browser support and those clients that can't or don't want to enable JavaScript, which should be presented with a working site as well. Frankly, I don't see why AJAX should be burdened with this task. Creating "safe" roads on the web must also ensure that sites can be served correctly for those without any special client-side technology besides an HTTP client.

    6. Following from my points above, I just want to say that I consider the side effects of the Google Web Accelerator to be broken and akin to a man-in-the-middle attack. It's an easy cop-out to blame web authors and application developers for not staying close to a de jure standard, when all they did (me included) was work with the de facto standard. In regard to the Accelerator: if one brings out a product that sits between servers and clients as a kind of pre-fetching proxy, one needs to carefully analyze the current situation and play along with the established rules, not blindly implement a spec.

    In the last few days there was a thread about how PHP 4.4 broke backwards compatibility by "fixing" references. It was often mentioned that this "fix" introduced errors in code people had been using for ages, even if the fix *might* be correct in the sense of internal engine semantics. IMO this situation is comparable to the GET-idempotence argument.

    7. Last but not least, I'd like to mention that I consider REST an intriguing idea and worth discussing, but backporting older applications to this scheme suffers, IMO, from great practicality issues.
    "It is a mistake to think you can solve any
    major problems just with potatoes." -Douglas Adams

  10. #85
    SitePoint Zealot
    Join Date
    Mar 2004
    Location
    Netherlands
    Posts
    138
    I'm doing this off the top of my head, so excuse me if I don't get the terminology exactly right.

    Regarding SHOULD NOT and MUST NOT: "SHOULD NOT" means there is a very good reason for not doing something, and you should weigh that against your reason for doing it anyway (disregarding the rule).
    In the driving analogy: you may sometimes choose to drive on the left side of the road, maybe to save lives. It could very well be that if you get a fine for that, and you can prove the absolute urgency of your action, a judge will waive the fine and won't take your driver's license away. It's all about circumstances.

    There's a difference between 'safe' and 'idempotent':
    Safe means you can issue the request without having to worry about anything changing on the server. (Access logs are outside the 'scope' of the application, so they don't matter in this discussion)
    Idempotent means it doesn't matter if you issue one or many of the same requests. The end result is the same.

    Safe includes idempotent (because nothing ever changes)
    Idempotent does not necessarily mean safe.

    GET and HEAD are safe and thus also idempotent.
    PUT and DELETE are both idempotent, but are definitely not safe.
    POST is neither idempotent nor safe.

    For more details: see the spec.
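    A tiny sketch of the difference, using a hypothetical in-memory 'store' (illustrative only, not a real API):
    Code:
      <?php
      // 'Safe' vs 'idempotent' in miniature.
      $articles = array(23 => 'REST opinions');

      // SAFE (and therefore idempotent): reading changes nothing.
      function get_article($articles, $id) {
          return isset($articles[$id]) ? $articles[$id] : null;
      }

      // IDEMPOTENT but NOT safe: state changes, yet repeating the
      // call leaves the server in the same end state.
      function delete_article(&$articles, $id) {
          unset($articles[$id]);
      }

      delete_article($articles, 23);
      delete_article($articles, 23);       // no further effect: same end state
      var_dump(get_article($articles, 23)); // NULL either way

      // A typical POST ("append a comment") is NEITHER safe nor
      // idempotent: two identical requests leave two comments behind.
      ?>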

    Creating "safe" roads in the web must also ensure that sites can be served correclty for those without a special client-side technology besides a HTTP client.
    A site can be served correctly AND conform to HTTP without javascript, but not as pretty as with javascript. Thus it comes down waying 'prettiness without js' against conforming to the rule of to only use GET for safe operations.

    If I were the judge, I'd say prettiness is not as important as using GET correctly.
    I also wouldn't let you drive on the left side of the road just because you were late for work.

    I perfectly understand that there's no 'judge' for these types of technical issues. I also understand that a judge could feel that some very pretty application should be exempted from HTTP conformance, but then it would have to be very, very pretty.

    IMO this situation is comparable to the GET-idempotence argument.
    Yes, the right thing to do 'technically' can be different from the right thing to do 'pragmatically'. I already included the term 'technically' in my argument about Google Web Accelerator...

    You know what I think about the Google WA team? I think they were stupid to assume that we're all 'law-abiding citizens'. But that doesn't keep me from hoping that one day all people will obey the law, and from trying to get there by explaining the law (that things are the way they are for a reason) to a (hopefully) interested public.
    Last edited by meryn; Jul 25, 2005 at 11:43. Reason: clarification in third sentence

  12. #87
    SitePoint Evangelist
    Join Date
    Jun 2003
    Location
    Melbourne, Australia
    Posts
    440
    Quote Originally Posted by meryn
    PUT and DELETE only make things 'a little nicer, a little smoother, a little faster'.

    There aren't many clients that can issue PUT or DELETE requests by themselves. Personally, I would love to see native browser support for this. I only know about the Mozile editor (a Firefox plugin), which can PUT a document you've edited back to the server.
    Unless web browsers offer content editors which adhere to the same standards, it won't happen soon.
    Quote Originally Posted by meryn
    With "web application interfaces" I was referring to the kind where some application developer wants to issue POST, PUT or DELETE requests in a response to link being clicked. That's now possible, so there isn't a good technical excuse to use GET for actions like updating a table or deleting a row.
    Through XmlHttpRequest? (Excuse my ignorance.) HTTP/1.1 (http://www.w3.org/Protocols/rfc2616/rfc2616.txt) was proposed in 1999. Fielding's dissertation was completed in 2000. When did XmlHttpRequest become available (not to mention widespread)?

    As we know, the form element is the only one with a method attribute, and the only two accepted values are GET and POST. It seems ludicrous to advocate the use of certain HTTP methods and not provide a means of invoking them in the most common kind of user agent. Would that not require that designers and developers be able to use some sort of markup to instruct the browser to invoke those methods? Has no one noticed this 'gap' before?
    Zealotry is contingent upon 100 posts and addiction 200?

  13. #88
    SitePoint Addict
    Join Date
    May 2003
    Location
    The Netherlands
    Posts
    391
    Quote Originally Posted by meryn
    Now that browser support for XMLHttpRequest is widespread, it's not hard to make RESTful web application interfaces. The main reason not everyone follows REST practices is plain ignorance, or a poor understanding of the benefits.
    I'd say that non-support of XmlHttpRequest is even wider ... meaning that the lowest common denominator between clients that support it and clients that don't is ... not supporting it. Pretty amazing that you advocate some kind of outdated standard and forget everything about accessibility, which IMO should be a priority.

    IMHO, your comparison with road traffic is also wrong. We're not discussing moving through the net, as traffic would imply, but rather how somebody who owns a gas station should behave in order to comply with road traffic rules. There is no such thing. Anybody should be free to decide which functionality they want to implement and how to do it. The HTTP protocol should regulate traffic, not what you do once you've left the traffic.

    I do agree with using GET in a non-intrusive way, but that should belong to some kind of best practices, and never be mandatory. The decision should be left to the server administrator/application developer.

    I don't think it is a security issue either. People are not likely to surf the net any "safer" because of the use you make of GET or POST. If GET or POST implied any kind of security, it would only be relevant to the server, never to the client.

    Quote Originally Posted by meryn
    I'll say it differently. Forget about REST altogether, it's pure theory.
    I think we all agree on this one

    Quote Originally Posted by mordred
    An application may sit on top of the server and process incoming requests as it sees fit, and delegate the response via the server back to the client.
    Couldn't agree more.

    Quote Originally Posted by mordred
    I can remember some years ago, when JavaScript was not nearly as popular as it is now in the current AJAX hype. The prevalent opinion then was that JavaScript should in no case be used for critical parts of an application/website, and if at all, only to augment its functionality. The reasons given were varying browser support and those clients that can't or don't want to enable JavaScript, which should be presented with a working site as well. Frankly, I don't see why AJAX should be burdened with this task. Creating "safe" roads on the web must also ensure that sites can be served correctly for those without any special client-side technology besides an HTTP client.
    I remember it too. Back then I was a client-side programmer and I used to build GUIs based on widgets with Dan Steinman's Dynapi. While very hip and nice, the moment came when, due to accessibility problems (not to mention how hard and expensive it was to develop cross-browser code, even with the Dynapi), all these concepts were dropped and reduced to a few enhancements for controlled environments such as intranets.

    Quote Originally Posted by meryn
    Safe means you can issue the request without having to worry about anything changing on the server.
    Again I fail to see why the client should bother about it. Should it not be up to the system administrator and/or the application developer?

  14. #89
    SitePoint Zealot
    Join Date
    Mar 2004
    Location
    Netherlands
    Posts
    138
    Through XmlHttpRequest? (Excuse my ignorance.) HTTP/1.1 (http://www.w3.org/Protocols/rfc2616/rfc2616.txt) was proposed in 1999. Fielding's dissertation was completed in 2000. When did XmlHttpRequest become available (not to mention widespread)?
    It wasn't Fielding who suggested that you could use XmlHttpRequest, it was me. And I suggested using it for a very specific purpose, namely to be able to create the 'visual appearance' of a link that triggers a PUT or DELETE action. I already said you could also trigger a JavaScript form post, or just use a form button (in case the client doesn't support JavaScript). Do you know about unobtrusive JavaScript and graceful degradation? This would be an excellent case for that.

    As I also said, I'm not specifically advocating the use of PUT and DELETE over the use of normal form posts, I'm just saying that through xmlhttprequest, it's possible.

    To be more specific: given the widespread support for JavaScript form posting in browsers, there isn't a good excuse for using GET requests for non-safe purposes.

    It seems ludicrous to advocate the use of certain HTTP methods and not provide a means of invoking them in the most common kind of user agent.
    That has nothing to do with HTTP, only with browser vendors and the W3C HTML committee. We're also still waiting for a better way to handle HTTP authentication (for example, allowing login through a form, but then without issuing a POST).

    I'd say that no-support of XmlHttpRequest is even wider
    IE support dates back to 5.0. I think easily 90% of all clients have support. Again: I'm not pushing this technology; I'm saying you can use it if you want to. But always consider graceful degradation; that's always important.

    Pretty amazing that you advocate for some kind of outdated standard and forget everything about accessibility, which IMO should be a priority.
    xmlhttprequest is not an outdated standard, it's an important part of the current evolution of web applications. Do GMail, Google Maps, etc. ring a bell?

    As for accessibility: there should never be a compromise between accessibility and interoperability. Often they even go hand in hand: try to get the first, and you get the second. Use unobtrusive JavaScript and allow for graceful degradation: your app should still work without JavaScript.

    The HTTP protocol should regulate traffic, not what you do once you've left the traffic.
    My 'road traffic' analogy was based on the fact that on the public road, you have many (not very meaningful) interactions with other drivers. In such a complex system, it's good to have clear rules about what is and isn't allowed, so that you don't have to manoeuvre around people driving on your side of the road.

    On the Internet, actual traffic (data transfer) is handled by TCP. That already works fine. The important thing is: the web (and search engines and caches, etc.) can't be built directly on TCP. Why? Because TCP places no restrictions on the semantics of the messages going from point to point. And TCP shouldn't have to.

    To add restrictions to the semantics of messages, we have all kinds of application protocols layered on top of the transfer protocol.

    The extra restrictions search engines need to be able to 'safely' crawl the web are provided by HTTP. The web needs HTTP. Without HTTP (or an architecturally similar protocol) we would just have the Internet, and no web.

    So an incoming HTTP request is not a meaningless message which can be interpreted any way you like. It has some very specific meaning: one part of it is that a GET request should be safe to issue, with no unintended consequences for the issuer of the request. It doesn't matter if the URL says "doevil.php?kill-all-people". The operation is still *GET*, not "doevil" or "kill-all-people", or anything else. A search engine could index such a page. Maybe it has a button on it which issues a POST, but a search engine won't issue POST requests. It's wiser than that.
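    As a defensive sketch of that point (the 'action' parameter name is made up): since the method carries the semantics, a server can simply refuse to do unsafe work on anything but POST:
    Code:
      <?php
      // Illustrative guard: never perform unsafe work in response to GET.
      $action = isset($_REQUEST['action']) ? $_REQUEST['action'] : 'view';
      $unsafe = in_array($action, array('delete', 'update', 'create'));

      if ($unsafe && $_SERVER['REQUEST_METHOD'] != 'POST') {
          // A crawler or prefetching proxy only ever issues GETs,
          // so it can never reach the unsafe branches below.
          header('HTTP/1.1 405 Method Not Allowed');
          exit;
      }
      // ... safe to dispatch on $action from here on ...
      ?>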

    but that should belong to some kind of best practices, and never be mandatory.
    Last time I checked, there were no plans for setting up an HTTP police force or an HTTP court, so I don't think you will actually be forced to do any such thing.

    However, the fact that there isn't any enforcement of a law doesn't mean there isn't a law. On the Internet, as netizens, we're supposed to adhere to the Internet protocols. Some protocols state that you MUST or MUST NOT do certain things. So that's mandatory I think.

    You SHOULD NOT use GET for non-safe purposes, so if you do, you have to have a very good reason for that. At least be aware of the consequences.

    I don't think it is a security issue either.
    I don't think so either. I never mentioned security. Safe (in the context of the HTTP spec) means that you won't get 'into trouble' for it. GET is safe because through that method, you don't express any intention of doing something on the server, so the server owner can't hold you accountable for damages.

    If you issue a POST however, you're basically saying 'process this message', so you should understand the message you're sending very well, and think about what the server might do in response to that message.

    An application may sit on top of the server and process incoming requests as it sees fit, and delegate the response via the server back to the client.
    Then you shouldn't use port 80. Port 80 is regulated space. Please use another port for custom RPC purposes.

    Creating "safe" roads in the web must also ensure that sites can be served correclty for those without a special client-side technology besides a HTTP client.
    Again: graceful degradation is your friend. You may be able to make the app look prettier when you have javascript, because you can change the form buttons into links. Again: I'm not saying you should use the visual appearance of a link to trigger a POST request, but with javascript, you can.

    Again I fail to see why the client should bother about it.
    The client may not always know what it's doing... It may not understand the meaning of the text inside a link, so it has to follow it blindly. It's up to the application developer to make sure that a client can issue a GET on each and every URL it encounters, without unintended consequences.

  15. #90
    throw me a bone ... now bonefry's Avatar
    Join Date
    Nov 2004
    Location
    Romania
    Posts
    848
    Quote Originally Posted by meryn
    Then you shouldn't use port 80. Port 80 is regulated space. Please use another port for custom RPC purposes.
    Ouch. I don't like this statement. That's the advantage web services have versus CORBA, i.e. the ability to use HTTP (on standard ports) to pass more easily through firewalls.

  16. #91
    SitePoint Addict
    Join Date
    May 2003
    Location
    The Netherlands
    Posts
    391
    Quote Originally Posted by meryn
    Do you know about unobtrusive JavaScript and graceful degradation? This would be an excellent case for that.
    As long as JavaScript is not supported by 100% of all existing user agents, there is no such thing as unobtrusive JavaScript (and please don't get me wrong, I love JavaScript).
    Quote Originally Posted by meryn
    To be more specific: given the widespread support for JavaScript form posting in browsers, there isn't a good excuse for using GET requests for non-safe purposes [...]
    IE support dates back to 5.0. I think easily 90% of all clients have support. Again: I'm not pushing this technology; I'm saying you can use it if you want to. But always consider graceful degradation; that's always important.
    Again, widespread is a very relative adjective. 10% of all clients are still A LOT OF THEM ...
    Quote Originally Posted by meryn
    xmlhttprequest is not an outdated standard, it's an important part of the current evolution of web applications. Do GMail, Google Maps, etc. ring a bell?
    They do, but I was referring to the HTTP protocol itself not to XmlHttpRequest.
    Quote Originally Posted by meryn
    As for accessibility: there should never be a compromise between accessibility and interoperability. Often they even go hand in hand: try to get the first, and you get the second. Use unobtrusive JavaScript and allow for graceful degradation: your app should still work without JavaScript.
    With all due respect, I think you're looking at it from the opposite perspective. You should only use JavaScript (or any other client technology, for that matter) to enhance functionality, not to provide basic functionality. You refer all the time to search engines: do they support JavaScript? If REST relies on a client-side technology to be fully implemented, IMHO it's not worth it.
    Quote Originally Posted by meryn
    My 'road traffic' analogy was based on the fact that on the public road, you have many (not very meaningful) interactions with other drivers. In such a complex system, it's good to have clear rules about what is and isn't allowed, so that you don't have to manoeuvre around people driving on your side of the road.

    On the Internet, actual traffic (data transfer) is handled by TCP. That already works fine. The important thing is: the web (and search engines and caches, etc.) can't be built directly on TCP. Why? Because TCP places no restrictions on the semantics of the messages going from point to point. And TCP shouldn't have to.

    To add restrictions to the semantics of messages, we have all kinds of application protocols layered on top of the transfer protocol.

    The extra restrictions search engines need to be able to 'safely' crawl the web are provided by HTTP. The web needs HTTP. Without HTTP (or an architecturally similar protocol) we would just have the Internet, and no web.

    So an incoming HTTP request is not a meaningless message which can be interpreted any way you like. It has some very specific meaning: one part of it is that a GET request should be safe to issue, with no unintended consequences for the issuer of the request. It doesn't matter if the URL says "doevil.php?kill-all-people". The operation is still *GET*, not "doevil" or "kill-all-people", or anything else. A search engine could index such a page. Maybe it has a button on it which issues a POST, but a search engine won't issue POST requests. It's wiser than that.
    First of all, IMHO, putting up restrictions defeats the sense of freedom of the Internet. We should be discussing guidelines, not restrictions. Second, a request will never have unintended consequences for the issuer; if anything, there would be implications for the server, and that's what I mean when I say the HTTP protocol should not care about it. I think that the HTTP protocol should start where a request starts and end where the request ends. How that request is handled by the server should be up to the server itself. I do embrace best practices and standards, but they should not impose any unnecessary burden.

    Quote Originally Posted by meryn
    I don't think so either. I never mentioned security. Safe (in the context of the HTTP spec) means that you won't get 'into trouble' for it.
    I still think that should be up to the server admin/app developer to decide. For the client, any request is equally safe, so why should he/she care?

    Quote Originally Posted by meryn
    The client may not always know what it's doing... It may not understand the meaning of the text inside a link, so it has to follow it blindly. It's up to the application developer to make sure that a client can issue a GET on each and every URL it encounters, without unintended consequences.
    Again, I don't see how this can be an issue for the client. A client makes a request and the request is processed. If that request meant that all the resources on the server got deleted, that would be the admin's problem, and it would never have any consequences for the client beyond an error message or a blank page.

    I don't understand Nordic languages either, but that doesn't mean everybody should publish their content at least in English so that everyone gets the chance of reaching the content in a common language.

    2x.01

  17. #92
    SitePoint Zealot
    Join Date
    Mar 2004
    Location
    Netherlands
    Posts
    138
    Nacho: I don't think there is anything more I can say to explain what I mean. Maybe my English isn't good enough, maybe I use the wrong examples, I don't know.

    All I can say is that putting restrictions on the usage of a common good (be it the public road or TCP port 80) can be a good thing, because then people get along better: they don't have to negotiate a mutual understanding every time they meet.

    The added value of a protocol lies in the restrictions it imposes. Otherwise we might as well send whatever bits over the line 'as we see fit'.

    Is there someone here who does understand what I'm saying? Otherwise it's no use posting anything else about this...

  18. #93
    throw me a bone ... now bonefry's Avatar
    Join Date
    Nov 2004
    Location
    Romania
    Posts
    848
    OK, I found the time to read about REST.

    The REST philosophy is actually aimed at RPC, as a way to simplify web services. Action in an RPC environment is based on verbs (i.e. describing data types and calling procedures/objects to return those data types), whereas in REST it is based on nouns (meaning that parameters are sent through GET or POST requests).

    I also think many of us (at least me) have used this approach without knowing it has a name.

    The whole discussion is on a different track though, because here we have mostly discussed web application design, not web services. And although this has brought some attention to the misuse of GET and POST, that's not the point of REST.

  19. #94
    SitePoint Zealot
    Join Date
    Mar 2004
    Location
    Netherlands
    Posts
    138
    Web applications and (lowercase) web services are more or less the same. The same principles apply.

    REST is a generalization of the architectural properties of the WWW. It basically tells you WHY the web is so successful. It turns out to be the combination of HTTP and hypertext.

    RPC doesn't have standardized verbs, so there's no common agreement that a certain type of request is 'safe' or 'idempotent'. RPC is more general, and that makes it less suitable for large-scale interactions between many different clients and services.
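    To sketch the contrast (the URIs below are made up for illustration):
    Code:
      # RPC style: each service invents its own verbs, so a cache, proxy
      # or crawler cannot know whether a call is safe or idempotent
      POST /articleService?method=deleteArticle&id=23 HTTP/1.1

      # REST style: standard verbs act on nouns, so any intermediary
      # already knows the rules
      GET /articles/23 HTTP/1.1       # safe and cacheable
      DELETE /articles/23 HTTP/1.1    # idempotent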

    I (and others who understand and like REST) believe that we can repeat the success of 'web services for human consumption' with 'web services for automated consumption', if we stick to REST principles. Basically this means making web services that follow the HTTP protocol.

    I believe the people behind SOAP/WSDL-based web services are misguided, because those are based on RPC. It's like programming against custom interfaces.
    There's nothing wrong with interfaces on a small scale (for example, inside an application, where you control both the implementation and the consumption of the interface), but it doesn't work as well on a large scale.

  20. #95
    SitePoint Addict
    Join Date
    May 2003
    Location
    The Netherlands
    Posts
    391
    Quote Originally Posted by meryn
    Nacho: I don't think there is anything more I can say to explain what I mean. Maybe my English isn't good enough, maybe I use the wrong examples, I don't know.
    Don't you worry, meryn, I do understand you, I just don't agree with all of your arguments, is that possible?
    Quote Originally Posted by bonefry
    OK, I found the time to read about REST.
    Well, I admit that is something I still have to do. Not much time for the 180-page dissertation at the moment. My opinions are merely based on this thread, so I'll shut up from now on.

    Friends?

  21. #96
    throw me a bone ... now bonefry's Avatar
    Join Date
    Nov 2004
    Location
    Romania
    Posts
    848
    Web applications and (lowercase) web services are more or less the same. The same principles apply.
    No, not really. A web application also has a presentation tier, and it may or may not give direct access to its business logic. And that's a huge difference.

    Basically this means making web services that follow the HTTP protocol.
    See, that's the point. You weren't discussing web services, but web applications with login systems and stuff...

    I believe the people behind SOAP/WSDL-based web services are misguided, because those are based on RPC. It's like programming against custom interfaces.
    SOAP/WSDL is the wrong approach because it's so fuc*ing hard to implement. And SOAP advocates promote tools that don't exist.

  22. #97
    SitePoint Evangelist
    Join Date
    Jun 2003
    Location
    Melbourne, Australia
    Posts
    440
    Quote Originally Posted by meryn
    It wasn't Fielding who suggested that you could use XmlHttpRequest, it was me.
    I didn't intend my post to read as if I were attributing to Fielding what is rightfully yours, Meryn. My point is that, some years after the emergence of REST as a body of theory, we still have no web browser that can use all the methods we might suppose would be useful, taking those ideas into account.
    Quote Originally Posted by meryn
    That has nothing to do with HTTP, only with browser vendors and the W3C HTML committee.
    I have to admit that I haven't read all the literature (and in these days of the web, that would take a lifetime!), but I have never ever seen anyone ask a question like, "Wouldn't it be great if we could get a web browser to issue a PUT or DELETE?" The reason I stress this is not because I feel I should commit to a REST approach, but because I like the intention behind the PUT and DELETE methods. I can't help thinking how useful and simple it would be (for us developers, anyway) if browsers could be made to issue these methods. That's why in an earlier post I referred to the PHPRestSQL project.

    And if we wanted to give developers the facility to select which methods are issued, it would have to be in the HTML markup, would it not? Yes, I guess that's for the W3C. Has no one there even considered this?
    Zealotry is contingent upon 100 posts and addiction 200?

  23. #98
    SitePoint Member
    Join Date
    Oct 2004
    Location
    malaysia
    Posts
    18
    One of the goals of the REST model in Fielding's dissertation is to identify problems in current protocols and implementations, in order to fix them in the immediate future.

    (More details: http://www.ics.uci.edu/~fielding/pub...h_icse2000.pdf)

    In this thread, the discussion has been more about how web developers deal with a single client-server web application. Indeed, when building a single web application or web service, customized protocols (e.g. using an "?action=" query string) between client and server software might serve very well, and we don't have to read about HTTP standards or REST.

    But when it comes to communication between your web application and another web application, or among web services (e.g. the ATOM project), uniform interfaces and standard protocols become more essential.

    It would be fine if the ATOM project just created new customized protocols and tunneled transactions through HTTP. However, that way, other web applications would need to learn a whole new protocol in order to "talk" with ATOM.

    ATOM is designed to live in the Web, and ATOM chose to use the same protocol that the Web uses. So now, any web application that lives in the Web can communicate with ATOM easily.

    It's harder to build a "Tower of Babel" without uniform interfaces. The Web is built on the REST model, and we all live in this Web. So we either rebuild a new Web with a desired new model and desired protocols, or we study and fix the REST model for further improvement.

    BTW, I found a nice intro to REST: http://naeblis.cx/rtomayko/2004/12/12/rest-to-my-wife

  24. #99
    SitePoint Guru BerislavLopac's Avatar
    Join Date
    Sep 2004
    Location
    Zagreb, Croatia
    Posts
    830
    Quote Originally Posted by bonefry
    SOAP/WSDL is the wrong approach because it's so fuc*ing hard to implement. And SOAP advocates promote tools that don't exist.
    I don't know about your experience, but I have very easily created SOAP services in Java, PHP 4 (via NuSOAP) and PHP 5.

  25. #100
    throw me a bone ... now bonefry's Avatar
    Join Date
    Nov 2004
    Location
    Romania
    Posts
    848
    Quote Originally Posted by BerislavLopac
    I don't know about your experience, but I have very easily created SOAP services in Java, PHP 4 (via NuSOAP) and PHP 5.
    Yes, of course it's easy when you use an interface that hides all the details. But I don't think you know what's under the hood. And if you do, you're the man.

    Think of it this way: it's like those damn ORM classes for database access. It all works fine until you have a complex query to execute, and you end up writing the query manually.

