Thread: Please Clarify XHTML 1.0?
May 11, 2009, 22:49 #26
Join Date: Nov 2004 | Location: Ankh-Morpork | Posts: 12,158
But if you're not serving it as X(HT)ML, then you're not using XHTML at all. Any arguable benefits go right out the window; you're using invalid HTML, nothing more, nothing less.
The only 'benefit' left is that a validator – should you remember to use one – will point out some completely unnecessary tags that you may have missed.
Anything with draconian error handling is unlikely to have any appeal for ordinary users. Most authors are not programmers, after all. They think HTML Transitional is too strict, for Pete's sake!
One word: Dreamweaver. Lots of people use it and the default setting (IIRC) is to upload on save.
That may work if you publish a page every other fortnight or so, but not if you have a busy news site where dozens of people publish a hundred articles a day. They'll use a CMS, of course, but that CMS had better guarantee well-formed markup if it's using XHTML...
I'm not forgetting it, and it is common sense for a programmer. But it's not necessarily so for a copywriter. Spellchecking? Yes, but that's about it.
No more than HTML needs validation. XHTML needs to be well-formed, though. Browsers will have to cope with invalid XHTML as long as it's well-formed. For instance, <span><h2>I'm Stupid</h2></span>.
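The distinction can be demonstrated mechanically: an XML parser accepts a well-formed fragment regardless of whether the markup is valid. A minimal sketch in Python (checking well-formedness only, not validity against a DTD; the helper name is my own):

```python
from xml.etree import ElementTree

def is_well_formed(fragment: str) -> bool:
    """Return True if the fragment parses as XML, i.e. is well-formed."""
    try:
        ElementTree.fromstring(fragment)
        return True
    except ElementTree.ParseError:
        return False

# Well-formed but invalid XHTML: a block-level <h2> inside an inline <span>.
print(is_well_formed("<span><h2>I'm Stupid</h2></span>"))  # True
# Not well-formed: overlapping tags.
print(is_well_formed("<b><i>overlap</b></i>"))             # False
```

An XML-mode browser would render the first fragment without complaint and abort with a parse error on the second; only a validator would flag the first.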
No, I wouldn't deploy an application without checking it first. Why? Because debugging, fixing the error, recompiling, testing and redeploying would take a long time, during which the app would be inaccessible to my users. In a professional environment, it would also normally have to go through several testing stages, and the programmer wouldn't be allowed to deploy his/her own app to the acceptance test server or the production server.
If there's a minor error in document markup, fixing it and re-uploading it is a matter of seconds. That's why content writers may not apply the same rigorous deployment procedures as programmers.
That's the wrong analogy. XHTML simply adds a few bells and pink ribbons to the katana blade, in the form of sprinkled slashes and unnecessary end tags.

Birnam wood is come to Dunsinane
May 12, 2009, 00:12 #27
Join Date: Sep 2005 | Location: Sydney, NSW, Australia | Posts: 16,875
I disagree that those tags are completely unnecessary. Having them there can save a lot of time in future maintenance, because the page source is much easier to read. They are tags that a browser may have to treat as optional, given all the poorly written pages out there, but that ought to be included in any properly written page.

One of the problems with the W3C standards is that they are standards for the browser writers, and therefore leave out the other 70% or so of the rules that web page authors ought to be following.

I don't think anyone has suggested that non-programmers should be expected to follow such strict rules. In any case, the XHTML rules are relatively lenient compared with what a few small typos can do in proper programming code, so programmers are used to being far more precise in their code than other people are.

Validating against XHTML instead of HTML, even for a page served as HTML, is worth it just for identifying whether you have left out or misplaced any of those extremely useful tags that make the page so much easier to maintain, the very tags you mistakenly suggested are unnecessary.

Stephen J Chapman
javascriptexample.net, Book Reviews, follow me on Twitter
HTML Help, CSS Help, JavaScript Help, PHP/mySQL Help, blog
<input name="html5" type="text" required pattern="^$">
May 12, 2009, 00:22 #28
Join Date: Nov 2004 | Location: Ankh-Morpork | Posts: 12,158
They are technically unnecessary, since their presence can be implied – unambiguously. Thus a user agent doesn't need them. I agree that they are useful for human readers, though. And that's why I do use them, even though they're unnecessary.
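This can be seen with a tokeniser: there is no ambiguity about where the first paragraph ends, because a <p> element cannot contain another <p>. A small sketch using Python's html.parser, which reports only the tags actually written; a browser's tree builder then infers the missing end tag:

```python
from html.parser import HTMLParser

class TagLogger(HTMLParser):
    """Record start- and end-tag events exactly as they appear in the source."""
    def __init__(self):
        super().__init__()
        self.events = []

    def handle_starttag(self, tag, attrs):
        self.events.append(("start", tag))

    def handle_endtag(self, tag):
        self.events.append(("end", tag))

logger = TagLogger()
logger.feed("<p>First paragraph<p>Second paragraph</p>")
print(logger.events)  # [('start', 'p'), ('start', 'p'), ('end', 'p')]
```

The second ("start", "p") event is an unambiguous signal that the first paragraph has ended, even though no matching </p> was ever written.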
Omitting tags was considered useful back when SGML and HTML first came about. Bandwidth was precious, as was storage space and memory. Why write any more than you have to?
These days those concerns are mainly moot, except perhaps for some mobile devices. The only reason to omit tags today is probably laziness.
I've said it before, but I'll say it again: validators should only look for syntax errors; if you want something to enforce what current fashion decides is 'best practice' you should use something like lint(1).
Omitting a </p> tag in HTML is not an error, because it is 100% clear where the paragraph ends, even without the end tag. Using an explicit end tag helps human readers, though, which is why it might be a good idea to write them out, but they are not strictly necessary for parsing the document. And HTML documents are mainly intended to be read by machines, not people.

Birnam wood is come to Dunsinane
May 12, 2009, 01:51 #29
Join Date: Sep 2005 | Location: Sydney, NSW, Australia | Posts: 16,875
Except when you are the author of the page, working in an editor where you edit the tags directly, and you want the code to be as easy to read, and hence as easy to maintain, as possible.
Of course, it would be possible to write programs to convert back and forth between browser-optimised HTML, with all the optional tags stripped out, and author-optimised XHTML. Then we'd have the best of both worlds. I wonder why such a useful program has never been produced (or, if it has, why it hasn't been properly publicised). Even just a lint program that reads XHTML and spits out web-optimised HTML would be useful.
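A converter like that would not be hard to prototype. Here is a deliberately naive sketch in Python (regex-based and purely illustrative; a real tool would need a proper parser to cope safely with attributes, scripts and CDATA sections):

```python
import re

# End tags that HTML 4.01 allows you to omit (a small subset, for illustration).
OPTIONAL_END_TAGS = ("p", "li", "td", "tr", "option")

def xhtml_to_lean_html(markup: str) -> str:
    """Strip XHTML-isms to produce leaner, browser-optimised HTML."""
    # Collapse self-closing syntax: <br /> becomes <br>.
    markup = re.sub(r"\s*/>", ">", markup)
    # Drop the end tags that HTML lets you omit.
    for tag in OPTIONAL_END_TAGS:
        markup = markup.replace(f"</{tag}>", "")
    return markup

print(xhtml_to_lean_html("<ul><li>one</li><li>two</li></ul><br />"))
# <ul><li>one<li>two</ul><br>
```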
Equally useful would be porting all the useful parts of XHTML 1.0 back into HTML and calling it HTML 4.1. All it would need is a new doctype to identify what follows as real HTML rather than HTML with errors. Nothing in the actual code, or in its validation, would need to change from what currently exists for XHTML 1.0, except that it would no longer be necessary to use pretend HTML in order to do the logical thing and close all tags.

Stephen J Chapman
May 12, 2009, 08:53 #30
Join Date: Feb 2009 | Location: England, UK | Posts: 8,111
I have stated my preference for XHTML in many threads; however, unlike many, I serve my website correctly: XHTML 1.1 where browsers support it, and HTML 4.01 for IE and the rest. I happen to like the draconian error handling, as I come from a programming background and the thought of syntax errors or invalid code makes my spine crawl.

Either way, the way I choose to look at it is this: browsers which support XHTML get the full functionality, and browsers like IE, which cannot handle XHTML, get a small but visible message at the top of the window (a little yellow highlighted popup) saying: "This website could perform better, but Internet Explorer currently does not have the ability to take advantage of this website effectively. To see this site in all its glory, perhaps try another browser!" PHP handles the checks and triggers to make sure those that support it get what they can. It is simple to implement, and best of all it means I am not inhibited in what code I implement, as I know my website will outright work (progressive enhancement) on a variety of platforms.
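The server-side check described above can be sketched in a few lines. An illustrative Python version (the poster used PHP; the function name and the simplified Accept handling here are my assumptions):

```python
def choose_content_type(accept_header: str) -> str:
    """Pick a MIME type from the request's Accept header (simplified sketch)."""
    if "application/xhtml+xml" in accept_header:
        return "application/xhtml+xml"  # serve the page as XHTML 1.1
    return "text/html"                  # fall back to HTML 4.01

# IE never advertised application/xhtml+xml in its Accept header,
# so it falls through to the HTML branch:
print(choose_content_type("*/*"))  # text/html
print(choose_content_type("application/xhtml+xml,text/html;q=0.9"))
# application/xhtml+xml
```

A real implementation would also emit the matching doctype and, in the HTML branch, the warning banner the post describes.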
May 12, 2009, 09:56 #31
Join Date: May 2006 | Location: Lancaster University, UK | Posts: 7,062
Quote: "I know my website will outright work"

Until a well-formedness error slips through; then the user wouldn't be able to view your site at all.

Jake Arkinstall
"Sometimes you don't need to reinvent the wheel;
Sometimes it's enough to make that wheel more rounded" - Molona
May 12, 2009, 10:53 #32
Join Date: Nov 2004 | Location: Ankh-Morpork | Posts: 12,158
I assume Alex examines the Accept HTTP header, rather than rely on primitive browser sniffing. That's what I do on my blog.
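Examining the Accept header properly means honouring the q-values, not just substring-matching. A small sketch in Python; the header string below is an approximation of what Safari and Chrome sent at the time:

```python
def parse_accept(header: str) -> dict:
    """Map each media type in an Accept header to its q-value (default 1.0)."""
    prefs = {}
    for part in header.split(","):
        fields = part.strip().split(";")
        q = 1.0
        for param in fields[1:]:
            name, _, value = param.strip().partition("=")
            if name == "q":
                q = float(value)
        prefs[fields[0].strip()] = q
    return prefs

prefs = parse_accept("application/xml,application/xhtml+xml,"
                     "text/html;q=0.9,*/*;q=0.8")
# application/xhtml+xml carries q=1.0 but text/html only 0.9, so a
# q-value comparison hands these browsers the XHTML version.
print(prefs["application/xhtml+xml"] > prefs["text/html"])  # True
```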
That has the unfortunate effect of serving XHTML to Safari and Chrome, which state that they prefer application/xhtml+xml over text/html, yet they don't fully support XHTML. Oh well, that's not really my fault, is it?

Birnam wood is come to Dunsinane
May 12, 2009, 22:05 #33