Better semantics, worse performance?


The article Better Semantics with CSS Combinators & Selectors by Chris Sealey on Design Festival talks about using selectors like these:

h1 + p::first-line { font-variant: small-caps; }


#page > * { width: 100%; }


a[href$=".pdf"]::before { background-image: url(images/icon-pdf.png); }


<ul>
  <li>List Item 1</li>
  <li>List Item 2</li>
  <li>List Item 3</li>
  <li>List Item 4</li>
  <li>List Item 5</li>
  <li>List Item 6</li>
</ul>
ul li { border-top: 1px solid #DDD; } 
ul li:last-child { border-bottom: 1px solid #DDD; } 
ul li:nth-child(even) { background: #EEE; }

However, Google's Optimize Browser Rendering guidelines recommend against all of these CSS techniques.

To be fair, the author does say to use these in limited situations and as a form of progressive enhancement. I'm not criticizing the article either, as it is well written, but even the comments at the end of the blog post don't help in making an educated decision about the trade-off.

Using these techniques:

  1. creates cleaner CSS
  2. means less CSS overall
  3. is possibly more semantic

However, according to Google's article, the big trade-off is performance.

So is the performance cost too high to justify these techniques, or in your opinion are more specific ways of selecting elements better?



I think for general use it will make little difference to the average website. Steve Souders has an interesting article on the subject here.

There’s probably a tipping point somewhere along the line where the page starts getting large and things begin to slow down, but I am finding that some of the new CSS3 features, such as gradient filters, can slow Gecko to a crawl even on small pages, and those need more consideration than some of the selectors mentioned above.

I would avoid the universal selector, and I very seldom see a need for the child combinator either. The adjacent sibling combinator is also heavily dependent on the structure never changing, and as we all know that is rarely the case, so I would be very careful about using it on a dynamic site.
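That fragility is easy to demonstrate: `h1 + p` only matches a `p` whose immediately preceding sibling is the `h1`, so inserting anything between them silently kills the rule. A hypothetical sketch, modelling siblings as a flat list of tag names (the `h1PlusPMatches` helper is mine, purely for illustration):

```javascript
// Does any <p> in this flat sibling list match "h1 + p"?
// The adjacent sibling combinator only fires when the p's
// IMMEDIATE previous sibling is the h1.
function h1PlusPMatches(siblings) {
  return siblings.some(
    (tag, i) => tag === "p" && siblings[i - 1] === "h1"
  );
}

console.log(h1PlusPMatches(["h1", "p", "p"]));        // true
// A CMS drops a date <div> in after the heading and the rule dies:
console.log(h1PlusPMatches(["h1", "div", "p", "p"])); // false
```

No error, no warning; the styling just quietly stops applying, which is exactly the dynamic-site risk Paul describes.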

It would be nice to see more quantifiable results, but I doubt the difference would be noticed on most average sites, and better savings could probably be made in other areas first.

Hi Paul,

Thanks for your valued perspective.

The article by Steve Souders shed further light on my question, so thanks for forwarding it.

I notice that you don't often use child, adjacent, or universal selectors in your code, but I guess once one's knowledge and proficiency in CSS reaches a certain point, the benefits of these less specific, less targeted ways of selecting don't matter as much.


That’s where testing comes into play. If things are fine on the site it’s for – don’t worry about it.

There’s this habit of saying “NEVER USE THOSE” just because it’s slow on one site – what’s slow on one could be blazingly fast on another – that’s CSS in a nutshell; what works in one place isn’t necessarily good in another.

CSS3 gradients are a great example – use them once or twice on small elements with no extra effects attached and it’s fine. Layer three or four of them with alpha transparency, rounded corners and box-shadows, and just trying to scroll the page in Gecko becomes a painful affair (especially as the redraw takes so long you can see it “flash” on and off, even on an i7).

Some of the ones listed I don’t see there being issues with… :first-child and :last-child, for example, should have fast paths for targeting inside the browser. Those shouldn’t be any slower than using classes. As a programmer, I would actually expect them to be FASTER than a class – classes have to be tested against every element, or against a list of elements with that class (depending on the implementation). first and last should already exist in the DOM as pointers! You’ve grabbed the parent element; it should already HAVE pointers to first-child and last-child… otherwise how would it know what its first child is or where to add new ones?!?
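The pointer argument above can be sketched with a hypothetical toy parent element (all names here are made up for illustration, not taken from any real DOM implementation): resolving a :last-child style lookup is one pointer dereference, while a class lookup has to test children one by one.

```javascript
// Toy parent element keeping explicit child pointers, the way an
// implementation must in order to know where to append new nodes.
function makeParent(children) {
  return {
    children,
    firstChild: children[0] || null,                    // already a pointer
    lastChild: children[children.length - 1] || null,   // already a pointer
  };
}

// :last-child style lookup: O(1), just follow the stored pointer.
function lastChild(parent) {
  return parent.lastChild;
}

// Class-style lookup: O(n), every child has to be tested.
function byClass(parent, cls) {
  return parent.children.filter((c) => c.className === cls);
}

const ul = makeParent([
  { tag: "li", className: "first" },
  { tag: "li", className: "" },
  { tag: "li", className: "last" },
]);

console.log(lastChild(ul).className);    // "last"
console.log(byClass(ul, "last").length); // 1
```

Whether real engines actually take the O(1) path for these pseudo-classes is an implementation detail, but the data is clearly sitting there either way.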

Others, like sibling selectors… I kind-of feel like they just make things more complicated than they need to be… BUT they shouldn’t take any more or less time than descendant selectors – parsing a chained list vs parsing a list of children – net change zero!

In a lot of ways though, the talk about browser rendering speed on things like selectors ends up a bit like the Anti-table mafia’s arguments about the “speed of rendering tables”… if a 386/40 running IE4 on Win 3.1 could handle a table, it’s a BS thing to worry about when the average handheld has a 600mhz to 1ghz processor… and a $200 nettop is at 1.6ghz with hyperthreading.

Selectors should be much the same thing – that Google page falls into that category on everything except :hover… and that’s actually NOT because it slows the initial page render, but because it takes time to render the state change – which means for small changes it’s fine; major changes with giant slabs of content – probably not so much.

As with anything else, it’s more about what you’re using it for than actually “using it or not using it”.