I admittedly don’t think about this idea very often… how efficient is the CSS that we write, in terms of how quickly the browser can render it?
This is definitely something that browser vendors care about (the faster pages load the happier people are using their products). Mozilla has an article about best practices. Google is also always on a crusade to make the web faster. They also have an article about it.
Let’s cover some of the big ideas they present, and then discuss the practicalities of it all.
Right to Left
One of the important things to understand about how browsers read your CSS selectors is that they read them from right to left. That means that in the selector ul > li a[title="home"], the first thing interpreted is a[title="home"]. This first part is also referred to as the "key selector", in that ultimately it is the element being selected.
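As a rough sketch (the exact steps vary between rendering engines), the matching for that selector goes something like this:
ul > li a[title="home"] { }
/* 1. Find every a[title="home"] on the page — the key selector */
/* 2. For each match, look for an ancestor li */
/* 3. Check that the li is a direct child of a ul */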
IDs are the most efficient, universal selectors are the least
There are four kinds of key selectors: ID, class, tag, and universal. That is also the order of how efficient they are.
#main-navigation { } /* ID (Fastest) */
body.home #page-wrap { } /* ID */
.main-navigation { } /* Class */
ul li a.current { } /* Class */
ul { } /* Tag */
ul li a { } /* Tag */
* { } /* Universal (Slowest) */
#content [title='home'] { } /* Universal */
When we combine this right-to-left idea, and the key selector idea, we can see that this selector isn’t very efficient:
#main-nav > li { } /* Slower than it might seem */
Even though that feels weirdly counter-intuitive… Since IDs are so efficient, we would think the browser could just find that ID quickly and then find its li children quickly. But in reality, the relatively slow li tag selector is run first.
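One way around that (a sketch on my part, not an official recommendation) is to put a class directly on the elements you actually want, so the key selector does the narrowing itself:
#main-nav > li { }   /* key selector is the generic li */
.main-nav-item { }   /* hypothetical class on each li; the key selector is now a class */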
Don’t tag-qualify
Never do this:
ul#main-navigation { }
IDs are unique, so they don’t need a tag name to go along with them. Doing so only makes the selector less efficient.
Don’t do it with class names either, if you can avoid it. Classes aren’t unique, so theoretically you could have a class name do something that could be useful on multiple different elements. And if you wanted to have that styling be different depending on the element, you might need to tag-qualify (e.g. li.first), but that’s pretty rare, so in general, don’t.
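For example (assuming a .first class we control), the tag-qualified and unqualified versions look like this:
li.first { }   /* tag-qualified class */
.first { }     /* usually all you need, and quicker to match */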
Descendant selectors are the worst
David Hyatt:
The descendant selector is the most expensive selector in CSS. It is dreadfully expensive — especially if the selector is in the Tag or Universal Category.
In other words, a selector like this is an efficiency disaster:
html body ul li a { }
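A leaner alternative (a sketch, assuming you are free to add a class to the markup) is to target the links directly:
.content-link { }   /* hypothetical class applied straight to those anchors */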
A selector that fails is more efficient than that same selector matching
I’m not sure if there is much we can learn from this, because if you have a bunch of selectors in your CSS that don’t match anything, that’s, uhm, pretty weird. But it’s interesting to note that in the right-to-left interpretation of a selector, as soon as it fails a match, it stops trying, and thus expends less energy than if it needed to keep interpreting.
Consider why you are writing the selector
Consider this:
#main-navigation li a { font-family: Georgia, Serif; }
Font-family cascades, so you may not need a selector that is that specific to begin with (if all you are doing is changing the font). This may be just as effective, and far more efficient:
#main-navigation { font-family: Georgia, Serif; }
CSS3 and Efficiency
Kind of sad news from David Hyatt:
The sad truth about CSS3 selectors is that they really shouldn’t be used at all if you care about page performance.
The whole comment is worth reading.
CSS3 selectors (e.g. :nth-child) are incredibly awesome in helping us target the elements we want while keeping our markup clean and semantic. But the fact is these fancy selectors are more browser resource intensive to use.
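For example (a sketch, assuming you can add classes in the markup or server-side), zebra-striping could be done either way:
li:nth-child(odd) { background: #eee; }   /* clean markup, but costlier to match */
.odd { background: #eee; }                /* same effect, with a class on every other li */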
So what’s the deal, should we actually not use them? Let’s think about practicalities a bit…
Practicalities
That Mozilla article I linked to at the top? Literally 10 years old. Fact: computers were way slower 10 years ago. I have a feeling this stuff was more important back then. Ten years ago I was about to turn 21 and I don’t think I even knew what CSS was, so I’m not going to get all old school on you… but I have a feeling the reason we don’t talk about this rendering efficiency stuff very much is that it’s not that big of a problem anymore.
This is how I’m feeling about it: the best practices we covered above make sense no matter what. You might as well follow them, because they don’t limit your abilities with CSS anyway. But you don’t have to be all dogmatic about it. If you happen to be in the position where you need to eke out every last drop of performance from a site and you have never considered this stuff before, it may be worth revisiting your stylesheets to see where you can do better. If you aren’t seeing much rendering slowness in your site, then don’t worry about it, just be aware for the future.
Super-speed, Zero-practicality
So we know that IDs are the most efficient selectors. If you wanted to make the most efficiently rendering page possible, you would literally give every single element on the page a unique ID, then apply styling with single ID selectors. That would be super fast, and also super ridiculous. It would probably be extremely non-semantic and extremely difficult to maintain. You don’t see this approach even on hardcore performance based sites. I think the lesson here is not to sacrifice semantics or maintainability for efficient CSS.
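Just to illustrate the absurdity (a made-up snippet, not a recommendation), that approach would look something like this:
#header { }
#logo { }
#nav-item-1 { }
#nav-item-2 { }
#intro-paragraph { }
/* ...and a unique ID plus a single-ID selector for every other element on the page */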
Thanks to Jason Beaudoin for emailing me about the idea. If anyone knows more about this stuff, or if you have additional tips that you use in this same vein, let’s hear it!
Just as a quick note, I’d also like to mention that since CSS style selectors are also used in many JavaScript libraries, these same concepts also apply. ID selectors are going to be the fastest while complicated qualified descendant selectors and such will be slower.
Those are excellent, well-researched points to improve browser rendering speeds. What are your thoughts regarding CSS compression via minification?
Compression is awesome, and will greatly benefit you in a production environment if you can make that compression part of your workflow that doesn’t become cumbersome.
It should be noted here that the “efficiency” we are talking about above has nothing to do with compression. High efficiency selectors will be highly efficient whether they are served from a compressed stylesheet or not.
Duly noted.
Yeah, even I was about to ask the same thing. But very well pointed out, Chris. I always thought that CSS performance could be improved by compressing it, but never knew it depended so much on the way we write selectors. Do these matter more if the stylesheet has become lengthy?
It’s interesting – and more than a little depressing – to learn that descendant selectors are an “efficiency disaster.” I’ve always operated under the assumption that it was better to assign a class to a parent element and then use a descendant selector to style its child elements, as in
ul.someclass li {...}
I’m also tag-qualifying there… sigh.
I wouldn’t feel overly guilty about that… You could lose the tag-qualifying if you are able, but descendant selectors are just part of life for us CSS developers a lot of times.
I’m trying to think about how you might actually test this out. Would the Net panel in Firebug reflect the differences?
nav menus come to mind.
Just for “best practices” I’ve changed the way I write my CSS by not using descendant selectors anymore.
So my nav would be:
#nav { list-style: none; other styles… }
#nav li { }
#nav a { }
#nav span { }
and so on. Of course I could apply class names to make it more efficient, but that just seems overkill unless you have dropdowns.
The tag qualifying is interesting. I will definitely be considering that more in my development.
However, isn’t trading more efficiently rendered CSS for weighted down HTML with lots of id’s and classes kind of a bad trade? Shouldn’t creating lightweight semantic markup trump a CSS file that renders more quickly? I guess there is probably a happy-medium in there somewhere but it just seems wrong to me to start adding more specific classes and id’s in the name of CSS rendering speed.
OK so I overlooked the text in bold… “I think the lesson here is not to sacrifice semantics or maintainability for efficient CSS.”
Anyway, great article.
Probably not. I don’t know how you would get reliable metrics on this. I would be very interested to know if someone has a tool that could be used though.
I could go into the documentation of different browsers about this and make a benchmark script in PHP, measuring the loading-time differences between two or more stylesheets.
E.g.: your current stylesheet and a somewhat optimized version of it, to the best of your knowledge. I think there would be a great deal of interest in a CSS benchmarking site. Of course, if this becomes a success, I know where to get a great designer ^^
I guarantee that Mozilla has a way to extract metrics on this from their rendering engine… I don’t know what it is, but they regression test EVERYTHING and have tests to make sure they don’t regress performance in this kind of area.
Also the instrumentation may not be available to web content, only stuff running with chrome privileges… so you could make an addon that tells you about this stuff.
In any case, unless you write REALLY horrible selectors they shouldn’t cause a performance problem on reasonable pages (i.e. it could be a problem on gargantuan pages… but those will probably have performance issues no matter what).
So aside from citing an article that was current when Bill Clinton was still President, what other information are you basing your article on?
Also, an example of descendant selectors like HTML body ul li a { } is ridiculous, although not unheard of. The first three elements are redundant and unnecessary. In other words, li a {} would accomplish exactly the same thing.
I clearly noted that the Mozilla article was old in the article. But there are more current writings about it that still reference it, like this excerpt from Even Faster Websites.
Your second example is exactly the kind of thing I was hoping to make more clear to people.
Did you not see the Google page linked in the same paragraph?
Of course I did. However all I’ve read so far are best practices and nothing that actually shows speed tests on CSS selectors.
Ever done a myspace customization? Now those are some ugly descendant selectors.
People still use myspace? lol
Seriously. Literally “td td td td td td td td td li a td….” just to customize a section haha.
And @Stephen. No, really, nobody does. :P Both of mine are just kind of floating out in the internet, abandoned and lonely.
I think this is one of the reasons nobody uses it anymore – it gives you the option to customize, but then it’s so retarded in the way you have to go about it that people just get frustrated and give up.
=)… yep, they use MySpace, and there are still web “developers” who use table designs for NEW sites!
The following festival site is redesigned every year (last year they had another designer…) http://www.gurtenfestival.ch/ … have a look at the source code, but don’t start to cry =P…
I thought that “HTML body ul li a { }” is just a dummy. It could be “html#one body#article ul.items li#active a.tooltip”. Yes, there are occasions for id-ing the html element. And, yes
Why should “HTML body ul li a { }” be ridiculous? This thing is called CSS, which means Cascading Style Sheets. Why didn’t they call it Selector Style Sheets?
Finally: HTML5 makes cascading much worse: section#article section.chapter figure.photo figcaption.right.
That’s not the cascade, that’s just descendant selectors.
The cascade is how elements inherit styles.
ul {
styles here
}
ul li {
style here
}
that’s the cascade
I learn from every comment, so I drop my grain of salt here: the cascade is about overriding styles because of specificity and order of the selectors.
Like
ul{color:#fff}
ul{color:#f00}
#someParticularUl{color:#0f0}
#someOtherUl{color:#00f}
would make the first group red, the second element green, and the third element blue because of the way the browser interprets the styles cascade .
The first two are unnecessary, but the ‘ul’ might not be; ‘li’s can be contained in ‘ol’s.
Hey Chris, thank you for this article, it is awesome!
I am interested in finding a proper way of naming CSS selectors (DIVs).
I wrote an article about giving names to DIVs, but so far it is not a closed subject and I have the feeling that there is still a lot to be said about it.
Can you help us in this?
Thanks!
Nice article, there are some points (regarding RTL parsing) that I’d never considered before. There are times when qualifying your classes with an element name can be helpful though, particularly in order to overcome specificity issues.
Adding the tag adds 0,0,0,1 to the specificity, which is pretty low. I think in general if you want to fight a specificity problem it’s better to add an ID to it and target based on that new ID. I see what you are saying though, but now, knowing that tag-qualifying is less efficient, it might as well be avoided.
Yes, it’s low but sometimes that might be all you need :)
Don’t get me wrong, I’m definitely an advocate of clean markup & CSS, but do you know of any figures showing the impact of selector pedanticism? It’s just hard to imagine that real-world CSS usage influences the draw times of pages THAT much – even http://sxsw.beercamp.com/ renders quickly.
You say ‘never do this’:
ul#main-navigation { }
Yet, there is a very good reason (beyond efficiency) to do this… By qualifying an ID with an element, it’s easier to just look at the CSS and know that that ID applies to an unordered list. Without the element qualifier, you’re forcing those who edit and use your CSS to always switch to the markup to understand what element this ID applies to.
Also, you might have a ‘roving’ ID. If the ID is applied to a UL, that rule is more specific than just an ID selector.
If tag-qualifying helps the readability of your CSS that much, then go for it. Just know that it’s less efficient and not necessary. Some people don’t use shorthand because they think listing the individual properties is more readable, yet they are aware that that increases the size of the CSS file.
IDs are only unique within the context of a single page. However, it is a best practice to concatenate stylesheets that can be shared across pages — in this case an ID might need to be scoped or qualified.
People tag-qualify for readability’s sake (which is quite important too); I think what’s most important is to strike a decent balance.
/*ul*/#main-navigation { }
That should help with readability without the inefficiency. Of course, you could also add a comment above the line identifying main-navigation as an unordered list.
This paired with an automated minification process that strips comments would be a pretty decent solution to optimize readability and efficiency. There might even be other places where this is useful (for example, indicating skipped descendants).
/*ul*/#main-navigation /*li*/ a { }
Wow, that’s a great idea. I’m definitely implementing that in my style sheets :)
second!
Some browsers can’t handle comments mid-selector, so they may skip the rule altogether. Also, having a lot of comments will slow things down a little.
@Timmy: Well, wouldn’t “This paired with an automated minification process that strips comments” make that a non-issue?
Word!
you idiot, why would you do that and bloat your stylesheet!
I agree with Shaun, but for a very different reason. Style sheets are not just used on one page (unless you want to load a different one for each page, which doesn’t really go with the efficiency theme of this article), meaning an ID on one page might be connected to another element on a different page. So if you only want to target, say a span element with the ID of “message”, but not the textarea on a different page with the same ID, you have to use span#message. Sure, it should be avoided when possible for efficiency reasons, but don’t say “never”.
This was indeed the only point I thought “oh shit, that’s what I do” =P… and I’m doing it for the exact same reason… readability, so I’ll have to dig into CSS minification (already doing it with the JS), but as far as I knew I had some problems with IE when minifying the CSS so I took it out…
But very nice article @Chris
The other thing I thought about: If you really care about performance of your site I think you have to make a hierarchy in what slows the page mostly down…and I think the CSS adds a veeery small part to the loading time…I think most of the loading time comes from content/css-images (when not in sprites), then JS and then all the other stuff…so first I would optimize the images and the JS
but when writing new sites, this css-performance stuff is helpful for sure…just to do it right =)…as others said…where are the benchmarks? =)…
Thanks Chris,
This will actually make me think a little bit more about how I write my markup, at the moment if I am unsure whether I will be re-using a selector I just throw a class on it, maybe I should start thinking about id’s.
thanks
I think the most important part of this is just understanding the costs of our code and taking it into consideration. As Chris noted, rigidly applying these principles would surely result in some degradation of creating semantic markup. Thanks for the article Chris!
Interesting. I never thought of the efficiency of CSS before. I rarely ever attach elements to my IDs and Classes, but if I don’t need it, I’ll make sure to keep it away.
Thanks for another great post Chris. I constantly use html & body descendant selectors so it might be time to change my ways.
Why do browsers parse CSS right-to-left? That sounds like the efficiency problem to me…
It makes more sense for it to read the limiting selector before ones that would logically be lower in the stack.
I guess that’s why: http://en.wikipedia.org/wiki/Bottom-up_parsing
I can for sure say it’s for performance. I’m not sure if I can explain exactly why.
Basically if you have a tree like:
div ul li
it has to match every div, then go through and check every element to see if it’s a ul, then if it’s a ul check every element and see if it’s an li. To do so it has to traverse the DOM tree.
Doing it in reverse, it can build a look-up table of every li element on the page. (I’m not sure if that’s literally the exact way it is implemented, but it’s functionally the same.) It then just has to check its ancestors to see if there is a ul anywhere, and then anywhere above that point whether there is a div.
Doing it RTL will be much faster for cases where you can’t make assumptions about the structure of the document being parsed (aka, a generic parser).
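To restate that with the example selector (just my paraphrase of the comment above, not engine documentation):
div ul li { }
/* right-to-left: start from each li, walk up to find any ul ancestor, */
/* then keep walking up from there to find any div ancestor */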
superfast:
.list {}
.list_li {}
.list_li_span{}
.list_li_span_a{}
don’t use ID!
Are you kidding?
no! :)
Do you put such verbose class names on everything?
You sound like a CMS
@zolotoy – good pattern!
This is excellent if you automate that via a script.
Why even use markup if you are using class names like that? Are you wrapping all your content in a DIV and setting float: left?
So …
really?
This pattern is good for more efficiently rendering CSS in high performance websites
Great article.
How would you go about testing this? In other words, how do you measure how efficient the CSS is? What tools and methods would you recommend?
Rick
http://static.yandex.net/reflowmeter/_reflow-meter.js
Hey Chris,
What are your thoughts about pseudo-selectors regarding this article?
I hope your browser sees a pseudo-selector as part of the parent selector item. Otherwise, the whole :hover thing would be a great pain in the ass for browsers.
So, yet again: What are your thoughts about this?
Google pointed out a couple of over-qualified selectors in my stylesheet; before that I had no idea about them. Cool post!
“Just as a quick note, I’d also like to mention that since CSS style selectors are also used in many JavaScript libraries, these same concepts also apply.”
It’s just not true. Tag-qualifying in jQuery is actually faster. It’s because when you select something like $(‘.class’), what it does behind the scenes is use getElementsByTagName with a star and check every element to see if it has that class. It’s more efficient to do $(‘a.class’).
I’m wondering though why the parser goes from right to left when what we write is left to right?
It’s true in that using an ID selector in JavaScript is the fastest/easiest selector, just like in CSS.
Hey Chris & Hassan.
You’re both right on the jQuery code comment – in the past it was faster to actually help the jS/jQuery find the element through ID/ through the DOM when it comes to using classes.
However, nowadays it looks as though jQuery uses getElementsByClassName, which has recently been added to more recent browsers (Chrome definitely has it)!
Read through/search through – http://code.jquery.com/jquery-1.4.2.js
For more details!
It is true, but you certainly have a very valid, easily overlooked point.
jQuery basically “implements” CSS selection, so the way it selects elements is different from how the browser natively selects elements to apply CSS styles. It wraps the native JavaScript methods to make development easier, and does so with the best performance it can on top of the native JavaScript functions.
The browser has a more comprehensive, native method for applying the css rules, and uses different methodology.
For javascript, you select relatively few things to manipulate. For browser DOM styling, every single dom element has to be evaluated. The backend logic is thus handled differently.
Above (in an earlier post on the browser parser RTL issue) I have a more thorough description of how/why it’s done right-to-left.
Great article. I began teaching myself CSS just a little over a year ago, and I still consider myself quite a novice especially when it comes to best practices, and getting all of the semantics down. I am also happy to have read that it’s not a good idea to “tag qualify.” I had been doing this quite a bit after looking over other designers code while trying to teach myself. I assumed this was something that should be done for organizational purposes, and I’m glad to find out otherwise!
Nice! Helped me just solve an issue I’ve been having. Dang, can’t stress enough how much that just helped me!
Chris, thanks for this. I actually just barely started following most of these practices before I read this article, but not because I thought they would speed up my CSS loading, but because I liked how simple they kept my code. However, I found that to avoid using too many descendant selectors and class names I had to start using the !important property to fix specificity issues. I personally don’t mind this, but wondered what your thoughts are on this… I know some are very against ever using !important, or at least avoid it as much as possible…
While I’d never include the html body portion, descendant selectors like:
ul li a { }
offer me a lot of power in building websites quickly and in application development, especially in environments where we have many apps that share a similar design.
For instance, we’re building an internal property management app with divisions for engineering, security, time tracking, and for our tenants to interact with us. By skipping out on excessive IDs and classes, I’ve cut my HTML and CSS down from 900 to 300 lines of code (it’s a hefty application), and it allows my younger designers and developers to code faster, since certain basic HTML tags will default when used in certain ways. We actually found it faster for users this way, but again the trade-off was a rigid HTML structure for semantics instead of an ID here and a class here.
Steve Souders also discussed this on his blog.
I try to avoid very inefficient rules; I think it’s time to take a closer look at the inefficient ones too!
Hey Chris,
That was a great post. I’m a long time reader, first time commenter!
I had one of those “AHHH HAAA” moments when I read this:
#main-navigation li a { font-family: Georgia, Serif; }
#main-navigation { font-family: Georgia, Serif; }
I’ve been coding CSS for a while and I’ve made that mistake sssoo many times. Efficiency FTW!
Thanks again for all the great posts and screencasts!
Haha and I botched the reply even though I read your “Remember” to the right.
I’d best go and submit myself to “Submit A Douche” :)
Writing stylesheets this way might be useful when writing a stylesheet for mobile devices. These are still slower than desktop computers, so every second still counts, right?
Hi guys :) Interesting article.
While it is good to write optimized CSS code as much as we can, in huge, and I mean huge, web sites with lots of content and different layouts for different sections, there might be some problems with class names.
If I have 6 or 7 types of spacer elements in my content like
1. Menu items
2. Paragraphs
3. Side menu items
4. Side menu blocks
5. List items
6. Something else
They can all be classed as .spacer, but styling them according to their tag name is much easier and semantically better than assigning long and hard-to-understand class names like “.sideNavigation-MenuBlockSpacer” when we can use something like “#menuBlock li.spacer”.
So – don’t sacrifice semantics for performance and vice versa :)
CSS is just one speed issue that we will all face in the future, as I hear that Google is starting to factor loading time into their algorithms. I found the YSlow add-on for Firebug (for Firefox) quite useful to see why sites are underperforming. For instance Chris, I see you’re making 10 external JavaScript requests. Every second counts when you’re up against the world…
Is this really *rendering* performance or just *parsing*?
The rendering experience has far more to do with how repaints and reflows are triggered and handled, doesn’t it?
Depends on how you define “rendering.”
I’d consider rendering to be the time spent from the program (in this case, the browser) receiving the input to the time it’s drawn on the screen. In order to do that, it first has to parse the input (the css instructions) to figure out what to draw, then draw it.
So parsing is part of the rendering process. And my uneducated guess would be that traversing the DOM and finding out what needs what styles applied to it is likely one of the most time consuming parts of the rendering process.
Great article!
There are some great points here for optimizing your CSS for load times – very important for large projects, and even more so now that Google are factoring page load times into their algorithms!
Thanks Chris!
Very interesting, I had no idea that selectors were read right to left. I can certainly see how that could greatly decrease efficiency. I never tag qualify and I generally write compact descendant selectors but you pointed out some areas there that can be improved. Thanks!
Chris, surely this article contains hints of extreme importance for anyone working with web development, or even fans of the area. However, I disagree with you on some statements; here they are.
Never do this:
ul#main-navigation { }
Most projects I develop use content management systems like Drupal, WordPress, or Joomla!, and due to their pre-defined template tags it is common to need a higher selector hierarchy in the stylesheet than usual.
Based on my experience I ran some tests and found that:
.class-list { };
was less effective than using
ul.class-list { };
The rendering of CSS often becomes more effective when we tie the class to a specific element. I tried to implement a suggestion made in the comments, using
/*ul*/.class-list { };
but this is no different from the first.
I also understand that it is much more efficient to use:
div.main div.box ul li a { };
than to use:
.class-link { };
<div class="main">
<div class="box">
<ul>
<li>
<a class="class-link"></a></li>
</ul>
</div>
</div>
We are talking not only about speed but also about organization and standardization of the code. I would like your opinion on these considerations, taking into account, as you said, that computers today are much faster and will keep getting faster as the years pass.
I doubt that slapping an ID on every element would be remotely performant, since then the browser would have to match more rules and couldn’t re-use as much stuff. Sure, ID selectors are probably faster individually than other kinds, but that doesn’t mean that if you abuse them your page will be faster.
It’s likely that you’ll get better mileage by eliminating redundant rules, which is a double win since it also makes your CSS shorter so there’s less latency on the page load.
Really for most sites network latency is more important than CSS selector matching.
Also, probably of way more importance is the actual markup and the styles applied to it, not the selectors (i.e. if you ask the browser to composite 20 PNGs with alpha on top of each other, it will almost certainly spend far more time drawing images than matching selectors).
I’ve commented about it a couple weeks ago on another post: https://css-tricks.com/specifics-on-css-specificity/#comment-75911.
Keep the number of HTML nodes low and you probably don’t need to care that much about how long it takes to render your CSS (unless it’s a mobile device) – check the number of elements your page has by using:
document.getElementsByTagName('*').length
Try to keep it under 1000. Interesting read in case you want to know more details: Performance Impact of CSS Selectors – real benchmarks that show how much you actually gain by improving selectors.
PS: not all of the JS CSS selector engines use the right-to-left approach..
Cheers.
A nice tip for pages with large CSS files (2000 or more lines), but on regular pages the speed gain is not significant.
nice!~
Chinese translation: http://www.vfresh.org/w3c/727
Well said:
“I think the lesson here is not to sacrifice semantics or maintainability for efficient CSS.”
Good to make informed decisions when balancing efficiency with practicality and clean markup. So these tips are helpful in understanding the balance. Thanks!
If you write efficient and clean CSS, will your client like that? Or will you get a bigger tip from the client? Maybe not, but you will look professional, and every browser will render your website nicely :)
It’s a shame descendant selectors are so expensive, they are very useful to avoid cluttering html with repetitive classes and IDs – there is already a tree relationship we can use after all.
Nice!!!!
I’ve been using CSS adamantly since about 2002/03, prior to that being a dinosaur who used in-tag properties and all the old stuff. My CSS isn’t perfect by the elitist’s standards, but I write it very clean and readable for myself and I put it through a “minifier” before I upload it. My problem has and always will likely be redundancy. I could group more stuff together to shorten it somewhat. In the end, I’m not going to be like a lot of youngin’s obsessed with image and rep and stress over 0.7kb, but I do try.
Great post, never even thought of this before
ul#main-navigation { } is better than #main-navigation { }; it makes it easy to identify what element it is and adds readability
Not sure if that’s true, as the browser will first look for all UL elements in the DOM and then search for the ID, instead of directly searching the DOM for the ID. Correct me if I’m wrong though….
I’ll try and test it soon and see if it even makes a difference.
I’m also not sure if it’s right or wrong.
but after making some tests/research, I think it has a different “CSS Specificity Value” (https://css-tricks.com/specifics-on-css-specificity/)
where e.g. div.box has a BIGGER CSS specificity value than .box
like
.box{
display:block;
background:#000;
height:200px; width:200px;
}
div.box{
background:#DDD;
}
The color eventually gets overridden and becomes #DDD;
but if we write like this
div.box{
display:block;
background:#000;
height:200px; width:200px;
}
.box{
background:#DDD;
}
the color stays #000
Correct me if I used a wrong example or if I’m wrong
Another way to improve rendering speed is to avoid using the @import statement.
Check this great article by Steve Souders about the subject.
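For reference (a minimal sketch of the pattern being warned against), the rule in question looks like this inside a stylesheet; a plain link element in the HTML head is generally preferred:
@import url("typography.css");   /* can delay or serialize stylesheet downloads in some browsers */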
Hey!
Thanks for this stuff.
But can you proof your arguments with some performance-tests and graphics?
Greetings
Chris
Big performance leaps only show on major websites which use a lot of CSS, and the performance fixes do in fact save you another few ms.
Another way to increase performance is to only serve CSS which is actually used on the specific page. Basically you have to create separate stylesheets for each page/template. Of course this needs some extra backend work, and is not always feasible or possible on projects.
Those are really good points, but one thing that I am surely not going to do is add a class or ID to every CSS target just to increase its rendering speed – HTML size is also a factor I care about.
I will from now on consider those points when I write my CSS – especially about avoiding too many universal selectors – but the performance increase is not worth refactoring the CSS that already exists.
I guess “Another way to increase performance is to only serve CSS which is only used for the specific page. Basically you have to create separate stylesheets for each page/template. Of course this needs some extra backend, and is not always feasible or possible on projects.” is
really awesome.. thanks a lot..
I would like to add one more
it is better to arrange CSS properties in alphabetical order, e.g.
.class{
background:#000000;
width:100px;
}
instead of this:
.class{
width:100px;
background:#000000;
}
for better performance….
Thanks for share..
I think this is just a pet peeve of yours, if anything.
Thanks for this article Chris! I learned a few things that up until now I only suspected were problematic, but never knew for sure. Case in point – tag qualifying. It’s interesting to note that Eric Meyer actually taught this practice in his book Eric Meyer on CSS.
Granted that was 8yrs ago and development ideals have come a long way since, but I remember thinking (even as a noob, with that being the first CSS book I ever bought) that it seemed like a redundant way of coding. I never adopted the practice, but it wouldn’t surprise me if many devs still write CSS that way, simply because the instruction came from someone so well-known and respected in the industry.
Articles like this are important because they remind us it’s good to continually question the HOW of “best practices”, not to mention WHEN or IF they should be applied on a per project basis. Nice work. :)
Surprised no one has mentioned this yet: this is incredibly relevant to mobile sites. With lower network and processing speeds, performance improvements are much more noticeable.
As for slapping IDs around: if your IDs are self descriptive there’s no need to do things like ul#main-navigation. The way to do it is to name the ID something more like #ul-main-navigation, though it could be better. Same thing with class names and even variables in programming. Dunno why people don’t do that. If reading the variable answers the question "what does this variable hold?" then it should be fine.
Ideally there will be no need to refer to the HTML when reading the CSS. If that can be achieved then readability is definitely pretty good.
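In selector form, that naming idea (just an illustration of the comment above) might look like:
#ul-main-navigation { }         /* the element type is baked into the ID name */
#ul-main-navigation .link { }   /* hypothetical child class; no tag-qualifying needed */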
Not a lot of light has been shed on CSS selection speed.. while we have SunSpider for JS and the Sizzle.js RTL selection engine, I was wondering if there’s a CSS selector speed test?
Though the article makes a lot of sense, I’m a little skeptical about the performance hits of descendant tag selectors, and furthermore whether performance issues are rooted in the complexity of selectors themselves, or in the process of applying CSS styles to the returned set of elements.
Are there any tests or statistics to prove the points in this article?
Fascinating, never had a clue. I was sure there might be some optimal patterns to follow and what-not, but I never took time to dig in and find out.
Learned something new, thanks!
You’re forgetting something:
Using selectors reduces the size of the CSS file if you do it properly — meaning, I can use 2 lines of CSS to cover more than I could if I applied a unique ID to something everywhere. My CSS file is a few hundred lines shorter than it would be if I had gone this old route.
Also, CSS hardly affects the speed of the page when it’s compressed. With my superiorly small CSS file, with compression, it will outperform a bloated and messy CSS file in comparison.
Don’t believe me? Try it yourself. This article is useful for small websites, but certainly not big projects.
Interesting points, but I’d love to see some real numbers to get a picture of the real performance hit.
Well, this is true for desktop computers but many of the currently used mobile devices are as slow as an old Pentium III when it comes to rendering web pages :( so the article seems to be still very relevant.
Does this mean that using a framework system is actually slow?
Since it is just multiple classes that are strung together it seems okay but just still not as fast as ID. Is this correct?
That’s correct Charlie, but generally the issue that you note is not one of the greatest offenders in slowing down your page. More info at CSS Wizardry.
Has anyone seen a tool for measuring Rendering performance on particular pages? As Fiddler would do for measuring server speed, I need a tool to analyze performance hits in rendering on specific pages.
Very interesting subject, I have been wondering a lot about these questions lately. Especially after a year of using SASS and nesting and viewing the endless descendant selectors that are created. For that exact reason I now try to use child selectors instead.
I had a quick question.. the reasoning put forth here implies $(‘#object’).find(‘a’) should be more efficient than $(‘#object a’), since we “override” the right-to-left principle of CSS. Is this correct?