While working on some projects recently, I found myself frequently writing CSS rules like this to enable CSS3 hardware-accelerated "fade" transitions for elements in my app:
.mySelector {
    -webkit-transition: opacity 1s linear 0s;
    transition: opacity 1s linear 0s;
}

$(".mySelector").css("opacity", 1);
I've documented this approach before, and hopefully you've given it a try in some of your projects.
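If you haven't, here is a minimal, self-contained sketch of the technique. Note that the opacity: 0 starting state and the jQuery version are my assumptions; the fade only animates when the opacity value actually changes:

<style>
    .mySelector {
        opacity: 0; /* start hidden (assumed) so the fade-in has something to animate */
        -webkit-transition: opacity 1s linear 0s;
        transition: opacity 1s linear 0s;
    }
</style>
<div class="mySelector">Hello, fade!</div>
<script src="http://code.jquery.com/jquery-1.8.2.min.js"></script>
<script>
    // Changing opacity from JavaScript triggers the CSS transition;
    // the browser (and GPU, where supported) handles the animation
    $(window).on("load", function () {
        $(".mySelector").css("opacity", 1);
    });
</script>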
But as I started using this technique more, I realized how wasteful it was to re-define the CSS opacity transition everywhere it was needed. I wondered if I could simplify my CSS with the old CSS2 universal selector, better known as the "star" selector.
Instead of defining the CSS3 opacity transition every time I wanted to use it, I just defined it once in my stylesheet for all elements:
* {
    -webkit-transition: opacity 1s linear 0s;
    transition: opacity 1s linear 0s;
}
Whoa, whoa. Before you throw me overboard for using the CSS star (*) selector, let's look at the facts.
Clearly, the biggest concern any time you drag out the universal selector is performance. As you might expect, the universal selector instructs the browser to match every element in the page, which can (theoretically) be a drag on performance. Furthermore, as developers, we're intuitively trained to assume that anything involving "*" is both lazy and bad.
The reality with CSS is actually different, though.
Mr. Performance Steve Souders put CSS selectors to the test in 2009 and discovered that they actually have relatively little impact on page render time. In "real world(ish)" scenarios, with thousands of DOM elements and CSS rules, even notoriously "slow" selectors like child and descendant selectors are not that much slower than "baseline" direct element selectors.
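For reference, here's what those selector types look like (my own illustration, not Souders's actual test CSS):

/* "Baseline" direct element selector */
p { color: #333; }

/* Child selector: only li elements directly inside a ul */
ul > li { margin: 0; }

/* Descendant selector: any p anywhere inside a div */
div p { padding: 0; }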
With this evidence in mind, I ran my own tests.
Using a page that borrows Souders's HTML, I created three versions of the test page. In my tests, I am looking for the relative change from the "No CSS" baseline to see if the * selector has any meaningful impact on page rendering time; the absolute values are irrelevant.
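For the curious, a rough sketch of this kind of timing harness (the approach and names here are my assumptions, not the original test code):

<html>
<head>
    <script>
        // Capture a timestamp as early as possible
        var renderStart = new Date().getTime();
    </script>
    <style>
        * {
            -webkit-transition: opacity 1s linear 0s;
            transition: opacity 1s linear 0s;
        }
    </style>
</head>
<body>
    <!-- thousands of DOM elements here, borrowed from Souders's test page -->
    <script>
        window.onload = function () {
            // Rough "render time": first script execution to onload
            document.title = (new Date().getTime() - renderStart) + "ms";
        };
    </script>
</body>
</html>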
Tests in hand, I put Chrome (22), Opera (11.6), IE8, the iPad 3, and even the Kindle Fire through the battery of tests. I didn't test more browsers because the results were plainly consistent.
In every case (including devices), the universal rule had no meaningful impact on page rendering time.
If universal selectors really aren't all that bad, then why all the fear?
As it turns out, much of the fear is rooted in our lingering IE6-era developer hangover. In the IE6 days, the universal selector was a source of pain and, as was often the case, hacks. If you've spent any time around CSS in the last 10 years, you're probably well aware of the IE "star HTML hack." This hack played on IE's poor handling of the universal selector ("* html" rules matched the root html element in IE6 when they shouldn't have), and for many developers, this was the extent of their use of this particular CSS feature.
Universal selectors are also irrationally feared because they can be used for harm. Like any powerful coding technique, the universal selector in the wrong hands is like giving a machine gun to a child: it can do a lot of unintentional harm in a hurry. If you use the universal selector, be sure your usage isn't spawning unintended performance consequences; scoping the rule, as in the sketch below, is one simple safeguard.
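For instance, a minimal sketch (the .fade-zone class name is hypothetical, my own illustration): rather than transitioning every element on the page, limit the universal rule to a single property inside a single container:

/* Scope the universal rule to one container and one property,
   so elements elsewhere on the page are never affected */
.fade-zone * {
    -webkit-transition: opacity 1s linear 0s;
    transition: opacity 1s linear 0s;
}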
So, there you have it.
As best I can tell (and I'm willing to be proven wrong with better testing/data), there is no reason to avoid the universal selector for configuring CSS transitions, particularly transitions that you find yourself configuring for many elements on a page.
Hopefully this insight will save you some time and give you more confidence the next time you use the star selector in your CSS.