How would I go about testing the performance of different CSS selector strategies? I've read articles like this one, but I don't know whether it applies to my website, because the author used a test page with 20,000 classes and 60,000 DOM elements.
Should I even care? Does performance really degrade that much depending on the CSS strategy you take?
For example, I like doing this ...
.navbar { background:gray; }
.navbar > li { display:inline; background:#ffffff; }
<ul class="navbar">
  <li>Menu 1</li>
  <li>Menu 2</li>
  <li>Menu 3</li>
</ul>
... but I could do this ...
.navbar { background:gray; }
.navbar-item { display:inline; background:#ffffff; }
<ul class="navbar">
  <li class="navbar-item">Menu 1</li>
  <li class="navbar-item">Menu 2</li>
  <li class="navbar-item">Menu 3</li>
</ul>
Some would say (and they might be right) that the second option would be faster.
But if you multiply the second method across all pages, I see the following disadvantages:
- Page size increases because every element carries a class attribute
- The number of CSS classes can get quite large, which means more CSS to parse
My pages seem to be ~8 KB with ~1000 DOM elements.
So my real question is: how do I create a test bed where I can measure the performance delta between strategies at realistic page sizes? Specifically, how do I measure how long it takes for a page to be displayed? With JavaScript? How, exactly?
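For the "how long until the page is displayed" part, my current idea is to read the browser's Navigation Timing data (`performance.timing`, with fields like `navigationStart` and `loadEventStart`). Here's a rough sketch; `timingDeltas` is just a helper name I made up, and the numbers in the offline check are fabricated timestamps:

```javascript
// Compute how long the page took to become ready, given a Navigation
// Timing-style object. In a real page, pass performance.timing.
function timingDeltas(t) {
  return {
    domReady: t.domContentLoadedEventStart - t.navigationStart, // DOM parsed
    fullLoad: t.loadEventStart - t.navigationStart,             // all resources loaded
  };
}

// In the browser, log the deltas once the page has finished loading:
// window.addEventListener('load', () =>
//   console.log(timingDeltas(performance.timing)));

// Offline check with made-up timestamps:
console.log(timingDeltas({
  navigationStart: 0,
  domContentLoadedEventStart: 120,
  loadEventStart: 300,
}));
```

Would comparing those deltas between the two generated pages be a sound way to measure this, or is the difference at ~1000 elements too small to show up above the noise?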
Help and just plain opinions are welcome!