I am a fan of valid web pages and always spend time passing new sites through the W3C validator.
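For context, this is roughly how I check pages today, a minimal sketch that POSTs a page to the W3C Nu HTML Checker's JSON endpoint (https://validator.w3.org/nu/); the file name index.html is just a placeholder:

```python
import requests

# Read the page to check (placeholder file name).
with open("index.html", "rb") as f:
    html = f.read()

# Send it to the Nu HTML Checker and ask for JSON output.
resp = requests.post(
    "https://validator.w3.org/nu/?out=json",
    data=html,
    headers={"Content-Type": "text/html; charset=utf-8"},
)

# Print only the reported errors, with their line numbers.
for msg in resp.json().get("messages", []):
    if msg.get("type") == "error":
        print(f"line {msg.get('lastLine')}: {msg.get('message')}")
```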
When trying to make a case for why companies should validate their web pages, the arguments that come to mind quickly are accessibility and future-proofing the site for more primitive devices such as phones, fridges, watches, the next big thing, etc.
However, I then wondered: is there a computational overhead involved in rendering web pages that do not validate?
Has any research been done in this area, and do some browsers handle invalid content better than others?