A lot of this has to do with the technical lock-in effect, for better or worse. I think it comes down to a number of factors.
(1) The marginal benefit of an entirely new technology over a "good enough" solution built with familiar technologies very rarely outweighs the marginal cost of learning and deploying it for each problem. In your Windows/Linux example, if you determined that switching to Linux boxes would save you $100K in licensing fees, would it be worth replacing or retraining your entire staff, likely redoing a lot of work that has already been done, and going from the stable (maturity) phase back to the chaos of an initial deployment? On top of that, there is the opportunity cost of the value you are not adding while you are retooling. A rough sketch of the comparison follows.
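To make the trade-off concrete, here is a back-of-the-envelope sketch in Python. The $100K licensing figure comes from the example above; every other number is a hypothetical assumption I've made up for illustration, so treat it as a template for the comparison rather than a real estimate.

```python
# Hypothetical break-even comparison for the Windows -> Linux example.
# Only the $100K licensing savings comes from the example above;
# all other figures are made-up assumptions -- plug in your own estimates.

licensing_savings_per_year = 100_000   # from the example above

# Assumed one-time switching costs (illustrative only)
retraining_cost        = 60_000   # courses, certifications, ramp-up time
migration_rework_cost  = 120_000  # porting scripts, tooling, redeployment work
lost_productivity_cost = 80_000   # slowdown while the team is back in the "chaos" phase

one_time_cost = retraining_cost + migration_rework_cost + lost_productivity_cost

# Years until the recurring savings pay back the up-front cost
break_even_years = one_time_cost / licensing_savings_per_year
print(f"One-time switching cost: ${one_time_cost:,}")
print(f"Break-even after roughly {break_even_years:.1f} years")
```

With assumptions like these, the recurring savings take a couple of years just to cover the one-time cost, and that is before factoring in the project risk described in point (2).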
(2) Switching technologies introduces project risk, per the "devil you know..." line of thinking. There are no guarantees that the project will succeed on any platform, but when a project uses a new technology there is a strong tendency to scapegoat that technology for any problems, whether or not they actually stem from the switch. The person who made the decision to adopt the new technology may be painting a target on their back by doing so. Add to this that the person making that decision may be less confident in their ability to pull off the project with an unfamiliar technology. The old technology may be clunky and less than ideal, but at least you know what you are getting into.
(3) At the rate new versions of each of these technologies come out, it can be a daunting task to keep your skills current in any one of them. It forces us to choose whether we want to be an expert or a generalist, which I feel is a career decision that should be based on your personal ambitions.