I work on a project that's distributed for free in both source and binary form, since many of our users need to compile it specifically for their system. This necessitates a degree of consideration in maintaining backwards compatibility with older host systems, and primarily their compilers.
Some of the cruftiest of these, such as GCC 3.2 (2003!), ICC 9, MSVC (almost abandonware, not C++!) and Sun's compiler (in some old version that we still care about), lack support for language features that would make development much easier. There are also cases where letting users stick with these compilers costs them a lot of performance, which runs counter to the goals of what we're providing.
So, at what point do we say enough is enough? I can see several arguments for ceasing to support a particular compiler:
- Poor performance of generated code (relative to newer versions, asked about here)
- Lack of support of language features
- Poor availability on development systems (more of an issue for the proprietary compilers than for GCC, but there are sysadmin headaches in getting old GCC installed, too)
- Possibility of unfixed bugs (we've isolated ICEs in ICC and xlC, what else might be lurking?)
I'm sure I've missed some others, and I'm not sure how to weight them. So, what arguments have I missed? What other technical considerations come into play?
Note: This question was previously more broadly phrased, leading many respondents to point out that the decision-making is fundamentally a business process, not an engineering process. I'm aware of the 'business' considerations, but that's not what I'm looking for more of here. I want to hear experiences from people who've had to support older compilers, or made the choice to drop them, and how that's affected their development.