In general, I tend to use try/catch for code that has multiple failure points sharing a common handler.
In my experience, this is typically code that validates input or context before performing some action, or validates output afterward.
Literature and colleagues have counseled me to minimize the code inside such blocks, and I accept that as generally good advice.
I would like to understand the foundation for this advice a bit better:
- What is the nature of the overhead?
- Are there recent development guidelines that address the recommended usage (or avoidance) of try/catch blocks?
- How much do faster processors and more modern compilers mitigate the problems with try/catch?
Thanks in advance for the help,
AJ