While, as far as I remember, IEEE 754 says nothing about a flush-to-zero mode to handle denormalized numbers faster, some architectures offer this mode (e.g. http://docs.sun.com/source/806-3568/ncg_lib.html ).
According to that particular documentation, standard handling of denormalized numbers is the default, and flush-to-zero has to be activated explicitly. In the default mode, denormalized numbers are handled in software, which is slower.
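For illustration, here is what explicit activation typically looks like on a common architecture. This is x86 with SSE (not the SPARC target of the Sun document above), and it assumes a compiler that uses SSE registers for double arithmetic, as x86-64 compilers do by default; the FTZ and DAZ bits live in the per-thread MXCSR control register.

    #include <stdio.h>
    #include <float.h>
    #include <xmmintrin.h>   /* _MM_SET_FLUSH_ZERO_MODE: flush subnormal results */
    #include <pmmintrin.h>   /* _MM_SET_DENORMALS_ZERO_MODE: treat subnormal inputs as zero */

    int main(void)
    {
        volatile double tiny = DBL_MIN;               /* smallest normal double */

        printf("default:       %g\n", tiny / 4.0);    /* non-zero subnormal result */

        _MM_SET_FLUSH_ZERO_MODE(_MM_FLUSH_ZERO_ON);
        _MM_SET_DENORMALS_ZERO_MODE(_MM_DENORMALS_ZERO_ON);

        printf("flush-to-zero: %g\n", tiny / 4.0);    /* now prints 0 */
        return 0;
    }

The point is only that, as in the Sun documentation, the non-IEEE behavior is an explicit opt-in rather than the default.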
I work on a static analyzer for embedded C that tries to predict correct (if sometimes imprecise) ranges for the values that can occur at run-time. It aims to be correct because it is intended to be used to exclude the possibility of anything going wrong at run-time (for instance in critical embedded code). This requires capturing all possible behaviors during the analysis, and therefore all possible values produced by floating-point computations.
In this context, my question is twofold:
Among embedded architectures, are there any that offer only flush-to-zero? They would perhaps not have the right to advertise themselves as "IEEE 754", but could offer close-enough IEEE 754-style floating-point operations.
For the architectures that offer both, in an embedded context, isn't flush-to-zero likely to be activated by the system in order to make reaction times more predictable (a common constraint in embedded systems)?
Handling flush-to-zero in the interval arithmetic that I use for floating-point values is simple enough once I know I have to do it; my question is rather whether I have to do it at all.
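For concreteness, here is a minimal sketch of the kind of widening I have in mind, assuming a naive interval type; the names and the representation are hypothetical, not the analyzer's actual code. Any bound whose magnitude is below the smallest normal number may also be observed as zero on a flush-to-zero target, so the interval is widened to include a zero of the appropriate sign.

    #include <float.h>

    typedef struct { double lo, hi; } interval;   /* hypothetical interval type */

    /* Widen an interval so that it remains sound whether or not the
       target flushes subnormal results to zero. */
    static interval account_for_ftz(interval i)
    {
        if (i.lo > 0.0 && i.lo < DBL_MIN)
            i.lo = 0.0;    /* a positive subnormal lower bound may come out as +0 */
        if (i.hi < 0.0 && i.hi > -DBL_MIN)
            i.hi = 0.0;    /* a negative subnormal upper bound may come out as -0 */
        return i;
    }

Applying this after every interval operation whose result can stray into the subnormal range keeps the analysis sound on both kinds of targets, at the price of slightly coarser intervals near zero, which is why I would rather not do it unless flush-to-zero targets are really something I have to account for.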