The answer can be found in the IEEE 754 specification of floating-point representation:
> For the single format, the difference between a normal number and a subnormal number is that the leading bit of the significand (the bit to the left of the binary point) of a normal number is 1, whereas the leading bit of the significand of a subnormal number is 0. Single-format subnormal numbers were called single-format denormalized numbers in IEEE Standard 754.
That is (if I interpret it correctly), Double.MIN_NORMAL is the smallest positive number you can represent while still having a 1 to the left of the binary point (the binary counterpart of what a decimal system calls the decimal point), i.e. the smallest positive normal double, whereas Double.MIN_VALUE is the smallest positive number you can represent without that constraint, i.e. the smallest positive subnormal double.
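To make the difference concrete, here is a minimal sketch (the class name SubnormalDemo is just illustrative) that reconstructs both constants from their raw bit patterns with Double.longBitsToDouble:

```java
public class SubnormalDemo {
    public static void main(String[] args) {
        // Smallest positive normal double: exponent field = 1, significand = 0,
        // i.e. an implicit 1 to the left of the binary point (1.0 * 2^-1022).
        System.out.println(Double.MIN_NORMAL);   // 2.2250738585072014E-308
        System.out.println(Double.MIN_NORMAL ==
                Double.longBitsToDouble(0x0010000000000000L)); // true

        // Smallest positive double overall: exponent field = 0 (subnormal),
        // only the lowest significand bit set (0.00...01 * 2^-1022 = 2^-1074).
        System.out.println(Double.MIN_VALUE);    // 4.9E-324
        System.out.println(Double.MIN_VALUE ==
                Double.longBitsToDouble(0x0000000000000001L)); // true

        // Halving the smallest subnormal underflows to zero.
        System.out.println(Double.MIN_VALUE / 2); // 0.0
    }
}
```

The reason subnormals exist at all is gradual underflow: they fill the gap between 0 and Double.MIN_NORMAL with evenly spaced values, at the cost of precision, instead of letting results snap straight to zero.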