The decimal type implements decimal floating point whereas double is binary floating point.
The advantage of decimal is that it behaves as a human would with respect to rounding, and if you initialise it with a decimal value, that value is stored exactly as you specified. This is only true for decimal numbers of finite length that fall within the representable range and precision. If you initialised it with, say, 1.0M/3.0M, it would not be stored exactly, just as you cannot write 0.333... recurring exactly on paper.
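A minimal C# sketch of that difference (variable names are my own):

```csharp
using System;

class DecimalExactness
{
    static void Main()
    {
        decimal tenth = 0.1m;        // finite decimal: stored exactly
        decimal third = 1.0m / 3.0m; // non-terminating: truncated to 28 digits

        Console.WriteLine(tenth);        // 0.1
        Console.WriteLine(third);        // 0.3333333333333333333333333333
        Console.WriteLine(third * 3.0m); // 0.9999999999999999999999999999, not 1
    }
}
```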
If you initialise a binary FP value with a decimal literal, it will be converted from the human-readable decimal form to a binary representation that will seldom be exactly the same value.
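For example (the G17 format round-trips a double's exact stored value):

```csharp
using System;

class BinaryConversion
{
    static void Main()
    {
        double d = 0.1; // nearest representable double, not exactly 0.1
        Console.WriteLine(d.ToString("G17")); // 0.10000000000000001
        Console.WriteLine(0.1 + 0.2 == 0.3);  // False, due to those tiny errors
    }
}
```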
The primary purpose of the decimal type is implementing financial applications. In the .NET implementation it also has far higher precision than double (28-29 significant digits versus roughly 15-17); however, binary FP is supported directly by the hardware, so binary FP operations are significantly faster than decimal FP operations.
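A quick sketch of why that matters for money (illustrative only):

```csharp
using System;

class MoneySum
{
    static void Main()
    {
        double dSum = 0.0;
        decimal mSum = 0.0m;

        // Add ten 10-cent amounts.
        for (int i = 0; i < 10; i++)
        {
            dSum += 0.1;
            mSum += 0.1m;
        }

        Console.WriteLine(dSum == 1.0);  // False: binary rounding error accumulates
        Console.WriteLine(mSum == 1.0m); // True: each 0.1m is stored exactly
    }
}
```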
Note that double is accurate to approximately 15 significant digits, not 15 decimal places. d1 is initialised with a 7 significant digit value, not 6.
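To illustrate the significant-digits point (the value here is my own, not the d1 from the question):

```csharp
using System;

class SignificantDigits
{
    static void Main()
    {
        double d = 123456789.123456789; // ~18 significant digits in the source text
        Console.WriteLine(d.ToString("G17")); // 123456789.12345679: trailing digits lost
    }
}
```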