Decimal is the most sensible type for monetary amounts.
Decimal is a base-10 floating-point numeric type with 28-29 significant decimal digits of precision. Using Decimal, you will have far fewer surprises than you will with the base-2 Double type.
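For example, here is a minimal C# sketch (assuming a console app with top-level statements) showing both the exactness and the precision:

using System;

Console.WriteLine(0.1m + 0.2m == 0.3m);  // True: base-10 fractions like 0.1 and 0.2 are exact in decimal
Console.WriteLine(1m / 3m);              // 0.3333333333333333333333333333 (28 significant digits)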
Double uses half as much memory as Decimal (8 bytes versus 16), and it will be much faster because CPUs have hardware support for many common binary floating-point operations. However, Double cannot represent most base-10 fractions (such as 1.05) exactly, and it offers only 15-16 significant decimal digits of precision. Double does have the advantage of far greater range (it can represent much larger and much smaller numbers), which can come in handy for some computations, particularly some statistical computations.
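A quick sketch of those trade-offs (the exact digits printed can vary slightly by .NET version):

using System;

Console.WriteLine(0.1 + 0.2 == 0.3);           // False: 0.1, 0.2 and 0.3 are not exact in base 2
Console.WriteLine((0.1 + 0.2).ToString("R"));  // 0.30000000000000004
Console.WriteLine(sizeof(double));             // 8 bytes
Console.WriteLine(sizeof(decimal));            // 16 bytes
Console.WriteLine(double.MaxValue);            // roughly 1.8E+308
Console.WriteLine(decimal.MaxValue);           // 79228162514264337593543950335, roughly 7.9E+28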
One answer to your question states that Decimal is a fixed-point type with 4 decimal digits. That is not the case: Decimal stores a scaling factor that can range from 0 to 28 decimal places. If you doubt it, notice that the following line of code prints 0.0000000001:
Console.WriteLine("number={0}", 1m / 10000000000m);
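You can also inspect that floating scale directly with decimal.GetBits, which exposes the 96-bit integer and the scaling factor a Decimal actually stores; a small sketch:

using System;

int[] bits = decimal.GetBits(1m / 10000000000m);
int scale = (bits[3] >> 16) & 0xFF;  // the scale lives in bits 16-23 of the last element
Console.WriteLine(scale);            // 10 here: the decimal point floats, it is not fixed at 4 places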
Having said all of that, it is interesting to note that the most widely used software in the world for working with monetary amounts, Microsoft Excel, uses doubles. Of course, Excel has to jump through a lot of hoops to make that work well, and it still leaves something to be desired. Try these two formulas in Excel:

=1 - 0.9 - 0.1
=(1 - 0.9 - 0.1) * 1

The first displays 0, the second displays roughly -2.77E-17, the true double-precision result. Excel massages near-zero results when adding and subtracting numbers in some cases, but not in all cases; ending the formula with a multiplication keeps that correction from kicking in.
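You can reproduce the underlying arithmetic in C#, since Excel stores its numbers as IEEE 754 doubles too (a small sketch; the exact digits printed depend on the .NET version):

using System;

Console.WriteLine(1.0 - 0.9 - 0.1);    // roughly -2.77E-17: the true double result
Console.WriteLine(1.0m - 0.9m - 0.1m); // 0.0: the same arithmetic is exact with decimal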