There's an interview with C#'s creator, Anders Hejlsberg, that touches on the subject here. Basically, it's exactly what @Marc Gravell said: type safety first, unsafety by explicit declaration.
So to answer your question: nothing in the CLR prevents it; it's a language idiom designed to let you work with safety gloves on when dealing with types. If you want to take the gloves off, you can, but the language makes you state that choice explicitly.
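To make that concrete, here's a minimal sketch of what "explicit declaration" looks like in practice. Pointer code only compiles inside an `unsafe` context, and the assembly itself must be compiled with the `/unsafe` switch (or `<AllowUnsafeBlocks>true</AllowUnsafeBlocks>` in the project file):

    // Requires compiling with /unsafe (or <AllowUnsafeBlocks>true</AllowUnsafeBlocks>).
    class Program
    {
        static void Main()
        {
            int value = 42;

            // Pointer operations must be wrapped in an unsafe context;
            // outside of one, the compiler rejects them.
            unsafe
            {
                int* p = &value;   // take the address of a local
                *p = 100;          // write through the pointer
            }

            System.Console.WriteLine(value); // prints 100
        }
    }

Both gates are deliberate: the keyword marks the risky region in the source, and the compiler switch marks the whole assembly as containing unverifiable code.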
Edit:

> For clarification: I know what "unsafe" and "safe" code is. It's just a question of why must we do all the extra work (ok, not THAT much extra) just to be able to use these features.
As mentioned in the interview I linked, it was an explicit design decision. C# is in many ways an evolution of Java, and Java has no pointers at all. The designers still wanted to allow pointers, but because C# would typically be bringing in Java developers, they felt the default behavior should be similar to Java's, i.e. no pointers, while still allowing the use of pointers by explicit declaration.
So the "extra work" is deliberate to force you to think about what you are doing before you do it. By being explicit, it forces you to at least consider: "Why am I doing this? Do I really need a pointer when a reference type will suffice?"