The primary driver for this functionality was to support SQL Server's stringent requirements for integrating the CLR into SQL Server 2005. The deep integration was published as a hosting API, probably so that others could use it and likely for legal reasons, but the technical requirements were SQL Server's. Remember that in SQL Server, MTBF is measured in months, not hours, and having the process restart because of an unhandled exception is completely unacceptable.
This MSDN Magazine article is probably the best one I've seen describing the technical requirements the constrained execution environment was built for.
The ReliabilityContract attribute is used to decorate your methods to indicate how they behave with respect to potentially asynchronous exceptions (ThreadAbortException, OutOfMemoryException, StackOverflowException). A constrained execution region is defined as a catch, finally, or fault section of a try block that is immediately preceded by a call to System.Runtime.CompilerServices.RuntimeHelpers.PrepareConstrainedRegions().
System.Runtime.CompilerServices.RuntimeHelpers.PrepareConstrainedRegions();
try
{
    // this is NOT constrained
}
catch (Exception)
{
    // this IS a CER
}
finally
{
    // this IS ALSO a CER
}
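As a concrete (if contrived) illustration of how the attribute and the region fit together, here's a minimal sketch; the StateHolder class and its Release method are made up for illustration, with Release carrying a ReliabilityContract so it can safely be called from the finally-block CER:

using System;
using System.Runtime.CompilerServices;
using System.Runtime.ConstrainedExecution;

class StateHolder
{
    private bool acquired;

    // Hypothetical cleanup helper: the contract says it won't corrupt state and
    // will always succeed, which is what allows it to run inside a CER.
    [ReliabilityContract(Consistency.WillNotCorruptState, Cer.Success)]
    private void Release()
    {
        acquired = false;
    }

    public void DoWork()
    {
        RuntimeHelpers.PrepareConstrainedRegions();
        try
        {
            acquired = true;
            // ... work that may allocate and therefore fail with OutOfMemoryException ...
        }
        finally
        {
            // CER: Release() was prepared up front when the region was set up,
            // so calling it here can't trigger a JIT-time allocation failure.
            if (acquired)
            {
                Release();
            }
        }
    }
}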
When a method with a ReliabilityContract is used from within a CER, two things happen. The method is pre-prepared (JIT-compiled up front), so executing it for the first time won't invoke the JIT compiler, which could itself try to allocate memory and cause its own exceptions. Also, while inside a CER the runtime promises not to throw a ThreadAbortException; it will wait to deliver the abort until after the CER has completed.
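To illustrate the pre-preparation half of that: when the CER's static analysis can't see a call target (for example, a call through a delegate), you have to prepare it yourself with RuntimeHelpers.PrepareDelegate. A minimal sketch, with a made-up Cleanup/Rollback pair standing in for real cleanup logic:

using System;
using System.Runtime.CompilerServices;
using System.Runtime.ConstrainedExecution;

class Cleanup
{
    private Action rollback;

    // Hypothetical rollback routine; the contract lets the runtime agree to run it in a CER.
    [ReliabilityContract(Consistency.WillNotCorruptState, Cer.Success)]
    private static void Rollback()
    {
        // undo partial state changes here
    }

    public void Run()
    {
        rollback = Rollback;

        // The CER's static analysis can't see through a delegate call, so the
        // target has to be prepared (JIT-compiled eagerly) explicitly.
        RuntimeHelpers.PrepareDelegate(rollback);

        RuntimeHelpers.PrepareConstrainedRegions();
        try
        {
            // ... work that may fail with an asynchronous exception ...
        }
        finally
        {
            rollback();   // safe: already prepared, no JIT work needed inside the CER
        }
    }
}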
So, back to your question: I'm still trying to come up with a simple code sample that answers it directly. As you may have guessed, though, even the simplest sample requires quite a lot of code given the asynchronous nature of the problem, and it will likely be SQLCLR code, because that's the environment where CERs provide the most benefit.