views:

737

answers:

4

What are the consequences (positive and negative) of using the unsafe keyword in C# to work with pointers? For example: what becomes of garbage collection, what are the performance gains or losses, how do those gains or losses compare to manual memory management in other languages, what are the dangers, in which situations is it really justifiable to use this language feature, does it take longer to compile...?

+2  A: 

To quote Professional C# 2008:

"The two main reasons for using pointers are:

  1. Backward compatibility - Despite all of the facilities provided by the .NET runtime, it is still possible to call native Windows API functions, and for some operations this may be the only way to accomplish your task. These API functions are generally written in C and often require pointers as parameters. However, in many cases it is possible to write the DllImport declaration in a way that avoids use of pointers; for example, by using the System.IntPtr class.
  2. Performance - On those occasions where speed is of the utmost importance, pointers can provide a route to optimized performance. If you know what you are doing, you can ensure that data is accessed or manipulated in the most efficient way. However, be aware that, more often than not, there are other areas of your code where you can make the necessary performance improvements without resorting to pointers. Try using a code profiler to look for bottlenecks in your code - one comes with Visual Studio 2008."

Also, if you use pointers your code will require a higher level of trust to execute, and if the user does not grant that trust, your code will not run.

And to wrap it up with a last quote:

"We strongly advice against using pointers unnecessarily because it will not only be harder to write and debug, but it will also fail the memory type-safety checks imposed by the CLR."

Oskar Kjellin
Can you post the link(s) for your reference?
Alerty
It's a book that I've bought: http://www.wrox.com/WileyCDA/WroxTitle/Professional-C-2008.productCd-0470191376.html
Oskar Kjellin
Okay! Thank you :D
Alerty
NP! Perhaps read this: http://www.c-sharpcorner.com/UploadFile/gregory_popek/WritingUnsafeCode11102005040251AM/WritingUnsafeCode.aspx
Oskar Kjellin
+4  A: 

I can give you a situation where it was worth using:

I had to generate a bitmap pixel by pixel. System.Drawing.Bitmap.SetPixel() is way too slow, so I built my own managed array of bitmap data and used unsafe to get the IntPtr for the Bitmap(Int32, Int32, Int32, PixelFormat, IntPtr) constructor; see the sketch below.
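A minimal sketch of that approach (the details here are illustrative, not the original code, and assume 32bpp ARGB pixels packed into an int[]):

    using System;
    using System.Drawing;
    using System.Drawing.Imaging;

    static class FastBitmap
    {
        // Pin the managed pixel array, wrap it in a Bitmap via the scan0
        // pointer, and clone the result so it outlives the pinned block.
        public static unsafe Bitmap FromArgbPixels(int[] pixels, int width, int height)
        {
            fixed (int* scan0 = pixels) // the GC cannot move 'pixels' in here
            {
                using (var wrapper = new Bitmap(width, height, width * 4,
                                                PixelFormat.Format32bppArgb,
                                                (IntPtr)scan0))
                {
                    // 'wrapper' points into the pinned array, so copy it
                    // before the fixed block ends and the array can move.
                    return new Bitmap(wrapper);
                }
            }
        }
    }

This needs the /unsafe compiler switch, and note that this Bitmap constructor does not copy the buffer - which is why the result is cloned before the pin is released.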

Conrad Albrecht
Yes, bitmaps still don't have a fast 'managed' API.
Henk Holterman
+11  A: 

As already mentioned by Conrad, there are some situations where unsafe access to memory in C# is useful. There aren't many of them, but there are some:

  • Manipulating Bitmap data is probably the typical example of a situation where you need some additional performance that you can get by using unsafe.

  • Interoperability with older APIs (such as the WinAPI or native C/C++ DLLs) is another area where unsafe can be quite useful - for example, you may want to call a function that takes or returns an unmanaged pointer.

On the other hand, you can write most of these things using the Marshal class, which hides many of the unsafe operations inside method calls. This will be a bit slower, but it is an option if you want to avoid using unsafe (or if you're using VB.NET, which doesn't have unsafe).
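For illustration, a minimal sketch (the helper name is invented) of the Marshal-based alternative - no unsafe blocks, at the cost of an extra copy:

    using System;
    using System.Runtime.InteropServices;

    static class MarshalSketch
    {
        // Copies a managed array into unmanaged memory without 'unsafe';
        // the Marshal class does the pointer work internally.
        public static IntPtr CopyToUnmanaged(byte[] data)
        {
            IntPtr buffer = Marshal.AllocHGlobal(data.Length); // unmanaged allocation
            Marshal.Copy(data, 0, buffer, data.Length);        // managed -> unmanaged copy
            return buffer; // caller must release it with Marshal.FreeHGlobal(buffer)
        }
    }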

Positive consequences: So, the main positive consequences of the existence of unsafe in C# are that you can write some code more easily (interoperability) and some code more efficiently (manipulating bitmap data, or perhaps some heavy numeric calculations using arrays - although I'm not so sure about the second one).

Negative consequences: Of course, there is some price that you have to pay for using unsafe:

  • Non-verifiable code: C# code written using the unsafe features becomes non-verifiable, which means that the runtime can no longer prove that your code won't compromise it. This isn't a big problem in a full-trust scenario (e.g. an unrestricted desktop app) - you just don't have all the nice .NET CLR guarantees. However, you cannot run the application in a restricted environment such as public web hosting, Silverlight, or partial trust (e.g. an application running from the network).

  • The garbage collector also needs to be careful when you use unsafe. The GC is normally allowed to relocate objects on the managed heap (to keep the memory compact). When you take a pointer to some object, you need to use the fixed keyword to tell the GC that it cannot move the object until you finish - see the sketch after this list - which can affect the performance of garbage collection, depending on the exact scenario.
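A minimal sketch of the fixed statement (the method itself is made up for illustration):

    static class FixedSketch
    {
        // Pins 'values' for the duration of the fixed block, so the raw
        // pointer stays valid even if a garbage collection happens meanwhile.
        static unsafe int Sum(int[] values)
        {
            int sum = 0;
            fixed (int* p = values) // the GC cannot relocate 'values' here
            {
                for (int i = 0; i < values.Length; i++)
                    sum += p[i]; // unchecked pointer indexing
            }
            return sum;
        }
    }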

My guess is that if C# didn't have to interoperate with older code, it probably wouldn't support unsafe at all (and research projects like Singularity, which attempt to create a more verifiable operating system based on managed languages, definitely disallow unsafe code). However, in the real world, unsafe is useful in some (rare) cases.

Tomas Petricek
A: 

Garbage collection is inefficient with long-lived objects. .NET's garbage collector works best when most objects are released rather quickly and the remaining few "live forever." The problem is that longer-living objects are only released during full garbage collections, which incur a significant performance penalty. In essence, long-living objects quickly move into generation 2.

(For more information, you might want to read up on .Net's generational garbage collector: http://msdn.microsoft.com/en-us/library/ms973837.aspx)
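A small demonstration of that promotion behaviour (my own snippet, not from the linked article; the printed values assume no other collections intervene):

    using System;

    class GenerationDemo
    {
        // Surviving a collection moves an object to an older generation;
        // gen-2 objects are only reclaimed by expensive full collections.
        static void Main()
        {
            var data = new byte[1000];
            Console.WriteLine(GC.GetGeneration(data)); // 0: freshly allocated
            GC.Collect();
            Console.WriteLine(GC.GetGeneration(data)); // 1: survived one collection
            GC.Collect();
            Console.WriteLine(GC.GetGeneration(data)); // 2: now only a full GC frees it
        }
    }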

In situations where objects, or memory use in general, are going to be long-lived, manual memory management can yield better performance, because the memory can be released to the system without requiring a full garbage collection.

Implementing some kind of memory management system based around a single large byte array, structs, and lots of pointer arithmetic could theoretically increase performance in situations where data will be stored in RAM for a long time; a sketch of the idea follows.
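A hypothetical sketch of that idea (the Record layout and all names are invented for illustration):

    using System;

    struct Record { public int Id; public float Value; }

    static class ArenaSketch
    {
        // Long-lived data lives in one big byte[]: the GC tracks a single
        // array instead of many objects, and records are read through
        // pointer arithmetic over the pinned buffer.
        static unsafe Record ReadRecord(byte[] arena, int index)
        {
            fixed (byte* basePtr = arena) // pin the backing array
            {
                Record* records = (Record*)basePtr;
                return records[index];    // struct copy, no heap allocation
            }
        }
    }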

Unfortunately, I'm not aware of a good way to do manual memory management in .NET for objects that are going to be long-lived. This basically means that applications that keep long-lived data in RAM will periodically become unresponsive when they run a full garbage collection of all of that memory.