You can omit the `using` around the `SqlCommand`; the GC will eventually clean it up for you. However, I strongly advise you not to do this, and I will explain why.
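For reference, this is the pattern I'm advocating: wrap the command (and the connection) in `using` blocks so `Dispose` is always called. A minimal sketch, where the connection string and query are just placeholders:

```csharp
using System.Data.SqlClient; // or Microsoft.Data.SqlClient in newer projects

public static class Example
{
    // connectionString and query are placeholders for your own values
    public static void Run(string connectionString, string query)
    {
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(query, connection))
        {
            connection.Open();
            command.ExecuteNonQuery();
            // Dispose runs on both objects here, even if an exception is thrown
        }
    }
}
```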
`SqlCommand` indirectly inherits from `System.ComponentModel.Component`, and therefore inherits its finalizer. Not disposing a `SqlCommand` means the command will be promoted at least one generation after it goes out of scope (the .NET garbage collector is a generational GC). For instance, if the command was in gen 1, it will move on to gen 2. Finalizable objects are kept in memory longer to ensure the finalizer can run safely. And it is not just the command itself that is kept in memory: all the objects it references go with it to that generation. These include the `SqlConnection`, the list of `SqlParameter` objects, the possibly large `CommandText` string, and many other internal objects. That memory can only be reclaimed when that generation is collected, and the higher the generation, the less often it is swept.
Not calling `Dispose` therefore adds extra memory pressure and extra work for the finalizer thread.
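The reason calling `Dispose` avoids this cost is that the standard dispose pattern (which, as far as I know, `Component` follows as well) calls `GC.SuppressFinalize`, so the GC no longer needs to keep the object alive for the finalizer thread. A simplified sketch of that pattern:

```csharp
using System;

public class MyResource : IDisposable
{
    ~MyResource()
    {
        // runs on the finalizer thread only if Dispose was never called
        Dispose(false);
    }

    public void Dispose()
    {
        Dispose(true);
        // tell the GC it no longer has to keep this object around for finalization
        GC.SuppressFinalize(this);
    }

    protected virtual void Dispose(bool disposing)
    {
        if (disposing)
        {
            // release managed resources here
        }
        // release unmanaged resources (if any) here
    }
}
```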
When .NET is unable to allocate new memory, the CLR will force a garbage collection of all generations. After this, the runtime usually has enough space again to allocate new objects. However, when this forced collection happens while there are a lot of objects in memory that still have to be promoted to the next generation (because they are finalizable, or are referenced by a finalizable object), it is possible that the CLR cannot free up enough memory. The result is an `OutOfMemoryException`.
I must admit I have never seen this happen merely because developers didn't dispose their `SqlCommand` objects. However, I have seen plenty of OOMs in production systems caused by not disposing objects properly.
I hope this gives a bit of background on how the GC works and what the risk is of not disposing (finalizable) objects properly. I always dispose all disposable objects. While a look in Reflector could prove that this is not strictly necessary for a certain type, that kind of programming leads to code that is less maintainable and makes the code depend on the internal behavior of a type (behavior that might change in the future).