Software will use memory, no big surprise, but how do you keep that usage to a minimum relative to the size of your program?

The best example, I think, is Firefox. Some users have experienced it and others haven't, but it's pretty safe to say that previous versions of Firefox used much more memory than the current one. Yet functionality keeps expanding and options keep being added. I'd expect memory usage to go up as extra options and features get added.

So in other words, there must be methods by which to make sure your program doesn't use up the memory of the computer.

So, I'm turning this into a "best-practices" question, asking all of you what your little tricks and tweaks are to make your program do what it does with less memory than you'd normally expect. And also, what to most certainly avoid.

A little side-question here: I came across something in a book about C#. Apparently, when declaring an enum, it's possible to specify its underlying type (and therefore its size). With large enums you should probably let the compiler handle it, but for an enum that only holds 2 or 3 items, you can do this:

public enum HTMLTYPE : sbyte
{
    HTML401, XHTML10, XHTML11
}

For those of you who don't know why: the underlying type of any enum in C# defaults to an integer (Int32), so four bytes are reserved for every value. When your enum only defines a handful of members, an integer is a waste of space. The book claimed that this could cut down the amount of memory used by the program. I'm wondering if this is true.
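A quick way to check what the runtime actually records as the storage type, just as a sketch (HTMLTYPE is the example enum from above, DefaultEnum is added here for contrast):

using System;

public enum DefaultEnum { A, B, C }                          // no base type given: defaults to int
public enum HTMLTYPE : sbyte { HTML401, XHTML10, XHTML11 }   // explicitly 1 byte per value

class UnderlyingTypeCheck
{
    static void Main()
    {
        Console.WriteLine(Enum.GetUnderlyingType(typeof(DefaultEnum))); // System.Int32
        Console.WriteLine(Enum.GetUnderlyingType(typeof(HTMLTYPE)));    // System.SByte
    }
}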

EDIT: indeed, it should be memory, darn me. Changed all entries.

+2  A: 

It is true and it isn't true. There is a reason behind using int: the processor handles it naturally (except when running x64 Windows, where Int64 would be more natural). That's because the CPU's general-purpose registers are 32 bits wide (or 64 bits in x64 mode).

Besides, let's face it: .NET isn't exactly about efficiency, in either memory or CPU. There are practices that avoid big mistakes (like using a StringBuilder instead of string concatenation in a loop), but shrinking an enum from 4 bytes to 1 byte isn't worth it.
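To illustrate the StringBuilder point, a rough sketch of the two approaches:

using System.Text;

class ConcatExample
{
    // Each += allocates a brand-new string, so this creates one throwaway string per iteration.
    static string Slow(string[] parts)
    {
        string result = "";
        foreach (string p in parts)
            result += p;
        return result;
    }

    // StringBuilder grows a single internal buffer instead, so far fewer allocations.
    static string Fast(string[] parts)
    {
        var sb = new StringBuilder();
        foreach (string p in parts)
            sb.Append(p);
        return sb.ToString();
    }
}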

Migol
In general, of course, I agree. However, for very large data sets, a 75% reduction in storage size can be worth it. :)
BobbyShaftoe
Uhm, no. I'm developing a VERY large application for an insurance company. Such an optimization over one million objects would save... 3 MB. And it may hurt performance. So it's better to look for savings somewhere else.
Migol
+5  A: 

First, you're probably confusing CPU and RAM (aka memory). CPU is the processor, i.e., what runs your code against your data. Memory is where that code and data are stored.

That enum trick should actually be avoided. First, sbyte isn't CLS-compliant. Second, it can limit future expansion. And then there's the fact that the CPU works with entire words anyway (int on 32-bit architectures, long on 64-bit ones). You lose all that, and for what gain? A few bytes off your memory footprint.

More to the point, follow these wise words: Premature optimization is the root of all evil.

That means, only optimize when it is really required. Measure things first. You'll quite likely realize it's not those three bytes from the enum that need cutting back.
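As a very crude way to "measure first" without a full profiler, something along these lines can already show where the bytes and milliseconds go (the array allocation is just a stand-in for whatever code you suspect):

using System;
using System.Diagnostics;

class MeasureFirst
{
    static void Main()
    {
        long before = GC.GetTotalMemory(true); // true = collect first, for a cleaner baseline
        var sw = Stopwatch.StartNew();

        // ... the code you suspect is expensive; here just a large allocation as a stand-in ...
        var data = new int[1000000];

        sw.Stop();
        long after = GC.GetTotalMemory(true);

        Console.WriteLine("Elapsed: {0} ms", sw.ElapsedMilliseconds);
        Console.WriteLine("Managed heap delta: {0} bytes", after - before);
    }
}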

Martinho Fernandes
I upvoted, even though I am a bit nonplussed by yet another quoting of the whole "premature optimization" thing!
BobbyShaftoe
What is it you find "not right" about Knuth's assertion?
Martinho Fernandes
+4  A: 

One technique is to use lazy initialization to create objects only when you need them.
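A minimal sketch of that pattern (ExpensiveData is a hypothetical costly object; newer framework versions also offer Lazy&lt;T&gt; for the same purpose):

class ReportGenerator
{
    private ExpensiveData _data;

    public ExpensiveData Data
    {
        get
        {
            if (_data == null)            // created on first access, not at construction
                _data = new ExpensiveData();
            return _data;
        }
    }
}

class ExpensiveData
{
    public byte[] Buffer = new byte[10 * 1024 * 1024]; // pretend this is costly to build
}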

Also, be sure to dispose (or set to null) objects that you no longer need, so they can be garbage collected.
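And for the disposal point, the using statement is the usual idiom, e.g.:

using System.IO;

class DisposeExample
{
    static string ReadFirstLine(string path)
    {
        // 'using' guarantees Dispose() runs, releasing the file handle promptly
        // instead of waiting for the garbage collector and finalizer.
        using (var reader = new StreamReader(path))
        {
            return reader.ReadLine();
        }
    }
}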

David
+2  A: 

The best way to save memory is to first write the code in a clean way, so that the design of the application is visible in it.

Then make a separate version of the code that's tweaked for memory. That way you're assured that future releases don't have to start from obfuscated code.

And yes, with more memory and faster CPUs to come, you'll think about such optimizations less and less.

Xolve
+2  A: 

When it comes to optimization, it's usually best to focus on algorithm design first. If you still don't get the performance you want, that's the time to look at micro-optimizations.

A disclaimer though: optimizing for memory usage may not always be a good thing. Unfortunately, a lot of the time you have to make decisions about whether to optimize for time (CPU usage) or space (RAM or disk space). While you can sometimes have your cake and eat it too, it's not always that simple.
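A classic example of that trade-off is memoization: spend memory on a cache to save CPU. A sketch with made-up names:

using System.Collections.Generic;

class MemoizedPricing
{
    private readonly Dictionary<int, decimal> _cache = new Dictionary<int, decimal>();

    public decimal GetPrice(int productId)
    {
        decimal price;
        if (_cache.TryGetValue(productId, out price))
            return price;                      // CPU saved, at the cost of RAM held by the cache

        price = ComputePriceSlowly(productId); // hypothetical expensive computation
        _cache[productId] = price;
        return price;
    }

    private decimal ComputePriceSlowly(int productId)
    {
        // stand-in for real work
        return productId * 1.21m;
    }
}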

A little side-question here: I came across something in a book about C#. Apparently, when declaring an enum, it's possible to specify its underlying type (and therefore its size). With large enums you should probably let the compiler handle it, but for an enum that only holds 2 or 3 items, you can do this:

...

For those of you who don't know why: the underlying type of any enum in C# defaults to an integer (Int32), so four bytes are reserved for every value. When your enum only defines a handful of members, an integer is a waste of space. The book claimed that this could cut down the amount of memory used by the program. I'm wondering if this is true.

I'm not 100% sure about this, but it might not necessarily be true. In fact, it probably isn't true if you're using Mono and deploying the application on other systems. The reason is that different operating systems and processors have different memory-alignment requirements. Thus, even though you declare it as an sbyte, it may get padded out to a 32-bit or 64-bit boundary by the time it actually sits in memory anyway (OS X is particularly picky about memory alignment).

Now, I could be totally missing the point here and the book could have been totally right in this case. But my point here is more to say "it's a little bit more complicated than that" and to point out that such optimizations may not be portable to other platforms (different OSes, processors, and programming languages).
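To make the alignment point a bit more concrete, here is a sketch using plain sbyte fields (standing in for an sbyte-backed enum) and Marshal.SizeOf, which reports the unmanaged layout including padding:

using System;
using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Sequential)]
struct WithPadding
{
    public sbyte Kind;   // 1 byte...
    public int Length;   // ...but this int wants 4-byte alignment, so 3 padding bytes appear before it
}

[StructLayout(LayoutKind.Sequential, Pack = 1)]
struct Packed
{
    public sbyte Kind;
    public int Length;
}

class AlignmentDemo
{
    static void Main()
    {
        Console.WriteLine(Marshal.SizeOf(typeof(WithPadding))); // typically 8
        Console.WriteLine(Marshal.SizeOf(typeof(Packed)));      // 5
    }
}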

Jason Baker
Actually, I'd say that you can USUALLY have your cake and eat it too. Time-vs.-memory trade-offs are the exception; in most cases a smaller memory footprint means fewer cache misses and results in faster execution as well (sometimes dramatically so).
Michael Borgwardt
+2  A: 

All of these are good algorithmic approaches to the memory problem, but in practice you also want to run your code through a profiler to see where the memory and CPU resources are actually being used.

John Ellinwood
+1  A: 

There is a good online book about this:

http://www.cix.co.uk/~smallmemory/

A: 

Try to use references (ref in C#, & or pointers in C++).

Let's take an example:

I have an array of 1,000,000 elements (pretty large, and it takes a lot of memory). I'm just explaining a concept; the exact syntax might not be right.

int[] largeArray = new int[1000000];

Let's say I pass this array to a method in another class. Let this be the method:

void DoSomethingWithMyArray(int[] a) { 
//... 
}

The worry here is that a temporary copy of largeArray gets made for use inside the method, which would mean holding 2 × 1,000,000 elements in memory for a short period. For small things this is not a problem, but imagine working with big lists; that could be painful!

A solution is to pass the parameter as a reference using the keyword ref:

void DoSomethingWithMyArray(ref int[] a) {
//...
}

This is a way to save memory, but use it sparingly, because ref parameters can make code harder to follow and cause trouble.
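One caveat worth adding: in C#, an array is already a reference type, so passing it without ref only copies a small reference, never the million elements; ref mainly matters when the method should be able to reassign the caller's variable. Where by-reference passing genuinely avoids a big copy is with large value types (structs), as in this sketch:

struct BigValue
{
    // A value type this large is copied in full on every ordinary method call.
    public long A, B, C, D, E, F, G, H;
}

class RefExample
{
    // Copies all 64 bytes of the struct into the parameter.
    static long SumByValue(BigValue v)
    {
        return v.A + v.H;
    }

    // Passes only a reference to the caller's struct; the 64 bytes are not copied.
    static long SumByRef(ref BigValue v)
    {
        return v.A + v.H;
    }

    static void Main()
    {
        var big = new BigValue { A = 1, H = 2 };
        System.Console.WriteLine(SumByValue(big));
        System.Console.WriteLine(SumByRef(ref big));
    }
}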

juFo