Working in an embedded (PIC) environment, programming in C.
I have to keep track of 1000 values for a variable (history) and return the moving average of those values. I'm just wondering whether it would be more efficient, in terms of speed, ROM, and RAM usage, to use an array or 1000 separate 16-bit variables. Is there a definitive answer to that, or would I have to just try both and see what works best?
Thanks.
EDIT: Hmm... I already ran into another problem: the compiler limits me to a maximum array size of 256 elements.
EDIT2:
For clarification... the code fires about 1000 times a second. Each time, a value for history[n] (or history_n) is calculated and stored. Each time, I need to calculate the average of the 1000 most recent history values (including the current one). So (history[999] + history[998] + ... + history[1] + history[0]) / 1000,
or something to that effect. Obviously, each time I need to kick out the oldest value and add the newest.
EDIT3:
I've re-worked the code so that the 256-element array limit is no longer an issue. A sample size of around 100 is now sufficient.