I have a table of data with survey results, and I want to do certain calculations on this data. The data structure is somewhat like this (ignore that all the data is identical; I just copy-pasted the rows):
____________________________________________________________________________________
| group |individual | key | key | key |
| | |subkey|subkey|subkey|subkey|subkey|subkey|subkey|subkey|subkey|
| | |q|q|q |q |q |q|q|q |q|q|q |q |q |q|q|q |q|q|q |q |q |q|q|q |
|-------|-----------|-|-|--|--|---|-|-|--|-|-|--|--|---|-|-|--|-|-|--|--|---|-|-|--|
| 1 | 0001 |1|7|5 |1 |3 |1|4|1 |1|7|5 |1 |3 |1|4|1 |1|7|5 |1 |3 |1|4|1 |
| 1 | 0002 |1|7|5 |1 |3 |1|4|1 |1|7|5 |1 |3 |1|4|1 |1|7|5 |1 |3 |1|4|1 |
| 1 | 0003 |1|7|5 |1 |3 |1|4|1 |1|7|5 |1 |3 |1|4|1 |1|7|5 |1 |3 |1|4|1 |
| 2 | 0004 |1|7|5 |1 |3 |1|4|1 |1|7|5 |1 |3 |1|4|1 |1|7|5 |1 |3 |1|4|1 |
| 2 | 0005 |1|7|5 |1 |3 |1|4|1 |1|7|5 |1 |3 |1|4|1 |1|7|5 |1 |3 |1|4|1 |
| 3 | 0006 |1|7|5 |1 |3 |1|4|1 |1|7|5 |1 |3 |1|4|1 |1|7|5 |1 |3 |1|4|1 |
| 4 | 0007 |1|7|5 |1 |3 |1|4|1 |1|7|5 |1 |3 |1|4|1 |1|7|5 |1 |3 |1|4|1 |
------------------------------------------------------------------------------------
Excuse my poor ASCII skills...
So, every individual belongs to a group and has answered some questions. These questions are always grouped into keys and subkeys.
Is there any simple method to calculate averages, deviations and the like based on these groupings? Something like:

public float getAverage(int key, int individual);
float avg = getAverage(5, 7);
I think what I'm really asking is: what would be the best way to structure this data in C# to make it as easy as possible to work with? I have started making classes for every entity, but I got confused somewhere along the way and something stopped working. So before I continue down this path, I was wondering if there are other, better ways of doing this.
(Every individual can also have descriptive variables, like age group and such, but that's not important for the base functionality.)
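
To make it concrete, this is roughly the kind of structure I have in mind (just a sketch, not our actual code; class and property names like Answer, Individual and Survey are my own placeholders):

using System;
using System.Collections.Generic;
using System.Linq;

// One answered question: which key/subkey it belongs to and the value given.
public class Answer
{
    public int Key { get; set; }
    public int Subkey { get; set; }
    public int Question { get; set; }
    public int Value { get; set; }
}

// One respondent: the group it belongs to and all its answers.
public class Individual
{
    public int Id { get; set; }
    public int Group { get; set; }
    public string AgeGroup { get; set; } // optional descriptive variable
    public List<Answer> Answers = new List<Answer>();
}

// Holds the whole survey in memory and does the aggregate calculations.
public class Survey
{
    public List<Individual> Individuals = new List<Individual>();

    // Average of one individual's answers under a given key.
    public double GetAverage(int key, int individualId)
    {
        return Individuals
            .Single(i => i.Id == individualId)
            .Answers
            .Where(a => a.Key == key)
            .Average(a => a.Value);
    }

    // Average of all answers under a given key for a whole group.
    public double GetGroupAverage(int key, int group)
    {
        return Individuals
            .Where(i => i.Group == group)
            .SelectMany(i => i.Answers)
            .Where(a => a.Key == key)
            .Average(a => a.Value);
    }
}

Is something like this reasonable, or is there a more natural structure for this kind of data?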
Our current solution does all the calculations inline in the SQL queries when requesting the data from the database. This works, but it is slow, and the number of queries equals questions * individuals + keys * individuals, which can add up to a lot of individual queries.
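
What I would like instead is to fetch the raw answers once and do the aggregation in memory, something along these lines (again just a sketch, assuming the Survey/Answer classes above; the deviation here is the population standard deviation):

// "survey" is a loaded Survey instance (see sketch above).
// Averages and standard deviations per group and key, computed in one pass
// over the in-memory data instead of one query per individual and question.
var stats =
    from i in survey.Individuals
    from a in i.Answers
    group a by new { i.Group, a.Key } into g
    let avg = g.Average(x => x.Value)
    select new
    {
        g.Key.Group,
        g.Key.Key,
        Average = avg,
        StdDev = Math.Sqrt(g.Average(x => Math.Pow(x.Value - avg, 2)))
    };

foreach (var s in stats)
    Console.WriteLine("group {0}, key {1}: avg {2:F2}, stddev {3:F2}",
                      s.Group, s.Key, s.Average, s.StdDev);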
Any suggestions?