Imagine collecting all of the world's high-school students' grades each month into a single table. In each student's record, you're required to include the final average for the subject across the student's class, city, and country. This could be done in a post-processing step, but your boss says it has to happen during data collection.
Constraint: the rows are written to a flat file and then bulk-inserted into the new table.
What would be a good strategy or design pattern for holding on to the several hundred thousand averages until the table is complete, without adding excessive memory/processing overhead to the JVM or the RDBMS? Any ideas would be helpful.
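For illustration, here is roughly the kind of accumulator I'm picturing (a minimal sketch; class and key names like `AverageAccumulator` and `scopeKey` are placeholders, not existing code). It shows the chicken-and-egg problem: the averages are only final after the last row has been collected, yet every row is supposed to carry them.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: one running (sum, count) per (subject, scope) key,
// e.g. "MATH|CLASS|12-B", "MATH|CITY|Toronto", "MATH|COUNTRY|CA".
public class AverageAccumulator {

    private static final class Stat {
        double sum;
        long count;
        double average() { return count == 0 ? 0.0 : sum / count; }
    }

    private final Map<String, Stat> stats = new HashMap<>();

    // Called once per grade as rows are collected.
    public void accept(String subject, String scopeKey, double grade) {
        Stat s = stats.computeIfAbsent(subject + "|" + scopeKey, k -> new Stat());
        s.sum += grade;
        s.count++;
    }

    // Only meaningful after all rows for the month have been seen,
    // which is exactly the problem described above.
    public double averageFor(String subject, String scopeKey) {
        Stat s = stats.get(subject + "|" + scopeKey);
        return s == null ? 0.0 : s.average();
    }
}
```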
Note: because the table is read-only once loaded, we add a clustered index to it on completion.