I am looking to optimize reading and writing of huge amounts of data for a C++ simulation application. The data, termed a "map", essentially consists of integers, doubles, floats, and a single enum. The majority of this map data is fixed in size, but a small part of it can vary in size (from a few KB to several KB). A large number of such maps (typically millions) are computed once at the start of the application and then stored in a single binary file that is parsed at each simulation time-step.
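For reference, a single map record looks roughly like the sketch below. The field names and counts are placeholders, not my actual types; the point is the mix of fixed-size fields plus a variable-size tail:

```cpp
#include <cstdint>
#include <vector>

// Hypothetical layout of one "map" record -- field names and counts
// are illustrative placeholders, not the real application types.
enum class MapKind : std::int32_t { TypeA, TypeB, TypeC };

struct MapRecord {
    // Fixed-size part: integers, doubles, floats, and a single enum.
    std::int64_t id;
    double       scale;
    float        weight;
    MapKind      kind;

    // Variable-size part: ranges from a few KB to several KB per map.
    std::vector<double> payload;
};
```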
Since there are a few million maps, parsing this binary file is quite slow, with fseek and fread being the major bottlenecks. I am looking for an alternative approach.
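The current parsing code follows roughly this pattern (simplified, with error handling omitted; the real record layout differs, but the per-map fseek-plus-fread sequence is what dominates the profile):

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

// Simplified version of the current per-map read loop: each map is
// located by an offset, then read with one fseek and two small freads
// (variable-part size, then the payload) -- repeated millions of times.
std::vector<double> readMapPayload(std::FILE* f, long offset)
{
    std::fseek(f, offset, SEEK_SET);           // jump to this map's record

    std::uint32_t payloadCount = 0;            // length of the variable part
    std::fread(&payloadCount, sizeof payloadCount, 1, f);

    std::vector<double> payload(payloadCount);
    std::fread(payload.data(), sizeof(double), payloadCount, f);
    return payload;
}
```

Each map read therefore costs at least one seek and two small reads, and it is this per-map overhead I would like to eliminate.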
Any pointers?