The title is a bit confusing, so I will explain a bit more using examples. Just a note: I am parsing a file format. Say we have a structure like this:
struct example
{
    typeA a;
    typeB b;
    typeX x;
    typeY y;
    typeZ z;
};
So far it's OK. Now the problem is that typeX, typeY and typeZ can vary in size: depending on flags in the file header (metadata), they can be either two or four bytes large. Another thing is that there are several more structures like this (about 40). Each of them uses typeX, typeY and typeZ; some use all of them, some just one or two. And finally, most of them are optional, so there might be just four or five structures used, or 20 or 30...
I would like to know if anyone has an idea how to store such a varying set of data. I thought about using templates, but I don't know if that's the right way.
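To show what I mean by templates, here is a rough sketch: each structure would be parameterized on the concrete integer types picked from the header flags. The field names and the fixed types used for a and b are placeholders, not the real format:

```cpp
#include <cstdint>

// Sketch of the template idea: the variable-size fields become type
// parameters, so one definition covers both the 2-byte and 4-byte layouts.
template <typename TypeX, typename TypeY, typename TypeZ>
struct Example
{
    std::uint32_t a;  // typeA, fixed size (placeholder)
    std::uint8_t  b;  // typeB, fixed size (placeholder)
    TypeX x;          // 2 or 4 bytes, chosen by the header flags
    TypeY y;
    TypeZ z;
};
```

The downside I see is that the flags are only known at runtime, so the reading code would need a dispatch over all type combinations (up to 2^3 = 8 per structure), which is why I'm not sure it's the right way.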
EDIT: to clarify more: memory is not a big issue, so I can probably afford to waste a bit of space. If typeX is four bytes, then it is four bytes for all structures. However, the types are not synced, so typeX can be 4 bytes while typeZ is 2. Most structures might be used multiple times, so there can be 50 example1 structures, 10 example2 structures, etc.