
I've got a part of my memory that I want to dump to a file. One reason is to save the information somewhere, and another is to read it again when my program restarts.

What is the proper way of doing this?

My first thought was:

char* start = my_pointer;
int i;
for (i = 0; i < MEMORY_SIZE; i++) {
    fputc(*start, f);   // write one byte to the file
    start++;
}

Can I write it all as characters? And then use something like this to restore it to memory.

//loop
    *my_pointer = fgetc(f);
    my_pointer++;

Will my "data structures" survive as "characters", or do I need to write it in some sort of binary / hex data mode? Or is there a standard way of doing this?

+2  A: 

If you're on a Unix-style system, mmap and memcpy might give you a neat solution.
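
A rough sketch of that idea, assuming a POSIX system (the function and file names here are illustrative, not from the question):

#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/types.h>
#include <unistd.h>

/* Sketch: dump "size" bytes starting at "src" into the file at "path"
 * by growing the file and copying into a shared mapping of it. */
static int dump_with_mmap(const char *path, const void *src, size_t size)
{
    int fd = open(path, O_RDWR | O_CREAT | O_TRUNC, 0644);
    if (fd == -1)
        return -1;

    /* The file must be large enough to back the mapping. */
    if (ftruncate(fd, (off_t)size) == -1) {
        close(fd);
        return -1;
    }

    void *map = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (map == MAP_FAILED) {
        close(fd);
        return -1;
    }

    memcpy(map, src, size);   /* copy the memory region into the file */
    munmap(map, size);
    close(fd);
    return 0;
}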

Michael Anderson
+2  A: 

You can use

size_t fwrite ( const void * ptr, size_t size, size_t count, FILE * stream );

function.

ptr - pointer to your memory segment.
size - size of each element to write.
count - number of elements to write.
stream - the file you are writing to.

Will my "data structures" survive as "characters", or do I need to write it in some sort of binary / hex data mode? Or is there a standard way of doing this?

When you open the file, include the 'b' character in the "mode" parameter (e.g. "wb" for writing, "rb" for reading).
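
A short sketch of that (the function and path names are made up for illustration):

#include <stdio.h>

/* Sketch: write "size" bytes starting at "ptr" to "path" in binary mode.
 * Returns 0 on success, -1 on failure. */
static int dump_region(const char *path, const void *ptr, size_t size)
{
    FILE *f = fopen(path, "wb");              /* 'b' => binary mode */
    if (f == NULL)
        return -1;

    size_t written = fwrite(ptr, 1, size, f); /* "size" elements of 1 byte each */
    fclose(f);
    return (written == size) ? 0 : -1;
}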

Artem
+4  A: 

This problem is called "serializing" and can range from trivial to really complicated. If your data structure is self-contained, for instance a bunch of pixels in an array whose dimensions you know, you can just dump the data out and then read it back.

If you have, for instance, linked lists or pointers of any kind in your data, then those pointers will not point to anything valid once you read them back. This is where a more formal approach to serializing starts to make sense.

This can range from saving to a custom file format, using a database, or converting to XML or another hierarchical format. Which solution is acceptable depends completely on what kind of data you have, what operations you perform on it, and how often you plan to write it out and read it back from disk. (Or the network. Or whatever you are doing.)

If what you have is a trivial blob of data, and you just want to write it out the simplest way possible, use fwrite():

fwrite(my_pointer, MEMORY_SIZE, 1, fp);

and then fread() to read the data back. Also see a related serializing question on StackOverflow (more or less related, depending on how advanced your needs are).
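
A matching read-back might look like this (a sketch only, assuming the buffer already points at MEMORY_SIZE bytes of valid storage, as in the question):

#include <stdio.h>

/* Sketch: read "size" bytes from "path" back into "buf".
 * "buf" must already point at "size" bytes of valid storage. */
static int restore_region(const char *path, void *buf, size_t size)
{
    FILE *fp = fopen(path, "rb");         /* binary mode again */
    if (fp == NULL)
        return -1;

    size_t got = fread(buf, size, 1, fp); /* one element of "size" bytes */
    fclose(fp);
    return (got == 1) ? 0 : -1;
}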

Proper serialization also solves the problems that appear when different kinds of CPUs need to be able to read each other's data. Proper serialization in C is a lot more complicated than in other languages. In Lisp, for instance, all data and code is already serialized. In Java, there are methods to help you serialize your objects. The properties of C that make it a suitable language for high-performance and systems programming also make it tougher to use for some other things.

Amigable Clark Kant
I think I'm going to re-program a bit to make it simpler and easier to dump/restore.
kristus
+1  A: 

The proper way of doing that is to use a serialisation library.

Whether you really need that depends on the complexity of your data. If the data you need to write out does not contain any pointers of any kind, then you could just use fwrite to write the data out and fread to read it back in. Just make sure that you have opened the file with the data in binary mode.

If the data to serialise contains pointers, you are way better off using an external library written for this purpose, as the library will ensure that the pointers get written in such a way that they can be properly reconstructed later.

Bart van Ingen Schenau
+2  A: 

As long as the data you are dumping out contains no pointers, just dumping it out like that will work. (HINT: use the calls that write long sequences of data in one go to save time.) The only thing to watch out for is if you're writing out integers or floating-point numbers and reading them back in on a machine with a different architecture (e.g., big-endian instead of little-endian). That might or might not be a concern for you.
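
If cross-architecture reads do matter, one common approach is to write multi-byte values one byte at a time in a fixed order; a minimal sketch (the function name is made up):

#include <stdint.h>
#include <stdio.h>

/* Sketch: write a 32-bit value in a fixed (little-endian) byte order,
 * so it reads back identically regardless of the host's endianness. */
static int write_u32_le(FILE *f, uint32_t v)
{
    unsigned char bytes[4] = {
        (unsigned char)( v        & 0xFF),
        (unsigned char)((v >> 8)  & 0xFF),
        (unsigned char)((v >> 16) & 0xFF),
        (unsigned char)((v >> 24) & 0xFF),
    };
    return (fwrite(bytes, 1, sizeof bytes, f) == sizeof bytes) ? 0 : -1;
}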

But if you've got pointers inside, you've got a problem. The problem is that you cannot (well, cannot easily) guarantee that you'll get the data loaded back at the same position in the receiving process's virtual memory space. What's more, if you have data that has pointers to things that you're not saving (e.g., a stray FILE*) then you've got to think about what to do to resynthesize a valid replacement at that point. Such serialization is deeply non-trivial, and requires writing code that has knowledge of exactly what you're saving and loading.

There is a way to simplify serialization a little when the only pointers you have point within the contiguous block of data being saved and you will always restore on the same architecture. Dump out the memory as before, but prepend a descriptor that records at least the length of the data and the number of pointers within it, and also save (at the end) a table of exactly where the pointers are, as offsets within the data, together with the address the data originally started at. You can then restore by reading the data in and performing address arithmetic to correct all the pointers: work out what offset relative to the start of the original data each one was pointing to (as a char*, not the original type) and make it point to the same offset relative to the address of the reloaded block. This is a somewhat gross hack and formally not the most portable thing ever, but within the constraints outlined at the beginning of this paragraph I'd expect it to work. However, you'll also have a really non-portable serialization format; do not count on it at all for any sort of persistent archival use!
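
A rough sketch of that fixup step (every name and the on-disk layout here are assumptions for illustration; it only works for pointers that point inside the saved block, restored on the same architecture):

#include <stdint.h>
#include <string.h>

/* Illustrative header saved in front of the raw data. */
struct dump_header {
    size_t    data_len;   /* number of bytes of raw data            */
    size_t    n_ptrs;     /* number of internal pointers to fix up  */
    uintptr_t old_base;   /* address the data started at when saved */
};

/* After reading the header, the data block, and the offset table back in,
 * rewrite each pointer field so it points into the newly loaded block. */
static void fix_pointers(char *data, const struct dump_header *hdr,
                         const size_t *ptr_offsets)
{
    for (size_t i = 0; i < hdr->n_ptrs; i++) {
        uintptr_t old_value;
        memcpy(&old_value, data + ptr_offsets[i], sizeof old_value);

        /* The offset the pointer had within the original block... */
        uintptr_t offset = old_value - hdr->old_base;

        /* ...becomes the same offset within the reloaded block. */
        uintptr_t new_value = (uintptr_t)data + offset;
        memcpy(data + ptr_offsets[i], &new_value, sizeof new_value);
    }
}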

Donal Fellows