I assume you mean BinaryFormatter; it depends ;-p
The purpose of serialization is to express a complex in-memory object as a simple sequence of bytes (or depending on the serializer - characters, etc) that can be re-hydrated at the other end to re-create the object.
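To make that concrete, here's a minimal sketch of that round trip with BinaryFormatter (the Person type is purely illustrative):

```csharp
using System;
using System.IO;
using System.Runtime.Serialization.Formatters.Binary;

[Serializable]
class Person
{
    public string Name;
    public int Age;
}

class Program
{
    static void Main()
    {
        var original = new Person { Name = "Fred", Age = 30 };
        var formatter = new BinaryFormatter();
        using (var ms = new MemoryStream())
        {
            formatter.Serialize(ms, original);            // object -> bytes
            ms.Position = 0;
            var copy = (Person)formatter.Deserialize(ms); // bytes -> object
            Console.WriteLine(copy.Name);                 // "Fred"
        }
    }
}
```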
Some types (primitives, strings, etc) have inbuilt direct support by the serializer - it writes these directly.
In the case of classes, the type metadata (including assembly name etc) is written, then all of the fields on the type are enumerated (essentially Type.GetFields(), including private etc). For every field (not marked [NonSerialized]), the field name is written, and the value is serialized (through the same process). Eventually, everything boils down to the inbuilt primitives, some type definitions, and some name/value field pairs.
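Very loosely, the field walk looks something like this (a simplified sketch, not the actual formatter code):

```csharp
using System;
using System.Reflection;

[Serializable]
class Customer
{
    private int id;                        // written (private fields are included)
    public string Name;                    // written
    [NonSerialized] private object cache;  // skipped
}

static class FieldWalk
{
    static void DumpFields(object obj)
    {
        const BindingFlags flags =
            BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic;
        foreach (FieldInfo field in obj.GetType().GetFields(flags))
        {
            if (field.IsNotSerialized) continue; // honours [NonSerialized]
            Console.WriteLine("{0} = {1}", field.Name, field.GetValue(obj));
        }
    }
}
```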
An exception here is if the type implements ISerializable - in which case the type is asked to serialize itself to the output. This is common in things like dictionary types, where the in-memory layout of the type can be expressed differently to a stream.
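For example (a trimmed-down sketch of a type taking control of its own serialization):

```csharp
using System;
using System.Runtime.Serialization;

[Serializable]
class Money : ISerializable
{
    private readonly decimal amount;
    private readonly string currency;

    public Money(decimal amount, string currency)
    {
        this.amount = amount;
        this.currency = currency;
    }

    // the type decides what goes into the stream
    public void GetObjectData(SerializationInfo info, StreamingContext context)
    {
        info.AddValue("amount", amount);
        info.AddValue("currency", currency);
    }

    // the special serialization constructor used when deserializing
    protected Money(SerializationInfo info, StreamingContext context)
    {
        amount = info.GetDecimal("amount");
        currency = info.GetString("currency");
    }
}
```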
During deserialization the process is reversed; the type metadata is used to create an empty object (unless the type implements ISerializable, in which case the special serialization constructor is used instead); then the fields are set as they are found in the stream.
In both serialization and deserialization there are "callback" points where you can execute additional code to fix-up objects for (de)serialization.
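Those callbacks are methods taking a StreamingContext, marked with [OnSerializing], [OnSerialized], [OnDeserializing] or [OnDeserialized] - for example (another sketch):

```csharp
using System;
using System.Runtime.Serialization;

[Serializable]
class Order
{
    public DateTime Created;

    [NonSerialized]
    private string displayText; // not written to the stream, so rebuild it afterwards

    [OnDeserialized]
    private void OnDeserialized(StreamingContext context)
    {
        // runs after the fields have been restored from the stream
        displayText = "Order created " + Created.ToShortDateString();
    }
}
```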
This process is brittle, for lots of reasons (see here) - but it is also version-intolerant and implementation-specific (you can't consume it from Java etc).
protobuf-net solves a lot of these problems by being a binary serializer that is contract-based rather than field-based.
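With a contract, the wire format is keyed by member numbers rather than field names or type metadata, for example:

```csharp
using ProtoBuf;

[ProtoContract]
class Customer
{
    [ProtoMember(1)] public int Id { get; set; }
    [ProtoMember(2)] public string Name { get; set; }
}

// Serializer.Serialize(stream, customer);
// var copy = Serializer.Deserialize<Customer>(stream);
```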