Hello,

the Ethernet II frame format does not contain a length field, and I'd like to understand how the end of a frame can be detected without it.

Unfortunately, I have no idea about the physics involved, but the following sounds reasonable to me: we assume that Layer 1 (the Physical Layer) provides us with a way of transmitting raw bits such that it is possible to distinguish between the situation where bits are being sent and the situation where nothing is sent (if digital data were encoded into analog signals via phase modulation, this would be true, for example, but I don't know if that's what is actually done). In this case, an Ethernet card could simply wait until a certain time interval passes in which no more bits are transmitted, and then conclude that the frame transmission has finished.
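To make the idea concrete, here is a purely illustrative sketch of "end of frame = long enough silence". This is not how a real NIC works (PHYs process analog waveforms in hardware), and the threshold `IDLE_SAMPLES` is an arbitrary made-up value:

```python
# Toy receiver: a run of "no signal" samples is treated as the end of a frame.
IDLE_SAMPLES = 12  # hypothetical threshold, not a real standard value

def split_frames(samples):
    """samples: a list of '0'/'1' (line bits) or None (no signal on the wire)."""
    frames, current, idle_run = [], [], 0
    for s in samples:
        if s is None:
            idle_run += 1
            # Enough consecutive idle samples: the current frame is complete.
            if idle_run >= IDLE_SAMPLES and current:
                frames.append("".join(current))
                current = []
        else:
            idle_run = 0
            current.append(s)
    if current:  # trailing frame with no idle gap after it
        frames.append("".join(current))
    return frames
```

For example, `split_frames(["1","0","1"] + [None] * 12 + ["1","1"])` yields `["101", "11"]`: the idle gap separates the two frames.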

Is this really what's happening?

If yes: where can I read about this, and what are common values for the length of that "certain time interval"? And why does IEEE 802.3 have a length field at all, then?

If not: how is it done instead?

Thank you for your help!

Hanno

+1  A: 

Your assumption is right. The length field inside the frame is not needed by Layer 1.

Layer 1 uses other means to detect the end of a frame, which vary depending on the type of physical layer:

  • with 10Base-T, a frame is followed by a TP_IDL waveform; the absence of further Manchester-coded data bits can be detected.
  • with 100Base-T, a frame is terminated by an End of Stream Delimiter, a bit pattern that cannot occur in payload data (because of its 4B/5B encoding).
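The 4B/5B point can be checked directly. Below is the standard 4B/5B data code-group table (each 4-bit nibble maps to a 5-bit code-group) as used by 100Base-TX, together with the /T/ and /R/ control code-groups that form the End of Stream Delimiter. The code itself just verifies the disjointness claim:

```python
# Standard 4B/5B data code-groups: payload nibble -> 5-bit line code.
DATA_CODES = {
    0x0: "11110", 0x1: "01001", 0x2: "10100", 0x3: "10101",
    0x4: "01010", 0x5: "01011", 0x6: "01110", 0x7: "01111",
    0x8: "10010", 0x9: "10011", 0xA: "10110", 0xB: "10111",
    0xC: "11010", 0xD: "11011", 0xE: "11100", 0xF: "11101",
}

# Control code-groups /T/ and /R/ together form the End of Stream Delimiter.
ESD = {"T": "01101", "R": "00111"}

# No payload nibble ever encodes to /T/ or /R/, so a receiver can
# recognize the end of a frame unambiguously, whatever the payload is.
assert set(ESD.values()).isdisjoint(DATA_CODES.values())
```

Because the delimiter code-groups lie outside the data codebook, the receiver never confuses payload bits with the end-of-frame marker, which is why no length field is needed at this level.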

A rough description can be found, for example, here: http://ww1.microchip.com/downloads/en/AppNotes/01120a.pdf ("Ethernet Theory of Operation").

Curd