Hello, I am building an Ethernet application in which I will be sending packets from one side and receiving them on the other side. I want to calculate the delay of the packets at the receiver side, as in RFC 3393. So I have to put a timestamp in the packet at the sender side and then take a timestamp at the receiver side as soon as I receive the packet. Subtracting these two values gives me the difference in timestamps, and subtracting that from the subsequent difference gives me the one-way IPDV delay. The two clocks are not synchronized. Any help is greatly appreciated. Thank you.
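To be concrete, here is roughly the computation I have in mind (just a rough Python sketch with a made-up packet layout and millisecond timestamps, not my actual code):

```python
import time

def make_packet(payload: bytes) -> bytes:
    # Sender side: prepend the current time in milliseconds (hypothetical 8-byte header).
    ts_ms = int(time.time() * 1000)
    return ts_ms.to_bytes(8, "big") + payload

def on_receive(packet: bytes, prev_diff_ms=None):
    # Receiver side: timestamp on arrival, extract the sender timestamp,
    # and take the difference. The IPDV is the change between consecutive differences.
    recv_ms = int(time.time() * 1000)
    sent_ms = int.from_bytes(packet[:8], "big")
    diff_ms = recv_ms - sent_ms  # not a true one-way delay, since the clocks aren't synchronized
    ipdv_ms = None if prev_diff_ms is None else diff_ms - prev_diff_ms
    return diff_ms, ipdv_ms
```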
RFC 3393 is for measuring the variance in the packet delay, not for measuring the delay itself.
To give an example: you're writing a video streaming application. You want to buffer as little video data as possible (so that the video starts playing as soon as possible). Let's say that data always, always, always takes 20ms to get from machine A to machine B. In this case (and assuming that machine A can send the video data at least as fast as it's needed for playback), you don't need any buffer at all. As soon as you receive the first frame, you can start playing, safe in the knowledge that by the time the next frame is needed, it will have arrived (because the data always takes exactly 20ms to arrive and machine A is sending at least as fast as you're playing).
This works no matter how long that 20ms is, as long as it's always the same. It could be 1000ms - the first frame takes 1000ms to arrive, but you can still start playing as soon as it arrives, because the next frame will also take 1000ms and was sent right behind the first frame - in other words, it's already on its way and will be here momentarily. Obviously the real world isn't like this.
Take the other extreme: most of the time, data arrives in 20ms. Except sometimes, when it takes 5000ms. If you keep no buffer and the delay on frames 1 through 50 is 20ms, then you get to play the first 50 frames without a problem. Then frame 51 takes 5000ms to arrive and you're left without any video data for 5000ms. The user goes and visits another site for their cute cat videos. What you really needed was a buffer of 5000ms of data - then you'd have been fine.
Long example, short point: you're not interested in the absolute delay of the packets; you're interested in how much that delay varies - that's how big your buffer has to be.
To measure the absolute delay, you'd need the clocks on both machines to be synchronised. Machine A would send a packet with timestamp 1233784922**72**8, and when that arrived at machine B at time 1233784922**74**8, you'd know the packet had taken 20ms to get there.
But since you're interested in the variance, you need (as RFC 3393 describes) several packets from machine A. Machine A sends packet 1 with timestamp 1233784922**72**8, then 10ms later sends packet 2 with timestamp 1233784922**73**8, then 10ms later sends packet 3 with timestamp 1233784922**74**8.
Machine B receives packet 1 at what it thinks is timestamp 1233784922**12**8. From machine B's perspective, the one-way delay between machine A and machine B is therefore -600ms. This is obviously complete rubbish, but we don't care. Machine B receives packet 2 at what it thinks is timestamp 1233784922**15**8. The one-way delay is -580ms. Machine B receives packet 3 at what it thinks is timestamp 1233784922**16**8. The one-way delay is again -580ms.
As above, we don't care what the absolute delay is - so we don't even care if it's negative, or three hours, or whatever. What we care about is that the amount of delay varied by 20ms. So you need a buffer of 20ms of data.
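To make the whole worked example concrete, here's a small Python sketch using the same made-up millisecond timestamps as above:

```python
# Sender timestamps (machine A's clock) and receiver timestamps (machine B's clock),
# in milliseconds - the same made-up values as in the example above.
sent     = [1233784922728, 1233784922738, 1233784922748]
received = [1233784922128, 1233784922158, 1233784922168]

# "One-way delays" as machine B sees them. The absolute values are rubbish,
# because the clocks aren't synchronised...
delays = [r - s for r, s in zip(received, sent)]
print(delays)                     # [-600, -580, -580]

# ...but the differences between consecutive delays (the IPDV of RFC 3393) are meaningful.
ipdv = [b - a for a, b in zip(delays, delays[1:])]
print(ipdv)                       # [20, 0]

# The buffer you need is governed by how much the delay varies, not by the delay itself.
print(max(delays) - min(delays))  # 20 (ms)
```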
Note that I'm entirely glossing over the issue of clock drift here (that is, the clocks on machines A and B running at slightly different rates, so that, for example, machine A's clock advances by 1.00001 seconds for every second that actually passes). While this does introduce some inaccuracy into the measurements, its practical effect isn't likely to be an issue in most applications.