views: 258

answers: 3

Hello.

I'd like to know if there is a way to measure the time that data spends travelling across the network.

For example, I send a packet from computer A to computers B and C (so the elapsed time might be different for each, depending on distance, etc.), and I want to know the time between sending and receiving for each client (so that I can synchronize data precisely).

Moreover, it is important to know that my client MUST work in asynchronous mode (that's not a problem).

Does somebody know how to do this?

KiTe.

A: 

Unless all your nodes have synchronized clocks, this would be nearly impossible to do. If you do have an accurate sync mechanism in place and can trust that the clocks are the same, then you could just insert a timestamp into the packet when you send it from A, and then in C you compare that to the current time.

But again, you need high res time sync for this approach to work.
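
A minimal sketch of that timestamp approach in C#, assuming the clocks really are in sync (the 8-byte tick prefix is just an assumed message layout for illustration):

    using System;
    using System.Net.Sockets;

    static class OneWayTimestamp
    {
        // Sender (A): prefix the payload with the current UTC time in ticks.
        public static void SendWithTimestamp(NetworkStream stream, byte[] payload)
        {
            byte[] packet = new byte[8 + payload.Length];
            BitConverter.GetBytes(DateTime.UtcNow.Ticks).CopyTo(packet, 0);
            payload.CopyTo(packet, 8);
            stream.Write(packet, 0, packet.Length);
        }

        // Receiver (B or C): compare the embedded timestamp against the local clock.
        // Only meaningful if both machines share a trusted, high-resolution time sync.
        public static TimeSpan OneWayDelay(byte[] receivedPacket)
        {
            long sentTicks = BitConverter.ToInt64(receivedPacket, 0);
            return TimeSpan.FromTicks(DateTime.UtcNow.Ticks - sentTicks);
        }
    }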

What you could do if you just want to benchmark and get an idea of the average time is to make the packet bounce back. Basically, tell C to send the same packet back to B and then to A, and in A you compare the original timestamp with the current time (which will be using the same clock). This gives you the round-trip latency, which you can divide by two to get the one-way latency (a rough sketch follows the list below).

If you are worried about the overhead added by sending messages back, then you could do one (or both) of the following:

  1. Send a much smaller packet back: basically just an integer corresponding to the sequence number of the received message.
  2. Only send an ACK for every Nth message. That should still provide enough data to know the "typical" latency.
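
A rough sketch of the bounce-back measurement, assuming each message carries a sequence number and the peer simply echoes that number back (the PingTracker name and the framing are illustrative, not from any library):

    using System;
    using System.Collections.Concurrent;
    using System.Diagnostics;

    // Remembers when each sequence number was sent and turns the echoed
    // sequence number into a one-way latency estimate (round trip / 2).
    class PingTracker
    {
        private readonly Stopwatch _clock = Stopwatch.StartNew();
        private readonly ConcurrentDictionary<int, TimeSpan> _sentAt =
            new ConcurrentDictionary<int, TimeSpan>();

        // Call right after sending message 'sequence'. If you only ACK every
        // Nth message, only record every Nth sequence number here.
        public void Sent(int sequence) => _sentAt[sequence] = _clock.Elapsed;

        // Call when the echo/ACK for 'sequence' arrives; both timestamps come
        // from the same Stopwatch, so no clock synchronization is needed.
        public TimeSpan? AckReceived(int sequence)
        {
            if (!_sentAt.TryRemove(sequence, out TimeSpan sentAt))
                return null; // unknown or duplicate ACK

            TimeSpan roundTrip = _clock.Elapsed - sentAt;
            return TimeSpan.FromTicks(roundTrip.Ticks / 2); // one-way estimate
        }
    }

Dividing by two assumes the path is roughly symmetric, which is usually good enough for a benchmark-style estimate.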
Isak Savo
The problem is that I need quite a high rate of sending/receiving, so sending the data back will certainly slow down the whole app. I wonder how they do it in games such as FPS or RTS: even if there is a central server that acts as a switch between hosts, there is still a delay (which can reach hundreds of ms).
Kite
But basically, when I send a packet with my socket, shouldn't I receive an ACK (since I use TCP sockets)? Then I could compare the time I got the ACK with the time I sent the data. But with .NET sockets there is no way to do that, since "EndSend" or "Send" returning doesn't mean that the data has actually been received by the peer...
Kite
The ACK I assume you're talking about is down at the transport layer and does not necessarily correspond to the ACK you want. Your packet could be split into multiple lower-level packets, each with its own ACK. The only way to do this is to implement the ACK logic yourself, using a mechanism similar to what I describe in my answer. If you're worried about slowing down the app, then you could design it so that it only ACKs every 10th message or so.
Isak Savo
+1  A: 

Corvil is well-known software aimed specifically at latency analysis.
For your analysis, several different layers are involved, both in software and in hardware, which makes it very complex to implement.
When it comes to synchronizing, it is more important to have a trustworthy key such as a sequence number. Since you use TCP, you also have a big problem when a packet is lost, as this triggers a requeue of several packets.
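
If you do go with application-level sequence numbers, a minimal sketch of a framed message on top of the TCP stream might look like this (the 4-byte sequence / 4-byte length header is just an assumption for illustration):

    using System;
    using System.IO;

    static class Framing
    {
        // Writes: [4-byte sequence number][4-byte payload length][payload]
        public static void WriteMessage(Stream stream, int sequence, byte[] payload)
        {
            stream.Write(BitConverter.GetBytes(sequence), 0, 4);
            stream.Write(BitConverter.GetBytes(payload.Length), 0, 4);
            stream.Write(payload, 0, payload.Length);
        }

        public static (int Sequence, byte[] Payload) ReadMessage(Stream stream)
        {
            byte[] header = ReadExactly(stream, 8);
            int sequence = BitConverter.ToInt32(header, 0);
            int length = BitConverter.ToInt32(header, 4);
            return (sequence, ReadExactly(stream, length));
        }

        // TCP is a byte stream, so a single Read may return a partial message.
        private static byte[] ReadExactly(Stream stream, int count)
        {
            byte[] buffer = new byte[count];
            int offset = 0;
            while (offset < count)
            {
                int read = stream.Read(buffer, offset, count - offset);
                if (read == 0) throw new EndOfStreamException("Connection closed mid-message");
                offset += read;
            }
            return buffer;
        }
    }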

weismat
I'm using TCP, so it shouldn't be necessary to create a sequence number, right? But as I asked just above, is it possible to get the TCP ACK with .NET, to be sure that the peer has received the data? Then it should be easier to do a sync (I just have to store the difference in time, which is updated at each send/ACK received, and so on).
Kite
The application layer just assumes that the data will be received once the send completes. You can guess the time if the client responds, but this is a guess which will also fluctuate during the day.
weismat
+1 for sequence numbers
Jon Seigel
A: 

Clock synchronization:

  • Computer A asks computer B for the time
  • Computer A sets its time to the time that B sent (time of A + travel time)
  • Computer A asks for the time again
  • Computer A subtracts the time received from its current time (the difference is roughly twice the travel time)
  • Computer A sets its new time equal to (time of A + error/2)
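
A minimal sketch of this kind of offset estimation in C#, using the standard "assume half the round trip each way" simplification; queryRemoteTime stands in for whatever request/reply call returns B's clock in UTC ticks:

    using System;
    using System.Diagnostics;

    static class ClockSync
    {
        // Estimates how far B's clock is ahead of A's (negative if behind).
        public static TimeSpan EstimateOffset(Func<long> queryRemoteTime)
        {
            long localBefore = DateTime.UtcNow.Ticks;
            Stopwatch sw = Stopwatch.StartNew();
            long remoteTicks = queryRemoteTime();   // request + reply over the network
            TimeSpan roundTrip = sw.Elapsed;

            // Assume B answered roughly halfway through the round trip, so by the
            // time the reply arrives B's clock reads remoteTicks + roundTrip / 2.
            long localAtReply = localBefore + roundTrip.Ticks;
            long remoteAtReply = remoteTicks + roundTrip.Ticks / 2;
            return TimeSpan.FromTicks(remoteAtReply - localAtReply);
        }
    }

Computer A can then apply the returned offset to its own clock (or repeat the exchange a few times and average) to get the correction described in the steps above.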

Transfer time calculation:

  • A sends its time to B
  • B sends back the difference from its own system clock
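
A small sketch of that exchange under the same assumptions; note that the difference B reports only equals the travel time once the clocks are synchronized, otherwise it also contains the remaining clock offset:

    using System;

    static class TransferTime
    {
        // On A: the timestamp to embed in the outgoing message.
        public static long Now() => DateTime.UtcNow.Ticks;

        // On B: difference between B's own clock and the timestamp A sent.
        public static long DifferenceFromLocalClock(long ticksSentByA)
            => DateTime.UtcNow.Ticks - ticksSentByA;

        // Back on A: with synchronized clocks this is the one-way travel time.
        public static TimeSpan AsTravelTime(long differenceReportedByB)
            => TimeSpan.FromTicks(differenceReportedByB);
    }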
Hasan Khan