The aim of the test is to check the shape of the network response time between two hosts (client and server). Network response time = the round-trip time it takes to send a packet of data and receive it back. I am using the UDP protocol. How could I compute the response time? I could just subtract TimeOfClientRequest - TimeOfClientResponseReceived, but I'm not sure if this is the best approach. I can't do this only from inside the code, and I'm thinking that the OS and computer load might interfere in the measuring process initiated by the client. By the way, I'm using Java.

I would like to listen to your ideas.

+1  A: 

I think the method you mention is fine. OS and computer load might interfere, but their effect would probably be negligible compared to the amount of time it takes to send the packets over the network.

To even things out a bit, you could always send several packets back and forth and average the times out.
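A minimal sketch of that approach in Java, sending several UDP packets and averaging the round-trip times (the loopback echo responder here is only a stand-in for the remote server, and the payload size, sample count, and timeout are arbitrary choices):

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

public class UdpRttProbe {

    /** Sends {@code samples} datagrams to host:port and returns the average round trip in ms. */
    public static double averageRttMillis(InetAddress host, int port, int samples) throws Exception {
        try (DatagramSocket client = new DatagramSocket()) {
            client.setSoTimeout(2000);
            byte[] payload = new byte[64];
            long totalNanos = 0;
            for (int i = 0; i < samples; i++) {
                DatagramPacket request = new DatagramPacket(payload, payload.length, host, port);
                DatagramPacket reply = new DatagramPacket(new byte[512], 512);
                long start = System.nanoTime();   // TimeOfClientRequest
                client.send(request);
                client.receive(reply);            // blocks until the echo comes back
                totalNanos += System.nanoTime() - start;
            }
            return totalNanos / (double) samples / 1_000_000.0;
        }
    }

    public static void main(String[] args) throws Exception {
        // A loopback echo responder stands in for the remote server in this sketch;
        // in a real test the responder would run on the other host.
        DatagramSocket server = new DatagramSocket(0, InetAddress.getLoopbackAddress());
        Thread responder = new Thread(() -> {
            byte[] buf = new byte[512];
            try {
                while (true) {
                    DatagramPacket p = new DatagramPacket(buf, buf.length);
                    server.receive(p);
                    server.send(p); // echo the payload straight back
                }
            } catch (Exception closed) { /* socket closed, responder exits */ }
        });
        responder.setDaemon(true);
        responder.start();
        double avg = averageRttMillis(InetAddress.getLoopbackAddress(), server.getLocalPort(), 10);
        System.out.printf("average RTT over 10 packets: %.3f ms%n", avg);
        server.close();
    }
}
```

Averaging smooths out scheduling jitter on either host, which addresses the OS/load concern from the question.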

Eric Petroelje
A: 

It would be nice if you could send ICMP packets: because they are answered directly by the network layer, the reply would lose no time in user mode on the server.

Sending ICMP packets from Java does not, however, seem to be possible. The closest you can get is:

 boolean status = InetAddress.getByName(host).isReachable(timeOut);

This may send an ICMP echo request (without sufficient privileges the JDK falls back to a TCP probe on port 7), but that is not what you want.
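That said, timing the isReachable() call itself can still serve as a very crude latency probe (a sketch only; the default host and timeout are arbitrary, and since the probe mechanism depends on privileges, treat the number as a rough indication):

```java
import java.net.InetAddress;

public class ReachProbe {
    public static void main(String[] args) throws Exception {
        // Hypothetical target; pass a real host on the command line.
        String host = args.length > 0 ? args[0] : "127.0.0.1";
        int timeoutMs = 2000;
        InetAddress addr = InetAddress.getByName(host);
        long start = System.nanoTime();
        boolean reachable = addr.isReachable(timeoutMs); // ICMP echo or TCP port 7 probe
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println(host + " reachable=" + reachable + " in ~" + elapsedMs + " ms");
    }
}
```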

However, if you start the responder daemon on the server side with a higher priority, you will reduce the effect of server load.

Actually, server load does not play a role as long as it is below 100% CPU.

siddhadev
@siddhadev: I am confused. You say that in Java it's not possible to send an ICMP packet, and then you provide code. Please explain.
You can't send raw IP packets (necessary for ICMP) without root privilege on many OSes, and Java makes no public methods available for creating your own ICMP packets.
Alnitak
+1  A: 

If you have access to the code, then yes, just measure the time between when the request was sent and the receipt of the answer. Bear in mind that the standard timer in Java only has millisecond resolution.
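For interval measurements, System.nanoTime() is usually a better fit than System.currentTimeMillis(): it is monotonic and offers nanosecond granularity (though not necessarily nanosecond accuracy). A small sketch, with a sleep standing in for the network round trip:

```java
public class TimerResolution {
    public static void main(String[] args) throws InterruptedException {
        long start = System.nanoTime();   // monotonic, suitable for measuring intervals
        Thread.sleep(20);                 // stand-in for sending and receiving the packet
        long elapsedNanos = System.nanoTime() - start;
        // Unlike currentTimeMillis(), nanoTime() is not tied to wall-clock time,
        // so system clock adjustments cannot distort the measurement.
        System.out.printf("elapsed: %.3f ms%n", elapsedNanos / 1_000_000.0);
    }
}
```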

Alternatively, use Wireshark to capture the packets on the wire - that software also records the timestamps against packets.

Clearly in both cases the measured time depends on how fast the other end responds to your original request.

If you really just want to measure network latency and control the far end yourself, use something like the echo 7/udp service that many UNIX servers still support (although it's usually disabled to prevent its use in reflected DDoS attacks).
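A client for such an echo service could look roughly like this (a sketch: the host is supplied by the caller, 7/udp is the standard echo port, and the 2-second timeout is an arbitrary choice):

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

public class EchoServiceClient {

    /** Sends one datagram to an echo service and returns the round trip in nanoseconds. */
    public static long measureEchoRtt(InetAddress host, int port, int timeoutMs) throws Exception {
        try (DatagramSocket socket = new DatagramSocket()) {
            socket.setSoTimeout(timeoutMs);
            byte[] payload = "rtt-probe".getBytes(StandardCharsets.US_ASCII);
            DatagramPacket request = new DatagramPacket(payload, payload.length, host, port);
            DatagramPacket reply = new DatagramPacket(new byte[512], 512);
            long start = System.nanoTime();
            socket.send(request);
            socket.receive(reply); // the echo service sends the same bytes back
            return System.nanoTime() - start;
        }
    }

    public static void main(String[] args) throws Exception {
        if (args.length < 1) {
            System.err.println("usage: EchoServiceClient <host> [port]");
            return;
        }
        int port = args.length > 1 ? Integer.parseInt(args[1]) : 7; // echo 7/udp
        long rtt = measureEchoRtt(InetAddress.getByName(args[0]), port, 2000);
        System.out.printf("RTT to %s:%d = %.3f ms%n", args[0], port, rtt / 1_000_000.0);
    }
}
```

Because the echo daemon answers in the kernel's service layer with almost no processing, the measured time is close to pure network latency.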

Alnitak
On Windows XP, currentTimeMillis() only has about 1/60th of a second accuracy.
Peter Lawrey
+1  A: 

Just use ping - RTT (round trip time) is one of the standard things it measures. If the size of the packets you're sending matters, ping also lets you specify the size of the data in each packet.

For example, I just sent 10 packets each with a 1024 byte payload to my gateway displaying only the summary statistics:

 $ ping -c 10 -s 1024 -q 192.168.2.1
 PING 192.168.2.1 (192.168.2.1) 1024(1052) bytes of data.

 --- 192.168.2.1 ping statistics ---
 10 packets transmitted, 10 received, 0% packet loss, time 9004ms
 rtt min/avg/max/mdev = 2.566/4.921/8.411/2.035 ms

The last line, starting with rtt (round trip time), is the info you're probably looking for.

Robert S. Barnes
A: 

Use ping first, but you can also measure the RTT yourself by sending a packet and having the other end send it back.

It is important that you measure when the boxes are under typical load because that will tell you the RTT you can expect to typically get.

You can average the latencies over many packets, millions or even billions, to get a consistent value.

Peter Lawrey