I'm using the Java DatagramSocket class to send a UDP datagram to an endpoint. The datagrams must arrive at the endpoint at 60 ms intervals.

I'm finding that DatagramSocket.send() can often take more than 1 ms (close to 2 ms) to package and send packets no larger than 56 bytes. This causes my packets to be delivered at 62 ms intervals rather than 60 ms intervals.

This is on a Windows Vista machine. Here is how I'm measuring the time:

    DatagramPacket d = new DatagramPacket(out, out.length, address, port);
    long nanoTime = System.nanoTime();
    socket.send(d);
    long diff = System.nanoTime() - nanoTime;
    System.out.println(out.length + " bytes in " + (diff / 1000000f) + " ms.");

Does any one have tips or tricks to speed this process?

A: 

How about sending your packets at 58 millisecond intervals?

No matter how you optimize (and there really aren't many opportunities to do so; using channel-oriented NIO will do the same work), some time will be required to send data, and there is likely to be some variability in it. If precise timing is required, you need a strategy that accounts for the transmission time.

Also, a note about the measurement: don't start measuring the delay until several thousand iterations have been performed. This gives the optimizer a chance to do its work and gives you a more representative timing.
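
As a rough illustration of that warm-up advice (this is only a sketch, not the poster's code; the endpoint address, port, buffer size and iteration counts are placeholders), a measurement loop could discard the first few thousand sends and average only the rest:

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;

    public class SendTiming {
        public static void main(String[] args) throws Exception {
            // Placeholder endpoint and payload; substitute your own.
            InetAddress address = InetAddress.getByName("192.168.1.10");
            int port = 9876;
            byte[] out = new byte[56];

            DatagramSocket socket = new DatagramSocket();
            DatagramPacket d = new DatagramPacket(out, out.length, address, port);

            final int WARMUP = 5000;   // iterations discarded while the JIT warms up
            final int MEASURED = 5000; // iterations that count toward the average
            long total = 0;

            for (int i = 0; i < WARMUP + MEASURED; i++) {
                long start = System.nanoTime();
                socket.send(d);
                long elapsed = System.nanoTime() - start;
                if (i >= WARMUP) {
                    total += elapsed;
                }
            }
            System.out.println("Average send: " + (total / (MEASURED * 1000000F)) + " ms");
            socket.close();
        }
    }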


At one time, the time resolution on Windows was poor. However, 1 millisecond resolution is now common. Try the following test to see how precise your machine is.

  public static void main(String... argv)
    throws InterruptedException
  {
    final int COUNT = 1000;
    long time = System.nanoTime();
    for (int i = 0; i < COUNT; ++i) {
      Thread.sleep(57);
    }
    time = System.nanoTime() - time;
    System.out.println("Average wait: " + (time / (COUNT * 1000000F)) + " ms");
  }

On my Windows XP machine, the average wait time is 57.7 ms.

erickson
That's the obvious answer. The only problem, of course, is that Windows has about a 10 ms granularity in keeping time. If I send 10 ms early, I'm far too early. Otherwise, I'm late.
LPalmer
Actually the accuracy is about 16 ms.
Peter Lawrey
@Peter: I'm not sure where you are getting your numbers. In my environments, I've demonstrated that the error is about +0.7 ms, and the resolution is 1 ms.
erickson
+2  A: 

You're seeing the time taken to copy the data from user space into kernel space. It takes even longer to pass through the UDP, IP and Ethernet layers, and it can take a variable amount of time for a datagram to cross the physical network to its destination.

Assuming you have a network that exhibits no jitter (variance in per-packet transmission time), your process is running at real-time priority, and nothing else is competing with it for the CPU...

You need to call send every 60ms, no matter how long it takes for the send() method to execute. You cannot wait 60ms between calls. You need to measure how long it takes to perform the body of your loop (send() and whatever else) and subtract that from 60ms to get the wait time.
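
A minimal sketch of that pacing loop, assuming a fixed 60 ms period (the endpoint address, port and payload below are placeholders for your own setup):

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;

    public class PacedSender {
        private static final long PERIOD_NANOS = 60L * 1000000L; // 60 ms

        public static void main(String[] args) throws Exception {
            // Placeholder endpoint and payload; substitute your own.
            InetAddress address = InetAddress.getByName("192.168.1.10");
            byte[] out = new byte[56];
            DatagramSocket socket = new DatagramSocket();
            DatagramPacket packet = new DatagramPacket(out, out.length, address, 9876);

            while (true) {
                long start = System.nanoTime();

                socket.send(packet); // the work whose duration must be absorbed

                long elapsed = System.nanoTime() - start;
                long remaining = PERIOD_NANOS - elapsed;
                if (remaining > 0) {
                    // Sleep only for what is left of the 60 ms period.
                    Thread.sleep(remaining / 1000000L, (int) (remaining % 1000000L));
                }
                // If the body overran the period, skip the sleep and send again immediately.
            }
        }
    }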

Nat
Thanks, and yeah, I'm starting the send at 60 ms intervals. But the send itself takes a really long time for such a small amount of data.
LPalmer
If you're starting the send at 60ms intervals -- rather than waiting 60ms between sends -- it doesn't matter how long it takes to perform the send. It will complete at 60ms intervals.
Nat
+1  A: 

Besides the obvious and smart-alecky response of "wait only 59 ms", there isn't a whole lot you can actually do. Any operation you perform is going to take some amount of time, and that time is not likely to be consistent. As such, there is no way to guarantee that your packets will be delivered at precisely 60 ms intervals.

Remember that it takes time to wrap your tiny little 56-byte message in the headers needed for the UDP and IP layers, and still more time to shunt it out to your network card and send it on its way. This adds another 8 bytes for the UDP layer, 20 for the IP layer, and still more for whatever the link layer needs. There is nothing you can do to avoid this.
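
To put rough numbers on that (assuming plain IPv4 over Ethernet, with no IP options or VLAN tags): 56 bytes of payload + 8 bytes of UDP header + 20 bytes of IP header + 18 bytes of Ethernet framing comes to roughly 102 bytes on the wire, which takes on the order of 8 µs to serialize at 100 Mbit/s. The 1-2 ms being measured is therefore almost certainly dominated by the system call and protocol-stack work, not by the time the bits spend on the wire.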

Also, since you are using UDP, there is no way to guarantee that your packets actually arrive, or, if they do, that they arrive in order. TCP can make those guarantees, but neither protocol can guarantee that packets arrive on time. In particular, network congestion may slow down your data en route to the destination, causing it to be late, even compared to the rest of your data. Thus, it is unreasonable to try to use one application to control another remotely at precise intervals. You should consider yourself lucky if your signals actually arrive within 2 ms of when you want them to.

James
+2  A: 

You can use the Timer class to schedule an event.

    Timer timer = new Timer();
    TimerTask task = new TimerTask() {
        public void run() {
            // send packet here
        }
    };
    timer.scheduleAtFixedRate(task, 0, 60);

This will create a recurring event every 60 ms that executes the run method. All things remaining equal, the packet should hit the wire every 60 ms (although the first packet will be delayed by some amount, and garbage collection, other tasks, etc. may add slight delays).

James Van Huis
A: 

If you send the packets out at 60 ms intervals then, theoretically, they should arrive at 60 ms intervals at the destination, but this is not guaranteed. Once the packets hit the link they are at the mercy of the network, which may delay them behind other traffic or even drop them along the routed path.

Is there a reason the packets must be received exactly 60ms apart? If so, there are other protocols that could help you achieve this.

Mr. Will
A: 

Since you are not using Real-Time Java, there is no way to make sure you will always send a packet every 60 ms. I would set up a timer thread that does a 'notify' on two other waiting threads that actually send the packet. You could get by with only one sending thread, but I am sort of anal about having a backup in case there is a problem.
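
A rough sketch of that idea, using a plain Object as the lock the timer thread notifies on (the endpoint, port and payload are placeholders, and only one sending thread is shown; a backup sender would simply be a second thread waiting on the same lock):

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;

    public class NotifyingSender {
        private static final Object TICK = new Object(); // lock the timer thread notifies on

        public static void main(String[] args) throws Exception {
            // Placeholder endpoint and payload; substitute your own.
            InetAddress address = InetAddress.getByName("192.168.1.10");
            byte[] out = new byte[56];
            final DatagramSocket socket = new DatagramSocket();
            final DatagramPacket packet = new DatagramPacket(out, out.length, address, 9876);

            // Sender thread: blocks until notified, then sends one packet.
            Thread sender = new Thread(new Runnable() {
                public void run() {
                    try {
                        while (true) {
                            synchronized (TICK) {
                                TICK.wait();
                            }
                            socket.send(packet);
                        }
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }
            });
            sender.setDaemon(true);
            sender.start();

            // Timer thread (here, the main thread): wakes the sender every 60 ms.
            // A fixed-rate Timer, or the subtraction trick shown earlier, could replace
            // this simple sleep to avoid drift.
            while (true) {
                Thread.sleep(60);
                synchronized (TICK) {
                    TICK.notifyAll(); // notifyAll would also wake a second, backup sender
                }
            }
        }
    }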

Javamann
+1  A: 

Use a Timer, as mentioned by James Van Huis. That way, you will at least get the average frequency correct.

Quote from the javadoc:

If an execution is delayed for any reason (such as garbage collection or other background activity), two or more executions will occur in rapid succession to "catch up." In the long run, the frequency of execution will be exactly the reciprocal of the specified period (assuming the system clock underlying Object.wait(long) is accurate).

Also, to answer your actual, but perhaps slightly misguided, question: reusing a DatagramPacket instance and just setting a new output buffer shaves off a "massive" microsecond on average, on my machine...

    datagram.setData(out);
    socket.send(datagram);

It reduces the load on the GC slightly, so it might be a good idea if you are sending at a high rate.
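
For illustration, here is one way the packet reuse could be combined with the fixed-rate Timer from the earlier answer. This is only a sketch: the endpoint is a placeholder and nextPayload() is a hypothetical helper standing in for whatever produces your 56-byte messages.

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;
    import java.util.Timer;
    import java.util.TimerTask;

    public class ReusingSender {
        public static void main(String[] args) throws Exception {
            // Placeholder endpoint; substitute your own.
            InetAddress address = InetAddress.getByName("192.168.1.10");
            final DatagramSocket socket = new DatagramSocket();
            final DatagramPacket datagram = new DatagramPacket(new byte[56], 56, address, 9876);

            Timer timer = new Timer();
            timer.scheduleAtFixedRate(new TimerTask() {
                public void run() {
                    try {
                        byte[] out = nextPayload();  // hypothetical source of the next 56-byte message
                        datagram.setData(out);       // reuse the same DatagramPacket instance
                        socket.send(datagram);
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }
            }, 0, 60);
        }

        // Hypothetical payload source, standing in for whatever produces your data.
        private static byte[] nextPayload() {
            return new byte[56];
        }
    }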

KarlP
A: 

If you really want to see an uncanny improvement in network performance on the same hardware, then try installing Linux on a dual-boot partition and compare against the speed you got from Windows Vista.

I'd love to know your results; I generally find a significant difference.

crowne