I'm trying to understand some behavior I'm seeing in the context of sending UDP packets.

I have two little Java programs: one transmits UDP packets and the other receives them. I'm running them on my local network, on two computers connected via a single switch.
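For concreteness, the pair looks roughly like this (a minimal sketch only; the port number and class names are illustrative, not my actual code):

    // UdpReceiver.java: bind to a port and report the size of each datagram that arrives
    import java.net.DatagramPacket;
    import java.net.DatagramSocket;

    public class UdpReceiver {
        public static void main(String[] args) throws Exception {
            DatagramSocket socket = new DatagramSocket(9876);  // illustrative port
            byte[] buf = new byte[65535];                      // large enough for any UDP datagram
            while (true) {
                DatagramPacket packet = new DatagramPacket(buf, buf.length);
                socket.receive(packet);                        // blocks until a datagram arrives
                System.out.println("received " + packet.getLength() + " bytes");
            }
        }
    }

    // UdpSender.java: send a single datagram of the size given on the command line
    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;

    public class UdpSender {
        public static void main(String[] args) throws Exception {
            int size = Integer.parseInt(args[1]);              // payload size in bytes
            DatagramSocket socket = new DatagramSocket();
            DatagramPacket packet = new DatagramPacket(
                    new byte[size], size, InetAddress.getByName(args[0]), 9876);
            socket.send(packet);
            socket.close();
        }
    }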

The MTU setting (reported by /sbin/ifconfig) is 1500 on both network adapters.

  • If I send packets with a size < 1500, I receive them. Expected.
  • If I send packets with 1500 < size < 24258, I receive them. Expected. I have confirmed via Wireshark that the IP layer is fragmenting them.
  • If I send packets with size > 24258, they are lost. Not expected. When I run Wireshark on the receiving side, I don't see any of these packets.

I was able to see similar behavior with ping -s.

ping -s 24258 hostA works, but ping -s 24259 hostA fails.

Does anyone understand what may be happening, or have any ideas about what I should be looking for?

Both computers are running CentOS 5 64-bit. I'm using a 1.6 JDK, but I don't really think it's a programming problem; it's a networking or maybe OS problem.

A: 

Losing UDP packets is expected; the protocol provides no delivery or data-integrity guarantees. See the entry on Wikipedia:

UDP uses a simple transmission model without implicit hand-shaking dialogues for providing reliability, ordering, or data integrity. Thus, UDP provides an unreliable service and datagrams may arrive out of order, appear duplicated, or go missing without notice. UDP assumes that error checking and correction is either not necessary or performed in the application, avoiding the overhead of such processing at the network interface level.

So it's completely normal that UDP packets go missing.

Matias Valdenegro
I'm aware that UDP is unreliable. It's not that I'm losing some packets; it's that if the packet size is 24258, I RECEIVE ALL the packets, but if the size is 24259, I LOSE ALL the packets. There must be some other threshold/limitation that I'm not aware of. That's what I want to find out.
wolfcastle
+1  A: 

Implementations of the IP protocol are not required to be capable of handling arbitrarily large packets. In theory, the maximum possible IP packet size is 65,535 octets, but the standard only requires that implementations support at least 576 octets.

It would appear that your host's implementation supports a maximum size much greater than 576, but still significantly smaller than the theoretical maximum of 65,535. (I don't think the switch is the problem, because it shouldn't need to do any reassembly; it isn't even operating at the IP layer.)

The IP standard further recommends that hosts not send packets larger than 576 bytes unless they are certain that the receiving host can handle the larger size. You might consider whether your program really needs to send packets this big; 24,259 bytes seems awfully large to me, and many hosts may not handle packets that large.

Note that these packet size limits are entirely separate from MTU (the maximum frame size supported by the data link layer protocol).

Dan Moulding
I was not aware that implementations could have a smaller maximum packet size. Do you know how to determine what this value is? I agree 24k is a very large size, and I probably won't be sending packets that large in the deployed system; I just ran across this during testing. I have full control over the network in the deployed system (all computers/switches/routers). We're using Gigabit Ethernet, so if we also use jumbo frames, I gather I *should* be able to use a packet size of 9000 (UDP headers + payload) without the IP layer fragmenting.
wolfcastle
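A couple of notes on the jumbo-frame and threshold questions above. With a 9,000-byte MTU, the largest datagram that avoids IP fragmentation is 9,000 minus 20 bytes of IP header minus 8 bytes of UDP header, i.e. 8,972 bytes of UDP payload (assuming no IP options). As for determining the threshold empirically, one rough approach, sketched below with an illustrative port and class name, is to step through increasing datagram sizes and watch where the local stack starts rejecting send(); sizes that send() accepts but that never appear in Wireshark on the receiving side are being dropped somewhere further along the path.

    import java.io.IOException;
    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;

    public class UdpSizeProbe {
        public static void main(String[] args) throws Exception {
            InetAddress host = InetAddress.getByName(args[0]); // receiving host
            int port = 9876;                                   // illustrative port
            DatagramSocket socket = new DatagramSocket();
            // Step through sizes around the observed threshold.
            for (int size = 24000; size <= 25000; size += 50) {
                DatagramPacket packet =
                        new DatagramPacket(new byte[size], size, host, port);
                try {
                    socket.send(packet);
                    System.out.println(size + " bytes: accepted by the local stack");
                } catch (IOException e) {
                    System.out.println(size + " bytes: rejected (" + e.getMessage() + ")");
                }
            }
            socket.close();
        }
    }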
A: 

I found the following which may be of interest:

Dan's answer is useful, but note that after headers you're really limited to 65,507 bytes of UDP payload: the maximum IP packet size of 65,535 bytes minus 20 bytes of IP header and 8 bytes of UDP header.

Kaleb Pederson