I'm using a ServerSocket on my server and Sockets that use ObjectInputStream/ObjectOutputStream to send serializable objects over the network connection. I'm developing what is essentially a more financial version of Monopoly, so packets need to be sent and confirmed as sent/received. Do I need to implement my own packet-loss watcher, or is that already taken care of by (Server)Sockets?

I'm primarily asking about losing packets during network blips and the like, not a full connection failure, e.g. siblings moving a lead plate between my router and my computer's Wi-Fi adapter.

http://code.google.com/p/inequity/source/browse/#svn/trunk/src/network Code can be found under network->ClientController and network->Server
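
(For illustration, here is a minimal sketch of the kind of exchange described above; the Move class, port number, and field names are invented for the example and are not taken from the linked project.)

    import java.io.ObjectInputStream;
    import java.io.ObjectOutputStream;
    import java.io.Serializable;
    import java.net.ServerSocket;
    import java.net.Socket;

    // Hypothetical packet type; any Serializable class can be sent this way.
    class Move implements Serializable {
        int playerId;
        int amount;
    }

    public class EchoDemo {
        public static void main(String[] args) throws Exception {
            try (ServerSocket server = new ServerSocket(4000)) {

                // Client side: connect and send one object.
                new Thread(() -> {
                    try (Socket s = new Socket("localhost", 4000);
                         ObjectOutputStream out =
                                 new ObjectOutputStream(s.getOutputStream())) {
                        Move m = new Move();
                        m.playerId = 1;
                        m.amount = 200;
                        out.writeObject(m); // TCP delivers this reliably, in order
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }).start();

                // Server side: accept the connection and read the object back.
                try (Socket client = server.accept();
                     ObjectInputStream in =
                             new ObjectInputStream(client.getInputStream())) {
                    Move received = (Move) in.readObject();
                    System.out.println("player " + received.playerId
                            + " paid " + received.amount);
                }
            }
        }
    }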

+2  A: 

Theoretically, yes. There is no way to give a 100% theoretical guarantee that what is sent at the hardware layer is received identically at the receiving end.

Practically, however, if you use TCP (Transmission Control Protocol), this has already been taken care of; you won't lose any packets. (If you're using UDP (User Datagram Protocol), on the other hand, it's another story, and it may very well be the case that you're losing packets or receiving them out of order.)
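
(To make the distinction concrete, a short sketch of which standard Java socket classes map to which transport; the ports are arbitrary and the snippet is illustrative only.)

    import java.io.IOException;
    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;
    import java.net.ServerSocket;
    import java.net.Socket;

    public class TransportChoice {
        public static void main(String[] args) throws IOException {
            // TCP: what ServerSocket/Socket give you. Lost segments are
            // retransmitted and bytes arrive in order, or the connection fails.
            try (ServerSocket tcpServer = new ServerSocket(5000);
                 Socket tcpClient = new Socket("localhost", 5000)) {
                // ObjectOutputStream/ObjectInputStream sit on top of these.
            }

            // UDP: DatagramSocket. Each datagram may be lost, duplicated,
            // or reordered; the application would have to handle that itself.
            try (DatagramSocket udp = new DatagramSocket()) {
                byte[] payload = "ping".getBytes();
                udp.send(new DatagramPacket(payload, payload.length,
                        InetAddress.getByName("localhost"), 5001));
            }
        }
    }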

I just looked briefly at your code, and it seems you're using multiple threads. If so, you must be extremely careful with synchronization. It could very well look as if a packet has been dropped when, in fact, it simply hasn't been handled yet due to a race condition in the program. (Keep in mind that the GUI, for instance, runs in its own thread.)

The best way to handle the synchronization, I think, is to reduce the network loop to a very small read-and-put-on-a-synchronized-queue loop, and pick up the received packets from the queue whenever you're sure no other thread will intervene. A sketch of that pattern follows.
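
A minimal sketch of that pattern, using a java.util.concurrent.BlockingQueue (the class and method names here are illustrative, not from the project):

    import java.io.IOException;
    import java.io.ObjectInputStream;
    import java.net.Socket;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    public class PacketReader implements Runnable {
        // Thread-safe handoff point between the network thread and the game thread.
        private final BlockingQueue<Object> inbox = new LinkedBlockingQueue<>();
        private final ObjectInputStream in;

        public PacketReader(Socket socket) throws IOException {
            this.in = new ObjectInputStream(socket.getInputStream());
        }

        // The network loop: do nothing but read and enqueue.
        @Override
        public void run() {
            try {
                while (true) {
                    inbox.put(in.readObject()); // blocks until a packet arrives
                }
            } catch (IOException | ClassNotFoundException | InterruptedException e) {
                // Connection closed or thread interrupted; let the loop end.
            }
        }

        // Called from the game thread when no other thread will intervene.
        public Object nextPacket() throws InterruptedException {
            return inbox.take(); // blocks until a packet is available
        }
    }

The network thread does nothing but read and enqueue; the game thread calls nextPacket() (or a non-blocking poll()) at a point where it knows no other thread is touching the game state.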

aioobe
'Theoretically, yes'? Theoretically, no. He is using ServerSocket, which implies TCP, and ObjectInputStream and ObjectOutputStream, ditto. He cannot possibly be using UDP as described. He cannot be experiencing packet loss. He has a bug in his code.
EJP
I see. ServerSocket implies TCP. I didn't think of that; you're right. Thanks. However, I don't believe there is a *theoretical* guarantee that the data received is identical to the data that was sent. The hardware could, somewhere along the transmission, misinterpret bits. Most bit errors would be caught by parity checks, but some errors could of course slip through unnoticed. I know I'm being overly formal (a bad habit from three years of formal-methods graduate studies).
aioobe
Of course there is a guarantee. See the RFC. It is enforced by checksums.
EJP
I understand what you mean, and I realize that I'm being overly formal, but consider what could theoretically happen if the hardware layer gave no guarantees and could flip bits arbitrarily. Let's say you have an array of data D0 which, when sent on the network, looks like a stream of bits (including checksums, parity bits, and so on); call it BITS0. Now I try to send some other data D1 (translated to BITS1). The hardware layer accidentally flips most of the bits, so the stream is received as BITS0. There would be no way for the receiver to say "hey, that's corrupted, resend please".
aioobe
I haven't been losing packets; I was wondering if I had to account for losing packets. I do have incoming packets (from all clients and their respective connection threads) being pushed onto a server queue and processed by a single thread.
Joel Garboden
Then you're right on track. No need to worry about dropped packets (since you use TCP, and not UDP).
aioobe
Alright, thank you very much! That saves me a *lot* of headache. :)
Joel Garboden