I've written a simple application in Java with two nodes, each holding a ServerSocket open on a port to listen for incoming connections. Each node runs two threads, sending 1000 messages to the other node through a persistent TCP socket that is created when the first message is sent. However, the nodes do not receive all 1000 messages: one may receive 850 while the other only receives 650, and these counts tend to stay constant across multiple runs.
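For reference, each node's listening side is set up roughly like this (simplified; ConnHandler is just a placeholder for the class that runs the read loop shown further down):

ServerSocket srvsock = new ServerSocket(Main.rcvport);
while(running) {
    Socket incoming = srvsock.accept();              // block until the peer connects
    new Thread(new ConnHandler(incoming)).start();   // hand the connection to a reader thread
}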
The sending code is as follows:
public void SendMsg(String dest, Message myMsg) {
    Socket sendsock = null;
    PrintWriter printwr = null;
    try {
        if(printwr == null) {
            sendsock = new Socket(dest, Main.rcvport);
            printwr = new PrintWriter(sendsock.getOutputStream(), true);
        }
        String msgtosend = myMsg.msgtype.toString() + "=" + Main.myaddy + "=" + myMsg.content + "\n";
        printwr.print(msgtosend);
    } catch (UnknownHostException ex) {
        System.out.println(ex);
        //DO: Terminate or restart
    } catch (IOException ex) {
        System.out.println(ex);
        //DO: Terminate or restart
    }
}
Performance seems to improve if I also wrap the writer with buffwr = new BufferedWriter(printwr) and call buffwr.write(...) instead of printwr.print(...), though that doesn't completely eliminate the loss. No exceptions are thrown on the sending side, so as far as the sender is concerned every message was sent successfully.
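Concretely, that variant looks roughly like this inside SendMsg (buffwr is just another local wrapping printwr; everything else stays the same):

if(printwr == null) {
    sendsock = new Socket(dest, Main.rcvport);
    printwr = new PrintWriter(sendsock.getOutputStream(), true);
    buffwr = new BufferedWriter(printwr);            // extra buffering layer on top of the PrintWriter
}
String msgtosend = myMsg.msgtype.toString() + "=" + Main.myaddy + "=" + myMsg.content + "\n";
buffwr.write(msgtosend);                             // instead of printwr.print(msgtosend)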
On the receiving end, the accepted connection is treated as follows:
BufferedReader inbuff = new BufferedReader(new InputStreamReader(incoming.getInputStream()));
while(running) {
    String rcvedln = inbuff.readLine();
    if(rcvedln != null) {
        count++;
        System.out.println(count);
    }
}
Is there a problem with how the readers and writers are being used that could be causing this? Thanks.