Our application is a C server (this problem is with the Windows port of that server) that communicates with a Java client on Windows. In this particular case we are sending data to the client; the message consists of a 7-byte header where the first 3 bytes each have a specific meaning (op type, flags, etc.) and the last 4 bytes contain the size of the rest of the message. For some reason I absolutely can't figure out, the third byte in the header is somehow changing: if I put a breakpoint on the send(), I can see that the third byte is what I'm expecting (0xfe), but when I check in the client, that byte is set to 0. Every other byte is fine. I did some traffic capturing with Wireshark and saw that the byte was already 0 leaving the server, which I find even more baffling. The third byte is set via a define, like so:
#define GET_TOP_FRAME 0xfe
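To make the layout concrete, here is a small illustrative sketch (not part of our code; the protocolByte/op values and the payload length are made up) of what I expect those 7 bytes to look like versus what Wireshark shows leaving the server:

#include <stdio.h>

/* Illustrative only: print a 7-byte header the way I expect it
 * to appear on the wire (offsets 0-6). */
static void dump_header(const unsigned char *hdr)
{
    int i;
    for (i = 0; i < 7; i++)
        printf("%02x ", hdr[i]);
    printf("\n");
}

int main(void)
{
    /* 0x01 and 0x02 stand in for protocolByte and op; 0x0000012c is
     * a 300-byte payload length in network byte order. */
    unsigned char expected[7] = { 0x01, 0x02, 0xfe, 0x00, 0x00, 0x01, 0x2c };
    dump_header(expected);   /* prints:          01 02 fe 00 00 01 2c */
                             /* Wireshark shows: 01 02 00 00 00 01 2c */
    return 0;
}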
Some testing I did that further confuses the issue:
- I changed the value from using the define to first 0x64, then 0xff, then 0xfd: all of them came across to the client intact.
- I changed the value from using the define to the literal 0xfe itself: the value was zero at the client.
- I changed the value of the define itself from 0xfe to 0xef: the value was zero at the client.
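In code, the variants amount to swapping the tbuffer[2] assignment in the snippet below:

tbuffer[2] = 0x64;          /* arrives at the client intact                */
tbuffer[2] = 0xff;          /* arrives at the client intact                */
tbuffer[2] = 0xfd;          /* arrives at the client intact                */
tbuffer[2] = 0xfe;          /* literal instead of the define: arrives as 0 */
tbuffer[2] = GET_TOP_FRAME; /* with the define changed to 0xef: arrives as 0 */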
Nothing about this makes a lick of sense. The code goes through several levels of functions, but here is most of the core code:
int nbytes;               /* message length, converted to network byte order */
static int sendsize = 7;  /* 3 header bytes + 4-byte length */
unsigned char tbuffer[7];

tbuffer[0] = protocolByte;
tbuffer[1] = op;
tbuffer[2] = GET_TOP_FRAME;   /* this is the byte that arrives as 0 */
nbytes = htonl(bytes);
memcpy(tbuffer + 3, &nbytes, JAVA_INT);   /* copy the 4-byte length into bytes 3-6 */
send(fd, tbuffer, sendsize, 0);
where fd is a previously created socket, and protocolByte, op, and bytes are set earlier. It then sends the rest of the message with a very similar send() call immediately after this one. As I mentioned, if I put a breakpoint on that send, tbuffer contains exactly what I expect.
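For what it's worth, a debugging wrapper along these lines (illustrative only; send_header is a made-up name, and the error handling is just the standard Winsock SOCKET_ERROR / WSAGetLastError() pattern) is how the check at the send() boundary could look, dumping the bytes right before the call and checking the return value:

#include <winsock2.h>
#include <stdio.h>

/* Sketch: print the buffer immediately before sending and check the
 * return value of send(), so a changed byte or a short send would show up. */
static int send_header(SOCKET fd, const unsigned char *tbuffer, int sendsize)
{
    int i, sent;

    for (i = 0; i < sendsize; i++)
        fprintf(stderr, "%02x ", tbuffer[i]);
    fprintf(stderr, "\n");

    sent = send(fd, (const char *)tbuffer, sendsize, 0);
    if (sent == SOCKET_ERROR)
        fprintf(stderr, "send failed: %d\n", WSAGetLastError());
    else if (sent != sendsize)
        fprintf(stderr, "short send: %d of %d bytes\n", sent, sendsize);
    return sent;
}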
Anybody have any ideas here? I'm completely stumped; nothing about this makes sense to me. Thanks.