views: 514
answers: 2

Hello,

I'm wondering whether there is a programmatic way to obtain a measure of the full bandwidth used when sending data through a TCP stream. Since I cannot know how the network stack will divide the stream into packets, or when it sends a TCP SYN or ACK or does any of the other things it handles in the background for you, I can only get a rough estimate.

The only solution I can think of is to actually sniff the interface, but I would like to think the stack can already collect these stats for me.

This is running in Java under either Windows or Linux (a portable solution would of course be preferred), but I can wrap a C/C++ answer with JNI, so that (and OS API calls) is a fine answer too. Thank you!

A: 

Well, TCP transmits in fixed-size datagrams whose size is specified by the MTU. If you know your MTU, you can figure out how many datagrams you have to transmit, and TCP follows a standard model for acknowledgment.

Here is a good article that helps figure out the overhead of data transmission, including the overhead of Ethernet and the other layers of the stack.

Kitson
Well, not exactly. Run tcpdump on your ssh connection - are all the segments of the same size?
Nikolai N Fetissov
I should clarify: there is a maximum segment size (MSS) which is determined by the MTU. If the amount of data the application wishes to send exceeds the MSS, the packet becomes "fragmented". But essentially, if you know how much you are sending at a time and the MTU of the network, you can figure out the total overhead needed to transmit your data.
Kitson
Yes ... that would be the *minimum* possible overhead, not the actual one, which depends on how the app uses the network over time. Say I need to transfer 40 bytes one-way every minute. Ignoring connection handshake and tear-down, I'd probably end up with an 80-byte packet (20 for the IPv4 header, 20 for the TCP header without options, 40 for data). That's already 100% overhead, not even counting the ACKs coming back that carry no application data at all.
Nikolai N Fetissov
Nikolai is right on it. I can already get an estimate (the minimum possible overhead as he said), but I'd like to be able to measure the actual overhead incurred by TCP's whole enchilada.
Ismael C
@Kitson The MTU isn't visible with TCP at all; it's a stream socket protocol. You just need to make sure that you have enough transmit buffer. The answer poster is also completely wrong about TCP. It's like buckets of water: the received information fills up a bucket with a hole in it and slowly pours out. If you want X bytes of information when you call recv(), you might get some of one packet and some of several others. I think you should stop spreading misinformation and talking about subjects which are clearly above your head.
Chris Dennett
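
For reference, the minimum-overhead arithmetic discussed in these comments can be sketched in code. This is a rough sketch under the assumptions Nikolai states: plain IPv4 and TCP headers with no options, and ignoring the handshake, tear-down, and returning ACKs. `minWireBytes` is a hypothetical helper, not an existing API, and as the comments point out it gives only a lower bound on the real overhead.

```java
public class TcpOverhead {
    static final int IP_HEADER = 20;   // IPv4 header without options
    static final int TCP_HEADER = 20;  // TCP header without options

    // Minimum bytes on the wire (IP layer and up) needed to carry
    // `payload` bytes, given the MSS implied by the MTU. Ignores
    // handshake, tear-down, ACKs, and link-layer framing.
    static long minWireBytes(long payload, int mtu) {
        int mss = mtu - IP_HEADER - TCP_HEADER;
        long segments = (payload + mss - 1) / mss;  // ceiling division
        return payload + segments * (IP_HEADER + TCP_HEADER);
    }

    public static void main(String[] args) {
        // Nikolai's example: 40 bytes of data fit in one segment,
        // giving an 80-byte packet, i.e. 100% overhead before ACKs.
        System.out.println(minWireBytes(40, 1500));        // 80
        // A bulk transfer amortizes the headers much better.
        System.out.println(minWireBytes(1_000_000, 1500));
    }
}
```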
A: 

If this TCP stream is the only thing going through your interface, you could just query the interface statistics (bytes sent/received), measure the elapsed time yourself, and do the math.

smilingthax
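
On Linux, one way to do that from Java is to read the per-interface byte counters from `/proc/net/dev` and sample them over time. A minimal sketch, assuming the usual `/proc/net/dev` layout (rx bytes in the first counter column, tx bytes in the ninth) and a hypothetical interface name `eth0`; it is not portable to Windows:

```java
import java.nio.file.Files;
import java.nio.file.Paths;

public class IfaceStats {
    // Extract the rx/tx byte counters for one interface from the
    // contents of /proc/net/dev (Linux-specific format).
    static long[] parseBytes(String procNetDev, String iface) {
        for (String line : procNetDev.split("\n")) {
            line = line.trim();
            if (line.startsWith(iface + ":")) {
                String[] f = line.substring(iface.length() + 1).trim().split("\\s+");
                // field 0 = rx bytes, field 8 = tx bytes
                return new long[] { Long.parseLong(f[0]), Long.parseLong(f[8]) };
            }
        }
        throw new IllegalArgumentException("interface not found: " + iface);
    }

    public static void main(String[] args) throws Exception {
        String iface = args.length > 0 ? args[0] : "eth0";  // hypothetical default
        long[] a = parseBytes(
                new String(Files.readAllBytes(Paths.get("/proc/net/dev"))), iface);
        Thread.sleep(1000);  // sample interval
        long[] b = parseBytes(
                new String(Files.readAllBytes(Paths.get("/proc/net/dev"))), iface);
        System.out.printf("rx: %d B/s, tx: %d B/s%n", b[0] - a[0], b[1] - a[1]);
    }
}
```

The counters include all traffic on the interface (headers, ACKs, other connections), which is exactly why this only works cleanly when your TCP stream is the sole user of the link.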