Hi,

I intend to write an application in which I need to measure the network bandwidth along with latency and packet loss rate. One of the constraints is to measure the bandwidth passively, using the application data itself.

From what I have read online and seen in a few existing applications, almost all of them use an active probing technique (generating a flow of probe packets) and derive the bandwidth from the time difference between the arrival of the first and the last probe packet (i.e., bandwidth ≈ total probe bytes divided by that time difference). The main problems with such a technique are that it floods the network with probe packets, takes a long time to run, and does not scale well (since the tool has to run at both ends).

One suggestion was to calculate the RTT of a packet by having the receiver echo it back to the sender, and then bound the bandwidth with the following equation:

    Bandwidth <= (Receive Buffer Size) / RTT

I am not sure how accurate this could be, as the receiver may not always echo the packet back promptly, which would inflate the measured RTT. Relying on ICMP alone may not always work either, since many servers disable it.

My main application runs over a TCP connection, so I am interested in using that TCP connection itself to measure the actual bandwidth delivered over a particular period of time. I would really appreciate it if anybody could suggest a simple, reliable technique (or formula) to measure the bandwidth of a TCP connection.
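To give a sense of scale for the buffer/RTT bound above, here is a quick sanity check with made-up numbers: a 64 KB receive buffer and a 100 ms RTT give 65536 bytes / 0.1 s = 655,360 bytes/s, i.e. roughly 5.2 Mbit/s. This also shows why a slow echo (an inflated RTT) directly drags the estimate down.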
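To make the question concrete, below is a rough sketch of the naive passive approach I would otherwise try (Python; the host, port, and window length are made-up placeholders): count the bytes the application actually receives in each fixed time window and divide by the window length. My concern is that this only measures the achieved goodput of my own traffic, not the bandwidth the path could actually offer, which is part of what I am asking about.

    import socket
    import time

    # Naive passive throughput sketch: count application bytes received
    # over an existing TCP connection in fixed-length windows.
    HOST, PORT = "example.com", 9000   # hypothetical endpoint
    WINDOW = 1.0                       # measurement window, in seconds

    sock = socket.create_connection((HOST, PORT))
    window_start = time.monotonic()
    bytes_in_window = 0

    while True:
        chunk = sock.recv(65536)
        if not chunk:
            break                      # peer closed the connection
        bytes_in_window += len(chunk)

        elapsed = time.monotonic() - window_start
        if elapsed >= WINDOW:
            # Achieved goodput for this window, in Mbit/s.
            mbps = bytes_in_window * 8 / elapsed / 1e6
            print(f"~{mbps:.2f} Mbit/s over the last {elapsed:.1f} s")
            window_start = time.monotonic()
            bytes_in_window = 0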