views:

76

answers:

3

I know a simple URLConnection to Google can detect whether I am connected to the internet; after all, I'm confident the internet itself is all well and fine even if I can't connect to Google. But what I am looking for at this juncture is a library that can measure how effective my connection to the internet is in terms of BOTH responsiveness and available bandwidth. BUT, I do not want to measure how much bandwidth is potentially available, as that is too resource intensive. I really just need to be able to test whether or not I can receive something like X kB in Y amount of time. Does such a library already exist?
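For reference, the kind of check I mean would be something like the rough sketch below (the URL, the 50 kB target and the 5-second limit are just hypothetical placeholders):

    import java.io.IOException;
    import java.io.InputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class ConnectionCheck {

        /** Returns true if at least targetKb kilobytes arrive from the URL within timeoutMillis. */
        public static boolean canReceive(String url, int targetKb, int timeoutMillis)
                throws IOException {
            long deadline = System.currentTimeMillis() + timeoutMillis;
            HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
            conn.setConnectTimeout(timeoutMillis);
            conn.setReadTimeout(timeoutMillis);
            try (InputStream in = conn.getInputStream()) {
                byte[] buffer = new byte[8192];
                long bytesRead = 0;
                long targetBytes = targetKb * 1024L;
                int n;
                while (bytesRead < targetBytes && (n = in.read(buffer)) != -1) {
                    bytesRead += n;
                    if (System.currentTimeMillis() > deadline) {
                        return false;               // ran out of time before reaching the target
                    }
                }
                return bytesRead >= targetBytes;    // false if the stream ended before X kB arrived
            } finally {
                conn.disconnect();
            }
        }

        public static void main(String[] args) throws IOException {
            // Hypothetical target: 50 kB within 5 seconds from an arbitrary well-known host.
            boolean ok = canReceive("http://www.google.com/", 50, 5000);
            System.out.println(ok ? "connection looks usable" : "connection too slow or dead");
        }
    }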

+1  A: 

It's not really possible to be able to judge this. In today's world of ADSL 2+ with 20-odd Mb/s download speeds, you're largely governed by the speed of everything upstream from you. So if you're connecting to a site in another country, for example, the main bottleneck is probably the international link. If you're connected to a site in the same city as you are, then you're probably limited by that server's uplink speed (e.g. they might be 10MB/s and they'll be serving lots of people at once).

So the answer to the question "can I receive X KB in at most Y seconds" depends entirely on where you're downloading from. And therefore, the best way to answer that question is to actually start downloading from wherever it is you're planning to download, and then time it.

In terms of responsiveness, it's basically the same question. You can do an ICMP ping to the server in question, but many servers have firewalls that drop ICMP packets without replying, so it's not exactly accurate (besides, if the ping is much less than ~100ms, then the biggest contribution to latency probably comes from the server's internal processing, not the actual network, meaning an ICMP ping would be useless anyway).
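If you still want a rough probe from plain Java, InetAddress.isReachable() is about the closest thing to a ping, with exactly the caveats above. A minimal sketch, assuming a hypothetical target host:

    import java.io.IOException;
    import java.net.InetAddress;

    public class ReachabilityProbe {
        public static void main(String[] args) throws IOException {
            InetAddress host = InetAddress.getByName("www.example.com"); // hypothetical target
            long start = System.nanoTime();
            // isReachable() uses an ICMP echo when the JVM has the privilege to send one,
            // otherwise it falls back to a TCP connection attempt on the echo port (7),
            // which many hosts block, so treat the result as a rough hint only.
            boolean reachable = host.isReachable(2000);
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            System.out.println("reachable=" + reachable + " in ~" + elapsedMs + " ms");
        }
    }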

This is true of network characteristics in general, and of the internet in particular (because it's so complex): you can't measure site X and reliably infer anything about site Y from it. If you want to know how fast site Y will respond, you just have to connect to site Y and start downloading.

Dean Harding
Erm, there are areas in the world where you'd be very happy if you can afford a stable 1Mbit connection ;)
BalusC
I would simply be content with opening an InputStream, ensuring that X kB were read within Y time, and then reporting either success or failure. To me this proves that the internet connection I am on is reliable, as sometimes some ISPs have these little episodes where they are connected to the internet (TCP connections open up) but practically no, or VERY little, data comes over the pipe. That is ALL I need to check for, very basic. Although a good library would repeat this test against reliable servers on different ISP backbones (e.g. level3.net, some other backbone, etc.) and average the results (see the sketch after these comments).
Zombies
@BalusC: As an Australian, I certainly understand what you mean there! I'm just saying that you can't do it *in general* because there's a significant number of people for which the upstream characteristics far outweigh the "theoretical" maximum of the local link. @Zombies: Ah, I understand now. I don't know of such a library, but it shouldn't be hard to do it yourself.
Dean Harding
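A rough sketch of the repeat-and-average check Zombies describes above; the host list, 50 kB sample size and 5-second timeout are assumptions, not recommendations:

    import java.io.IOException;
    import java.io.InputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class ThroughputSampler {

        /** Measures kB/s while reading up to targetKb from one URL; returns -1 on any failure. */
        static double sampleKbPerSecond(String url, int targetKb, int timeoutMillis) {
            try {
                HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
                conn.setConnectTimeout(timeoutMillis);
                conn.setReadTimeout(timeoutMillis);
                long start = System.nanoTime();
                try (InputStream in = conn.getInputStream()) {
                    byte[] buf = new byte[8192];
                    long read = 0, target = targetKb * 1024L;
                    int n;
                    while (read < target && (n = in.read(buf)) != -1) {
                        read += n;
                    }
                    double seconds = (System.nanoTime() - start) / 1_000_000_000.0;
                    return (read / 1024.0) / seconds;
                } finally {
                    conn.disconnect();
                }
            } catch (IOException e) {
                return -1;   // treat any failure as "no usable bandwidth" from this host
            }
        }

        public static void main(String[] args) {
            // Hypothetical "reliable" hosts; in practice pick servers on different backbones.
            String[] hosts = { "http://www.google.com/", "http://www.example.com/" };
            double total = 0;
            int successes = 0;
            for (String host : hosts) {
                double kbps = sampleKbPerSecond(host, 50, 5000);
                if (kbps >= 0) {
                    total += kbps;
                    successes++;
                }
            }
            System.out.println(successes == 0
                    ? "no host reachable"
                    : "average throughput ~" + (total / successes) + " kB/s");
        }
    }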
A: 

Calculating the user's ability to reliably download a given number of bits in a given period of time might be complex -- but you could start with some of the code found at http://commons.apache.org/net/. That can tell you latency and bandwidth, anyway.
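For example, one of the facilities in Commons Net that reports latency is its NTP client, which gives you the round-trip delay of a time query. A minimal sketch, assuming the commons-net NTPUDPClient/TimeInfo classes and a public NTP pool host chosen for illustration:

    import java.io.IOException;
    import java.net.InetAddress;

    import org.apache.commons.net.ntp.NTPUDPClient;
    import org.apache.commons.net.ntp.TimeInfo;

    public class NtpLatency {
        public static void main(String[] args) throws IOException {
            NTPUDPClient client = new NTPUDPClient();
            client.setDefaultTimeout(3000);
            client.open();
            try {
                // Query a public NTP pool host and compute the round-trip delay
                // of the request as a rough latency figure.
                TimeInfo info = client.getTime(InetAddress.getByName("pool.ntp.org"));
                info.computeDetails();
                System.out.println("round-trip delay: " + info.getDelay() + " ms");
            } finally {
                client.close();
            }
        }
    }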

Jim Kiley
A: 

The answer may be wrong a millisecond (substitute any other period) after you've measured it.

Look at any application that gives you a "download time remaining" figure. Notice that it's generally incorrect and/or continually updating, and only becomes accurate at the last second.

Basically, so much change is inevitable over any reasonably complex network, such as the internet, that the only real measure is the one you get after the fact.

Damien_The_Unbeliever