Following this thread: http://stackoverflow.com/questions/55709/streaming-large-files-in-a-java-servlet

Is it possible to find the total internet bandwidth available on the current machine through Java?

What I am trying to do is this: while streaming large files through a servlet, reduce the BUFFER_SIZE of the stream for each request based on the number of parallel requests and the total bandwidth. Does that make sense?

Is there a pure Java way (without JNI)?

A: 

The only way to find available bandwidth is to monitor and measure it. On Windows you have access to Net.exe and can get the throughput on each NIC.
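
For example (and note this is not pure Java), you could shell out to netstat and sample the interface byte counters twice. This sketch assumes the English-locale output of netstat -e on Windows, where the totals appear on a line starting with "Bytes"; adjust the parsing for your system:

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

public class WindowsThroughput {

    // Reads the received + sent byte counters from the "Bytes" line
    // of "netstat -e" (English-locale Windows output assumed).
    private static long totalBytes() throws IOException {
        Process p = new ProcessBuilder("netstat", "-e").start();
        try (BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
            String line;
            while ((line = r.readLine()) != null) {
                String[] parts = line.trim().split("\\s+");
                if (parts.length == 3 && parts[0].equals("Bytes")) {
                    return Long.parseLong(parts[1]) + Long.parseLong(parts[2]);
                }
            }
        }
        throw new IOException("no 'Bytes' line in netstat -e output");
    }

    public static void main(String[] args) throws Exception {
        long before = totalBytes();
        Thread.sleep(1000);                       // sample window: one second
        long after = totalBytes();
        System.out.println("throughput: " + (after - before) + " bytes/s");
    }
}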

Tommy
Please answer based on a pure Java way
Madhu
**THIS** is a correct answer. There is no pure Java way!
Stephen C
A: 

If you're serving the content through a servlet, then you could calculate how fast each servlet output stream is going. Collect that data for all streams for a user/session, and you could determine at least what the current bandwidth usage is.

A possible way to calculate the rate: instead of writing the large files directly to the servlet output stream, wrap it in a FilterOutputStream that keeps track of your download rates.
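
A minimal sketch of such a wrapper (the class name and the rate accounting here are illustrative, not a standard API):

import java.io.FilterOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Wraps the servlet output stream and records how fast bytes are
// actually being pushed to this particular client.
public class MeteredOutputStream extends FilterOutputStream {

    private final long startMillis = System.currentTimeMillis();
    private long bytesWritten;

    public MeteredOutputStream(OutputStream out) {
        super(out);
    }

    @Override
    public void write(int b) throws IOException {
        out.write(b);
        bytesWritten++;
    }

    @Override
    public void write(byte[] b, int off, int len) throws IOException {
        // Delegate directly; FilterOutputStream's default writes byte-by-byte.
        out.write(b, off, len);
        bytesWritten += len;
    }

    // Average rate since the stream was opened, in bytes per second.
    public double bytesPerSecond() {
        long elapsed = System.currentTimeMillis() - startMillis;
        return elapsed == 0 ? 0 : bytesWritten * 1000.0 / elapsed;
    }
}

Summing bytesPerSecond() across all open streams for a user or session gives a rough picture of that user's current usage.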

TJ
A: 

The concept of "total internet bandwidth available on the current machine" is really hard to define. More to the point, tweaking the local buffer size will not affect how much data you can push through to an individual client.

The rate at which a given client can take data from your server will vary with the client, and with time. For any given connection, you might be limited by your local upstream connection to the Internet (e.g., server on DSL) or you might be limited somewhere in the core (unlikely) or the remote end (e.g., server in a data center, client on a dialup line). When you have many connections, each individual connection may have a different bottleneck. Measuring this available bandwidth is a hard problem; see for example this list of research and tools on the subject.

In general, TCP will handle using all the available bandwidth fairly for any given connection (though it may sometimes react to changes in available bandwidth more slowly than you would like). If the client can't handle more data, the write call will block.

You should only need to tweak the buffer size in the linked question if you find that you are seeing low bandwidth and the cause is insufficient data buffered for writing to the network. Another reason to tweak the buffer size is if you have so many active connections that you are running low on memory.

In any case, the real answer may be to not buffer at all but instead put your static files on a separate server and use something like thttpd to serve them (using a system call like sendfile) instead of a servlet. This helps ensure that the bottleneck is not on your server, but somewhere out in the Internet, beyond your control.
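
If you do stay in Java, the closest standard-library analogue to sendfile is FileChannel.transferTo, which hands the copy loop to the JVM and OS (true zero-copy applies when the destination is a socket channel; wrapping a servlet stream still saves you the manual buffer management). A rough sketch, with the surrounding servlet wiring assumed:

import java.io.IOException;
import java.io.OutputStream;
import java.nio.channels.Channels;
import java.nio.channels.FileChannel;
import java.nio.channels.WritableByteChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public final class FileSender {
    // Streams a file to the client without managing byte[] buffers ourselves;
    // transferTo delegates the copying to the JVM/OS.
    public static void sendFile(Path file, OutputStream clientOut) throws IOException {
        try (FileChannel source = FileChannel.open(file, StandardOpenOption.READ)) {
            WritableByteChannel sink = Channels.newChannel(clientOut);
            long position = 0;
            long size = source.size();
            while (position < size) {
                // transferTo may move fewer bytes than requested, so loop.
                position += source.transferTo(position, size - position, sink);
            }
        }
    }
}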

Emil
A: 

EDIT: Re-reading this, it's a little muddled because it's late here. Basically, you shouldn't have to do this from scratch; use one of the existing highly scalable Java servers, since they'll do it better and more easily.

You're not going to like this, but it actually doesn't make sense, and here's why:

  • Total bandwidth is independent of the number of connections (though there is some small overhead), so messing with buffer sizes won't help much
  • Your chunks of data are being broken into variable-sized packets anyway. Your network card and protocol will deal with this better than your servlet can
  • Resizing buffers regularly is expensive -- far better to re-use constant buffers from a fixed-size pool and have all connections queue up for I/O rights
  • There are a billion and a half libraries that assist with this sort of server

Were this me, I would start looking at multiplexed I/O using NIO. You can almost certainly find a library to do this for you. The IBM article here may be a useful starting point.

I think the smart money gives you one network I/O thread, and one disk I/O thread, with multiplexing. Each connection requests a buffer from a pool, fills it with data (from a shared network or disk Stream or Channel), processes it, then returns the buffer to the pool for re-use. No re-sizing of buffers, just a bit of a wait for each chunk of data. If you want latency to stay short, then limit how many transfers can be active at a time, and queue up the others.
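
A bare-bones sketch of that shape (the names are mine, the separate disk-I/O thread is folded into the selector loop for brevity, and a real server would handle backpressure and errors more carefully):

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Iterator;

public class PooledFileServer {
    private static final int BUFFER_SIZE = 4 * 1024;   // fixed size, never resized
    private final Deque<ByteBuffer> pool = new ArrayDeque<>();

    // Per-connection state: its pooled buffer and read position in the file.
    private static final class Conn {
        ByteBuffer buf;
        long position;
    }

    public void serve(int port, String file) throws IOException {
        for (int i = 0; i < 64; i++) pool.add(ByteBuffer.allocate(BUFFER_SIZE));
        FileChannel source = FileChannel.open(Paths.get(file), StandardOpenOption.READ);
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(port));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        while (true) {
            selector.select();
            for (Iterator<SelectionKey> it = selector.selectedKeys().iterator(); it.hasNext(); ) {
                SelectionKey key = it.next();
                it.remove();
                if (key.isAcceptable()) {
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    Conn conn = new Conn();
                    conn.buf = pool.poll();
                    if (conn.buf == null) { client.close(); continue; } // pool empty: refuse (or queue)
                    conn.buf.limit(0);   // empty buffer forces a refill on the first write
                    client.register(selector, SelectionKey.OP_WRITE, conn);
                } else if (key.isWritable()) {
                    SocketChannel client = (SocketChannel) key.channel();
                    Conn conn = (Conn) key.attachment();
                    if (!conn.buf.hasRemaining()) {          // refill from the file
                        conn.buf.clear();
                        int read = source.read(conn.buf, conn.position);
                        if (read == -1) {                    // EOF: recycle buffer, close
                            pool.add(conn.buf);
                            client.close();
                            continue;
                        }
                        conn.position += read;
                        conn.buf.flip();
                    }
                    client.write(conn.buf);  // writes only what the socket will take right now
                }
            }
        }
    }

    public static void main(String[] args) throws IOException {
        new PooledFileServer().serve(8080, args[0]);
    }
}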

BobMcGee
+4  A: 

Maybe you can time how long the app needs to send one packet (the buffer), and if that takes longer than x milliseconds, make your buffer smaller. You can use other values for the initial bufferSize and for the threshold in if (stop - start > 700).

I'm only 14 years old. Please don't kill me if it's not what you want.

This is based on the thread you linked:

ServletOutputStream out = response.getOutputStream();
InputStream in = [ code to get source input stream ];
String mimeType = [ code to get mimetype of data to be served ];
int bufferSize = 1024 * 4;
byte[] bytes = new byte[bufferSize];
int bytesRead;

response.setContentType(mimeType);

try {
    while ((bytesRead = in.read(bytes)) != -1) {
        long start = System.currentTimeMillis();
        out.write(bytes, 0, bytesRead);
        long stop = System.currentTimeMillis();
        // Halve the buffer if the write was slow, but keep a floor of 1 KB:
        // a zero-length buffer would make read() return 0 and loop forever.
        if (stop - start > 700 && bufferSize > 1024)
        {
            bufferSize /= 2;
            bytes = new byte[bufferSize];
        }
    }
} finally {
    in.close();
    out.close();
}
Martijn Courteaux
Note to the viewers: though the system automatically selected this answer as "accepted", it is not the right answer.
Madhu
Thanks for the accepted answer!
Martijn Courteaux