There's a common pattern where each layer of an application that deals with data from a stream wraps it in a BufferedInputStream, so that as a whole there are many buffers, filled from buffers, filled from buffers, and so on.

I think this is bad practice and want to ask: how does it impact performance? Can it cause bugs?
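
Roughly the pattern I mean, as a hypothetical sketch (the file name and layering are invented for illustration):

    import java.io.BufferedInputStream;
    import java.io.FileInputStream;
    import java.io.IOException;
    import java.io.InputStream;

    // Hypothetical sketch: each layer wraps the stream it receives in its own buffer.
    public class NestedBuffers {
        public static void main(String[] args) throws IOException {
            InputStream raw = new FileInputStream("data.bin");     // underlying stream
            InputStream io  = new BufferedInputStream(raw);        // I/O layer adds a buffer
            InputStream fw  = new BufferedInputStream(io);         // framework layer adds another
            InputStream app = new BufferedInputStream(fw);         // application layer adds a third
            app.read();   // a single-byte read can pull up to 8 KB into each of the three buffers
            app.close();  // closes the whole chain
        }
    }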

+3  A: 
David
+1  A: 

It will increase the memory footprint due to the extra buffers, but given the sizes likely involved I suspect it is rare for this to have a significant effect on a program. There's the standard rule of not trying to optimise before you need to.

There's also bound to be a slight processor overhead, but this will be even less significant.

It all depends on just how much it is used; if there are many large chains it could be a problem, but I think that's unlikely.

As David said, it is likely an indication of poor design. It would probably be more efficient for components to share more complex objects directly, but it all comes down to the specific uses (and I'm having trouble thinking of a reason you would use multiple buffered streams in such a way).

Fish
+1  A: 

It is indeed very bad practice and can indeed cause bugs. If method A does some reading and then passes the stream to method B, which attaches a BufferedInputStream and does some more reading, the BufferedInputStream will fill its buffer, and in doing so may consume data that method A expects to still be there when method B returns. Data can be lost through method B's BufferedInputStream reading ahead.
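
A minimal sketch of that failure mode (class and method names invented for illustration): method B's BufferedInputStream reads ahead into its own buffer, and the bytes it swallowed are gone by the time A reads again.

    import java.io.BufferedInputStream;
    import java.io.ByteArrayInputStream;
    import java.io.IOException;
    import java.io.InputStream;

    public class ReadAheadLoss {

        // B wraps the shared stream in its own buffer and reads a single byte.
        static void methodB(InputStream shared) throws IOException {
            BufferedInputStream buffered = new BufferedInputStream(shared);
            buffered.read();   // fills the internal buffer (up to 8 KB) from the shared stream
            // B returns; whatever sits in its buffer is simply discarded.
        }

        public static void main(String[] args) throws IOException {
            InputStream shared = new ByteArrayInputStream(new byte[8192]);

            shared.read();       // method A reads one byte directly
            methodB(shared);     // method B's buffer consumes most of the rest
            System.out.println("bytes still visible to A: " + shared.available());
            // Prints 0 instead of 8190: the bytes pulled into B's buffer are lost to A.
        }
    }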

As regards overheads, in practice, if the reads/writes are large enough, the intermediate buffers are bypassed anyway, so there isn't nearly as much extra copying as you might think: the performance impact is mostly the extra memory space plus the extra method calls.
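
For reference, a small sketch of what "bypassed" means here, assuming the OpenJDK BufferedInputStream behaviour where a read request at least as large as the internal buffer (and with no mark set) is handed straight to the underlying stream:

    import java.io.BufferedInputStream;
    import java.io.ByteArrayInputStream;
    import java.io.IOException;

    public class LargeReadBypass {
        public static void main(String[] args) throws IOException {
            BufferedInputStream in = new BufferedInputStream(
                    new ByteArrayInputStream(new byte[64 * 1024]), 8192);

            byte[] chunk = new byte[16 * 1024];
            // 16 KB >= the 8 KB internal buffer, so (with no mark set) this read is
            // forwarded directly to the underlying stream rather than being copied
            // through the intermediate buffer.
            int n = in.read(chunk, 0, chunk.length);
            System.out.println("read " + n + " bytes");
            in.close();
        }
    }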

EJP