I am writing an OutputStream and just noticed this in the OutputStream interface:
public abstract void write(int b) throws IOException;
This call writes one byte to the stream, so why does it take an int as an argument?
According to the javadoc for OutputStream, the 24 high-order bits are ignored by this method. I think the method exists for compatibility reasons: you don't need to convert to a byte first and can simply pass an integer.
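A quick sketch of that point (the class name is just made up for the demo): only the low 8 bits of whatever int you pass actually end up in the stream.

import java.io.ByteArrayOutputStream;

public class LowByteDemo {
    public static void main(String[] args) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();

        // write(int) keeps only the low 8 bits; the 24 high-order bits are ignored
        out.write(0x1FF);   // 0x1FF & 0xFF == 0xFF, so a single 0xFF byte is written

        byte written = out.toByteArray()[0];
        System.out.println(written);          // prints -1 (0xFF seen as a signed byte)
        System.out.println(written & 0xFF);   // prints 255
    }
}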
regards
So you can signal EOF:
"Notice that read() returns an int value. If the input is a stream of bytes, why doesn't read() return a byte value? Using a int as a return type allows read() to use -1 to indicate that it has reached the end of the stream."
http://java.sun.com/docs/books/tutorial/essential/io/bytestreams.html
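For example (class name made up, and reading from an in-memory array just to keep the sketch self-contained), a typical read loop relies on that -1 sentinel:

import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class EofDemo {
    public static void main(String[] args) throws IOException {
        InputStream in = new ByteArrayInputStream(new byte[] { 10, 20, 30 });

        int b;
        // read() returns 0..255 for real data and -1 only at end of stream,
        // which is why its return type has to be wider than byte
        while ((b = in.read()) != -1) {
            System.out.println(b);
        }
    }
}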
The Java IOStream classes have been part of Java since 1.0, and they only deal with 8-bit data. My guess is that the interface was designed like this so that the one write(int b) method could be called with int, short, byte, and char values, since these are all promoted to an int. In fact, since most JVMs run on 32-bit machines, the int primitive is the most efficient type to deal with; the JVM is free to store types such as byte in 32 bits anyway. Interestingly, a byte[] really is stored as a sequence of 8-bit bytes, which makes sense since an array could be quite large. For single primitive values such as int or byte, however, the space occupied at runtime doesn't really matter as long as the behavior is consistent with the spec.
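To illustrate the promotion point (class name made up for the sketch): byte, short, and char arguments all widen to int, so the single write(int) method covers them without any casts.

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

public class PromotionDemo {
    public static void main(String[] args) throws IOException {
        OutputStream out = new ByteArrayOutputStream();

        byte b = 65;
        short s = 66;
        char c = 'C';

        // each argument is promoted to int, so the one write(int) overload handles them all
        out.write(b);
        out.write(s);
        out.write(c);
        out.write(68);   // a plain int literal works too
    }
}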
More background:
http://www.java-samples.com/showtutorial.php?tutorialid=260
The assumption in the IOStream classes is that the caller only really cares about the lowest 8 bits of data, even when passing in an int. This is fine as long as the caller knows it is really dealing with bytes, but it becomes a problem when the underlying data is really text in some other character encoding, such as multi-byte Unicode. This is why the Reader and Writer classes were introduced back in Java 1.1. If you care about performance, the IOStream classes are faster, but the Reader classes are more portable for text data.
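A small sketch of the Reader/Writer side (the class name and the UTF-8 choice are just for the example): the Writer does the character-to-byte conversion for you, so text containing multi-byte characters comes out correctly.

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.io.Writer;

public class WriterDemo {
    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();

        // the Writer converts characters to bytes using the named encoding
        Writer writer = new OutputStreamWriter(bytes, "UTF-8");
        writer.write("héllo");   // 'é' becomes two bytes in UTF-8
        writer.close();

        System.out.println(bytes.size());   // 6 bytes for 5 characters
    }
}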
Actually I've been working with bytes a bit lately and they can be annoying. They up-convert to ints at the slightest provocation, and there is no literal suffix to turn a number into a byte -- for instance, 8L will give you a long value of 8, but for a byte you have to say (byte) 8.
On top of that, they will (pretty much) always be stored internally as ints unless you are using an array (and maybe even then... not sure).
I think the assumption is that the only reason to use a byte is I/O, where you actually need 8 bits, but internally they expect you to always use ints.
By the way, a byte can perform worse since it always has to be masked...
At least I remember reading that years ago, could have changed by now.
As an example answer for your specific question, if a function (f) took a byte, and you had two bytes (b1 and b2), then:
f(b1 & b2)
wouldn't work, because b1 & b2 would be promoted to an int, and the int can't be converted back down to a byte automatically (possible loss of precision). So you would have to code:
f( (byte)(b1 & b2) )
Which would get irritating.
And don't bother asking WHY b1 & b2 up-converts--I've been cussing at that a bit lately myself!
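Here is that situation as a small compilable sketch (f, b1, and b2 are just the hypothetical names from the post above):

public class ByteMathDemo {
    // a hypothetical method that only accepts a byte
    static void f(byte value) {
        System.out.println(value);
    }

    public static void main(String[] args) {
        byte b1 = 0x0F;
        byte b2 = 0x33;

        // f(b1 & b2);          // does not compile: b1 & b2 is promoted to int
        f((byte) (b1 & b2));    // the explicit cast back down to byte is required
    }
}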
Maybe it's because bytes are signed by default, and files store bytes as unsigned values. That is why read() returns an int: to give you 255 instead of -1 for $FF. Same with write(int): you cannot store $FF as 255 in a byte.
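A tiny sketch of that signed/unsigned mismatch (class name made up): the bit pattern $FF prints as -1 when held in a byte, and masking with 0xFF is what recovers the 255 that read() hands back as an int.

public class SignedByteDemo {
    public static void main(String[] args) {
        byte b = (byte) 0xFF;   // the bit pattern $FF is -1 as a signed Java byte

        System.out.println(b);          // -1
        System.out.println(b & 0xFF);   // 255 -- masking recovers the unsigned value
    }
}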