views:

360

answers:

5

Hi,

Maybe this isn't a good or relevant question, so please don't kill me.

Java's default encoding is ASCII. Yes? (See my edit)
When a text file is encoded in UTF-8, how does a Reader know that it has to use UTF-8?

The Readers I talk about are:

  • FileReaders
  • BufferedReaders from Sockets
  • A Scanner from System.in
  • ...

EDIT:

So the encoding depends on the OS. Does that mean that this is not true on every OS:

'a' == 97

I hope this question is relevant.
Martijn

+8  A: 

Java's default encoding depends on your OS. For Windows, it's normally "windows-1252", for Unix it's typically "ISO-8859-1" or "UTF-8".

A reader knows the correct encoding because you tell it the correct encoding. Unfortunately, not all readers let you do this (for example, FileReader doesn't), so often you have to use an InputStreamReader.
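For example (a sketch; it decodes from an in-memory byte array just to stay self-contained, but the same constructor takes a `FileInputStream`):

```java
import java.io.*;

public class ExplicitEncodingDemo {
    public static void main(String[] args) throws IOException {
        byte[] utf8Bytes = "h\u00E9llo".getBytes("UTF-8");

        // An InputStreamReader lets you state the encoding explicitly;
        // FileReader offers no such constructor and always uses the default.
        Reader reader = new InputStreamReader(new ByteArrayInputStream(utf8Bytes), "UTF-8");
        StringBuilder sb = new StringBuilder();
        int c;
        while ((c = reader.read()) != -1) {
            sb.append((char) c);
        }
        reader.close();
        System.out.println(sb); // héllo
    }
}
```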

kdgregory
FYI: the encoding on Windows depends on the OS language. http://msdn.microsoft.com/en-us/library/dd317756%28VS.85%29.aspx But essentially, you are correct - the default encoding is the system encoding - use it at your peril.
McDowell
The system property for default encoding is `file.encoding` and it normally depends on your locale settings (language in Win, as said above, and the `LC_*` environment variables on *nix).
gustafc
@gustafc - thanks for the info; I looked at System.getProperties() and didn't see this listed among the standard properties, so edited it out of my response. I still think it's a bad idea to rely on the default being properly set.
kdgregory
Sadly, FileReader is a convenience class for reading characters in the default encoding from a file, and nothing more. You would THINK it would allow you to select the character set as well, but it doesn't. In fact, you'll notice that the constructor signatures are identical to `FileInputStream`'s constructor signatures.
R. Bemrose
@kdgregory - you are correct about `file.encoding` not being a standard property; it is an implementation detail that developers should not set or read directly: http://bugs.sun.com/view_bug.do?bug_id=4163515
McDowell
A: 

You can start getting the idea here: java Charset API

Note that according to the doc,

The native character encoding of the Java programming language is UTF-16

EDIT :

Sorry, I got called away before I could finish this; maybe I shouldn't have posted the partial answer as it was. Anyway, the other answers explain the details. The point is that the native file charset of each platform, together with common alternate charsets, will be read correctly by Java.
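That "native" UTF-16 encoding becomes visible whenever a character falls outside the Basic Multilingual Plane; it then occupies two `char` values (a surrogate pair):

```java
public class Utf16Demo {
    public static void main(String[] args) {
        // U+1D11E MUSICAL SYMBOL G CLEF, written as its UTF-16 surrogate pair
        String clef = "\uD834\uDD1E";
        System.out.println(clef.length());                         // 2 code units
        System.out.println(clef.codePointCount(0, clef.length())); // 1 character
    }
}
```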

Steve De Caux
While technically correct, this is irrelevant... the native encoding is **only used within Java**. When file I/O is done, the Reader classes use the platform default encoding unless you specify one by using the constructors for an InputStreamReader, which support encoding/charset arguments.
BobMcGee
+5  A: 

For most readers, Java uses whatever encoding and character set your platform does -- this may be some flavor of ASCII or UTF-8, or something more exotic like JIS (in Japan). Characters in this set are then converted to UTF-16, which Java uses internally.

There's a work-around if the platform encoding differs from the file's encoding (my problem -- UTF-8 files are standard, but my platform uses Windows-1252 encoding): create an InputStreamReader using the constructor that specifies the encoding.

Edit: do this like so:

InputStreamReader myReader = new InputStreamReader(new FileInputStream(myFile), "UTF-8");
// read data
myReader.close();

However, IIRC there are some provisions to autodetect common encodings (such as UTF-8 and UTF-16). UTF-16 can be detected by the byte order mark at the beginning. UTF-8 follows certain rules too, but generally the difference between your platform encoding and UTF-8 isn't going to matter unless you're using international characters in place of Latin ones.

BobMcGee
No, I'm not French. I'm Dutch. I speak Flamish.
Martijn Courteaux
Hah, I'm actually half-Dutch myself. I forgot about you crazy Flamish guys.
BobMcGee
Of course you are Dutch. It would be "Martin" in French, not "Martijn" :)
Pascal Thivent
Flamish? You'll mean Flemish :)
BalusC
I refuse to refer to a people by a name that sounds like something belonging in a tissue. Flamish, I say! :)
BobMcGee
+5  A: 

How does a Reader know that it has to use UTF-8?

You normally specify that yourself in an InputStreamReader. It has a constructor taking the character encoding. E.g.

Reader reader = new InputStreamReader(new FileInputStream("c:/foo.txt"), "UTF-8");

All other readers (as far as I know) use the platform default character encoding, which may indeed not necessarily be the correct encoding (such as -cough- CP-1252).

You can in theory also detect the character encoding automatically based on the byte order mark. This distinguishes the various Unicode encodings from other encodings. Java SE unfortunately doesn't have an API for this, but you can homebrew one which can be used as a replacement for InputStreamReader in the example above:

import java.io.*;

public class UnicodeReader extends Reader {
    private static final int BOM_SIZE = 4;
    private final InputStreamReader reader;

    /**
     * Construct UnicodeReader
     * @param in Input stream.
     * @param defaultEncoding Default encoding to be used if BOM is not found,
     * or <code>null</code> to use system default encoding.
     * @throws IOException If an I/O error occurs.
     */
    public UnicodeReader(InputStream in, String defaultEncoding) throws IOException {
        byte bom[] = new byte[BOM_SIZE];
        String encoding;
        int unread;
        PushbackInputStream pushbackStream = new PushbackInputStream(in, BOM_SIZE);
        int n = pushbackStream.read(bom, 0, bom.length);

        // Read ahead four bytes and check for BOM marks.
        if ((bom[0] == (byte) 0xEF) && (bom[1] == (byte) 0xBB) && (bom[2] == (byte) 0xBF)) {
            encoding = "UTF-8";
            unread = n - 3;
        } else if ((bom[0] == (byte) 0xFE) && (bom[1] == (byte) 0xFF)) {
            encoding = "UTF-16BE";
            unread = n - 2;
        } else if ((bom[0] == (byte) 0xFF) && (bom[1] == (byte) 0xFE)) {
            encoding = "UTF-16LE";
            unread = n - 2;
        } else if ((bom[0] == (byte) 0x00) && (bom[1] == (byte) 0x00) && (bom[2] == (byte) 0xFE) && (bom[3] == (byte) 0xFF)) {
            encoding = "UTF-32BE";
            unread = n - 4;
        } else if ((bom[0] == (byte) 0xFF) && (bom[1] == (byte) 0xFE) && (bom[2] == (byte) 0x00) && (bom[3] == (byte) 0x00)) {
            encoding = "UTF-32LE";
            unread = n - 4;
        } else {
            encoding = defaultEncoding;
            unread = n;
        }

        // Unread bytes if necessary and skip BOM marks.
        if (unread > 0) {
            pushbackStream.unread(bom, (n - unread), unread);
        }

        // Use given encoding.
        if (encoding == null) {
            reader = new InputStreamReader(pushbackStream);
        } else {
            reader = new InputStreamReader(pushbackStream, encoding);
        }
    }

    public String getEncoding() {
        return reader.getEncoding();
    }

    public int read(char[] cbuf, int off, int len) throws IOException {
        return reader.read(cbuf, off, len);
    }

    public void close() throws IOException {
        reader.close();
    }
}
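Wiring it up is then straightforward (a sketch using an in-memory stream so it runs standalone; a real call would wrap a `FileInputStream` instead):

```java
import java.io.*;

public class UnicodeReaderDemo {
    public static void main(String[] args) throws IOException {
        // UTF-8 BOM (EF BB BF) followed by the text "hi"
        byte[] data = { (byte) 0xEF, (byte) 0xBB, (byte) 0xBF, 'h', 'i' };
        Reader reader = new UnicodeReader(new ByteArrayInputStream(data), "ISO-8859-1");
        StringBuilder sb = new StringBuilder();
        int c;
        while ((c = reader.read()) != -1) {
            sb.append((char) c);
        }
        reader.close();
        // The BOM overrides the "ISO-8859-1" fallback and is skipped:
        System.out.println(sb); // hi
    }
}
```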

Edit, as a reply to your edit:

So the encoding depends on the OS. Does that mean that this is not true on every OS:

'a' == 97

No, this is not true. The ASCII encoding (which contains 128 characters, 0x00 through 0x7F) is the basis of all other character encodings. Only characters outside the ASCII set risk being displayed differently in another encoding. The ISO-8859 encodings cover the characters in the ASCII range with the same codepoints, and the Unicode encodings cover the characters in the ISO-8859-1 range with the same codepoints.
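Since Java's `char` is a UTF-16 code unit and UTF-16 agrees with ASCII for the first 128 codepoints, the comparison holds on every platform:

```java
public class AsciiCheck {
    public static void main(String[] args) {
        System.out.println('a' == 97);   // true, regardless of OS or locale
        System.out.println((int) 'a');   // 97
        System.out.println((char) 97);   // a
    }
}
```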

You may find each of those blogs an interesting read:

  1. The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!) (more theoretical of the two)
  2. Unicode - How to get the characters right? (more practical of the two)
BalusC
You forget about EBCDIC... in EBCDIC, 'a' != 97.
Paul Wagland
Haha, you're right. `EBCDIC` is indeed a proprietary (IBM) encoding which existed **next to** `ASCII` in legacy times, but in the end `ASCII` achieved world domination. Nowadays you'll see `EBCDIC` only on old "mainframes" (S/390, VM and consorts), which are in turn, however, configurable to use plain `ASCII`.
BalusC
This is a very thorough posting, and the code is good. I could have sworn that some of the APIs auto-detect common encodings, but that might have been for network stuff only (specifically HTTP).
BobMcGee
Also, the fact that `'a'==97` is always true in Java doesn't necessarily have to do with ASCII directly, but is an effect of Java using UTF-16 internally.
Joachim Sauer
+4  A: 

I'd like to approach this part first:

Java's default encoding is ASCII. Yes?

There are at least 4 different things in the Java environment that can arguably be called "default encoding":

  1. the "default charset" is what Java uses to convert bytes to characters (and byte[] to String) at runtime, when nothing else is specified. This one depends on the platform, settings, command line arguments, ... and is usually just the platform default encoding.
  2. the internal character encoding that Java uses in char values and String objects. This one is always UTF-16! There is no way to change it, it just is UTF-16! This means that a char representing 'a' always has the numeric value 97 and a char representing 'π' always has the numeric value 960.
  3. the character encoding that Java uses to store String constants in .class files. This one is always UTF-8. There is no way to change it.
  4. the charset that the Java compiler uses to interpret Java source code in .java files. This one defaults to the default charset, but can be configured at compile time.
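The first two can be checked at runtime (a sketch; the default charset printed will of course differ per machine):

```java
import java.nio.charset.Charset;

public class EncodingKinds {
    public static void main(String[] args) {
        // 1. the "default charset" -- platform/configuration dependent
        System.out.println(Charset.defaultCharset());

        // 2. the internal encoding is always UTF-16, so these never vary:
        System.out.println((int) 'a');      // 97
        System.out.println((int) '\u03C0'); // 960 (Greek small letter pi)
    }
}
```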

How does a Reader know that it has to use UTF-8?

It doesn't. If you have some plain text file, then you must know the encoding to read it correctly. If you're lucky you can guess (for example, you can try the platform default encoding), but that's an error-prone process and in many cases you wouldn't even have a way to realize that you guessed wrong. This is not specific to Java. It's true for all systems.
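To see why a wrong guess is so dangerous: decoding with the wrong charset usually raises no error at all, it just silently produces garbage. A sketch:

```java
import java.io.UnsupportedEncodingException;

public class WrongGuessDemo {
    public static void main(String[] args) throws UnsupportedEncodingException {
        byte[] utf8 = "caf\u00E9".getBytes("UTF-8"); // 'é' encodes as the two bytes C3 A9

        System.out.println(new String(utf8, "UTF-8"));      // café  -- right guess
        System.out.println(new String(utf8, "ISO-8859-1")); // cafÃ© -- wrong guess, no exception
    }
}
```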

Some formats such as XML and all XML-based formats were designed with this restriction in mind and include a way to specify the encoding in the data, so that guessing is no longer necessary.

Read The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!) for the details.

Joachim Sauer