views:

3235

answers:

6

I have a file which is encoded as ISO-8859-1, and contains characters such as ô.

I am reading this file with java code, something like:

     File in = new File("myfile.csv");
     InputStream fr = new FileInputStream(in);
     byte[] buffer = new byte[4096];
     while (true) {
          int byteCount = fr.read(buffer, 0, buffer.length);
          if (byteCount <= 0) {
              break;
          }

          String s = new String(buffer, 0, byteCount,"ISO-8859-1");
          System.out.println(s);
     }

However, the ô character is always garbled, usually printing as a ?.

I have read around the subject (and learnt a little on the way), e.g.

but still cannot get this working.

Interestingly, this works on my local PC (XP) but not on my Linux box.

I have checked that my JDK supports the required charsets (they are standard, so this is no surprise) using:

System.out.println(java.nio.charset.Charset.availableCharsets());
+2  A: 

If you can, try to run your program in a debugger to see what's inside your 's' string after it is created. It is possible that it has the correct content, but that the output is garbled by the System.out.println(s) call. In that case, there is probably a mismatch between what Java thinks the encoding of your output is and the character encoding of your terminal/console on Linux.
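One way to check for such a mismatch without a debugger (a minimal sketch, not part of the original answer; the string literal stands in for the decoded file contents) is to print the JVM's default charset alongside the numeric values of the characters in the string:

```java
import java.nio.charset.Charset;

public class EncodingCheck {
    public static void main(String[] args) {
        // The default charset is what System.out uses to encode output
        System.out.println("file.encoding   = " + System.getProperty("file.encoding"));
        System.out.println("defaultCharset  = " + Charset.defaultCharset());

        // Stand-in for the decoded string; in the real program, inspect 's'
        String s = "\u00f4";
        for (int i = 0; i < s.length(); i++) {
            // 244 (0xF4) means the string itself was decoded correctly,
            // so any garbling happens later, on output
            System.out.println("char " + i + " = " + (int) s.charAt(i));
        }
    }
}
```

If the characters print as 244 but the terminal still shows ?, the problem is on the output side, not in the decoding.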

Peter Štibraný
+6  A: 

I suspect that either your file isn't actually encoded as ISO-8859-1, or System.out doesn't know how to print the character.

To check for the first, examine the relevant byte in the file. To check for the second, examine the relevant character in the string, printing it out with

 System.out.println((int) s.charAt(index));

In both cases the result should be 244 decimal; 0xf4 hex.

See my article on Unicode debugging for general advice (the code presented is in C#, but it's easy to convert to Java, and the principles are the same).

In general, by the way, I'd wrap the stream with an InputStreamReader with the right encoding - it's easier than creating new strings "by hand". I realise this may just be demo code though.

EDIT: Here's a really easy way to prove whether or not the console will work:

 System.out.println("Here's the character: \u00f4");
Jon Skeet
I have used the Linux file tool to test the type of the file: file --mime FranceJ2.csv reports FranceJ2.csv: text/plain; charset=iso-8859-1. I have also confirmed that I can read it correctly in, say, vi. But I will follow your suggestions.
Joel
Don't trust tools that are trying to detect character encodings automatically. They're always just based on heuristics, and have to be. They don't know what text your file is really meant to contain.
Jon Skeet
A hexdump of the file yields: 0000000 0df4 000a (any suggestions!?)
Joel
Like Jon suggests in his article, verify the data at each step. If you don't run the code in a debugger, you can dump the hex bytes to the console to make sure you really have the data you expect (especially since the file is this small).
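Such a dump can be done in a few lines (a minimal sketch, not part of the original comment; the file name is the one from the question):

```java
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;

public class HexDump {
    public static void main(String[] args) throws IOException {
        InputStream in = new FileInputStream("myfile.csv");
        try {
            int b;
            while ((b = in.read()) != -1) {
                // Print each byte as two hex digits;
                // an ISO-8859-1 ô should appear as f4
                System.out.printf("%02x ", b);
            }
            System.out.println();
        } finally {
            in.close();
        }
    }
}
```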
Peter Štibraný
As suggested, the decimal value of the character is 244. This is mysterious, since it suggests that the garbling occurs during the System.out call, or in the terminal itself. I know that it is not the terminal, since I can cat the file and see its content no problem. Hmmm.
Joel
@Joel: Any luck with System.console().printf(s) then?
Zach Scrivena
@Zach - no, I'm afraid it yields the same result. Oddly enough, though, I have noticed that setting -Dfile.encoding to UTF16 causes it to work, but not if set to UTF8. I do not understand why this would be, and it seems more of a hack than a fix.
Joel
@Joel: What if you redirect program output to a file, and then cat it?
Zach Scrivena
@Zach - same, the chars come out as ?. Also, if I put the debugger on it, they also show as ?. I am most perplexed.
Joel
Please refer to my Answer below for the code I used to get this working. The suggestion in this post that the problem was due to the System.out call was correct. Thanks for all your help.
Joel
+5  A: 

Parsing the file as fixed-size blocks of bytes is not good --- what if a character has a byte representation that straddles two blocks? Use an InputStreamReader with the appropriate character encoding instead:

 BufferedReader br = new BufferedReader(
         new InputStreamReader(
         new FileInputStream("myfile.csv"), "ISO-8859-1"));

 char[] buffer = new char[4096]; // character (not byte) buffer 

 while (true)
 {
      int charCount = br.read(buffer, 0, buffer.length);

      if (charCount == -1) break; // reached end-of-stream 

      String s = String.valueOf(buffer, 0, charCount);
      // alternatively, we can append to a StringBuilder

      System.out.println(s);
 }

Btw, remember to check that the Unicode character can indeed be displayed correctly. You could also redirect the program output to a file and then compare it with the original file.

As Jon Skeet suggests, the problem may also be console-related. Try System.console().printf(s) to see if there is a difference.
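One caveat worth noting with that suggestion (not part of the original answer): System.console() returns null when the JVM has no attached interactive console, for example when output is redirected to a file or a pipe, so a null check is needed. A minimal sketch:

```java
import java.io.Console;

public class ConsoleCheck {
    public static void main(String[] args) {
        Console console = System.console();
        if (console != null) {
            // Writes using the console's own charset, bypassing System.out
            console.printf("Here's the character: \u00f4%n");
        } else {
            // Happens with redirected output, IDE run windows, etc.
            System.out.println("No interactive console attached");
        }
    }
}
```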

Zach Scrivena
+1  A: 

Basically, if it works on your local XP PC but not on Linux, and you are parsing the exact same file (i.e. you transferred it in a binary fashion between the boxes), then it probably has something to do with the System.out.println call. I don't know how you verify the output, but if you do it by connecting with a remote shell from the XP box, then there is the character set of the shell (and the client) to consider.

Additionally, what Zach Scrivena suggests is also true - you cannot assume that you can create strings from chunks of data in that way: either use an InputStreamReader, or read the complete data into an array first (obviously not going to work for a large file). However, since it does seem to work on XP, I would venture that this is probably not your problem in this specific case.

Eek
A: 

OK, finally got it working. As suggested, it was related to the encoding System.out uses when writing to the console:

Here is a working version of the code (demonstrative only!), with the comments showing what works and what does not.

This program was run with no -Dfile.encoding option. Changing this flag to various values (UTF8, UTF16, Cp850) had some effect, but it is not an option Sun has made public, and it only worked when set to UTF16 or Cp850, which I did not want; doing so felt like an unnecessary hack.

import java.nio.*;
import java.io.*;

public class Test {

    public static void main(String[] args) throws Exception {
            String FE = "ISO8859_1";

            File in =  new File(args[0]);
            InputStream fr = new FileInputStream(in);
            final byte[] buffer = new byte[4096];
            while (true) {
                    int byteCount = fr.read(buffer, 0, buffer.length);
                    if (byteCount <= 0) {
                            break;
                    }

                    String s = new String(buffer, 0, byteCount, FE);

                    // these do not work
                    System.out.println(new String(s.getBytes(FE)));
                    System.console().printf(s);
                    System.console().printf(new String(s.getBytes(FE)));

                    // this works
                    PrintStream ps = new PrintStream(System.out, true, FE);
                    ps.println(s);
            }
    }

}

Joel
This is not a safe solution outside any Western-configured PC.
McDowell
Well, it's safe in the sense that it will not "destroy" any characters; the output will just come out garbled (since the input is ISO-8859-1 in the first place). The surrounding environment simply has to be aware of what character set the output is in, which is the case anyhow.
Eek
This is dangerous code. You should be using a Reader with the correct character encoding specified. I cringe to think that someone will accidentally use this code as a good example of dealing with character streams.
lycono
+2  A: 

@Joel - your own answer confirms that the problem is a difference between the default encoding on your operating system (UTF-8, the one Java has picked up) and the encoding your terminal is using (ISO-8859-1).

Consider this code:

import java.io.IOException;
import java.nio.charset.Charset;

public static void main(String[] args) throws IOException {
 byte[] data = { (byte) 0xF4 };
 String decoded = new String(data, "ISO-8859-1");
 if (!"\u00f4".equals(decoded)) {
  throw new IllegalStateException();
 }

 // write default charset
 System.out.println(Charset.defaultCharset());

 // dump bytes to stdout
 System.out.write(data);

 // will encode to default charset when converting to bytes
 System.out.println(decoded);
}

By default, my Ubuntu (8.04) terminal uses the UTF-8 encoding. With this encoding, this is printed:

UTF-8
�ô

(The lone 0xF4 byte from the write call is not valid UTF-8, so the terminal renders it as a replacement character; the println output was encoded as UTF-8 and displays correctly.)

If I switch the terminal's encoding to ISO 8859-1, this is printed:

UTF-8
ôÃ´

In both cases, the same bytes are being emitted by the Java program:

5554 462d 380a f4c3 b40a

The only difference is in how the terminal interprets the bytes it receives. In ISO 8859-1, ô is encoded as the single byte 0xF4. In UTF-8, ô is encoded as the two bytes 0xC3 0xB4. The other characters are common to both encodings.
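That byte-level difference can be verified directly (a minimal sketch, not part of the original answer):

```java
import java.io.UnsupportedEncodingException;

public class ByteCompare {
    public static void main(String[] args) throws UnsupportedEncodingException {
        String s = "\u00f4"; // ô

        // ISO-8859-1 encodes ô as a single byte: 0xF4
        byte[] latin1 = s.getBytes("ISO-8859-1");
        System.out.printf("ISO-8859-1: %02x%n", latin1[0] & 0xFF);

        // UTF-8 encodes ô as two bytes: 0xC3 0xB4
        byte[] utf8 = s.getBytes("UTF-8");
        System.out.printf("UTF-8:      %02x %02x%n", utf8[0] & 0xFF, utf8[1] & 0xFF);
    }
}
```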

McDowell