views:

283

answers:

3

Does anybody know if there is a simple way to detect character set encoding in Java? It seems to me that some programs can detect which character set a given piece of data uses, or at least make an approximation.

I suppose the underlying mechanism would have to decode the data in each candidate character set and pick whichever one produces the fewest undefined characters, breaking ties in favour of the more common character set.
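A minimal sketch of that brute-force idea, using only the JDK's CharsetDecoder; the candidate list and its ordering (most common first, used to break ties) are purely illustrative:

import java.nio.ByteBuffer;
import java.nio.CharBuffer;
import java.nio.charset.Charset;
import java.nio.charset.CharsetDecoder;
import java.nio.charset.CoderResult;
import java.nio.charset.CodingErrorAction;
import java.util.Arrays;
import java.util.List;

public class CharsetGuesser {

    // Candidates ordered roughly by how common they are; an earlier entry wins ties.
    private static final List<String> CANDIDATES =
            Arrays.asList("UTF-8", "windows-1252", "ISO-8859-1", "UTF-16BE", "UTF-16LE");

    public static String guess(byte[] data) {
        String best = null;
        long fewestErrors = Long.MAX_VALUE;
        for (String name : CANDIDATES) {
            long errors = countDecodingErrors(data, Charset.forName(name));
            if (errors < fewestErrors) {     // strictly fewer errors wins, so ties keep the earlier entry
                fewestErrors = errors;
                best = name;
            }
        }
        return best;
    }

    // Counts malformed/unmappable byte sequences when decoding with the given charset.
    private static long countDecodingErrors(byte[] data, Charset cs) {
        CharsetDecoder decoder = cs.newDecoder()
                .onMalformedInput(CodingErrorAction.REPORT)
                .onUnmappableCharacter(CodingErrorAction.REPORT);
        ByteBuffer in = ByteBuffer.wrap(data);
        CharBuffer out = CharBuffer.allocate(1024);
        long errors = 0;
        while (true) {
            CoderResult result = decoder.decode(in, out, true);
            if (result.isUnderflow()) {
                break;                                        // all input consumed
            } else if (result.isOverflow()) {
                out.clear();                                  // output is discarded, only errors matter
            } else {
                errors++;                                     // malformed or unmappable sequence
                in.position(in.position() + result.length()); // skip the offending bytes
            }
        }
        return errors;
    }
}

Note that single-byte charsets such as ISO-8859-1 decode every byte without error, so they always score zero; that is exactly why the tie-breaking by "more common" matters.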

Any ideas?

+2  A: 

Maybe this other question's answers could be helpful to you: http://stackoverflow.com/questions/774075/character-encoding-detection-algorithm

Daniel Schneller
A: 

To find out whether data is in one of the Unicode formats (UTF-8, UTF-16, etc.) you can read the data as a byte stream and check the first few bytes (up to 4, the maximum BOM size) for a byte order mark; the BOM is different for each encoding.

For example, for UTF-8 the first 3 bytes will be EF BB BF.

For encodings other than the Unicode encodings I am not sure...
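A small sketch of that BOM check in plain Java, using the standard BOM byte sequences; it returns null when no BOM is present, which is the common case, especially for UTF-8:

public final class BomSniffer {

    // Returns the charset implied by a byte order mark at the start of the data,
    // or null if no BOM is present.
    public static String detectFromBom(byte[] b) {
        if (b.length >= 3 && (b[0] & 0xFF) == 0xEF && (b[1] & 0xFF) == 0xBB
                          && (b[2] & 0xFF) == 0xBF) {
            return "UTF-8";                          // EF BB BF
        }
        if (b.length >= 4 && b[0] == 0x00 && b[1] == 0x00
                          && (b[2] & 0xFF) == 0xFE && (b[3] & 0xFF) == 0xFF) {
            return "UTF-32BE";                       // 00 00 FE FF
        }
        if (b.length >= 4 && (b[0] & 0xFF) == 0xFF && (b[1] & 0xFF) == 0xFE
                          && b[2] == 0x00 && b[3] == 0x00) {
            return "UTF-32LE";                       // FF FE 00 00 (must be checked before UTF-16LE)
        }
        if (b.length >= 2 && (b[0] & 0xFF) == 0xFE && (b[1] & 0xFF) == 0xFF) {
            return "UTF-16BE";                       // FE FF
        }
        if (b.length >= 2 && (b[0] & 0xFF) == 0xFF && (b[1] & 0xFF) == 0xFE) {
            return "UTF-16LE";                       // FF FE
        }
        return null;
    }
}

As the comment below points out, the BOM is optional, so a null result only means you have to fall back on heuristics.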

sreejith
The optional UTF-8 BOM is only useful if it is present: http://en.wikipedia.org/wiki/Byte_order_mark
trashgod
A: 

Take a look at jchardet, a Java port of the charset detection code used in the Mozilla browser, which specializes in "guessing" the charset of a document.
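Usage roughly follows the sample that ships with jchardet; something like the sketch below (class and method names such as nsDetector, DoIt and DataEnd should be double-checked against the version you download):

import java.io.BufferedInputStream;
import java.io.FileInputStream;
import java.io.InputStream;
import org.mozilla.intl.chardet.nsDetector;
import org.mozilla.intl.chardet.nsICharsetDetectionObserver;
import org.mozilla.intl.chardet.nsPSMDetector;

public class JchardetSketch {
    public static void main(String[] args) throws Exception {
        nsDetector detector = new nsDetector(nsPSMDetector.ALL);

        // The observer is called back once the detector is confident about a charset.
        final String[] detected = new String[1];
        detector.Init(new nsICharsetDetectionObserver() {
            public void Notify(String charset) {
                detected[0] = charset;
            }
        });

        try (InputStream in = new BufferedInputStream(new FileInputStream(args[0]))) {
            byte[] buf = new byte[1024];
            int len;
            boolean done = false;
            boolean isAscii = true;
            while (!done && (len = in.read(buf)) != -1) {
                if (isAscii) {
                    isAscii = detector.isAscii(buf, len);  // pure ASCII needs no detection
                }
                if (!isAscii) {
                    done = detector.DoIt(buf, len, false); // feed bytes until the detector decides
                }
            }
        }
        detector.DataEnd();  // flush; may trigger the Notify() callback

        System.out.println(detected[0] != null
                ? "Detected charset: " + detected[0]
                : "No confident guess (data may be plain ASCII).");
    }
}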

As a somewhat newer alternative, the cpdetector library specializes in detecting the code page of a document.

Sylar