Hi, I am using Python 3.1, but I can downgrade if needed.

I have an ASCII file containing a short story written in a language whose alphabet can be represented with upper and/or lower ASCII. I wish to:

1) Detect the encoding to the best of my abilities and get some sort of confidence metric (it would vary depending on the length of the file, right?)

2) Automatically translate the whole thing using some free online service or a library.

Additional question: What if the text is written in a language where it takes 2 or more bytes to represent one letter and the byte order mark is not there to help me?

Finally, how do I deal with punctuation and miscellaneous characters such as the space? It will occur more frequently than some letters, right? And what about the fact that punctuation and characters can sometimes be mixed - there might be two representations of a comma, two representations of what looks like an "a", etc.?

Yes, I have read the article by Joel Spolsky on Unicode. Please help me with at least some of these items.

Thank you!

P.S. This is not homework, but it is for self-educational purposes. I would prefer a letter-frequency library that is open-source and readable over one that is closed but efficient and gets the job done.

+1  A: 

If you have an ASCII file then I can tell you with 100% confidence that it is encoded in ASCII. Beyond that try chardet. But knowing the encoding isn't necessarily enough to determine what language it's in.
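A minimal sketch of the chardet step (assuming a Python 3 compatible version of the chardet package is installed; the filename and the printed values are only illustrative):

import chardet

with open("story.txt", "rb") as f:                 # "story.txt" is a placeholder filename
    raw = f.read()

guess = chardet.detect(raw)                        # dict with 'encoding' and 'confidence' keys
print(guess["encoding"], guess["confidence"])

text = raw.decode(guess["encoding"] or "ascii")    # fall back to ASCII if detection fails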

As for multibyte encodings, the only reliable way to handle them is to hope the text contains some characters from the Latin alphabet and look for which half of each byte pair contains the NUL. Otherwise treat it as UTF-8 unless you know better (Shift-JIS, GB2312, etc.).
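A rough sketch of that NUL-byte heuristic (a hypothetical helper, only meaningful when the text actually contains some Latin letters):

def guess_utf16_order(data):
    # Count NUL bytes in each half of the 2-byte pairs.
    evens = data[0::2].count(0)   # NULs in the first byte of each pair
    odds = data[1::2].count(0)    # NULs in the second byte of each pair
    if evens > odds:
        return "utf-16be"         # high byte first: Latin letters look like \x00m
    elif odds > evens:
        return "utf-16le"         # low byte first: Latin letters look like m\x00
    return None                   # no NULs at all: probably not UTF-16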

Oh, and UTF-8. UTF-8, UTF-8, UTF-8. I don't think I can stress that enough. And in case I haven't... UTF-8.

Ignacio Vazquez-Abrams
Thanks. Please elaborate on the second paragraph. I guess my knowledge of encodings is not as deep as I thought.
Hamish Grubijan
>>> u'me私'.encode('utf-16le')
'm\x00e\x00\xc1y'
>>> u'me私'.encode('utf-16be')
'\x00m\x00ey\xc1'
>>> u'me私'.encode('shift-jis')
'me\x8e\x84'
>>> u'me私'.encode('gb2312')
'me\xcb\xbd'
>>> u'me私'.encode('utf-8')
'me\xe7\xa7\x81'
Ignacio Vazquez-Abrams
Did somebody say UTF-8?!
jathanism
+2  A: 

Essentially there are three main tasks to implement the described application:

  • 1a) Identify the character encoding of the input text
  • 1b) Identify the language of the input text
  • 2) Get the text translated, by way of one of the online services' APIs

For 1a, you may want to take a look at decodeh.py; aside from the script itself, it provides many very useful resources regarding character sets and encoding at large. chardet, mentioned in another answer, also seems worthy of consideration.

Once the character encoding is known, as you suggest, you may solve 1b) by calculating the character frequency profile of the text and matching it with known frequencies. While simple, this approach typically provides a decent precision ratio, although it may be weak on shorter texts and also on texts which follow particular patterns; for example a text in French with many references to units in the metric system will have an unusually high proportion of the letters M, K and C.
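One way to do that matching, as a minimal sketch: normalise the counts and pick the language whose reference profile has the smallest sum of squared differences. The reference numbers below are made up for illustration; real profiles would come from per-language corpora.

from collections import Counter

# Illustrative values only; build real profiles from large corpora.
REFERENCE_PROFILES = {
    "english": {"e": 0.127, "t": 0.091, "a": 0.082, "o": 0.075, "i": 0.070},
    "french":  {"e": 0.147, "s": 0.079, "a": 0.076, "i": 0.075, "t": 0.072},
}

def letter_profile(text):
    counts = Counter(c.lower() for c in text if c.isalpha())
    total = sum(counts.values())
    return {c: n / total for c, n in counts.items()} if total else {}

def closest_language(text):
    profile = letter_profile(text)
    def distance(ref):
        keys = set(profile) | set(ref)
        return sum((profile.get(k, 0.0) - ref.get(k, 0.0)) ** 2 for k in keys)
    return min(REFERENCE_PROFILES, key=lambda lang: distance(REFERENCE_PROFILES[lang]))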

A complementary and very similar approach uses bi-grams (sequences of two letters) and tri-grams (three letters), along with the corresponding frequency-distribution reference tables for various languages.
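A minimal sketch of extracting those n-grams (the matching step would then mirror the single-letter comparison above):

def ngrams(text, n):
    # Keep letters only, lowercase, and slide a window of size n across the text.
    letters = "".join(c.lower() for c in text if c.isalpha())
    return [letters[i:i + n] for i in range(len(letters) - n + 1)]

# e.g. ngrams("The cat", 2) -> ['th', 'he', 'ec', 'ca', 'at']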

Other language detection methods involve tokenizing the text, i.e. considering the words within the text. NLP resources include tables with the most used words in various languages. Such words are typically articles, possessive adjectives, adverbs and the like.

An alternative to language detection is to rely on the online translation service to figure this out for us. What is important is to supply the translation service with text in a character encoding it understands; providing it the language may be superfluous.
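Mechanically, such a call would look roughly like the sketch below; the URL and parameter names are placeholders rather than any real service, so consult the provider's API documentation:

import urllib.parse
import urllib.request

def translate(text, target_lang="en"):
    # Placeholder endpoint and parameters; substitute the real service's API here.
    params = urllib.parse.urlencode({"q": text, "target": target_lang})
    resp = urllib.request.urlopen("http://translate.example.com/api?" + params)
    try:
        return resp.read().decode("utf-8")
    finally:
        resp.close()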

Finally, as with many practical NLP applications, you may decide to implement multiple solutions. By using a strategy design pattern, one can apply several filters/classifiers/steps in a particular order, and exit this logic at different points depending on the situation. For example, if a simple character/bigram frequency matches the text to English (with a small deviation), one may just stop there. Otherwise, if the guessed language is French or German, perform another test, and so on.
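A bare-bones sketch of chaining such steps (the individual detector functions are hypothetical; each is assumed to return a (language, confidence) pair, or None if it cannot decide):

def chained_detect(text, detectors, good_enough=0.9):
    # Try each detector in order; stop as soon as one is confident enough.
    best = None
    for detect in detectors:
        result = detect(text)              # expected: (language, confidence) or None
        if result is None:
            continue
        if result[1] >= good_enough:
            return result                  # confident match: exit early
        if best is None or result[1] > best[1]:
            best = result                  # otherwise remember the best guess so far
    return best

# e.g. chained_detect(text, [letter_freq_guess, trigram_guess, stopword_guess])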

mjv
+2  A: 

Character frequency is pretty straightforward.

I just noticed that you are using Python 3.1, so this is even easier:

>>> from collections import Counter
>>> Counter("Μεταλλικα")
Counter({'α': 2, 'λ': 2, 'τ': 1, 'ε': 1, 'ι': 1, 'κ': 1, 'Μ': 1})

For older versions of Python:

>>> from collections import defaultdict
>>> letter_freq=defaultdict(int)
>>> unistring = "Μεταλλικα"
>>> for uc in unistring: letter_freq[uc]+=1
... 
>>> letter_freq
defaultdict(<class 'int'>, {'τ': 1, 'α': 2, 'ε': 1, 'ι': 1, 'λ': 2, 'κ': 1, 'Μ': 1})
gnibbler
Metallika, lol. Well, yes, I can compute what I call a "naive" frequency, but how do I compare that distribution to some of the known ones?
Hamish Grubijan
+1  A: 

I have provided some conditional answers; however, your question is a little vague and inconsistent. Please edit your question to answer my questions below.

(1) You say that the file is ASCII but you want to detect an encoding? Huh? Isn't the answer "ascii"?? If you really need to detect an encoding, use chardet.

(2) Automatically translate what? The encoding? The language? If the language, do you know what the input language is, or are you trying to detect that also? To detect the language, try guess-language ... note that it needs a tweak for better detection of Japanese. See this SO topic, which notes the Japanese problem and also highlights that for ANY language-guesser you need to remove all HTML/XML/JavaScript/etc. noise from your text, otherwise it will heavily bias the result towards ASCII-only languages like English (or Catalan!).
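Usage is roughly as below, though the exact import and function name vary between forks of the package, so treat this as an assumption and check the version you install:

# Assumed API; verify against the installed fork of guess-language.
from guess_language import guessLanguage

print(guessLanguage("Ceci est un petit texte en français."))   # expected: 'fr'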

(3) You are talking about a "letter-frequency library" ... you are going to use this library to do what? If language guessing, it appears that the frequency of single letters is not much help in distinguishing between languages which use the same (or almost the same) character set; one needs to use the frequency of three-letter groups ("trigrams").

(4) Your questions on punctuation and spaces: it depends on your purpose (which we are not yet sure of). If the purpose is language detection, the idea is to standardise the text; e.g. replace every run of anything that is not a letter or apostrophe with a single space, then remove any leading/trailing whitespace, then add 1 leading and 1 trailing space -- more precision is gained by treating start/end-of-word bigrams as trigrams. Note that, as usual in all text processing, you should decode your input into unicode immediately and work with unicode thereafter.
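A small sketch of that standardisation step (one possible reading of "letter or apostrophe"; the function name is just for illustration):

def standardise(raw_bytes, encoding):
    # Decode to unicode immediately, then normalise for language detection.
    text = raw_bytes.decode(encoding).lower()
    # Replace every run of anything that is not a letter or apostrophe with a single space.
    kept = "".join(c if (c.isalpha() or c == "'") else " " for c in text)
    collapsed = " ".join(kept.split())      # collapse whitespace runs, strip the ends
    return " " + collapsed + " "            # 1 leading and 1 trailing space, as described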

John Machin