views: 3795 · answers: 7
Dear friends,

The problem is this: as you know, there are thousands of characters in the Unicode charts, and I want to convert all the look-alike characters to the letters of the English alphabet.

For instance here are a few conversions:

ҥ->H
Ѷ->V
Ȳ->Y
Ǭ->O
Ƈ->C
tђє Ŧค๓เℓy --> the Family
...

I also saw that there are more than 20 versions of the letter A/a, and I don't know how to classify them. They are like needles in a haystack.

The complete list of Unicode characters is at http://www.ssec.wisc.edu/~tomw/java/unicode.html or http://unicode.org/charts/charindex.html . Just scroll down and look at the variations of the letters.

How can I convert all these with Java? Please help me :(

+3  A: 

The problem with "converting" arbitrary Unicode to ASCII is that the meaning of a character is culture-dependent. For example, "ß" to a German-speaking person should be converted to "ss" while an English-speaker would probably convert it to "beta".

Add to that the fact that Unicode has multiple code points for the same glyphs.

The upshot is that the only way to do this is to create a massive table mapping each Unicode character to the ASCII character you want to convert it to. You can take a shortcut by normalizing accented characters to normalization form KD, but not all characters normalize to ASCII. In addition, Unicode does not define which parts of a glyph are "accents".
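As an aside, here is a minimal sketch (the class name is invented) of what that KD-normalization shortcut looks like in Java, and where it stops working:

```java
import java.text.Normalizer;

public class NfkdDemo {
    public static void main(String[] args) {
        // NFKD decomposes "Á" (U+00C1) into "A" + U+0301 (combining acute);
        // stripping the combining marks then leaves plain ASCII "A".
        String decomposed = Normalizer.normalize("Á", Normalizer.Form.NFKD);
        String stripped = decomposed.replaceAll("\\p{M}", "");
        System.out.println(stripped); // A

        // But many characters have no decomposition to ASCII at all:
        // "Ø" (U+00D8) and "ß" (U+00DF) survive NFKD unchanged.
        System.out.println(Normalizer.normalize("Øß", Normalizer.Form.NFKD)); // Øß
    }
}
```

Characters in the second category are exactly the ones that force you back to a hand-built table.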

Here is a tiny excerpt from an app that does this:

switch (c)
{
 case 'A':
 case '\u00C0': //  À LATIN CAPITAL LETTER A WITH GRAVE
 case '\u00C1': //  Á LATIN CAPITAL LETTER A WITH ACUTE
 case '\u00C2': //  Â LATIN CAPITAL LETTER A WITH CIRCUMFLEX
 // and so on for about 20 lines...
  return "A"; // no break needed; a break after return would be unreachable

 case '\u00C6': //  Æ LATIN CAPITAL LIGATURE AE
  return "AE";

 // And so on for pages...
}
Dour High Arch
I agree. You should create a dictionary of conversions specifically for your application and expected audience. For example, for a Spanish-speaking audience I would only translate ÁÉÍÓÚÜÑáéíóúü¿¡
Roberto Bonvallet
Roberto, there are thousands of characters and I can't do this manually.
Ahmet Alp Balkan
What human language are you using that has "thousands" of characters? Japanese? What would you expect どうしようとしていますか to be converted to?
Dour High Arch
The example you've given is not ideal: U+00DF LATIN SMALL LETTER SHARP S "ß" is not the same Unicode letter as U+03B2 GREEK SMALL LETTER BETA "β".
Joachim Sauer
+2  A: 

You could try using unidecode, which is available as a Ruby gem and as a Perl module on CPAN. Essentially, it works as a huge lookup table, where each Unicode code point maps to an ASCII character or string.

Daniel Vandersluis
I need something in Java.
Ahmet Alp Balkan
You might be able to get a lookup table from one of these.
Kathy Van Stone
This is an amazing package, but it transliterates the sound of the character, for example it converts "北" to "Bei" because that is what the character sounds like in Mandarin. I think the questioner wants to convert glyphs to what they visually resemble in English.
Dour High Arch
It does do that for Latin characters, though: â becomes a, et al. @ahmetalpbalkan I agree with Kathy; you could use it as a resource to build your own lookup table, and the logic should be pretty simple. Unfortunately there doesn't seem to be a Java version.
Daniel Vandersluis
+4  A: 

Attempting to "convert them all" is the wrong approach to the problem.

Firstly, you need to understand the limitations of what you are trying to do. As others have pointed out, diacritics (the funny marks above letters, to those unlucky enough to speak only English) are there for a reason: they are essentially unique letters in the alphabet of that language, with their own meaning and sound. Removing those marks is much the same as replacing random letters in an English word. This is before you even consider languages written in Cyrillic, or other scripts such as Arabic, which simply cannot be "converted" to English.

If you must, for whatever reason, convert characters, then the only sensible way to approach this is to first reduce the scope of the task at hand. Consider the source of the input: if you are coding an application for "the Western world" (to use as good a phrase as any), it would be unlikely that you would ever need to parse Arabic characters. Similarly, the Unicode character set contains hundreds of mathematical and pictorial symbols: there is no (easy) way for users to directly enter these, so you can assume they can be ignored.

By taking these logical steps you can reduce the number of possible characters to parse to the point where a dictionary based lookup / replace operation is feasible. It then becomes a small amount of slightly boring work creating the dictionaries, and a trivial task to perform the replacement. If your language supports native Unicode characters (as Java does) and optimises static structures correctly, such find and replaces tend to be blindingly quick.
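For example, here is a minimal sketch of such a dictionary-based lookup/replace in Java (the map entries below are purely illustrative; a real table would be built for your expected audience and input):

```java
import java.util.HashMap;
import java.util.Map;

public class DiacriticMap {
    // A tiny illustrative dictionary; a real one would cover every
    // non-ASCII character you expect to see in your input.
    private static final Map<Character, String> REPLACEMENTS = new HashMap<>();
    static {
        REPLACEMENTS.put('é', "e");
        REPLACEMENTS.put('ò', "o");
        REPLACEMENTS.put('ç', "c");
        REPLACEMENTS.put('Æ', "AE"); // ligatures can map to more than one letter
    }

    public static String replaceAll(String input) {
        StringBuilder sb = new StringBuilder(input.length());
        for (int i = 0; i < input.length(); i++) {
            char c = input.charAt(i);
            String replacement = REPLACEMENTS.get(c);
            // Pass unknown characters through untouched.
            sb.append(replacement != null ? replacement : c);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(replaceAll("çédille Æon")); // cedille AEon
    }
}
```

Mapping to `String` rather than `char` is what lets a single character like "Æ" expand to "AE".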

This comes from experience of having worked on an application that was required to allow end users to search bibliographic data that included diacritic characters. The lookup arrays (as it was in our case) took perhaps one man-day to produce, covering all diacritic marks for all Western European languages.

iAn
iAn, thanks for answering. Actually I'm not working with Arabic languages or anything like that. You know, some people use diacritics as funny characters, and I have to remove them as much as I can. For instance, I gave the "tђє Ŧค๓เℓy --> the Family" conversion in the example, but it seems difficult to convert that completely. However, we can do the "òéışöç->oeisoc" conversion in a simple way. But what is the exact way to do this? Creating arrays and replacing manually? Or does the language have native functions for this?
Ahmet Alp Balkan
+2  A: 

If the need is to convert "òéışöç->oeisoc", you can use this as a starting point:

public class AsciiUtils {
    private static final String PLAIN_ASCII =
      "AaEeIiOoUu"    // grave
    + "AaEeIiOoUuYy"  // acute
    + "AaEeIiOoUuYy"  // circumflex
    + "AaOoNn"        // tilde
    + "AaEeIiOoUuYy"  // umlaut
    + "Aa"            // ring
    + "Cc"            // cedilla
    + "OoUu"          // double acute
    ;

    private static final String UNICODE =
     "\u00C0\u00E0\u00C8\u00E8\u00CC\u00EC\u00D2\u00F2\u00D9\u00F9"             
    + "\u00C1\u00E1\u00C9\u00E9\u00CD\u00ED\u00D3\u00F3\u00DA\u00FA\u00DD\u00FD" 
    + "\u00C2\u00E2\u00CA\u00EA\u00CE\u00EE\u00D4\u00F4\u00DB\u00FB\u0176\u0177" 
    + "\u00C3\u00E3\u00D5\u00F5\u00D1\u00F1"
    + "\u00C4\u00E4\u00CB\u00EB\u00CF\u00EF\u00D6\u00F6\u00DC\u00FC\u0178\u00FF" 
    + "\u00C5\u00E5"                                                             
    + "\u00C7\u00E7" 
    + "\u0150\u0151\u0170\u0171" 
    ;

    // private constructor: this class can't be instantiated!
    private AsciiUtils() { }

    // remove accents from a string, replacing each accented character with its ASCII equivalent
    public static String convertNonAscii(String s) {
       if (s == null) return null;
       StringBuilder sb = new StringBuilder();
       int n = s.length();
       for (int i = 0; i < n; i++) {
          char c = s.charAt(i);
          int pos = UNICODE.indexOf(c);
          if (pos > -1){
              sb.append(PLAIN_ASCII.charAt(pos));
          }
          else {
              sb.append(c);
          }
       }
       return sb.toString();
    }

    public static void main(String[] args) {
       String s = 
         "The result : È,É,Ê,Ë,Û,Ù,Ï,Î,À,Â,Ô,è,é,ê,ë,û,ù,ï,î,à,â,ô,ç";
       System.out.println(AsciiUtils.convertNonAscii(s));
       // output : 
       // The result : E,E,E,E,U,U,I,I,A,A,O,e,e,e,e,u,u,i,i,a,a,o,c
    }
}

JDK 1.6 provides the java.text.Normalizer class, which can be used for this task.

See an example here

RealHowTo
Unfortunately that will not handle ligatures like Æ.
Dour High Arch
Hmm that may be helpful. Thanks.
Ahmet Alp Balkan
+2  A: 

There is no easy or general way to do what you want, because it is just your subjective opinion that these letters look like the Latin letters you want to convert them to. They are actually separate letters, with their own distinct names and sounds, which just happen to superficially resemble Latin letters.

If you want that conversion, you have to create your own translation table, based on what Latin letters you think the non-Latin letters should be converted to.

(If you only want to remove diacritical marks, there are some answers in this thread: http://stackoverflow.com/questions/249087/how-do-i-remove-diacritics-accents-from-a-string-in-net However, you describe a more general problem.)

JacquesB
+1. Here's a Java version of the 'remove diacritics' question: http://stackoverflow.com/questions/1016955/method-to-substitute-foreign-for-english-characters-in-java; see Michael Borgwardt's and devio's answers
Jonik
+3  A: 

Reposting my post from: here

This method works fine in Java.

It basically converts each accented character into its unaccented counterpart followed by its combining diacritics. Then you can use a regex to strip off the diacritics.

import java.text.Normalizer;
import java.util.regex.Pattern;

public String deAccent(String str) {
    String nfdNormalizedString = Normalizer.normalize(str, Normalizer.Form.NFD); 
    Pattern pattern = Pattern.compile("\\p{InCombiningDiacriticalMarks}+");
    return pattern.matcher(nfdNormalizedString).replaceAll("");
}
hashable
@hashable Thanks for that! I hope it works :)
Ahmet Alp Balkan
InCombiningDiacriticalMarks doesn't convert all Cyrillic text. For example, Општина Богомила is left untouched. It would be nice if one could convert it to Opstina Bogomila or something similar.
iwein
It doesn't transliterate at all. It merely removes decomposed diacritical marks ("accents"). The previous step (Form.NFD) breaks á down into a + a combining acute accent, i.e. it decomposes the accented character into an unaccented character plus a diacritical mark. This would convert Cyrillic Ѽ into Ѡ, but no further.
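A small demo of that behaviour (the class name is invented): NFD only separates combining marks, and characters without a decomposition pass through unchanged, so the regex approach cannot transliterate Cyrillic.

```java
import java.text.Normalizer;

public class NfdDemo {
    public static void main(String[] args) {
        // NFD splits "á" (U+00E1) into "a" (U+0061) + combining acute (U+0301):
        String nfd = Normalizer.normalize("\u00E1", Normalizer.Form.NFD);
        System.out.println(nfd.length());        // 2
        System.out.println((int) nfd.charAt(0)); // 97 ('a')

        // Plain Cyrillic letters have no decomposition,
        // so stripping combining marks leaves them untouched:
        String cyr = Normalizer.normalize("Богомила", Normalizer.Form.NFD)
                .replaceAll("\\p{InCombiningDiacriticalMarks}+", "");
        System.out.println(cyr); // Богомила
    }
}
```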
MSalters
+3  A: 

Since the encoding that turns "the Family" into "tђє Ŧค๓เℓy" is effectively random and not following any algorithm that can be explained by the information of the Unicode codepoints involved, there's no general way to solve this algorithmically.

You will need to build a mapping from Unicode characters to the Latin characters they resemble. You could probably do this with some smart machine learning on the actual glyphs representing the Unicode code points, but I think the effort would be greater than building the mapping manually, especially if you have a good number of examples from which to build it.

To clarify: a few of the substitutions can actually be derived from the Unicode data (as the other answers demonstrate), but some letters simply have no reasonable association with the Latin characters they resemble.

Examples:

  • "ђ" (U+0452 CYRILLIC SMALL LETTER DJE) is more related to "d" than to "h", but is used to represent "h".
  • "Ŧ" (U+0166 LATIN CAPITAL LETTER T WITH STROKE) is somewhat related to "T" (as the name suggests) but is used to represent "F".
  • "ค" (U+0E04 THAI CHARACTER KHO KHWAI) is not related to any latin character at all and in your example is used to represent "a"
Joachim Sauer