We are building a (Java) web project with Eclipse. By default, Eclipse uses the Cp1252 encoding on Windows machines (which is what we use).
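For reference, the platform default that Eclipse picks up can be checked from Java itself; a minimal sketch, which on a western-locale Windows machine typically reports windows-1252, i.e. Cp1252:

    import java.nio.charset.Charset;

    public class DefaultCharsetCheck {
        public static void main(String[] args) {
            // The JVM derives this from the OS locale (the file.encoding property),
            // so any file I/O that names no charset silently uses it.
            System.out.println(Charset.defaultCharset());
            System.out.println(System.getProperty("file.encoding"));
        }
    }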
As we also have developers in China (in addition to Europe), I started to wonder if that is really the encoding to use.
My initial thought was to convert to UTF-8, because "it supports all the character sets". However, is this really wise? Should we pick some other encoding instead? I see a couple of issues:
1) How do web browsers interpret the files by default? Does it depend on which language version of the browser one is using? What I am after here is whether we should explicitly declare the encodings used:
- XHTML files can declare the encoding explicitly with an
  <?xml version='1.0' encoding='UTF-8' ?>
  declaration.
- CSS files can declare it with
  @charset "UTF-8";
- JavaScript files have no in-file declaration, but the encoding can be declared globally with
  <meta http-equiv="Content-Script-Type" content="text/javascript; charset=utf-8">
  or per script with
  <script type="text/javascript" charset="utf-8">
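To make this concrete: if we do declare encodings explicitly, we should presumably also stop relying on the platform default wherever our Java code writes such files; a minimal sketch, assuming we settle on UTF-8 (the file name is made up):

    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.io.OutputStreamWriter;
    import java.io.Writer;

    public class ExplicitEncodingWrite {
        public static void main(String[] args) throws IOException {
            // new FileWriter(...) would silently use the platform default (Cp1252 here);
            // naming the charset keeps the bytes in sync with the in-file declaration.
            try (Writer out = new OutputStreamWriter(
                    new FileOutputStream("generated.xhtml"), "UTF-8")) {
                out.write("<?xml version='1.0' encoding='UTF-8' ?>\n");
            }
        }
    }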
What if we leave a CSS file without the @charset "UTF-8"; declaration? How does the browser then decide how the file is encoded?
2) Is it wise to use UTF-8 just because it is so flexible? By locking our code into Cp1252 (or perhaps ISO-8859-1) I can ensure that foreign developers don't introduce special characters into the files. This effectively prevents them from inserting Chinese comments, for example (we should use 100% English). Allowing UTF-8 can also let developers accidentally introduce strange characters that are difficult or impossible to spot with the human eye; this happens when people copy-paste text or hit some odd keyboard combination by accident.
It would seem that allowing UTF-8 in the project just brings problems...
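If we did lock things down, I imagine the policy could also be enforced mechanically in the build; a minimal sketch that flags any non-ASCII byte (the source root and file filter are just assumptions for illustration):

    import java.io.IOException;
    import java.io.UncheckedIOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.util.stream.Stream;

    public class AsciiPolicyCheck {
        public static void main(String[] args) throws IOException {
            try (Stream<Path> files = Files.walk(Paths.get("src"))) {
                files.filter(p -> p.toString().endsWith(".java")).forEach(p -> {
                    try {
                        byte[] bytes = Files.readAllBytes(p);
                        for (int i = 0; i < bytes.length; i++) {
                            // Anything above 0x7F is outside 7-bit ASCII: this catches
                            // Chinese comments as well as invisible "strange characters".
                            if ((bytes[i] & 0xFF) > 0x7F) {
                                System.out.println(p + ": non-ASCII byte at offset " + i);
                                break;
                            }
                        }
                    } catch (IOException e) {
                        throw new UncheckedIOException(e);
                    }
                });
            }
        }
    }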
3) For internationalization, I initially considered UTF-8 a good thing ("how can you add translations if the file encoding doesn't support the characters you need?"). However, as it turned out, Java resource bundles (.properties files) must be encoded in ISO-8859-1, because otherwise they might break. Instead, international characters are written in \uXXXX notation, for example \u4E2D for the Chinese character 中, and the files themselves stay ISO-8859-1. So... we are not even able to use UTF-8 for this.
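For what it's worth, this is the mechanism as I understand it; a minimal sketch (the bundle name and key are made up). The JDK's native2ascii tool can produce the escaped form from a natively encoded file:

    import java.util.ResourceBundle;

    public class BundleDemo {
        public static void main(String[] args) {
            // messages.properties is saved as ISO-8859-1 and contains:
            //   greeting=\u4E2D\u6587
            // ResourceBundle decodes the \uXXXX escapes back into real characters.
            ResourceBundle bundle = ResourceBundle.getBundle("messages");
            System.out.println(bundle.getString("greeting")); // prints 中文
        }
    }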
For binary files... well, the encoding scheme doesn't really matter (I suppose one can say it doesn't even exist).
How should we approach these issues?