The server runs PHP5 and the HTML charset is latin1 (iso-8859-1). With regular form POST requests there's no problem with "special" characters like the em dash (–), for example. I don't know exactly why, but it works, probably because the browser has a representable character at char code 150 (which is what ord() gives me in PHP on the server for a literal em dash).

Now our application also provides a kind of preview mechanism via ajax: the text is sent to the server and complete HTML for a preview is sent back. However, when the same char code 150 em dash is sent via ajax (tested with both GET and POST), it turns into something longer: %E2%80%93. I can already see this in the Apache log.

According to various sources I found, e.g. http://www.tachyonsoft.com/uc0020.htm , this is the UTF-8 byte representation of the em dash, and as far as I understand it JavaScript handles everything as Unicode.
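A quick sketch to verify those bytes (the html_entity_decode call is just one way to build the character on PHP5):

    <?php
    // The dash is U+2013; UTF-8 spreads it over three bytes,
    // which is exactly the %E2%80%93 visible in the Apache log.
    $char = html_entity_decode('&#x2013;', ENT_QUOTES, 'UTF-8');
    echo bin2hex($char), "\n";   // e28093
    echo strlen($char), "\n";    // 3 bytes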

However, within my app I need everything in latin1. Simply put: just as a regular POST request gives me that em dash as char code 150, I need the same for the translated UTF-8 representation.

That's where I'm failing: on the server I try to decode it in PHP with either utf8_decode(...) or iconv('UTF-8', 'iso-8859-1', ...), but in both cases I get a plain ? for this character (and iconv also throws a notice: Detected an illegal character in input string).
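A minimal sketch of the two failing conversions, assuming the ajax payload arrives as the raw UTF-8 bytes:

    <?php
    // What the ajax request delivers: the UTF-8 bytes for the dash (U+2013).
    $utf8 = "\xE2\x80\x93";

    // utf8_decode() only knows ISO-8859-1, which has no slot for U+2013,
    // so the character degrades to a plain '?'.
    echo utf8_decode($utf8), "\n";

    // Strict ISO-8859-1 cannot represent it either, so iconv() raises
    // "Detected an illegal character in input string" instead of converting.
    echo iconv('UTF-8', 'ISO-8859-1', $utf8), "\n";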

My goal is to find an automated solution, but maybe I'm trying to be überclever in this case?

I've found other people simply doing manual replacement with a predefined input/output set, but that would always leave me with the feeling that I could lose characters.

The observant reader will note that I'm behind on understanding the full impact and complexity of Unicode and character conversion, and I'd definitely prefer to understand the thing as a whole rather than apply a simple manual mapping.

Update based on Delan's question about the need for a single-byte charset:

Truth is, I don't know if I need it. Currently I have two ways to pass data to the server and get results back:

  1. client latin1 -> normal POST request -> latin1 on server, server sends back a complete page in latin1, characters OK

  2. client latin1 -> ajax request (GET or POST) -> latin1 gets converted to UTF-8 -> I try to convert UTF-8 back to latin1 -> server sends a latin1 HTML fragment to the client to be displayed inline -> special characters fail

The second way fails because the UTF-8 -> latin1 conversion doesn't work with utf8_decode/iconv, as described above.

My ultimate goal is simply to present a preview of the data the user has entered. I need the server round trip for the HTML rendering and the other data evaluation that has to be done.

The solution

Alan's answer is the solution: latin1 is silently treated as windows-1252 behind the scenes, and this is also what Word (at least my 2007 here) seems to use when copying & pasting between it and the browser.

A further interesting link (from the Wikipedia article in Alan's answer) is to the HTML5 syntax:

8.2.2.2: User agents must at a minimum support the UTF-8 and Windows-1252 encodings, but may support more.

...

When a user agent would otherwise use an encoding given in the first column of the following table to either convert content to Unicode characters or convert Unicode characters to bytes, it must instead use the encoding given in the cell in the second column of the same row. When a byte or sequence of bytes is treated differently due to this encoding aliasing, it is said to have been misinterpreted for compatibility.

...

Input encoding: ISO-8859-1 -> Replacement encoding: windows-1252
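For the ajax preview this boils down to decoding the incoming UTF-8 to Windows-1252 instead of strict ISO-8859-1. A minimal sketch (the $_POST['text'] parameter name is just an example):

    <?php
    // The ajax request delivers the preview text as UTF-8; the rest of the
    // app works in latin1-that-is-really-Windows-1252, so convert to that.
    $utf8 = $_POST['text'];

    $cp1252 = iconv('UTF-8', 'Windows-1252//TRANSLIT', $utf8);
    // alternatively: mb_convert_encoding($utf8, 'Windows-1252', 'UTF-8');

    // The dash now arrives as char code 150 again, just like a normal POST.
    echo ord($cp1252);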

+1  A: 

Pages with guides on how UTF-8 works:

http://azabani.com/15

http://wikipedia.org/wiki/UTF-8

Put simply, there isn't an easy mapping between "extended" ASCII sets like ISO-8859-1 (which stop at code point 255) and Unicode (which has 1,114,112 code points, of which over 100,000 are in use). Please give me more detail about why a single-byte charset is needed; maybe I can help you get around this limitation. UTF-8 is the most efficient and flexible choice for encoding text, and should be used wherever possible.

Delan Azabani
Thanks for your blog entry, very informative about composed/decomposed characters. I've updated my question regarding your inquiry about the single-byte conversion.
mark
+3  A: 

ISO-8859-1 does not support the em-dash character. You're actually using one of Microsoft's extended code pages, probably windows-1252. It's effectively a superset of latin1, so browsers tend to use it when a page is served as ISO-8859-1 (which is why your characters display correctly). But if you're going to use extended characters like the em-dash, you should specify windows-1252 as the charset wherever you can. Or, even better, specify UTF-8 everywhere.
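A quick sketch of how the same byte reads under the two charsets (mb_convert_encoding is used here; iconv works just as well):

    <?php
    $byte = "\x96";  // char code 150, as delivered by a latin1-declared form POST

    // In strict ISO-8859-1 the 0x80-0x9F range holds C1 control characters,
    // so 0x96 maps to the invisible U+0096 rather than a dash.
    echo bin2hex(mb_convert_encoding($byte, 'UTF-8', 'ISO-8859-1')), "\n"; // c296

    // Interpreted as Windows-1252, the very same byte is the dash (U+2013).
    echo mb_convert_encoding($byte, 'UTF-8', 'Windows-1252'), "\n";        // –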

Alan Moore
That's it. The key to success is to know that latin1 silently gets treated as windows-1252. You rock, thanks.
mark