I'm currently downloading an HTML page, using the following code:

Try
 Dim req As System.Net.HttpWebRequest = DirectCast(WebRequest.Create(URL), HttpWebRequest)
 req.Method = "GET"
 Dim resp As Net.HttpWebResponse = DirectCast(req.GetResponse(), Net.HttpWebResponse)
 Dim stIn As IO.StreamReader = New IO.StreamReader(resp.GetResponseStream())
 Dim strResponse As String = stIn.ReadToEnd

 ' Clean up
 stIn.Close()
 stIn.Dispose()
 resp.Close()

 Return strResponse

Catch ex As Exception
 Return ""
End Try

This works well for most pages, but for some (e.g. www.gap.com) the response comes back incorrectly decoded.
On gap.com, for example, I get the right single quote "’" as "?".
And not to mention what happens if I try to load google.cn...

What am I missing here to get .Net to decode this right?

My worst fear is that I'll actually have to read the meta tag inside the HTML that specifies the encoding, and then re-read (re-decode?) the whole stream.

Any pointers will be greatly appreciated.


UPDATE:

Thanks to John Saunders' reply, I'm a bit closer. The HttpWebResponse.ContentEncoding property always seems to come back empty. However, HttpWebResponse.CharacterSet seems useful, and with this code I'm getting closer:

Dim resp As Net.HttpWebResponse = DirectCast(req.GetResponse(), Net.HttpWebResponse)
Dim respEncoding As Encoding = Encoding.GetEncoding(resp.CharacterSet)
Dim stIn As IO.StreamReader = New IO.StreamReader(resp.GetResponseStream(), respEncoding)

Now Google.cn comes in perfectly, with all the chinese characters.
However, Gap.Com is still coming in wrong.

For Gap.com, HttpWebResponse.CharacterSet is ISO-8859-1, the Encoding I'm getting through GetEncoding is {System.Text.Latin1Encoding}, which says "ISO-8859-1" in its BodyName, AND the Content-Type META tag in the HTML specifies "charset=ISO-8859-1".

Am I still doing something wrong?
Or is GAP doing something wrong?

A: 

I believe that the HttpWebResponse has a ContentEncoding property. Use it in the constructor of your StreamReader.

John Saunders
A: 

Daniel, some pages don't even return a value in CharacterSet, so this approach is not entirely reliable. Sometimes not even the browsers are able to "guess" which encoding to use, so I don't think you can do 100% encoding recognition.

In my particular case, as I deal with Spanish and Portuguese pages, I use the UTF7 encoding and it works fine for me (áéíóúñÑêã... etc.).

Maybe you can first load a table of CharacterSet codes and their corresponding Encoding, and in case the CharacterSet is empty, provide a default encoding.

The detectEncodingFromByteOrderMarks parameter in the StreamReader constructor may also help, as it automatically detects or infers some encodings from the very first bytes.
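
A sketch of the fallback approach described above (the helper name and the choice of default encoding are my own assumptions, not part of the answer):

```vbnet
Imports System.IO
Imports System.Net
Imports System.Text

' Hypothetical helper: download a page, resolving the response's charset
' and falling back to a caller-supplied default when the header is
' missing or unrecognized.
Function GetResponseText(ByVal url As String, ByVal defaultEncoding As Encoding) As String
    Dim req As HttpWebRequest = DirectCast(WebRequest.Create(url), HttpWebRequest)
    req.Method = "GET"
    Using resp As HttpWebResponse = DirectCast(req.GetResponse(), HttpWebResponse)
        Dim enc As Encoding = defaultEncoding
        If Not String.IsNullOrEmpty(resp.CharacterSet) Then
            Try
                enc = Encoding.GetEncoding(resp.CharacterSet)
            Catch ex As ArgumentException
                ' Unknown charset name: keep the default.
            End Try
        End If
        ' detectEncodingFromByteOrderMarks:=True lets the reader override
        ' enc when the stream starts with a UTF-8/UTF-16 BOM.
        Using stIn As New StreamReader(resp.GetResponseStream(), enc, True)
            Return stIn.ReadToEnd()
        End Using
    End Using
End Function
```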

Romias
+1  A: 

Hi Daniel - Gap's site is wrong. The specific problem is that their page claims an encoding of Latin-1 (ISO-8859-1), while the page uses character #146, which is not valid in ISO-8859-1.

That character is, however, valid in the Windows CP-1252 encoding (which is a superset of ISO-8859-1). In CP-1252, character code #146 is used for the right-quote character. You'll see it as an apostrophe in "You’ll find Petites and small sizes" in today's text on the Gap.com home page.

You can read http://en.wikipedia.org/wiki/Windows-1252 for more details. Turns out this kind of thing is a common problem on web pages where the content was originally saved in the CP-1252 encoding (e.g. copy/pasted from Word).

Moral of the story here: always store internationalized text as Unicode in your database, and always emit HTML as UTF8 on your web server!
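
Since CP-1252 is a superset of ISO-8859-1, one pragmatic workaround (my own suggestion, building on this answer; `resp` is the HttpWebResponse from the question's code) is to substitute Windows-1252 whenever a page declares Latin-1:

```vbnet
' When a server claims ISO-8859-1, decode as Windows-1252 instead.
' Windows-1252 is a superset, so genuine Latin-1 pages are unaffected,
' and CP-1252-only characters such as the right quote (#146) decode correctly.
Dim charset As String = resp.CharacterSet
If String.Equals(charset, "ISO-8859-1", StringComparison.OrdinalIgnoreCase) Then
    charset = "windows-1252"
End If
Dim respEncoding As Encoding = Encoding.GetEncoding(charset)
Dim stIn As New IO.StreamReader(resp.GetResponseStream(), respEncoding)
```

Modern browsers apply this same substitution, which is why Gap.com renders correctly in them despite the mislabeled charset.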

Justin Grant