views:

86

answers:

3

I have a downloader program that downloads pages from the internet. The encoding of each page is different; some are in UTF-8 and some are in Unicode. For example, &#97; shows the 'a' character, and some pages are full of these sequences. I need to convert these encodings to normal text.

I used the UnicodeEncoding class in C#, but it did not help.

How can I decode these encodings into real characters? Is there a class or method that does this conversion?

Thanks.

+2  A: 

That is HTML-encoded; try HttpUtility.HtmlDecode (you'll need a reference to System.Web.dll).

Marc Gravell
A: 

You're getting confused between HTML/XML escaping and UTF-8/Unicode.

If the page is valid XML, life will be easier - you can just parse it as any other XML document, and then just get the relevant text nodes... all the XML escaping will be "unescaped" when you get the text.
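A minimal sketch of that approach with XmlDocument, assuming the page really is well-formed XML (the sample string here is hypothetical):

```csharp
using System;
using System.Xml;

class XmlTextExample
{
    static void Main()
    {
        // Entities such as &amp; in the source are automatically
        // unescaped when you read the text nodes back out.
        string xml = "<root>Tom &amp; Jerry</root>";

        var doc = new XmlDocument();
        doc.LoadXml(xml);

        Console.WriteLine(doc.DocumentElement.InnerText); // prints "Tom & Jerry"
    }
}
```

No manual decoding step is needed; the XML parser handles the escaping for you.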

If it's arbitrary - and possibly invalid - HTML then life is a bit harder. You may well want to normalize it into valid HTML first, then parse it and again ask for the text nodes.

If you can give us a more concrete example, it will be easier to advise you.

The HtmlDecode method suggested in other answers may very well be all you need - but you should definitely try to understand what's going on first. For example, you may well want to decode only certain fragments of the HTML - if you decode the whole document, you could end up with text which looks like it contains HTML tags, but which was actually just plain text in the original document.
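To illustrate that risk with a hypothetical snippet: decoding the whole document turns escaped markup that was merely *mentioned* in the text into something indistinguishable from a real tag.

```csharp
using System;
using System.Web; // reference System.Web.dll for HttpUtility

class DecodeWholeDocument
{
    static void Main()
    {
        // The original document contains text that only mentions a tag.
        string html = "<p>Use the &lt;b&gt; tag for bold text.</p>";

        // Decoding the entire document makes the mention look like markup.
        Console.WriteLine(HttpUtility.HtmlDecode(html));
        // prints: <p>Use the <b> tag for bold text.</p>
    }
}
```

Decoding only the text nodes after parsing avoids this problem.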

Jon Skeet
+3  A: 

Sequences in HTML pages that start with & and end with ; are HTML-encoded entities.

You can decode these by using:

string html = ...; //your html
string decoded = System.Web.HttpUtility.HtmlDecode( html );

Also see Characters in string changed after downloading HTML from the internet for code on how to make sure you download the page in the correct character set.
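A rough sketch of that idea: download the raw bytes, read the charset from the HTTP Content-Type header, and decode with the matching Encoding. The header parsing and the UTF-8 fallback here are assumptions for illustration; real pages may also declare their charset in a meta tag, which this does not handle.

```csharp
using System;
using System.Net;
using System.Text;

class DownloadWithCharset
{
    // Pick an Encoding from a Content-Type header value such as
    // "text/html; charset=iso-8859-1"; fall back to UTF-8 if absent.
    public static Encoding EncodingFromContentType(string contentType)
    {
        if (contentType == null) return Encoding.UTF8;

        const string marker = "charset=";
        int i = contentType.IndexOf(marker, StringComparison.OrdinalIgnoreCase);
        if (i < 0) return Encoding.UTF8;

        string charset = contentType.Substring(i + marker.Length).Trim().Trim('"');
        int semi = charset.IndexOf(';');
        if (semi >= 0) charset = charset.Substring(0, semi);

        try { return Encoding.GetEncoding(charset); }
        catch (ArgumentException) { return Encoding.UTF8; }
    }

    static void Main()
    {
        Encoding enc = EncodingFromContentType("text/html; charset=iso-8859-1");
        Console.WriteLine(enc.WebName); // prints "iso-8859-1"

        // Paired with WebClient it would look like:
        //   using (var client = new WebClient())
        //   {
        //       byte[] raw = client.DownloadData(url);
        //       string page = EncodingFromContentType(
        //           client.ResponseHeaders[HttpResponseHeader.ContentType]).GetString(raw);
        //   }
    }
}
```

Decoding the bytes with the wrong charset first, then calling HtmlDecode, will still leave you with mangled characters - get the character set right before worrying about the entities.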

Mikael Svenson