Good evening my dear programming gods!

I've been trying this for quite a while now, but can't figure it out :-(
I'm trying to export data to Excel via a *.csv file.
It works great so far, but I have some encoding problems when opening the files in Excel
(original string on the left, Excel result on the right):

Messwert(µm / m) ==> Messwert(Âµm / m)

Dümme Mässöng ==> DÃ¼mme MÃ¤ssÃ¶ng

Notepad++ tells me that the file is encoded "ANSI as UTF8" (WTF?)

So here are the different ways I tried to get a valid result. The obvious implementation:

tWriter.Write(";Messwert(µm /m)");

A more sophisticated one (I tried probably a dozen or more encoding combinations):

tWriter.Write(Encoding.Default.GetString(Encoding.Unicode.GetBytes(";Messwert(µm /m)")));
tWriter.Write(Encoding.ASCII.GetString(Encoding.Unicode.GetBytes(";Messwert(µm /m)")));

and so on

Is anyone out there having the same problem? (I guess so, because Google gave me some threads where similar questions have been asked, but not answered.)

Anybody willing to come to my rescue?

Thanks a lot in advance!

Whole source code for the method creating the data:

    MemoryStream tStream = new MemoryStream();
    StreamWriter tWriter = new StreamWriter(tStream);
    tWriter.Write("\uFEFF");

    tWriter.WriteLine(string.Format("{0}", aMeasurement.Name));
    tWriter.WriteLine(aMeasurement.Comment);
    tWriter.WriteLine();
    tWriter.WriteLine("Zeit in Minuten;Messwert(µm / m)");

    TimeSpan tSpan;
    foreach (IMeasuringPoint tPoint in aMeasurement)
    {
        tSpan = new TimeSpan(tPoint.Time - aMeasurement[0].Time);
        tWriter.WriteLine(string.Format("{0};{1};", (int)tSpan.TotalMinutes, getMPString(tPoint)));
    }

    tWriter.Flush();
    return tStream;

Generated CSV file:

Dümme Mössäng
Testmessung die erste

Zeit in Minuten;Messwert(µm / m)
0;-703;
0;-381;
1;1039;
1;1045;
2;1457;
2;1045;
+1  A: 

This solution is written up as a fix for a Java application; however, you should be able to do something similar in C#. You may also want to look at the documentation for the StreamWriter class; the remarks section refers to the byte order mark (BOM).

confusedGeek
Great hint, I learned yet another detail about encoding. Unfortunately, though, it didn't do the trick for me. I've posted the source and the resulting CSV above.
yas4891
In combination with the other answer, this riddle has been solved for me. Thanks a lot again to both of you!
yas4891
To add one final line to this: Stack Overflow is going to make me (even more) lazy if you guys keep up that pace!
yas4891
A: 

"ANSI as UTF8"(WTF?)

Notepad++ is probably correct. The encoding is UTF-8 (i.e., a correct Unicode header), but the file only contains ANSI data (i.e., é is not encoded in the correct UTF-8 way, which would mean two bytes).

Or it is the other way around: the file is ANSI (no BOM file header), but the encoding of the individual characters is, or looks like, UTF-8. This would explain the ü and other characters expanding into more than one character. You can fix this by forcing the file to be read as Unicode.

If it's possible to post (part of) your CSV, we may be able to help fixing it at the source.

Edit

Now that we've seen your code: can you remove the StreamWriter and replace it with a TextWriter? Also, remove the hand-encoded BOM; it is not necessary. When you create a TextWriter, you can specify the encoding (don't use ASCII; try UTF-8).
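A minimal sketch of that advice, applied to the MemoryStream from the question (assuming System.IO and System.Text are imported; passing UTF8Encoding(true) makes the writer emit the BOM itself, so the manual "\uFEFF" write can be dropped):

    // Sketch only: StreamWriter is a TextWriter, so the encoding can be passed
    // to its constructor. UTF8Encoding(true) writes the UTF-8 BOM (EF BB BF)
    // automatically on the first write.
    MemoryStream tStream = new MemoryStream();
    StreamWriter tWriter = new StreamWriter(tStream, new UTF8Encoding(true));

    tWriter.WriteLine("Zeit in Minuten;Messwert(µm / m)");
    tWriter.Flush();
    return tStream;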

Abel
Hi Abel (yet again :-)). I've done as you asked. This time it is less of a problem, because it is my own work and not copyrighted by the company I'm working for at the moment.
yas4891
@yas4891: I actually hoped you could upload it somewhere, preferably as a binary (i.e. zipped/rarred), because by copying it here, the important parts, the byte order and the BOM, get lost. But I see now that you included the whole source; sorry, forget what I said ;-)
Abel
No problem mate. You helped me a lot already, so you're always welcome
yas4891
+1  A: 

I'd suggest you open up the text file in a hex editor and see what it really is. The BOM for UTF-16 is 0xFEFF, which the writing code is apparently writing to the stream, but the rest of the writing doesn't specify an encoding to use; it would use the default encoding of the StreamWriter, which is UTF-8. There appears to be a mix-up of encodings.

When you pop open the file in hex view, if you see lots of 0x00 between the characters, you're working with UTF-16, which is Encoding.Unicode in C#. If there are no 0x00 bytes between the characters, the encoding is probably UTF-8.

In the latter case, just fix up the BOM to be EF BB BF rather than FE FF and read normally with UTF-8 encoding.
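If a hex editor isn't handy, the leading bytes of the stream can also be dumped from code; a rough sketch (assuming the tStream returned by the method in the question):

    // Rough sketch: print the first few bytes of the stream to identify the BOM.
    // EF BB BF => UTF-8 BOM; FF FE => UTF-16 LE BOM (U+FEFF stored little-endian).
    byte[] tBytes = tStream.ToArray();
    for (int i = 0; i < Math.Min(4, tBytes.Length); i++)
    {
        Console.Write("{0:X2} ", tBytes[i]);
    }
    Console.WriteLine();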

Mark H
Bingo! That's it. A simple StreamWriter tWriter = new StreamWriter(tStream, Encoding.Unicode); did the trick. Thanks a lot!
yas4891
A: 

Try the following:

    using (var stream = File.Create(Path.Combine(txtPath.Text, "UTF8.csv")))
    {
        var preamble = Encoding.UTF8.GetPreamble();
        stream.Write(preamble, 0, preamble.Length);
        var data = Encoding.UTF8.GetBytes("懘荧,\"Hello\",text");
        stream.Write(data, 0, data.Length);
    }

This writes the proper UTF-8 preamble (BOM) to the file before writing the UTF-8 encoded CSV data.
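For comparison, a minimal alternative sketch (reusing the hypothetical txtPath control from the code above): a StreamWriter constructed with new UTF8Encoding(true) produces the same EF BB BF preamble automatically, so no explicit GetPreamble() call is needed.

    // Sketch only: UTF8Encoding(true) emits the UTF-8 preamble on the first write.
    using (var writer = new StreamWriter(
        Path.Combine(txtPath.Text, "UTF8.csv"), false, new UTF8Encoding(true)))
    {
        writer.WriteLine("懘荧,\"Hello\",text");
    }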

Trevor Germain