I'm having a problem comparing strings in a unit test in C# 4.0 using Visual Studio 2010. This same test case works properly in Visual Studio 2008 (with .NET 3.5).

Here's the relevant code snippet:

byte[] rawData = GetData();
string data = Encoding.UTF8.GetString(rawData);

Assert.AreEqual("Constant", data, false, CultureInfo.InvariantCulture);

While debugging this test, the data string appears to the naked eye to contain exactly the same text as the literal. But when I called data.ToCharArray(), I noticed that the first character of the string is the value 65279 (U+FEFF), the Unicode byte order mark as decoded from the UTF-8 BOM bytes. What I don't understand is why Encoding.UTF8.GetString() keeps this character around.

How do I get Encoding.UTF8.GetString() to leave the byte order mark out of the resulting string?

Update: The problem was in GetData(), which reads a file from disk: it was reading the raw bytes with FileStream.Read(). I corrected this by reading the text with a StreamReader and, where bytes are needed, converting with Encoding.UTF8.GetBytes(), which is what it should've been doing in the first place! Thanks for all the help.
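Since the fix was to read the file as text rather than as raw bytes, here is a minimal sketch of the difference (the temp file and its contents are illustrative, not the question's actual GetData()):

```csharp
using System;
using System.IO;
using System.Text;

class BomFreeRead
{
    static void Main()
    {
        // Simulate the on-disk file: a UTF-8 BOM followed by "A".
        string path = Path.GetTempFileName();
        File.WriteAllBytes(path, new byte[] { 0xEF, 0xBB, 0xBF, 0x41 });

        // Decoding the raw bytes keeps the BOM as U+FEFF in the string.
        string viaBytes = Encoding.UTF8.GetString(File.ReadAllBytes(path));

        // A StreamReader detects and consumes the BOM during decoding.
        string viaReader;
        using (var reader = new StreamReader(path, Encoding.UTF8))
        {
            viaReader = reader.ReadToEnd();
        }

        Console.WriteLine(viaBytes.Length);  // 2
        Console.WriteLine(viaReader.Length); // 1
        File.Delete(path);
    }
}
```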

+3  A: 

Well, I assume it's because the raw binary data includes the BOM. You could always remove the BOM yourself after decoding, if you don't want it - but you should consider whether the byte array should consider the BOM to start with.

EDIT: Alternatively, you could use a StreamReader to perform the decoding. Here's an example showing the same byte array being converted into two characters via Encoding.UTF8.GetString, or one character via a StreamReader:

using System;
using System.IO;
using System.Text;

class Test
{
    static void Main()
    {
        byte[] withBom = { 0xef, 0xbb, 0xbf, 0x41 };
        string viaEncoding = Encoding.UTF8.GetString(withBom);
        Console.WriteLine(viaEncoding.Length);

        string viaStreamReader;
        using (StreamReader reader = new StreamReader
               (new MemoryStream(withBom), Encoding.UTF8))
        {
            viaStreamReader = reader.ReadToEnd();           
        }
        Console.WriteLine(viaStreamReader.Length);
    }
}
Jon Skeet
You're right that the raw data includes the BOM. It shouldn't, so I'm fixing that part. A philosophical follow-up question: Why does the `String.Equals` method take the BOM into account? Why isn't it simply ignored when doing a string comparison or treated as metadata and not as the "meat" of the string?
Skrud
@Skrud: You've got distinct character sequences. The raw String.Equals method compares ordinal sequences, with no further consideration. It's possible that some of the other string comparisons available (culture aware etc) may ignore BOMs - I'm not sure. Given that it's a strange character in some ways, I'm not really convinced it's appropriate to just ignore it arbitrarily. Put it this way: the equality failure showed that you had some bad data, so the behaviour has led to you improving your code. That's a good thing, no?
Jon Skeet
Absolutely. Which is the point of testing in the first place. :-)
Skrud
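The ordinal-versus-culture-aware distinction discussed in the comments above can be sketched as follows (the culture-aware result is deliberately left unasserted, since whether U+FEFF is treated as an ignorable character varies across framework versions and collation tables):

```csharp
using System;
using System.Globalization;

class CompareBom
{
    static void Main()
    {
        string withBom = "\uFEFFConstant";
        string plain = "Constant";

        // Ordinal comparison looks at raw char values, so the BOM always matters.
        Console.WriteLine(string.Equals(withBom, plain, StringComparison.Ordinal)); // False

        // Culture-aware comparison consults collation tables; whether U+FEFF is
        // ignorable here depends on the framework version.
        Console.WriteLine(string.Compare(withBom, plain,
                                         CultureInfo.InvariantCulture,
                                         CompareOptions.None) == 0);
    }
}
```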
A: 

I believe the extra character is removed if you Trim() the decoded string, though on .NET 4 Char.IsWhiteSpace('\uFEFF') returns false, so a bare Trim() may leave the BOM in place; Trim('\uFEFF') removes it explicitly.
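A minimal sketch of trimming the BOM explicitly, which works regardless of whether the framework's Trim() classifies U+FEFF as whitespace:

```csharp
using System;

class TrimBom
{
    static void Main()
    {
        string data = "\uFEFFConstant";

        // Passing the BOM character explicitly removes it no matter how
        // the framework's whitespace tables classify U+FEFF.
        string trimmed = data.Trim('\uFEFF');

        Console.WriteLine(trimmed == "Constant"); // True
    }
}
```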

JoeGeeky
A: 

There is a slightly more efficient way to do it than creating a StreamReader and a MemoryStream:

1) If you know that there is always a BOM

string viaEncoding = Encoding.UTF8.GetString(withBom, 3, withBom.Length - 3);

2) If you don't know, check:

string viaEncoding;
if (withBom.Length >= 3 && withBom[0] == 0xEF && withBom[1] == 0xBB && withBom[2] == 0xBF)
    viaEncoding = Encoding.UTF8.GetString(withBom, 3, withBom.Length - 3);
else
    viaEncoding = Encoding.UTF8.GetString(withBom);
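The hard-coded 0xEF 0xBB 0xBF bytes can also be obtained from Encoding.UTF8.GetPreamble(), which keeps the check in sync with the encoding. A sketch along the lines of option 2 (the DecodeWithoutBom name is purely illustrative):

```csharp
using System;
using System.Text;

class PreambleCheck
{
    static string DecodeWithoutBom(byte[] bytes)
    {
        // GetPreamble() returns { 0xEF, 0xBB, 0xBF } for UTF-8, so the
        // comparison needs no hard-coded BOM bytes.
        byte[] preamble = Encoding.UTF8.GetPreamble();
        int start = 0;
        if (bytes.Length >= preamble.Length)
        {
            start = preamble.Length;
            for (int i = 0; i < preamble.Length; i++)
                if (bytes[i] != preamble[i]) { start = 0; break; }
        }
        return Encoding.UTF8.GetString(bytes, start, bytes.Length - start);
    }

    static void Main()
    {
        Console.WriteLine(DecodeWithoutBom(new byte[] { 0xEF, 0xBB, 0xBF, 0x41 })); // A
        Console.WriteLine(DecodeWithoutBom(new byte[] { 0x41 }));                   // A
    }
}
```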
Tergiver
A: 

But why has this changed from .NET 2.0/3.0/3.5 to 4.0? In earlier .NET Framework versions, String.StartsWith() would ignore the BOM character when doing the comparison... which I would agree is wrong. But I have not found this change documented anywhere.

John Hughes