I'm not sure ASCIIEncoding.GetBytes is going to do it, because it only supports the range 0x0000 to 0x007F.
You say the string contains only bytes. But a .NET string is an array of chars, and one char is 2 bytes (because .NET stores strings as UTF-16). So there are two possible situations for storing the bytes 0x42 and 0x98:
- The string was an ANSI string that contained bytes and was converted to a Unicode string, so the bytes become 0x00 0x42 0x00 0x98. (The string is stored as the chars 0x0042 and 0x0098.)
- The string was just a byte array which you typecast or received as a string, and thus became the bytes 0x42 0x98. (The string is stored as the single char 0x9842.)
In the first situation the result would be 0x42 and 0x3F (ASCII for "B?"). The second situation would result in 0x3F (ASCII for "?"). This is logical, because the chars are outside of the valid ASCII range and the encoder does not know what to do with those values.
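A quick sketch to illustrate both cases (the string literals here are just stand-ins for the two storage situations above):

using System;
using System.Text;

// Situation 1: the chars 0x0042 and 0x0098
byte[] first = Encoding.ASCII.GetBytes("\u0042\u0098");
Console.WriteLine(BitConverter.ToString(first));  // 42-3F ("B?")

// Situation 2: the single char 0x9842
byte[] second = Encoding.ASCII.GetBytes("\u9842");
Console.WriteLine(BitConverter.ToString(second)); // 3F ("?")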
So I'm wondering why it's a string with bytes in it:
- Maybe it contains bytes encoded as a string (for instance Base64; see the sketch after this list)?
- Maybe you should start with a char array or a byte array instead?
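If it does turn out to be Base64 (just a guess on my part), decoding is a one-liner; encodedText here is a hypothetical variable holding your string:

using System;

byte[] decoded = Convert.FromBase64String(encodedText); // throws FormatException if it isn't valid Base64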
If you really do have situation 2 and you want to get the bytes out of it, you should use the UnicodeEncoding.GetBytes call, because that will return 0x42 and 0x98.
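For example, assuming the string holds the single char 0x9842:

using System;
using System.Text;

byte[] bytes = Encoding.Unicode.GetBytes("\u9842");
Console.WriteLine(BitConverter.ToString(bytes)); // 42-98 (UTF-16 is little-endian)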
If you'd like to go from a char array to a byte array, the fastest way would be marshaling. But that's not really nice, and it uses double the memory.
// Requires: using System; using System.Runtime.InteropServices;
public Byte[] ConvertToBytes(Char[] source)
{
    // Each char is 2 bytes, so the result is twice as long as the source.
    Byte[] result = new Byte[source.Length * sizeof(Char)];

    // Round-trip through unmanaged memory: copy the chars out, then copy
    // the same memory back in as bytes. This is why it uses double memory.
    IntPtr tempBuffer = Marshal.AllocHGlobal(result.Length);
    try
    {
        Marshal.Copy(source, 0, tempBuffer, source.Length);
        Marshal.Copy(tempBuffer, result, 0, result.Length);
    }
    finally
    {
        Marshal.FreeHGlobal(tempBuffer);
    }
    return result;
}
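Usage, again with the 0x9842 example:

Char[] chars = { '\u9842' };
Byte[] bytes = ConvertToBytes(chars);
Console.WriteLine(BitConverter.ToString(bytes)); // 42-98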