I'm working on an application in C#, and need to read and write a particular data file format. The only issue at the moment is that the format uses strictly single-byte characters, and C# keeps trying to throw in Unicode when I use a writer and a char array (which doubles the file size, among other serious issues). I've been working on modifying the code to use byte arrays instead, but that causes a few complaints when feeding them into TreeView and DataGrid controls, and it involves conversions and whatnot.
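To make the size doubling concrete, here's a minimal sketch (this assumes the writer was constructed with Encoding.Unicode, i.e. UTF-16 — I'm not sure that's exactly what my original code did, but it reproduces the symptom):

```csharp
using System;
using System.IO;
using System.Text;

class SizeDemo
{
    static void Main()
    {
        char[] record = "HEADER01".ToCharArray();

        // Writing chars through a UTF-16 ("Unicode") writer: 2 bytes per char.
        using (var ms = new MemoryStream())
        {
            using (var w = new BinaryWriter(ms, Encoding.Unicode))
                w.Write(record);
            Console.WriteLine(ms.Length); // 16 bytes for 8 chars
        }

        // The same chars through an ASCII writer: 1 byte per char.
        using (var ms = new MemoryStream())
        {
            using (var w = new BinaryWriter(ms, Encoding.ASCII))
                w.Write(record);
            Console.WriteLine(ms.Length); // 8 bytes for 8 chars
        }
    }
}
```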
I've spent a little time Googling, and there doesn't seem to be a simple typedef I can use to force the char type to use byte for my program, at least not without causing extra complications.
Is there a simple way to force a C# .NET program to use ASCII only and not touch Unicode?
Edit: Alright. Thanks guys, got this almost working. Using ASCIIEncoding on the BinaryReader/BinaryWriter ended up fixing most of the problems (a few issues with an extra char being prepended to strings occurred, but I fixed those up). I'm having one last issue, which is very small but could be big: in the file, a particular char (prints as the Euro sign) gets converted to a ? when I load/save the files. That's not much of an issue in text, but if it occurred in a record length, it could change the size by kilobytes (not good, obviously). I think it's caused by the encoding, but if the char came from the file, why won't it go back?
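Here's a minimal repro of that last issue, assuming the BinaryReader/BinaryWriter are set up the way I described (the 0x41/0x42 padding bytes are just made up for illustration):

```csharp
using System;
using System.IO;
using System.Text;

class AsciiRoundTrip
{
    static void Main()
    {
        // A record containing the troublesome 0x80 byte.
        byte[] original = { 0x41, 0x80, 0x42 };

        // Read it back through an ASCII-decoding reader.
        char[] chars;
        using (var r = new BinaryReader(new MemoryStream(original), Encoding.ASCII))
            chars = r.ReadChars(original.Length);

        // 0x80 is outside ASCII, so the decoder substitutes '?'.
        Console.WriteLine(chars[1]); // ?

        // Writing those chars back emits 0x3F, not the original 0x80.
        using (var ms = new MemoryStream())
        {
            using (var w = new BinaryWriter(ms, Encoding.ASCII))
                w.Write(chars);
            Console.WriteLine(ms.ToArray()[1].ToString("X2")); // 3F
        }
    }
}
```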
Edit2: The precise problem/results are:

- Original byte in the file: 0x80 (prints as the Euro sign)
- After an ASCII round trip: 0x3F (?)
- After a UTF8 round trip: 0xC2 0x80 (Â followed by the Euro sign)

Neither of those results will work, since the byte can occur anywhere in the file: if an 0x80 changed to 0x3F inside a record-length int, it could be a difference of 65*(256^3). Not good. I tried using a UTF8 encoding, figuring that would fix the issue pretty well, but it's now adding that second byte, which is even worse.
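Both results can be reproduced directly from the Encoding classes (this assumes the file's 0x80 byte ends up in memory as the single char U+0080, which is my best guess at what's happening):

```csharp
using System;
using System.Text;

class EncodingResults
{
    static void Main()
    {
        // Assume the file's 0x80 byte is sitting in memory as the char U+0080.
        string s = "\u0080";

        // ASCII can't represent it, so the encoder substitutes '?' (0x3F).
        Console.WriteLine(BitConverter.ToString(Encoding.ASCII.GetBytes(s))); // 3F

        // UTF-8 encodes U+0080 as the two-byte sequence 0xC2 0x80.
        Console.WriteLine(BitConverter.ToString(Encoding.UTF8.GetBytes(s))); // C2-80
    }
}
```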