Well, that depends what you mean by ‘Unicode’. As the answers so far say, pretty much any character “is Unicode”.
Windows abuses the term ‘Unicode’ to mean the UTF-16LE encoding that the Win32 API uses internally. You can detect UTF-16 by looking for the Byte Order Mark at the front: bytes `FF FE` for UTF-16LE (or `FE FF` for UTF-16BE). It's possible to have UTF-16 text that is not marked with a BOM, but that's quite bad news, as you can then only detect the encoding by pure guesswork.
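Those two byte sequences aren't arbitrary: the BOM is just the character U+FEFF serialized in the stream's own byte order. A quick check (here in Python) shows where they come from:

```python
# U+FEFF (ZERO WIDTH NO-BREAK SPACE) serialized in each byte order
# yields exactly the BOM byte sequences described above.
bom_le = "\ufeff".encode("utf-16-le")  # b'\xff\xfe'
bom_be = "\ufeff".encode("utf-16-be")  # b'\xfe\xff'
```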
Pure guesswork is what the `IsTextUnicode` function is all about. It looks at the input bytes and, by seeing how often common patterns turn up in them, guesses how likely it is that the bytes represent UTF-16LE- or UTF-16BE-encoded characters. Since every sequence of bytes is potentially a valid encoding of characters(*), you might imagine this isn't very predictable or reliable. And you'd be right.
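To give a flavour of that kind of guesswork (this is a crude illustration in Python, not `IsTextUnicode`'s actual statistics): ASCII-heavy UTF-16 text has a NUL byte in every other position, and *which* positions those are hints at the byte order. The 0.7 threshold here is an arbitrary choice for the sketch.

```python
def guess_utf16_endianness(data):
    """Crude statistical guess: ASCII-heavy UTF-16LE text has NUL
    bytes at odd offsets, UTF-16BE at even offsets. Returns a codec
    name, or None if neither pattern dominates."""
    if len(data) < 2:
        return None
    odd_nuls = sum(1 for i in range(1, len(data), 2) if data[i] == 0)
    even_nuls = sum(1 for i in range(0, len(data), 2) if data[i] == 0)
    half = len(data) // 2
    if odd_nuls > half * 0.7:
        return "utf-16-le"
    if even_nuls > half * 0.7:
        return "utf-16-be"
    return None  # no confident guess
```

As the answer says, this is inherently unreliable: plenty of legitimate byte sequences will fool a frequency test in either direction.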
See Windows i18n guru Michael Kaplan's description of `IsTextUnicode` and why it's probably not a good idea.
In general you would want a more predictable way of guessing what encoding a set of bytes represents. You could try:
- if it begins `FF FE`, it's UTF-16LE, what Windows thinks of as ‘Unicode’;
- if it begins `FE FF`, it's UTF-16BE, what Windows equally-misleadingly calls ‘reverse’ Unicode;
- otherwise check the whole string for invalid UTF-8 sequences. If there are none, it's probably UTF-8 (or just ASCII);
- otherwise try the system default codepage.
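The steps above can be sketched as follows, in Python; `cp1252` stands in here for whatever the system default codepage happens to be:

```python
def guess_encoding(data, default="cp1252"):
    """BOM check, then UTF-8 validation, then fall back to a default
    codepage ('cp1252' is a stand-in for the system default)."""
    if data.startswith(b"\xff\xfe"):
        return "utf-16-le"   # what Windows calls 'Unicode'
    if data.startswith(b"\xfe\xff"):
        return "utf-16-be"   # what Windows calls 'reverse' Unicode
    try:
        data.decode("utf-8")
        return "utf-8"       # also covers plain ASCII
    except UnicodeDecodeError:
        return default       # legacy codepage text, most likely
```

Note the ordering matters: the BOM checks must come first, since the BOM bytes themselves are not valid UTF-8.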
(*: actually not quite true. Apart from noncharacters like U+FFFF, there are also many sequences of UTF-16 code units that aren't valid characters, thanks to the ‘surrogates’ approach to encoding characters outside the 16-bit range. However, `IsTextUnicode` doesn't know about those anyway, as it predates the astral planes.)
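The lone-surrogate case is easy to see with any strict UTF-16 decoder; Python's, for instance, rejects a high surrogate with nothing following it:

```python
# U+D800 is a high surrogate; on its own it is not a valid character,
# so a strict UTF-16LE decoder refuses these two bytes.
lone_surrogate = b"\x00\xd8"
try:
    lone_surrogate.decode("utf-16-le")
    valid = True
except UnicodeDecodeError:
    valid = False   # a validating decoder rejects it
```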