Depends on the representation of the string.
Once upon a time, we had simple string representations (e.g., ASCII) in which every character code took up a single unit of space in the string (8 bits, with the topmost bit unused).
(There were earlier string representations using 6- and 9-bit units, but they had the same property of being fixed-size units.)
Handling non-English languages (Eastern Europe, Asia, ...) led people to propose various so-called "double-byte character set" (DBCS) encodings, in which the common characters (pretty much the same set as the ASCII characters) occupy a single unit, now almost universally 8 bits, while the other characters are encoded as two bytes: a lead byte drawn from the part of the 8-bit space that ASCII doesn't need, followed by a second byte. The result is a character encoding scheme with roughly 15-bit characters.
Tearing apart such strings is messy, because the routine that does so has to understand the exact DBCS encoding scheme and pick up one or two bytes at a time accordingly.
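To see why, here's a minimal sketch of the kind of decoding loop a DBCS forces on you. The lead/trail rule used here (high bit set means two-byte character) is a simplified invention for illustration; real encodings such as Shift-JIS have messier lead- and trail-byte ranges:

```java
public class DbcsSketch {
    // Count the characters in a hypothetical DBCS where any byte with the
    // high bit set is a lead byte followed by exactly one trail byte, and
    // every other byte is a single ASCII character.
    static int countChars(byte[] s) {
        int count = 0;
        for (int i = 0; i < s.length; ) {
            // High bit set -> two-byte character; otherwise one-byte ASCII.
            i += ((s[i] & 0x80) != 0) ? 2 : 1;
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        // 'A', one two-byte character, 'B' -> 3 characters in 4 bytes
        byte[] mixed = { 'A', (byte) 0x88, (byte) 0xB1, 'B' };
        System.out.println(countChars(mixed)); // prints 3
    }
}
```

Note that you can't even walk such a string backwards, or index into the middle of it, without re-scanning from a known character boundary.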
Along came Unicode to solve the problem by providing 16-bit characters. Most modern programming languages (Java, C#) use those 16-bit characters as the basis of their string representations. Life got a lot easier (if we ignore the fact that even 16-bit Unicode sometimes allows two sequential characters, such as a base letter and a combining accent, to compose into what amounts to another character already defined in the set).
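That composition wrinkle is easy to demonstrate in Java with the standard `java.text.Normalizer`: the precomposed character é and the two-character sequence e + combining acute accent look identical on screen but compare unequal until you normalize one of them:

```java
import java.text.Normalizer;

public class ComposeDemo {
    public static void main(String[] args) {
        String composed = "\u00E9";    // é as a single precomposed character
        String decomposed = "e\u0301"; // 'e' followed by COMBINING ACUTE ACCENT

        // Same visible glyph, different code sequences:
        System.out.println(composed.equals(decomposed)); // prints false

        // Normalizing to NFC folds the pair into the precomposed form:
        String nfc = Normalizer.normalize(decomposed, Normalizer.Form.NFC);
        System.out.println(nfc.equals(composed)); // prints true
    }
}
```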
The committee that maintains Unicode, however, couldn't resist, and extended Unicode beyond 16 bits. We're now stuck back with a scheme much like the dumb DBCS one Unicode was supposed to fix (actually worse: in UTF-8, a single character can take up to four bytes). So, to process strings in those modern languages, you again have to understand when a 16-bit unit represents a single character, and when it is the lead-in to a two-unit sequence (a surrogate pair).
If you're lucky, the string you have is composed only of single 16-bit Unicode characters. If not, you'll need to consult your Unicode manual and pray that you have a Unicode string-management library to help you do this right.
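In Java, for instance, a character outside the original 16-bit range shows up as two `char` units (a surrogate pair), and the standard `codePointAt`/`Character.charCount` methods are what let you step over strings correctly:

```java
public class CodePointDemo {
    public static void main(String[] args) {
        // 'A', U+1D11E MUSICAL SYMBOL G CLEF (a surrogate pair), 'B'
        String s = "A\uD834\uDD1EB";

        System.out.println(s.length());                      // prints 4 (UTF-16 units)
        System.out.println(s.codePointCount(0, s.length())); // prints 3 (actual characters)

        // Walking the string one char at a time would split the clef in half;
        // stepping by charCount(codePoint) lands on character boundaries.
        for (int i = 0; i < s.length(); ) {
            int cp = s.codePointAt(i);
            System.out.printf("U+%04X%n", cp);
            i += Character.charCount(cp);
        }
    }
}
```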
This last bit is such a colossal hassle that a lot of coders punt and treat Unicode strings as if every character were a single 16-bit unit. That works in Europe. It is not recommended in Asia.