Hex digits are very easy to convert to binary:
#include <cstdlib>   // for abort()
#include <string>
using std::string;

// C++98 guarantees that '0', '1', ... '9' are consecutive.
// It only guarantees that 'a' ... 'f' and 'A' ... 'F' are
// in increasing order, but the only two alternative encodings
// of the basic source character set that are still used by
// anyone today (ASCII and EBCDIC) make them consecutive.
unsigned char hexval(unsigned char c)
{
    if ('0' <= c && c <= '9')
        return c - '0';
    else if ('a' <= c && c <= 'f')
        return c - 'a' + 10;
    else if ('A' <= c && c <= 'F')
        return c - 'A' + 10;
    else abort();
}
So converting the whole string looks something like this:
void hex2ascii(const string& in, string& out)
{
    out.clear();
    out.reserve(in.length() / 2);
    for (string::const_iterator p = in.begin(); p != in.end(); p++)
    {
        unsigned char c = hexval(*p);
        p++;
        if (p == in.end()) break; // incomplete last digit - should report error
        c = (c << 4) + hexval(*p);
        out.push_back(c);
    }
}
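To illustrate, a minimal caller might look like this (the main function and the sample input are mine, purely for demonstration; it assumes the two functions and includes above):

#include <cstdio>

int main()
{
    string raw;
    hex2ascii("48656c6c6f", raw);   // hex encoding of "Hello"
    printf("%s\n", raw.c_str());    // prints: Hello
    return 0;
}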
You might reasonably ask why one would do it this way when there's strtol, and using it is significantly less code (as in James Curran's answer). Well, that approach is a full decimal order of magnitude slower, because it copies each two-byte chunk (possibly allocating heap memory to do so) and then invokes a general text-to-number conversion routine that cannot be written as efficiently as the specialized code above. Christian's approach (using istringstream) is five times slower than that. Here's a benchmark plot: you can tell the difference even with a tiny block of data to decode, and it becomes blatant as the input gets larger. (Note that both axes are on a log scale.)
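For comparison, the strtol-based version looks roughly like this (a sketch of that approach, not code copied from James Curran's answer; the name hex2ascii_strtol is mine, and it assumes <cstdlib> is included):

void hex2ascii_strtol(const string& in, string& out)
{
    out.clear();
    out.reserve(in.length() / 2);
    for (string::size_type i = 0; i + 1 < in.length(); i += 2)
    {
        // Copy each two-character chunk into its own string, then hand it
        // to a general-purpose text-to-number routine.
        string chunk = in.substr(i, 2);
        out.push_back((unsigned char) strtol(chunk.c_str(), 0, 16));
    }
}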
Is this premature optimization? Hell no. This is the kind of operation that gets shoved in a library routine, forgotten about, and then called thousands of times a second. It needs to scream. I worked on a project a few years back that made very heavy use of SHA1 checksums internally -- we got 10-20% speedups on common operations by storing them as raw bytes instead of hex, converting only when we had to show them to the user -- and that was with conversion functions that had already been tuned to death. One might honestly prefer brevity to performance here, depending on what the larger task is, but if so, why on earth are you coding in C++?
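For the curious, the reverse conversion for display is just as mechanical. This is a sketch of my own (the name ascii2hex is made up for symmetry), not the tuned code from that project:

void ascii2hex(const string& in, string& out)
{
    static const char digits[] = "0123456789abcdef";
    out.clear();
    out.reserve(in.length() * 2);
    for (string::const_iterator p = in.begin(); p != in.end(); ++p)
    {
        unsigned char c = *p;
        out.push_back(digits[c >> 4]);    // high nibble first
        out.push_back(digits[c & 0x0f]);  // then low nibble
    }
}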
Also, from a pedagogical perspective, I think it's useful to show hand-coded examples for this kind of problem; it reveals more about what the computer has to do.