views: 200
answers: 4

Can you write preprocessor directives to return a std::string or char*?

For example, in the case of integers:

#define square(x) (x*x)

int main()
{
   int x = square(5);
}

I'm looking to do the same but with strings, like a switch-case pattern: if I pass 1 it should return "One", 2 should return "Two", and so on.

+1  A: 

A #define preprocessor directive does substitute a string of characters into the source code, but the case...when construct you want is still not trivial:

#define x(i) ((i)==1?"One":((i)==2?"Two":"Many"))

might be a start -- but defining something like

static char* xsof[] = {"One", "Two", "Many"};

and

#define x(i) xsof[max(0, min((i)-1, (sizeof xsof / sizeof xsof[0] - 1)))]

seems more reasonable and better-performing.

Edit: per Chris Lutz's suggestion, made the second macro automatically adjust to the xsof definition; per Mark's, made the count 1-based.
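
For reference, a minimal compilable sketch of the table-based version; the MIN/MAX helper macros and the int cast are my additions, to keep the index arithmetic signed:

#include <iostream>

#define MAX(a, b) ((a) > (b) ? (a) : (b))
#define MIN(a, b) ((a) < (b) ? (a) : (b))

static const char* xsof[] = {"One", "Two", "Many"};
#define x(i) xsof[MAX(0, MIN((i) - 1, (int)(sizeof xsof / sizeof xsof[0]) - 1))]

int main() {
    // Indices below 1 clamp to "One", indices past the table clamp to "Many".
    std::cout << x(1) << ' ' << x(2) << ' ' << x(7) << '\n';  // prints: One Two Many
    return 0;
}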

Alex Martelli
Change the `2` in the macro to `(sizeof xsof / sizeof xsof[0] - 1)` and you've got a great, maintainable solution.
Chris Lutz
Shouldn't the `(i)` in the given macro really be `(i) - 1`?
Michael Burr
@Michael, ah yes, he'd said he wanted to count from one -- editing to fix, tx. @Chris, good idea, editing to incorporate it.
Alex Martelli
@Michael - In that case, better to do `max(1, min((i), sizeof xsof / sizeof xsof[0])) - 1` (or is that just me?) or incorporate `"Zero"` into `xsof`
Chris Lutz
A: 

You cannot turn integers into strings so that 1 ---> "One", 2 ---> "Two", etc., except by enumerating each value.

You can convert an argument value into a string with the C preprocessor:

#define STRINGIZER(x)   #x
#define EVALUATOR(x)    STRINGIZER(x)
#define NAME(x)         EVALUATOR(x)

NAME(123)    // "123"

#define N   123
#define M   234

NAME(N+M)    // "123+234"

See also SO 1489932.
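
A quick sketch you can compile to see these expansions; the macro names come from above, the main() is mine:

#include <iostream>

#define STRINGIZER(x)   #x
#define EVALUATOR(x)    STRINGIZER(x)
#define NAME(x)         EVALUATOR(x)

#define N   123
#define M   234

int main() {
    std::cout << NAME(123) << '\n';  // prints 123
    std::cout << NAME(N+M) << '\n';  // prints 123+234 (N and M expand, but no arithmetic is done)
    return 0;
}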

Jonathan Leffler
+2  A: 

You don't want to do this with macros in C++; a function is fine:

#include <iostream>

char const* num_name(int n, char const* default_=0) {
  // you could change the default_ to something else if desired

  static char const* names[] = {"Zero", "One", "Two", "..."};
  if (0 <= n && n < (sizeof names / sizeof *names)) {
    return names[n];
  }
  return default_;
}

int main() {
  std::cout << num_name(42, "Many") << '\n';
  char const* name = num_name(35);
  if (!name) { // using the null pointer default_ value as I have above
    // name not defined, handle however you like
  }
  return 0;
}

Similarly, that square should be a function:

inline int square(int n) {
  return n * n;
}

(Though in practice square isn't very useful, you'd just multiply directly.)


As a curiosity, though I wouldn't recommend it in this case (the above function is fine), a template meta-programming equivalent would be:

template<unsigned N> // could also be int if desired
struct NumName {
  static char const* name(char const* default_=0) { return default_; }
};
#define G(NUM,NAME) \
template<> struct NumName<NUM> { \
  static char const* name(char const* default_=0) { return NAME; } \
};
G(0,"Zero")
G(1,"One")
G(2,"Two")
G(3,"Three")
// ...
#undef G

Note that the primary limitation of the TMP example is that you have to use compile-time constants instead of arbitrary ints.
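
A usage sketch (mine, assuming the specializations generated above):

#include <iostream>

int main() {
    std::cout << NumName<2>::name() << '\n';         // prints Two
    std::cout << NumName<42>::name("Many") << '\n';  // no specialization, falls back to "Many"
    return 0;
}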

Roger Pate
This makes me wonder if this problem is achievable with template metaprogramming in some deviously clever way...
Chris Lutz
@Chris: Sure, you'd have to use `NumName<compile_time_constant>::name()` (call an inline static function) and specialize for all the values you care about. I don't see the need, however.
Roger Pate
@Roger - I was wondering more if that would allow us to do neat tricks to simplify larger numbers into combinations of smaller numbers, but the template system doesn't allow for ranges (like, say, `template<> struct NumName<21-29>`) so it ends up not being very useful.
Chris Lutz
@Chris: The template system certainly allows for that, though not directly. You'd typically do that by inheriting `NumName<I>` from `NumNameImpl<I, HorribleExpression<I>::value>`. You can now specialize `NumName<int>`, `HorribleExpression<int>` and/or `NumNameImpl<int, int>`
MSalters
+1  A: 

I have seen this...

#define STRING_1() "ONE"
#define STRING_2() "TWO"
#define STRING_3() "THREE"
...

#define STRING_A_NUMBER_I(n) STRING_##n()

#define STRING_A_NUMBER(n) STRING_A_NUMBER_I(n)  

I believe this extra step is to make sure n is evaluated, so if you pass 1+2 it gets transformed to 3 before being passed to STRING_A_NUMBER_I. This seems a bit dodgy -- can anyone elaborate?
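
A compilable sketch (my own, using the macros above) showing the indirection in action; note the argument has to end up as a plain number token, since ## pastes tokens rather than doing arithmetic:

#include <iostream>

#define STRING_1() "ONE"
#define STRING_2() "TWO"
#define STRING_3() "THREE"

#define STRING_A_NUMBER_I(n) STRING_##n()
#define STRING_A_NUMBER(n)   STRING_A_NUMBER_I(n)

#define CHOSEN 2

int main() {
    // CHOSEN expands to 2 before the paste, so this becomes STRING_2() -> "TWO".
    // Calling STRING_A_NUMBER_I(CHOSEN) directly would paste STRING_CHOSEN() and fail.
    std::cout << STRING_A_NUMBER(CHOSEN) << '\n';
    return 0;
}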

matt
Boost uses this method for its preprocessor code-generation stuff. It was fun to decipher how all of that worked. Also, @Chris Lutz, it will work if `STRING_##n` has the open/close brackets on it.
Grant Peters
can you define `STRING_X` to a unique number here?
Dave18
@Dave: Sorry, I don't know what you mean exactly. Also, you would have to make sure you are passing in an actual number; for example, you can't pass 1+2, you have to pass 3.
matt
Well, I think that's the reason why *Roger Pate* defined it with two parameters, to give each a value instead of relying on the default.
Dave18