Taken from http://www.ocf.berkeley.edu/~wwu/riddles/cs.shtml

It looks very compiler-specific to me. I don't know where to start looking.

+1  A: 

One word, __cplusplus.

KennyTM
A: 

You could try preprocessor directives, but that might not be what they are looking for.

zipcodeman
+1  A: 

Just look to see if the __STDC__ and __cplusplus compiler macros are defined.

Ignacio Vazquez-Abrams
+18  A: 

Simple enough.

#include <stdio.h>

int main(int argc, char **argv)
{
#ifdef __cplusplus
    printf("C++\n");
#else
    printf("C\n");
#endif
    return 0;
}

Or is there a requirement to do this without the official standard?

bmargulies
Just made the output agree with the question.
anon
That's correct. The C++ 1998 standard says: "The name `__cplusplus` is defined to the value `199711L` when compiling a C++ translation unit". (Section 16.8: `[cpp.predefined]`)
stakx
Of course, a really nasty C implementation could define __cplusplus.
anon
Nit-pick - that isn't a program; it is just the operational core of a program. :D
Jonathan Leffler
`__cplusplus` is defined to be 199711L *only when* the compiler is completely conforming. gcc, for example, still defines `__cplusplus` to be 1.
KennyTM
@stakx, KennyTM: Be careful. That number will change in C++0x.
Jason
@Neil, in the C99 standard, section 6.10.8.5 explicitly forbids the implementation from defining __cplusplus. But there is nothing of the sort in C89.
+20  A: 

We had to do a similar assignment at school. We were not allowed to use the preprocessor (except for #include, of course). The following code uses the fact that in C, type names and structure names form separate namespaces, whereas in C++ they don't.

#include <stdio.h>
typedef int X;
int main()
{
    struct X { int ch[2]; };
    if (sizeof(X) != sizeof(struct X))
        printf("C\n");
    else
        printf("C++\n");
}
avakar
+1 for not using the preprocessor
William
+10  A: 
puts(sizeof('a') == sizeof(int) ? "C" : "C++");
Matthew Slattery
+1 That's the difference I was trying to remember!
anon
This is actually not guaranteed to work. `sizeof(int)` can be 1 if `CHAR_BIT` is at least 16.
avakar
Cool, could you explain why?
Oak
Oak, in C, character literals are of type `int`, in C++ they're of type `char`.
avakar
A: 

I'm guessing the intent is to write something that depends on differences between the languages themselves, not just predefined macros. Though it's technically not absolutely guaranteed to work, something like this is probably closer to what's desired:

#include <stdio.h>

int main(void)
{
    char *names[] = { "C", "C++" };

    printf("%s\n", names[sizeof(char) == sizeof('C')]);
    return 0;
}
Jerry Coffin
Same rant as for Matthew Slattery, this may not work on all platforms.
avakar
@avakar: Theoretically sort of true -- that's why I said it's not absolutely guaranteed to work. At the same time, the basic design of the C I/O library depends on EOF being different from any value of unsigned char, which implies that sizeof(int) > sizeof(unsigned char). Even though there was an idea that sizeof(char) == sizeof(int) should be allowed, it's really impossible to make C work that way, so the code above really does work dependably.
Jerry Coffin
Don't confuse the result of `sizeof` with the type's allowed range of values.
jamesdlin
Jerry, I don't think you're correct. `EOF` is required to be negative, so it's true that it must be different from any value of `unsigned char`. However I don't see how it implies `sizeof(int)>sizeof(unsigned char)`. `int` is not required to hold all values of `unsigned char`, only those values that can be read through `getchar` (which need not span the entire range of `unsigned char`). On PDP-10, where `CHAR_BIT == 36`, `sizeof(char)` indeed equals `sizeof(int)`.
avakar
@Jamesdlin: For unsigned char, all bits are required to participate in the value representation (i.e. no padding bits, signaling bits, etc., are allowed). This means for unsigned char, the range *must* be defined directly by the number of bits. For int that's not required. In theory, that would mean it could use the same number of bits but have a smaller range, except that having a smaller range is directly prohibited.
Jerry Coffin
@avakar: Sorry, but no. The standard specifically requires that you can take any stream of unsigned chars, write them to a file, and get the same values back when you read them. Thus, you must be able to read any value of unsigned char with getchar(). As an aside, I never heard of an implementation of C for the PDP-10 that used 36-bit chars. Most used 7-bit chars, stored 5 per word. That's enough for ASCII, but not the standard -- but I doubt there was ever a conforming implementation of C for the PDP-10.
Jerry Coffin
@avakar is correct in that this is not guaranteed to work: it is possible to have `sizeof(int) == 1` and have all `int` values be valid character returns from `getchar()`, in which case `EOF` can also be a valid character, and testing whether `EOF` indicates end-of-file needs to be performed with `feof()` (and testing for error with `ferror()`). And Crays are an example of `sizeof(int) == 1`: they used to have all the integer types be 64 bits wide.
jk
@jk: Like I said, in theory it's sort of correct. In reality, an implementation with chars the same size as ints would break so much code that nobody would ever use it. As far as Crays go, the idea of all integer types having the same size is an urban legend -- a rumor that remains widespread despite repeated testing showing that if it was ever true, it was only with a compiler that was never released to the outside world (and no evidence that it ever existed internally either).
Jerry Coffin
+12  A: 

I know of 6 approaches:

  1. Abuse C++'s automatic typedefs and ambiguity with sizeof.
  2. Abuse C++ struct/class equivalence and default constructors.
  3. Abuse // comments. (This won't work with C99.)
  4. sizeof differences with char literals. Note that this isn't guaranteed to be portable, since some hypothetical platform could use bytes with more than 8 bits, in which case sizeof(char) could be the same as sizeof(int).
  5. Abuse differences in the way C and C++'s grammars parse the ternary operator. (This one isn't strictly legal, but some compilers are lax.)
  6. Abuse differences in when lvalue=>rvalue conversions happen.

(You also could check for the __cplusplus preprocessor macro (or various other macros), but I think that doesn't follow the spirit of the question.)

I have implementations for all of these at: http://www.taenarum.com/csua/fun-with-c/c-or-cpp.c

Edit: Added method 6, and added a note that method 5 isn't strictly legal.

jamesdlin
I'm pretty sure the standard requires a char to be 1 byte, and a byte to be 8 bits.
jalf
No, `char` by definition is 1 byte, and a byte must be *at least* 8 bits, but it can be more. That's why `CHAR_BIT` exists.
jamesdlin
It requires char to be 1 byte. The number of bits in a byte is implementation-defined.
avakar
+1, didn't know about the difference in grammar, very nice.
avakar
+1  A: 

Here's the program:

#include <stdio.h>

int main(int argc, char *argv[])
{
    printf("This is %s\n", sizeof 'a' == sizeof(char) ? "C++" : "C");
    return 0;
}

And here is some nice reading on C and C++ differences.

Dmitry