Whatever has been done by the program before it causes undefined behavior is of course already done.
So the printf() would have sent the "0\n" to the stdout stream. Whether that data actually made it to the device depends on whether that stream is unbuffered, line-buffered, or fully buffered.
Then again, I suppose that it's possible that undefined behavior executed subsequent to the completed, well-defined actions might cause damage to the extent that it appears that the well-defined behavior didn't complete correctly. I guess kind of like one of those "if a tree falls in the woods...." things.
Update to address the belief that future undefined behavior means all bets are off even before a program starts executing...
Here's what the C99 standard has to say about modifying the value of an object more than once between sequence points:
Between the previous and next sequence point an object shall have its stored value
modified at most once by the evaluation of an expression.
And the standard also has this to say about access to an object:
access
<execution-time action> to read or modify the value of an object
NOTE 1 Where only one of these two actions is meant, "read" or "modify" is used.
NOTE 2 "Modify" includes the case where the new value being stored is the same as the previous value.
NOTE 3 Expressions that are not evaluated do not access objects.
I don't think that modifying an object more than once between sequence points is 'undefined behavior' at translation time, since objects aren't accessed/modified at translation time.
Even so, I agree that a compiler that diagnoses this undefined behavior at compile time would be a good thing, but I also think this question is more interesting if taken to apply only to programs that have been successfully compiled. So let's change the question a little bit to give a situation where the compiler can't diagnose undefined behavior at translation time:
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    int c[] = { 0, 1, 2, 3 };
    int *p1 = &c[0];
    int *p2 = &c[1];

    if (argc > 1) {
        p1 = &c[atoi(argv[1])];
    }
    if (argc > 2) {
        p2 = &c[atoi(argv[2])];
    }

    printf("before: %d, %d\n", *p1, *p2);
    printf("after: %d, %d\n", ++(*p1), ++(*p2)); /* possible undefined behavior */

    return 0;
}
In this program the undefined behavior can't even be known to exist at translation time - it only occurs if the input to the program indicates that the same array element should be processed (or a different type of undefined behavior can occur if the input specifies invalid index values).
So let's pose the same question with this program: what does the standard say about what might happen to the first printf() call's results or side effects?
If the inputs provide valid index values, the undefined behavior can only happen after the first printf(). Assume the input is argv[1] == "1" and argv[2] == "1": the implementation does not have the freedom to decide, before the first printf(), that since undefined behavior will happen at some point in the program it's allowed to skip that printf() and go right to its undefined behavior of formatting the hard disk (or whatever other horrors might happen).
Given that the compiler agrees to translate a program, the promise of future undefined behavior doesn't give it the freedom to do whatever it wants before that undefined behavior actually takes place. Of course, as I mentioned before, the damage done by the undefined behavior could possibly destroy the previous results - but those results had to have happened.