I seem to recall that it is not safe to trust the value of $@ after an eval. Something about a signal handler having a chance to set $@ before you see it or something. I am also too tired and lazy right now to track down the real reason. So, why is it not safe to trust $@?

A: 

I don't think I ever heard of this being an issue. Specifically, the perlvar manpage explicitly suggests that $@ be used to get the errors from the last eval().

It might also be argued that signal handlers shouldn't muck with $@ (or include eval() statements).
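
For what it's worth, the scenario the question half-remembers is easy to sketch. The handler below is hypothetical, purely to show how an eval inside a signal handler could reset $@ before you read it:

use strict;
use warnings;

# Hypothetical handler that happens to run its own eval.  If the signal
# is delivered between the outer eval failing and the check of $@, the
# handler's successful eval resets $@ to "".
$SIG{INT} = sub {
        eval { 1 };    # a successful eval clears $@
};

eval { die "outer error\n" };
# If SIGINT arrived right here, $@ would already be "" despite the die.
print "caught: $@" if $@;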

zigdon
Can we stop downvoting this guy for quoting the gospel when it is the gospel's fault he is wrong?
Chas. Owens
Heh, thanks Chas ;)
zigdon
He's not quoting anything, and perlvar isn't wrong. The confusion is that people don't always know what the *last* `eval` is because they are only thinking about their code, not everything else that runs. It's the global nature of `$@` that is the problem, which Try::Tiny explains quite well. It might be true that zigdon has '[n]ever heard of this being an issue', but that's really just a big red flag for "I don't know anything about this issue but let me answer based on ignorance". There's nothing redeemable in this answer, and it *should be* downvoted or deleted.
brian d foy
+12  A: 

The Try::Tiny docs have a pretty good list of eval/$@ shortcomings. I think you might be referring to the "Localizing $@ silently masks errors" section in there.
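
As a rough sketch of why that module helps (assuming Try::Tiny is installed), its try/catch interface hands you the error directly, so you never read $@ yourself:

use strict;
use warnings;
use Try::Tiny;

try {
        die "something went wrong\n";
}
catch {
        # the error is passed in as $_ (and as $_[0]), so you never
        # have to look at $@ yourself
        warn "caught: $_";
}
finally {
        # runs whether the try block succeeded or died
};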

rafl
+13  A: 

The Try::Tiny perldoc has the definitive discussion of the trouble with $@:

There are a number of issues with eval.

Clobbering $@

When you run an eval block and it succeeds, $@ will be cleared, potentially clobbering an error that is currently being caught.

This causes action at a distance, clearing previous errors your caller may have not yet handled.

$@ must be properly localized before invoking eval in order to avoid this issue.

More specifically, $@ is clobbered at the beginning of the eval, which also makes it impossible to capture the previous error before you die (for instance when making exception objects with error stacks).

For this reason try will actually set $@ to its previous value (before the localization) in the beginning of the eval block.
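
To illustrate that point (this sketch is mine, not part of the quoted docs, and the helper name is made up): localizing $@ before a nested eval keeps the caller's pending error intact:

use strict;
use warnings;

sub release_lock { 1 }    # made-up helper standing in for real cleanup

# Localizing $@ keeps a nested eval from wiping out an error the
# caller has not finished handling yet.
sub cleanup {
        local $@;                    # protect the caller's pending error
        eval { release_lock() };     # only the local copy is touched
        warn "cleanup failed: $@" if $@;
}

eval { die "original error\n" };
cleanup();                  # without the local, this would clear $@
print "still pending: $@";  # prints "still pending: original error"

Note that this politeness is exactly what the next section shows can backfire when you need to die from inside the localized scope.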

Localizing $@ silently masks errors

Inside an eval block die behaves sort of like:

sub die {
        $@ = $_[0];
        return_undef_from_eval();
}

This means that if you were polite and localized $@ you can't die in that scope, or your error will be discarded (printing "Something's wrong" instead).

The workaround is very ugly:

my $error = do {
        local $@;
        eval { ... };
        $@;
};

...
die $error;
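
Fleshing that workaround out a little (the function name is made up, not from the docs): the error is captured inside the do block, local restores the caller's $@ when the block ends, and the rethrow happens outside the localized scope:

use strict;
use warnings;

sub risky_step { die "step failed\n" }    # made-up stand-in for real work

my $error = do {
        local $@;               # be polite: the caller's $@ stays untouched
        eval { risky_step() };
        $@;                     # capture before local restores the old value
};

# ... cleanup here still sees the caller's original $@ ...

die $error if $error;           # outside the localized scope, so die works again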

$@ might not be a true value

This code is wrong:

if ( $@ ) {
        ...
}

because due to the previous caveats it may have been unset.

$@ could also be an overloaded error object that evaluates to false, but that's asking for trouble anyway.
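
A quick sketch of that overloading caveat (the class is hypothetical, not from the docs): an exception object whose boolean overload returns false slips straight past if ( $@ ):

use strict;
use warnings;

package FalseError;    # made-up exception class whose boolification is false
use overload
        'bool' => sub { 0 },
        '""'   => sub { $_[0]->{message} };

sub new { my ( $class, %args ) = @_; return bless {%args}, $class }

package main;

eval { die FalseError->new( message => "it really did fail\n" ) };

if ($@) {
        print "caught: $@";
}
else {
        # reached even though the eval above died with an error object
        print "no error detected, yet one was thrown\n";
}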

The classic failure mode is:

sub Object::DESTROY {
        eval { ... }
}

eval {
        my $obj = Object->new;

        die "foo";
};

if ( $@ ) {

}

In this case since Object::DESTROY is not localizing $@ but still uses eval, it will set $@ to "".

The destructor is called when the stack is unwound, after die sets $@ to "foo at Foo.pm line 42\n", so by the time if ( $@ ) is evaluated it has been cleared by eval in the destructor.

The workaround for this is even uglier than the previous ones. Even though we can't save the value of $@ from code that doesn't localize, we can at least be sure the eval was aborted due to an error:

my $failed = not eval {
        ...

        return 1;
};

This is because an eval that caught a die will always return a false value.

mobrule
Yeah, I think that is what I was remembering.
Chas. Owens
+7  A: 

$@ has the same problems that every global variable has: when something else sets it, the change is visible across the entire program. Any eval might set $@. Even if you don't see an eval nearby, you don't know who else might call one (subroutines, tied variables, and so on).
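
A small sketch of that point (the helper is hypothetical): any innocent-looking call can run an eval of its own and clear $@ before you get to it:

use strict;
use warnings;

# Made-up logging helper that happens to use eval internally.
sub log_message {
        my ($msg) = @_;
        eval { print STDERR "LOG: $msg\n" };    # succeeds, so $@ becomes ""
}

eval { die "real problem\n" };
log_message("about to check for errors");   # innocently clears $@
print "error was: $@" if $@;                # never prints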

brian d foy