Visual Studio added code analysis (/analyze) for C/C++ in order to help identify bad code. This is quite a nice feature, but when you deal with an old project you may be overwhelmed by the number of warnings.

Most of the warnings are generated because the old code does an ASSERT at the beginning of the method or function.

I think this is the ASSERT definition used in the code (from afx.h):

#define ASSERT(f)          DEBUG_ONLY((void) ((f) || !::AfxAssertFailedLine(THIS_FILE, __LINE__) || (AfxDebugBreak(), 0)))

Example code:

ASSERT(pBytes != NULL);
*pBytes = 0; // <- warning C6011: Dereferencing NULL pointer 'pBytes'

I'm looking for an easy, clean and safe way to resolve these warnings that does not involve disabling them. Did I mention that there are lots of occurrences in the current codebase?

A: 

You seem to assume that ASSERT(ptr) somehow means that ptr is not NULL afterwards. That's not true, and the code analyzer doesn't make that assumption.

MSalters
This is true but it does not answer the question.
Sorin Sbarnea
+1  A: 

PREfast is telling you that you have a defect in your code; don't ignore it. You do in fact have one, but you have only skittered around acknowledging it. The problem is this: just because pBytes has never been NULL in development & testing doesn't mean it won't be in production. You don't handle that eventuality. PREfast knows this, and is trying to warn you that production environments are hostile, and will leave your code a smoking, mutilated mass of worthless bytes.

/rant

There are two ways to fix this: the Right Way, and a hack.

The right way is to handle NULL pointers at runtime:

void DoIt(char* pBytes)
{
    assert(pBytes != NULL);
    if( !pBytes )
        return;
    *pBytes = 0;
}

This will silence PREfast.

The hack is to use an annotation. For example:

void DoIt(char* pBytes)
{
    assert(pBytes != NULL);
    __analysis_assume( pBytes );
    *pBytes = 0;
}

EDIT: Here's a link describing PREfast annotations. A starting point, anyway.
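Beyond __analysis_assume, the SAL annotations from <sal.h> can express the contract on the parameter itself. A sketch (the fallback #define is mine, to keep the snippet compilable off MSVC; on MSVC, _In_ declares the pointer valid, readable and non-null, which should keep /analyze from flagging the dereference):

```cpp
#include <cstring>

// _In_ comes from <sal.h> on MSVC; elsewhere we define it away so the
// snippet still compiles. Under /analyze, the annotation states that
// pBytes is a valid non-null pointer, so the dereference is accepted.
#ifdef _MSC_VER
  #include <sal.h>
#else
  #define _In_
#endif

void DoIt(_In_ char* pBytes)
{
    *pBytes = 0; // contract says pBytes is non-null
}
```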

John Dibling
I was afraid that this was the only solution as it does require quite a lot of code to add.
Sorin Sbarnea
Another option might be to redefine ASSERT to include the annotation. Better still would be to redefine ASSERT so that it runs in release and handles the NULL pointers in some meaningful way.
John Dibling
Er, you're blindly assuming that `pBytes` *can* in fact be `NULL`. That's not necessarily a valid assumption. PREfast just plays it safe: It often gives false positives. That's why it's not enabled by default. It tells you what it finds suspicious, and it's up to you to verify that it's correct, and if so, fix the issue. But saying that you actually need to handle the NULL case just because PREfast says so is pure rubbish. Handle it if it can actually logically occur.
jalf
Asserts are meant to catch logic errors: things that can never logically happen. And those are by definition impossible to recover from. You'd be trying to recover from a problem you haven't identified. How do you know what to do to recover then? Assuming the assert in this case is used properly (not as error handling but to guard against logic errors during development), and assuming that the value cannot logically be `NULL`, there is absolutely no reason to handle the case where it is `NULL`.
jalf
@jalf: Using a pointer without checking it is reckless. This is Programming 101.
John Dibling
@jalf: I noticed that you did not d/v the other response which only asserted what I asserted -- that the pointer might in fact be NULL -- and didn't even provide the means to silence PREfast. Did my rant touch a nerve?
John Dibling
@John: So this code is reckless? `int i; int* p = &i; *p = 42;` Care to explain to me how that can go wrong?
jalf
@John: Yes, it did touch a nerve in fact. I think not understanding your own code is a bad programming practice. Blindly checking against each and every error condition, even those that can not occur, does not help you write more robust software. You achieve robust software by handling the errors that *can* occur, without the vast added complexity of handling all the error cases that *cannot* occur. More complexity does not yield safer code.
jalf
@jalf: Your posted example won't fail. Unfortunately the real world isn't so simple. In the real world things change without your control or knowledge all the time. Other people call your functions, sometimes incorrectly. Other modules interact with your module in different ways. Socket protocols change. Etc. So tell me - Monday morning when the OPRA server you're connected to starts spitting out gibberish, should your code crash? Or report an error and try to deal with it, perhaps by shutting down gracefully?
John Dibling
@John: Exactly, my posted example won't fail. Therefor "using a pointer without checking it" is not *always* reckless. If you can prove that it is safe, there is no need to handle the case where it fails. That is my point. Obviously, if you can *not* prove that it is always safe, then you're right, it must be handled. My point is simply that *if* it is safe, you are wasting your time handling a nonexistent issue. (You might as well try to verify that 2+2=4, or that when I call function `foo()`, `bar()` doesn't get called instead.)
jalf
@John is correct - you *must* check the incoming pointer value, at least for NULL, because the ASSERT() goes away in a release build. This is *NOT* a PREfast false positive.
Michael Howard-MSFT
@John Dibling - A private method in a release build should not have to check every one of its parameters every time. That is a waste of resources. The method should be called correctly in the first place. Every time. If you can't verify by inspection that a private method is being used correctly, then you're not coding properly. If you can never assume that parameters to a private method will be correct, then what on earth is ASSERT for? In that case, we should just make the release build the same as the debug build, and check the same values against NULL over and over again all the time.
Jeffrey L Whitledge
@Jeffrey: Since when are we talking about private methods?
John Dibling
@John Dibling - Since the OP put an ASSERT in the code. If you don't control the calling code then you have to check the parameters in the release build.
Jeffrey L Whitledge
@Jeffrey: Are you saying that ASSERTs don't belong in methods called by code we don't control?
John Dibling
@Michael Howard-MSFT: How can you tell? The assert has nothing to do with it. Some code a few lines *above* might have ensured that the pointer will never be NULL, without PREfast catching it. I (or anyone else) never claimed that the *assert* did anything to prevent it.
jalf
@John: Sounds about right to me. If you can't control how and from where the function is called, then obviously you need to check the parameters. Someone could call it with bad parameters. What @Jeffrey and I are saying is that *if* you control every entry point into the code, then you can ensure that the parameters will always be valid, and *then* an assert is perfectly appropriate: we just need to ensure that our logic is sound, nothing more.
jalf
+1  A: 

/analyze is not guaranteed to yield relevant and correct warnings. It can and will miss a lot of issues, and it also gives a number of false positives (things it identifies as warnings, but which are perfectly safe and will never actually occur).

It is unrealistic to expect to have zero warnings with /analyze.

It has pointed out a situation where you dereference a pointer which it cannot verify is always valid. As far as PREfast can tell, there's no guarantee that it will never be NULL.

But that doesn't mean it can be NULL. Just that the analysis required to prove that it's safe is too complex for PREfast.

You may be able to use the Microsoft-specific extension __assume to tell the compiler that it shouldn't produce this warning, but a better solution is to leave the warning. Every time you compile with /analyze (which need not be every time you compile), you should verify that the warnings it does come up with are still false positives.

If you use your asserts correctly (to catch logic errors during programming, guarding against situations that cannot happen), then I see no problem with your code, or with leaving the warning. Adding code to handle a problem that can never occur is just pointless. You're adding more code and more complexity for no reason: if it can never occur, then you have no way of recovering from it, because you have absolutely no clue what state the program will be in. All you know is that it has entered a code path you thought impossible.

However, if you use your assert as actual error handling (the value can be NULL in exceptional cases, and you just expect that it won't happen), then it is a defect in your code, and proper error handling (exceptions, typically) is needed.
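If the NULL case really can occur at runtime, the assert-then-dereference pattern could be replaced with an explicit failure. A sketch (the exception type and message are illustrative choices, not from the original code):

```cpp
#include <stdexcept>

// Sketch only: if pBytes can genuinely be NULL in production, fail
// loudly with a real error instead of a debug-only assert.
void DoIt(char* pBytes)
{
    if (pBytes == nullptr)
        throw std::invalid_argument("DoIt: pBytes must not be null");
    *pBytes = 0;
}
```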

Never ever use asserts for problems that are possible. Use them to verify that the impossible isn't happening. And when /analyze gives you warnings, look at them. If it is a false positive, ignore it (don't suppress it, because while it's a false positive today, the code you check in tomorrow may turn it into a real issue).

jalf
I'm not trying to start a war with you, but I do believe this is very bad advice: "a better solution is to leave the warning. Every time you compile with /analyze [...], you should verify that the warnings it does come up with are still false positives." My reasoning in the next comment...
John Dibling
It's my opinion that builds should be required to compile with zero errors and zero warnings on the highest warning levels (including /analyze) before a check-in is allowed. Many reasons, but one very simple one is if warnings are allowed to remain then you may end up with hundreds or thousands of warnings. It is unreasonable to expect that each one be checked for validity every time a production build is made. It simply won't happen, defeating the purpose of the warnings in the first place.
John Dibling
@John: That is why you usually do not compile with /analyze. But of course, if you prefer zero warnings even with /analyze, feel free to silence warnings once they've been confirmed to be false positives. My main point is simply that you do not need to handle problems that can never occur. Whether or not you silence the warning for it is up to you. I'd prefer keeping it, and just run /analyze only occasionally, because PREfast is not *supposed* to be accurate or yield zero false positives. It is supposed to be paranoid, and so I want to *once in a while* verify its output.
jalf
+1 You have it exactly correct.
Jeffrey L Whitledge
A: 

Remember, ASSERT() goes away in a retail build, so the C6011 warning is absolutely correct in your code above: you must check that pBytes is non-null as well as doing the ASSERT(). The ASSERT() simply throws your app into a debugger if that condition fails in a debug build.

I work a great deal on /analyze and PREfast, so if you have other questions, please feel free to let me know.

Michael Howard-MSFT
You're assuming too much about the OP's code. Yes, the assert goes away in a release build, so we can't rely on the assert alone to verify that the pointer is non-null. But presumably, if the assert is used, then it is because the programmer knows *by other means* that the pointer can never be null. If that is the case, if the assert is used properly, then this is simply a false positive. If he does indeed rely on the assert to catch null pointers that *actually* occur as you suspect, then he's abusing the assert, and should replace it with proper error handling.
jalf
A: 

My co-author David LeBlanc would tell me this code is broken anyway: assuming you're using C++, you should use a reference rather than a pointer, and a ref can't be NULL :)

Michael Howard-MSFT