What are the main techniques for preventing buffer overflow attacks? I've heard about StackGuard, but is this problem now completely solved by applying StackGuard, either alone or in combination with other techniques?

And, after that warm-up, a question for experienced programmers:

Why do you think it is so difficult to provide adequate defenses against buffer overflow attacks?

Edit: thanks for all the answers and for keeping the security tag active :)

+1  A: 

Buffer overflow exploits can be prevented. If programmers were perfect, there would be no unchecked buffers, and consequently, no buffer overflow exploits. However, programmers are not perfect, and unchecked buffers continue to abound.

Adi
+1  A: 

Only one technique is necessary: Don't trust data from external sources.

Ben Voigt
Even the best developers still make mistakes. Even the best development environments can benefit from activities to discover unexpected vulns.
atk
Sure, but they need to be designed around this simple truth to be effective. Taint markings work; stack checksumming and so forth only slows the attack down, it doesn't eliminate the vulnerability.
Ben Voigt
I agree with your post, but it's simply not sufficient. Good developers don't fully understand the full meaning. Process and tools are necessary to verify correct implementation. I could tell you, "always write correct code", but that wouldn't teach you how to write correct code or discover that it's incorrect. Taint markings indicate what hasn't been validated, not indicate (in)correct validation. I agree, DEP and ASLR make it harder, not always impossible. But that's what security is - making things harder to the point that they're either infeasible or not worth the effort/expenditure.
atk
This is like Microsoft telling people "Don't click on links" or Adobe saying "Don't download files". The fact remains: shit happens. Great programmers can introduce problems, and you need to make it hard on the attacker.
Rook
You need two things in concert. One, developers that understand security. I disagree with atk, this stuff isn't that hard to understand and it should be a prerequisite for being labelled "good" in a security-sensitive environment. But good developers are still human and still miss things, especially when overworked. That's why process ultimately can break down as well. OTOH a systematic formally-provably-correct approach focuses the attention of your best programmers and catches interactions they might have missed. And that's what type systems are.
Ben Voigt
In the end, though, even code that is provably correct at the logic level doesn't assure correct operation at the electronic level, so statistical defenses will never become completely worthless.
Ben Voigt
I should have said, "good developers don't *always* understand...". You may consider it easy, but I've seen plenty of otherwise very good developers who simply don't get it. Limiting the definition of "good" may make your argument, but it doesn't solve the problem in the real world: good developers make mistakes, and strong typing, while helpful at limiting the attack surface doesn't eliminate vulns. And other than BSD, I'm unaware of any complex, consumer or business oriented product that has ever been proven correct - and BSD still had vulns after it was "proven" correct.
atk
What concepts are we really talking about understanding? I think it's only this: the concept of a corner case, and the concept that an external facing API can be called by other code than the client they just wrote. Both are essential to development of reusable code, the fact that this level of understanding is vital to security as well is somewhat of a bonus. Yes there are some developers who don't quite grasp this but I wouldn't call them good.
Ben Voigt
Now, that leaves the task of making sure that corner cases are dealt with properly every time, and that's where automated static analysis is beneficial. Taint support in the type system is a form of formal static analysis that has very good performance in terms of false negatives, and false positives are likely to be a problem only when retrofitting a large existing codebase that's been through a solid review process.
Ben Voigt
I've never argued that taint support is a bad idea. I've argued that putting all your eggs in one basket is a bad idea. Security programs should obey the concept of defense in depth, just like security in applications or security in the real world.
atk
I definitely agree with that. But if resources are limited (and aren't they always), then some cost-benefit analysis needs to be done. ASLR is fully automated, so the cost is very low. But what next? There were some suggestions to do fuzz testing and unit-test overflows, but I would argue that taint is a better way of spotting where overflow checks need to be done than any feasible amount of fuzzing.
Ben Voigt
Remember that in a way NX bit is an implementation of "Don't trust data from external sources", if one believes that trust is a prerequisite for allowing code to execute. It could even be seen as a very simple form of taint marking (This page of code contains data, which may include untrusted input -- don't allow it to execute).
Ben Voigt
+1  A: 

There's a bunch of things you can do. In no particular order...

First, if your language choices are roughly equally split between one that allows direct memory access and one that doesn't, choose the one that doesn't. That is, use Perl, Python, Lisp, Java, etc. over C/C++. This isn't always an option, but it does help prevent you from shooting yourself in the foot.

Second, in languages where you have direct memory access, if classes are available that handle the memory for you, like std::string, use them. Prefer well-exercised classes to classes with fewer users: more use means that simple problems are more likely to have been discovered in regular usage.

Third, use mitigations like ASLR and DEP, along with any security-related options your compiler and toolchain offer. These won't prevent buffer overflows, but they will help mitigate the impact of any overflow.

Fourth, use static code analysis tools like Fortify, Qualys, or Veracode's service to discover overflows that you didn't mean to code. Then fix the stuff that's discovered.

Fifth, learn how overflows work, and how to spot them in code. All your coworkers should learn this, too. Create an organization-wide policy that requires people be trained in how overruns (and other vulns) work.

Sixth, do secure code reviews separately from regular code reviews. Regular code reviews make sure code works, that it passes functional tests, and that it meets coding policy (indentation, naming conventions, etc.). Secure code reviews are specifically, explicitly, and only intended to look for security issues. Do secure code reviews on all the code you can. If you have to prioritize, start with mission-critical code and code where problems are likely: where trust boundaries are crossed (learn about data flow diagrams and threat models, and create them), where interpreters are used, and especially where user input is passed, stored, or retrieved, including data retrieved from your database.

Seventh, if you have the money, hire a good consultant like Neohapsis, VSR, Matasano, etc. to review your product. They'll find far more than overruns, and your product will be all the better for it.

Eighth, make sure your QA team knows how overruns work and how to test for them. QA should have test cases specifically designed to find overruns in all inputs.

Ninth, do fuzzing. Fuzzing finds an amazingly large number of overflows in many products.

Edited to add: I misread the question. The title asks "what are the techniques", but the text asks "why is it hard".

It's hard because it's so easy to make a mistake. Little mistakes, like off-by-one errors or incorrect numeric conversions, can lead to overflows. Programs are complex beasts, with complex interactions. Where there's complexity, there are problems.

Or, to turn the question back on you: why is it so hard to write bug-free code?

atk
Count up the number of times you said "hire an expert service" and you're talking real money. That's why none of them will advocate for a systemic approach like taint checks in the type system, because a comprehensive solution doesn't bring you repeat consulting fees.
Ben Voigt
I said to hire an expert once. I said to spend money externally one other time. Taint checks help, but don't prevent problems. They're just reminders to check input if you haven't already validated it; they're not checks that your validation is correct, or even applicable. Besides, when you're creating software, you're talking real money. When you're validating quality of any kind in that software, you're talking real money. When your software has a specialized feature, you need real money for the expertise to be sure it's right. Why should it be different when you're dealing with security?
atk
I'm just saying that the experts have a financial interest in you not learning tools to systematically manage security yourself.
Ben Voigt
I've never met a security expert who dislikes the idea of the developers learning how to do it right. But in any environment, the experts are *experts*. If the money's available to spend (whether hiring as a consultant or as a full time employee, or a volunteer to an open source project), they're useful in pointing out stuff that non-experts don't see. Experts also tend to keep more up to date on bleeding edge trends than non-experts.
atk
A: 

In modern exploitation the big three are:

ASLR

Canary

NX Bit

Modern builds of GCC apply canaries by default. Not all ASLR is created equal: Windows 7, Linux, and *BSD have some of the best ASLR, while OS X has by far the worst ASLR implementation; it's trivial to bypass. Some of the most advanced buffer overflow attacks use exotic methods to bypass ASLR. The NX bit is by far the easiest to bypass: return-to-libc style attacks make it a non-issue for exploit developers.

Rook
And putting a swiss-cheese-shaped steel plate over your window stops a lot of the incoming fire, but it doesn't guarantee that no bullets get through.
Ben Voigt
@Ben Voigt Right, but have you tried to write an exploit for an application with all three of these defenses? It's fucking hard, and only a handful of exploits have been able to do it. The most recent used a heap overflow to read memory, then a dangling pointer attack to write and then execute memory. This won the last Pwn2Own for Windows 7. Fucking hardcore attack chaining. Stack-based buffer overflow exploits are fucking dead.
Rook
I guess I should be more clear. These techniques are fully automated now and almost free, so by all means use them. But when security counts, don't trust them.
Ben Voigt
+1  A: 

There's no magic bullet for security: you have to design carefully, code carefully, hold code reviews, test, and arrange to fix vulnerabilities as they arise.

Fortunately, the specific case of buffer overflows has been a solved problem for a long time. Most programming languages have array bounds checking and do not allow programs to make up pointers. Just don't use the few that permit buffer overflows, such as C and C++.

Of course, this applies to the whole software stack, from embedded firmware¹ up to your application.

¹ For those of you not familiar with the technologies involved, this exploit can allow an attacker on the network to wake up and take control of a powered off computer. (Typical firewall configurations block the offending packets.)

Gilles