There's a lot of security advice out there telling programmers what not to do. What, in your opinion, are the best practices that should be followed when coding for good security?

Please add your suggested security control / design pattern below. The suggested format is a bold headline summarising the idea, followed by a description and examples, e.g.:

Deny by default

Deny everything that is not explicitly permitted...

Please vote up or comment with improvements rather than duplicating an existing answer. Please also put different patterns and controls in their own answer rather than adding an answer with your 3 or 4 preferred controls.

edit: I am making this a community wiki to encourage voting.

+3  A: 

Principle of Least Privilege -- a process should hold only those privileges it actually needs, and should hold them for the shortest time necessary. So, for example, it's better to run sudo make install than to su to a root shell and work as superuser.
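
A minimal sketch of the idea in Python (assuming a Unix-like system, run as root; the 65534 "nobody" uid/gid is a placeholder):

    import os
    import socket

    # Bind the privileged port while we still hold root...
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.bind(("0.0.0.0", 80))
    sock.listen(5)

    # ...then drop privileges for the rest of the process lifetime.
    os.setgroups([])      # shed supplementary groups first
    os.setgid(65534)      # group before user, or setuid would block setgid
    os.setuid(65534)
    # From here on, the process serves requests without root.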

Charlie Martin
+3  A: 

Whitelisting

Opt in only what you know you accept.

(Yeah, I know, it's very similar to "deny by default", but I like to use positive thinking.)
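
For instance, a sketch of whitelist-style input validation in Python (the username rules here are an illustrative assumption):

    import re

    # Whitelist: accept only what we positively recognise; reject
    # everything else by default.
    VALID_USERNAME = re.compile(r"[a-z][a-z0-9_]{2,15}")

    def validate_username(name: str) -> str:
        if not VALID_USERNAME.fullmatch(name):
            raise ValueError("username contains characters outside the whitelist")
        return name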

PEZ
+2  A: 

Model threats before making security design decisions -- think about what possible threats there might be, and how likely they are. So, for example, someone stealing your computer is more likely with a laptop than with a desktop. Then worry about the more probable threats first.

Charlie Martin
A: 

Estimate risk as probability times hazard -- for each threat, estimate how likely it is (P) and how much it would cost you if it happened (H); the product P x H gives an expected loss you can rank threats by. For example, a breach with a probability of 0.05 per year and a cost of $200,000 represents an expected loss of $10,000 a year.

Charlie Martin
I know this is standard advice but I must say I dislike it. In most cases P is completely unknown and H is either unknown or assumed to be large...so basically WAG x WAG = WAG (WAG = wild-assed guess)
frankodwyer
Frank, that's really not true; both P and H can be estimated with some rigor. You have to work from a threat model first though, so you can examine the attacks independently. But even a rough estimate gives better guidance than *no* priority scheme.
Charlie Martin
+1  A: 

Express risk and hazard in terms of cost. Money. It concentrates the mind wonderfully.
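
One common way to put numbers on this is annualized loss expectancy: expected yearly cost = incidents per year x cost per incident. A sketch in Python, with all figures invented for illustration:

    # Annualized Loss Expectancy: incidents/year x cost/incident.
    threats = {
        "stolen laptop": (0.5, 40_000),
        "SQL injection": (0.1, 250_000),
        "phishing":      (2.0, 5_000),
    }

    # Rank threats by expected yearly loss, biggest first.
    for name, (per_year, cost) in sorted(
            threats.items(), key=lambda kv: kv[1][0] * kv[1][1], reverse=True):
        print(f"{name:15} expected loss ${per_year * cost:>9,.0f}/year")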

Charlie Martin
A: 

Separate concerns. Architect your system and design your code so that security-critical components can be kept together.

Charlie Martin
+2  A: 

Limit the "attack surface". Expose your system to the fewest attacks possible, via firewalls, limited access, etc.

Charlie Martin
+1  A: 

A good understanding of the underlying assumptions of crypto building blocks is important. E.g., stream ciphers such as RC4 are very useful, but they can easily be used to build an insecure system (e.g., WEP and the like).
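
For example, reusing a stream cipher keystream (the core of WEP's failure) lets an attacker cancel the key out entirely. A toy Python sketch, with random bytes standing in for RC4 output:

    import os

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    p1 = b"attack at dawn!!"
    p2 = b"retreat at dusk!"
    keystream = os.urandom(len(p1))   # stands in for RC4 output

    c1 = xor(p1, keystream)
    c2 = xor(p2, keystream)           # same keystream reused: the fatal bug

    # The keystream cancels out: c1 XOR c2 == p1 XOR p2, so known
    # plaintext in one message directly exposes the other.
    assert xor(c1, c2) == xor(p1, p2)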

PolyThinker
+2  A: 

Remember physical security. If someone can take your hard drive, that may be the most effective attack of all.

(I recall an intrusion red team exercise in which we showed up with a clipboard and an official-looking form, and walked away with the entire "secure" system.)

Charlie Martin
+3  A: 
Charlie Martin
+1  A: 

If you encrypt your data for security, the highest-risk data in your enterprise becomes your keys. Lose the keys, and the data is lost; compromise the keys, and all your data is compromised.
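
One common mitigation is envelope encryption, where each record is encrypted under its own data key and only the small data keys are wrapped by the master key -- rotating the master key then means re-wrapping keys, not re-encrypting all the data. A sketch assuming the third-party cryptography package:

    from cryptography.fernet import Fernet

    master_key = Fernet.generate_key()       # in reality kept in an HSM/KMS
    master = Fernet(master_key)

    # Encrypt the record under its own data key; store the wrapped
    # data key alongside the ciphertext.
    data_key = Fernet.generate_key()
    ciphertext = Fernet(data_key).encrypt(b"customer record")
    wrapped_key = master.encrypt(data_key)

    # Decrypt: unwrap the data key with the master key, then the record.
    plaintext = Fernet(master.decrypt(wrapped_key)).decrypt(ciphertext)
    assert plaintext == b"customer record"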

Charlie Martin
+1  A: 

Reuse proven code

Use proven encryption algorithms, cryptographic random number generators, hash functions, authentication schemes, and access control systems rather than rolling your own.
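
In Python, for example, the standard library already covers much of this (a sketch; the iteration count is illustrative):

    import hashlib
    import hmac
    import secrets

    token = secrets.token_urlsafe(32)   # CSPRNG, not random.random()

    # Password hashing with a random salt and a deliberately slow KDF.
    salt = secrets.token_bytes(16)
    stored = hashlib.pbkdf2_hmac("sha256", b"hunter2", salt, 600_000)

    def check(candidate: bytes) -> bool:
        attempt = hashlib.pbkdf2_hmac("sha256", candidate, salt, 600_000)
        # Constant-time comparison avoids timing side channels.
        return hmac.compare_digest(attempt, stored)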

frankodwyer
+2  A: 

Hire security professionals

Security is a specialized skill. Don't try to do it yourself. If you can't afford to contract out your security, then at least hire a professional to test your implementation.

Bill the Lizard
Even if you do have security skills and are capable of doing it yourself, you should hire an additional security professional to review your implementation as dictated by the principle of "separation of concerns".
ceretullis
Good point. A security professional should not be testing his own work.
Bill the Lizard
+2  A: 

Design security in from the start

It's a lot easier to get security wrong when you're adding it to an existing system.

Bill the Lizard
+1  A: 

Isolation. Code should have strong isolation between, e.g., processes, so that failures in one component can't easily compromise others.
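
For example, a Python sketch that pushes risky work into a throwaway process with a scrubbed environment (parse_upload.py is a hypothetical worker script):

    import subprocess
    import sys

    # A crash or exploit in the worker can't touch this process's
    # memory or inherited secrets.
    result = subprocess.run(
        [sys.executable, "parse_upload.py", "/tmp/upload.dat"],
        env={},                # inherit no environment variables
        capture_output=True,
        timeout=10,            # a wedged worker can't stall the service
    )
    if result.returncode != 0:
        raise RuntimeError("untrusted input rejected")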

Charlie Martin
+4  A: 

All these ideas that people are listing (isolation, least privilege, white-listing) are tools.

But you first have to know what "security" means for your application. Often it means something like

  1. Availability: The program will not fail to serve one client because another client submitted bad data.
  2. Privacy: The program will not leak one user's data to another user
  3. Isolation: The program will not interact with data the user did not intend it to.
  4. Reviewability: The program obviously functions correctly -- a desirable property of a vote counter.
  5. Trusted Path: The user knows which entity they are interacting with.

Once you know what security means for your application, then you can start designing around that.

One design practice that doesn't get mentioned as often as it should is Object Capabilities.

Many secure systems need to make authorization decisions -- should this piece of code be able to access this file, or open a socket to that machine?

Access Control Lists are one way to do that -- specify the files that can be accessed. Such systems, though, require a lot of maintenance overhead. They work for security agencies where people have clearances, and for databases where the company deploying the database hires a DB admin. But they work poorly for secure end-user software, since the user often has neither the skills nor the inclination to keep lists up to date.

Object Capabilities solve this problem by piggy-backing access decisions on object references -- by using all the work that programmers already do in well-designed object-oriented systems to minimize the amount of authority any individual piece of code has. See CapDesk for an example of how this works in practice.
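
A tiny Python sketch of the style (the names are illustrative):

    # Capability style: instead of giving the logger a path plus the
    # ambient authority to open anything, give it an already-open file
    # object. The reference *is* the permission.
    class AuditLog:
        def __init__(self, sink):           # sink: anything with .write()
            self._sink = sink

        def record(self, event: str) -> None:
            self._sink.write(event + "\n")

    with open("/var/log/audit.log", "a") as f:
        log = AuditLog(f)                   # log can append to this file...
        log.record("user alice logged in")  # ...and do nothing else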

DARPA ran a secure systems design experiment called the DARPA Browser project which found that a system designed this way -- although it had the same rate of bugs as other object-oriented systems -- had a far lower rate of exploitable vulnerabilities. Since the designers followed POLA (the Principle of Least Authority) using object capabilities, it was much harder for attackers to find a way to use a bug to compromise the system.

Mike Samuel
+1. But you didn't put everything in one answer at a time tch tch.
Charlie Martin
A: 

KISS (Keep It Simple, Stupid)

If you need to make a very convoluted, difficult-to-follow argument for why your system is secure, then it probably isn't.

Formal security designs sometimes refer to a thing called the TCB (Trusted Computing Base). But even an informal design has something like this -- the security-enforcing part of your code, the part you can't avoid relying on. This needs to be well encapsulated, and as simple and small as possible.

frankodwyer