There are the obvious WTFs that make the headlines, such as SQL injection or authentication done in JavaScript, but are there other, more fundamental and common errors programmers tend to make when writing applications without considering security?

+19  A: 

Storing passwords, or for that matter any other sensitive information, as clear text.

mattruma
You need to differentiate between, for example, using hashed password fields for user logins (always a good idea) and using some kind of obfuscation scheme which doesn't add security (not a good idea). "Another lesson is about security by obscurity. [...] All .fetchmailrc password encryption would have done is give a false sense of security to people who don't think very hard. The general rule here is: 17. A security system is only as secure as its secret. Beware of pseudo-secrets." - Eric S. Raymond, http://www.catb.org/~esr/writings/cathedral-bazaar/cathedral-bazaar/ar01s09.html
Stefan Kangas
+34  A: 

The first three that leap to mind are:

  1. Not validating every bit of data from the client
  2. Not considering filesystem permissions when working with files
  3. Leaving database access too wide-ranging (read/write when a user only needs read access)

Another one that's not so much a security issue, although it is security-related, is complete and abject failure to grok the difference between hashing a password and encrypting it. Most commonly found in code where the programmer is trying to provide unsafe "Remind me of my password" functionality.

ZombieSheep
Isn't hashing a method of encrypting?
Claudiu
Encryption is reversible (if you have the key), hashing is not. So if you hash a password, no one can deduce it from the hash. If you merely encrypt it, and the key is the same for several passwords, you could in theory attack a different encrypted password. At least I think this is what he meant.
J S
When he says EVERY bit of data, take that literally. I have heard of people placing SQL injection attacks into third gen bar codes, like semacode, where non-numeric characters are allowed.
Bratch
Data validation (or lack thereof) I would have to put as the #1 cause of security vulnerabilities. Validate EVERY bit of data you bring into your program, REGARDLESS OF ITS SOURCE. Even if it's in an encrypted binary file that YOU wrote, validate it anyway.
Bob Somers
+1 for not trusting user input. And please, salt those hashes.
Thanatos
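
To make the hash-versus-encrypt distinction (and the salting advice) above concrete, here is a minimal sketch using only Python's standard library; the function names are illustrative, not taken from any answer:

    import hashlib
    import hmac
    import os

    def hash_password(password: str) -> tuple[bytes, bytes]:
        """Return (salt, digest). The digest cannot be reversed to recover the password."""
        salt = os.urandom(16)  # a fresh random salt per password
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
        return salt, digest

    def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
        """Re-hash the attempt with the stored salt and compare in constant time."""
        attempt = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
        return hmac.compare_digest(attempt, digest)

    salt, digest = hash_password("correct horse battery staple")
    print(verify_password("correct horse battery staple", salt, digest))  # True
    print(verify_password("wrong guess", salt, digest))                   # False

Because every password gets its own random salt, identical passwords produce different stored values, and there is no key anywhere that would let an attacker reverse them - which is exactly what separates hashing from encryption.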
+2  A: 

Thinking there are programmers who don't need to care about security.

In some circles: Using String concatenation with user-supplied data to build SQL statements.

Joachim Sauer
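
To illustrate the concatenation point above, here is a minimal sketch using Python's built-in sqlite3 module; the table and data are made up for the example:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

    user_input = "alice' OR '1'='1"  # hostile input

    # Vulnerable: string concatenation lets the input rewrite the query.
    rows = conn.execute(
        "SELECT secret FROM users WHERE name = '" + user_input + "'"
    ).fetchall()
    print(rows)  # leaks every row

    # Safer: a bound parameter is always treated as data, never as SQL.
    rows = conn.execute(
        "SELECT secret FROM users WHERE name = ?", (user_input,)
    ).fetchall()
    print(rows)  # []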
+8  A: 

I've seen a number of cases where input validation is done exclusively with JavaScript on the client.

Jack Ryan
Most def. It's very common for beginning developers not to realize how easily client-side validation can be bypassed. The golden rule is that client-side validation is for usability only. For security, all input must be validated on the server side, always.
urig
You know, sometimes you just gotta shake your head in amazement...
Mark Brittingham
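
As a sketch of what "validate on the server side anyway" can look like, here is a server-side check that simply repeats what the client-side JavaScript would have done; the field name and rule are invented for illustration:

    import re

    USERNAME_RE = re.compile(r"[A-Za-z0-9_]{3,20}")

    def handle_signup(form: dict) -> str:
        """Server-side handler: repeat every check the client-side JavaScript does,
        because the request may not have come from your page at all."""
        username = form.get("username", "")
        if not USERNAME_RE.fullmatch(username):
            return "400 Bad Request: invalid username"
        return "201 Created: " + username

    # The browser's JavaScript would have rejected this, but anyone can POST it
    # directly with curl, so the server must reject it too.
    print(handle_signup({"username": "<script>alert(1)</script>"}))
    print(handle_signup({"username": "valid_user"}))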
+12  A: 
  1. Thinking that a particular case can never happen. You have to consider EVERY situation.
  2. Assuming "this will never happen." Murphy is right behind you.
  3. Thinking the user sees the application the way the programmer sees it.
furtelwart
Re: #1 - "A good programmer looks both ways before crossing a one-way street."
Dave Sherohman
@Dave - That's awesome.
Kyle Rozendo
A: 

Not handling every returned error code, especially for system calls, and not handling exceptions. Forgetting to check the value of errno in C and C++ programs is another no-no.
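
The same discipline applies in higher-level languages; as a hedged illustration, here is a Python analogue of checking errno instead of letting failures pass silently (the file name is just an example):

    import errno

    def read_config(path: str) -> str:
        """Handle the failure modes of a system call explicitly instead of ignoring them."""
        try:
            with open(path) as f:
                return f.read()
        except OSError as e:
            if e.errno == errno.ENOENT:
                return ""  # missing file: fall back to defaults
            if e.errno == errno.EACCES:
                raise RuntimeError("no permission to read " + path) from e
            raise  # anything else is unexpected: do not swallow it

    print(repr(read_config("/nonexistent/app.conf")))  # '' rather than a silent wrong value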

+4  A: 

Thinking that they can consider every situation.

This problem takes many forms: from C programmers thinking they can be careful enough to use stdio or the standard string library, to C++ programmers thinking all they need is safe pointers, to PHP programmers relying on the define/include hack, to Perl programmers thinking -T is enough.

Protecting against security mistakes requires that you plan for what happens when the protection fails: what is the impact if this module is broken?

It's real simple: If you don't run any part of your application as root, then a user cannot exploit your application to get root.

Many programmers make a similar error by thinking that some applications cannot be modularized and segregated, and that, of course, they happen to be working on one of those very applications.

geocar
Can you expand on what you mean by the second paragraph - or just the C section of it, for example? In particular, what do you suggest C programmers use if they can't be careful enough using the stdio or string libraries?
Jonathan Leffler
Almost every string library is better than the built-in one. I recommend either vstr or ustr lately, but I used to use djb's string library and was quite happy with it. There's a good comparison here: http://www.and.org/vstr/comparison
geocar
For stdio replacements, you'll tend to find them near the string libraries; vstr is very good, for example.
geocar
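
The "don't run as root" advice above is often implemented by dropping privileges immediately after the one step that needs them. A sketch for a Unix-like system, using the 'nobody' account purely as an example (it has to be started as root to work at all):

    import os
    import pwd
    import socket

    def bind_and_drop_privileges(port: int = 80, user: str = "nobody") -> socket.socket:
        """Bind the privileged port as root, then switch to an unprivileged user so a
        later exploit of the application cannot hand out root."""
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.bind(("0.0.0.0", port))  # the only step that actually needs root
        srv.listen()

        target = pwd.getpwnam(user)
        os.setgid(target.pw_gid)     # drop the group first, then the user
        os.setuid(target.pw_uid)     # after this, root is gone for good
        return srv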
+7  A: 

Not knowing what a threat model is. Given that there is an endless amount of security that could be applied, it's vital to know where to start given your setup and business.

dove
+11  A: 

Using GET or POST parameters directly in the code without validating their content before using them.

Daok
This is in so many tutorials. Really is horrific.
Philip Morton
And it's not just limited to GET/POST. Even if you're using your own custom-designed, encrypted, "secure" protocol that (as far as you know) only your own proprietary binary can speak, you still must validate all data received from the client.
Dave Sherohman
Is there a catch-all that will fix this? E.g. addslashes($_POST['user_input']); Is this approach safe?
jsims281
@jsims281 no, generally you must adapt this to **where** the data will be used. If you store it in a database, use its relevant string escaping function.
phq
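
Following on from the comment above: there is no single catch-all, because the safe encoding depends on where the value ends up. A minimal sketch, using Python's standard library for three common sinks (the sample input is arbitrary):

    import html
    import sqlite3
    import urllib.parse

    user_input = '<b>"Bobby"; DROP TABLE users; --</b>'

    # Going into an HTML page: escape for HTML.
    print(html.escape(user_input))

    # Going into a URL: percent-encode it.
    print(urllib.parse.quote(user_input))

    # Going into a database: don't escape by hand at all, bind a parameter.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE comments (body TEXT)")
    conn.execute("INSERT INTO comments VALUES (?)", (user_input,))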
+3  A: 

I would add these:

  • Lacking a proper understanding, from a security point of view, of the underlying platform upon which the application is built;
  • Using a third-party library hastily, without properly investigating or understanding the consequences that using parts of the library may have. This usually happens when we are in dire need of some functionality and find a third-party library that provides it, and, having found it, are in a hurry to use it without actually spending ample time going through the library's documentation;
  • Being clueless about application and platform security in general, and how the lack thereof affects your application in particular;
ayaz
+4  A: 

What? No one mentioned SO's old friend, the buffer overflow! It's second only to cross-site scripting, IMHO.

Robert Gould
You could always use a language which prevents overflows from happening in the first place...
Spence
+50  A: 

There are a lot of good tactical suggestions already. Let me offer a strategic observation. Most applications are written from the default security perspective of "allow all". This means a programmer will start coding everything wide open and then (hopefully) will start to consider elements that need to be secured (these tactical suggestions which have already been made are terrific examples of that).

This happens across the board. Everything from operating systems to fat clients to web-based thin clients are constructed this way. That's a primary reason why every Tuesday Microsoft comes out with a set of patches. Vulnerabilities continue to be discovered and must be remediated in a never-ending stream of patches.

My strategic suggestion is this: start coding from a default perspective of "deny all". Every architectural element, every layer, every object, every method, every variable...construct them to be inaccessible to anything unless you expressly allow it. This will slow down your productivity a little (at least at first). However, once you become accustomed to thinking this way, your delivered code will be vastly more secure.

Another analogy to this is when a programmer decides that unit testing is a good thing. Or maybe even TDD if you want to go to the extreme end of that spectrum. It takes a truckload more work to write the unit test first and then write the code to make the test pass than it takes to just write the code. But the end result is an order of magnitude more stable, and one could argue that overall less time is spent tracking down defects when a smart investment in unit testing is made.

Ed Lucas
Precisely! Assume all inputs and actions are hostile, and treat them as such in code.
The Wicked Flea
Nicely put. There is a reason, for example, why Visual Studio templates are written as private: it is to make you aware of the accessibility level. You, not the studio, are responsible for changing that accessibility level. I'd also add that unit testing isn't just about writing the tests, but about running ALL of them every time, no matter what's changed. I've seen people use them as a UI. They are not a UI, they are tests; if you want a UI that works that way, write a console app.
Chris
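
As a small illustration of the "deny all" starting point described above (the roles and actions are invented for the example):

    # An explicit allow-list of (role, action) pairs; anything not listed is denied.
    ALLOWED = {
        ("admin", "delete_user"),
        ("admin", "view_report"),
        ("clerk", "view_report"),
    }

    def authorize(role: str, action: str) -> bool:
        """Default deny: the absence of a rule means 'no', never 'yes'."""
        return (role, action) in ALLOWED

    print(authorize("clerk", "view_report"))   # True - explicitly granted
    print(authorize("clerk", "delete_user"))   # False - never granted, so denied
    print(authorize("intern", "view_report"))  # False - unknown role, denied by default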
A: 

I'm not a professional, but I think the biggest security mistake I have made was not encrypting passwords before saving them to the MySQL db.

Omar Abid
You shouldn't be encrypting and saving passwords anyway, you should be hashing them.
Jon Tackabury
A: 

Downloading and trying software or freeware (for programming or any other purpose) from an untrusted source, while using an administrator account with full power by default.

Dennis Cheung
+4  A: 

Programmers trust too much! It's not just data from the user that should be checked, it's anything from a database, filesystem, configuration file, anything that's external to the application.

Richard Ev
+1  A: 

The fundamental problem is that ... programmers think the security of an application lies only in its login page asking for a "login name & password".

This ignorance is the big mistake programmers make.

Murthy
+1  A: 

Using the SA account to talk to an MSSQL database when execute rights on stored procedures would be enough.

Assuming that security is not important because you are on a closed system that is not connected to the internet. As if internal users never do anything wrong or bad.

Coentje
A: 

Not documenting code and making it human readable. (Of course, this is a common non-security related mistake as well.) A programmer will never be able to catch every possible security mistake, but making it so other developers can follow their code and understand it means another set of eyes is more likely to be able to help.

Clear, documented code generally forces the programmer to actually think about what they're trying to accomplish as well, which usually leads to direct benefits like proper input validation.

azollman
+5  A: 

Not thinking about security when designing the application.

I really don't agree with that downvote. Security is something you need to think about before coding, not after. And from my experience, programmers usually code the application and then make it safe.
Agreed. It's 10X harder to add security to an existing application than to design it in up front.
Bill the Lizard
+1  A: 
  1. Thinking it will never happen to you!
  2. Not sanitizing user input
  3. Not requiring strongish passwords (you can be secure without irritating your users)
  4. Not using the compiler or runtime effectively (Windows XP SP2 had bits recompiled using the C++ compiler's /GS switch to help with buffer overruns)
Chris
+3  A: 

Some specific gotchas--rather than the usual generic "watch out for CSRF" advice--for people building AJAX-powered sites:

Returning live JavaScript with user data. (Any site can include your endpoint as a SCRIPT tag and act on behalf of your user, if he or she is signed in. Examples: Facebook, 30Boxes.)

Failing to disallow GET on URLs that should only accept POST. (Any site can include your endpoint as an image and act on behalf of your user, if he or she is signed in. Examples: Netflix, Amazon.)

Returning an HTML page instead of JSON data if the user is not signed in. (Any site can include your endpoint as a script, watch onError, and tell if your user is signed in. Examples: Twitter, Google, and many, many others.)

Kent Brewster
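
A hedged, framework-free sketch of two of those gotchas: a state-changing endpoint that refuses GET, and an unauthenticated response that is machine-readable JSON rather than an HTML login page. This alone is not full CSRF protection (tokens are still needed); the endpoint and IDs are invented:

    def handle_add_to_queue(method: str, signed_in: bool, movie_id: str) -> tuple[int, str]:
        """Refuse GET (blocks <img src=...> style tricks) and return a proper 401
        in JSON instead of an HTML page that other sites can probe."""
        if method != "POST":
            return 405, '{"error": "use POST"}'
        if not signed_in:
            return 401, '{"error": "authentication required"}'
        return 200, '{"queued": "' + movie_id + '"}'

    print(handle_add_to_queue("GET", True, "tt0133093"))    # 405 - a hostile <img> tag lands here
    print(handle_add_to_queue("POST", False, "tt0133093"))  # 401 - JSON, not a login page
    print(handle_add_to_queue("POST", True, "tt0133093"))   # 200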
+3  A: 

The most common security mistakes I see come from people trying to protect against external threats while completely ignoring internal threats. You can have the most secure computers and databases in the world, but if the sensitive data is in a print-out on your desk, you're an easy target.

Think about the recent news stories about unauthorized phone company employees accessing Obama's call history, unauthorized US State Department employees accessing Senator Clinton's passport file, ChoicePoint employees stealing customer information, and so on. Everyone likes to trust the people on the inside.

I once had to work with a third party to install secure machines in a bank. It wasn't my choice, but that's how it is. The machines were compromised in two minutes because a random guy in the machine room asked them for the root password and they gave it to them, despite explicit instructions to give it to no one but me. That was just inexperience and incompetence, but it violated the bank's requirement for security, and it was all internal.

brian d foy
A: 

Improper error handling. I've seen this several times: the app crashes for whatever reason, and in order to have a clue about what happened, the developer leaves the stack trace reachable by the end user.

It is not enough to show a "Try again later" message if there's a detail button that shows the stack trace. With this information the end user may learn the technology, API, library, versions, structure, and even database information, which may eventually lead to stolen data.

The best practice for this is to generate a log number and provide that number in the detail. If a report is needed, that number can be provided and used to search the log files.

A good example of a PROPERLY handled error is the SO kitty: "Wait a minute, I'll fix it!" ... plus it's cute.

OscarRyz
What's wrong with this answer? Improper error handling is a security concern.
OscarRyz
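
A minimal sketch of the log-number pattern using Python's standard logging and uuid modules; the handler and payload are invented for illustration:

    import logging
    import uuid

    logging.basicConfig(filename="app.log", level=logging.ERROR)

    def handle_request(payload: dict) -> str:
        try:
            return str(1 / payload["divisor"])  # stand-in for the real work
        except Exception:
            incident = uuid.uuid4().hex[:8]     # short reference number
            logging.exception("incident %s, payload=%r", incident, payload)
            # The user sees only the reference, never the stack trace.
            return "Something went wrong, please try again later (ref %s)." % incident

    print(handle_request({"divisor": 0}))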
+4  A: 

Technical Debt

Hugo S Ferreira
A: 

When validating form fields, checking only the length of the string to see if the field was filled in, even though blank spaces count as part of the length. So someone can fill in all blank spaces for a username or an address.

Paul Mendoza
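
The fix is tiny but easy to forget: strip whitespace before deciding whether the field was really filled in. A sketch in Python:

    def is_filled_in(value: str) -> bool:
        """A bare length check passes '   '; strip whitespace before deciding."""
        return len(value.strip()) > 0

    print(is_filled_in("   "))      # False - all spaces is not a real answer
    print(is_filled_in(" Alice "))  # True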
+3  A: 

Implementing security themselves is one of the most common mistakes programmers make. Even security professionals get it wrong sometimes, and I think a lot of programmers don't understand how hard security is to get right. Using Open Source security software is free and cheap (in terms of time). Implementing it yourself takes time, and you're almost certain to get it wrong.

Bill the Lizard
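
One concrete instance of "use a proven implementation": generating a session token. Python's standard secrets module exists for exactly this, whereas the random module is a general-purpose PRNG that is not designed to resist guessing; the token length here is arbitrary:

    import random
    import secrets

    # Predictable: random is meant for simulations, not for secrets.
    weak_token = "".join(random.choice("0123456789abcdef") for _ in range(32))

    # Proven: secrets draws from the operating system's cryptographic source.
    strong_token = secrets.token_hex(16)

    print(weak_token)
    print(strong_token)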
+2  A: 

Designing security with too many "gates".

Imagine security as a fence with gates with guards at the gates. The best security is a very high fence or under a mountain, with a single path inside and one or more gates on the path with dedicated, paranoid guards.

The other way is a PHP web application where every individual PHP file has code to check the user authorization cookie. This code is trivially easy to forget or to get the permission check wrong. This is a bunch of gates side by side without a fence and some of the gates are unguarded.

Zan Lynx
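
A sketch of the single-gate idea: route every request through one checkpoint that owns the authorization check, so no individual page can forget it (the paths and session shape are invented):

    PROTECTED_PREFIXES = ("/admin", "/account")

    def dispatch(path: str, session: dict) -> str:
        """One guarded gate: the authorization check lives here and only here."""
        if path.startswith(PROTECTED_PREFIXES) and not session.get("user"):
            return "302 redirect to /login"
        return "200 rendering " + path

    print(dispatch("/admin/users", {}))                # stopped at the gate
    print(dispatch("/admin/users", {"user": "root"}))  # allowed through
    print(dispatch("/about", {}))                      # public page, no gate needed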
+9  A: 

Here's what I can come up with:

  • Failure to review code. Needless to say, most of the insecure code written today exists because of the absence of code reviews by the right people. Code reviews for security are different from code reviews for functionality.
  • Assuming that you won't be attacked. "Hey! We're behind a firewall, right? And this is a protected network."
  • Forcing users to use the application in an insecure manner. For instance, most banking sites never displayed the browser's chrome until phishing sites sprang up.
  • Not understanding trust boundaries. Often, this is seen as blind trust imposed on client-side input by the server-side code.
  • Lack of clear separation of concerns - the client ends up assuming the role of the server. This is often seen as passwords being validated in JavaScript, or privileges being enforced at the client etc.
  • Ignoring the scenario of request tampering. Worse, arguing that the said requests cannot be tampered with.
  • Ignorance of the privileges required in the runtime environment. This often manifests as "notes" in the documentation on what privileges are to be provided, without sufficient details on why they are required. I've come across an applet that required read+write privileges on the entire filesystem to be granted to any codebase (even ones downloaded from the Internet), just to upload a file to the server.
  • Insecure defaults. For instance, the system administrator password should be set manually after installation to a strong password since the installation did not cover the scenario of weak administrator passwords. Yes Microsoft, I'm looking at you.
  • Home grown cryptography. Bill just noted this. Just because the smartest developer in the company cannot crack it does not mean that an attacker cannot. Good security is built on the foundation of years of effort in the field by mathematicians and scientists. Only a fool would think he can come up with something better in the shower.
Vineet Reynolds
+2  A: 

The biggest mistake is to throw security issues out of the scope of system development and put them on the shoulders of everybody else. A common practice is to state that security is the responsibility of system administrators. Also quite common is blaming clients for paying little attention to security and for vague requirements in this area. While there is a point to that, it is of course the IT department's responsibility to elevate this topic and draw attention to the question.

Din
+4  A: 

Using security through obscurity -- if any part of your security strategy includes the phrase "Nobody will ever think of looking here!", or "Nobody will ever think of doing it this way!", it's not secure.

Graeme Perrow
+2  A: 

Probably some of the most common errors are:

  • trying to identify and block bad stuff, instead of only allowing good stuff (a.k.a. allow by default instead of deny by default). It's a lot easier to identify the 3 or 4 things your code is supposed to do, and code to make sure it only does those, than it is to identify the zillions of things it isn't supposed to do and code to stop them happening (see the sketch after this list).

  • treating security as something to be added later, instead of designing it in at the start. For example, thinking that security is something that can be bolted on at the end with encryption, a firewall, or a penetration test, rather than something that is part and parcel of the whole design and QA process

  • implementing your own authentication/encryption/random number generator rather than using a proven system

  • assuming an attacker will be running your code and will only do things you expect - e.g. that the client accessing your server will be the one you wrote, rather than the one anyone can write.

  • trusting too many things, leading to transitive trust issues. e.g. when you have component A trusts component B trusts component C, then any hole in component C is a hole in the entire system. Part of your thought process should be 'normally we control component C, but what if an attacker owns component C?'.

frankodwyer
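
A sketch of the first bullet: describe the few known-good shapes of input and reject everything else, instead of trying to enumerate the bad ones (the field names and rules are invented):

    import re

    COUNTRY_CODES = {"US", "GB", "DE", "FR"}  # the values the code is supposed to handle
    ORDER_ID_RE = re.compile(r"[0-9]{6}")

    def validate_order(country: str, order_id: str) -> bool:
        """Deny by default: anything that doesn't match the known-good shapes is rejected."""
        return country in COUNTRY_CODES and ORDER_ID_RE.fullmatch(order_id) is not None

    print(validate_order("US", "123456"))           # True
    print(validate_order("US", "1; DROP TABLE x"))  # False - not a known-good shape
    print(validate_order("XX", "123456"))           # False - unknown country, denied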
+1  A: 

Check out OWASP Top 10

dr. evil
+1  A: 

Creating a backdoor for administration purposes with a unique hardcoded password.

jmservera
A: 

1) Thinking they understand security in the first place
2) Using an API without reading up on what they are trying to achieve, e.g. DESCryptoProvider (see 1) - an API is NOT an invitation to code; just because you can doesn't mean you have to
3) Rolling your own, i.e. the 'I'm cleverer than you' stuff
4) Thinking 'no-one will ever do that, so I don't need to worry'
5) Following on from 4 - misunderstanding human beings
6) Thinking that SSL solves all their problems
7) Thinking that rainbow tables are a problem - read this if you disagree

If there's kudos, desire and money to be gained then someone will hack your system.

zebrabox
+1  A: 

A big mistake programmers make regarding security is thinking that security can be "added" after the fact: 'OK, now we have to make this secure!' This is wrong!

Security is built in from the architecture onwards and then through the design of the system. By the time you arrive at coding, most of the project's security fate has already been written!

Ariel