views:

1004

answers:

16

Dear web developers, we know that every executable file can be reverse engineered (disassembled, decompiled). No matter how strong the security you implement, if crackers want to crack it, they will; it's only a question of time.

What about websites? Can we say that a website can be completely safe from attacks by hackers (assuming the hosting itself is not vulnerable)? If not, what is the reason?

+23  A: 

Can we say that a website can be completely safe from attacks by hackers?

No. Even the most secure technology in the world is vulnerable to social engineering attacks, for one thing.

bcat
I had never heard that term before, but it's perfect. Thanks for sharing the link as well.
Doug Neiner
If not, then how do people deal with money and do banking online? So it turns out those processes are not secure?
Narek
Indeed. But if we're careful, we can make them *secure enough* for most purposes.
bcat
Nothing is totally secure, not even a two-foot-thick steel door.
Stephen C
I can vouch that the systems I run are secure against social engineering... but there are far worse flaws in them (they depend on the Linux kernel and my distro tree being secure, etc.).
Longpoke
@Longpoke: Let's say someone is holding a loaded gun to your head and demanding your root password. Is your system still secure? If so, you're a much braver person than I am, but I think you're also quite the exception to the rule.
bcat
Is that social engineering? That's physical force IMO.
Longpoke
That's a fair point. I'm not sure there's really a difference, though. Physical force and simple trickery are both attempts to compromise security by manipulating the people in control.
bcat
There is a fine line between security problems caused by logic faults and problems caused physically. Physical security is a much greater monster and I have no clue how to tackle it. How will you know if a ninja sneaks into your server farm and rootkits the hardware of each server? However... if it's not possible for people to _find_ your hardware, they can never physically compromise it. And yes, this case does exist ;)
Longpoke
No it doesn't. At best, you can make it very difficult to find the hardware. You can never make it impossible.
Thomas
s/possible/humanly possible
Longpoke
+25  A: 

Yes, it is always possible. There is always a way in.

It's like my grandfather always said:

Locks are meant to keep the honest people out

mezoid
Wow. Very wise.
cimnine
+1 I really liked that quote.
Fábio Antunes
I would have said "mildly dishonest people" or "unmotivated dishonest people"; honest (honorable) people don't need to be persuaded to be honest - they are; otherwise they would not be honest.
Software Monkey
I've heard this quote in an (IMHO) better variant: "Locks are meant to keep the honest people honest".
erikkallen
Yeah, that makes more sense.
Mk12
Yes, but honest people will not try to break the locks. Most dishonest people won't care about them.
RCIX
+3  A: 

Websites suffer greatly from injection and cross-site scripting (XSS) attacks.

Cross-site scripting attacks on websites accounted for roughly 80% of all documented security vulnerabilities as of 2007.

Also, part of a website (in some websites, a great deal of it) is sent to the client in the form of CSS, HTML and JavaScript, which is open for inspection by anyone.
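To illustrate the XSS point, here is a minimal sketch (the function and input are hypothetical, not from the answer) of escaping user-supplied text before it is written into a page, so an injected script comes out as harmless text:

```python
import html

def render_comment(user_input):
    # Escape HTML metacharacters so user-supplied text cannot inject
    # <script> tags or attributes into the generated page.
    safe = html.escape(user_input)
    return '<div class="comment">{}</div>'.format(safe)

# A malicious comment is rendered as inert text:
print(render_comment('<script>alert("xss")</script>'))
# <div class="comment">&lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;</div>
```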

Matthew Lock
All of a website is sent to the client in the form of CSS, HTML and JavaScript, which is open for inspection by anyone.
Karl
+2  A: 

Not to nitpick, but your definition of "good hosting" presumably does not mean the HTTP service running on the host is completely free from exploits.

Popular web servers such as IIS and Apache are often patched in order to protect against such exploits, which are often discovered the same way exploits in local executables are discovered.

For example, a malformed HTTP request could cause a buffer overrun on the server, leading to part of its data being executed.

Chris
+5  A: 

The key thing to remember is that websites are usually part of a huge and complex system and it doesn't really matter if the hacker enters the system through the web application itself or some other part of the entire infrastructure. If someone can get access to your servers, routers, DNS or whatever, they can bring down even the best web application. In my experience a lot of systems are vulnerable in some way or another. So "completely secure" means either "we're trying really hard to secure the platform" or "we have no clue whatsoever, but we hope everything is okay". I have seen both.

Brian Rasmussen
If your confidentiality relies on the transport layer... your site was never secure in the first place.
Longpoke
+6  A: 

You can easily write a webapp that is mathematically proven to be secure... but that proof will only hold as long as the underlying operating system, interpreter/compiler, and hardware are secure, which is never the case.

Longpoke
What does it mean to be "mathematically proven to be secure"?
bcat
As in, there is no way to compromise it in the realm of anything humans can comprehend. Maybe I should rephrase and say "formally proven". For example: I make a page that takes two arguments, x and y, and the expected behavior is that it returns 0 if x = y, else 1. The code for this page is formally proven to work if it does exactly as expected and can't possibly do anything else (in the context of the programming language).
Longpoke
OK, thanks for the clarification.
bcat
@Longpoke: I admire your optimism. There was a recent breakthrough that a team had managed to demonstrate that a microkernel was secure; it took a team of a few people a few years to deal with a few thousand lines of code. Proving that a web server is secure would be vastly harder. With a proven-secure web server, you might then be able to prove that your web page served by it is safe - but not before.
Jonathan Leffler
Link to secure microkernel commentary: http://www.schneier.com/blog/archives/2009/10/proving_a_compu.html
Jonathan Leffler
See, the thing is, one layer of abstraction can be "secure", but once you take into account the layers it relies on, it's a whole new story. For instance, did you guys verify that your compiler produces 100% expected output, that the hardware was bug-free, and that the hardware didn't rely on properties of physics with unknown discrepancies leading to undefined hardware behavior (okay, that last one is too far)? This always brings me to thinking: why aren't safer processors, like the Java ones, used more?
Longpoke
looks interesting, I will read :)
Longpoke
A: 

The fact is, hackers are always one step ahead of developers; you can never consider a site to be bulletproof and 100% safe. You just avoid malicious stuff as much as you can. In fact, you should follow a whitelist approach rather than a blacklist approach when it comes to security.
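As a rough illustration of the whitelist idea (the field names and patterns below are made up for the example, not taken from the answer): accept only input that matches an explicit list or pattern, and reject everything else, instead of trying to enumerate every bad input.

```python
import re

# Explicit whitelist of values the application knows how to handle.
ALLOWED_SORT_FIELDS = {"name", "date", "price"}
# Whitelist by pattern rather than blacklisting "dangerous" characters.
USERNAME_PATTERN = re.compile(r"^[A-Za-z0-9_]{3,20}$")

def validate_sort_field(field):
    if field not in ALLOWED_SORT_FIELDS:
        raise ValueError("unsupported sort field")
    return field

def validate_username(name):
    if not USERNAME_PATTERN.match(name):
        raise ValueError("invalid username")
    return name
```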

Sarfraz
Aren't hackers developers too, in a way? ;)
cimnine
From my point of view they are. And the most talented ones.
Fábio Antunes
*Always* one step ahead? I was under the impression that the developer had to write (or at least design) the software before it could be compromised. ;-)
James
Whitelist rather than blacklist? On a public website that would just be plain silly.
DanSingerman
Is that a "fact" that "hackers" are *always* one step ahead of developers? It's dangerous when you start throwing in terms like "always" and "fact" into a subjective argument.
mrduclaw
Of course a hacker may be a developer too. The developer develops something, the hacker finds a security hole, and the developer is now one step behind. Yes, the whitelist approach; I have read this in many security books and articles.
Sarfraz
+1  A: 

Can I crack your site? Sure, I'll just hire a few suicide bombers to blow up your servers. Or... I'll blow up the power plants that power your site, or do some sort of social engineering; and DDoS attacks would quite likely be effective on a large scale, not to mention atom bombs...

Short answer: yes.

rFactor
Blowing up something isn't the same as cracking it. You can't get any useful info from it then.
Matthew Lock
Oh, yeah. My bad. However, the site would still not be safe. :)
rFactor
+1  A: 

This might be the wrong website to discuss that. However, it is widely known that security and usability are inversely related. See this post by Bruce Schneier for example (it refers to another website, but there's a lot of interesting reading on the issue on Schneier's blog).

lorenzog
+2  A: 

It's not possible to make anything 100% secure.

All that can be done is to make something hard enough to break into that the time and effort spent doing so make it not worth doing.

Mez
+5  A: 

To sum up and add to the posts that precede:

  1. Web as a shared resource - websites are useful only so long as they are accessible. Render the website inaccessible, and you've broken it. Denial-of-service attacks, which amount to flooding the server so that it can no longer respond to legitimate requests, will always be a factor. It's a game of keep-away: big sites find ways to distribute, hackers find ways to deluge.
  2. Dynamic data = dynamic risk - if the user can input data, there's a chance for a hacker to be a menace. Today the big concepts are cross-site scripting and SQL injection (see the sketch after this list for the injection case), but once one avenue for cracking is figured out, chances are high that another mechanism will arise. You could, conceivably, argue that a totally static site can be secure from this, but then how many useful sites fit that bill?
  3. Complexity = the more complex, the harder to secure - given the rapid change of technology, I doubt that any web developer could say with 100% confidence that a modern website was secure - there's too much unknown code. Setting the host aside (the server, network protocols, OS, and maybe database), there are still all the great new libraries in JEE and .NET. And even a less enterprise-y architecture will have some serious complexity that makes knowing all potential inputs and outputs of the code prohibitively difficult.
  4. The authentication problem = by definition, the website lets a remote user do something useful on a server that is far away. Knowing and trusting the other end of the communication is an old challenge. These days server-side authentication is relatively well implemented and understood, and (so far as I know!) no one has managed to hack PKI. But getting user authentication ironed out is still quite tricky. It's doable, but it's a tradeoff between difficulty for the user and for configuration, and a system with a higher risk of vulnerability. And even a strong system can be broken when users don't follow the rules or when accidents happen. None of this applies if you want to make a public site open to all users, but that severely limits the features you'll be able to implement.
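As a small illustration of the SQL injection point in item 2 (the table and query here are hypothetical, using Python's built-in sqlite3 module), a parameterized query keeps user input out of the SQL text entirely:

```python
import sqlite3

def find_user(conn, username):
    # The "?" placeholder sends the value separately from the SQL text,
    # so input like "alice'; DROP TABLE users;--" is treated as plain data,
    # never as part of the statement.
    cur = conn.execute("SELECT id, name FROM users WHERE name = ?", (username,))
    return cur.fetchone()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('alice')")
print(find_user(conn, "alice"))                        # (1, 'alice')
print(find_user(conn, "alice'; DROP TABLE users;--"))  # None, and the table is intact
```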

I'd say that websites simply change the nature of the security challenge from the challenges of client-side code. The developer does not need to be as worried about code replication, but the developer does need to be aware of the risks that come from centralizing data and access to a server (or collection of servers). It's just a different sort of problem.

bethlakshmi
+1  A: 

Assuming the server itself isn't compromised and has no other clients sharing it, static code should be fine. Things usually only start to get funky when there's some sort of scripting language involved. After all, I've never seen a compromised "It Works!" page.

OpenSS
+1  A: 

Saying 'completely secure' is a bad sign, as it implies two things:

  1. there has not been a proper threat analysis, because 'secure enough' would be the correct term;
  2. since security is always a tradeoff, a system that is completely secure will have abysmal usability, and the site will be a huge resource hog because security has been taken to insane levels.

So instead of trying to achieve "complete security" you should:

  1. Do a proper threat analysis
  2. Test your application (or have a professional test it) against common attacks
  3. Apply best practices, not extreme measures
Kimvais
A: 

The short of it is that you have to strike a balance between ease of use and security, much of the time, and decide what provides the optimal level of both for your purposes.

An excellent case in point is passwords. The easy way to go about it is to just have one, use it everywhere, and make it something easy to remember. The secure way to go about it is to have a randomly generated variable-length sequence of characters across the encoding spectrum that only the user himself knows.

Naturally, if you go too far on the easy side, the user's data is easy to pick off. If you go too far on the side of security, however, practical application could end up leading to situations that compromise the added value of the security measures (e.g. people can't remember their whole keychain of passwords and corresponding user names, and therefore write them all down somewhere; if the list is compromised, the security measures that were put into place are for naught). Hence, most of the time a balance gets struck: places ask that you put a number in your password and tell you not to do anything stupid like share it with other people.
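As a sketch of the "secure" end of that tradeoff (the length and character set are arbitrary choices, not from the answer), generating such a password with Python's standard secrets module might look like this:

```python
import secrets
import string

def generate_password(length=16):
    # Draw each character from letters, digits and punctuation using a
    # cryptographically secure random source.
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # e.g. 'k%Q7w!pZ#r2@Lm9&' -- hard to guess, and hard to remember
```

The output illustrates the usability cost described above: the password is strong precisely because nobody can remember it.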

Even if you remove from the equation the possibility of a malicious person with the keys to everything leaking data, human stupidity is infinite. There is no such thing as 100% security.

Kaji
A: 

Can we say that a website can be completely safe from attacks by hackers (we assume that the hosting is not vulnerable)?

Well if we're going to start putting constraints on the attacker, then of course we can design a completely secure system: we just have to bar all of the attacker's attacks from the scenario.

If we assume the attacker actually wants to get in (and isn't bound by the rules of your engagement), then the answer is simply no, you can't be completely safe from attacks.

mrduclaw
A: 

Yes, it's possible for a website to be completely secure, for a reasonable definition of 'complete' that includes your original premise that the hosting is not vulnerable. The problem is the same as with any software that contains defects; people create software of a complexity that is slightly beyond their capability to manage and thus flaws remain undetected until it's too late.

You could start smaller and prove all your work correct and safe as you construct it, remaking any off-the-shelf components that haven't been designed to that stringent degree of quality, but unfortunately that leaves you at a massive commercial disadvantage compared to the people who can write 99% safe software in 1% of the time. Therefore there's rarely a good business reason for going down this path.

Kylotan