One of my clients uses McAfee ScanAlert (i.e., HackerSafe). It basically hits the site with about 1,500 bad requests a day looking for security holes. Since it exhibits malicious behavior, it is tempting to just block it after a couple of bad requests, but maybe I should let it exercise the UI. Is it a true test if I don't let it finish?
If it's not hurting the performance of the site, I think it's a good thing. If you had 1,000 clients all doing that to the same site, then yes, block it.
But if the site was built for that client, I think it's fair enough that they do it.
Isn't it a security flaw in the site itself if it lets attackers throw everything in their arsenal at it?
Well, you should focus on closing holes, rather than trying to thwart scanners (which is a futile battle). Consider running such tests yourself.
It's good that you block bad requests after a couple of attempts, but in this case you should let it continue. If you block it after 5 bad requests, you won't know whether the 6th request would have crashed your site.
EDIT: I meant that a real attacker might send only one request, similar to one of those 1,495 that you never tested because you blocked the scanner, and that single request might crash your site.
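To make the trade-off concrete, here is a minimal sketch of the kind of per-IP cutoff being discussed (the threshold, window, and function names are all hypothetical). Note that once `should_block` starts returning True, every later probe from that source goes untested:

```python
import time
from collections import defaultdict

BAD_REQUEST_LIMIT = 5   # hypothetical cutoff, as in the example above
WINDOW_SECONDS = 3600   # count bad requests over a one-hour sliding window

_bad_hits = defaultdict(list)  # source IP -> timestamps of bad requests


def should_block(ip, now=None):
    """Record one bad request from `ip`; return True once the limit is exceeded."""
    now = time.time() if now is None else now
    # Keep only the bad requests still inside the sliding window, then add this one.
    hits = [t for t in _bad_hits[ip] if now - t < WINDOW_SECONDS]
    hits.append(now)
    _bad_hits[ip] = hits
    return len(hits) > BAD_REQUEST_LIMIT
```

With a limit of 5, the scanner's 6th through 1,500th requests never reach the application at all, which is exactly the concern raised in the edit above.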
How is a service specifically designed to test for security holes so you can fix them exhibiting "malicious behaviour"? Are you exhibiting malicious behaviour when you test your own code for vulnerabilities?
Preventing security breaches requires different strategies for different attacks. For instance, it would not be unusual to block traffic from certain sources during a denial-of-service attack. If a user fails to provide proper credentials more than 3 times, the IP address is blocked or the account is locked.
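The lockout rule just described can be sketched like this (class and method names are illustrative; the three-attempt limit follows the text):

```python
class LoginGuard:
    """Locks an account after too many consecutive failed credential attempts."""

    MAX_FAILURES = 3  # per the policy above: lock after more than 3 failures

    def __init__(self):
        self._failures = {}   # username -> consecutive failure count
        self._locked = set()

    def is_locked(self, username):
        return username in self._locked

    def record_attempt(self, username, success):
        if success:
            self._failures.pop(username, None)  # reset the count on success
            return
        count = self._failures.get(username, 0) + 1
        self._failures[username] = count
        if count > self.MAX_FAILURES:
            self._locked.add(username)
```

A real implementation would also persist state and expire locks after some period, but the counting logic is the same.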
When ScanAlert issues hundreds of requests, some of which may include SQL injection (to name one technique), it certainly matches what the site code should consider "malicious behavior".
In fact, just putting UrlScan or eEye SecureIIS in place may deny many such requests, but is that a true test of the site code? It's the job of the site code to detect malicious users/requests and deny them. At what layer is the test valid?
ScanAlert presents two distinct issues: the sheer number of malformed requests, and each individual request as a test in its own right. It seems like the two pieces of advice that emerge are as follows:
- The site code should not try to detect malicious traffic from a particular source and block that traffic, because that is a futile effort.
- If you do attempt such a futile effort, at least make an exception for requests from ScanAlert in order to test the lower layers.
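The second point amounts to a simple allowlist check in front of whatever blocking logic exists. A rough sketch (the scanner addresses are placeholders; in practice they would come from McAfee's published ranges):

```python
from ipaddress import ip_address, ip_network

# Placeholder ranges standing in for ScanAlert's published source addresses.
SCANNER_NETWORKS = [ip_network("203.0.113.0/24")]


def is_known_scanner(ip):
    """Return True if `ip` falls inside a known scanner address range."""
    addr = ip_address(ip)
    return any(addr in net for net in SCANNER_NETWORKS)


def handle_bad_request(ip, block):
    """Apply the normal blocking policy, but exempt the scanner
    so it can exercise the lower layers of the site."""
    if is_known_scanner(ip):
        return False          # never block: let ScanAlert finish its full run
    return block(ip)          # otherwise defer to the usual blocking logic
```

The exemption only skips the block decision; the requests themselves still flow through the site code, so the application-level checks get tested.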