A simple, stupid "UPDATE table SET something=another WHERE (always true)" run by accident can easily destroy everything in the database. It could be a human mistake, an SQL injection/overflow/truncation attack, or a bug in the code that builds the WHERE clause.

Do popular databases provide a feature that protects tables by limiting the maximum number of rows that can be updated in one SQL statement?

I mean some kind of defensive setting that applies per table as an access right in the database: no way to bypass it, less code to write, and no human mistakes (unless you grant yourself too much access).

+1  A: 

I don't know of anything.

I'm not sure that this would solve anything. How can the database distinguish between a SQL injection attack and a nightly batch update that happens to exceed your limit?

One assumption is that auto-commit is set to true. If the SQL injection attack isn't committed, you always have the opportunity to roll it back, assuming that you're watching the logs and such.

I think the real answer is better layering of apps, validation, binding, etc. You can't do SQL injection if those measures are in place.
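
For example, "binding" here means parameterized statements rather than string concatenation; a minimal T-SQL sketch (table, column, and values are hypothetical):

EXEC sp_executesql
    N'UPDATE dbo.MyTable SET SomeColumn = @val WHERE Id = @id',
    N'@val NVARCHAR(100), @id INT',
    @val = N'new value', @id = 42
-- @val and @id are bound as data, never spliced into the SQL text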

duffymo
For example, there are three DB user roles: user, supervisor, and admin. A user can only update one row at a time, a supervisor can batch-update up to 500 rows, and an admin can do more. Or, as another example, you cannot do a full-table update unless you have obtained a security token.
Dennis Cheung
A: 

You can wrap the update in a transaction and prompt the user (letting them know how many rows are going to be updated) before committing.
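
A sketch of the pattern, with a hypothetical table and filter (the application would display the count and wait for confirmation):

BEGIN TRANSACTION
UPDATE dbo.MyTable SET Status = 'archived' WHERE LastLogin < '2007-01-01'
SELECT @@ROWCOUNT AS RowsAffected  -- show this to the user
-- COMMIT TRANSACTION if the count looks right; ROLLBACK TRANSACTION if not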

Jim Anderson
That's coding, and it can fail if there's a bug. I am looking for something that is pure configuration and can even protect me from myself in Query Analyzer.
Dennis Cheung
Okay. David B's trigger approach may be more appropriate. I have seen the transaction approach applied a lot within applications, but of course it won't help you in your case.
Jim Anderson
+6  A: 

You can add a trigger that checks how many rows are being updated (count the rows in the inserted magic trigger table) and RAISERROR if that's too many rows.
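
A minimal sketch of that idea, with a hypothetical table and limit (for a DELETE trigger you would count the deleted table instead):

CREATE TRIGGER dbo.trg_RowLimit ON dbo.MyTable
FOR UPDATE
AS
IF (SELECT COUNT(*) FROM inserted) > 100  -- hypothetical per-statement cap
BEGIN
    RAISERROR ('Update affects too many rows.', 16, 1)
    ROLLBACK TRANSACTION
END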

David B
Could you please show me any example?
Dennis Cheung
But then how do you want to determine whether a batch is legitimate?
TcKs
Interesting approach. I like it.
Jim Anderson
If the batch is legitimate it would have to disable the trigger first. That extra action would require forethought which would make it much less likely for an accident to happen.
Tom H.
+1 clever. I tend to avoid triggers when possible but this is a nice, (hopefully) light-weight use for them
Michael Haren
A: 

I understand your reasons, but how do you want to handle batches that are legitimate?

If you make some changes manually and want to be able to "undo" them, use transactions. If you want to be able to reconstruct data, keep an archive of changes. But you cannot build a "this is a correct/incorrect batch" check that gives 100% correct results from the batch alone.

TcKs
It is just a brainstorm. For example, it would reduce the chance of anyone making a stupid mistake in phpMyAdmin or SQL Server Management Studio. And if the DB supported it natively, we wouldn't have to repeat the checking patterns in our code.
Dennis Cheung
A: 

You just need to write stored procedures and expose only those to users. And don't work in a privileged account in normal situations; only connect as admin when needed.
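
For instance (hypothetical procedure and table; the application account gets EXECUTE but no direct UPDATE right on the table):

CREATE PROCEDURE dbo.usp_SetStatus
    @Id INT,
    @Status VARCHAR(20)
AS
    UPDATE dbo.MyTable SET Status = @Status WHERE Id = @Id  -- keyed update: at most one row
GO
GRANT EXECUTE ON dbo.usp_SetStatus TO app_user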

Ken Yao
I think the OP wants a method that will work even on ad hoc queries against the underlying tables.
Jim Anderson
That's fine - the user can be allowed to run queries only.
Tony Andrews
+1  A: 

The short answer is "no"...

Oracle allows you to define profiles that can be assigned to users to limit usage of resources such as CPU and logical reads. However, this isn't intended for your purpose; it is more about managing resources in a multi-user environment.

Perhaps more importantly, it also has flashback table so that unintended changes can be easily undone.
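
Roughly, with hypothetical names (LOGICAL_READS_PER_CALL caps the work a single call may do; FLASHBACK TABLE requires row movement to be enabled):

CREATE PROFILE limited_dml LIMIT LOGICAL_READS_PER_CALL 100000;
ALTER USER app_user PROFILE limited_dml;

-- and to undo an unintended change:
ALTER TABLE my_table ENABLE ROW MOVEMENT;
FLASHBACK TABLE my_table TO TIMESTAMP SYSTIMESTAMP - INTERVAL '15' MINUTE;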

Most of your scenarios should be dealt with by other means:

  • human mistake: most users should not be granted update privileges on tables, they should be forced to call APIs (typically via an application) to perform updates. DBAs must be very careful when accessing live databases - never mind row limits, they can drop the table altogether!
  • Injection attack: these can and should be prevented from occurring
  • Code bugs: these should be caught through proper testing

If your data is important, it should be properly protected and looked after as above, and a maximum row update limit is unnecessary; if your data isn't important enough to protect as above, then why worry?

Tony Andrews
+1  A: 

As David B was first to point out, you can do this with a trigger. It's a good practice to start your triggers with a @@ROWCOUNT test anyway. So imagine:

CREATE TRIGGER dbo.trg_myTrigger_UD ON dbo.myTable FOR UPDATE, DELETE
AS
IF @@ROWCOUNT > 1 BEGIN
    RAISERROR ('Statement affects more than one row.', 16, 1)
    ROLLBACK TRANSACTION
END

This would kick out any updates and/or deletes that try to affect more than one row.

As a general rule, I start mine with a test that @@ROWCOUNT is not 0, returning immediately if it is. The point being, if the trigger was kicked off by something that actually affected no rows (UPDATE table SET col1 = 'hey' WHERE 1 = 0), there's no point in running through the trigger code, as it won't do anything anyway.
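
That guard, as the first statement of the trigger body:

IF @@ROWCOUNT = 0 RETURN  -- fired by a statement that touched no rows; skip the trigger body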

esabine