So for the second day in a row, someone has wiped out an entire table of data instead of the one row they were trying to delete, because they didn't qualify the DELETE with a WHERE clause.

I've been all up and down the Management Studio options, but can't find a confirm option. I know other tools for other databases have it.

+1  A: 

Try using a BEGIN TRANSACTION before you run your DELETE statement.

Then you can choose to COMMIT or ROLLBACK it.
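
For example, a minimal sketch (table and column names are just placeholders):

BEGIN TRANSACTION;

DELETE FROM dbo.Orders
WHERE OrderID = 42;

-- Check the rows-affected count (or re-run a SELECT) before deciding:
COMMIT TRANSACTION;
-- or, if it hit more rows than you intended:
-- ROLLBACK TRANSACTION;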

Galwegian
+8  A: 

Under Tools>Options>Query Execution>SQL Server>ANSI, you can enable the Implicit Transactions option, which means that you don't need to explicitly include the BEGIN TRANSACTION command.

The obvious downside of this is that you might forget to add a Commit (or Rollback) at the end, or worse still, your colleagues will add Commit at the end of every script by default.

You can lead the horse to water...

You might suggest that they always take an ad-hoc backup before they do anything (depending on the size of your DB) just in case.
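
With that option on (it's the per-session equivalent of running SET IMPLICIT_TRANSACTIONS ON), a delete isn't permanent until the transaction is explicitly ended, roughly like this sketch (table name is made up):

SET IMPLICIT_TRANSACTIONS ON;

DELETE FROM dbo.Orders WHERE OrderID = 42;   -- a transaction is opened automatically

-- Nothing is permanent until the transaction is ended explicitly:
COMMIT;      -- keep the change
-- ROLLBACK; -- or undo it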

CJM
I agree with all of these suggestions, and the best part is the 'lead a horse to water' point. Without something concrete in place to stop the user, all of these will fail at some point. Thanks for the advice.
Greg J
This usually ends up with users who reflexively confirm without thinking.
le dorfier
A: 

That is why I believe you should always:

1 Use stored procedures that are tested on a dev database before deploying to production

2 Select the data before deletion

3 Screen developers using an interview and performance evaluation process :)

4 Base performance evaluation on how many database tables they do/do not delete

5 Treat production data as if it were poisonous and be very afraid

Dining Philanderer
I think 3 and 4 are a bit patronising, but 2 and 5 are sound pieces of advice.
CJM
Meant to be funny, not patronising... My apologies if that was not obvious...
Dining Philanderer
+1  A: 

Put on your best Trogdor and Burninate until they learn to put in the WHERE clause.

The best advice is to get the muckety-mucks that are mucking around in the database to use transactions when testing. It goes a long way towards preventing "whoops" moments. The caveat is that now you have to tell them to COMMIT or ROLLBACK because for sure they're going to lock up your DB at least once.

Jason Lepack
+1  A: 

In SSMS 2005, you can enable this option under Tools|Options|Query Execution|SQL Server|ANSI ... check SET IMPLICIT_TRANSACTIONS. For future connections, update/delete queries will then require a COMMIT before they take effect.

For the current query, go to Query|Query Options|Execution|ANSI and check the same box.

This page also has instructions for SSMS 2000, if that is what you're using.

As others have pointed out, this won't address the root cause: it's almost as easy to paste a COMMIT at the end of every new query you create as it is to fire off a query in the first place.

Dave DuPlantis
+6  A: 

I'd suggest that you always write a SELECT statement with the WHERE clause first and execute it to see exactly which rows your DELETE command will delete. Then just execute the DELETE with the same WHERE clause. The same applies to UPDATEs.
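
A rough sketch of that workflow (names are illustrative):

-- 1. Preview exactly which rows would be deleted
SELECT *
FROM dbo.Orders
WHERE CustomerID = 42;

-- 2. If the result set looks right, reuse the exact same WHERE clause
DELETE FROM dbo.Orders
WHERE CustomerID = 42;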

Mr. Brownstone
I agree and that is definitely a best practice, and if everyone followed that practice I wouldn't be typing right now :)
Greg J
+1  A: 

So for the second day in a row, someone has wiped out an entire table of data as opposed to the one row they were trying to delete because they didn't have the qualified where clause

Probably the only solution will be to replace someone with someone else ;). Otherwise they will always find a workaround.

Failing that, restrict the database access for that person, provide them with a stored procedure that takes the value used in the WHERE clause as a parameter, and grant them access to execute only that stored procedure.
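
Something along these lines (procedure, table, and user names are hypothetical):

CREATE PROCEDURE dbo.DeleteOrder
    @OrderID int
AS
BEGIN
    SET NOCOUNT ON;
    -- The WHERE clause is baked in, so at most the requested row is removed
    DELETE FROM dbo.Orders WHERE OrderID = @OrderID;
END
GO

GRANT EXECUTE ON dbo.DeleteOrder TO SomeUser;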

kristof
+2  A: 

First, this is what audit tables are for. If you know who deleted all the records, you can either restrict their database privileges or deal with them from a performance perspective. The last person who did this at my office is currently on probation. If she does it again, she will be let go. You have responsibilities if you have access to production data, and ensuring that you cause no harm is one of them. This is a performance problem as much as a technical problem.

You will never find a way to prevent people from making dumb mistakes (the database has no way to know whether you meant delete table a or delete table a where id = 100, and a confirm will get hit automatically by most people). You can only try to reduce them by making sure the people who run this code are responsible and by putting policies in place to help them remember what to do. Employees who have a pattern of behaving irresponsibly with your business data (particularly after they have been given a warning) should be fired.

Others have suggested the kinds of things we do to prevent this from happening. I always embed a select in a delete that I'm running from a query window to make sure it will delete only the records I intend. All our code on production that changes, inserts or deletes data must be enclosed in a transaction. If it is being run manually, you don't run the rollback or commit until you see the number of records affected.

Example of delete with embedded select

delete a
--select a.*   -- highlight from here down to preview the rows that will be deleted
from table1 a
join table2 b on a.id = b.id
where b.somefield = 'test'

But even these techniques can't prevent all human error. A developer who doesn't understand the data may run the select and still not understand that it is deleting too many records. Running in a transaction may mean you have other problems when people forget to commit or rollback and lock up the system. Or people may put it in a transaction and still hit commit without thinking just as they would hit confirm on a message box if there was one. The best prevention is to have a way to quickly recover from errors like these. Recovery from an audit log table tends to be faster than from backups. Plus you have the advantage of being able to tell who made the error and exactly which records were affected (maybe you didn't delete the whole table but your where clause was wrong and you deleted a few wrong records.)

For the most part, production data should not be changed on the fly. You should script the change and check it on dev first. Then on prod, all you have to do is run the script with no changes, rather than highlighting and running little pieces one at a time. Now, in the real world this isn't always possible, as sometimes you are fixing something broken only on prod that needs to be fixed now (for instance, when none of your customers can log in because critical data got deleted). In a case like this, you may not have the luxury of reproducing the problem first on dev and then writing the fix. When you have these types of problems, the fix on prod should be done by DBAs, database analysts, configuration managers, or others who are normally responsible for production data, not by a developer. Developers in general should not have access to prod.

HLGEM
A: 

Isn't there a way to give users the results they need without providing raw access to SQL? If you at least had a separate entry box for "WHERE", you could default it to "WHERE 1 = 0" or something.

I think there must be a way to back these out of the transaction journaling, too. But probably not without rolling everything back, and then selectively reapplying whatever came after the fatal mistake.

Another ugly option is to create a trigger to write all DELETEs (maybe over some minimum number of records) to a log table.
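
A minimal sketch of such a trigger (table, column, and audit-table names are made up, and the audit table is assumed to already exist):

CREATE TRIGGER dbo.trg_Orders_AuditDelete
ON dbo.Orders
AFTER DELETE
AS
BEGIN
    SET NOCOUNT ON;
    -- Copy every deleted row into an audit table, along with who did it and when
    INSERT INTO dbo.Orders_DeleteAudit (OrderID, CustomerID, DeletedBy, DeletedAt)
    SELECT d.OrderID, d.CustomerID, SUSER_SNAME(), GETDATE()
    FROM deleted d;
END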

le dorfier
+1  A: 

Lock it down:

REVOKE delete rights on all your tables.

Put in an audit trigger and audit table.

Create parameterized delete SPs, and only give rights to execute them on an as-needed basis.
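
A sketch of what that could look like (role, table, and procedure names are illustrative; the procedure would be like the one sketched in an earlier answer):

-- Take away direct DELETE rights
REVOKE DELETE ON dbo.Orders FROM DevRole;

-- Allow deletes only through a parameterized procedure
GRANT EXECUTE ON dbo.DeleteOrder TO DevRole;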

Cade Roux