views:

1194

answers:

26

You know the one I am talking about.

We have all been there at some point. You get that awful feeling of dread and the realisation of "oh my god, did that actually just happen?"

Sure, you can laugh about it now though, right? So go on and share your SQL Server mishaps with us.

Even better if you can detail how you resolved your issue so that we can learn from our mistakes together.

So in order to get the ball rolling, I will go first…

It was back in my early years as a junior SQL Server Guru. I was racing around Enterprise Manager, performing a few admin duties. You know how it is, checking a few logs, ensuring the backups ran ok, a little database housekeeping, pretty much going about business on autopilot and hitting the enter key on the usual prompts that pop up.

Oh wait, was that an "Are you sure you wish to delete this table?" prompt? Too late!

Just to confirm for any aspiring DBAs out there: deleting a production table is a very, very bad thing!

Needless to say, a world record was promptly set for the fastest database restore to a new database, swiftly followed by a table migration, oh yeah. Everyone else was none the wiser of course, but still a valuable lesson learnt. Concentrate!

+45  A: 

I suppose everyone has missed the WHERE clause off a DELETE or UPDATE at some point...

anon
luckily not in production environment :-)
Greco
yup.. changed all of the nodes in the intranet tree to my name when testing "Rename Folder".. Luckily I did follow best practices WRT backups.
Ryan Emerle
Yep, been there, done that my friend.
John Sansom
Sure why not. ;)
JohannesH
Which is why MySQL has the --i-am-a-dummy compile flag.
epochwolf
Being paranoid, I usually write a SELECT first, run it, check it, then change to a DELETE.
Jack Sleight
Aye, I've done it against our development environment many a time. It was quite embarrassing, I must admit.
Kezzer
That's why I always turn off auto-commit on production. The number of times I've had '32,675 rows updated'...
DanSingerman
Yep, updated everyone's entry in the company's user database to become ME, mwahahaha!! (felt pretty bad actually)
GoodEnough
From the users table, no less....
jro
I didn't miss the WHERE clause, I just forgot to highlight it; on the userslogin table, of course, so all logins for all users got changed to the same one. Thank God for audit tables, as it took me less than five minutes to fix.
HLGEM
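The defensive habits described in these comments (run a SELECT with the same WHERE first, then do the destructive statement inside an explicit transaction) can be sketched roughly as follows; the table, column, and cutoff date here are made up purely for illustration:

```sql
-- First, see what you are about to touch
SELECT COUNT(*) FROM dbo.Orders WHERE OrderDate < '2008-01-01';

-- Then run the destructive statement inside an explicit transaction
BEGIN TRANSACTION;

DELETE FROM dbo.Orders WHERE OrderDate < '2008-01-01';

-- Compare the rows-affected count against the SELECT above.
-- If it looks right:  COMMIT TRANSACTION;
-- If in any doubt, the safe default is:
ROLLBACK TRANSACTION;
```

Checking the rows-affected count against the earlier SELECT before committing catches most missing-WHERE (and missing-highlight) accidents.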
+2  A: 

I've seen plenty other people miss a WHERE clause.

Myself, I always type the WHERE clause first, and then go back to the start of the line and type in the rest of the query :)

Alnitak
I was very careful to do that, until one time a bad combination of closing quote marks made the where clause appear not to be there. 600 deleted rows later, I realised my error.
endian
+1  A: 

Columns are nullable, and parameter values fail to retrieve the correct information...

Greco
+5  A: 

Inserted 5 million test persons into a production database. The biggest mistake in my opinion was to let me have write access to the production db in the first place. :P Bad dba!

JohannesH
+6  A: 

My biggest SQL Server mistake was assuming it was as capable as Oracle when it came to concurrency.

Let me explain.

When it comes to transactional isolation level in SQL Server you have two choices:

  1. Dirty reads: transactions can see uncommitted data (from other transactions); or
  2. Selects block on uncommitted updates.

I believe these come from ANSI SQL.

(2) is the default isolation level and (imho) the lesser of two evils. But it's a huge problem for any long-running process. I had to do a batch load of data and could only do it out of hours because it killed the website while it was running (it took 10-20 minutes as it was inserting half a million records).

Oracle on the other hand has MVCC. This basically means every transaction will see a consistent view of the data. They won't see uncommitted data (unless you set the isolation level to do that). Nor do they block on uncommitted transactions (I was stunned at the idea an allegedly enterprise database would consider this acceptable on a concurrency basis).

Suffice it to say, it was a learning experience.

And ya know what? Even MySQL has MVCC.

cletus
SQL Server has had this since version 2005; read up on snapshot isolation
SQLMenace
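SQL Server 2005 and later do offer snapshot-based options, as noted above. A rough sketch of enabling them (the database name is hypothetical, and the ALTER statements may need exclusive access to the database):

```sql
-- Row-versioned "read committed": readers stop blocking on writers
ALTER DATABASE MyDb SET READ_COMMITTED_SNAPSHOT ON;

-- Optionally allow full snapshot isolation as well,
-- which sessions then opt into per transaction:
ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION ON;

SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
```

Both options use row versioning in tempdb, so they trade some tempdb load for the MVCC-style behaviour described in the answer.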
Tool X didn't have Feature Y, but I assumed it did and made myself look stupid, so to save face I blamed it on the tool.
endian
What is next cursors are slow in SQL Server but not in Oracle? :-)
SQLMenace
No way man! SQLServer rools!! Oracle sux!!!
mackenir
SQL is cheaper. For the money, Oracle should program itself
adolf garlic
+6  A: 

I changed all of the prices to zero on a high-volume, production, eCommerce site. I had to take the site down and restore the DB from backup.. VERY UGLY.

Luckily, that was a LOOONG time ago.

Ryan Emerle
it was last week right ;-)
John Sansom
+4  A: 

Thankfully most of us only ever make one cock-up before realising that using transactions really is very, very trivial. I've amended thousands of records by accident before; luckily, rollback is there...

If you're querying the live environment without having thoroughly tested your scripts, then I think "embarrassing" should really be "foolhardy", or perhaps "unprofessional".

Ed Blackburn
Agreed. I've never understood why SQL Server Management Studio and other SQL tools don't automagically wrap everything in a transaction, just like my Sybase plugin for emacs did years ago.
endian
Some good advice! Use transactions if querying the live environment or an account without elevated privileges.
John Sansom
+5  A: 

forgetting to highlight the WHERE clause when updating or deleting

scripting procs with the "drop dependent objects" option checked, and running that on production

SQLMenace
oh god, yes, definitely been there
Cruachan
+1  A: 

I worked with a junior developer once who got confused and called "Drop" instead of "Delete" on a table.

It was a long night working to get the backup restored...

Edit: I should have mentioned, this was in a production environment, and the table was full of data...

Guy Starbuck
Why do you need a restore? If you had the table DDL in source control, all you needed was to run that little piece of code.
SQLMenace
Perhaps the table had records in it.
mackenir
Testing, testing...
adolf garlic
+2  A: 

Like zabzonk said, forgot the WHERE clause on an update or two in my day.

orthod0ks
+1  A: 

The biggest mistake was giving developers "write" access to the production DB. Many DEV and TEST records were inserted / overwritten and backed up to production until it was wisely suggested (by me!) to only allow read access!

WACM161
A good rule to have. No access to production at all is often preferable ;-)
John Sansom
+4  A: 
  1. Take a big database (terabyte+) with over 1500 tables, almost all of which are dependent on other tables.
  2. Run a DTS package to copy some new schema objects in as part of an upgrade / data consolidation.
  3. Leave the "Include all dependent objects" and "Drop destination objects first" ticked.
  4. Boom.
  5. Cry.
CodeBadger
How long did that take you to fix?
Chris Marisic
I'm glad to say we were amid downtime, and with some seat-of-the-pants trickery (restores, replication hacks, that sort of thing) we only had to extend a 16 hour downtime by a couple of hours. It was a long day and a painful lesson though! I'll never forget the words "where has the user table gone?" :)
CodeBadger
Did #3 also; a quick point-in-time restore to 5 minutes ago made for a quick fix
SQLMenace
I don't know much about databases yet, but can you tell me why that one had over 1500 _tables_? "Our life is frittered away by detail... Simplify, simplify, simplify!" -- Henry David Thoreau
Joey Adams
Huge application...not very well designed, tons of customisations for each major customer...yep, probably too big, but it's not hard to manage that many tables so long as you don't rely on management studio or, in those days, enterprise manager (cough) to navigate the database!
CodeBadger
+4  A: 

Starting off a restore from last week onto a production instance instead of the development instance. Not a good morning.

Jonathan
+1  A: 

Sort of SQL Server-related. I remember learning how important it is to always dispose of a SqlDataReader. I had a system that worked fine in development, and happened to be running against the ERP database. In production it brought down the database, because I assumed it was enough to close the SqlConnection, and had hundreds, if not thousands, of open connections.

Daniel
+2  A: 

We had an old application that didn't handle syncing with our HR database for name updates very efficiently, mainly due to the way they keyed in changes to titles. Anyway, a certain woman got married and I had to write a database change request to update her last name. I forgot the WHERE clause, and everyone in said application was now named Allison Smith.

RandomNoob
+5  A: 

I was working on the payment system on a large online business. Multi-million Euro business.

  • Get a script from a colleague with a few small updates.
  • Run it on production.
  • Get an error report 30 minutes later from helpdesk, complaining about no purchases last 30 minutes.
  • Discover that all connections are waiting on a table lock to be released
  • Discover that the script from my colleague started with an explicit BEGIN TRANSACTION and expected me to manually type COMMIT TRANSACTION at the end.
  • Explain to boss why 30 minutes of sales were lost.
  • Blame myself for not reading the script documentation properly.
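One habit that guards against this scripted-BEGIN TRANSACTION trap is to check for open transactions on your connection before walking away from a session. A minimal sketch:

```sql
-- After running an unfamiliar script, check for transactions
-- still open on this connection
SELECT @@TRANCOUNT AS open_transactions;

-- If it's non-zero, decide explicitly rather than leaving locks held:
-- COMMIT TRANSACTION;   -- if the changes are good
-- ROLLBACK TRANSACTION; -- if in doubt
```

A non-zero @@TRANCOUNT means every lock the script took is still being held, which is exactly what blocked the purchases above.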
Jonas Lincoln
+1  A: 

At the start of my co-op term I ended up expiring access for everyone who used this particular system (which was used by a lot of applications in my Province). In my defense, I was new to SQL Server Management Studio and didn't know that you could 'open' tables and edit specific entries with a SQL statement.

I expired all the user access with a simple UPDATE statement (access to this application was given by a user account on the SQL box as well as a specific entry in an access table) but when I went to highlight that very statement and run it, I didn't include the WHERE clause.

A common mistake, I'm told. The quick fix was to unexpire everyone's accounts (including accounts that were supposed to be expired) until the database could be backed up. Now I either open tables and select specific entries with SQL, or I wrap absolutely everything inside a transaction followed by an immediate rollback.

Dalin Seivewright
+3  A: 

One of my favorites happened in an automated import when the client changed the data structure without telling us first. The Social Security number column and the amount of money we were to pay the person got switched. Luckily we found it before the system tried to pay someone his social security number. We now have checks in automated imports that look for funny data before running and stop it if the data seems odd.

HLGEM
+1  A: 

Our IT Ops decided to upgrade to SQL 2005 from SQL 2000.

The next Monday, users were asking why their app didn't work. Errors like:

DTS Not found etc.

This led to a nice set of 3 Saturdays in the office rebuilding the packages in SSIS, with a good overtime package :)

ck
+1  A: 

Not exactly a "mistake", but back when I was first learning PHP and MySQL I would spend hours daily trying to figure out why my code was not working, not knowing that I had the wrong password/username/host/database credentials for my SQL database. You can't believe how much time I wasted on that, and to make it even worse, this was not a one-time incident. But LOL, it's all good, it builds character.

teh_noob
+1  A: 

I once, and only once, typed something similar to the following:

psql> UPDATE big_table SET foo=0; WHERE bar=123

I managed to fix the error rather quickly. Since that and another error my updates always start out as:

psql> UPDATE table SET WHERE foo='bar';

Much easier to avoid errors that way.

LoranceStinson
Do you think this is the real reason LINQ was made? Don't they put the SQL backwards?
johnny
Sorry, I don't program under Windows and have not touched .NET, so I wouldn't know.
LoranceStinson
+1  A: 

This was before the days when Google could help. I didn't encounter this problem with SQL Server, but with its ugly older cousin, Sybase.

I updated a table schema in a production environment. Not understanding at the time that stored procedures that use SELECT * must be recompiled to pick up new fields, I proceeded to spend the next eight hours trying to figure out why the stored procedure that performed a key piece of work kept failing. Only after a server reboot did I clue in.

Losing thousands of dollars and hundreds of (end user) man-hours at your flagship customer's site is quite an educational experience. Highly recommended!!
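On SQL Server (behaviour on that era of Sybase may well have differed) the fix is less drastic than a reboot: you can flush the cached plan for the one procedure. A sketch, with a made-up procedure name:

```sql
-- Mark the proc for recompilation, so its next execution
-- builds a fresh plan against the new table schema
EXEC sp_recompile N'dbo.usp_DoKeyWork';
```

This is also a reasonable argument for listing columns explicitly instead of using SELECT * inside stored procedures.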

hunterjrj
Ouch, the tough lessons are what you learn the most from though hey.
John Sansom
+1  A: 

A healthy number of years ago I was working on a client's site that had a nice script to clear the dev environment of all orders, carts and customers, to ease up testing. So, of course, I put the damn script into the production server's Query Analyzer and ran it.

Took some 5-6 minutes to run too, I was bitching about how slow the dev server was until the number of deleted rows came up. :)

Fortunately I had just run a full backup, since I was about to do an installation..

Sciolist
+1  A: 

Beyond the typical WHERE clause error: I ran a DROP on an incorrect DB and thus had to run a restore. Now I triple-check my server name. Thankfully I had a good backup.

Russ960
+1  A: 

I set the maximum server memory to 0. I was thinking at the time that it would automatically tell SQL Server to use all available memory (it was early). No such luck. SQL Server decided to use only 16 MB, and I had to connect in single-user mode to get the setting changed back.
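For anyone hitting the same thing: the setting lives in sp_configure, and 0 does not mean "use everything". A sketch of restoring a sane value (the 2048 MB figure is arbitrary; pick a cap that suits the box):

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Set an explicit memory cap rather than 0
EXEC sp_configure 'max server memory (MB)', 2048;
RECONFIGURE;
```

When the server is so starved you can't get a normal connection, single-user mode (as described above) or the dedicated admin connection is the way back in.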

notandy
A: 

Hit "Restore" instead of "Backup" in Management Studio.

iandisme