
I am intending this to be an entry which is a resource for anyone to find out about aspects of SQL that they may not have run into yet, so that the ideas can be stolen and used in their own programming. With that in mind...

What SQL tricks have you personally used that made it possible for you to do less actual real-world programming to get things done?

[EDIT]

A fruitful area of discussion would be specific techniques that allow you to do operations on the database side, making it unnecessary to pull the data back to the program and then update/insert it back into the database.

[EDIT]

The bounty button showed up today. The question had 18 upvotes, plus 9 upvotes for my answer. So that's roughly 270 rep points. I decided to double it, so 540 was the value. The slider bar that lets you specify the value only goes up to 500, so 500 it is.

We have some pretty good ideas in here. I am hoping the promise of the bounty will bring some more entries in. I expect to pick one before the week expires.

I recommend that you flesh out your answer where possible to make it easy for the reader to understand the value that your technique provides. Visual examples work wonders. The winning answer will have good examples.

My thanks to everyone who shared an idea with the rest of us.

+23  A: 

Besides normalization (the obvious one), setting my foreign key on update and on delete clauses correctly saves me time, particularly using ON DELETE SET NULL and ON UPDATE CASCADE
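For example, a minimal sketch (the table and column names are illustrative, not from the answer): if a customer row is deleted, its orders keep their rows but lose the reference; if the customer's key changes, the orders follow it.

CREATE TABLE customers (
    customer_id INT PRIMARY KEY,
    name        VARCHAR(100)
);

CREATE TABLE orders (
    order_id    INT PRIMARY KEY,
    customer_id INT,
    FOREIGN KEY (customer_id) REFERENCES customers (customer_id)
        ON DELETE SET NULL   -- deleting a customer orphans the order gracefully
        ON UPDATE CASCADE    -- renumbering a customer updates its orders too
);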

R. Bemrose
+1 I was just going to say "on cascade foo" myself
annakata
Could you expand this, or provide some links? I am not sure what the benefit is.
EvilTeach
leaning on cascade saves you from having to write the kind of error-prone event handling triggers (or god forbid, business logic) to manage this yourself
annakata
They are essentially automatic triggers. on update cascade watches the other table and if its primary key changes, it updates any foreign keys pointing to it as well.
R. Bemrose
on delete cascade watches the other table, and if a row is deleted from it, any rows in this table that reference the deleted rows would also be deleted. There are other settings for these clauses, but not all of them are standardized between RDBMS software.
R. Bemrose
Ok. Yes. This is a good one too. Thanks.
EvilTeach
I can't believe this got voted up so many times! DELETE CASCADE is not good IMO; you are not in full control of deletion behaviour. UPDATE CASCADE should rarely be required if you use surrogate keys.
Mitch Wheat
@Mitch ditto. All the drawbacks of triggers, with indirection.
le dorfier
Since people don't seem to like ON DELETE CASCADE (and to be honest, I don't usually use it), I've changed the examples.
R. Bemrose
+1  A: 

Using Oracle hints for a "select last effective date" query.

For instance, exchange rates for a currency change several times a day, and there is no regularity to it. The effective rate for a given moment is the rate published last before that moment.

You need to select the effective exchange rate for each transaction in a table:

CREATE TABLE transactions (xid NUMBER, xsum FLOAT, xdate DATE, xcurrency NUMBER);
CREATE TABLE rates (rcurrency NUMBER, rdate DATE, rrate FLOAT);
CREATE UNIQUE INDEX ux_rate_currency_date ON rates (rcurrency, rdate);

SELECT  (
    SELECT /*+ INDEX_DESC (r ux_rate_currency_date) */
     rrate
    FROM rates r
    WHERE r.rcurrency = x.xcurrency
     AND r.rdate <= x.xdate
     AND rownum = 1
    ) AS eff_rate, xsum, xdate
FROM    transactions x

This is not recommended by Oracle, as you rely on an index to enforce SELECT order.

But you cannot pass an argument into a double-nested subquery, so you have to resort to this trick.

P.S. It actually works in a production database.

Quassnoi
I don't think you can rely on an index to enforce any kind of order. You need an explicit order by for that.
EvilTeach
It is a trick, and it's not recommended by Oracle, as you can read in my post. ORDER BY does not work in this case: it takes a double-nested query to limit an ordered set by ROWNUM, and you cannot pass a selected field into a double-nested query in the SELECT clause, only into a single-nested one.
Quassnoi
+42  A: 

Hi, I personally use the CASE statement a lot. Here are some links on it, but I also suggest googling.

4 guys from Rolla

Microsoft technet

Quick example:

SELECT FirstName, LastName, Salary, DOB, CASE Gender 
                                            WHEN 'M' THEN 'Male' 
                                            WHEN 'F' THEN 'Female' 
                                         END AS GenderName
FROM Employees
Ric Tokyo
What aspect of the case statement, lets you avoid more program code?
EvilTeach
@EvilTeach: SELECT FirstName, LastName, Salary, DOB, CASE Gender WHEN 'M' THEN 'Male' WHEN 'F' THEN 'Female' END FROM Employees; an example of presentation handling with CASE
Ric Tokyo
ok, pop that in your answer so it is easy for others to see :)
EvilTeach
Improper use of this will actually INCREASE the code you will write later though... in this example you are dealing with presentation concerns in the database. Separation of Concerns will help you later... especially when you need to track down where the value is coming from.
Webjedi
@Webjedi, in many cases, using CASE enables me to avoid writing any code at all! Sometimes all I do is build a cube and prepare it with CASE (Business Intelligence in SQL 2005). You can always go back and see how you are getting the data by having a look at your SQL. But I get your point.
Ric Tokyo
Here's one place it saves code: http://john-sheehan.com/blog/slightly-more-dynamic-order-by-in-sql-server-2005/
John Sheehan
While I like that approach in small scope... it's a long-term nightmare to maintain, from my experience. Maintaining all the possible case scenarios and keeping those in sync with your app is a pain. A new sort scenario now requires changes in both the app and the SP.
Webjedi
I love case coding... but what Webjedi says makes sense.
crosenblum
+19  A: 

I have found it very useful to interact with the database through views, which can be adjusted without any changes to code (except, of course, SQL code).
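For example, a minimal sketch (names invented for illustration): the application selects only from the view, so the tables underneath can be renamed or restructured without touching client code.

CREATE VIEW active_customers AS
SELECT c.customer_id,
       c.name,
       a.city
FROM   customers c
       JOIN addresses a ON a.customer_id = c.customer_id
WHERE  c.is_active = 1;

-- Client code only ever does this, whatever happens to the base tables:
SELECT customer_id, name, city FROM active_customers;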

JosephStyons
Yep, avoiding code changes is a good thing. Thanks.
EvilTeach
I'm less happy with this suggestion. Too often I end up tracing down performance issues in parallel through multiple execution units (views). And if I change a view to benefit my query, what else might it screw up? I don't like creating coupling among execution optimizations through views.
le dorfier
Interesting point
EvilTeach
Aren't views slow, because they are not compiled? In MS SQL Server anyway. Stored procedures are better, though not as easy to use with a SELECT I suppose...
Magnus Smith
Any time you find yourself writing similar joins/conditions in several queries, factor this into a view. Why? DRY: the same reason that it's better to factor repeated code into a function and call it. Views are no slower than the original query on PostgreSQL, nor (I'm sure) on other modern DBs.
j_random_hacker
+35  A: 

I like to use SQL to generate more SQL.

For example, I needed a query to count the number of items across specific categories, where each category is stored in its own table. I used the following query against the master category table to generate the queries I needed (this is for Oracle):

select 'select '
    || chr(39) || trim(cd.authority) || chr(39) || ', ' 
    || chr(39) || trim(category) || chr(39) || ', '
    || 'count (*) from ' || trim(table_name) || ';'
from   category_table_name ctn
     , category_definition cd
where  ctn.category_id = cd.category_id
and    cd.authority = 'DEFAULT'
and    category in ( 'CATEGORY 1'
                   , 'CATEGORY 2'
                   ...
                   , 'CATEGORY N'
                   )
order by cd.authority
       , category;

This generated a file of SELECT queries that I could then run:

select 'DEFAULT', 'CATEGORY 1', count (*) from TABLE1; 
select 'DEFAULT', 'CATEGORY 2', count (*) from TABLE4; 
...
select 'DEFAULT', 'CATEGORY N', count (*) from TABLE921;
Patrick Cuff
Ya. I use this technique from time to time to enable/disable constraints
EvilTeach
Would you kindly add an example to your answer to make it easier for others to visualize the concept?
EvilTeach
@EvilTeach - I agree, Would be interested to see this too
Zeus
Yo dawg I herd u like database so I put sql in ur sql so u can select while you select
Haoest
Nice for doing something "meta", e.g. en/disabling constraints as EvilTeach suggested. But if you need to use this for querying *data in your tables* then your DB design is broken. In your case, items from all categories should be stored in a single table, using a field to identify the category.
j_random_hacker
This is nice, but I think it can be done better. Use dbms_xmlgen.getxml. With dbms_xmlgen.getxml you don't have to spool and execute the generated select statements. See here for a similar example: http://stackoverflow.com/questions/390945/identify-a-table-with-maximum-rows-in-oracle/391026#391026
tuinstoel
Try this: select authority,category,to_number(extractvalue(xmltype( dbms_xmlgen.getxml (' select count(*) c from ' || table_name)),'/ROWSET/ROW/C')) countrows from category_table_name t, category_definition d where t.category_id = d.category_id
tuinstoel
@tuinstoel; thanks for the tip :)
Patrick Cuff
+5  A: 

This doesn't necessarily save you coding time, but this missing indexes query can save you the time of manually figuring out what indexes to create. It is also helpful because it shows actual usage of the indexes, rather than the usage you 'thought' would be common.

http://blogs.msdn.com/bartd/archive/2007/07/19/are-you-using-sql-s-missing-index-dmvs.aspx

+1  A: 

Way back, I wrote dynamic SQL in a C program that took a table as an argument. It would then access the database (Ingres in those days) to check the structure, and using a WHERE clause, load any matching row into a dynamic hash/array table.

From there, I would just lookup the indices to the values as I used them. It was pretty slick, and there was no other SQL code in the source (also it had a feature to be able to load a table directly into a tree).

The code was a bit slower than brute force, but it optimized the overall program because I could quickly do partitioning of the data in the code, instead of in the database.

Paul.

Paul W Homer
+87  A: 

This statement can save you hours and hours of programming

insert into ... select ... from

For example, the following will populate your new table with existing data:

INSERT INTO CurrentEmployee SELECT * FROM Employee WHERE FireDate IS NULL;

It avoids the need to do an ETL operation or use multiple insert statements to load your data.

Conrad
Ya, that cuts back the programming alright. Do you know any techniques to pull indexes/constraints into the new table cheaply?
EvilTeach
See this answer for an Oracle-specific solution: http://stackoverflow.com/questions/260679/is-there-an-easy-way-to-clone-the-structure-of-a-table-in-oracle#260771
BQ
Very nice, I will try that out.
EvilTeach
@Conrad: Would you care to explain this a little bit more? Thank you.
christian studer
@christian, for example, "INSERT INTO CurrentEmployee SELECT * FROM Employee WHERE FireDate IS NULL;" will populate your new table with existing data. It avoids the need to do an ETL operation or use multiple insert statements to load your data.
BQ
Pop that example into your answer so it is easy to see
EvilTeach
Ehm, trick? I thought this was supposed to be common SQL knowledge?
splattne
One person's knowledge is another person's trick. The goal here is to put out a bunch of possibly usable ideas, so that other people can learn of the existence of techniques that they can explore to help increase their productivity. Knowledge becomes common when shared.
EvilTeach
Argh @BQ - don't ever let me see you do select * again! I agree though, I didn't realise things like this counted as tricks. Kinda like saying "select blah" is a neat way of getting data out of a database.
Unsliced
@argh ya, true. If you have never seen it before, it's a new trick. I put one in on ORDER BY earlier this morning.
EvilTeach
I have to agree with splattne; this is how I learned to write INSERT statements. I only later learned about VALUES(), so I would hardly classify this as a trick. Then again, the majority of responses aren't tricks either, but they're all hella useful.
Ryan Riley
instead of WHERE FireDate IS NULL, WHERE 1=0 :-)
Cherian
+50  A: 

My old office-mate was an extreme sql enthusiast. So whenever I would complain "Oh dear, this sql stuff is so hard, I don't think there's any way to solve this in sql, I'd better just loop over the data in C++, blah blah," he would jump in and do it for me.

Mark Harrison
Oh boy, I just spit out my coffee. That is funny. I call that working smarter not harder.
Cj Anderson
Meh. I tried this. Didn't work - my company's SQL guru is also really good at teaching it.
Erik Forbes
LOL! This is in the realm of "social hacking" instead of actually employing code to solve the problem yourself.
Bill Karwin
Works well at keeping your salary low too.
Kibbee
Not very good for job security either.
jesal
+1! The geek equivalent of "I bet this car can't do 120kph!" Great comments too.
j_random_hacker
+1 I wish I could do the same
Ric Tokyo
+8  A: 

This doesn't save "programming" time, per se, but sure can save a lot of time in general, if you're looking for a particular stored proc that you don't know the name of, or trying to find all stored procs where something is being modified, etc. A quick query for SQL Server to list stored procs that have a particular string somewhere within them.

SELECT ROUTINE_NAME, ROUTINE_DEFINITION 
FROM INFORMATION_SCHEMA.ROUTINES 
WHERE ROUTINE_DEFINITION LIKE '%foobar%' 
AND ROUTINE_TYPE='PROCEDURE'

Same for Oracle:

select name, text
from user_source u
where lower(u.text) like '%foobar%'
and type = 'PROCEDURE';
Beska
@Colin: Cool! I was wishing at the time I posted I could provide the same useful hint for some other DBs.
Beska
+1. INFORMATION_SCHEMA is actually very portable and works on many DBs. On PostgreSQL all stored procs have a ROUTINE_TYPE of 'FUNCTION' however.
j_random_hacker
+6  A: 

Take advantage of SQL's ability to output not just database data but concatenated text, to generate more SQL or even Java code.

  • Generate insert statements
    • select 'insert .... values(' + col1 ... + ')' from persontypes
  • Generate the contents of an Enum from a table.
    • ...
  • Generate java Classes from table names
    • select 'public class ' + name + '{\n}' from sysobjects where...

EDIT: Don't forget that some databases can output XML, which saves you lots of time reformatting output for client applications.

Chris Nava
I'm inclined to disagree - violates SOC. How often do you see messed-up systems that have HTML all over their SQL expressions?
le dorfier
I didn't intend to imply that your program should do this. It's a trick I use to generate boilerplate code that I then edit *in development*.
Chris Nava
+76  A: 

I think the most useful one that I have used is the WITH statement.

It allows subquery reuse, which makes it possible to write as a single query invocation what would normally take two or more invocations and a temporary table.

The WITH statement will create inline views, or use a temporary table as needed, in Oracle.

Here is a silly example

WITH 
txssnInfo AS
(
    SELECT SSN, 
           UPPER(LAST_NAME)  AS LAST_NAME, 
           UPPER(FIRST_NAME) AS FIRST_NAME, 
           TAXABLE_INCOME,          
           CHARITABLE_DONATIONS
    FROM IRS_MASTER_FILE
    WHERE STATE = 'TX'                 AND -- limit to texas
          TAXABLE_INCOME > 250000      AND -- is rich 
          CHARITABLE_DONATIONS > 5000      -- might donate too

),
doltishApplicants AS
(
    SELECT SSN, 
           SAT_SCORE,
           SUBMISSION_DATE
    FROM COLLEGE_ADMISSIONS
    WHERE SAT_SCORE < 100          -- Not as smart as some others.
),
todaysAdmissions AS
(
    SELECT doltishApplicants.SSN, 
           TRUNC(SUBMISSION_DATE)  SUBMIT_DATE, 
           LAST_NAME, FIRST_NAME, 
           TAXABLE_INCOME
    FROM txssnInfo,
         doltishApplicants
    WHERE txssnInfo.SSN = doltishApplicants.SSN

)
SELECT 'Dear ' || FIRST_NAME || 
       ' your admission to WhatsaMattaU has been accepted.'
FROM todaysAdmissions
WHERE SUBMIT_DATE = TRUNC(SYSDATE)    -- For stuff received today only
;

One of the other things I like about it is that this form allows you to separate the filtering from the joining. As a result, you can frequently copy out the subqueries and execute them standalone to view the result set associated with them.

EvilTeach
Great at recursion too!
Jas Panesar
Seconded. Now how to upvote comments?
Joshua
Lol, I upvoted this one!
Jas Panesar
This is known as a Common Table Expression (at least in MSSQL) for those who want to do further research...
ck
@jas how about putting in an answer that demonstrates the technique.
EvilTeach
+1 for something new
ccook
As previously mentioned, this is a CTE, available in MS SQL 2008. They're extremely handy, and they only last the lifetime of the query, which is even more beneficial.
Kezzer
CTE's work in SQL Server 2005 as well, not just 2008...
Mun
It looks like [Postgresql 8.4](http://www.postgresonline.com/journal/index.php?/archives/56-PostgreSQL-8.4-goodies-in-store.html) intends to add them too. Nifty!
puetzk
Here's a question... are these queries executed lazily? Are their results cached? I guess the big question is... is this faster or slower than if you didn't break out all this stuff?
Will
In my environment, rewriting with a WITH improves about 60-80% of queries. It depends on the degree of reuse, and probably a million other things. Your mileage may vary.
EvilTeach
You can use WITH in Oracle too. At least in Oracle 9.2 and higher.
tuinstoel
Also available in DB2 v8 and above.
Adam Neal
+17  A: 

When developing pages in ASP.NET that need to utilize a GridView control, I like to craft the query with user-friendly field aliases. That way, I can simply set the GridView.AutoGenerateColumns property to true, and not spend time matching HeaderText properties to columns.

select
    MyDateCol 'The Date',
    MyUserNameCol 'User name'
from MyTable
Terrapin
Ya, this is exactly the sort of stuff I mean. Thanks.
EvilTeach
I appreciate the visual example as well.
EvilTeach
But this would make your application harder to internationalize.
Hosam Aly
+1. Works nicely for Excel imported datasets too. (Well, usually.)
j_random_hacker
@Hosam: Good point which I hadn't thought of. Though I often use this for small, in-house projects.
j_random_hacker
+12  A: 

The two biggest things I found helpful were doing recursive queries in Oracle using the CONNECT BY syntax, which saves trying to write a tool to do the query for you, and using the new windowing functions to perform various calculations over groups of data.

Recursive Hierarchical Query Example (note: this only works with Oracle; you can do something similar in other databases that support recursive SQL, cf. the book I mention below):

Assume you have a table, testtree, in a database that manages Quality Assurance efforts for a software product you are developing, that has categories and tests attached to those categories:

CREATE TABLE testtree(
   id INTEGER PRIMARY KEY,
   parentid INTEGER REFERENCES testtree(id),
   categoryname VARCHAR2(100),
   testlocation VARCHAR2(200));

Example Data in table:
id|parentid|categoryname|testlocation
-------------------------------------
00|NULL|ROOT|NULL
01|00|Frobjit 1.0|NULL
02|01|Regression|NULL
03|02|test1 - startup tests|/src/frobjit/unit_tests/startup.test
04|02|test2 - closing tests|/src/frobjit/unit_tests/closing.test
05|02|test3 - functionality test|/src/frobjit/unit_tests/functionality.test
06|01|Functional|NULL
07|06|Master Grand Functional Test Plan|/src/frobjit/unit_tests/grand.test
08|00|Whirlgig 2.5|NULL
09|08|Functional|NULL
10|09|functional-test-1|/src/whirlgig/unit_tests/test1.test
(...)

I hope you get the idea of what's going on in the above snippet. Basically, there is a tree structure being described in the above table: you have a root node, with Frobjit 1.0 and Whirlgig 2.5 nodes beneath it, Regression and Functional nodes beneath Frobjit, and a Functional node beneath Whirlgig, all the way down to the leaf nodes, which contain filepaths to unit tests.

Suppose you want to get the filepaths of all unit tests for Frobjit 1.0. To query on this database, use the following query in Oracle:

SELECT testlocation
   FROM testtree
START WITH categoryname = 'Frobjit 1.0'
CONNECT BY PRIOR id=parentid;

A good book that explains a LOT of techniques to reduce programming time is Anthony Molinaro's SQL Cookbook.

sheepsimulator
The cookbook looks interesting too. I had my wife order it from the library.
EvilTeach
Can you add an example to your answer to make it easier for other users to visualize?
EvilTeach
+1 recommendation for the cookbook!
jimmyorr
Yes. Thank you. The START WITH/CONNECT BY should catch some people's attention...
EvilTeach
+61  A: 

Writing "where 1=1...." that way you don't have to keep track of where to put an AND into the statement you're generating.

Greg
I don't understand this one. Can you expand it with an example?
EvilTeach
Assuming you're putting line breaks in your query for readability, the FIRST condition starts with WHERE, while all the others will start the line with AND. Using 1=1 allows all the queries you care about to be interchangeable (and easy to comment out with -- at the beginnning of the line).
BQ
Yes. I am with you. That goes hand in hand with a suggestion further down to format your queries nicely so you can copy them out for independent execution.
EvilTeach
Humm. I wonder if the 1 = 1 clause has an execution cost.
EvilTeach
No, it doesn't; SQL Server folds constants before generating execution plans.
Joshua
cool. Greg can you add a small example to your answer to make it easier for others to visualize
EvilTeach
Refer to this question to know more http://stackoverflow.com/questions/242822/why-would-someone-use-where-11-and-conditions-in-a-sql-clause
Eduardo Molteni
+1. This is a favorite one of mine, too!
Camilo Díaz
Some years ago my programming team got in trouble when it was discovered that MySQL would ignore indexes when 1=1 was used.
too much php
+1 I've definitely used this one.
bendewey
See also http://stackoverflow.com/questions/1049512/t-sql-11-performance-hit.
Greg
you can do just WHERE 1
Darth
+1  A: 

Make sure you know what SELECT can do.

I used to spend hours writing dumb queries that SQL handles out of the box (e.g. NOT IN and HAVING spring to mind).
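As one illustration of the kind of thing SELECT gives you for free (a sketch with made-up names): finding duplicates with GROUP BY/HAVING instead of post-processing rows in application code.

-- Customers whose email appears more than once
SELECT   email, COUNT(*) AS occurrences
FROM     customers
GROUP BY email
HAVING   COUNT(*) > 1;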

DrG
What specific example can you show?
EvilTeach
+1  A: 

Three words... UPDATE FROM WHERE
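Presumably something along these lines (a sketch in SQL Server syntax; the tables are invented): updating one table from another in a single set-based statement, instead of a row-by-row loop in application code.

UPDATE o
SET    o.discount = c.default_discount
FROM   orders o
       JOIN customers c ON c.customer_id = o.customer_id
WHERE  c.tier = 'GOLD';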

Humm. What do you mean by that? Can you show an example?
EvilTeach
A: 

Denormalize when performance is a big issue.

klyde
I think this results in more code. I have used this practice on my own, of course, but setting up additional de-normalized tables requires that you keep those tables up to date with the others, which results in a lot more code. But in most instances it can certainly speed up your application.
nlaq
How does this reduce the amount of code you would have to write otherwise?
EvilTeach
Either this is "duh", or it's classic pre-optimization.
le dorfier
This question is about easier coding, not run-time performance.
finnw
+3  A: 

SQL's Pivot command. Learn it. Live it.
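A minimal sketch of the SQL Server 2005+ PIVOT syntax (the table and work-week values are invented for illustration), counting orders per product per work week:

SELECT product, [WW05], [WW06], [WW07]
FROM   (SELECT product, workweek, order_id FROM orders) src
PIVOT  (COUNT(order_id) FOR workweek IN ([WW05], [WW06], [WW07])) p;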

BoltBait
Agreed, the performance is great on this one. +1
jcollum
Can you add a simple example so that other readers have something to visualize?
EvilTeach
But, I will say this... Pivot is the way you can generate a result set with the following columns: product, WW05 orders, WW06 orders, WW07 orders, WW08 orders, Total Orders... dynamically... starting at the current work week for X workweeks showing the sum of orders in the order table. Imagine that!
BoltBait
ok. Thanks anyway.
EvilTeach
Here's a decent article. http://www.devx.com/dbzone/Article/28165
billb
A: 

Using the INFORMATION_SCHEMA to generate a bunch of very similar queries. Say I want to build some dynamic SQL that recreates a set of views. I can use the INFORMATION_SCHEMA to find the tables that I want to use, the columns in those tables etc. Then I can build new views and so forth with the results of that query. If I need to re-generate those views/procs I'll just re-run the generator script. Used this technique to re-build 8 complex views in about 20 secs.
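A sketch of the idea (SQL Server flavor; the view naming convention is invented): generate one CREATE VIEW statement per matching table, then run the generated script.

SELECT 'CREATE VIEW v_' + TABLE_NAME +
       ' AS SELECT * FROM ' + TABLE_SCHEMA + '.' + TABLE_NAME + ';'
FROM   INFORMATION_SCHEMA.TABLES
WHERE  TABLE_TYPE = 'BASE TABLE'
  AND  TABLE_NAME LIKE 'Customer%';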

jcollum
Can you show an example?
EvilTeach
+7  A: 

Never normalize a database to the point that writing a query becomes near impossible.

Example: http://stackoverflow.com/questions/184641

BoltBait
If you're normalising correctly, it's easy to add a view that's equivalent to the original (denormalised) table. It may run slower, but it's no harder to write the query.
finnw
@OP: The problem is not over-normalisation, it's that there are several ways that a tree can be encoded in a DB table, and the approach taken in that post requires recursion, which is no good for SQL queries. Using a "nested sets" representation instead would have enabled efficient queries.
j_random_hacker
+6  A: 

Aliasing tables and joining a table with itself multiple times:

select pf1.PageID, pf1.value as FirstName, pf2.value as LastName
from PageFields pf1, PageFields pf2
where pf1.PageID = 42
and   pf2.PageID = 42
and   pf1.FieldName = 'FirstName'
and   pf2.FieldName = 'LastName'

Edit: If I have the table PageFields with rows:

id | PageID | FieldName | Value 
.. | ...    | ...       | ... 
17 | 42     | LastName  | Dent
.. | ...    | ...       | ... 
23 | 42     | FirstName | Arthur
.. | ...    | ...       | ...

Then the above SQL would return:

42, 'Arthur', 'Dent'
Allan Simonsen
Can you add some sample data to explain what this would return?
Alex B
How would I do this, without knowing in advance how many such rows there will be? Or what the field names will be?
Schmuli
+2  A: 

There are a few things that can be done to minimize the amount of code that needs to be written and insulate you from code changes when the database schema changes (it will).

So, in no particular order:

  1. DRY up your schema - get it into third normal form
  2. DML and Selects can come via views in your client code
    • When your underlying tables changes, update the view
    • Use INSTEAD OF triggers to intercept DML calls to the view - then update the necessary tables (a sketch follows after this list)
  3. Build an external data dictionary containing the structure of your database - build the DDL from the dictionary. When you change database products, write a new parser to build the DDL for your specific server type.
  4. Use constraints, and check for them in your code. The database that only has one piece of client code interacting with it today, will have two tomorrow (and three the next day).
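For item 2, a bare-bones INSTEAD OF trigger sketch (SQL Server syntax; the view and table names are invented): the client inserts into the view, and the trigger routes the data to the underlying tables.

CREATE TRIGGER trg_v_customer_insert
ON v_customer
INSTEAD OF INSERT
AS
BEGIN
    -- Split the incoming view rows across the real tables
    INSERT INTO customers (customer_id, name)
    SELECT customer_id, name FROM inserted;

    INSERT INTO addresses (customer_id, city)
    SELECT customer_id, city FROM inserted;
END;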
aekeus
A: 

Write set-based queries instead of cursors (shorter code and faster: a win all around!). Drag the table names and field names from the object browser (hours of mistyping avoided). Learn to use joins in UPDATE and DELETE statements.
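For instance, a set-based delete using a join (a sketch in SQL Server syntax with invented names) that would otherwise be a cursor loop:

DELETE ol
FROM   order_lines ol
       JOIN orders o ON o.order_id = ol.order_id
WHERE  o.status = 'CANCELLED';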

HLGEM
+2  A: 

Using the WITH statement together with the ROW_NUMBER function to perform a search and at the same time sort the results by a required field. Consider the following query, for example (it is part of a stored procedure):

    DECLARE @SortResults int;

SELECT @SortResults = 
 CASE @Column WHEN 0 THEN -- sort by Receipt Number
  CASE @SortOrder WHEN 1 THEN 0 -- sort Ascending
      WHEN 2 THEN 1 -- sort Descending
  END
    WHEN 1 THEN -- sort by Payer Name
  CASE @SortOrder WHEN 1 THEN 2 -- sort Ascending
      WHEN 2 THEN 3 -- sort Descending
  END
    WHEN 2 THEN -- sort by Date/Time paid
  CASE @SortOrder WHEN 1 THEN 4 -- sort Ascending
      WHEN 2 THEN 5 -- sort Descending
  END
    WHEN 3 THEN -- sort by Amount
  CASE @SortOrder WHEN 1 THEN 6 -- sort Ascending
      WHEN 2 THEN 7 -- sort Descending
  END
 END;

 WITH SelectedReceipts AS
 (
  SELECT TOP (@End) Receipt.*,

  CASE @SortResults
   WHEN 0 THEN ROW_NUMBER() OVER (ORDER BY Receipt.ReceiptID)
   WHEN 1 THEN ROW_NUMBER() OVER (ORDER BY Receipt.ReceiptID DESC)
   WHEN 2 THEN ROW_NUMBER() OVER (ORDER BY Receipt.PayerName)
   WHEN 3 THEN ROW_NUMBER() OVER (ORDER BY Receipt.PayerName DESC)
   WHEN 4 THEN ROW_NUMBER() OVER (ORDER BY Receipt.DatePaid)
   WHEN 5 THEN ROW_NUMBER() OVER (ORDER BY Receipt.DatePaid DESC)
   WHEN 6 THEN ROW_NUMBER() OVER (ORDER BY Receipt.ReceiptTotal)
   WHEN 7 THEN ROW_NUMBER() OVER (ORDER BY Receipt.ReceiptTotal DESC)
  END

  AS RowNumber

  FROM Receipt

  WHERE
  ( Receipt.ReceiptID LIKE '%' + @SearchString + '%' )

  ORDER BY RowNumber
 )

 SELECT * FROM SelectedReceipts
 WHERE RowNumber BETWEEN @Start AND @End
Evgeny
Clever, but unoptimizable. This should be 8 different simple optimized stored procedures selected from outside SQL.
le dorfier
And that solution was considered too. But, frankly, if we're talking about code saved, 1 call to stored proc saves code compared to 8 calls.
Evgeny
Except it isn't 8 calls, it's one of eight calls. Unless you're calling all 8 variants in a cluster; in which case it really needs work.
le dorfier
A: 

What I call the sum case construct. It's a conditional count. A decent example of it is this answer to a question.
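The construct looks something like this (a sketch with invented names): an aggregate with an embedded CASE, so only the rows matching a condition are counted.

SELECT   department,
         COUNT(*) AS total_employees,
         SUM(CASE WHEN salary > 50000 THEN 1 ELSE 0 END) AS well_paid
FROM     employees
GROUP BY department;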

Garry Shutler
+4  A: 

Off the top of my head:

  1. Use your editor artistry to make it easy to highlight subsections of a query so you can test them easily in isolation.

  2. Embed test cases in the comments so you can highlight and execute them easily. This is especially handy for stored procedures (a sketch follows after this list).

  3. Obviously a really popular technique is getting the folks on SO to work out the hard ones for you. :) We SQL freaks are real suckers for pop quizzes.
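For item 2, a sketch of the idea (the procedure and arguments are invented): keep runnable test invocations in a comment block next to the procedure, so they can be highlighted and executed.

CREATE PROCEDURE get_customer_orders @customer_id INT AS
    SELECT * FROM orders WHERE customer_id = @customer_id;

/* Test cases: highlight a line below and execute it.
   EXEC get_customer_orders @customer_id = 42;   -- typical customer
   EXEC get_customer_orders @customer_id = -1;   -- no such customer
*/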

le dorfier
Would you add some simple code examples to show the technique, so it is easy for others to visualize?
EvilTeach
+50  A: 

Copy a table without copying the data:

select * into new_table from old_table where 1=0
Eric Johnson
Some brands of SQL (e.g. MySQL) also support CREATE TABLE foo LIKE bar which does the same thing.
Bill Karwin
Wouldn't doing a LIMIT 0 at the end be more intuitive?
Elijah
'limit 0' doesn't work in Sybase. And I don't think it works in Oracle either. Seems to be a MySQL/Postgres thing.
Eric Johnson
Just tried it in SQL 2005. Awesome.
User
note that indexes and constraints are not duplicated.
Yada
+1  A: 

Sqsh (pronounced skwish) is short for SQshelL (pronounced s-q-shell), it is intended as a replacement for the venerable 'isql' program supplied by Sybase. It came about due to years of frustration of trying to do real work with a program that was never meant to perform real work.

My favorite feature is that it contains a (somewhat feeble) scripting language which allows a user to source handy functions like this from a .sqshrc config file:

\func -x droptablelike
   select name from sysobjects where name like "${1}" and type = 'U'
   \do
      \echo dropping #1
      drop table #1
      go
   \done
\done
Eric Johnson
A: 

I generate my C# db classes using a SQL stored procedure. I also generate sproc wrappers (in C#) using a stored procedure. But my favorite trick permits me to return the identity generated by an INSERT statement without using @@IDENTITY or SCOPE_IDENTITY():

Insert Into SomeTable (Col1, Col2, Col3) output inserted.$identity Values ('One', 2, 'Three');

Mark Brittingham
A: 

Multiple self joins on a table to transpose the table, i.e. convert rows into columns. This is especially useful in tables where name-value pairs are stored.

Example:

Table company_data stores company information in id-value format where id represents type of information (e.g. 100 stands for name, 101 stands for address, 102 stands for CEO name etc.).

Table company_data
company_id   variable_id    value
----------   -----------    -----
1436878       100           'Apple Computers'
1436878       101           'Cupertino'
1436878       102           'Steve Jobs'
...

select a.company_id, a.value name, b.value address, c.value ceo_name
from company_data a, company_data b, company_data c 
where a.company_id = b.company_id 
  and b.company_id = c.company_id 
  and a.variable_id =100 
  and b.variable_id = 101 
  and c.variable_id = 102

Of course, this is just a hack and should be used with caution but it's handy for "once in a while" jobs.

Rahul
Would you add a simple example to demonstrate the technique?
EvilTeach
A: 

Derived tables. The example below is simple (and makes more sense as a join), but in more complex cases they can be very handy. Using these means you don't have to insert a temporary result set into a table just to use it in a query.

SELECT   tab1.value1,
         tab2.value1
FROM     mytable tab1,
    (    SELECT id,
                value1 = somevalue
         FROM   anothertable
         WHERE  id2 = 1234 ) tab2
WHERE   tab1.id = tab2.id
Allethrin
+34  A: 

Use Excel to generate SQL. This is especially useful when someone emails you a spreadsheet full of rubbish with a request to "update the system" with their modifications.

  A       B       C
1 BlahID  Value   SQL Generation
2 176     12.76   ="UPDATE Blah SET somecolumn=" & B2 & " WHERE BlahID=" & A2
3 177     10.11   ="UPDATE Blah SET somecolumn=" & B3 & " WHERE BlahID=" & A3
4 178      9.57   ="UPDATE Blah SET somecolumn=" & B4 & " WHERE BlahID=" & A4

You do need to be careful though because people will have a column for something like UnitPrice and have 999 valid entries and one containing "3 bucks 99 cents".

Also "I have highlighted set A in yellow and set B in green. Put the green ones in the database." grrr.

EDIT: Here's what I actually use for Excel->SQL. I've got a couple of VBA functions that sit in an XLA file that's loaded by Excel on startup. Apologies for any bugs - it's a quick dirty hack that's nonetheless saved me a bucketload of time over the past few years.

Public Function SQL_Insert(tablename As String, columnheader As Range, columntypes As Range, datarow As Range) As String

    Dim sSQL As String
    Dim scan As Range
    Dim i As Integer
    Dim t As String
    Dim v As Variant

    sSQL = "insert into " & tablename & "("

    i = 0

    For Each scan In columnheader.Cells
        If i > 0 Then sSQL = sSQL & ","
        sSQL = sSQL & scan.Value
        i = i + 1
    Next

    sSQL = sSQL & ") values("

    For i = 1 To datarow.Columns.Count

        If i > 1 Then sSQL = sSQL & ","

        If LCase(datarow.Cells(1, i).Value) = "null" Then

            sSQL = sSQL & "null"

        Else

            t = Left(columntypes.Cells(1, i).Value, 1)

            Select Case t
                Case "n": sSQL = sSQL & datarow.Cells(1, i).Value
                Case "t": sSQL = sSQL & "'" & Replace(datarow.Cells(1, i).Value, "'", "''") & "'"
                Case "d": sSQL = sSQL & "'" & Excel.WorksheetFunction.Text(datarow.Cells(1, i).Value, "dd-mmm-yyyy") & "'"
                Case "x": sSQL = sSQL & datarow.Cells(1, i).Value
            End Select
        End If
    Next

    sSQL = sSQL & ")"

    SQL_Insert = sSQL

End Function

Public Function SQL_CreateTable(tablename As String, columnname As Range, columntypes As Range) As String

    Dim sSQL As String

    sSQL = "create table " & tablename & "("

    Dim scan As Range
    Dim i As Integer
    Dim t As String

    For i = 1 To columnname.Columns.Count

        If i > 1 Then sSQL = sSQL & ","

        t = columntypes.Cells(1, i).Value
        sSQL = sSQL & columnname.Cells(1, i).Value & " " & Right(t, Len(t) - 2)

    Next

    sSQL = sSQL & ")"

    SQL_CreateTable = sSQL

End Function

The way to use them is to add an extra row to your spreadsheet to specify column types. The format of this row is "x sqltype" where x is the type of data (t = text, n = numeric, d = datetime) and sqltype is the type of the column for the CREATE TABLE call. When using the functions in formulas, put dollar signs before the row references to lock them so they don't change when doing a fill-down.

eg:

Name           DateOfBirth  PiesPerDay    SQL
t varchar(50)  d datetime   n int         =SQL_CreateTable("#tmpPies",A1:C1,A2:C2)
Dave           15/08/1979   3             =sql_insert("#tmpPies",A$1:C$1,A$2:C$2,A3:C3)
Bob            9/03/1981    4             =sql_insert("#tmpPies",A$1:C$1,A$2:C$2,A4:C4)
Lisa           16/09/1986   1             =sql_insert("#tmpPies",A$1:C$1,A$2:C$2,A5:C5)

Which gives you:

create table #tmpPies(Name varchar(50),DateOfBirth datetime,PiesPerDay int)
insert into #tmpPies(Name,DateOfBirth,PiesPerDay) values('Dave','15-Aug-1979',3)
insert into #tmpPies(Name,DateOfBirth,PiesPerDay) values('Bob','09-Mar-1981',4)
insert into #tmpPies(Name,DateOfBirth,PiesPerDay) values('Lisa','16-Sep-1986',1)
geofftnz
Lol, I would never have 'a thought o' that. Thank you.
EvilTeach
I use this one a lot, especially with non-technical clients who do EVERYTHING in Excel. "We have 20,000 product updates, here's the Excel sheet, can we have this loaded by next quarter" (that's how long it took THEM to make the spreadsheet), then I do it in an hour and they crap their pants.
Neil N
I do this all the time as well. However, I typically insert my sql in columns around the data. So in your example, I'd have inserted a new column before A and placed "Update Blah Set somecolumn=" there. etc. Any tip on how to deal with single quotes when inserting strings with your approach?
Cory House
Actually, what I did was make a couple of VBA functions that are loaded every time I open Excel. SQL_CreateTable and SQL_Insert. These take a tablename and ranges that specify column names, types and row data. I can post them here if you want.
geofftnz
Ya, post em geoffnz
EvilTeach
"Also "I have highlighted set A in yellow and set B in green. Put the green ones in the database." grrr." - divine! Filter on formating!
Chris
Holy crap I forgot I'd posted this answer... I've since changed jobs and forgot to take this code with me. Thanks Me-From-History!!
geofftnz
+4  A: 

Using Boolean shortcuts in the filters to avoid what I used to do (with horrible string concatenation before executing the final string) before I knew better. This example is from a search Stored Procedure where the user may or may not enter Customer Firstname and Lastname

    @CustomerFirstName   VarChar(50) = NULL,
    @CustomerLastName    VarChar(50) = NULL,

    SELECT   * (I know, I know)
    FROM     Customer c
    WHERE    ((@CustomerFirstName IS NOT NULL AND 
               c.FirstName = @CustomerFirstName)
             OR @CustomerFirstName IS NULL)
    AND      ((@CustomerLastName IS NOT NULL AND 
               c.LastName = @CustomerLastName)
             OR @CustomerLastName IS NULL)
Rafe Lavelle
Is there a benefit over (@CustomerFirstName IS NULL OR c.FirstName = @CustomerFirstName) AND (@CustomerLastName IS NULL OR c.LastName = @CustomerLastName) that I'm not seeing?
A. Rex
No, probably none. I think you're right.
Rafe Lavelle
+11  A: 

In SQL Server 2005/2008, to show row numbers in a SELECT query result:

SELECT ( ROW_NUMBER() OVER (ORDER BY OrderId) ) AS RowNumber,
        GrandTotal, CustomerId, PurchaseDate
FROM Orders

ORDER BY is a compulsory clause. The OVER() clause tells the SQL Engine to sort data on the specified column (in this case OrderId) and assign numbers as per the sort results.

Binoj Antony
One of those things that every DBMS does differently :-( Anyhow, I didn't know how to do it in MS SQL Server, so +1
finnw
+2  A: 

I offer these suggestions, which have helped me:

stored procedures and views

Use stored procedures to encapsulate complex joins over many tables - both for selects and for updates/inserts.
You can also use views where the joins don't involve too many tables. (where "too many" is a vague quantity between 4 and 10).

So, for example, if you want information on a customer, and it's spread over lots of tables, like "customer", "address", "customer status code", "order", "invoice", etc, you could create a stored procedure called "getCustomerFullDetail" which joins all those tables, and your client code can just call that and never have to worry about the table structure.
For updates, you can create "updateCustomerFullDetail", which could apply updates sensibly.
There will be some performance hits for this, and writing the stored procedures might be non-trivial, but you're writing the non-trivial code once, in SQL (which is typically succinct).
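A bare-bones sketch of such a procedure (the column list and join keys are invented for illustration):

CREATE PROCEDURE getCustomerFullDetail @customer_id INT AS
    SELECT c.name,
           a.street, a.city,
           s.description AS status,
           o.order_date,
           i.invoice_total
    FROM   customer c
           JOIN address a ON a.customer_id = c.customer_id
           JOIN customer_status_code s ON s.status_id = c.status_id
           LEFT JOIN [order] o ON o.customer_id = c.customer_id
           LEFT JOIN invoice i ON i.order_id = o.order_id
    WHERE  c.customer_id = @customer_id;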

normalisation

Normalise your database.
Really.
This results in cleaner (simpler) update code which is easier to maintain.
It may have other benefits which are not in scope here.

I normalise to at least 4NF.
4NF is useful because it includes making all your lists of possible values explicit, so your code doesn't have to know about, e.g., all possible status codes, and you don't hard-code lists in client code.
(3NF is the one which really sorts out those update anomalies.)

perhaps use an ORM?

This is as much a question as a suggestion: would a good ORM reduce the amount of code you have to write? Or does it just remove some of the pain from moving data from the database to the client? I haven't played with one enough.

AJ
+8  A: 

In some of my older code, I issue a SELECT COUNT(*) in order to see how many rows there are, so that we can allocate enough memory to load the entire result set. Next we do a query to select the actual data.

One day it hit me.

WITH 
base AS
(
    SELECT COL1, COL2, COL3
    FROM SOME_TABLE
    WHERE SOME_CONDITION
)
SELECT COUNT(*) OVER () AS ROW_TOTAL, COL1, COL2, COL3
FROM base;

That gives me the number of rows, on the first row (and all the rest).

So I can read the first row, allocate the array, then store the first row, then load the rest in a loop.

One query, doing the work that two queries did.

EvilTeach
Not a bad idea, but note: for many SELECT queries, the server is able to produce output rows "one at a time" and send them to the client as it produces them. But by using your trick, you force the server to produce and buffer all rows before it can send the first row back to you.
j_random_hacker
To deal with result sets whose size you don't know beforehand, you can use the following strategy: start with a smallish buffer (e.g. 10 rows) and double it each time the buffer runs out. You'll never waste more than 50% of memory, and you'll never need more than log(N) reallocations for N rows.
j_random_hacker
@j - ya, in my environment that is the desired behavior. I want to load the entire result set. It is not a thing for interactive display. Using the technique described above, I do one allocation, and there is no wasted space.
EvilTeach
One other thing to consider is that for performance reasons, I prefer to get as much data as possible on each access to the database. Fetching one row at a time over the network kills performance.
EvilTeach
@EvilTeach: I wasn't clear enough. I mean there are many queries for which the server *can* begin sending rows to the client as soon as it calculates the 1st row. Your way forces the server to determine all rows of the dataset before it can send any rows back, which *increases* latency.
j_random_hacker
Either way, the server will always send batches of rows at a time to improve network performance. The issue is the initial latency that your approach forces. (And the extra server resources consumed in buffering all rows on the server side.) Of course, this is only an issue for big result sets.
j_random_hacker
+12  A: 

(Very easy trick - this post is only as long as it is because I'm trying to fully explain what's going on. Hope you like it.)

Summary

By passing in optional values you can have the query ignore specific WHERE clauses. This effectively makes that particular clause become a 1=1 statement. Awesome when you're not sure which optional values will be provided.

Details

Instead of writing a lot of similar queries just for different filter combinations, write just one and exploit Boolean logic. I use it a lot in conjunction with typed datasets in .NET. For example, let's say we have a query like this:

select id, name, age, rank, hometown from .........;

We've created a fill/get method that loads all data. Now, when we need to filter by id, we add another fill/get method:

select id, name, age, rank, hometown from ..... where id=@id;

Then we need to filter by name and hometown - the next method:

select id, name, age, rank, hometown from .... where name=@name and hometown=@hometown;

Suppose now we need to filter on all the other columns and their combinations - we quickly end up creating a mess of similar methods, like a method for filtering by name and hometown, rank and age, rank and age and name, etc. One option is to create a suitable query programmatically; the other, much simpler, is to use one fill/get method that provides all filtering possibilities:

select id, name, age, rank, hometown from .....
where
(@id = -1 OR id = @id) AND
(@name = '*' OR name = @name OR (@name is null AND name is null)) AND
(@age = -1 OR age = @age OR (@age is null AND age is null)) AND
(@rank = '*' OR rank = @rank OR (@rank is null AND rank is null)) AND
(@hometown = '*' OR hometown = @hometown OR (@hometown is null AND hometown is null);

Now we have all possible filterings in one query. Let's say the get method name is get_by_filters, with signature:

get_by_filters(int id, string name, int? age, string rank, string hometown)

Want to filter just by name? :

get_by_filters(-1,"John",-1,"*","*");

By age and rank where hometown is null? :

get_by_filters(-1, "*", 23, "some rank", null);

etc. etc.

Just one method, one query and all filter combinations. It saved me a lot of time.

One drawback is that you have to "reserve" an integer/string for the "doesn't matter" filter. But you shouldn't expect an id of -1 or a person named '*' (of course this is context dependent), so it's not a big problem IMHO.


edit:

Just to quickly explain the mechanism, let's take a look at the first line after WHERE:

 (@id = -1 OR id = @id) AND ...

when the parameter @id is set to -1, the query becomes:

(-1 = -1 OR id = -1) AND ...

thanks to short-circuit logic, the second part of the OR is not even going to be tested: -1 = -1 is always true.

If the parameter @id was set to, let's say, 77:

(77 = -1 OR id = 77) AND ...

then 77 = -1 is obviously false, so the test for column id equal to 77 will be performed. Same for the other parameters. This is really easy yet powerful.

I use this a ton and love it!
billb
I use this a lot also. I was going to post this trick.
Pure.Krome
will this work, if the default value is NULL, and I test for (@id IS NULL OR Id = @id)?
Schmuli
Lovely. It will certainly save a hell of a lot of time :)
Varun Mahajan
@Schmuli - That's how I use this trick, and it's always worked well for me. However, I should note that I rarely allow NULL as a valid value. You have to pick something that won't be valid; otherwise, you won't get all values back.
Ryan Riley
That's the way I found dynamic filtering to work with a GridView and a SqlDataSource; pretty cool trick
Jhonny D. Cano -Leftware-
This produces awful execution plans, often no indexes are used at all. Strongly recommended against.
wqw
In my opinion better approach is to write dynamic query and execute it with 'exec sp_executesql'. Why should you query a column if you don't have to (especially if this column doesn't have an index)?
_simon_
Unfortunately I have to agree with wqw and _simon_ – the use of the OR operator should be avoided for performance reasons. See the classic article [Dynamic Search Conditions](http://www.sommarskog.se/dyn-search-2005.html) by Erland Sommarskog for the various alternatives.
Kenny Evitt
A: 

The two things I find most useful are:

1) Getting your head around subselects, and limiting set results. Not only does it help you write more succinct and easier to interpret queries, you'll learn how to tweak the sections for performance independently before you end up with an intractable performance problem.

2) Excel. Code generation along the lines of ="SELECT * FROM " & A1 & " WHERE " & B2 helps a lot. Granted, it's not always useful for every problem, but all in all knowing how to use Excel has saved me nearly as much time as Redgate tools.

oh yeah, 3) Redgate's SQL Toolbelt. (I am not a shill, just a very very very happy user)

edit: after rereading the answers, ROW_NUMBER() (SQLServer 2005 specific) and a general knowledge of normalization and the way indexes work would have to also make the list.

DigDoug
+1  A: 

Kinda off-topic and subjective, but pick a coding style and stick to it. It will make your code many times more readable when you have to revisit it. Separate sections of the SQL query into parts. This can make cut-and-paste coding easier because individual clauses are on their own lines. Aligning different parts of join and where clauses makes it easy to see what tables are involved, what their aliases are, what the parameters to the query are...

Before:

select it.ItemTypeName, i.ItemName, count(ti.WTDLTrackedItemID) as ItemCount 
from WTDL_ProgrammeOfStudy pos inner join WTDL_StudentUnit su 
on su.WTDLProgrammeOfStudyID = pos.WTDLProgrammeOfStudyID inner join
WTDL_StudentUnitAssessment sua on sua.WTDLStudentUnitID = su.WTDLStudentUnitID
inner join WTDL_TrackedItem ti on ti.WTDLStudentUnitAssessmentID = sua.WTDLStudentUnitAssessmentID
inner join WTDL_UnitItem ui on ti.WTDLUnitItemID = ui.WTDLUnitItemID inner 
join WTDL_Item i on ui.WTDLItemID = i.WTDLItemID inner join WTDL_ItemType it 
on i.WTDLItemTypeID = it.WTDLItemTypeID where it.ItemTypeCode = 'W' and i.ItemName like 'A%'
group by it.ItemTypeName, i.ItemName order by it.ItemTypeName, i.ItemName

After:

select          it.ItemTypeName, 
                i.ItemName, 
                count(ti.WTDLTrackedItemID) as ItemCount 

from            WTDL_ProgrammeOfStudy            pos 
inner join      WTDL_StudentUnit                 su        on su.WTDLProgrammeOfStudyID = pos.WTDLProgrammeOfStudyID 
inner join      WTDL_StudentUnitAssessment       sua       on sua.WTDLStudentUnitID = su.WTDLStudentUnitID
inner join      WTDL_TrackedItem                 ti        on ti.WTDLStudentUnitAssessmentID = sua.WTDLStudentUnitAssessmentID
inner join      WTDL_UnitItem                    ui        on ti.WTDLUnitItemID = ui.WTDLUnitItemID 
inner join      WTDL_Item                        i         on ui.WTDLItemID = i.WTDLItemID
inner join      WTDL_ItemType                    it        on i.WTDLItemTypeID = it.WTDLItemTypeID 

where           it.ItemTypeCode         = 'W' 
and             i.ItemName              like 'A%'

group by        it.ItemTypeName, 
                i.ItemName 

order by        it.ItemTypeName, 
                i.ItemName
geofftnz
+2  A: 

That would be

copy & paste

But in all seriousness, I've gotten in the habit of formatting my code so that lines are much easier to comment out. For instance, I drop all new lines down from their SQL commands and put the commas at the end instead of where I used to put them (at the beginning). So my code ends up looking like this:

Select
    a.deposit_no,
    a.amount
From 
    dbo.bank_tran a
Where
    a.tran_id = '123'

Oh and ALIASING!

GregD
You and me both. Maybe someone can get Steve to write a Code Complete for SQL :)
EvilTeach
A: 

Most people have already answered most of the ones I was thinking of.

  1. creating and using views
  2. CASE statements in SQL instead of code
  3. Date Functions (DateAdd especially; use negative numbers to subtract; see the example after this list)
  4. Count, Sum, Average functions
  5. Column aliasing for grid views / datagrids
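For item 3, a small illustration (SQL Server syntax; the table is invented): DATEADD with a negative number subtracts, e.g. rows from the last 30 days.

SELECT *
FROM   orders
WHERE  order_date >= DATEADD(day, -30, GETDATE());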
WACM161
I haven't seen a really good entry on doing magic tricks on date/times
EvilTeach
Come to think of it, there ought to be more examples of analytic functions here.
EvilTeach
+1  A: 

Combining aggregates with case statements (here w/ a pivot!):

select job,
       sum(case when deptno = 10 then 1 end) dept10,
       sum(case when deptno = 20 then 1 end) dept20,
       sum(case when deptno = 30 then 1 end) dept30
  from emp
 group by job
jimmyorr
A: 

Learn T4! It's a great little tool to have around. Creating templates is a little work at first, but not hard at all once you get the hang of it. I know that in the age of ORMs, the example below is perhaps dated, but you'll get the idea.

See these links for more on T4. Start here:

http://www.olegsych.com/2007/12/text-template-transformation-toolkit/

Others of interest:

http://msdn.microsoft.com/en-us/library/bb126445.aspx

http://www.adamjcooper.com/blog/post/Microsofts-T4-A-Free-Alternative-to-CodeSmith.aspx

http://www.t4editor.net/

This T4 template:

<#@ template language="C#" #>
<#@ output extension="CS" #>
<#@ assembly name="Microsoft.SqlServer.ConnectionInfo" #>
<#@ assembly name="Microsoft.SqlServer.Smo" #>
<#@ import namespace="Microsoft.SqlServer.Management.Smo" #>
<#@ import namespace="System.Collections.Specialized" #>
<#@ import namespace="System.Text" #>

<#  
    Server server = new Server( @"DUFF\SQLEXPRESS" );
    Database database = new Database( server, "Desolate" );
    Table table = new Table( database, "ConfirmDetail" );
    table.Refresh();  

    WriteInsertSql( table );
#>

<#+
    private void WriteInsertSql( Table table )
    {
        PushIndent( "    " );
        WriteLine( "const string INSERT_SQL = " ); 
        PushIndent( "    " );
        WriteLine( "@\"INSERT INTO " + table.Name + "( " ); 

        PushIndent( "    " );
        int count = 0;
        // Table columns.
        foreach ( Column column in table.Columns )
        {
            count++;     
            Write( column.Name );
            if ( count < table.Columns.Count ) Write( ",\r\n" );
        }
        WriteLine( " )" );
        PopIndent();

        WriteLine( "values (" );
        PushIndent( "    " );
        count = 0;
        // Table columns.
        foreach ( Column column in table.Columns )
        {
            count++;     
            Write( "@" + column.Name );
            if ( count < table.Columns.Count ) Write( ",\r\n" );
        }
        WriteLine( " )\";" );
        PopIndent();     
        PopIndent();
        PopIndent();
        WriteLine( "" );
    }
#>

outputs this for any table specified:

const string INSERT_SQL = 
    @"INSERT INTO ConfirmDetail( 
        ConfirmNumber,
        LineNumber,
        Quantity,
        UPC,
        Sell,
        Description,
        Pack,
        Size,
        CustomerNumber,
        Weight,
        Ncp,
        DelCode,
        RecordID )
    values (
        @ConfirmNumber,
        @LineNumber,
        @Quantity,
        @UPC,
        @Sell,
        @Description,
        @Pack,
        @Size,
        @CustomerNumber,
        @Weight,
        @Ncp,
        @DelCode,
        @RecordID )";
rp
+3  A: 

Tom Kyte's Oracle implementation of MySQL's group_concat aggregate function to create a comma-delimited list:

with data as
     (select job, ename,
             row_number () over (partition by job order by ename) rn,
             count (*) over (partition by job) cnt
        from emp)
    select job, ltrim (sys_connect_by_path (ename, ','), ',') scbp
      from data
     where rn = cnt
start with rn = 1
connect by prior job = job and prior rn = rn - 1
  order by job

see: http://tkyte.blogspot.com/2006/08/evolution.html

jimmyorr
Someone in our QA department asked for this functionality last Friday
EvilTeach
A: 

Embedding user permissions into my sprocs and using encryption... so my structure looks like:

/* Proc.sql

Documentation for said proc

Modifications

*/
if exists (select 1 from information_schema.routines where routine_name = 'proc')
    drop proc proc
go

create proc proc with encryption
as

--- blah blah blah

go
grant exec on proc to [whoever]

Rob
+2  A: 

Analytic functions like rank, dense_rank, or row_number to provide complex ranking.
The following example gives employees a rank in their deptno, based on their salary and hiredate (highest paid, oldest employees):

select e.*,
       rank() over (
                      partition by deptno 
                      order by sal desc, hiredate asc
                   ) rank
from emp e
jimmyorr
+1  A: 

Generating SQL to update one table based on the contents of another table.

Some database brands such as MySQL and Microsoft SQL Server support multi-table UPDATE syntax, but this is non-standard SQL and as a result each vendor implements different syntax.

So to make this operation more portable (or back before the feature existed in any SQL implementation), you can use this technique.

Say for example you have employees and departments. You keep a count of employees per department as an integer in the departments table (yes this is denormalized, but assume for the moment that it's an important optimization).

As you change the employees of a department through hiring, firing, and transfers, you need to update the count of employees per department. Suppose you don't want to or can't use subqueries.

SELECT 'UPDATE departments SET emp_count = ' || COUNT(e.emp_id) 
  || ' WHERE dept_id = ' || e.dept_id || ';'
FROM employees e
GROUP BY e.dept_id;

Then capture the output, which is a collection of SQL UPDATE statements, and run it as an SQL script.

It doesn't have to be a query using GROUP BY, that's just one example.

Bill Karwin
+1  A: 

The nested set method for storing trees / hierarchical data, as explained in Joe Celko's famous book ("SQL for smarties") and also e.g. here (too long to post here).
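
A minimal sketch of the idea (schema and names are illustrative, not Celko's exact code): each node stores a left/right boundary, and a node's subtree is everything whose boundaries fall inside its own.

CREATE TABLE category (
    id   INT PRIMARY KEY,
    name VARCHAR(50),
    lft  INT NOT NULL, -- left boundary of this node's interval
    rgt  INT NOT NULL  -- right boundary of this node's interval
);

-- all descendants of a node, with no recursion at all:
SELECT child.name
FROM category parent
JOIN category child ON child.lft BETWEEN parent.lft AND parent.rgt
WHERE parent.name = 'Electronics'
  AND child.id <> parent.id;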

mjy
A: 

You simply must love the Tally table approach to looping. No WHILE or CURSOR loops needed. Just build a table and use a join for iterative processing. I use it primarily for parsing data or splitting comma-delimited strings.

This approach saves on both typing and performance.

From Jeff's post, here are some code samples:

--Build the tally table:

IF OBJECT_ID('dbo.Tally') IS NOT NULL
     DROP TABLE dbo.Tally

SELECT TOP 10000 IDENTITY(INT,1,1) AS N
INTO dbo.Tally
FROM Master.dbo.SysColumns sc1,
    Master.dbo.SysColumns sc2

ALTER TABLE dbo.Tally
    ADD CONSTRAINT PK_Tally_N
        PRIMARY KEY CLUSTERED (N) WITH FILLFACTOR = 100

--Split a CSV column

--Build a table with a CSV column.
CREATE TABLE #MyHead (
    PK INT IDENTITY(1,1) PRIMARY KEY CLUSTERED,
    CsvColumn VARCHAR(500)
)
INSERT INTO #MyHead 
SELECT '1,5,3,7,8,2'
UNION ALL SELECT '7,2,3,7,1,2,2'
UNION ALL SELECT '4,7,5'
UNION ALL SELECT '1'
UNION ALL SELECT '5'
UNION ALL SELECT '2,6'
UNION ALL SELECT '1,2,3,4,55,6'

SELECT mh.PK,
    SUBSTRING(','+mh.CsvColumn+',',N+1,CHARINDEX(',',','+mh.CsvColumn+',',N+1)-N-1) AS Value
FROM dbo.Tally t
    CROSS JOIN #MyHead mh
WHERE N < LEN(','+mh.CsvColumn+',')
    AND SUBSTRING (','+mh.CsvColumn+',',N,1) = ','
Ryan Riley
A: 

Finally accepting the fact that you really CAN do anything without having to use a cursor.

EDIT: sorry, I guess I got a little off topic; the above does not necessarily mean less code.

Neil N
A: 

Use procedures with table-valued inputs to provide a central definition of an entity (this is only available in SQL Server 2008 and later).

So, start with a table that defines identifiers and the order they should be in:

CREATE TYPE dbo.OrderedIntList AS TABLE
(
    Id INT NOT NULL,
    RowNumber INT NOT NULL,
    PRIMARY KEY (Id)
);

Declare an internal stored procedure that uses it:

CREATE PROCEDURE dbo.Internal_GetEntities
(
    @Ids dbo.OrderedIntList READONLY
)
AS
SELECT
    e.Column1
    ,e.Column2
    -- other columns
FROM
    dbo.Entity e
    INNER JOIN @Ids i ON i.Id = e.Id
    -- joins to other tables as necessary
ORDER BY
    i.RowNumber ASC;

Then when retrieving data, use it to return the actual columns your result set needs, e.g.

DECLARE @Ids dbo.OrderedIntList;

INSERT INTO @Ids
SELECT
    e.Id
    ,ROW_NUMBER() OVER (ORDER BY e.Name ASC) AS RowNumber
FROM
    dbo.Entity e
WHERE
    -- conditions etc.

EXEC dbo.Internal_GetEntities @Ids

This is a little more code up-front than just using a single procedure, but if you have multiple procedures that return the same entity it can save quite a bit of typing: you define the columns and the tables/joins that make up the entity only once, and it simplifies the public procedures, which only need to include the tables required by their WHERE clause since the full SELECT is done elsewhere.

In addition, if you ever need to change the definition of the entity (which is common as an app evolves) you can change the returned columns in a single place rather than in each procedure that returns the entity, which means that the amount of SQL you have to write/change when in maintenance/upgrade mode is vastly reduced.

Greg Beech
A: 

"select *" instead of naming all the columns.

I know it impacts the query planner and performance and all that stuff, which is why we all stopped doing it, but seriously, unless you're sure it's going to be a problem, just "select *" :)

Paul Stovell
ya right...since when is performance not an issue? I've spent years going through coldfusion pages with horribly written sql and always having to re-write all those select *. Just better to not do it.
crosenblum
A: 

Datamarts and GUI based OLAP tools.

Walter Mitty
+1  A: 

Use Excel to generate SQL queries. This works great when you need to insert, update, or delete rows based on a CSV that was provided to you. All you have to do is create the right CONCAT() formula and drag it down to create the SQL script.
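
For instance, a hypothetical sheet with customer ids in column A and names in column B could use a formula like this (shown with Excel's & concatenation operator; CONCAT() works the same way):

="INSERT INTO customers (id, name) VALUES (" & A2 & ", '" & B2 & "');"

Dragging it down yields one ready-to-run statement per row, e.g.:

INSERT INTO customers (id, name) VALUES (42, 'Acme');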

Timur Fanshteyn
+6  A: 

Knowing the specifics of your RDBMS, so you can write more concise code.

  • concatenate strings without using loops. MSSQL:
    declare @t varchar(max) -- null initially
    select @t = coalesce(@t + ', ' + name, name) from entities order by name;
    print @t
    alternatively:
    declare @s varchar(max)
    set @s = ''
    select @s = @s + name + ', ' from entities order by name
    print substring(@s,1,len(@s)-1)
  • Adding an autonumber field to make it easy to delete duplicate records (leaving one copy). PostgreSQL, MSSQL, MySQL:

    http://mssql-to-postgresql.blogspot.com/2007/12/deleting-duplicates-in-postgresql-ms.html

  • Updating table from other table. PostgreSQL, MSSQL, MySQL:

    http://mssql-to-postgresql.blogspot.com/2007/12/updates-in-postgresql-ms-sql-mysql.html

  • getting the most recent row of a child table.

    PostgreSQL-specific:

    SELECT DISTINCT ON (c.customer_id) 
    c.customer_id, c.customer_name, o.order_date, o.order_amount, o.order_id 
    FROM customers c LEFT JOIN orders O ON c.customer_id = o.customer_id
    ORDER BY c.customer_id, o.order_date DESC, o.order_id DESC;
    

    Contrast with other RDBMSs that don't support DISTINCT ON:

    select 
    c.customer_id, c.customer_name, o.order_date, o.order_amount, o.order_id 
    from customers c
    left join
    (
        select customer_id, max(order_date) as recent_date
        from orders 
        group by customer_id
    ) x on x.customer_id = c.customer_id
    left join orders o on o.customer_id = c.customer_id 
    and o.order_date = x.recent_date
    order by c.customer_id
    
  • Concatenating strings at the RDBMS level (more performant) rather than on the client side:

    http://www.christianmontoya.com/2007/09/14/mysql-group_concat-this-query-is-insane/

    http://mssql-to-postgresql.blogspot.com/2007/12/cool-groupconcat.html

  • Leverage the mappability of boolean to integer:

    MySQL-specific (boolean == int), most concise:

    select entity_id, sum(score > 15)
    from scores
    group by entity_id
    

    Contrast with PostgreSQL:

    select entity_id, sum((score > 15)::int)
    from scores
    group by entity_id
    

    Contrast with MSSQL, which has no first-class boolean and can't cast one to integer, so you have to jump through extra hoops:

    select entity_id, sum(case when score > 15 then 1 else 0 end)
    from scores
    group by entity_id
    
  • Use generate_series to report gaps in an autonumber column or missing dates; in the next version of PostgreSQL (8.4) there will be a generate_series specifically for dates:

    select '2009-1-1'::date + n as missing_date 
    from generate_series(0, '2009-1-31'::date - '2009-1-1'::date) as dates(n)
    where '2009-1-1'::date + dates.n not in (select invoice_date from invoice)
    
Michael Buen
+1 for "mappability to bool" and generate_series(), both very handy tricks. I'm not convinced that joining strings on the server is a win -- it's only faster if your server is faster, you're likely to run into field length constraints, and maybe you need to parse the results back again anyway.
j_random_hacker
A: 

Due to the lack of a "LIMIT" clause in MS SQL 2005/2008, I use this for paging data:

select * 
from 
(
    select *, row_number() over (order by id) as row from dbo.foo
) a 
where row > 5 and row <= 10

This query returns rows 6 - 10 from dbo.foo (ordered by the "id" column).

Davorin
Very interesting, never saw the over command or row_number. Any docs on that?
crosenblum
http://msdn.microsoft.com/en-us/library/ms189461.aspx and http://msdn.microsoft.com/en-us/library/ms186734.aspx
Davorin
+1  A: 

using variables in the sql where clause to cut down on conditional logic in your code/database. you can compare your variable's value against some default (0 for int let's say), and filter only if they're not equal. for example:

SELECT * FROM table AS t
WHERE (@ID = 0 OR t.id = @ID);

if @ID is 0 i'll get back all rows in the table, otherwise it'll filter my results by id.

this technique comes in handy often, especially in search, where you can filter by any number of fields.

ob
A: 

Simple but effective ... use views to get rid of complex repetitive joins and conditions. You can centralize a bit of simple logic without resorting to stored procedures.
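
A minimal sketch (table and view names are made up): define the repetitive join once, then every caller selects from the view.

CREATE VIEW active_customer_orders AS
SELECT c.customer_id, c.customer_name, o.order_id, o.order_date
FROM customers c
JOIN orders o ON o.customer_id = c.customer_id
WHERE c.is_active = 1;

-- callers no longer repeat the join or the condition:
SELECT * FROM active_customer_orders WHERE order_date >= '2009-01-01';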

Nerdfest
A: 

Reseeding identity column:

DBCC CHECKIDENT (yourtable, reseed, 34)
ajma
What sql are you dealing with here?
EvilTeach
+3  A: 

Not detailed enough and too far down to win the bounty but...

Did anyone already mention UNPIVOT? It lets you normalize data on the fly from:

Client | 2007 Value | 2008 Value | 2009 Value
---------------------------------------------
Foo         9000000     10000000     12000000
Bar               -     20000000     15000000

To:

Client | Year | Value
-------------------------
Foo      2007    9000000
Foo      2008   10000000
Bar      2008   20000000
Foo      2009   12000000
Bar      2009   15000000
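
A minimal T-SQL sketch of that first transformation (assuming the year columns are literally named [2007], [2008], [2009] in a ClientValues table):

SELECT Client, Yr AS [Year], Value
FROM ClientValues
UNPIVOT (Value FOR Yr IN ([2007], [2008], [2009])) AS u;

UNPIVOT also drops the NULL cells, which is why Bar gets no 2007 row.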

And PIVOT, which pretty much does the opposite.

Those are my big ones in the last few weeks. Additionally, reading Jeff's SQL Server Blog is my best overall means of saving time and/or code vis a vis SQL.

I Have the Hat
add the sql that does the job
EvilTeach
+1  A: 

triggers and stored procedures!

waney
A: 

In hindsight this is obvious.

Order By.

If you need to process the rows in a particular order, for control-break processing, it is generally easier to order the rows in the database, as it has to read all of the data anyway, than it is to suck all of the data back into your app and sort it locally. Typically a server has more resources than the machine the local app is running on, so it is simpler. There is less code in your app, and it generally runs faster.
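
For example, to drive a control-break report over the usual emp demo table (names borrowed from earlier answers):

SELECT deptno, ename, sal
FROM emp
ORDER BY deptno, ename;

The app then just watches for deptno to change as it reads the rows in order.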

EvilTeach
A: 

If you use MySQL,

use this site:

http://www.artfulsoftware.com/infotree/queries.php?&bw=1680

It shows a lot of queries that let the db do the job, instead of coding multiple queries and processing the results in your application.

Jean-Francois
+1  A: 

Calculating the product of all rows (x1*x2*x3....xn) in one "simple" query

SELECT exp(sum(log(someField)))  FROM Orders

taking advantage of the logarithm properties (note that this only works when every value is positive, since log is undefined for zero and negative numbers):

  1. log(x) + log(y) = log(x*y)

  2. exp(log(x*y)) = x*y

not that I will ever need something like that.......

pablito
A: 

I needed a way to pass a list of values as a parameter to a stored procedure, to be used in the 'WHERE name IN (@list_of_values)' section of the query. After doing some research on the Internet I found the answer I was looking for, and it works fine. The solution was to pass an XML parameter. Here is a snippet that provides the general idea:

DECLARE @IdArray XML

SET @IdArray = '<id>Name_1</id><id>Name_2</id><id>Name_3</id><id>Name_4</id>'

SELECT ParamValues.ID.value('.','VARCHAR(10)') 
FROM @IdArray.nodes('id') AS ParamValues(ID)
Abel
+1  A: 

The SQL MERGE command:

In the past developers had to write code to handle situations where in one condition the database does an INSERT but in others (like when the key already exists) they do an UPDATE.

Now databases support the "upsert" operation in SQL, which will take care of some of that logic for you in a more concise fashion. Oracle and SQL Server both call it MERGE. The SQL Server 2008 version is pretty powerful; I think it can also be configured to handle some DELETE operations.
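
A minimal SQL Server 2008 sketch (table and column names are invented):

MERGE INTO customers AS target
USING staged_customers AS source
    ON target.customer_id = source.customer_id
WHEN MATCHED THEN
    UPDATE SET target.name = source.name
WHEN NOT MATCHED THEN
    INSERT (customer_id, name)
    VALUES (source.customer_id, source.name);

One statement covers both branches, and a WHEN NOT MATCHED BY SOURCE clause can handle the DELETE case mentioned above.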

Bernard Dy
A: 

It's not specifically a coding trick but indeed a very helpful (and missing) aid to SQL Server Management Studio:

SQL Prompt - Intelligent code completion and layout for MS SQL Server

There are many answers already provided where the outcome was having written snippets in the past that eliminate the need to write the same in the future. I believe Code Completion through intellisense definitely falls into this category. It allows me to concentrate on the logic without worrying so much about the syntax of T-SQL or the schema of the database/table/...

SAMills
A: 

Move data access to the SQL data access layer or data object rather than continuing to throw ad-hoc embedded SQL all over your app.

This is less a SQL trick than a refactoring you can apply where an app has been modified by many different developers. It definitely reduces the amount of program code by leaving data access where it should be. Here's what happens:

Developer 1 builds the app.

Developer 2 comes along a year later and is asked to add some new data to the display so users can see it. Developer 2 doesn't know much about the app and doesn't have time to learn it, so he cobbles on an embedded SQL statement to pick up some data on the fly. Management pats him on the back for being so productive.

Developer 3 later comes to the fold and repeats Developer 2's approach, cutting and pasting Developer 2's code into another block and just changing the column he needed to get (all the while thinking to himself, "Oh, look how smart I am, I'm doing code reuse!"). Management pats him on the back for being so productive.

This cycle continues until someone that cares realizes that these additional SQL calls aren't necessary. The original SQL in the main data access object could have been modified to bring in the needed data. Fewer SQL statements, less network traffic, and less client app code. Management can't see the benefits here, so the refactorer gets nothing.

I know this situation sounds laughable...but I have seen it in more than one workplace.

Bernard Dy
A: 
SELECT TOP 0 * INTO #tmp FROM MyTbl

It constructs a temp table with the same structure as your source table in 1 simple line. Then you can run all the logic you want to fill up #tmp, diff the data for integrity, validate it before inserting...

Everything is simplified when you are focused on a small set of relevant data.

Haoest
which version of sql is this?
EvilTeach
Oh I read other answers after posting, someone has already given a more generic version: "select * into #t from tbl where 1=0" But the idea is the same. Good luck.
Haoest
+1  A: 

/* This is a fast and easy way to loop through a table without having to deal with cursors, which exact a heavy toll on the database.

For this you'll need a unique key on the table. It won't work without it and you'll be stuck with the cursors. If this unique key is indexed (which it should be), then this routine will even be faster.

Let's say you have to loop through all the values in table SampleTable. The table has a structure like this:

create table SampleTable
(
     ID        int  identity (1,1)
    ,Name      varchar(50)
    ,Address   varchar(100)
)

*/

DECLARE @minID  int

-- get the first record
SELECT @minID = min(ID) FROM SampleTable

-- loop until we have no more records
WHILE @minID is NOT NULL 
BEGIN
    -- do actual work, for instance, get values for this ID
    SELECT Name, Address FROM SampleTable WHERE ID = @minID

    -- get the next record
    SELECT @minID = min(ID) FROM SampleTable WHERE @minID < ID
END
AngryHacker
A: 

The best trick I've found is to avoid writing SQL in the first place. I'm not talking about using abstraction libraries (you can though), but simple things:

  • If you have to use one long query in multiple places, it's probably better as a view.
  • In places where parametrised statements don't work—table names, sort order and so on—you can still use sprintf() (with appropriate caution).
  • A good IDE goes a long way, especially if its autocomplete feature is case-sensitive and you tend to uppercase your SQL words...
Ant P.
Interesting...well said
crosenblum
A: 

Recursion; Using a common table expression. Select the CTE from within the CTE.

WITH fib(a, b) AS (
    SELECT 1 AS a, 2 AS b
    UNION ALL
    SELECT b, a + b FROM fib WHERE a < 100
)
SELECT a FROM fib

prints a Fibonacci sequence. So your SQL can tackle the wide variety of hard and interesting problems solved using recursion, such as tree and graph algorithms, searching, or the processing of hierarchical data.

A: 

Don't write SQL at all. Use an ORM layer to access and manage the database.

Syntax
+1  A: 

De-dup a table fast and easy. This SQL is Oracle-specific, but can be modified as needed for whatever DB you are using:

DELETE table1 WHERE rowid NOT IN (SELECT MAX(rowid) FROM table1 GROUP BY dup_field)

Barry
+4  A: 
Max Gontar
+1  A: 
Colin Pickard
A: 

Red-gate SQL prompt is very useful

http://www.red-gate.com/Products/SQL_Prompt/index.htm

Auto-completion, code tidy-up, table/proc/view definitions as popup windows, datatype tooltips, etc.

adolf garlic
+1  A: 

Nice quick little utility script I use when I need to find ANYTHING in a SQL object (works on MSSQL 2005 and beyond, since it relies on sys.sql_modules). Just change the @TEXT

SET NOCOUNT ON

DECLARE @TEXT VARCHAR(250)
DECLARE @SQL VARCHAR(250)

SELECT  @TEXT='WhatDoIWantToFind'

CREATE TABLE #results (db VARCHAR(64), objectname VARCHAR(100),xtype VARCHAR(10), definition TEXT)

SELECT @TEXT as 'Search String'
DECLARE #databases CURSOR FOR SELECT NAME FROM master..sysdatabases where dbid>4
    DECLARE @c_dbname varchar(64)   
    OPEN #databases
    FETCH #databases INTO @c_dbname   
    WHILE @@FETCH_STATUS <> -1
    BEGIN
     SELECT @SQL = 'INSERT INTO #results '
     SELECT @SQL = @SQL + 'SELECT ''' + @c_dbname + ''' AS db, o.name,o.xtype,m.definition '   
     SELECT @SQL = @SQL + ' FROM '+@c_dbname+'.sys.sql_modules m '   
     SELECT @SQL = @SQL + ' INNER JOIN '+@c_dbname+'..sysobjects o ON m.object_id=o.id'   
     SELECT @SQL = @SQL + ' WHERE [definition] LIKE ''%'+@TEXT+'%'''   
     EXEC(@SQL)
     FETCH #databases INTO @c_dbname
    END
    CLOSE #databases
DEALLOCATE #databases

SELECT * FROM #results order by db, xtype, objectname
DROP TABLE #results

The next one is referred to as an UPSERT. I think in MSSQL 2008 you can use a MERGE command, but before that you had to do it in two parts. Your application sends data back to a stored procedure, but you don't necessarily know whether you should be updating existing data or inserting NEW data. This does both, depending:

DECLARE @Updated TABLE (CodeIdentifier VARCHAR(10))

UPDATE a
SET Type1='CMBS'
OUTPUT inserted.CodeIdentifier INTO @Updated
FROM AdminOverride a 
INNER JOIN ItemTypeSecurity b
      ON a.CodeIdentifier = b.CodeIdentifier

INSERT INTO AdminOverride
SELECT c.CodeIdentifier
      ,Rating=NULL
      ,[Key]=NULL
      ,IndustryType=NULL
      ,ProductGroup=NULL
      ,Type1='CMBS'
      ,Type2=NULL
      ,SubSectorDescription=NULL
      ,WorkoutDate=NULL
      ,Notes=NULL
      ,EffectiveMaturity=NULL
      ,CreatedDate=GETDATE()
      ,CreatedBy=SUSER_NAME()
      ,ModifiedDate=NULL
      ,ModifiedBy=NULL
FROM dbo.ItemTypeSecurity c 
LEFT JOIN @Updated u
      ON c.CodeIdentifier = u.CodeIdentifier
WHERE u.CodeIdentifier IS NULL 

If a record existed, it gets updated AND its CodeIdentifier is captured in the @Updated table; the INSERT then only happens for records that are NOT in @Updated.

SomeMiscGuy
A: 

selecting from systables/information_schema to create the sql for queries and views, or in general, making the metadata work for you.

tpdi
A: 

I have had great use of Itzik Ben-Gan's table-valued function fn_nums. It is used to generate a table with a fixed number of integers. Perfect when you need to cross apply a specific number of rows against a single row.

CREATE FUNCTION [dbo].[fn_nums](@max AS BIGINT) RETURNS @retTabl TABLE (rNum INT)
AS
BEGIN 
IF ISNULL(@max,0)<1 SET @max=1;
  WITH
    L0 AS (SELECT 0 AS c UNION ALL SELECT 0),
    L1 AS (SELECT 0 AS c FROM L0 AS A CROSS JOIN L0 AS B),
    L2 AS (SELECT 0 AS c FROM L1 AS A CROSS JOIN L1 AS B),
    L3 AS (SELECT 0 AS c FROM L2 AS A CROSS JOIN L2 AS B),
    L4 AS (SELECT 0 AS c FROM L3 AS A CROSS JOIN L3 AS B),
    L5 AS (SELECT 0 AS c FROM L4 AS A CROSS JOIN L4 AS B)
  insert into @retTabl(rNum)
  SELECT TOP(@max) ROW_NUMBER() OVER(ORDER BY (SELECT 0)) AS n 
  FROM L5;
RETURN
END
Frederik
A: 

Under Sybase's T-SQL you have a nice update from feature:

UPDATE aTable
SET a.field = b.field
FROM aTable a, bTable b
WHERE a.id = b.id

That's neat.

MySQL has this kind of feature as well, but the syntax does not look as intuitive at first glance:

UPDATE updatefrom p, updateto pp
SET pp.last_name = p.last_name
WHERE pp.visid = p.id
B0rG
+6  A: 

Date arithmetic and processing drives me crazy. I got this idea from the Data Warehousing Toolkit by Ralph Kimball.

Create a table called CALENDAR that has one record for each day going back as far as you need to go, say from 1900 to 2100. Then index it by several columns - say the day number, day of week, month, year, etc. Add these columns:

ID
DATE
DAY_OF_YEAR
DAY_OF_WEEK
DAY_OF_WEEK_NAME
MONTH
MONTH_NAME
IS_WEEKEND
IS_HOLIDAY
YEAR
QUARTER
FISCAL_YEAR
FISCAL_QUARTER
BEGINNING_OF_WEEK_YEAR
BEGINNING_OF_WEEK_ID
BEGINNING_OF_MONTH_ID
BEGINNING_OF_YEAR_ID
ADD_MONTH
etc.

Add as many columns as are useful to you. What does this buy you? You can use this approach in any database and not worry about the DATE function syntax. You can find missing dates in data by using outer joins. You can define multi-national holiday schemes. You can work in fiscal and calendar years equally well. You can do ETL that converts from words to dates with ease. The host of time-series related queries that this simplifies is incredible.
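
For example, finding days in January 2009 with no invoices becomes a plain outer join (invoice is a hypothetical table here; the CALENDAR columns are from the list above):

SELECT c.DATE
FROM CALENDAR c
LEFT JOIN invoice i ON i.invoice_date = c.DATE
WHERE c.YEAR = 2009
  AND c.MONTH = 1
  AND i.invoice_date IS NULL;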

Paul Chernoch
I've seen this idea several times now; I wonder why this isn't built-in to the DBMSs.
Kenny Evitt
+4  A: 

How to not explode your rollback segment:

delete
from myTable
where c1 = 'yeah';
commit;

It might never finish if there is too much data to delete...

create table temp_myTable
as
select *
from myTable
where c1 != 'yeah';
drop table myTable;
rename temp_myTable to myTable;

Just recreate indexes/recompile objects, and you are done!

Scorpi0
+2  A: 

I wrote a stored procedure called spGenerateUpdateCode. You passed it a tablename or viewname and it generated an entire T-SQL Stored Procedure for updating that table. All I had to do was copy and paste into TextPad (my favorite editor). Do some minor find and replaces and minimal tweaking and BAM... update done.

I would create special views of base tables and call spGenerateUpdateCode when I needed to do partial updates.

That single 6 hour coding session saved me hundreds of hours.

This proc created two blocks of code. One for inserts and one for updates.
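
The proc itself isn't reproduced here, but the heart of the idea is easy to sketch (a simplified fragment, not the author's actual code): build the statement text from the catalog views.

SELECT '    ' + COLUMN_NAME + ' = @' + COLUMN_NAME + ','
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'MyTable'
ORDER BY ORDINAL_POSITION;

Paste the output into an UPDATE ... SET skeleton, trim the trailing comma, and most of the tedious typing is done.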

Cape Cod Gunny
A: 

Information Schema pure and simple.

I just had to write a small app to delete all data with tables or columns named x or y.

Then I looped that in ColdFusion and created in 5 lines what would have taken 20-30 lines.

It purely rocks

crosenblum