views: 132
answers: 3
Here's the situation. Due to the design of the database I have to work with, I need to write a stored procedure in such a way that I can pass in the name of the table to be queried against, if that's at all possible. The program in question does its processing by jobs, and each job gets its own table created in the database, i.e. table-jobid1, table-jobid2, table-jobid3, etc. Unfortunately, there's nothing I can do about this design - I'm stuck with it.

Now, however, I need to do data mining against these individualized tables. I'd like to avoid putting the SQL in the code files if at all possible. Ideally, I'd like to have a stored procedure similar to:

SELECT *
FROM @TableName AS tbl
WHERE @Filter

Is this even possible in SQL Server 2005? Any help or suggestions would be greatly appreciated. Alternate ways to keep the SQL out of the code behind would be welcome too, if this isn't possible.

Thanks for your time.

+4  A: 

The best solution I can think of is to build your SQL in the stored proc, such as:

DECLARE @query nvarchar(max);
SET @query = 'SELECT * FROM ' + @TableName + ' AS tbl WHERE ' + @Filter;

EXEC (@query);

Not an ideal solution, probably, but it works.
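For example, wrapped up as a proc it could look something like this (the proc name and parameter types below are just guesses for illustration; @Filter is still concatenated in raw, so it has to come from trusted code):

CREATE PROCEDURE dbo.usp_SelectFromJobTable
    @TableName sysname,
    @Filter    nvarchar(max)
AS
BEGIN
    DECLARE @query nvarchar(max);
    -- QUOTENAME guards the table name; the filter is still raw dynamic SQL.
    SET @query = 'SELECT * FROM ' + QUOTENAME(@TableName)
               + ' AS tbl WHERE ' + @Filter;
    EXEC (@query);
END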

BBlake
Thanks again. This got me on a track to look into Dynamic SQL, which serves my purposes just fine here.
Clyde
A: 

This would be a meaningless stored proc. Select from some table using some parameters? You are basically defining the entire query again in whatever you are using to call this proc, so you may as well generate the SQL yourself.

The only reason I would write a dynamic-SQL proc is if you want to do something that you can change without redeploying your codebase. But in this case, you are just SELECT *'ing. You can't define the columns, WHERE clause, or ORDER BY differently, since you are trying to use it for multiple tables, so there is no meaningful change you could make to it.

In short: it's not even worth doing. Just slop down your table-specific sprocs, or write your SQL in strings (but make sure it's parameterized) in your code.
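If you do end up building the SQL as strings, the filter value can at least ride along as a real parameter instead of being concatenated in. A rough sketch with sp_executesql (the column name and values here are made up for illustration):

DECLARE @TableName sysname;
DECLARE @MinAmount int;
DECLARE @sql nvarchar(max);

SET @TableName = N'table-jobid1';
SET @MinAmount = 500;

-- The identifier still has to be concatenated in (through QUOTENAME),
-- but the filter value travels as a typed parameter, not as part of the string.
SET @sql = N'SELECT * FROM ' + QUOTENAME(@TableName)
         + N' AS tbl WHERE tbl.Amount >= @MinAmount';

EXEC sp_executesql @sql, N'@MinAmount int', @MinAmount = @MinAmount;

The same idea applies from client code with parameterized commands.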

Andy_Vulhop
Well, the thing is, while it's multiple tables, it's all the SAME table. Each job just gets its own copy of the table: same schema, same columns, same everything, just a different name. Writing table-specific sprocs isn't possible, either. One project could have any number of jobs - the most recently completed project had over seventeen thousand jobs, with each job handling anywhere from 10,000 to 1,000,000 records.
Clyde
Then why not normalize them to one actual table with a column specifying what job type it is? It could be a foreign key to a lookup table to give a verbose name to the job type, or just a string specifying it.
Andy_Vulhop
Believe me, I wish that I could. It's actually all one job type. It's just that whoever designed the database set it up this way - and I'm afraid I'm not high enough on the totem pole to point out that it's... horribly difficult to data mine with this setup. The next version of the software is going to be a mostly ground-up rewrite, and we'll be revisiting the database as well. Until then, this is what I have to work with.
Clyde
I feel your pain man. Still, I'd at least point out (repeatedly) that it's a major pain point in your codebase. If people hear it enough, it might get some solid rewrite love come next iteration. Squeaky wheel...
Andy_Vulhop
+1  A: 

The best answer I can think of is to build a view that unions all the tables together, with an id column in the view telling you where the data in the view came from. Then you can simply pass that id into a stored proc which will go against the view. This is assuming that the tables you are looking at all have identical schema.

example:

create view test1 as
select *, 'tbl1' as src
  from [job-1]
union all
select *, 'tbl2' as src
  from [job-2]
union all
select *, 'tbl3' as src
  from [job-3]
Now you can select * from test1 where src = 'tbl3', and you will only get records from the [job-3] table.
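The proc that goes against the view could then take that id as an ordinary parameter, something like this (the proc name is made up):

CREATE PROCEDURE dbo.usp_GetJobRecords
    @Src varchar(10)
AS
BEGIN
    SELECT *
    FROM test1
    WHERE src = @Src;
END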

chris.w.mclean
Unfortunately unfeasible - there's no set number of job tables, and each job table stores anywhere from 10,000 to over 1,000,000 records. Trying to union them all would run the server right out of memory. Thank you for the suggestion, though - that idea may come in handy later on for another reporting page I'll eventually be working on.
Clyde