tags:
views: 865
answers: 5

Should I be storing the raw SQL queries in my C# code, or should I be delegating them to stored functions in the Postgres backend?

If I should be storing them in code, since they can get quite long, what is the optimal way to store them (i.e. as constants in a separate static class)?

Does using stored functions have any impact on deployability/updatability of an application?

Thanks

A: 

I quite like storing SQL in text files embedded as resources into the assembly (when I absolutely have to have a significant number of them; I'm an ORM man normally).

Quite how you format that text file is up to you, of course. You could have entries delimited by double line breaks:

UpdateCustomer:
UPDATE CUSTOMER SET NAME=@Name WHERE ID=@ID

FindCustomer:
SELECT NAME, ADDRESS FROM CUSTOMER
WHERE ID=@Id

etc

It's not hard then to have a class which loads that into a Dictionary&lt;string,string&gt; in a type initializer, or even a Dictionary&lt;string,SqlStatementHelper&gt;, where the latter is a class to make it easy to prepare the statement with appropriate values. You could use any other format you want, though, including XML - although if you don't need anything beyond the name/SQL pairs, that would probably be overkill.
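A minimal sketch of such a loader, assuming the text file above is embedded as a resource (the resource name and class names here are illustrative, not a real API):

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Reflection;

public static class SqlQueries
{
    // Parses the blank-line-delimited "Name:" / SQL format shown above.
    public static Dictionary<string, string> Parse(string text)
    {
        var queries = new Dictionary<string, string>();
        var blocks = text.Split(new[] { "\r\n\r\n", "\n\n" },
                                StringSplitOptions.RemoveEmptyEntries);
        foreach (var block in blocks)
        {
            int colon = block.IndexOf(':');
            if (colon < 0) continue; // ignore stray fragments
            string name = block.Substring(0, colon).Trim();
            string sql = block.Substring(colon + 1).Trim();
            queries[name] = sql;
        }
        return queries;
    }

    // Loaded lazily on first use. The resource name is illustrative;
    // it must match the name your build gives the embedded file.
    private static readonly Lazy<Dictionary<string, string>> Queries =
        new Lazy<Dictionary<string, string>>(
            () => Parse(ReadResource("MyApp.Queries.sql")));

    private static string ReadResource(string name)
    {
        var assembly = Assembly.GetExecutingAssembly();
        using (var stream = assembly.GetManifestResourceStream(name))
        using (var reader = new StreamReader(stream))
        {
            return reader.ReadToEnd();
        }
    }

    public static string Get(string name) => Queries.Value[name];
}
```

A caller then just does `string sql = SqlQueries.Get("FindCustomer");`.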

The downside of this is the disconnect between the SQL and the code that uses it - you can't instantly tell what the parameters are without looking at the text file. On the other hand (and I'm just thinking off the cuff here) you probably could autogenerate a class from the SQL (and a bit of metadata) with methods with strongly typed parameters.
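Off the cuff again, such generated code might take a shape like this (purely hypothetical - the generator and these names don't exist; the idea is just one strongly typed method per named query):

```csharp
// Hypothetical generator output: one method per named query, with the
// parameters lifted out of the SQL (plus a bit of type metadata).
public static class CustomerQueries
{
    public static (string Sql, object Parameters) FindCustomer(int id) =>
        ("SELECT NAME, ADDRESS FROM CUSTOMER WHERE ID=@Id", new { Id = id });

    public static (string Sql, object Parameters) UpdateCustomer(int id, string name) =>
        ("UPDATE CUSTOMER SET NAME=@Name WHERE ID=@ID", new { ID = id, Name = name });
}
```

A caller could hand both parts to whatever data-access helper it already uses, and a typo in a parameter name becomes a compile error instead of a runtime one.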

The mixed blessing of stored procs is that they live in the database rather than in your code. The upside of this is that you're more likely to keep them in sync with DDL changes. The downside is that they're more effort to change (IME, anyway) than queries defined locally. (There are obviously plenty of other pros and cons, but I'm limiting myself to the convenience side here.)

As for maintenance - well, I suppose it's possible that you'd have a "database only" update of just tables and stored procs, without any clients having to know at all - in which case the stored procs would be a better move. In my experience (certainly with custom solutions where there are only a couple of applications accessing the database, so compatibility isn't as much of an issue) code changes are almost ubiquitous but database changes less so - at which point storing the queries with the application means you're less likely to have to do a database change when you update. Does any of that make sense? I haven't had coffee yet...

Jon Skeet
A: 

Either way! But consider it case by case.

SQL at code level as constants:

  • Values are evaluated at compile time, which helps performance.
  • It is safe: it cannot be modified after the code has been compiled and deployed.
  • Some drawback in performance from sending the full SQL text to the server over the wire.
  • Some drawback in security if the SQL is built dynamically (risk of SQL injection).
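For the constants case, the question's "constants in a separate static class" idea is about as simple as it sounds (query names and SQL assumed from the earlier examples):

```csharp
// Compile-time constants: evaluated at build time and immutable after
// deployment; changing one means recompiling and redeploying.
public static class Sql
{
    public const string FindCustomer =
        "SELECT NAME, ADDRESS FROM CUSTOMER WHERE ID=@Id";

    public const string UpdateCustomer =
        "UPDATE CUSTOMER SET NAME=@Name WHERE ID=@ID";
}
```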

SQL at stored-function level:

  • More secure, since parameters are typed.
  • Easier to maintain against the database schema: e.g. a table cannot be dropped while a function still references it.
  • Better performance, since only the function name and parameters are sent to the SQL server.
  • Easier to apply a hotfix at the function level, since it is not compiled into the application; on the other hand, anyone with database access can see and modify your code, since it is stored as plain text.
  • Some drawback in keeping the application and the SQL functions in sync across versions.
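As a sketch, the FindCustomer query from the earlier answer written as a Postgres function (the table and column names are assumed from those examples):

```sql
-- Parameters are typed, and only the function call and its parameter
-- values travel over the wire rather than the full query text.
CREATE FUNCTION find_customer(p_id integer)
RETURNS TABLE(name text, address text) AS $$
    SELECT c.name, c.address FROM customer c WHERE c.id = p_id;
$$ LANGUAGE sql STABLE;
```

The client then issues only `SELECT * FROM find_customer(42);`, and the query text itself can be hotfixed in the database without touching the application.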
codemeit
A: 

From the usual deployability/updatability considerations, storing SQL queries in code will limit your options: changing them means recompiling and redeploying the application, whereas SQL stored procedures can usually be modified or hotfixed in place.

Of course, it also depends on how big your "update" would be. This point would be moot if there are significant changes to the business logic, or if a database switch is required.

Heartless Angel
+2  A: 

As a rule we do all of our database interactions via stored procedures instead of direct SQL statements; this allows a lot of flexibility for adjusting to schema changes and performance issues without having to rebuild or redeploy binary images or configuration files to all targets.

A: 

I think it's better to use stored procedures, where possible, because using them gives better performance.

I mean that the queries stay sufficiently 'independent' of your source code and local variables - the call doesn't require much 'preparation', and you can just send some variables to the stored procedure as parameters.

abatishchev