I'm curious about people's approaches to using stored procedures in a database that is accessed by many applications. Specifically, do you tend to keep different sets of stored procedures for each application, do you try to use a shared set, or do you do a mix?

On the one hand, reusing SPs means fewer changes when the model changes (or something similar), and ideally less maintenance. On the other hand, if the applications' needs diverge, a change to a stored procedure for one application can break the others. I should note that in our environment, each application has its own development team, with poor communication between the teams. The data team has better communication, though, and does most of the stored procedure writing.

Thanks!

+5  A: 

Stored procedures should be created based on the data you intend to return, not the application making the request. If you have a stored procedure that is GetAllItems, it should return all of the items in the database. If one of the applications would like to get all of the items by category, create GetAllItemsByCategory. There is no reason for the business rules of a stored procedure to change based on the application requesting the data.
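As a sketch of that split (the table and column names here are hypothetical, just to illustrate the idea), the two procedures would differ only in the data they return, not in which application calls them:

```sql
-- Returns every item; knows nothing about which application is calling.
CREATE PROCEDURE GetAllItems
AS
BEGIN
    SELECT ItemID, Name, CategoryID, Price
    FROM Items;
END
GO

-- Narrower variant for any caller that needs to filter by category.
CREATE PROCEDURE GetAllItemsByCategory
    @CategoryID INT
AS
BEGIN
    SELECT ItemID, Name, CategoryID, Price
    FROM Items
    WHERE CategoryID = @CategoryID;
END
GO
```

Either application can call either procedure; the contract is about the result set, not the caller.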

lordscarlet
+1  A: 

I believe the last portion of your question answers itself.

With communication already poor, sharing procedures between development teams would just add potential points of failure and could cause both teams hardship.

If I'm on the same team working on multiple projects, we will save some time and share procedures. But typically I have found that a little duplication (a few procedures here and there) helps avoid the catastrophic changes needed later when the applications start to diverge.

LordScarlet also makes a key point: if a procedure is generic, with no business logic, sharing it shouldn't be an issue.

Mitchel Sellers
+4  A: 

My experience has been that having SPs shared by multiple applications is a cause of pain. In fact, I would argue that having a database that is accessed directly by more than one application is not the best long term architecture.

The pattern I recommend and have implemented is that only one application should "own" each database, and provide APIs (services, etc.) for other applications to access and modify data.

This has several advantages:

  1. The owning application can apply any business logic, logging, etc. to make sure it remains stable
  2. If the schema is changed, all interfaces are known and can be tested to make sure external applications will still work
Guy Starbuck
+1  A: 

Whenever we had stored procedures that were common to multiple applications, we would create a database just for those procedures (and views, tables, etc.). That database (which we named "base") would then have a developer (or team) responsible for its maintenance and testing.

If a different team needed new functionality, they could write it and the base developer would either implement it in the base DB or suggest a simpler way.

Ed Schwehm
A: 

I don't think sharing Sprocs among multiple applications makes sense.

I can see the case for sharing a database in related applications, but presumably those applications are separate in large part because they treat the data very differently from one another.

Using the same architecture could work across applications, but imagine trying to use the same business logic layer in multiple applications. "But wait!" you say, "That's silly... if I'm using the same BLL, why would I have a separate app? They do the same thing!"

QED.

Pete Michaud
+2  A: 

Think of it this way: your stored procedures are about the data that's under them, and shouldn't really know about the applications above them. It's possible that one application will need to read or update data in a way that another doesn't, and so one would use SPs that the other wouldn't.

If it were my application / database / etc, and changes to an SP to improve one application broke another, I would consider that evidence of a deeper design issue.

Jeff Paulsen
+1  A: 

It all depends on your abstraction strategy. Are the stored procedures treated as a discrete point of abstraction, or as just another part of the application that calls them?

The answer to that will tell you how to manage them. If they are a discrete abstraction, they can be shared: when you need new functionality, you add new procedures. If they are part of the app that calls them, they shouldn't be shared.

Jonathan
A: 

Ideally, use one proc, not multiple versions. If you need versions per customer, investigate the idea of one database per customer as opposed to one database for all customers. This also allows for some interesting staging of databases on different servers (allocate the larger, more heavily used ones to bigger servers, while smaller ones share hardware).

Thomas Wagner
A: 

If you are looking for a way to share SQL code, try building a library of abstract functions. That way you can reuse code that does generic things while keeping business logic separate for each application. The same can be done with views: they can be kept quite generic and remain useful to many applications.

You will probably find, as you go along, that there are not that many uses for common stored procedures.

That said, we once implemented a project against a very badly designed legacy database. We wrote a set of stored procedures that made information retrieval easy. When people from other teams wanted to use the same information, we refactored our stored procedures to make them more generic, added an extra layer of comments and documentation, and let the other teams use our procedures. That solution worked rather well.

Ilya Kochetov
+3  A: 

Stored procedures should expose business rules which don't change depending on the application using them. This lets the rules be stored and updated once instead of every place they are used, which is a nightmare.

DoniG
+1  A: 

We try to use a single, shared stored proc wherever possible, but we've run into the situation you describe as well. We handled it by adding an application prefix to the stored procs (ApplicationName_StoredProcName).

Often these stored procs call the centralized or "master" stored proc, but this method leaves room for app specific changes down the road.
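A minimal sketch of that layering (procedure, table, and column names are hypothetical): the shared "master" procedure holds the query, and the app-prefixed wrapper simply delegates until the application's needs diverge.

```sql
-- Shared "master" procedure, maintained by the data team.
CREATE PROCEDURE GetCustomerOrders
    @CustomerID INT
AS
BEGIN
    SELECT OrderID, OrderDate, Total
    FROM Orders
    WHERE CustomerID = @CustomerID;
END
GO

-- App-specific wrapper: today it just delegates, but app-only filters
-- or columns can be added here later without touching the shared proc.
CREATE PROCEDURE Billing_GetCustomerOrders
    @CustomerID INT
AS
BEGIN
    EXEC GetCustomerOrders @CustomerID = @CustomerID;
END
GO
```

When the billing app later needs different behavior, only `Billing_GetCustomerOrders` changes, so the other applications are insulated.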

Sean Gough
+1 This works very well in practice
Andomar
A: 

Many stored procedures are application-independent, but there may be a few that are application-dependent. For example, the CRUD (create, read, update, delete) stored procedures can be shared across applications. In particular, you can put auditing logic in them (sometimes this is done in triggers, but there is a limit to how complicated you can get in a trigger). If your software shop has some type of standard architecture, the middle tier may require a stored procedure to create/read/update/delete from the database regardless of the application, in which case the procedure is shared.

At the same time, there may be some useful ways of viewing the data, e.g. GetProductsSoldBySalesPerson, which will also be application-independent. You may have a bunch of lookup tables for fields like department and address, so there may be a procedure that returns the table with all the text fields resolved: instead of SalesPersonID, SaleDate, CustomerID, DepartmentID, CustomerAddressID, the procedure returns SalesPersonName, SaleDate, CustomerName, DepartmentName, CustomerAddress. This too could be used across applications. A customer relationship system wants customer name, address, and other attributes, and so does a billing system, so something that does all the joins and gets all the customer information in one query would probably be shared. Admittedly, creating ways to view the data is the domain of a view, but people often use stored procedures for this.
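A sketch of such a lookup-resolving procedure (all table and column names are hypothetical), doing the joins once so every application sees names instead of IDs:

```sql
-- Resolves foreign keys to display text in one place, so applications
-- get names and addresses rather than raw IDs.
CREATE PROCEDURE GetSalesDetails
AS
BEGIN
    SELECT sp.Name       AS SalesPersonName,
           s.SaleDate,
           c.Name        AS CustomerName,
           d.Name        AS DepartmentName,
           a.AddressLine AS CustomerAddress
    FROM Sales s
    JOIN SalesPersons sp ON sp.SalesPersonID = s.SalesPersonID
    JOIN Customers    c  ON c.CustomerID     = s.CustomerID
    JOIN Departments  d  ON d.DepartmentID   = s.DepartmentID
    JOIN Addresses    a  ON a.AddressID      = c.AddressID;
END
GO
```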

So basically: when deleting from your table, do you need to delete from three or four other tables to ensure data integrity? Is the logic too complicated for a trigger? Then a stored procedure that all applications use to do deletions may be important. The same goes for things that need to happen on creation. If there are common joins that are always done, it may make sense to have one stored procedure do all the joins. Then, if you later change the tables around, you can keep the procedure's interface the same and just change the logic inside it.
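Such a shared deletion procedure might look like this (a sketch against a hypothetical Customers/Orders schema): child rows are removed before the parent, inside one transaction, so no application can leave orphans behind.

```sql
-- Single deletion path used by every application: deletes dependent
-- rows first, then the customer, all in one transaction.
CREATE PROCEDURE DeleteCustomer
    @CustomerID INT
AS
BEGIN
    BEGIN TRANSACTION;
    BEGIN TRY
        DELETE FROM OrderItems
        WHERE OrderID IN (SELECT OrderID FROM Orders
                          WHERE CustomerID = @CustomerID);
        DELETE FROM Orders    WHERE CustomerID = @CustomerID;
        DELETE FROM Addresses WHERE CustomerID = @CustomerID;
        DELETE FROM Customers WHERE CustomerID = @CustomerID;
        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        ROLLBACK TRANSACTION;
        THROW;
    END CATCH
END
GO
```

If the table layout changes later, only this procedure's body changes; its signature, and therefore every calling application, stays the same.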

Cervo
A: 

The concept of sharing a data schema across multiple applications is a tough one. Invariably, your schema gets compromised for performance reasons: how far to denormalize, which indexes to create. If you can cut the size of a row in half, you can double the number of rows per page and likely halve the time it takes to scan the table. However, if you keep only 'common' features on the main table and put data of interest to specific applications on separate (but related) tables, you have to join everywhere to get back to the 'single table' idea.

More indexes to support different applications will cause ever-increasing time to insert, update and delete data from each table.

The database server will often become a bottleneck as well, because a database cannot be load-balanced the way application servers can. You can partition your data across multiple servers, but that gets very complicated too.

Finally, the degree of co-ordination required is typically huge, no doubt with fights between different departments over whose requirements get priority, and new developments will get bogged down.

In general, the 'isolated data silo per application' model works better. Almost everything we do (I work for a contract software house) is based on importing data from and exporting data to other systems, with each of our applications keeping its own database.

It may well be easier in data warehouse/decision support systems; I generally work on OLTP systems where transaction performance is paramount.

Mike Dimmick