tags:

views:

55

answers:

2

We currently run an ecommerce solution for a leisure and travel company. Every time we have a release, we must bring the ecommerce site down while we update the database schema and the data access code. We are using a custom-built ORM where each data entity is responsible for its own CRUD operations. This is accomplished by dynamically generating the SQL based on attributes in the data entity.

For example, the data entity for an address would be...

[TableName("address")]
public class address : dataEntity
{
    [Column("address1")]
    public string address1;

    [Column("city")]
    public string city;
}

So, if we add a new column, we must update both the database schema and the data entity.
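Roughly, the SQL generation works like this (sketched in Python for brevity; our actual ORM is C#, and the `table_name` decorator and `Column` class here are just illustrative stand-ins for our attributes):

```python
# Hypothetical stand-ins for the [TableName] / [Column] attributes.
def table_name(name):
    def wrap(cls):
        cls._table = name
        return cls
    return wrap

class Column:
    def __init__(self, name):
        self.name = name

@table_name("address")
class Address:
    address1 = Column("address1")
    city = Column("city")

def insert_sql(cls):
    # Walk the entity's declared columns and build the INSERT dynamically.
    cols = [v.name for v in vars(cls).values() if isinstance(v, Column)]
    placeholders = ", ".join("?" for _ in cols)
    return f"INSERT INTO {cls._table} ({', '.join(cols)}) VALUES ({placeholders})"

print(insert_sql(Address))  # INSERT INTO address (address1, city) VALUES (?, ?)
```

This is why schema and entity must move in lock-step: the entity's attributes are the only source of truth the generated SQL has.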

As you can imagine, the business people are not too happy about this outage, as it puts a crimp in their cash flow. The operations people are not happy because they have to deal with a high-pressure window while the database and applications are upgraded. The programmers are upset because they are constantly getting in trouble for the legacy system they inherited.

Do any of you smart people out there have some suggestions?

+1  A: 

The first answer is obviously, don't use an ORM. Only application programmers think they're good. Learn SQL like everyone else :)

OK, so back to reality. What's to stop you restricting all schema changes to additions only? Then you can update the DB schema any time you like, and only install the recompiled application at a safe time (I find 6am works best) after the DB is updated. If you must remove things, perform the steps the other way round: install the new app leaving the schema unchanged, and then remove the bits from the schema.
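To make the additions-only point concrete, here's a minimal sketch (Python with an in-memory SQLite database purely for illustration; the table and column names are hypothetical): because the deployed application names its columns explicitly, an additive schema change under it doesn't break anything.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE address (address1 TEXT, city TEXT)")

# The deployed application names its columns explicitly.
def save_address(con, address1, city):
    con.execute("INSERT INTO address (address1, city) VALUES (?, ?)",
                (address1, city))

def load_cities(con):
    return [row[0] for row in con.execute("SELECT city FROM address")]

save_address(con, "1 High St", "London")

# The DBA applies an additive-only change while the old app keeps running.
con.execute("ALTER TABLE address ADD COLUMN postcode TEXT")

# The old code is unaffected because it never asks for the new column.
save_address(con, "2 Main St", "Leeds")
print(load_cities(con))  # ['London', 'Leeds']
```

The recompiled application that knows about the new column can then be rolled in at leisure.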

You're always going to have a high-pressure time as you roll out changes, but at least you can manage it better by splitting it into two easier-to-understand pieces. Your DBAs will be OK with updating the schema for the existing application.

The downside is that you have to be a lot more organised, but that's no bad thing when dealing with production servers, and you should already be seriously organised about that.

gbjbaanb
I strongly disagree with this.
Michael Maddox
The solution as it stands is no different from yours; I just say run in the DB changes a slightly longer time before running in the app. Incidentally, I work with 911 call centres, where we can't have downtime at all. We split our fault-tolerant pair, update the DB, update the apps, then fail the running system over to the updated system. If we have to undo our apps, it's a lot easier to switch back to the old app while keeping the new DB schema in place. Sure, ORMs are making this increasingly difficult, but that's more an implementation issue that we have to deal with.
gbjbaanb
Like anything else, use the right tool for the right job. For example, read-heavy products can use the first- and second-level caching features offered by most ORMs and avoid making calls to the database for queries that never produce new answers. On the other hand, write-heavy applications may find it makes more sense to offload writes (especially complex ones) to the database.
JoeGeeky
A: 

Supporting this scenario will add significant complexity to your environment and/or process and/or application.

You can run a complex update process where your application code is smart enough to run correctly on both the old schema and the new schema at the same time. Then you can update the application first and the schema second. A third step may be to migrate any data, which again, the application has to be able to work with. In that case, you only need to "tombstone" the application for the time it takes to upgrade the application, which could just be seconds, depending on how many files and machines are involved in the upgrade.
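A minimal sketch of what "runs correctly on both the old schema and the new schema" can look like (Python with an in-memory SQLite database purely for illustration; the `postcode` column and helper names are hypothetical): the data access code inspects the live schema and adapts its queries.

```python
import sqlite3

def table_columns(con, table):
    # PRAGMA table_info returns (cid, name, type, ...); index 1 is the name.
    return [row[1] for row in con.execute(f"PRAGMA table_info({table})")]

def load_addresses(con):
    # Detect at query time whether the migration has happened yet.
    has_postcode = "postcode" in table_columns(con, "address")
    sql = ("SELECT address1, city, postcode FROM address"
           if has_postcode else "SELECT address1, city FROM address")
    for row in con.execute(sql):
        yield {"address1": row[0], "city": row[1],
               "postcode": row[2] if has_postcode else None}

# Old schema: the code runs and reports no postcode.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE address (address1 TEXT, city TEXT)")
con.execute("INSERT INTO address VALUES ('1 High St', 'London')")
print(next(load_addresses(con))["postcode"])  # None

# After the migration, the very same code picks up the new column.
con.execute("ALTER TABLE address ADD COLUMN postcode TEXT DEFAULT 'N1 9GU'")
print(next(load_addresses(con))["postcode"])  # N1 9GU
```

The cost of this flexibility is exactly the added complexity described above: every query path has to be written, and tested, against both shapes of the schema.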

In most cases, it's best to leave the application/environment/process simple and live with the downtime during a slow time of the day/week/month. Pretty much all applications need to be taken down from time to time for regularly scheduled maintenance.

Michael Maddox