views:

203

answers:

6

Time and again, I've seen people here and everywhere else advocating avoidance of nonportable extensions to the SQL language, this being the latest example. I recall only one article stating what I'm about to say, and I don't have that link anymore.

Have you actually benefited from writing portable SQL and dismissing your dialect's proprietary tools/syntax?

I've never seen a case of someone taking pains to build a complex application on MySQL and then saying, "You know what would be just peachy? Let's switch to (PostgreSQL|Oracle|SQL Server)!"

Common libraries in, say, PHP do abstract the intricacies of SQL, but at what cost? You end up unable to use efficient constructs and functions, for a presumed glimmer of portability you will most likely never use. This sounds like textbook YAGNI to me.

EDIT: Maybe the example I mentioned is too snarky, but I think the point remains: if you are planning a move from one DBMS to another, you are likely redesigning the app anyway, or you wouldn't be doing it at all.

+1  A: 

In the vast majority of applications, I would wager there is little to no benefit, and even a negative effect, to trying to write portable SQL; however, in some cases there is a real use case. Let's assume you are building a time-tracking web application, and you'd like to offer a self-hosted solution.

In this case your clients will need to have a DB server. You have some options here. You could force them onto a specific product and version, which could limit your client base. If you can support multiple DBMSs, then you have a wider pool of potential clients who can use your web application.
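As a sketch of what multi-DBMS support can look like in practice, here is one dialect difference (row limiting) hidden behind a helper. The dialect names and function are illustrative only; real applications usually delegate this to an ORM or query-builder library:

```python
# Sketch: isolating one dialect difference (result paging) behind a helper,
# so a self-hosted app can target several DBMSs. All names are hypothetical.

def paginate(base_query: str, limit: int, dialect: str) -> str:
    """Wrap a SELECT so it returns at most `limit` rows for the given dialect."""
    if dialect in ("postgresql", "mysql", "sqlite"):
        return f"{base_query} LIMIT {int(limit)}"
    if dialect == "mssql":
        # SQL Server (2012+) uses OFFSET/FETCH, which requires an ORDER BY
        # in the base query; older versions used TOP instead.
        return f"{base_query} OFFSET 0 ROWS FETCH NEXT {int(limit)} ROWS ONLY"
    if dialect == "oracle":
        # Oracle 12c+ syntax; older versions needed ROWNUM subqueries.
        return f"{base_query} FETCH FIRST {int(limit)} ROWS ONLY"
    raise ValueError(f"unsupported dialect: {dialect}")

print(paginate("SELECT * FROM timesheets ORDER BY id", 10, "mysql"))
```

Each extra dialect is another branch to write and test, which is exactly the cost the question is asking about.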

JoshBerke
+7  A: 

Software vendors who deal with large enterprises may have no choice (indeed that's my world) - their customers may have policies of using only one database vendor's products. To miss out on major customers is commercially difficult.

When you work within an enterprise you may be able to benefit from the knowledge of the platform.

Generally speaking the DB layer should be well encapsulated, so even if you had to port to a new database the change should not be pervasive. I think it's reasonable to take a YAGNI approach to porting unless you have a specific requirement for immediate multi-vendor support. Make it work with your current target database, but structure the code carefully to enable future portability.
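A minimal sketch of that encapsulation, with SQLite standing in for the current target database and all SQL confined to one repository class (table and method names are made up):

```python
import sqlite3

# Sketch of an encapsulated DB layer: every piece of SQL lives in this one
# class, so porting to another DBMS only touches this module.

class TaskRepository:
    def __init__(self, conn: sqlite3.Connection):
        self.conn = conn
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS tasks "
            "(id INTEGER PRIMARY KEY, title TEXT NOT NULL)"
        )

    def add(self, title: str) -> int:
        cur = self.conn.execute("INSERT INTO tasks (title) VALUES (?)", (title,))
        return cur.lastrowid

    def find(self, task_id: int):
        return self.conn.execute(
            "SELECT id, title FROM tasks WHERE id = ?", (task_id,)
        ).fetchone()

repo = TaskRepository(sqlite3.connect(":memory:"))
task_id = repo.add("write report")
print(repo.find(task_id))  # (1, 'write report')
```

Even the parameter placeholder style (`?`, `%s`, `:name`) varies between drivers, which is one more reason to keep SQL out of the rest of the codebase.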

djna
+1  A: 
  • If you're corporate, then you use the platform you are given
  • If you're a vendor, you have to plan for multiple platforms

Longevity for corporate:

  • You'll probably rewrite the client code before you migrate DBMS
  • The DBMS will probably outlive your client code (Java or C# against a 1980s mainframe)

Remember:

SQL within a platform is usually backward compatible, but client libraries are not. You are forced to migrate if the OS cannot support an old library, security environment, driver architecture, 16-bit library, etc.

So, assume you had an app on SQL Server 6.5. It still runs with a few tweaks on SQL Server 2008. I bet you're not using the same client code...

gbn
+3  A: 

The problem with extensions is that you need to update them when you upgrade the database system itself. Developers often think their code will last forever, but most code will need to be rewritten within 5 to 10 years. Databases tend to survive longer than most applications, since administrators are smart enough not to fix things that aren't broken, so they often don't upgrade their systems with every new version.
Still, it's a real pain when you upgrade your database to a newer version and the extensions aren't compatible with it and thus won't work. It makes the upgrade much more complex and demands that more code be rewritten.
When you pick a database system, you're often stuck with that decision for years.
When you pick a database and a few extensions, you're stuck with that decision for much, much longer!

Workshop Alex
I find this isn't true unless you wait through several expired versions of the software before upgrading your database. Most database vendors go out of their way to ensure backwards compatibility. If they didn't, no one would ever upgrade, as the data is business-critical.
HLGEM
It does apply to long-term usage, indeed. In general, companies just don't upgrade, to save on expenses, to avoid downtime, or simply because the upgrade doesn't fix any of their problems. It's not unusual to still see SQL Server 2000 in the wild! Or Oracle 8. Then again, this also means that those companies have been in existence for more than 5 to 10 years...
Workshop Alex
Ok, but did you weigh the advantage those extensions gave you?
Adriano Varoli Piazza
+1 Functionality that conforms (closely) to SQL standards has greater longevity.
onedaywhen
Actually, I just tend to use databases for plain-vanilla data storage, which only needs simple queries. I add business logic in a business layer, like a Delphi datamodule or a .NET entity model, thus keeping as much logic as possible away from the database itself. This works quite well with small databases of up to half a million records over dozens of tables.
Workshop Alex
+3  A: 

The only case where I can see it as necessary is when you are creating software the client will buy and use on their own systems. By far the majority of programming does not fall into this category. To refuse to use vendor-specific code is to ensure that you have a poorly performing database, as the vendor-specific code is usually written to improve the performance of certain tasks over ANSI-standard SQL and is written to take advantage of the specific architecture of that database.

I've worked with databases for over 30 years and never yet have I seen a company change their backend database without a complete application rewrite as well. Avoiding vendor-specific code in this case means that you are harming your performance for no reason whatsoever most of the time.

I have also used a lot of different commercial products with database backends through the years. Without exception, every one of them was written to support multiple backends and, without exception, every one of them was a miserable, slow dog of a program to actually use on a daily basis.

HLGEM
+1  A: 

There are always some benefits and some costs to using the "lowest common denominator" dialect of a language in order to safeguard portability. I think the dangers of lock-in to a particular DBMS are low compared to the similar dangers for programming languages, object and function libraries, report writers, and the like.

Here's what I would recommend as the primary way of safeguarding future portability. Make a logical model of the schema that includes tables, columns, constraints, and domains. Make this as DBMS-independent as you can, within the context of SQL databases. About the only thing that will be dialect-dependent is the datatype and size for a few domains. Some older dialects lack domain support, but you should build your logical model in terms of domains anyway. The fact that two columns are drawn from the same domain, and don't just share a common datatype and size, is of crucial importance in logical modeling.
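The domain idea can be sketched in code even for dialects without `CREATE DOMAIN` support, by making the domain-to-type mapping the only dialect-dependent part of the model. This is a toy illustration, not a modeling tool; the domain names, type mappings, and table are all invented:

```python
# A logical model keyed by domains; only the per-dialect type mapping varies.
DOMAINS = {
    "money":     {"postgresql": "NUMERIC(12,2)", "mssql": "DECIMAL(12,2)"},
    "person_id": {"postgresql": "INTEGER",       "mssql": "INT"},
    "name":      {"postgresql": "VARCHAR(80)",   "mssql": "NVARCHAR(80)"},
}

# Columns reference domains, not raw types: employee_id and manager_id
# sharing the person_id domain is itself part of the logical model.
EMPLOYEES = [("employee_id", "person_id"), ("manager_id", "person_id"),
             ("full_name", "name"), ("salary", "money")]

def render_ddl(table: str, columns, dialect: str) -> str:
    """Render dialect-specific CREATE TABLE DDL from the logical model."""
    cols = ",\n  ".join(f"{col} {DOMAINS[dom][dialect]}" for col, dom in columns)
    return f"CREATE TABLE {table} (\n  {cols}\n)"

print(render_ddl("employees", EMPLOYEES, "postgresql"))
```

Everything above the `render_ddl` call is DBMS-independent; only the `DOMAINS` table knows about dialects.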

If you don't understand the distinction between logical modeling and physical modeling, learn it.

Make as much of the index structure portable as you can. While each DBMS has its own special index features, the relationship between indexes, tables, and columns is just about DBMS independent.

In terms of CRUD SQL processing within the application, use DBMS-specific constructs whenever necessary, but keep them documented. As an example, I don't hesitate to use Oracle's "CONNECT BY" construct whenever I think it will do me some good. If your logical modeling has been DBMS-independent, much of your CRUD SQL will also be DBMS-independent even without much effort on your part.
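For comparison, the portable counterpart of CONNECT BY is the recursive common table expression from SQL:1999, which most modern DBMSs support. A small sketch, run here on SQLite (the table and data are invented):

```python
import sqlite3

# The Oracle-specific form would be roughly:
#   SELECT id FROM emp
#   START WITH manager_id IS NULL
#   CONNECT BY PRIOR id = manager_id;
# The standard recursive CTE below expresses the same hierarchy walk.

conn = sqlite3.connect(":memory:")
conn.executescript("""
  CREATE TABLE emp (id INTEGER PRIMARY KEY, manager_id INTEGER);
  INSERT INTO emp VALUES (1, NULL), (2, 1), (3, 1), (4, 2);
""")
rows = conn.execute("""
  WITH RECURSIVE chain(id) AS (
    SELECT id FROM emp WHERE manager_id IS NULL   -- anchor: the root
    UNION ALL
    SELECT e.id FROM emp e JOIN chain c ON e.manager_id = c.id
  )
  SELECT id FROM chain ORDER BY id
""").fetchall()
print([r[0] for r in rows])  # [1, 2, 3, 4]
```

Which form to use is exactly the trade-off discussed above: CONNECT BY is terser and well optimized on Oracle, while the CTE travels across vendors.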

When it comes time to move, expect some obstacles, but expect to overcome them in a systematic way.

(The word "you" in the above is to whom it may concern, and not to the OP in particular.)

Walter Mitty