Is it okay to run Hibernate applications configured with hbm2ddl.auto=update
to update the database schema in a production environment?
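For context, the setting in question is usually configured in hibernate.properties or hibernate.cfg.xml; update is one of the documented values (validate | update | create | create-drop):

    hibernate.hbm2ddl.auto=update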
NOOOOOOOOOOOOOOOOOOOO! :)
It's unsafe.
Even though the Hibernate folks do their best, you simply cannot rely on automatic updates in production. Write your own patches, review them with a DBA, test them, then apply them manually.
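Such a patch can be as small as the following (the table and column names here are made up for illustration; the point is that a DBA can read it, review it and schedule it):

    ALTER TABLE customer ADD middle_name VARCHAR(50) NULL;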
Theoretically, if hbm2ddl update worked in development, it should work in production too. But in reality, that's not always the case. :-(
Even if it works, the resulting schema may be suboptimal. DBAs are paid that much for a reason.
I wouldn't risk it because you might end up losing data that should have been preserved. hbm2ddl.auto=update is purely an easy way to keep your dev database up to date.
We do it in production albeit with an application that's not mission critical and with no highly paid DBAs on staff. It's just one less manual process that's subject to human error - the application can detect the difference and do the right thing, plus you've presumably tested it in various development and test environments.
One caveat - in a clustered environment you may want to avoid it, because multiple instances can come up at the same time and try to modify the schema, which could be bad. Or put in some mechanism where only one instance is allowed to update the schema; a rough sketch of one such mechanism follows.
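A minimal sketch of that single-updater idea, assuming MySQL's GET_LOCK/RELEASE_LOCK and plain JDBC; the lock name, connection details and runSchemaUpdate are all hypothetical:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class SchemaUpdateGuard {

        // True only for the single instance that wins the advisory lock.
        static boolean tryAcquire(Statement st) throws Exception {
            // GET_LOCK returns 1 if acquired, 0 on timeout, NULL on error.
            try (ResultSet rs = st.executeQuery(
                    "SELECT GET_LOCK('myapp_schema_update', 10)")) {
                return rs.next() && rs.getInt(1) == 1;
            }
        }

        public static void main(String[] args) throws Exception {
            try (Connection con = DriverManager.getConnection(
                    "jdbc:mysql://localhost/myapp", "user", "pass");
                 Statement st = con.createStatement()) {
                if (tryAcquire(st)) {
                    try {
                        runSchemaUpdate(); // only one cluster node gets here
                    } finally {
                        st.execute("DO RELEASE_LOCK('myapp_schema_update')");
                    }
                }
                // instances that lose the lock skip the update and start normally
            }
        }

        private static void runSchemaUpdate() { /* hand off to Hibernate or scripts */ }
    }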
I agree with Vladimir. The administrators in my company would definitely not appreciate it if I even suggested such a course.
Further, creating an SQL script instead of blindly trusting Hibernate gives you the opportunity to remove fields which are no longer in use. Hibernate does not do that.
And I find that comparing the production schema with the new schema gives you even better insight into what you changed in the data model. You know, of course, because you made the changes, but now you see them all in one go. Even the ones that make you go "What the heck?!".
There are tools which can make a schema delta for you, so it isn't even hard work. And then you know exactly what's going to happen.
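Hibernate itself can produce such a delta without applying it. A minimal sketch, assuming the Hibernate 3.x-era Tools API, where SchemaUpdate.execute(script, doUpdate) lets you print the pending DDL instead of running it:

    import org.hibernate.cfg.Configuration;
    import org.hibernate.tool.hbm2ddl.SchemaUpdate;

    public class PrintSchemaDelta {
        public static void main(String[] args) {
            // Reads hibernate.cfg.xml from the classpath.
            Configuration cfg = new Configuration().configure();
            // script = true    -> print the generated DDL to stdout
            // doUpdate = false -> do NOT execute it against the database
            new SchemaUpdate(cfg).execute(true, false);
        }
    }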
I would vote no. Hibernate doesn't seem to understand when datatypes for columns have changed. Examples (using MySQL):
String with @Column(length=50) ==> varchar(50)
changed to
String with @Column(length=100) ==> still varchar(50), not changed to varchar(100)
Switching @Temporal between TemporalType.TIMESTAMP, TemporalType.TIME and TemporalType.DATE will not update the DB column either
There are probably other examples as well, such as pushing the length of a String column up over 255 and seeing it converted to text, mediumtext, and so on.
Granted, I don't think there is really a way to "convert datatypes" without creating a new column, copying the data and blowing away the old column. But the minute your database has columns which don't reflect the current Hibernate mapping, you are living very dangerously...
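That manual conversion would look roughly like this in MySQL (table and column names are hypothetical):

    ALTER TABLE customer ADD name_new VARCHAR(100);
    UPDATE customer SET name_new = name;
    ALTER TABLE customer DROP COLUMN name;
    ALTER TABLE customer CHANGE name_new name VARCHAR(100);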
Typically, enterprise apps in large organizations run with reduced privileges: the DB username may not have the DDL privileges for adding columns that hbm2ddl.auto=update requires.
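In MySQL terms, such a reduced-privilege account might look like the following (names hypothetical); note the absence of ALTER and CREATE, which hbm2ddl.auto=update would need:

    GRANT SELECT, INSERT, UPDATE, DELETE ON myapp.* TO 'appuser'@'%';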
An application's schema may evolve over time; if you have several installations, which may be at different versions, you should have some way to ensure that your application, or some kind of tool or script, is capable of migrating schema and data stepwise from one version to any following one.
Having all your persistence in Hibernate mappings (or annotations) is a very good way of keeping schema evolution under control.
You should consider that schema evolution has several aspects:
adding more columns and tables to the database schema
dropping old columns, tables and relations
filling new columns with defaults
Hibernate's tools are particularly valuable when (as in my experience) you have different versions of the same application running on many different kinds of databases.
Point 3 is very sensitive when you are using Hibernate: if you introduce a new boolean-valued or numeric property and Hibernate finds any null value in the corresponding column, it will raise an exception.
So what I would do is: do indeed use Hibernate's schema update capability, but add alongside it some data and schema maintenance callback, e.g. for filling defaults, dropping no-longer-used columns, and the like (a sketch follows). This way you get the advantages (database-independent schema update scripts, no duplicated coding of the updates in persistence code and in scripts), but you also cover all the aspects of the operation.
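A minimal sketch of that idea, assuming the Hibernate 3.x API; the customer table and active column are hypothetical:

    import org.hibernate.Session;
    import org.hibernate.SessionFactory;
    import org.hibernate.Transaction;
    import org.hibernate.cfg.Configuration;
    import org.hibernate.tool.hbm2ddl.SchemaUpdate;

    public class UpgradeRunner {
        public static void main(String[] args) {
            Configuration cfg = new Configuration().configure();

            // 1. Let Hibernate add the new columns and tables
            //    (script = false, doUpdate = true).
            new SchemaUpdate(cfg).execute(false, true);

            // 2. Maintenance callback: fill defaults so primitive-typed
            //    properties never see NULL; Hibernate will not do this.
            SessionFactory sf = cfg.buildSessionFactory();
            Session session = sf.openSession();
            Transaction tx = session.beginTransaction();
            session.createSQLQuery(
                "UPDATE customer SET active = 0 WHERE active IS NULL")
                .executeUpdate();
            tx.commit();
            session.close();
            sf.close();
        }
    }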
So, for example, if a version update consists simply of adding a varchar-valued property (hence column) which may default to null, auto update alone will do. Where more complexity is involved, more work will be necessary.
This assumes that the application, when updated, is capable of updating its own schema (it can be done), which also means it must have the user rights to do so. If the customer's policy prevents this (a likely Lizard Brain case), you will have to provide the database-specific scripts.
The creators of Hibernate discourage doing so in a production environment in their book "Java Persistence with Hibernate":
WARNING: We've seen Hibernate users trying to use SchemaUpdate to update the schema of a production database automatically. This can quickly end in disaster and won't be allowed by your DBA.