views: 47
answers: 2

I have been using hbm2ddl and hbm2java to build my database and POJOs for my project, based on my XML mapping files (hbm.xml). I'm starting to get some real data in the system now, so re-running my ant script, which drops the database and recreates everything, is no longer a viable option since I want to keep the data. I have heard that there is an hbm2ddl.auto setting which can be used to modify the DB structure without getting rid of the data, but I gather from some of the answers on Stack Overflow that it's not considered a best practice.

My understanding is that you should write the SQL DDL based on the changes in the mapping files and run it against the DB. I was wondering if hbm2ddl can be used to generate this DDL from the mapping files. I have also heard about LiquiBase, but I'm not sure whether it would be appropriate for my situation, as I'm hoping to keep using the hbm.xml files as the primary data-definition source.

Comments and suggestions are appreciated - thanks in advance!

+1  A: 

It might not be the best way to do this in a production environment, but if you are not "live with thousands of users", here's how to do it:

  • set `hibernate.hbm2ddl.auto` to `update` (this is a Hibernate property; set it in hibernate.cfg.xml, or in the Spring config if you use that) - see the sketch after this list
  • every time you (re)start your application, the database schema will be updated with the new fields and constraints.
  • old columns are not removed automatically, so you may have to clean them up manually
  • primitives that correspond to NOT NULL columns should be assigned a default, or you should run a manual UPDATE query to fill the existing rows; otherwise you'll get runtime exceptions.
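For reference, a minimal sketch of that first step in hibernate.cfg.xml (the other values for the property are `validate`, `create` and `create-drop`):

```xml
<!-- hibernate.cfg.xml (sketch) -->
<hibernate-configuration>
  <session-factory>
    <!-- "update" alters the existing schema on startup instead of dropping and recreating it -->
    <property name="hibernate.hbm2ddl.auto">update</property>
    <!-- connection settings and <mapping resource="YourEntity.hbm.xml"/> entries as usual -->
  </session-factory>
</hibernate-configuration>
```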
Bozho
Would I get the same results if I use my ant script's `hbm2ddl` task with `export=false` and `update=true` to generate a script, and run that against my DB? When people comment that the update should not be used in a production environment, it's not clear to me if they mean: "don't use it to automatically update the database when you start your app by putting it in your config because the changes may not be well tested with your data", or "don't use this functionality at all, even to just generate the SQL; you should write the SQL by hand."
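For concreteness, the task I'm referring to looks roughly like this (simplified sketch; the target name and file names are placeholders):

```xml
<!-- build.xml (sketch): write an update script without running it against the DB -->
<taskdef name="hibernatetool"
         classname="org.hibernate.tool.ant.HibernateToolTask"
         classpathref="project.classpath"/>

<target name="schema-update-sql">
  <hibernatetool destdir="${build.dir}">
    <configuration configurationfile="hibernate.cfg.xml"/>
    <!-- export="false" only writes the DDL file; update="true" diffs against the
         existing schema instead of generating a full create script -->
    <hbm2ddl export="false" update="true" outputfilename="schema-update.sql"/>
  </hibernatetool>
</target>
```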
slau
@slau I definitely don't agree with the second statement. Those are people who want their lives to be unnecessarily complicated by tracking DB changes by hand. Since your object model is _the_ original definition of your data, you should generate everything from it, if possible. The first statement is covered by the 4th bullet in my answer - there might be inconsistencies with the data already entered. But if you test it before putting it on production, with the data from production, it's still fine.
Bozho
@Bozho - I agree with maintaining only a single artifact - the mappings - rather than keeping the deltas. But I think the common perception in this case is that the generated delta isn't complete enough, coupled with the unpredictability of the change - I suppose that remains an open problem in data-model transformation / refactoring. In any case, I tried using update to generate the SQL and ran into many of the points you mentioned. I think it's a manageable solution for what I need at this point. Thanks!
slau
A: 

I was asking myself the same question last week, and I decided to use LiquiBase. There are many posts here on Stack Overflow which discourage you from using hbm2ddl in production environments, and I agree with them.

I ran into hbm2ddl's limitations when I tried to add unique constraints, and I realized that it was time to choose a more sophisticated approach.
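For illustration, the kind of changeSet this involves looks roughly like the following (table, column and constraint names are made up; the root element's namespace/schemaLocation declarations vary by LiquiBase version and are abbreviated here):

```xml
<!-- db.changelog.xml (sketch) -->
<databaseChangeLog xmlns="http://www.liquibase.org/xml/ns/dbchangelog">

  <changeSet id="add-unique-email" author="splash">
    <!-- the kind of change hbm2ddl update won't apply to existing columns -->
    <addUniqueConstraint tableName="users" columnNames="email"
                         constraintName="uq_users_email"/>
    <createIndex tableName="users" indexName="idx_users_last_name">
      <column name="last_name"/>
    </createIndex>
  </changeSet>

</databaseChangeLog>
```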

splash
My understanding of LiquiBase is that you have to manually maintain `changesets` that reflect any changes you make to the mapping files. I thought that might be troublesome, especially since I'm not worried about distributed changes, and you would need to include the LiquiBase libraries in the deployment. When you mention that `hbm2ddl` has problems with unique constraints, do you mean when you try to add a unique constraint to an existing column?
slau
Yes, `hbm2ddl` has problems with adding constraints to existing columns. IIRC, it also fails when you try to add indexes.
splash
@splash - You're right. It doesn't like it when I add an index to an existing column, but it seems to do ok with new fields. Looks like a new LiquiBase build is on its way - will probably check that out soon. Thanks for the suggestion!
slau
