First off, I am not a DBA, but I do work in an environment where DBAs tune and make changes in the production database from time to time in ways that do not require an application rebuild or redeployment. Usually these changes consist of reworking indexes, changing procs, and occasionally changing the table structure in minor ways (usually abstracted from the app via procs).

Obviously, a team should strive to catch performance problems with NHibernate before they get into production, using things like NHProf, SQL Profiler, and load tests. That being said, are there strategies that allow some amount of tweaking once the code is built and running in production? Using stored procedures 100% of the time seems like it would give the DBAs the most flexibility, but obviously that would really kill the efficiency of NHibernate. From what I've read, updatable views (in SQL Server) don't work very well with NHibernate either (this may or may not be true).

I've read quite a bit about NHibernate and experimented with it over the years, but I have never put it into practice in a production environment. I have yet to come across a set of "best practices" to allow for maximum tweaking once deployed.

As an NHibernate user, how do you and your team deal with issues when they arise in production? My production environment is made up of ASP.NET apps and SQL Server, but I don't think the answers need to be restricted to that platform.

+1  A: 

I'm not in the deployment phase yet, but I've already come up against this on my current project, and my solution so far has been to replace my queries with stored procs. As long as the shape of the data coming back from the DB stays the same, it's not a big deal. Yes, you lose some of the agility you enjoyed during development, but I'm not sure it's as bad as it initially sounds. You'll need a code push when you first make the change, of course, and from that point on it's just proc changes.
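
As a rough illustration of that approach (a sketch only, not from the original answer: the Order entity, the dbo.GetOrdersForCustomer proc, and the parameter name are all invented), NHibernate can execute a stored proc through a native SQL query and still hydrate mapped entities:

```csharp
using System.Collections.Generic;
using NHibernate;

// Assumed NHibernate-mapped entity; the real mapping would live in Order.hbm.xml.
public class Order
{
    public virtual int Id { get; set; }
    public virtual decimal Total { get; set; }
}

public class OrderRepository
{
    private readonly ISession session;

    public OrderRepository(ISession session)
    {
        this.session = session;
    }

    // Calls the proc instead of letting NHibernate generate the SELECT.
    // The DBA can tune dbo.GetOrdersForCustomer later without an app redeploy,
    // as long as the result set keeps the shape expected by the Order mapping.
    public IList<Order> GetOrdersForCustomer(int customerId)
    {
        return session
            .CreateSQLQuery("exec dbo.GetOrdersForCustomer :customerId")
            .AddEntity(typeof(Order))           // map each returned row onto Order
            .SetInt32("customerId", customerId)
            .List<Order>();
    }
}
```

The trade-off is the one described above: the application only cares about the shape of the result set, so the proc body is the DBA's to tune.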

Webjedi
A: 

You can use a profiler like NHProf to see the SQL queries being executed, so you can show them to a DBA. This tool can also detect some problems, such as N+1 selects.

Using the second-level cache can also be useful: http://blogs.hibernatingrhinos.com/nhibernate/archive/2008/11/09/first-and-second-level-caching-in-nhibernate.aspx
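
As a minimal sketch of turning it on (the provider choice is simplified; HashtableCacheProvider ships with NHibernate but is only meant for testing, so a production deployment would swap in a distributed or ASP.NET cache provider), the relevant configuration properties look roughly like this:

```csharp
using NHibernate.Cfg;

// Reads hibernate.cfg.xml, then enables the second-level cache and query cache.
var cfg = new Configuration().Configure();

cfg.SetProperty(NHibernate.Cfg.Environment.UseSecondLevelCache, "true");
cfg.SetProperty(NHibernate.Cfg.Environment.UseQueryCache, "true");
cfg.SetProperty(NHibernate.Cfg.Environment.CacheProvider,
    typeof(NHibernate.Cache.HashtableCacheProvider).AssemblyQualifiedName);

// Individual class/collection mappings still need a <cache usage="read-write"/>
// element, and cacheable queries must opt in via query.SetCacheable(true).
var sessionFactory = cfg.BuildSessionFactory();
```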

Matthieu
NHProf is a great tool to use during development, but it doesn't provide a complete solution for this scenario beyond identifying the details and the number of DB hits being performed. However, used early, it can vastly improve your mappings and querying strategy, which is bound to have a positive effect in reducing the number of issues that arise in production.
Nigel
+1  A: 

I am in a similar position, and in order to keep our DBA happy, I did the following:

  1. Wrote some of the queries in HQL and others in SQL (especially the performance-sensitive ones).
  2. Externalized those queries to files, one file per query.
  3. When the app needs to execute one of these queries, it just loads the appropriate file, optionally runs it through a pre-processor, and executes it (see the sketch after this list).
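
A rough sketch of step 3 (the Queries folder, the GetOrdersForCustomer.sql file name, the customerId parameter, and the Order entity are all made up for illustration; the real layout is whatever you and your DBA agree on):

```csharp
using System.Collections.Generic;
using System.IO;
using NHibernate;

public class FileQueryRunner
{
    // Order is the same assumed mapped entity as in the earlier sketch.
    // Loads an externalized SQL file by name and executes it through NHibernate.
    // The DBA can edit the .sql file on disk without an application rebuild,
    // as long as the result set still matches the Order mapping.
    public IList<Order> GetOrdersForCustomer(ISession session, int customerId)
    {
        string sql = File.ReadAllText(Path.Combine("Queries", "GetOrdersForCustomer.sql"));

        // An optional pre-processing step (token replacement, query hints, etc.)
        // could run on the raw text here before handing it to NHibernate.

        return session.CreateSQLQuery(sql)
                      .AddEntity(typeof(Order))
                      .SetInt32("customerId", customerId)
                      .List<Order>();
    }
}
```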

With this approach, the DBA could theoretically tweak the queries just by modifying those files. That's quite similar to having stored procedures.

In practice, it's up to you to decide if you'll really give the DBA access to those files (if you catch my drift...)

IMHO the DBA should just use the DBMS's profiling tools and report her findings back to the devs (as in: "there's this query that runs 20 times/sec and does 10 joins. Is that really necessary? Can it be cached? Do you really need all those joins? Can we denormalize this?", etc.).

Mauricio Scheffer
This sounds like an interesting approach. I see you have a blog; going into more detail on this with examples would make an interesting blog post, IMHO.
Daniel Auger