views: 202
answers: 1
I was wondering what is the best practice for a JPA model in Lift? I noticed that in the jpa demo application, there is just a Model object that is like a super object that does everything. I don't think this can be the most scalable approach, no?

Is it still wise to use the DAO pattern in Lift? For example, some code looks a tad bloated and could be simplified across all model objects:

Model.remove(Model.getReference(classOf[Author], someId))

Could be:

AuthorDao.remove(someId)

I'd appreciate any tips for setting up something that will work with the way Lift wants to work and is also easy to organize and maintain. Preferably from someone who has actually used JPA on a medium to large Lift site rather than just postulating what Spring does (we know how to do that) ;)

The first phase of development will be around 30-40 tables, and will eventually get to over 100... we need a scalable, neat approach.
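The DAO simplification suggested above can be sketched as a generic trait that per-entity companion objects mix in. This is a hypothetical illustration, not Lift or JPA API: the `store` map stands in for the `EntityManager`, and `Dao`, `AuthorDao`, and `save` are made-up names.

```scala
import scala.collection.mutable

// Hypothetical generic DAO: each entity gets a companion mixing this in,
// so call sites read AuthorDao.remove(someId) instead of
// Model.remove(Model.getReference(classOf[Author], someId)).
trait Dao[T, K] {
  protected val store = mutable.Map.empty[K, T] // stand-in for the EntityManager

  def find(id: K): Option[T] = store.get(id)
  def save(id: K, entity: T): Unit = store(id) = entity
  def remove(id: K): Unit = store -= id
}

case class Author(id: Long, name: String)
object AuthorDao extends Dao[Author, Long]
```

With a real JPA backend the trait body would delegate to the entity manager, but the call-site shape stays the same.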

+1  A: 

Reposted from the Lift mailing list for posterity (source here):

I can shed a little light on how we use JPA. I'm not sure what kind of container you are working with, but we are using JBoss 4.2.2, and using its connection pool facilities.

We utilize the scalajpa library to initialize the JPA stuff and keep a reference to the entity manager in a thread local variable. We specifically don't use the Lift RequestVarEM, because the lifecycle of a RequestVar is somewhat more complicated than a regular HTTP request, and this can lead to connections not being returned to the pool in a timely fashion.

The first step is to create the "model", and point it at the unit name from your persistence.xml:

object MyDBModel extends LocalEMF("unitName", false) with ThreadLocalEM
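The thread-local idea behind `ThreadLocalEM` can be sketched in plain Scala. This is not scalajpa's actual implementation: `FakeEM` and `EMHolder` are stand-ins for the real `javax.persistence.EntityManager` and the scalajpa trait. Each thread lazily gets its own manager, and `cleanup` releases it so the connection goes back to the pool.

```scala
// Stand-in for a JPA EntityManager (the real one comes from javax.persistence).
final class FakeEM { var open = true; def close(): Unit = open = false }

object EMHolder {
  // Each thread that touches `em` gets its own lazily created instance.
  private val tl = new ThreadLocal[FakeEM] {
    override def initialValue(): FakeEM = new FakeEM
  }

  def em: FakeEM = tl.get()

  // Close the current thread's manager and drop it, so the next access
  // on this thread creates a fresh one.
  def cleanup(): Unit = { tl.get().close(); tl.remove() }
}
```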

And we've created a little bit of code to make some operations simple. Each of our persistent classes mixes in a trait that provides some basic JPA operations:

trait Persistent {
   def persist = MyDBModel.persist(this)
   def merge = MyDBModel.merge(this)
   def remove = MyDBModel.remove(this)
}

For example,

@Entity
@Table{val name="person"}
class Person extends Persistent {

  @Id
  var id:String = _

  @Column{val name="first_name", val nullable=false, val updatable=false}
  var firstName:String = _

  @Column{val name="last_name", val nullable=false, val updatable=false}
  var lastName:String = _

  @OneToMany{ ... }
  var roles:Set[Role] = new HashSet[Role]()

  // etc.

}
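The mixin pattern above can be shown end to end with a self-contained stand-in. Here `MyDBModel` is a fake that merely records live entities (the real one is the scalajpa `LocalEMF`/`ThreadLocalEM` object), so the point is only the call-site shape: `p.persist()` instead of passing `p` to a model object everywhere.

```scala
import scala.collection.mutable

// Fake model object: records which entities are "persisted".
// The real MyDBModel would delegate to a JPA EntityManager.
object MyDBModel {
  val live = mutable.Set.empty[AnyRef]
  def persist(e: AnyRef): Unit = live += e
  def remove(e: AnyRef): Unit = live -= e
}

// The mixin from the answer: entities carry their own basic operations.
trait Persistent {
  def persist(): Unit = MyDBModel.persist(this)
  def remove(): Unit = MyDBModel.remove(this)
}

class Person(var firstName: String, var lastName: String) extends Persistent
```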

We primarily use the mapped collections to navigate the object model, and put more complex database methods on the companion object, so that we don't have references to MyDBModel scattered throughout the code (as you noted, an undesirable practice). For example:

object Person {
  def findByLastName =
    MyDBModel.createQuery[Person]("...").findAll.toList

  // etc.

}
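Presumably a real `findByLastName` would take the last name as a parameter and bind it into the query. A self-contained sketch of that companion-object finder pattern, with an in-memory list standing in for the query (the `Employee` name and sample rows are made up for illustration):

```scala
case class Employee(firstName: String, lastName: String)

object Employee {
  // Stand-in for the person table; a real finder would build a JPQL query
  // via MyDBModel.createQuery and bind `last` as a parameter.
  private val table = List(Employee("Ada", "Lovelace"), Employee("Alan", "Turing"))

  def findByLastName(last: String): List[Employee] =
    table.filter(_.lastName == last)
}
```

Keeping finders on the companion object is what confines references to `MyDBModel` to one place per entity.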

Lastly, our integration with Lift is in the form of a bit of code that wraps each request:

S.addAround(new LoanWrapper {
   def apply[T](f: => T): T = {
      try {
         f
      }
      catch {
         case e => MyDBModel.getTransaction.setRollbackOnly; throw e
      }
      finally {
         MyDBModel.cleanup
      }
   }
})

I've left out some error handling here to make the idea clearer, but the intent is that each HTTP request executes in a transaction, which either succeeds or fails in its entirety. Since MyDBModel is initialized when it is first touched, in your test code you can rig up the EM as you see fit, and the data objects are isolated from this configuration.
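The per-request transaction semantics can be sketched in plain Scala, independent of Lift's `LoanWrapper`. Here `Tx` is a stand-in for the JPA transaction, and `withRequestTx` is a made-up name: run the body, mark rollback and rethrow on any failure, always clean up.

```scala
// Stand-in for an EntityManager transaction.
final class Tx { var rollbackOnly = false; var cleaned = false }

object RequestWrapper {
  // Run the request body f; on any exception mark the transaction for
  // rollback and rethrow; always perform cleanup, mirroring the LoanWrapper.
  def withRequestTx[T](tx: Tx)(f: => T): T =
    try f
    catch { case e: Throwable => tx.rollbackOnly = true; throw e }
    finally { tx.cleaned = true }
}
```

The `finally` clause is what guarantees the entity manager is released to the pool whether the request succeeds or fails.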

Hope this is useful.

Sean

Will Sargent
How do you test code that relies so heavily on global variables (like `MyDBModel` in your example)? Isn't it complicated to do without involving the whole stack?
Theo