views:

128

answers:

3

Hi community,

Perhaps this question is not very clear, but I couldn't find better words for the heading, which briefly describes the problem I'd like to deal with.

I want to restrict access from a Java desktop application to Postgres.

The background:

Suppose you have two apps running, and the first application has to do some complex calculations based on data in the DB. To nail down the immutability of that data I'd like to lock the DB against insert, update and delete operations. On the client side I think it's impossible to handle this behaviour satisfactorily. So I thought about using a little Java app on the server side that works like a proxy. Its task is to pass CRUD (Create, Read, Update, Delete) operations through until it gets a command to lock. After a lock it rejects all CUD operations until it gets an unlock command from the locking client or a timeout is reached.
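Roughly, the lock gate I have in mind for that proxy would look like the sketch below (all class and method names are made up just for illustration):

    import java.util.concurrent.TimeUnit;

    // Sketch of the proxy's lock gate: CUD requests are rejected while a client
    // holds the lock; the lock expires automatically after a timeout.
    public class WriteLockGate {
        private String lockOwner = null;
        private long lockExpiresAt = 0L;

        public synchronized boolean lock(String clientId, long timeout, TimeUnit unit) {
            if (isLocked() && !clientId.equals(lockOwner)) {
                return false;                 // another client already holds the lock
            }
            lockOwner = clientId;
            lockExpiresAt = System.currentTimeMillis() + unit.toMillis(timeout);
            return true;
        }

        public synchronized void unlock(String clientId) {
            if (clientId.equals(lockOwner)) {
                lockOwner = null;
            }
        }

        public synchronized boolean isLocked() {
            return lockOwner != null && System.currentTimeMillis() < lockExpiresAt;
        }

        // The proxy would call this before forwarding INSERT/UPDATE/DELETE statements.
        public synchronized boolean allowWrite() {
            return !isLocked();
        }
    }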

Questions:

What do you think about this approach?

Is it possible to lock a database using such an approach?

Would you prefer Java SE or Java EE for the server-side Java app?

Thanks in advance.

+3  A: 

Why not use transactions for your operations? The database has features to maintain data integrity itself, rather than resorting to a brute-force measure such as a total database lock.
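For example, a plain JDBC transaction already gives you atomic, consistent updates without shutting anybody else out (the connection URL, credentials and the accounts table here are just placeholders):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class TxnExample {
        // Sketch: an ordinary transaction instead of a global lock.
        static void transfer() throws SQLException {
            try (Connection con = DriverManager.getConnection(
                    "jdbc:postgresql://localhost/mydb", "user", "secret")) {
                con.setAutoCommit(false);          // begin an explicit transaction
                try (Statement st = con.createStatement()) {
                    st.executeUpdate("UPDATE accounts SET balance = balance - 100 WHERE id = 1");
                    st.executeUpdate("UPDATE accounts SET balance = balance + 100 WHERE id = 2");
                    con.commit();                  // both updates become visible atomically
                } catch (SQLException e) {
                    con.rollback();                // undo everything on failure
                    throw e;
                }
            }
        }
    }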

This locking mechanism you describe sounds like it would be a pain for the users. Are the users initiating the lock, or is the software itself? If it's the users, you can expect some problems when Bob hits lock and then goes to lunch for 2 hours, forgetting to unlock the database first...

matt b
+1  A: 

Indeed... there are a few proper ways to deal with this problem.

  1. Just lock the tables in your code. PostgreSQL has commands for locking entire tables that you could run from your client application (see the first sketch below).
  2. Pick a transaction isolation level that doesn't have the problem of reading data that was committed after your transaction started (BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ; see the second sketch below).
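For option 1, a minimal JDBC sketch (connection details and the calc_data table name are placeholders, not something from your setup):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class TableLockExample {
        // Take an explicit table lock for the duration of the calculation.
        // LOCK TABLE only works inside a transaction; the lock is released on commit/rollback.
        static void calculateWithLock(String url, String user, String pass) throws SQLException {
            try (Connection con = DriverManager.getConnection(url, user, pass)) {
                con.setAutoCommit(false);
                try (Statement st = con.createStatement()) {
                    // EXCLUSIVE mode blocks concurrent INSERT/UPDATE/DELETE but still allows reads
                    st.execute("LOCK TABLE calc_data IN EXCLUSIVE MODE");
                    // ... run the long calculation queries here ...
                    con.commit();                  // releases the lock
                }
            }
        }
    }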

Of these, by far the most efficient is to use repeatable read as your isolation level. Postgres supports this quite efficiently, and it will give you a consistent view of the data without such heavy locking of the db.
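For option 2, the same idea via an isolation level instead of a lock (again a sketch with placeholder names; setTransactionIsolation is the JDBC way to get the BEGIN ... REPEATABLE READ behaviour mentioned above):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class RepeatableReadExample {
        // Run the whole calculation inside one REPEATABLE READ transaction, so every
        // query sees the same snapshot of the data without locking out writers.
        static void calculate(String url, String user, String pass) throws SQLException {
            try (Connection con = DriverManager.getConnection(url, user, pass)) {
                con.setTransactionIsolation(Connection.TRANSACTION_REPEATABLE_READ);
                con.setAutoCommit(false);
                try (Statement st = con.createStatement();
                     ResultSet rs = st.executeQuery("SELECT sum(amount) FROM calc_data")) {
                    // ... every query in this transaction sees the same snapshot,
                    //     regardless of commits made by other clients in the meantime ...
                }
                con.commit();
            }
        }
    }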

jsight
A: 

Yeah, I thought about transactions, but in this case I can't use them. I'm sorry I didn't mention it explicitly. So assume the following simple case: a calculation closes one area of responsibility. After the calculation a new one is opened and new inserts are dedicated to it. But while the calculation process is running, an insert, update or delete on the data of the (currently calculated) area of responsibility is not allowed. Moreover, a delete is strictly prohibited, because the data has to be archived.

So IMO the use of transactions doesn't fit this requirement. Or did I miss something?

PS (off topic) @jsight: I recently read that internally Postgres maps "repeatable read" to "serializable", so using "repeatable read" gets you more restriction than you would perhaps expect.