I have a table containing roles. Many users can be assigned to a role, and permissions can be assigned to a role, granting every user in that role the permission.

When I assign a permission to a role that has 500 people, I've effectively created 500 permission assignments.

With each permission that is assigned to a role, I specify a complex calculation that must take place to determine what type of access each individual with that role has. For example, if I assign a "READ_EMAIL" permission to an "ACCOUNTANT" role, at that time I also include a variable specifying which mailbox types are being permitted, since there are multiple mailbox types and the accountants should only have access to a certain subset of them.

So when I want to find out which individuals have access to which specific mailboxes, I not only have to join my permissions, roles and users tables, but I also need to do a lookup that, for reasons hard to explain here, is time-consuming and cannot be made faster.

The problem I run into is that when I expose all calculated permission assignments in a view (joining all those tables and doing the complex, time-consuming calculation), it takes a very long time.

It occurred to me that I could simply create a user-role-permission table, populated by triggers that fire when a user is assigned to a role or when a permission is assigned to a role. Whenever a permission is assigned to a role, the trigger would do the complex lookup at that time and insert a record for each individual having that role, putting 500 records into this table. Likewise, whenever a user is assigned to a role, it would generate that user's permission records, including the complex lookups. There would be foreign keys from this table to the role-assignment and permission-assignment tables, with cascading deletes.
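For what it's worth, the trigger I have in mind would look roughly like this. All table, column, and function names here are made up for illustration; `resolve_access` stands in for the slow calculation:

```sql
-- Hypothetical schema: user_roles(user_id, role_id),
-- role_permissions(role_id, permission_id, mailbox_filter),
-- and the denormalized user_role_permissions cache table.
CREATE OR REPLACE TRIGGER trg_role_permission_fanout
AFTER INSERT ON role_permissions
FOR EACH ROW
BEGIN
  -- Fan the new role/permission pair out to every user in the role,
  -- doing the expensive lookup once per user at assignment time.
  INSERT INTO user_role_permissions
        (user_id, role_id, permission_id, resolved_access)
  SELECT ur.user_id,
         :NEW.role_id,
         :NEW.permission_id,
         resolve_access(ur.user_id, :NEW.permission_id,
                        :NEW.mailbox_filter)   -- the slow calculation
    FROM user_roles ur
   WHERE ur.role_id = :NEW.role_id;
END;
/
```

A similar trigger on `user_roles` would cover the user-assigned-to-role direction.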

Permission-to-role assignment is rare, so it's fine if that is slowed down greatly. 99.99999% of the access to my tables is SELECT.

Any drawbacks to this method? I need real-time data. Am I just engineering my own on-commit materialized view? Any other suggestions?

+2  A: 

It does sound like you're engineering your own on-commit materialized view. Is there a reason that you couldn't just use an on-commit materialized view here to cache the results?

A failure refreshing the materialized view would cause the transaction's commit operation to fail which would force a rollback. So it's not really possible for the data and the materialized view to get out of sync. It's certainly far more likely that a trigger-based solution would end up with bugs (particularly if there are changes being made in multiple sessions simultaneously). I would much rather leave the writing of the synchronization logic to Oracle.
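A minimal sketch of what that might look like (all names are hypothetical; `REFRESH FAST ON COMMIT` requires materialized view logs on the base tables, and including a call to the expensive PL/SQL function in the select list would additionally require that function to be declared `DETERMINISTIC`):

```sql
-- Materialized view logs are required on each base table for fast refresh.
CREATE MATERIALIZED VIEW LOG ON user_roles WITH ROWID;
CREATE MATERIALIZED VIEW LOG ON role_permissions WITH ROWID;

-- Join materialized view, incrementally maintained by Oracle on every commit.
-- Fast-refreshable join MVs must include the ROWID of each base table.
CREATE MATERIALIZED VIEW mv_user_permissions
  REFRESH FAST ON COMMIT
AS
SELECT ur.rowid AS ur_rid,
       rp.rowid AS rp_rid,
       ur.user_id,
       rp.permission_id,
       rp.mailbox_filter
  FROM user_roles ur,
       role_permissions rp
 WHERE ur.role_id = rp.role_id;
```

Querying `mv_user_permissions` then returns the same rows as the underlying join, but without re-executing it, and Oracle keeps it in sync transactionally.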

Justin Cave
@Justin Cave: We're not very experienced with materialized views, and I do not know all the potential problems I may run into. Will an exception when refreshing the materialized view bubble up to and roll back the transaction doing the inserts? I'm not even sure what else could be an issue -- I just know I need real-time data at all times, and if things got out of sync for any reason it would be a problem.
RenderIn
@RenderIn: Added to my answer
Justin Cave
@Justin Cave: Thanks!
RenderIn