I am working on a web API for the insurance industry and trying to work out a suitable data structure for the quoting of insurance.

The database already contains a "ratings" table which is basically:

sysID (PK, INT IDENTITY)
goods_type (VARCHAR(16))
suminsured_min (DECIMAL(9,2))
suminsured_max (DECIMAL(9,2))
percent_premium (DECIMAL(9,6))
[Unique Index on goods_type, suminsured_min and suminsured_max]

[edit] Each type of goods typically has 3 - 4 ranges for suminsured [/edit]
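
For reference, a minimal DDL sketch of that table as described (SQL Server syntax assumed given the IDENTITY column; the constraint name is made up):

CREATE TABLE ratings (
    sysID           INT IDENTITY(1,1) NOT NULL PRIMARY KEY,
    goods_type      VARCHAR(16)  NOT NULL,
    suminsured_min  DECIMAL(9,2) NOT NULL,
    suminsured_max  DECIMAL(9,2) NOT NULL,
    percent_premium DECIMAL(9,6) NOT NULL,
    -- unique index on the rating band, as described above
    CONSTRAINT UQ_ratings_range UNIQUE (goods_type, suminsured_min, suminsured_max)
);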

The list of goods_types rarely changes and most queries for insurance will involve goods worth less than $100. Because of this, I was considering de-normalising using tables in the following format (for all values from $0.00 through to $100.00):

Table Name: tblRates[goodstype]
suminsured (DECIMAL(9,2)) Primary Key
premium (DECIMAL(9,2))

The denormalised data should be easy to maintain, as the rates are generally updated once per month at most. All requests for values >$100 will always be looked up in the primary table and calculated.
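
For illustration, a sketch of how one such table could be built from the existing ratings data (SQL Server syntax; 'WINE' is a made-up goods type, and this assumes percent_premium is a percentage, i.e. premium = suminsured * percent_premium / 100):

CREATE TABLE tblRatesWine (
    suminsured DECIMAL(9,2) NOT NULL PRIMARY KEY,
    premium    DECIMAL(9,2) NOT NULL
);

-- generate the 10,001 sums insured ($0.00 to $100.00 in $0.01 steps)
-- and price each one from the matching ratings band
WITH cents AS (
    SELECT 0 AS n
    UNION ALL
    SELECT n + 1 FROM cents WHERE n < 10000
)
INSERT INTO tblRatesWine (suminsured, premium)
SELECT v.si,
       CAST(ROUND(v.si * r.percent_premium / 100, 2) AS DECIMAL(9,2))
FROM cents
CROSS APPLY (SELECT CAST(n / 100.0 AS DECIMAL(9,2)) AS si) AS v
JOIN ratings r
  ON r.goods_type = 'WINE'
 AND v.si BETWEEN r.suminsured_min AND r.suminsured_max
OPTION (MAXRECURSION 0);  -- the recursive CTE exceeds the default limit of 100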

My question(s) are:
1. Am I better off storing the suminsured values as DECIMAL(9,2) or as a value in cents stored in a BIGINT?
2. This de-normalisation method involves storing 10,001 values ($0.00 to $100.00 in $0.01 increments) in possibly 20 tables. Is this likely to be more efficient than looking up the percent_premium and performing a calculation, or should I stick with the main tables and do the calculation?

A: 

I am not entirely sure exactly what calculations we are talking about, but unless they are obnoxiously complicated, they will more than likely be much quicker than looking up data in several different tables. If possible, perform the calculations in the database (i.e. in stored procedures) to minimize the data traffic between your application layers too.
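
For example, a minimal sketch of such a procedure against the existing ratings table (the procedure and parameter names are made up, and it assumes premium = suminsured * percent_premium / 100):

CREATE PROCEDURE usp_QuotePremium
    @goods_type VARCHAR(16),
    @suminsured DECIMAL(9,2)
AS
BEGIN
    SET NOCOUNT ON;

    -- look up the matching rate band and return the calculated premium
    SELECT CAST(@suminsured * percent_premium / 100 AS DECIMAL(9,2)) AS premium
    FROM ratings
    WHERE goods_type = @goods_type
      AND @suminsured BETWEEN suminsured_min AND suminsured_max;
END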

And even if loading the data were quicker, I think the idea of having to update de-normalized data as often as once a month (or even once a quarter) is pretty scary. You can probably do the job quickly, but what about the next person handling the system? Would you require them to learn the database structure, remember which of the 20-some tables need to be updated each time, and do it correctly? I would say the possible performance gain from de-normalizing is not worth the risk of contaminating the data with incorrect information.

Tomas Lycken
Thanks for your answer. Do you have any thoughts on storing currency values in DECIMAL(9,2) vs in cents as a BIGINT?
John
Actually, I do not. I'm not familiar enough with the specifics of the database to know which is most efficient, so don't trust me ;) My initial thought is that an integer is faster if you're multiplying, but I'm not sure how a BIGINT behaves on division - depending on how you store the result, you might lose data.
Tomas Lycken
+4  A: 

Don't create new tables. You already have an index on goods_type and the min and max values, so this SQL (for a known goods type and value):

SELECT percent_premium 
FROM ratings 
WHERE goods_type = 'PRECIOUST' AND :PREC_VALUE BETWEEN suminsured_min AND suminsured_max

will use your index efficiently.

The data type you are looking for is smallmoney. Use it.
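
For what it's worth, smallmoney stores four decimal places with a range of roughly +/-214,748 (use money if you need larger sums insured). A quick sketch (variable names are made up, and this again assumes percent_premium is a percentage):

DECLARE @suminsured smallmoney;
DECLARE @rate DECIMAL(9,6);

SET @suminsured = 99.99;   -- value being quoted
SET @rate = 1.250000;      -- percent_premium looked up from ratings

-- divide by 100 because percent_premium is a percentage
SELECT CAST(@suminsured * @rate / 100 AS smallmoney) AS premium;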

dmajkic
smallmoney... Eureka!
John
+1  A: 

The plan you suggest will use a binary search on 10001 rows instead of 3 or 4.
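Roughly speaking, assuming B-tree index seeks: 10,001 rows means about log2(10,001) ≈ 13-14 comparisons per lookup, versus about 2 for 3-4 rows - both negligible.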

That's hardly a performance improvement; don't do it.

As for the arithmetic, BIGINT will be slightly faster, though I think you will hardly notice the difference.

Quassnoi