I have a table (2 million rows) in an Informix v11.10 replicated (50+ node) environment.
The basic layout is like so:
ID (PK) (int)
division (int)
company (int)
feature1 (char(20))
feature2 (int)
...
feature200 (char(2))
There are a couple of issues with the current layout. Each record has 200 "features", but at any given time only maybe 5-10 of them are non-default/non-null (a different handful for each record).
Also, an update to all records for a company can mean updating 100k rows, which chokes replication and isn't easy to manage.
So I redesigned it as a table like so:
ID (int)
ID_TYPE ('ID', 'division', or 'company')
feature_name
feature_value
And had another table with only:
ID (int)
division (int)
company (int)
So for, say, ID #1 there would be 10 rows in the table, the associated division might have a few rows, and the company might have a few. An ID-level row "overrides" any division-level row with the same feature_name, and a division-level row overrides any company-level one.
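In DDL terms the new layout is roughly this (feature_values and entity are placeholder names I'll use throughout; the real names and types differ):

CREATE TABLE feature_values (
    id            INT NOT NULL,       -- an ID, division, or company number, per id_type
    id_type       CHAR(8) NOT NULL,   -- 'ID', 'division', or 'company'
    feature_name  VARCHAR(40) NOT NULL,
    feature_value CHAR(20),           -- everything stored as text in this design
    PRIMARY KEY (id, id_type, feature_name)
);

CREATE TABLE entity (
    id       INT PRIMARY KEY,
    division INT,
    company  INT
);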
I created a function that, given an ID and a feature_name, queries by company, then by division, then by ID, and returns the feature value according to the override logic above (basically an ordered foreach loop).
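A simplified sketch of that function, using the placeholder names above (the real SPL is longer, but this is the shape of it):

CREATE FUNCTION my_func(p_feature VARCHAR(40), p_id INT)
    RETURNING CHAR(20);

    DEFINE v_val  CHAR(20);
    DEFINE v_tmp  CHAR(20);
    DEFINE v_div  INT;
    DEFINE v_comp INT;

    LET v_val = NULL;

    -- find the division and company this ID belongs to
    SELECT division, company INTO v_div, v_comp
      FROM entity WHERE id = p_id;

    -- query company first, then division, then ID, so each later
    -- match overwrites the earlier one: the override order
    FOREACH SELECT feature_value INTO v_tmp FROM feature_values
             WHERE id_type = 'company' AND id = v_comp AND feature_name = p_feature
        LET v_val = v_tmp;
    END FOREACH;

    FOREACH SELECT feature_value INTO v_tmp FROM feature_values
             WHERE id_type = 'division' AND id = v_div AND feature_name = p_feature
        LET v_val = v_tmp;
    END FOREACH;

    FOREACH SELECT feature_value INTO v_tmp FROM feature_values
             WHERE id_type = 'ID' AND id = p_id AND feature_name = p_feature
        LET v_val = v_tmp;
    END FOREACH;

    RETURN v_val;

END FUNCTION;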
Then I created a view looking like:
select
    my_func('feature1', ID) as feature1,
    my_func('feature2', ID) as feature2,
    ...
    my_func('feature200', ID) as feature200
from entity
Now the issue is that each row of the view hits the table 200 * 3 times (once per feature for ID, division, and company), which is just not going to work: it pegs the CPU. On the plus side, the new table is only around 20 million rows and takes up much less space.
Any thoughts? I feel like I'm missing a temp-table trick somewhere that would keep it from hitting the 20-million-row table 600 times per row.
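For what it's worth, the shape of the thing I'm picturing is a single pass that pulls all three scopes for one ID at once and lets a priority ordering decide the winner, something like the sketch below (same placeholder names as above; it still leaves the pivot back to 200 columns unsolved):

SELECT f.feature_name, f.feature_value,
       CASE f.id_type WHEN 'ID'       THEN 1
                      WHEN 'division' THEN 2
                      ELSE 3 END AS priority
  FROM entity e, feature_values f
 WHERE e.id = 1   -- one ID per pass
   AND (   (f.id_type = 'ID'       AND f.id = e.id)
        OR (f.id_type = 'division' AND f.id = e.division)
        OR (f.id_type = 'company'  AND f.id = e.company))
 ORDER BY f.feature_name, priority;

The first row per feature_name in that ordering is the effective value; collapsing that to one row per feature (or back into 200 columns) is the part I haven't figured out.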