We have a database that we are using to store test results for an embedded device. There's a table with columns for different types of failures (details not relevant), along with a primary key 'keynum' and a 'NUM_FAILURES' column that lists the number of failures. We store passes and failures, so a pass has a '0' in 'NUM_FAILURES'.

In order to keep the database from growing without bounds, we want to keep the last 1000 results, plus any of the last 50 failures that fall outside of the 1000. So, worst case, the table could have 1050 entries in it. I'm trying to find the most efficient SQL insert trigger to remove extra entries. I'll give what I have so far as an answer, but I'm looking to see if anyone can come up with something better, since SQL isn't something I do very often.

We are using SQLite 3 on a non-Windows platform, if it's relevant.

EDIT: To clarify, the part that I am having problems with is the DELETE, and specifically the part related to the last 50 failures.

+1  A: 

The reason you want to remove these entries is to keep the database from growing too big, not to keep it in some special state. For that I would really not use triggers; instead, set up a job that runs at some interval and cleans up the table.

Per Hornshøj-Schierbeck
Generally this would be correct, but the question specifies an embedded device, so we cannot assume there is any mechanism available to do this, or that it would be desirable if there were.
Cruachan
Cruachan is correct about the platform. Also, the trigger itself isn't really the problem; it's the DELETE part of it that I'm having trouble with. That would be the same even if I did it as part of a separate thread.
mbyrne215
A: 

So far, I have ended up using a View combined with a Trigger, but I'm not sure it's going to work for other reasons.

CREATE VIEW tablename_view AS
  SELECT keynum FROM tablename WHERE NUM_FAILURES != '0'
  ORDER BY keynum DESC LIMIT 50;

CREATE TRIGGER tablename_trig
  AFTER INSERT ON tablename
  WHEN (((SELECT COUNT(*) FROM tablename) >= 1000) OR
        ((SELECT COUNT(*) FROM tablename WHERE NUM_FAILURES != '0') >= 50))
BEGIN
  -- Delete any row more than 1000 entries behind the newest keynum,
  -- unless it is one of the 50 most recent failures (per the view).
  DELETE FROM tablename
  WHERE (((SELECT MAX(keynum) FROM tablename) - keynum) >= 1000)
    AND ((NUM_FAILURES = '0') OR
         (keynum < (SELECT MIN(keynum) FROM tablename_view)));
END;
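
In case it helps anyone reproduce this, here's a quick way to exercise it from the sqlite3 shell. The schema is a hypothetical minimal one (the real table has more columns), and it has to exist before the view and trigger above are created:

-- Hypothetical minimal schema matching the names used above.
CREATE TABLE tablename (
  keynum       INTEGER PRIMARY KEY AUTOINCREMENT,
  NUM_FAILURES INTEGER NOT NULL DEFAULT 0
);

-- A pass, then a failure; the trigger fires after each insert once
-- the row-count or failure-count threshold is crossed.
INSERT INTO tablename (NUM_FAILURES) VALUES (0);
INSERT INTO tablename (NUM_FAILURES) VALUES (3);

-- After many inserts, the count should never exceed 1050.
SELECT COUNT(*) FROM tablename;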
mbyrne215
A: 

I think you may be using the wrong data structure. Instead, I'd create two tables and pre-populate one with 1000 rows (successes) and the other with 50 (failures). Put a primary ID on each. Then, when you record a result, instead of inserting a new row, find the ID+1 value for the last timestamped record entered (looping back to 0 if it goes past the max ID in the table) and update it with your new values.

This has the advantages of pre-allocating your storage, not requiring a trigger, and keeping the logic internally consistent. You can also adjust the size of the log very simply, by just pre-populating more records rather than having to change program logic.
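
A minimal sketch of the idea in SQLite, with hypothetical names throughout (and using a recursive CTE for the pre-population, which needs a reasonably recent SQLite; a loop in application code would do the same job):

-- Hypothetical ring buffer: 1000 pre-allocated slots for successes;
-- a second, 50-slot table for failures would follow the same pattern.
CREATE TABLE successes (slot INTEGER PRIMARY KEY, result TEXT);
CREATE TABLE cursor_pos (next_slot INTEGER);  -- single-row bookkeeping

-- Pre-populate slots 0..999 once and point the cursor at slot 0.
WITH RECURSIVE n(i) AS (SELECT 0 UNION ALL SELECT i + 1 FROM n WHERE i < 999)
INSERT INTO successes (slot, result) SELECT i, NULL FROM n;
INSERT INTO cursor_pos VALUES (0);

-- Recording a result overwrites the oldest slot, then advances the
-- cursor, wrapping back to 0 after slot 999.
UPDATE successes SET result = 'new result'
 WHERE slot = (SELECT next_slot FROM cursor_pos);
UPDATE cursor_pos SET next_slot = (next_slot + 1) % 1000;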

There are several variations you can use on this, but the idea of a closed-loop structure rather than an open list would appear to match the problem domain more closely.

Cruachan
There are several reasons why this won't work for me. For starters, we don't have an accurate time and date stamp in the system at all times. Also, I need the failures and passes in the correct relation to each other.
mbyrne215
A: 

How about this:

DELETE
FROM  tablename
WHERE ( id <= ( SELECT max(id) - 1000 FROM tablename )
        AND num_failures = 0
      )
OR    id <= ( SELECT max(id) - 1050 FROM tablename )

If performance is a concern, it might be better to delete on a periodic basis, rather than on each insert.
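
One rough way to approximate that without an external scheduler (purely a sketch; the 100-row slack and the tablename placeholder are arbitrary) is to guard the DELETE so it only does real work once the table has overshot the limit:

-- Hypothetical variant: skip the cleanup until the table has overshot
-- by ~100 rows, so most inserts pay nothing beyond a row count.
DELETE FROM tablename
WHERE ((SELECT COUNT(*) FROM tablename) > 1150)
  AND ((id <= (SELECT max(id) - 1000 FROM tablename) AND num_failures = 0)
    OR (id <= (SELECT max(id) - 1050 FROM tablename)));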

AJ
This isn't exactly what I was looking for, since if you have 50 failures within the 1000, you don't need to keep more failures beyond that. However, the criteria might not actually need to be that complicated after all.
mbyrne215