I have a constant flow of data. All of it must be stored in the database with a timestamp. The data arrives at a 5-minute interval, and a select of the latest data is made within the same interval; the query looks roughly like this:

SELECT *
FROM TB_TABLE
WHERE TIMESTAMP = (SELECT MAX(TIMESTAMP) FROM TB_TABLE)
As this table grows really big (gigabytes), I did a premature optimization and split it into two tables: one for all the data (inserts only), and another for just the latest data (inserts, deletes, and the select above).
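To make the split concrete, here is a simplified sketch of the two tables and the 5-minute routine. The names (TB_TABLE_HISTORY, TB_TABLE_LATEST, the VALUE column) and the sample values are illustrative, not my real schema:

-- History table: every incoming row is inserted here and never touched again.
CREATE TABLE TB_TABLE_HISTORY (
    [TIMESTAMP] DATETIME NOT NULL,
    [VALUE]     FLOAT    NOT NULL
);

-- "Latest" table: kept tiny, holds only the most recent batch.
CREATE TABLE TB_TABLE_LATEST (
    [TIMESTAMP] DATETIME NOT NULL,
    [VALUE]     FLOAT    NOT NULL
);

-- Every 5 minutes, for each incoming batch:
DECLARE @ts DATETIME, @value FLOAT;
SET @ts = GETDATE();
SET @value = 42.0;  -- placeholder for the real measurement

INSERT INTO TB_TABLE_HISTORY ([TIMESTAMP], [VALUE]) VALUES (@ts, @value);

DELETE FROM TB_TABLE_LATEST;  -- drop the previous batch
INSERT INTO TB_TABLE_LATEST ([TIMESTAMP], [VALUE]) VALUES (@ts, @value);

-- The frequent read no longer needs MAX(); it just reads the small table:
SELECT * FROM TB_TABLE_LATEST;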
I wonder whether this duplication is a good idea, since I have no metrics proving that it actually improved the application's performance. As a general guideline, would you recommend what I did?
Update: By the way, I use MS SQL Server 2005 and .NET C# with LINQ to SQL.