I'm generating data at a rate of 4096 bytes every 16.66ms. This data needs to be stored constantly and will be read randomly. It would be nice to have it in a relational database, but I think doing so many inserts would create too much overhead for the processor I'm working with (ARM11). And I don't need all the features that something like SQLite offers.

In fact, just writing this stuff to a file seems tempting because while most of the time I'll just be writing lots of data, when I actually do need to read data, I can just seek to the block I need. However, I just know I'm going to run into some problem along the way. Especially when I leave this thing running for a day and end up with gigabytes of data.
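For concreteness, here is roughly what I have in mind: a minimal sketch assuming fixed 4096-byte blocks on a POSIX system. The file name and constants are placeholders, not my real code.

    /* Append fixed-size blocks to a plain file; read block N back with a
     * single positioned read. File name and block size are placeholders. */
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    #define BLOCK_SIZE 4096

    int main(void)
    {
        int fd = open("datalog.bin", O_RDWR | O_CREAT | O_APPEND, 0644);
        if (fd < 0) { perror("open"); return 1; }

        uint8_t block[BLOCK_SIZE];
        memset(block, 0xAB, sizeof block);  /* stand-in for real data */

        /* Writer side: each write appends exactly one block. */
        if (write(fd, block, sizeof block) != (ssize_t)sizeof block) {
            perror("write");
            return 1;
        }

        /* Reader side: block N lives at byte offset N * BLOCK_SIZE.
         * pread() does not disturb the writer's append position. */
        off_t n = 0;
        uint8_t readback[BLOCK_SIZE];
        if (pread(fd, readback, sizeof readback, n * BLOCK_SIZE) < 0) {
            perror("pread");
            return 1;
        }

        close(fd);
        return 0;
    }

Even with gigabytes of data, reading block N is a single offset computation, which is what makes the flat file tempting.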

This just seems like a very naive solution to my problem and I need someone else to tell me so I can start thinking about a better solution. Thanks.

A: 

You should add some more details to get better answers: what are your use cases, do you need ACID, what storage are you writing to, what is your OS, do you only write fixed-size records, etc.

Just saying "I will do random access and this is my write rate" is much too unspecific.

You are writing at about 240 KiB/s (4096 bytes every 16.66 ms, i.e. ~60 blocks/s), which comes to roughly 20 GB/day (240 KiB/s x 86,400 s).

If you have just fixed-size records, only append data, and use Linux, then a plain file is great. Perhaps think about adding some fsync() calls, if your storage is fast enough (see the sketch below).
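A minimal sketch of that append-plus-periodic-fsync pattern, assuming Linux and fixed 4096-byte blocks. SYNC_EVERY and the file name are invented for illustration; syncing every N blocks means a crash loses at most N blocks.

    /* Append blocks and flush to stable storage every SYNC_EVERY blocks. */
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>

    #define BLOCK_SIZE 4096
    #define SYNC_EVERY 64  /* ~1 s of data at 60 blocks/s; tune to taste */

    int main(void)
    {
        int fd = open("datalog.bin", O_WRONLY | O_CREAT | O_APPEND, 0644);
        if (fd < 0) { perror("open"); return 1; }

        uint8_t block[BLOCK_SIZE] = {0};      /* stand-in for generated data */
        for (unsigned i = 0; i < 600; i++) {  /* ~10 s worth of blocks */
            if (write(fd, block, sizeof block) != (ssize_t)sizeof block) {
                perror("write");
                break;
            }
            if ((i + 1) % SYNC_EVERY == 0 && fsync(fd) < 0) {
                perror("fsync");
                break;
            }
        }

        fsync(fd);  /* final flush before exit */
        close(fd);
        return 0;
    }

The sync interval is the knob to tune: calling fsync() after every block maximizes durability but costs I/O on a slow ARM board, while syncing every second or so is usually a sensible middle ground.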

maxschlepzig