views: 596
answers: 6

I've built a small web service in PHP, and I want to keep track of the number of calls to a specific API method.

At first I thought of using a text file, because it's just an integer. But after taking a good look at SQLite it seemed much more convenient. So the code just gets the counter from SQLite, increments it when that method is called, and stores it back in the SQLite DB.

Is this the right way of doing it? Would it be simpler and more scalable to just use a file, or maybe something else?
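For concreteness, here is the get-increment-store flow described above, sketched with Python's standard `sqlite3` module rather than PHP (the SQL is the same either way). Folding the read and the write into a single upsert statement makes the increment atomic, so two concurrent requests can't both read the same old value; note the `ON CONFLICT` syntax needs SQLite 3.24 or newer, and the table/function names here are made up for illustration:

```python
import sqlite3

def bump_counter(db_path, name="api_calls"):
    """Increment a named counter atomically and return the new value."""
    con = sqlite3.connect(db_path)
    try:
        with con:  # runs the statements in one committed transaction
            con.execute(
                "CREATE TABLE IF NOT EXISTS counters "
                "(name TEXT PRIMARY KEY, value INTEGER NOT NULL)"
            )
            # Upsert: insert the row on first use, otherwise add 1.
            con.execute(
                "INSERT INTO counters (name, value) VALUES (?, 1) "
                "ON CONFLICT(name) DO UPDATE SET value = value + 1",
                (name,),
            )
        return con.execute(
            "SELECT value FROM counters WHERE name = ?", (name,)
        ).fetchone()[0]
    finally:
        con.close()
```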

Thanks in advance.

+3  A: 

A database would typically scale better than the filesystem, but you are going to need to be careful and use appropriate locks with either mechanism.
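One way to take "appropriate locks" with SQLite is to open the transaction with `BEGIN IMMEDIATE`, which acquires the write lock before the read, so no other writer can slip in between the SELECT and the UPDATE. A sketch in Python's `sqlite3` (the seeding `INSERT` is simplified for illustration, and `increment_with_lock` is a made-up name):

```python
import sqlite3

def increment_with_lock(db_path):
    """Read-modify-write a counter under SQLite's write lock."""
    # isolation_level=None puts the connection in autocommit mode,
    # so we control the transaction boundaries ourselves.
    con = sqlite3.connect(db_path, timeout=5.0, isolation_level=None)
    try:
        con.execute("CREATE TABLE IF NOT EXISTS counter (value INTEGER NOT NULL)")
        # Seed a single row the first time (fine for a sketch).
        con.execute(
            "INSERT INTO counter SELECT 0 "
            "WHERE NOT EXISTS (SELECT 1 FROM counter)"
        )
        # Take the write lock *before* reading, so the read and write
        # happen as one unit with respect to other writers.
        con.execute("BEGIN IMMEDIATE")
        (value,) = con.execute("SELECT value FROM counter").fetchone()
        con.execute("UPDATE counter SET value = ?", (value + 1,))
        con.execute("COMMIT")
        return value + 1
    finally:
        con.close()
```

The `timeout` makes a second writer wait up to five seconds for the lock instead of erroring out immediately.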

Mitch Wheat
Note that SQLite is optimised for reads; writes lock the whole DB...
Sam Saffron
An SQL database just for storing one integer? I'm actually shocked your answer is upvoted so much; this is a sad state of software development.
lubos hasko
@lubos hasko: I am merely stating the facts. The poster is talking about a web service; will the filesystem scale to thousands of calls to a single file?
Mitch Wheat
Flat files will scale just like SQLite, because SQLite databases *are* flat files; you just need a special library to read them or write to them.
lubos hasko
@lubos hasko: what mythical properties must a DB file have in order not to be 'flat'? SQLite DBs are B-trees on top of an inner 'page' structure, with complex strata of loaded/cached/dirty/on-disk/committed pages.
Javier
+2  A: 

Of course, use a flat file.

  • It's the right thing to do
  • Extremely simple
  • No dependency on a third-party library
  • You can view or update the counter with plain Notepad
  • Faster than SQLite (no SQL parsing, no other overhead)
  • Just as scalable as SQLite (SQLite files are just more complex flat files)

If you ever need something as fast as possible, you will have to keep a persistent process in memory to eliminate the I/O bottleneck. But please... just don't use an SQL database to store a single integer.
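A minimal sketch of the flat-file approach, in Python rather than PHP for brevity. It relies on the POSIX-only `fcntl.flock` to serialize writers (PHP has its own `flock()` for the same purpose); `bump_file_counter` is a made-up name:

```python
import fcntl

def bump_file_counter(path):
    """Increment an integer stored in a plain text file, one writer at a time."""
    # "a+" creates the file if it doesn't exist without truncating it.
    with open(path, "a+") as f:
        fcntl.flock(f, fcntl.LOCK_EX)  # exclusive lock; other processes block here
        f.seek(0)
        raw = f.read().strip()
        value = int(raw) if raw else 0
        f.seek(0)
        f.truncate()                   # wipe the old number
        f.write(str(value + 1))
    # closing the file flushes the write and releases the lock
    return value + 1
```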

lubos hasko
You make no mention of how to handle locking and potentially transactions... locking is key to getting this working reliably.
Sam Saffron
Oh, and how do you think SQLite handles locking? It either opens the file in read-only mode or in exclusive read-write mode... there is no magic.
lubos hasko
Actually there is magic, a lot of it: http://www.sqlite.org/lockingv3.html
Sam Saffron
You are missing the point. You either open the file in shared read mode or in exclusive read-write mode. That's how SQLite operates internally as well.
lubos hasko
+1  A: 

Put it in a real database. Both a text file and an SQLite DB will have locking issues: if the file is currently open and a new request to open it comes in, the second request will error out. Use a real DB like MySQL, MSSQL, or whatever is available.

Unkwntech
SQLite definitely copes with multiple processes/threads/handles to the same DB.
Javier
But they are still just files. A full DBMS like MySQL is MUCH MUCH better at handling 1000 simultaneous writes.
Unkwntech
Where do you think MySQL stores its data? In a magical treasure box?
jcollum
But you do have a point, SQLite apparently locks the whole DB instead of the record: http://weblogs.asp.net/jgalloway/archive/2006/04/12/442615.aspx. Warrants more research.
jcollum
+4  A: 

A third possibility: memcachedb. It's compatible with memcached, but persists its key-value store to a BDB file. It not only has read/write commands, but also atomic increment/decrement for numeric values.

One more alternative would be to write a 'counter server': an independent process that accepts 'read' and 'increment' commands over a socket. The advantage is that it's really easy to do atomic increments. It stores its counter(s) in a simple file, without concurrency problems, since there's never more than one server.

It should be easy to write in less than a hundred lines of C: just a tight loop processing one command at a time, flushing to disk every few seconds. Since the processing is so simple, the latency is minimal.
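The counter-server idea can be sketched in a few lines (Python here rather than the C suggested above; `serve_counter` and `send_command` are made-up names, and the periodic flush to disk is omitted). Because a single process owns the counter, `counter += 1` is atomic by construction, with no file or database locking needed:

```python
import socket

def serve_counter(listener, max_requests, start=0):
    """Tight loop: one command per connection, one connection at a time."""
    counter = start
    for _ in range(max_requests):
        conn, _ = listener.accept()
        with conn:
            cmd = conn.recv(64).strip()
            if cmd == b"incr":
                counter += 1
            # both "incr" and "read" reply with the current value
            conn.sendall(str(counter).encode())
    return counter

def send_command(port, cmd):
    """Tiny client: send one command, return the value the server replies with."""
    with socket.create_connection(("127.0.0.1", port)) as c:
        c.sendall(cmd)
        c.shutdown(socket.SHUT_WR)
        return int(c.recv(64))
```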

Javier
+1  A: 

You could use shared memory to hold the counter, and locking to ensure safe increments.
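A sketch of that idea, using Python's `multiprocessing` shared `Value` as a stand-in for PHP's shared-memory extensions (`shmop`/`sysvsem`); it assumes a Unix host where `fork` is available, and the function names are made up:

```python
import multiprocessing as mp

def worker(counter, n):
    for _ in range(n):
        with counter.get_lock():   # the lock makes read-increment-write atomic
            counter.value += 1

def shared_memory_demo(procs=4, per_proc=1000):
    ctx = mp.get_context("fork")   # fork keeps this self-contained on Unix
    counter = ctx.Value("i", 0)    # a C int living in shared memory
    workers = [ctx.Process(target=worker, args=(counter, per_proc))
               for _ in range(procs)]
    for p in workers:
        p.start()
    for p in workers:
        p.join()
    return counter.value
```

Without `get_lock()`, the four processes would race on the increment and the final count would come up short.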

John Nilsson
A: 

According to SQLite's own docs, it can handle sites that get up to "100K [sic] hits/day". If you're going over that, you may need to investigate a more robust database like MySQL. But if you're writing 100k records to the database every day for one website, I think something is wrong with your approach, based on your example. This really sounds like a job for caching, which I know nothing about in PHP, sorry.

They have this to say about files vs. SQLite: "Many programs use fopen(), fread(), and fwrite() to create and manage files of data in home-grown formats. SQLite works particularly well as a replacement for these ad hoc data files." (source)

jcollum