If many clients send HTTP requests to a web server, the requests will be handled in the order they arrive.

For all HTTP requests I want to use a token bucket system. So when the first request arrives, I write a number to a file, increment the number for the next request, and so on.

I don't want to do this in the DB, since the DB size would keep increasing.

Is this the right way to do this? Please suggest.

Edit: If a user posts a comment, the comment should be stored in a file instead of the DB. To keep track of it, there is a variable that is incremented for every request; this number is used in the file name and referred to for future reference. If there are many requests, is this the right way to do it?

Thanks.

A: 

The database size need not increase. All you need is a single row. Conceptually, the logic goes:

 1. Read the row, taking a lock, to get the current count
 2. Write the row with the count incremented, releasing the lock

Note that you're using the database's locks to deal with the possibility that multiple requests are being processed at the same time.

So I'm suggesting you use the database as the place to manage your count. You can still write your other data to files if you wish. However, you'll still need housekeeping for the files; is that much harder with a database?
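The read-lock-increment-write cycle above could be sketched like this, using an in-memory SQLite database via PDO purely for illustration (the `counter` table name and `next_count()` helper are assumptions, not part of the question; any database with transactions works the same way):

```php
<?php
// Single-row counter table: the DB never grows beyond one row.
$db = new PDO('sqlite::memory:');
$db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$db->exec('CREATE TABLE counter (id INTEGER PRIMARY KEY, count INTEGER)');
$db->exec('INSERT INTO counter (id, count) VALUES (1, 0)');

function next_count(PDO $db) {
    // The transaction stands in for the row lock described above:
    // read the current count, increment it, write it back.
    $db->beginTransaction();
    $count = (int)$db->query('SELECT count FROM counter WHERE id = 1')->fetchColumn();
    $count++;
    $db->exec('UPDATE counter SET count = ' . $count . ' WHERE id = 1');
    $db->commit();
    return $count;
}
```

Each call hands back the next number, so concurrent requests never see the same value.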

djna
+1  A: 

Why not lock ( http://php.net/manual/en/function.flock.php ) files in a folder?

First call locks 01,
Second call locks 02,
3rd call locks 03,
01 gets unlocked,
4th call locks 01

Basically, each PHP script tries to lock the first file it can, and when it's done it unlocks/erases the file.
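A minimal sketch of the flock() idea applied to the asker's counter, assuming a single shared counter file (the file name and the `next_file_number()` helper are illustrative, not from the thread): the exclusive lock makes the read-increment-write safe against concurrent requests.

```php
<?php
function next_file_number($counterFile) {
    // 'c+' opens for read/write, creates the file if missing, and does
    // not truncate, so the previous count survives between requests.
    $fp = fopen($counterFile, 'c+');
    flock($fp, LOCK_EX);                     // block until we hold the lock
    $current = (int)stream_get_contents($fp); // empty file reads as 0
    $next = $current + 1;
    ftruncate($fp, 0);                        // overwrite with the new count
    rewind($fp);
    fwrite($fp, (string)$next);
    flock($fp, LOCK_UN);                      // release so the next request can go
    fclose($fp);
    return $next;
}
```

Every request that calls this gets a distinct, increasing number, which can then be used in the comment's file name.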

I use this in a system with 250+ child processes spawned by a "process manager". I tried using a database, but it slowed everything down.

If you want to keep incrementing the file number for some content, I would suggest using mktime() or time():


$now=time();                       // timestamp part of the file name
$suffix=0;
while(is_file($dir.$now.'_'.$suffix)) {
  $suffix++;                       // first unused suffix for this second
}

But again, depending on how you want to read the data or use it, there are many options. Could you provide more details?

-----EDIT 1-----

  1. Each request has a "lock file", and the lock id (number) is stored in $lock.
  2. Three visitors post at the same time with the lock ids 01, 02, 03 (the last step in the situation described above).

$now=time();
$suffix=0;
$post_id=30;
$dir='posts/'.$post_id.'/';
if(!is_dir($dir)) { mkdir($dir,0777,true); }
// $lock is the lock id acquired earlier (01, 02, 03, ...)
while(is_file($dir.$now.'_'.$lock.'_'.$suffix.'.txt')) {
  $suffix++;
}

The while should not be needed, but I usually keep it anyway just in case :). That should create text files 30/69848968695_01_0.txt, ..02_0.txt and ..03_0.txt.

When you want to show the comments, you just sort them by filename.
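Reading the comments back could look like the sketch below (the `comments_in_order()` helper is an assumption for illustration). Since the names start with the timestamp, a plain string sort puts them in posting order, as long as the timestamps have the same width:

```php
<?php
function comments_in_order($dir) {
    // Drop the '.' and '..' entries scandir() always returns.
    $files = array_diff(scandir($dir), array('.', '..'));
    sort($files, SORT_STRING);   // timestamp_lock_suffix names sort chronologically
    $comments = array();
    foreach ($files as $file) {
        $comments[] = file_get_contents($dir . '/' . $file);
    }
    return $comments;
}
```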

vlad b.
Please see the edit.
Hulk
In this case $suffix will be incremented if there are multiple files, but in the case of three different requests $suffix will be 0. So to track this, I write the $suffix to a file.
Hulk
(code tag does not seem to work here, moved above)
vlad b.
Re-read your "but in case of three different requests $suffix will be 0". It will be 0 at first, but the scripts will write the files one after another: it will be 0 for the first one that tries to write the file, and 1, 2, 3, etc. for the rest.
vlad b.
Thanks, Vlad. Will try it out.
Hulk
If you have any issues, feel free to post here, or we can chat on Skype or something... I've done similar things in the past (blog and forum "frameworks" using only files) and there might be some gotchas I'm forgetting right now. If you want speed and no DB, I suggest using lighttpd if possible. I put lighttpd on a virtual server with 128 MB of RAM running a file-based blog; it uses 20 MB of RAM when idle and can handle a few hundred hits at a time.
vlad b.