views: 62
answers: 3
We have a web application (it is a game) with lots of forms and elements that act as buttons and trigger actions on the server. The problem is that users can sometimes confuse our application by clicking buttons too fast, or by opening the website in two tabs and then issuing actions simultaneously. We have some basic protection (MySQL transactions, some double-click-preventing JavaScript), but every now and then something still slips through.

Of course, the best way would be to redesign all the SQL transactions and supporting functions so that the system cannot be confused. One example of such confusion is two updates issued simultaneously: the first request changes something in the DB, but the second request still operates on the old data, so its SQL UPDATE reports "number of affected rows was zero" because the first transaction has already changed the data. The obvious solution is to read the data again right before the UPDATE to see whether it still needs updating, but that means adding double SELECT queries everywhere, which is not a good solution - why read the same data from the DB twice? We have also considered comparing a hidden token on the server for each updating request and denying operations that carry the same token id, but that also means touching a great many places in the code and possibly introducing new bugs into a system that works just fine apart from this one problem.
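For illustration, the hidden-token / conditional-update idea we are talking about would look roughly like this (PDO, with a hypothetical items table and a version column - all names here are made up):

<?php
    // Hypothetical sketch: optimistic check using a version column.
    // $pdo is an existing PDO connection; table/column names are invented.
    $stmt = $pdo->prepare(
        "UPDATE items SET owner_id = NULL, version = version + 1
         WHERE id = :id AND owner_id = :user AND version = :version"
    );
    $stmt->execute(array(
        ':id'      => $itemId,
        ':user'    => $userId,
        ':version' => $versionFromForm, // the hidden token sent with the form
    ));

    if ($stmt->rowCount() === 0) {
        // Another tab or a double click already changed the row:
        // refuse the operation instead of paying out twice.
    }
?>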

The logic of the action flow should be the following: if the user issues two requests simultaneously, the second request should wait until the first completes. But we must also consider that there are many redirects in our application (for example after POSTs, to avoid double POSTing when the user refreshes the page), so the solution should not create deadlocks for a user.

So the question is: what would be the easiest possible global fix that could somehow make all of a user's operations sequential? Is there any good universal solution?

We are using MySQL and PHP.

+1  A: 

Two tabs and simultaneous actions? How fast are your users?

Anyway: allowing only one tab would already be a good solution.

With only one tab, you can queue actions in JavaScript and submit a new one once you've got the return value from the last call.

Loïc Février
Maybe users are not that fast, but our server may be a bit laggy sometimes :D So if the user hits Sell for the same game item in both tabs or in the same tab twice, he might get his game money back twice.
Martin
Why don't you check on the server side that the user can do an action? Check it and then do it. And in JavaScript, prevent double-clicks: once the user has clicked, do the JavaScript action (move the object, delete it) and then send the request. If you do validation in JavaScript you are sure the action will be valid. But JavaScript alone has no way to address the "two tabs" problem.
Loïc Février
The problem is with some lengthy operations. If the user issues the second request at the moment when the first request is after its SELECT but before its UPDATE, then I get the following sequence: SELECT(1), SELECT(2), UPDATE(1), UPDATE(2) - and UPDATE(2) is wrong because it was executed against the old data, ignoring the changes made by UPDATE(1).
Martin
That's what transactions and locks are made for: if you are going to modify stuff, take a lock, do your SELECT/UPDATE and then release the lock.
Loïc Février
+1  A: 

I'm glad you realise that it's only a rough, temporary solution and that you should optimize your code. Forget your tokens and stuff. The easiest and still efficient way is probably to take an exclusive lock on a shared file per operation. You can picture this as a piece of wood: only one person can hold it, and only whoever holds it is allowed to talk or do something.

<?php
    $fp = fopen("/tmp/only-one-bed-available.txt", "w+");

    if (flock($fp, LOCK_EX)) { // acquire an exclusive lock
        // do some very important critical stuff here which must not be interrupted:
        // sleeping.
        sleep(60);
        echo "I now slept 60 seconds";

        flock($fp, LOCK_UN); // release the lock
    } else {
        echo "Couldn't get the lock!";
    }

    fclose($fp);
?>

If you execute this script 10 times in parallel, it will take 10 minutes to finish (because there is only one bed available!) and you will see the "I now slept..." echo roughly every 60 seconds.

This serializes ALL executions of this code GLOBALLY. That is probably not what you want (you want it on a per-user basis, don't you?). I'm sure you have something like user IDs; otherwise use the remote IP address, and build a unique filename for each user:

<?php $fp = fopen("/tmp/lock-".$_SERVER["REMOTE_ADDR"].".txt", "w+"); ?>

You can also define a lock file name for a group of operations. Maybe the user can do some things in parallel and only this one operation causes problems? Give it a tag and include it in the lock filename!
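For example (the user id variable and the "sell-item" tag are just made-up names), the lock file for one specific operation could look like this:

<?php $fp = fopen("/tmp/lock-".$userId."-sell-item.txt", "w+"); ?>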

This practice really isn't that bad and can be used! There are only a few ways to do it faster (MySQL internals, shared memory segments, serialisation on a higher layer like the load balancer...).

With the solution posted above you can do it inside your application, which is probably what you want. You can use the same scheme in MySQL as well: http://dev.mysql.com/doc/refman/5.0/en/miscellaneous-functions.html#function_get-lock
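A minimal sketch of the GET_LOCK() variant (assuming an existing mysqli connection in $db; the lock name and the 10-second timeout are arbitrary choices):

<?php
    // GET_LOCK() returns 1 when the named lock was acquired, 0 on timeout.
    $res = $db->query("SELECT GET_LOCK('user-42-sell-item', 10) AS got");
    $row = $res->fetch_assoc();

    if ($row['got'] == 1) {
        // critical section: do the SELECT/UPDATE for this user here

        $db->query("SELECT RELEASE_LOCK('user-42-sell-item')");
    } else {
        echo "Couldn't get the lock!";
    }
?>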

But the PHP implementation is probably easier and better for your use case.

If you really wanna kick ass, there's an even more elegant solution: just check out this function: http://www.php.net/manual/en/function.sem-get.php
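A rough sketch with System V semaphores (the sysvsem extension must be available; the "L" project identifier and the per-user key derivation are assumptions of this sketch, not anything official):

<?php
    // Derive a semaphore key for this user; ftok() needs an existing file path.
    $key = ftok(__FILE__, "L") + $userId;
    $sem = sem_get($key, 1); // at most one process may hold it

    if (sem_acquire($sem)) { // blocks until the semaphore is free
        // critical section: only one request per user runs this at a time

        sem_release($sem);
    }
?>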

Joe Hopfgartner
Wow, you're really milking the sleep metaphor here. :)
bzlm
P.S.: $_SERVER["REMOTE_ADDR"] might not be unique, because a lot of IPs are behind routers. I would rather call uniqid() and store the value in a cookie on the user's side instead.
Alfred
Lol, true Alfred, but there are also a lot of users behind routers who don't accept cookies, share the same user agent, have JavaScript disabled and have the same browser accept settings :D What about them? :D ... but you are right, a combination might be good.
Joe Hopfgartner
+1  A: 

How about using MySQL's GET_LOCK() function? Also, maybe you should have a look at MySQL's transactions.
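For the transaction side, a sketch of the usual SELECT ... FOR UPDATE pattern (PDO, InnoDB tables assumed; table and column names are made up):

<?php
    $pdo->beginTransaction();

    // FOR UPDATE makes a second request block until the first one commits,
    // so it always sees the already-updated row (requires InnoDB).
    $stmt = $pdo->prepare("SELECT owner_id FROM items WHERE id = :id FOR UPDATE");
    $stmt->execute(array(':id' => $itemId));
    $item = $stmt->fetch();

    if ($item && $item['owner_id'] == $userId) {
        $pdo->prepare("UPDATE items SET owner_id = NULL WHERE id = :id")
            ->execute(array(':id' => $itemId));
        // ... credit the game money exactly once ...
    }

    $pdo->commit();
?>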

Alfred