I'll admit I'm not 100% sure about the inner workings of PDO and MySQL, so I'll give an example to make my question clearer.

I'm making a rather crude browser-based strategy game, because I think it's a fun way to learn PHP and databases. I was bug-testing the battle script when I came across a rather unexpected bug. I use a cron job to call a script each minute, looking something like this:

$sql = "SELECT army_id FROM activearmy WHERE arrival=0;";

foreach($dbh->query($sql) as $row)
 {

  battleEngine($row["army_id"]);

 }

When the basic calculations are done (attacking army vs. defending army), six tables in the database are updated. The problem I face is that when I make several attacks on the same target within the same minute, every now and then one of those attacks fetches obsolete database information (in one extreme case, attack #10 fetched the same table data as attack #5).

I'm guessing this happens because the script is faster than the database? Is there a way to force PHP to wait until all relevant information is in place before repeating the function with the next $row?

EDIT: Emil is probably correct. I can't make sure, though, as I can't seem to keep my PDO connection open long enough to pass several statements between beginTransaction() and commit(). However, a dirty read seems odd, since I'm using InnoDB with REPEATABLE READ, and some Googling suggests that REPEATABLE READ makes dirty reads impossible. After contemplating how to kill myself for a while, I opted for a hack instead. For now, I put an UPDATE of a special flag value at the top of my script, and another at the end of the big batch (about six UPDATE statements) at the bottom. Before running the function, I have a while() loop which checks whether that special value is set to 0; if it's not, it sleeps for 0.01 seconds and tries again. Looking at the output, the while loop repeats an average of two times, suggesting it might actually be working? It hasn't failed yet, but that might be because it's not peak hour. I'll try again at regular intervals tomorrow. I know no one cares about all this, but I felt I should post this update just in case. =P
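In code, the hack looks roughly like this (a sketch; the battle_lock table and its busy column are names I made up for illustration):

// At the top of the batch inside battleEngine():
$dbh->exec("UPDATE battle_lock SET busy = 1");
// ... the six UPDATE statements ...
$dbh->exec("UPDATE battle_lock SET busy = 0");

// In the cron loop, before each call:
while ($dbh->query("SELECT busy FROM battle_lock")->fetchColumn() != 0) {
    usleep(10000); // sleep 0.01 seconds and try again
}
battleEngine($row["army_id"]);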

+1  A: 

If PHP just waited, you would get a high number of cron jobs stacking up.

What you really want is to use something like anacron to make sure only one instance of the script runs at any given moment. Alternatively, you could do something like the following:

<?php

// Bail out if another instance is still running.
if (file_exists('/tmp/myscriptlock')) exit();
touch('/tmp/myscriptlock');

// do your stuff

unlink('/tmp/myscriptlock');
Evert
It'd be better to use a MySQL lock for this; that way, if the job (and thus the connection) dies, the lock is automatically released. http://dev.mysql.com/doc/refman/5.0/en/miscellaneous-functions.html#function_get-lock
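A minimal sketch of that GET_LOCK() approach (the lock name and connection details are made up):

<?php

// GET_LOCK(name, timeout) returns 1 on success, 0 if another
// connection already holds the lock.
$dbh = new PDO('mysql:host=localhost;dbname=game', 'user', 'pass');
if (!$dbh->query("SELECT GET_LOCK('battle_cron', 0)")->fetchColumn()) {
    exit; // another instance is still running
}

// do your stuff

// Released explicitly here, and automatically if the connection dies.
$dbh->query("SELECT RELEASE_LOCK('battle_cron')");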
+1  A: 

What you're seeing is called a "dirty read". Use InnoDB and set the isolation level to SERIALIZABLE, then wrap all the queries in transaction blocks:

$dbh->beginTransaction();
// run queries
$dbh->commit();
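Setting the isolation level with PDO could look like this (a sketch; scoping it to the session is an assumption):

// Raise the isolation level for this session before the transaction.
$dbh->exec("SET SESSION TRANSACTION ISOLATION LEVEL SERIALIZABLE");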
Emil H
A: 

My initial thought was simple MySQL write locks, but I don't understand the problem well enough. In any case, this might help: http://www.perplexedlabs.com/2008/02/06/mutex-with-php-and-mysql/

therealsix