Hello dear fellow SOers,

I have a system sitting on a "Master Server" that is periodically transferring quite a few chunks of information from a MySQL DB to another server on the web.

Both servers have a MySQL Server and an Apache running. I would like an easy-to-use solution for this.

Currently I'm looking into:

  • XMLRPC
  • RESTful services
  • a simple POST to a processing script
  • socket transfers

The app on my master is a TurboGears app, so I would prefer "pythonic" aka less ugly solutions. Copying a dumped table to another server via FTP / SCP or something like that might be quick, but in my eyes it is also very (quick and) dirty, and I'd love to have a nicer solution.

Can anyone briefly describe how you would do this the "best-practice" way?

This doesn't necessarily have to involve databases. Dumping the table on server1 and transferring the raw data in a structured way, so server2 can process it without too much parsing, is just as good. One requirement, though: as soon as the data arrives on server2, I want it to be processed, so there has to be a notification of some sort when the transfer is done. Of course I could write my own server sitting on a socket on the second machine, accepting the file with my own code, processing it, and so forth, but this is just a very small piece of a very big system, so I don't want to spend half a day implementing this.

Thanks,

Tom

A: 

Assuming your situation allows this security-wise, you forgot one transport mechanism: simply opening a MySQL connection from one server to another.

Me, I would start with one script that runs regularly on the write server and opens a read-only DB connection to the read server (a bit of added security) and a full connection to its own database server.

How you then proceed depends on the data (are there only inserts to deal with? do you have to mirror deletes? what is the ratio of inserts to updates? etc.), but basically you could write a script that pulls data from the read server and processes it immediately into the write server.
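
A minimal sketch of that pull-and-process script in Python (the asker's stack), using the MySQLdb driver; the hostnames, credentials, and the chunks table are hypothetical placeholders:

import MySQLdb

# Read-only connection to the master ("read") server.
read_db = MySQLdb.connect(host="master.example.com", user="reader",
                          passwd="secret", db="mydb")
# Full connection to this machine's own ("write") database server.
write_db = MySQLdb.connect(host="localhost", user="writer",
                           passwd="secret", db="mydb")

read_cur = read_db.cursor()
write_cur = write_db.cursor()

read_cur.execute("SELECT id, payload FROM chunks")
for row in read_cur.fetchall():
    # Process/transform each row here, then store it locally;
    # REPLACE handles both new rows and updates to existing ones.
    write_cur.execute(
        "REPLACE INTO chunks (id, payload) VALUES (%s, %s)", row)

write_db.commit()
read_db.close()
write_db.close()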

Also, would MySQL replication work, or would it be too overblown as a solution?

James
A: 

If you have access to MySQL's data port, and don't mind the constant network traffic, you can use replication.
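
For reference, a minimal sketch of what that setup involves; the server IDs, hostnames, and credentials below are placeholders, and the log file and position must be taken from SHOW MASTER STATUS on the master:

# master's my.cnf: give it an ID and enable the binary log
[mysqld]
server-id = 1
log-bin   = mysql-bin

# slave's my.cnf: just needs a distinct ID
[mysqld]
server-id = 2

-- then on the slave (the 'repl' user needs the REPLICATION SLAVE
-- privilege on the master):
CHANGE MASTER TO
    MASTER_HOST     = 'master.example.com',
    MASTER_USER     = 'repl',
    MASTER_PASSWORD = 'secret',
    MASTER_LOG_FILE = 'mysql-bin.000001',
    MASTER_LOG_POS  = 4;
START SLAVE;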

Jeremy Smyth
+1  A: 

If the table is small and you can send the whole table, just delete the old data and then insert the new data on the remote server - then there is an easy generic solution: create one long string with the table data and send it via a web service. Here is how it can be implemented. Note that this is far from a perfect solution, just an example of how I transfer small, simple tables between websites:

function DumpTableIntoString($tableName, $includeFieldsHeader = true)
{
  global $adoConn; // ADOdb connection, assumed to be set up elsewhere

  $recordSet = $adoConn->Execute("SELECT * FROM $tableName");
  if(!$recordSet) return false;

  $data = "";

  if($includeFieldsHeader)
  {
    // first line: comma-separated column names
    $numFields = $recordSet->FieldCount();
    for($i = 0; $i < $numFields; $i++)
      $data .= $recordSet->FetchField($i)->name . ",";
    $data = substr($data, 0, -1) . "\n";
  }

  while(!$recordSet->EOF)
  {
    $row = $recordSet->GetRowAssoc();
    foreach ($row as &$value)
    {
      // encode NULLs first, before the string mangling below turns them into ""
      if($value === null)
      {
        $value = "\"\\N\"";
        continue;
      }
      // strip newlines and escape quotes: both would break this line-based
      // format (values containing commas are not supported either)
      $value = str_replace(array("\r\n", "\n"), "", $value);
      $value = str_replace('"', '\\"', $value);
      $value = "\"" . $value . "\"";
    }
    unset($value); // break the reference left over from the foreach

    $data .= join(',', $row);
    $recordSet->MoveNext();

    if(!$recordSet->EOF)
      $data .= "\n";
  }

  return $data;
}

// NOTE: this function does not yet handle the fields header;
// if one is included, it is simply skipped.
function FillTableFromDumpString($tableName, $dumpString, $truncateTable = true, $fieldsHeaderIncluded = true)
{
  global $adoConn; // ADOdb connection, assumed to be set up elsewhere

  if($truncateTable)
    if($adoConn->Execute("TRUNCATE TABLE $tableName") === false)
      return false;

  $rows = explode("\n", $dumpString);
  $startRowIndex = $fieldsHeaderIncluded ? 1 : 0;
  $numRows = count($rows);

  // nothing to insert (empty dump, or header only)
  if($numRows <= $startRowIndex)
    return true;

  $query = "INSERT INTO $tableName VALUES ";

  for($i = $startRowIndex; $i < $numRows; $i++)
  {
    // naive split: values containing commas are not supported
    $row = explode(",", $rows[$i]);

    foreach($row as &$value)
    {
      if($value == "\"\\N\"") // the NULL marker written by the dump
        $value = "NULL";
    }
    unset($value);

    $query .= "(". implode(",", $row) .")";
    if($i != $numRows - 1)
      $query .= ",";
  }

  if($adoConn->Execute($query) === false)
    return false;

  return true;
}

If you have large tables, then I think you need to send only the new data. Ask your remote server for its latest timestamp, then read all newer data from your main server and send the data either in a generic way (as I've shown above) or in a non-generic way (in which case you have to write separate functions for each table).
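
A rough sketch of that timestamp bookkeeping on the Python side; the table, last_modified column, and connection details are hypothetical, and the fetched rows could just as well be serialized and sent through a web service instead of the direct connection shown here:

import MySQLdb

local_db  = MySQLdb.connect(host="localhost", user="app",
                            passwd="secret", db="mydb")
remote_db = MySQLdb.connect(host="server2.example.com", user="app",
                            passwd="secret", db="mydb")

# 1. Ask the remote server for the newest row it already has.
cur = remote_db.cursor()
cur.execute("SELECT MAX(last_modified) FROM chunks")
latest = cur.fetchone()[0]  # None if the remote table is empty

# 2. Read everything newer than that from the main server ...
cur = local_db.cursor()
if latest is None:
    cur.execute("SELECT id, payload, last_modified FROM chunks")
else:
    cur.execute("SELECT id, payload, last_modified FROM chunks "
                "WHERE last_modified > %s", (latest,))
new_rows = cur.fetchall()

# 3. ... and push it to the remote server.
cur = remote_db.cursor()
cur.executemany(
    "REPLACE INTO chunks (id, payload, last_modified) "
    "VALUES (%s, %s, %s)", new_rows)
remote_db.commit()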

nightcoder
+2  A: 

Server 1: convert the rows to JSON and call the RESTful API of the second server with the JSON data.

Server 2: listen on a URI, e.g. POST /data; get the JSON data, convert it back to dictionaries or ORM objects, and insert it into the DB.

SQLAlchemy/SQLObject and simplejson are what you need.
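
A minimal sketch of the sending side, assuming simplejson and the standard library's urllib2; the URL, table, and column names are placeholders:

import MySQLdb
import simplejson
import urllib2

db = MySQLdb.connect(host="localhost", user="app",
                     passwd="secret", db="mydb")
cur = db.cursor()
cur.execute("SELECT id, payload FROM chunks")
rows = [{"id": r[0], "payload": r[1]} for r in cur.fetchall()]

# POST the rows as one JSON document to server 2's REST endpoint.
request = urllib2.Request(
    "http://server2.example.com/data",
    data=simplejson.dumps(rows),
    headers={"Content-Type": "application/json"})
response = urllib2.urlopen(request)

On server 2, the POST /data controller decodes the body with simplejson.loads() and inserts the dictionaries via the ORM; since processing can start right in that request handler, this also gives you the "process as soon as it arrives" notification the question asks for.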

Anurag Uniyal
A: 

If you're using MyISAM or Archive tables, then I would highly recommend mysqlhotcopy.
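
Purely as an illustration, a hedged sketch of how that could be wired up from Python: mysqlhotcopy makes a fast file-level snapshot locally, so the files still have to be shipped to server 2 (paths and hostnames are placeholders):

import subprocess

# Snapshot the MyISAM table files of database "mydb".
subprocess.check_call(["mysqlhotcopy", "mydb", "/var/backups/mydb"])

# Ship the snapshot to server 2; the flag file touched at the end can
# serve as the "transfer done, start processing" notification.
subprocess.check_call(["scp", "-r", "/var/backups/mydb",
                       "server2.example.com:/var/incoming/"])
subprocess.check_call(["ssh", "server2.example.com",
                       "touch /var/incoming/mydb.done"])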

Chris