Master-master replication is not as good as You might think, and the same goes for a round-robin proxy and similar 'easy' solutions. If You commit colliding data to separate servers fast enough (faster than the replication delay between the servers, which on production servers might be up to a full second*), both will accept the data. If You run an auction server, You have just sold the same car twice. Who bought it? It depends on which DB You ask!
The application must be aware that there are actually two databases out there, and it has to know both of their IP addresses. If You want to "sell" something, You should, for example, compute:
DB_number = `auction_number` % `number_of_databases`
(`%` is the modulo operator)
... and commit it to database number DB_number. If You get a connection error, You could perhaps retry against the other one (but in the case of an auction server, I'd rather just display an error).
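Here is a minimal sketch of that idea in Python; the host names, credentials and table layout are invented for illustration, and mysql-connector-python is assumed as the client library:

```python
import mysql.connector

DB_HOSTS = ["db1.example.com", "db2.example.com"]  # both masters (hypothetical)

def shard_for(auction_number: int) -> int:
    # The same auction always maps to the same server, so two
    # concurrent bids on one auction can never land on different masters.
    return auction_number % len(DB_HOSTS)

def sell(auction_number: int, buyer_id: int) -> None:
    host = DB_HOSTS[shard_for(auction_number)]
    conn = mysql.connector.connect(
        host=host, user="app", password="secret", database="auctions"
    )
    try:
        cur = conn.cursor()
        # Only sell if nobody bought it yet, on this auction's home server.
        cur.execute(
            "UPDATE auction SET buyer_id = %s WHERE id = %s AND buyer_id IS NULL",
            (buyer_id, auction_number),
        )
        conn.commit()
    finally:
        conn.close()
```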
Also, the IP addresses should be Wackamole'd between both servers. In a disaster scenario, where one database server goes down for a couple of hours at peak usage time, You will find that the application keeps trying to connect to the absent server and hangs until the timeout, say 3s. Suddenly half of Your queries run 3s longer (and they all end up on the same database anyway, which doesn't run any faster than before the disaster). This doesn't make Your httpd happy, as it probably has a limited pool of concurrent request-handler threads...
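To make that failure mode concrete, here is a hedged sketch of a fallback connect with a short timeout; the hosts, credentials and the 3-second value are assumptions, using mysql-connector-python's `connection_timeout` parameter:

```python
import mysql.connector
from mysql.connector import Error

def connect_with_fallback(primary: str, fallback: str):
    # Without a short connect timeout, every request aimed at the dead
    # master blocks a worker thread for the full TCP/default timeout.
    for host in (primary, fallback):
        try:
            return mysql.connector.connect(
                host=host, user="app", password="secret",
                database="auctions", connection_timeout=3,
            )
        except Error:
            continue  # this master is down; try the survivor
    raise RuntimeError("both database servers are unreachable")
```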
* Replication delay on production servers might be up to a full second. I have tested this in a remote colocation and in our own datacenter: maybe 99% of the time it's 0, but sometimes MySQL shows 1s. Under massive traffic I had many collisions because the client application made two requests in a row, resulting in an INSERT and then a SELECT, and these could land on different servers; in some cases the row just wasn't replicated yet. We used a hash of the userID to pin each user to one server, and that fixed the problem.
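That fix could look something like the sketch below; the particular hash function and the server count are my assumptions here, not necessarily what We actually ran:

```python
import hashlib

def server_for_user(user_id: str, n_servers: int = 2) -> int:
    # Pin every user to one master, so their INSERT is always visible
    # to their own follow-up SELECT regardless of replication delay.
    digest = hashlib.sha1(user_id.encode("utf-8")).hexdigest()
    return int(digest, 16) % n_servers
```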
I hope You will learn from my mistakes ;-)