I'm currently using MySQL with PHP because that's what I learned and I've never had to use anything else. In my current project I have a database with 10 million rows and about 10 columns, and I've found it to be very slow when I run complex queries, both in my local (Windows) environment and in production (Linux). Both servers have over 10 GB of RAM. I already have Zend AMF installed, so all data transfer is binary.
I am self-taught and have never had anyone teach me the ins and outs of efficient database management. My first thought was to break the DB into chunks and then change all my PHP code, but that seems like a lot of hassle and prone to errors on the PHP side. Is there a better way I'm not aware of?
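For what it's worth, the "chunking" I had in mind might be what MySQL calls partitioning. A rough sketch of what I think that would look like, splitting on the date_added column from my query further down (I haven't tried this, and the table/column names are just placeholders from my own schema):

-- Sketch only: native range partitioning by year of date_added
ALTER TABLE table
    PARTITION BY RANGE (YEAR(date_added)) (
        PARTITION p2008 VALUES LESS THAN (2009),
        PARTITION p2009 VALUES LESS THAN (2010),
        PARTITION pmax  VALUES LESS THAN MAXVALUE
    );

If I understand it correctly, that would let MySQL prune to a single partition for the date-based queries instead of me splitting the data by hand in PHP, but I don't know whether it's the right approach here.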
My UI is Flash-based, and I am willing to change the middleware if it means a significant increase in speed. At this point PHP is the only middleware I really know.
I don't want to learn another database at this point, and I figure there have to be some tricks of the trade for optimizing performance, but I wanted to get some ideas before I hacked away at my DB.
FYI: the database is read-only from a user's standpoint; users are NOT entering any new data. Also, of the 10 million rows, only about 200,000 are heavily used. (I don't know WHICH 200K rows those are, since recording that info would probably slow down my DB even more.)
Each call is something like:
SELECT name, id, address
FROM table
WHERE date_added=?
ORDER BY no_children ASC
LIMIT ?, ?
... and various other "select" statements.
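From reading around, I'm guessing an index that matches that filter and sort might be part of the answer. Something like this is what I had in mind (the table and column names come straight from the example above, and I'm not sure the composite order is right):

-- Sketch only: composite index covering the WHERE and ORDER BY above
CREATE INDEX idx_date_added_no_children
    ON table (date_added, no_children);

My understanding is that with an equality match on date_added, the ORDER BY no_children could be served from the index too, but I'd want to confirm that with EXPLAIN before relying on it.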