Essentially your question is: which of the above is better for a multi-tenant SaaS application?
A thousand little sqlite databases may sound appealing, but they may not scale.
I'll address your points in turn:
- backing up is straightforward: set version, zip, upload.
What happens if an update occurs during your backup? How long does your backup take (with, say, a thousand accounts with a million posts each)? Does your backup lock the database for the duration, or does it back up a consistent view of the data, or neither? Does a database backup taken during updates restore correctly in each case? Have you tested these things?
I think backing up a sqlite database isn't as easy as you seem to think, because of the concurrent access issue.
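For what it's worth, sqlite does have an online backup API that takes a consistent snapshot while the database is in use, which is a safer starting point than zipping the raw file. A minimal sketch in Python (the file names are placeholders, and you would still have to loop this over every account and answer the "how long does it take" question above):

```python
import sqlite3

# Consistent online backup of one tenant database using sqlite's backup API
# (exposed in Python 3.7+ as Connection.backup). File names are placeholders.
src = sqlite3.connect("tenant_0042.db")
dst = sqlite3.connect("tenant_0042.backup.db")

# Copies the database page by page and produces a consistent snapshot,
# unlike zipping the raw file while writes may be in flight.
src.backup(dst)

dst.close()
src.close()
```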
- SQLite is lightning fast, no expensive connection time...
Your implication that the MySQL connection time would be "expensive" may be false. Do you have hard data? Connecting to a server over a LAN does not take very long in practice.
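If you want hard data, it's trivial to measure. A rough sketch, assuming PyMySQL as the driver; the hostname and credentials are made up:

```python
import time
import sqlite3
import pymysql  # assumption: PyMySQL is installed; connection details below are placeholders

def time_it(label, connect, runs=100):
    # Average wall-clock cost of opening and closing a connection.
    start = time.perf_counter()
    for _ in range(runs):
        conn = connect()
        conn.close()
    elapsed = (time.perf_counter() - start) / runs
    print(f"{label}: {elapsed * 1000:.2f} ms per connection")

time_it("sqlite open", lambda: sqlite3.connect("tenant_0042.db"))
time_it("mysql connect over LAN", lambda: pymysql.connect(
    host="db.internal", user="app", password="secret", database="saas"))
```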
- Table schema is much simpler: no need to make any distinction between accounts every time
Have you thought about how you'll do a migration if you ever need to change the schema on these 1,000 small databases? What impact will it have on the service?
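To make that concrete, a schema change with one database per account means applying the same DDL to every file and tracking the failures so you can retry or roll back. A rough sketch; the path layout and the migration statements are placeholders:

```python
import glob
import sqlite3

# Placeholder migration: the same DDL has to succeed on every tenant database.
MIGRATION = [
    "ALTER TABLE posts ADD COLUMN edited_at TEXT",
    "CREATE INDEX IF NOT EXISTS idx_posts_edited_at ON posts (edited_at)",
]

paths = glob.glob("/data/tenants/*.db")  # placeholder layout: one file per account
failed = []

for path in paths:
    conn = sqlite3.connect(path, isolation_level=None)  # autocommit; manage the txn by hand
    try:
        conn.execute("BEGIN")
        for stmt in MIGRATION:
            conn.execute(stmt)
        conn.execute("COMMIT")
    except sqlite3.Error as exc:
        conn.rollback()  # sqlite DDL is transactional, so partial work is undone
        failed.append((path, exc))
    finally:
        conn.close()

print(f"migrated {len(paths) - len(failed)} databases, {len(failed)} failures to investigate")
```

Multiply that by every migration your team ever ships, and factor in what happens to the accounts that fail halfway through.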
I'll now add some of my own:
- Scalability: If you are relying on a local filesystem with locking semantics (as I believe sqlite does), you cannot simply add more web servers, as they would each have their own filesystem.
Because sqlite is not a network-based system, you can't just add more web servers. You would need to either partition your users across several servers and ensure that they only ever hit their own "home" server (which is going to introduce some issues but may be viable), or figure out some way to share a sqlite database between servers, which is not going to be pretty, and may well erase any perceived performance advantages that it ever had over (e.g.) MySQL.
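To illustrate the "home server" option, the routing ends up looking something like the sketch below (server names are made up). Note that the naive hash-modulo scheme shown reshuffles most tenants whenever you add a box, which is exactly the kind of pain a central database spares you:

```python
import hashlib

# Illustration only: every request for an account must be routed to the one
# box that holds that account's sqlite file. Hostnames are made up; a real
# setup also needs a plan for moving tenants when one box fills up.
SERVERS = ["app1.internal", "app2.internal", "app3.internal"]

def home_server(account_id: str) -> str:
    digest = hashlib.sha1(account_id.encode()).hexdigest()
    return SERVERS[int(digest, 16) % len(SERVERS)]

print(home_server("account-0042"))
```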
- Maintainability: If your development team ever makes a schema change to the database (which is not just possible, but very likely), it will need to be applied to these 1,000 tiny databases. Successfully. With a rollback plan.
I don't think a system with a thousand tiny sqlite databases will scale. In particular, you will probably find that instead of 1,000 tiny ones, you end up with 995 tiny ones and 5 rather large ones.
Using a dedicated MySQL server will enable you to carry out central backups and migrations. It will enable you to use the resources on that box (i.e. RAM) to cache the most frequently used parts of the database, whichever account they happen to be in.
The RAM used to cache a large MySQL database (e.g. the InnoDB buffer pool) is reused between requests and shared across all of the data (tables, rows, columns) in it. A sqlite database, by contrast, only has a small per-connection page cache: it is not shared between connections or processes, and it is thrown away when the connection closes (the OS filesystem cache helps a little, but there is no shared buffer pool).
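To make that concrete, the sqlite page cache is configured per connection and lives only as long as that connection does (the size below is a placeholder):

```python
import sqlite3

conn = sqlite3.connect("tenant_0042.db")
# 64 MiB of page cache for THIS connection only; other connections and
# processes get nothing from it, and it vanishes when the connection closes.
conn.execute("PRAGMA cache_size = -65536")  # negative value = size in KiB
```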
My suggestions:
- Consider the above points, ignore them if you like
- MEASURE the performance of your application under a high simulated load on production-grade hardware. Make sure you use production-grade hardware for your database server (e.g. a battery-backed RAID controller); see the sketch after this list.
- Compare it with a sqlite implementation, if you can.
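As a starting point for that measurement, something along these lines will at least give you numbers to compare. It's a crude sketch (table name, workload mix and sizes are made up); a real test needs production-grade hardware, realistic data volumes, and the same exercise run against the MySQL implementation:

```python
import sqlite3
import threading
import time

# Crude load sketch: N threads hammer one tenant database with a mixed
# read/write workload for a fixed time, reporting throughput and lock errors.
DB_PATH = "tenant_0042.db"  # placeholder
THREADS, SECONDS = 8, 10

setup = sqlite3.connect(DB_PATH)
setup.execute("CREATE TABLE IF NOT EXISTS posts (id INTEGER PRIMARY KEY, body TEXT)")
setup.commit()
setup.close()

def worker(done, errors, idx):
    conn = sqlite3.connect(DB_PATH, timeout=30)
    deadline = time.time() + SECONDS
    while time.time() < deadline:
        try:
            conn.execute("INSERT INTO posts (body) VALUES ('x')")
            conn.commit()
            conn.execute("SELECT COUNT(*) FROM posts").fetchone()
            done[idx] += 1
        except sqlite3.OperationalError:
            errors[idx] += 1  # typically "database is locked" under write contention
    conn.close()

done, errors = [0] * THREADS, [0] * THREADS
threads = [threading.Thread(target=worker, args=(done, errors, i)) for i in range(THREADS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(f"{sum(done) / SECONDS:.0f} read+write iterations/sec, {sum(errors)} lock errors")
```

The lock-error count alone should tell you something about how sqlite behaves under concurrent writers, before you even compare the throughput numbers.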