In one of the Stack Overflow podcasts (#18, I think), Jeff and Joel were talking about multi-tenant vs. single-tenant databases. Joel mentioned that FogBugz On Demand uses a database-per-customer architecture, and I was wondering: is there a point beyond which you'll need multiple database servers to distribute the load?
Technically, the limit on databases per instance in SQL Server is 32,767, but I doubt you could actually use an instance with more than about 2,000 databases; at that point the server would probably become unresponsive.
You may be able to get close to 30,000 databases if they are all auto-closed and not in use. You can find more information about capacity limits here:
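If you go that route, AUTO_CLOSE is a per-database option. A sketch of turning it on and counting the databases on the instance (standard T-SQL; the database name is a placeholder):

```sql
-- Enable AUTO_CLOSE so the database releases its resources
-- when the last connection closes (placeholder database name):
ALTER DATABASE [CustomerDb_0001] SET AUTO_CLOSE ON;

-- How many databases does this instance currently hold?
SELECT COUNT(*) AS database_count
FROM sys.databases;
```

Note that AUTO_CLOSE ON is generally discouraged for databases that see regular traffic, since the constant open/close churn hurts performance; it only helps in the "not being used" case described above.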
I'd think it mostly depends on the memory limits of the machine. SQL Server likes to keep as much as possible cached in memory, and every database you add reduces the memory available to the others.
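You can see roughly how the buffer pool is split across databases with the buffer descriptor DMV (available since SQL Server 2005); a minimal sketch:

```sql
-- Approximate buffer-pool memory cached per database
-- (each page is 8 KB; DB_NAME resolves the database id).
SELECT DB_NAME(database_id)  AS database_name,
       COUNT(*) * 8 / 1024   AS cached_mb
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY cached_mb DESC;
```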
I think it is more a question of the load on the databases. As was said above, with no load the limit is 32,767. Under heavy load it comes down, eventually to one, or even less than one.
In addition, you might want to consider the number of connections to the SQL Server. After 500 to 1,000 connections it gets very clogged and slow, so that is a limitation as well.
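One way to keep an eye on this is the sessions DMV (SQL Server 2005+), comparing current user sessions against the configured maximum; a sketch:

```sql
-- Current user connections on the instance:
SELECT COUNT(*) AS user_sessions
FROM sys.dm_exec_sessions
WHERE is_user_process = 1;

-- Configured connection limit (0 = dynamic, up to 32,767):
SELECT value_in_use
FROM sys.configurations
WHERE name = 'user connections';
```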
Joel has talked about this elsewhere (sorry, no reference handy) and said that before switching to MS SQL Server 2005, the management console (and the backend) had problems attaching more than 1,000 or 2,000 databases. It seems that 2005, and probably 2008 as well, improved on these numbers.
As with all performance questions, the answer depends on your actual hardware and workload, and can only be settled definitively by local benchmarking and system monitoring.
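For the monitoring side, a common starting point is the wait-statistics DMV (SQL Server 2005+); high waits point at the actual bottleneck (I/O, locking, memory, CPU):

```sql
-- Top waits accumulated since the last service restart:
SELECT TOP (10)
       wait_type,
       wait_time_ms,
       waiting_tasks_count
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;
```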