I know that in SQL Server, the maximum number of "objects" in a database is a little over 2 billion. Objects include tables, views, stored procedures, indexes, among other things. I'm not at all worried about going beyond 2 billion objects. However, what I would like to know is whether SQL Server suffers a performance hit from having a large number of tables. Does each table you add have a performance cost, or is there basically no difference (assuming a constant amount of data)? Does anybody have experience working with databases with thousands of tables? I'm also wondering the same about MySQL.
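(For reference, here's roughly how I've been eyeballing the counts; I'm assuming sys.objects and sys.indexes are the right catalog views for this.)

```sql
-- Count schema-scoped objects (tables, views, procedures, etc.) in the current database
SELECT COUNT(*) AS object_count
FROM sys.objects;

-- Indexes live in their own catalog view
SELECT COUNT(*) AS index_count
FROM sys.indexes;
```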
No difference, assuming constant amount of data.
Probably a gain in practical terms, because of things like reduced maintenance windows (smaller index rebuilds), the ability to have read-only filegroups, etc. (a rough sketch of that is below).
Performance is determined by queries and indexes (at the most basic level), not by the number of objects.
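For example, this is the sort of per-table maintenance you get when data is broken out into smaller tables and filegroups (the table, database, and filegroup names are made up for illustration):

```sql
-- Rebuild the indexes of one small table instead of one giant one
ALTER INDEX ALL ON dbo.Orders2010 REBUILD;

-- Park historical data on its own filegroup and mark it read-only
-- (assumes the Sales database already has a filegroup named ARCHIVE)
ALTER DATABASE Sales MODIFY FILEGROUP [ARCHIVE] READ_ONLY;
```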
I doubt SQL Server will have a performance problem working with thousands of tables, but I sure would.
I've worked on databases with hundreds of tables in SQL Server with no problems, though.
SQL Server can suffer a larger performance hit from using tables with many, many columns instead of breaking out a related table (even one with a one-to-one relationship). Also, a wide table can run into problems when the data you want to insert exceeds the number of bytes a row can hold. You can create a table that has the potential to store, for example, 10,000 bytes, but you will still only be able to store 8,060 bytes per row.
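Here's a quick sketch of that row-size limit using fixed-length columns (table names are made up; behavior with variable-length columns differs because SQL Server 2005 and later can push overflowing varchar data off-row):

```sql
-- Fails outright: the minimum (fixed-length) row size already exceeds 8060 bytes
CREATE TABLE dbo.TooWideFixed
(
    c1 char(5000) NOT NULL,
    c2 char(5000) NOT NULL
);

-- Creates fine, but whether a ~10,000-byte row can actually be stored depends
-- on the version: older versions reject it, newer ones push data off-row
CREATE TABLE dbo.WideVariable
(
    c1 varchar(5000) NOT NULL,
    c2 varchar(5000) NOT NULL
);
```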
In terms of the max number of tables, I have had a database with 2 million tables. No performance hit at all. My tables were small, around 15 MB each.