I am updating a piece of legacy code in one of our web apps. The app allows the user to upload a spreadsheet, which we then process as a background job.
Each of these user uploads creates a new table to store the spreadsheet data, so the number of tables in my SQL Server 2000 database will grow quickly - thousands of tables in the near term. I'm worried that this might not be something SQL Server is optimized for.
It would be easiest to keep this mechanism as-is, but I don't want to leave a time bomb that is going to blow up later. Better to fix it now if it needs fixing (the obvious alternative is one large table with a key associating records with user batches).
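To illustrate, the single-table alternative would look roughly like this (column names are placeholders, not our actual schema; the batch id would come from wherever we track uploads):

```sql
-- One shared table; a BatchId column ties rows back to a particular upload.
CREATE TABLE UploadedRows (
    BatchId   int          NOT NULL,  -- identifies the user upload (placeholder name)
    RowNumber int          NOT NULL,  -- row's position within the spreadsheet
    ColA      varchar(255) NULL,      -- ...whatever columns the per-upload tables share
    ColB      varchar(255) NULL,
    CONSTRAINT PK_UploadedRows PRIMARY KEY (BatchId, RowNumber)
)

-- Pulling back one user's batch is then just a keyed lookup:
SELECT RowNumber, ColA, ColB
FROM UploadedRows
WHERE BatchId = 42
```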
Is this architecture likely to create a performance problem as the number of tables grows? And if so, could the problem be mitigated by upgrading to a later version of SQL Server?
Edit: Some more information in response to questions:
- Each of these tables has the same schema. There is no reason that it couldn't have been implemented as one large table; it just wasn't.
- Deleting old tables is also an option. They might be needed for a month or two, no longer than that.
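If it stays as table-per-upload, cleanup would be something along these lines. This is just a sketch: it assumes the per-upload tables follow a naming pattern (I'm using 'Upload_%' purely as an example) and can simply be dropped after about 60 days:

```sql
-- Drop per-upload tables older than 60 days (SQL Server 2000 syntax;
-- sysobjects.crdate holds each table's creation date).
DECLARE @name sysname, @sql nvarchar(4000)

DECLARE stale CURSOR FOR
    SELECT name
    FROM sysobjects
    WHERE xtype = 'U'                             -- user tables only
      AND name LIKE 'Upload[_]%'                  -- placeholder naming pattern
      AND crdate < DATEADD(day, -60, GETDATE())

OPEN stale
FETCH NEXT FROM stale INTO @name
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = N'DROP TABLE [' + @name + N']'
    EXEC sp_executesql @sql
    FETCH NEXT FROM stale INTO @name
END

CLOSE stale
DEALLOCATE stale
```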