I have created a database in PostgreSQL; let's call it testdb.
I have a generic set of tables inside this database: xxx_table_one, xxx_table_two and xxx_table_three.
Now, I have Python code where I want to dynamically create and remove "sets" of these 3 tables in my database, with a unique identifier in the table names distinguishing the sets from each other, e.g.
Set 1
testdb.aaa_table_one
testdb.aaa_table_two
testdb.aaa_table_three
Set 2
testdb.bbb_table_one
testdb.bbb_table_two
testdb.bbb_table_three
The reason I want to do it this way is to keep multiple LARGE collections of related data separate from each other. I need to regularly overwrite individual data collections, and that is easy if I can just drop a collection's tables and recreate a complete new set. I should also mention that the different data collections fit the same schema, so instead of separating them into different tables, I could store them all in 1 set of tables and use an identifier column to distinguish the collections.
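To make the intent concrete, here is a minimal sketch of the create/drop logic I have in mind, using psycopg2; the column definitions and connection details are placeholders, not my real schema:

```python
# Minimal sketch with psycopg2; column definitions and connection
# details are placeholders, not the real schema.
import psycopg2
from psycopg2 import sql

# One entry per table in a "set"; the set prefix is prepended to each name.
TABLE_DEFINITIONS = {
    "table_one": "id serial PRIMARY KEY, payload text",
    "table_two": "id serial PRIMARY KEY, payload text",
    "table_three": "id serial PRIMARY KEY, payload text",
}

def create_set(conn, prefix):
    """Create one set of tables, e.g. aaa_table_one, aaa_table_two, ..."""
    with conn.cursor() as cur:
        for suffix, columns in TABLE_DEFINITIONS.items():
            # Table names cannot be bound as query parameters, so use
            # sql.Identifier to quote them safely.
            cur.execute(sql.SQL("CREATE TABLE {} ({})").format(
                sql.Identifier(f"{prefix}_{suffix}"), sql.SQL(columns)))
    conn.commit()

def drop_set(conn, prefix):
    """Drop one set of tables so the collection can be rebuilt."""
    with conn.cursor() as cur:
        for suffix in TABLE_DEFINITIONS:
            cur.execute(sql.SQL("DROP TABLE IF EXISTS {}").format(
                sql.Identifier(f"{prefix}_{suffix}")))
    conn.commit()

conn = psycopg2.connect(dbname="testdb")
drop_set(conn, "aaa")    # overwrite data collection "aaa"
create_set(conn, "aaa")
conn.close()
```

(DDL is transactional in PostgreSQL, so a whole set can be dropped or created atomically.)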
I want to know a few things:
- Does PostgreSQL limit the number of tables per database?
- What is the effect on performance, if any, of having a large number of tables in 1 database?
- What is the effect on performance of saving the data collections in different sets of tables compared to saving them all in the same set? E.g. I guess I would need to write more queries (or combine them with UNION) if I want to query multiple data collections at once when the data is spread across tables, as compared to just 1 set of tables (sketched below).
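For illustration, this is the kind of query-side difference I mean; the column and identifier names here are hypothetical:

```python
# Hypothetical column/identifier names, just to show the query shapes.

# With separate table sets, querying several collections at once means
# building a UNION ALL over the prefixed tables:
prefixes = ["aaa", "bbb"]
union_query = " UNION ALL ".join(
    f"SELECT '{p}' AS collection, * FROM {p}_table_one" for p in prefixes
)

# With 1 shared set of tables, the same thing is a single query with a
# filter on the identifier column:
shared_query = "SELECT * FROM table_one WHERE collection IN ('aaa', 'bbb')"
```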