Have you ascertained that this is really an issue? I only mention that because the cost of an open database is likely quite small, specifically "opening" most likely consists of syncing up any outstanding transactions waiting for the database, and doing a basic consistency check (notably loading a couple of pages of data stored on the disk).
Once that is done, without activity, there isn't really a lot of data to maintain on the server.
If you think about it, the most basic function of a DB system is to manage the caching of database pages in memory. When a request is made for a piece of data, the system locates the page it's on and checks RAM to see if that page is loaded. If not, it loads it from disk.
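To make that concrete, here is a minimal sketch of that page-cache idea, assuming a fixed 4 KB page size and a simple LRU eviction policy (the class name `PageCache` and all details are my own illustration, not any real engine's implementation):

```python
from collections import OrderedDict

PAGE_SIZE = 4096  # bytes per page; a common default, purely illustrative


class PageCache:
    """Tiny LRU cache of disk pages, keyed by page number."""

    def __init__(self, path, capacity=4):
        self.path = path
        self.capacity = capacity
        self.pages = OrderedDict()  # page_no -> bytes, oldest first

    def get_page(self, page_no):
        if page_no in self.pages:            # cache hit: mark most recently used
            self.pages.move_to_end(page_no)
            return self.pages[page_no]
        with open(self.path, "rb") as f:     # cache miss: fetch from disk
            f.seek(page_no * PAGE_SIZE)
            data = f.read(PAGE_SIZE)
        self.pages[page_no] = data
        if len(self.pages) > self.capacity:  # evict the least recently used page
            self.pages.popitem(last=False)
        return data
```

Every lookup, whether for user data or the system's own metadata, goes through the same path.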
You'll also notice that the vast bulk of DB metadata is stored -- in the database itself. That means that when the system wants to know anything, it effectively uses itself to locate the information, specifically through that same data page caching subsystem.
Like any other cache, as data expires and is no longer needed, it gets flushed back out to disk and refetched when necessary.
So, this implies that once a database has been "opened", any information really necessary to maintain its state will likely be maintained via the data cache subsystem and, for an unused database, released back to disk to make room for current traffic.
This is why I'm curious whether you've tested your candidate DBs to see if you run into issues with this, or whether the database even has the concept of "opening a database".
When, as a client, we discuss this, the focus tends to be on connections to the database server. But once all of those are closed, I do not think that the system is going to retain any significant amount of in memory data about a particular database that is inactive.
After all, all (ALL) of the data in a database is stored "the same", a table is a table is a table, an index is an index is an index, particularly on central server where all of the data pages are managed as a single big "soup" of data.
The only issue you might run into is if your database happens to create a file specifically for each database, and that file remains open. Eventually you may run out of file descriptors.
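If you want to see where that ceiling is on your machine, a quick check (Unix-only, using Python's standard `resource` module; this is just a diagnostic sketch, not something specific to any database) looks like:

```python
import resource

# RLIMIT_NOFILE is the per-process cap on open file descriptors.
# "soft" is the currently enforced limit; "hard" is the ceiling
# the soft limit can be raised to without extra privileges.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"open-file limit: soft={soft}, hard={hard}")
```

If one server process holds one file open per database, the soft limit is roughly the number of databases it can keep open at once before `open()` starts failing.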
But most of the modern systems don't do that, they store everything in a big blob of files irrespective of what database or schema they're in (barring specific table space allocations that you make or the server allows, of course).
So, essentially, I don't think this is a problem, as I don't think modern databases really make the kind of distinctions you're talking about internally. Multiple databases or schemas are a logical artifact within the system, not a technical one in the implementation, and all of the data pages end up in the same cache and use the same resources regardless of what schema, database, table, or index they come from.
I would do some testing on your database of choice to see if this is a problem. For example, you could create 1M databases, lower the memory available to the database as much as possible, and then just start cycling through them, opening however many at a time you feel is appropriate (10, 100, 1000, whatever) and see if you have any problems.
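A scaled-down sketch of that test, using SQLite (stdlib `sqlite3`) since it is conveniently file-per-database and so directly exercises the file-descriptor concern above; a server database would need the same shape of loop written against its own client library, and the function name and counts here are my own invention:

```python
import os
import sqlite3
import tempfile


def cycle_databases(n_dbs=100, open_at_a_time=10, workdir=None):
    """Create n_dbs tiny databases, then open them in batches,
    run a trivial query against each, and close them again."""
    workdir = workdir or tempfile.mkdtemp()
    paths = []
    for i in range(n_dbs):
        path = os.path.join(workdir, f"db_{i}.sqlite")
        con = sqlite3.connect(path)
        con.execute("CREATE TABLE IF NOT EXISTS t (x INTEGER)")
        con.execute("INSERT INTO t VALUES (1)")
        con.commit()
        con.close()
        paths.append(path)

    # Cycle: hold open_at_a_time connections at once, query, close, move on.
    for start in range(0, n_dbs, open_at_a_time):
        batch = [sqlite3.connect(p) for p in paths[start:start + open_at_a_time]]
        for con in batch:
            assert con.execute("SELECT x FROM t").fetchone() == (1,)
        for con in batch:
            con.close()
    return len(paths)
```

Crank `n_dbs` up toward your real count and watch memory and descriptor usage while it runs; if nothing degrades, you have your answer empirically rather than by my gut instinct.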
Finally, I don't "know" any of this for any specific database, it's just gut instinct on how historically databases are implemented.