First, some vocabulary. SQLAlchemy defines schemas with MetaData objects, which contain objects representing tables and other database entities. A MetaData object can optionally be "bound" to an engine, and an engine is the closest thing to what you're thinking of as a "pool": each engine manages a connection pool internally.
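If it helps to see the distinction, here's a minimal sketch of bound versus unbound metadata, assuming the classic bound-metadata API (note that it was removed in SQLAlchemy 2.0):

    from sqlalchemy import MetaData, create_engine

    engine = create_engine('sqlite:///:memory:')  # the engine manages a connection pool internally
    bound_metadata = MetaData(bind=engine)        # "bound": schema operations default to this engine
    unbound_metadata = MetaData()                 # "unbound": pass an engine explicitly when needed

For the multiple-database setup below, unbound metadata is what you want, since no single engine should be the default.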
To create a standard schema and use it across multiple databases, create one metadata object and use it with several engines, one engine per database you connect to. Here's an example from the interactive IPython prompt. Note that each of these SQLite engines connects to a different in-memory database; connection1 and connection2 do not connect to the same database:
In [1]: from sqlalchemy import *
In [2]: metadata = MetaData()
In [3]: users_table = Table('users', metadata,
...: Column('id', Integer, primary_key=True),
...: Column('name', String))
In [4]: connection1 = create_engine('sqlite:///:memory:')
In [5]: connection2 = create_engine('sqlite:///:memory:')
In [6]: ## Create necessary tables
In [7]: metadata.create_all(bind=connection1)
In [8]: metadata.create_all(bind=connection2)
In [9]: ## Insert data
In [10]: connection1.execute(
   ...:     users_table.insert(values={'name': 'Mike'}))
In [11]: connection2.execute(
   ...:     users_table.insert(values={'name': 'Jim'}))
In [12]: print connection1.execute(users_table.select()).fetchall()
[(1, u'Mike')]
In [13]: print connection2.execute(users_table.select()).fetchall()
[(1, u'Jim')]
As you can see, I connect to two sqlite databases and executed statements on each using a common schema stored in my metedata object. If I were you, I'd start by just using the create_engine method and not worry about pooling. When it comes time to optimize, you can tweak how databases are connected to using arguments to create_engine.