So, my design calls for a separate SQLite file for each "project". I'm reading through the SQLAlchemy pooling docs more carefully; my guess right now is that I don't want to fool with pooling at all, but that what I really want is a separate engine for each project. Agree??

In that case, when I create the engine, I either connect to a file named by convention, or create a new SQLite file and supply a schema template... right?
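
Roughly, I'm picturing something like this sketch (engine_for_project is just my placeholder name, not an established API):

from sqlalchemy import create_engine, MetaData, Table, Column, Integer, String

metadata = MetaData()   # the shared schema "template"
Table('items', metadata,
      Column('id', Integer, primary_key=True),
      Column('name', String))

def engine_for_project(name):
    # file named by convention: one SQLite file per project
    engine = create_engine('sqlite:///%s.db' % name)
    # create_all only creates tables that don't already exist,
    # so this works for both new and existing project files
    metadata.create_all(bind=engine)
    return engine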

A: 

Ehm, what? Connection pools contain many connections to the same (database) server. It takes time to establish a new connection, so when there are many short-lived processes using the same database, it's handy to have a pool of already established connections. The processes can check out a connection, do their thing and return it, without having to wait for a new connection to open.

In any case, all connections go to the same database, given by the URI passed to create_engine.
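
For illustration, a minimal sketch of checking connections out of a pool (pool_size and max_overflow are options of the standard QueuePool; the file name is made up):

from sqlalchemy import create_engine
from sqlalchemy.pool import QueuePool

engine = create_engine('sqlite:///shared.db',
                       poolclass=QueuePool, pool_size=5, max_overflow=10)

conn = engine.connect()    # check a connection out of the pool
conn.execute("select 1")   # do some work
conn.close()               # return it to the pool rather than really closing it

The close() at the end is what makes pooling cheap: the underlying connection is kept around for the next checkout instead of being torn down.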

THC4k
+1  A: 

First, some vocabulary. SQLAlchemy defines schemas with MetaData objects, which contain objects representing tables and other database entities. MetaData objects can optionally be "bound" to engines, which are what you think of as "pools."

To create a standard schema and use it across multiple databases, you'll want to create one MetaData object and use it with several engines, each engine being a database you connect to. Here's an example, from the interactive IPython prompt. Note that each of these SQLite engines connects to a different in-memory database; connection1 and connection2 do not connect to the same database:

In [1]: from sqlalchemy import *
In [2]: metadata = MetaData()
In [3]: users_table = Table('users', metadata,
   ...:                     Column('id', Integer, primary_key=True),
   ...:                     Column('name', String))
In [4]: connection1 = create_engine('sqlite:///:memory:')
In [5]: connection2 = create_engine('sqlite:///:memory:')

In [6]: ## Create necessary tables
In [7]: metadata.create_all(bind=connection1)
In [8]: metadata.create_all(bind=connection2)

In [9]: ## Insert data
In [10]: connection1.execute(users_table.insert(values={'name': 'Mike'}))
In [11]: connection2.execute(users_table.insert(values={'name': 'Jim'}))

In [12]: print connection1.execute(users_table.select()).fetchall()
[(1, u'Mike')]

In [13]: print connection2.execute(users_table.select()).fetchall()
[(1, u'Jim')]

As you can see, I connected to two SQLite databases and executed statements against each, using a common schema stored in my metadata object. If I were you, I'd start by just using create_engine and not worry about pooling. When it comes time to optimize, you can tweak how connections are made using arguments to create_engine.
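
For instance, a rough sketch of that kind of tweaking (echo and poolclass are standard create_engine keyword arguments; the file name is made up):

from sqlalchemy import create_engine
from sqlalchemy.pool import NullPool

# echo logs the generated SQL; NullPool opens and closes a real
# connection on every checkout instead of keeping connections around
engine = create_engine('sqlite:///project.db', echo=True, poolclass=NullPool)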

Ken