I am currently playing around with SQLAlchemy a bit, which is really quite neat.

For testing I created a huge table containing my pictures archive, indexed by SHA1 hashes (to remove duplicates :-)), which was impressively fast...

For fun I did the equivalent of a select * over the resulting SQLite database:

session = Session()
for p in session.query(Picture):
    print(p)

I expected to see hashes scrolling by, but instead it just kept scanning the disk. At the same time, memory usage was skyrocketing, reaching 1GB after a few seconds. This seems to come from the identity map feature of SQLAlchemy, which I thought was only keeping weak references.

Can somebody explain this to me? I thought that each Picture p would be collected after the hash is written out!?

+3  A: 

Okay, I just found a way to do this myself. Changing the code to

session = Session()
for p in session.query(Picture).yield_per(5):
    print(p)

loads only 5 pictures at a time. It seems like the query loads all rows at once by default. However, I don't yet understand the disclaimer on that method. Quote from the SQLAlchemy docs:

WARNING: use this method with caution; if the same instance is present in more than one batch of rows, end-user changes to attributes will be overwritten. In particular, it’s usually impossible to use this setting with eagerly loaded collections (i.e. any lazy=False) since those collections will be cleared for a new load when encountered in a subsequent result batch.

So if using yield_per is actually the right way (tm) to scan over copious amounts of SQL data while using the ORM, when is it safe to use it?

Bluehorn
yield_per is safe when you have one SQL result row per resulting instance. You get extra rows per instance when you eagerload or join a one-to-many relation. If you need more details about the circumstances, you could create a separate question about yield_per.
Ants Aasma
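
To illustrate the criterion in the comment above, here is a minimal sketch; the tags one-to-many relationship is an assumption made up for the example and not part of the original schema (joinedload is the newer name for eagerload):

from sqlalchemy.orm import joinedload

# Safe: a plain entity query produces exactly one SQL row per Picture,
# so no instance can be split across two batches.
for p in session.query(Picture).yield_per(100):
    print(p)

# Risky: a joined eager load of a one-to-many collection (the hypothetical
# Picture.tags relationship) yields several rows per Picture, so one instance
# may straddle two batches -- exactly the case the warning describes.
for p in session.query(Picture).options(joinedload(Picture.tags)).yield_per(100):
    print(p, len(p.tags))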
+2  A: 

You can defer the picture column so that it is only retrieved on access. You can do this on a query-by-query basis, like

import sqlalchemy.orm

session = Session()
# defer("picture") keeps the blob out of the SELECT until the attribute is accessed
for p in session.query(Picture).options(sqlalchemy.orm.defer("picture")):
    print(p)

or you can do it in the mapper:

mapper(Picture, pictures, properties={
   'picture': deferred(pictures.c.picture)
})

How to do this is described in the documentation here.

Doing it either way will make sure that the picture is only loaded when you access the attribute.
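
If you are using the declarative extension instead of a classical mapper, a roughly equivalent sketch (column names other than the image blob are assumptions) would be:

from sqlalchemy import Column, Integer, String, LargeBinary
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import deferred

Base = declarative_base()

class Picture(Base):
    __tablename__ = 'pictures'

    id = Column(Integer, primary_key=True)
    sha1 = Column(String(40), unique=True)   # hex digest used to spot duplicates
    # deferred(): the blob stays out of the default SELECT and is loaded
    # lazily on first attribute access (LargeBinary was called Binary in
    # older releases)
    imagedata = deferred(Column(LargeBinary))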

David Raznick
Thanks David, that is quite interesting. In fact I just changed my code to use from sqlalchemy.orm import deferred ... class Picture(object): ... imagedata = deferred(Column(Binary)), which is a big improvement. However, it still takes a few seconds until the first result is output, as a few thousand Picture objects are created for the query. I would like to have those objects created one at a time while SQLAlchemy is iterating through the SQLite result rows.
Bluehorn
I think that most of the time will be spent retrieving the data from the database, not making the objects, so the query itself needs a limit on it, e.g. for p in session.query(Picture)[0:5]: print(p).
David Raznick
+1  A: 

Here's what I usually do for this situation:

def page_query(q):
    # Fetch the result in chunks of 1000 rows using LIMIT/OFFSET so that
    # only one chunk of ORM objects is held in memory at a time.
    offset = 0
    while True:
        r = False
        for elem in q.limit(1000).offset(offset):
            r = True
            yield elem
        if not r:
            break
        offset += 1000

for item in page_query(Session.query(Picture)):
    print(item)

This also avoids the various buffering that DBAPIs such as psycopg2 and MySQLdb do. It still needs to be used appropriately if your query has explicit JOINs, although eagerly loaded collections are guaranteed to load fully since they are applied to a subquery which has the actual LIMIT/OFFSET supplied.

I have noticed that PostgreSQL takes almost as long to return the last 100 rows of a large result set as it does to return the entire result (minus the actual row-fetching overhead), since OFFSET just does a simple scan of the whole thing.
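
If the repeated OFFSET scans themselves become the bottleneck, a common alternative is to page on an indexed, unique column instead of an offset; this is a sketch only, assuming Picture has an integer primary key named id:

def keyset_query(q, column, batch_size=1000):
    # Page on the key itself rather than OFFSET: each batch is a cheap
    # index range scan regardless of how far into the result we are.
    # Assumes `column` is unique and the query is not otherwise ordered.
    last = None
    while True:
        page = q.order_by(column)
        if last is not None:
            page = page.filter(column > last)
        rows = page.limit(batch_size).all()
        if not rows:
            break
        for row in rows:
            yield row
        last = getattr(rows[-1], column.key)

for item in keyset_query(Session.query(Picture), Picture.id):
    print(item)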

zzzeek