Due to "circumstances beyond my control"tm, I am using sqlalchemy with an MS Access backend. I'm having an issue that the following code:

def delete_imports(self, files_imported_uid):
    table_name = 'my_table'
    table = self.meta.tables[table_name]
    # Issue a single bulk DELETE for every row from this import.
    self.engine.execute(
        table.delete().where(table.c.files_imported_uid == files_imported_uid)
    )

throws a "File sharing lock count exceeded." error with large tables. The statement generated is just:

DELETE FROM my_table WHERE my_table.files_imported_uid = ?

with the uid as its parameter. The statement is executed via pyodbc. MSDN offers some suggestions for working around the problem, but then notes that they won't work if the database sits on a Novell NetWare server, which mine does.

Is there a known workaround I can use (preferably at the sqlalchemy layer), or do I need to write some ugly hack that selects the top 9,000 records at a time, deletes them, and loops until finished?
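For concreteness, a minimal sketch of that hack, dropping to raw Jet SQL since the TOP clause would have to be hand-written; the primary-key column id, the batch size, and pyodbc's rowcount reporting are all assumptions here:

from sqlalchemy import text

def delete_imports_batched(engine, files_imported_uid, batch_size=9000):
    # Hypothetical primary-key column 'id'. Jet accepts TOP inside a
    # subquery, so each pass removes at most batch_size matching rows.
    stmt = text(
        "DELETE FROM my_table WHERE id IN "
        "(SELECT TOP %d id FROM my_table "
        "WHERE files_imported_uid = :uid)" % batch_size
    )
    while True:
        result = engine.execute(stmt, uid=files_imported_uid)
        # pyodbc reports affected rows via rowcount; stop once a
        # pass deletes nothing.
        if result.rowcount == 0:
            break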

A: 

See the Microsoft KB article "PRB: Error '3050 Could Not Lock File' When You Connect to a Jet Database on a Novell Server".

Option 1 in that article requires admin access to HKLM. Option 2 is Access-specific, and I have no idea whether either approach would work from sqlalchemy.
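For reference, option 1 amounts to raising Jet's MaxLocksPerFile value (the Jet 4.0 default is 9,500) in the registry. A minimal sketch of that tweak from Python, assuming Jet 4.0 is the installed engine, and bearing in mind the question says it reportedly doesn't help when the .mdb sits on a Novell server:

import winreg  # _winreg on Python 2

# Raising MaxLocksPerFile needs admin rights because the value
# lives under HKEY_LOCAL_MACHINE.
KEY_PATH = r"SOFTWARE\Microsoft\Jet\4.0\Engines\Jet 4.0"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                    winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "MaxLocksPerFile", 0,
                      winreg.REG_DWORD, 200000)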

Tony Toews
A: 

Having looked into this, sqlalchemy can't render TOP on a DELETE (which rules out the simple option of ordering and deleting the top 9,000 rows per pass until the whole delete is finished), so I gave in and deleted each row individually, as sketched below. Not ideal, but given that:

1) This operation is only performed once a month
2) Deletes are pretty quick anyway
3) I'm the main user of the application...

it didn't seem worth coding a more efficient workaround.
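A minimal sketch of that row-by-row delete, reusing the table objects from the question and assuming a hypothetical single-column primary key named id:

from sqlalchemy import select

def delete_imports(self, files_imported_uid):
    table = self.meta.tables['my_table']
    # Fetch the key of every matching row first, then delete the
    # rows one statement at a time.
    rows = self.engine.execute(
        select([table.c.id])
        .where(table.c.files_imported_uid == files_imported_uid)
    ).fetchall()
    for (row_id,) in rows:
        self.engine.execute(table.delete().where(table.c.id == row_id))

Because each engine.execute call autocommits separately, Jet releases its locks between statements instead of accumulating them across one big DELETE, which is what keeps the lock count under the limit.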

mavnn