I can offer my experience with a similar problem, but on a different platform, namely J.
There was some slowness, and I pinpointed it to the sqlite3_get_table function. This function hands back a pointer to an array of pointers, one per cell, each pointing to a null-terminated string. A pointer can also be null when the value itself is null (say, a MAX over an empty dataset: you get a null pointer, not a pointer to an empty string. I hate that.) J then turned those addresses into something readable: it formed a large matrix of addresses, each paired with a 0 for the offset and a _1 for the length (meaning read up to the first null byte), cycled through them, and finally reshaped the result into its intended columns and rows.
So there is a memory-transfer aspect as well as an actual reading aspect to fetching data from SQLite into another platform. I found that this often-large dataset is not easily handled by J: it is clunky because everything arrives as strings. Plus there is that nasty null-pointer issue.
I was able to limit matrix modifications enough to optimise the function. The final optimisation was to use the primitive for reading a memory address (15!:1) rather than the decently named cover function (memr), because using memr meant J had to interpret what memr means on every memory read.
In conclusion, if Python allows this kind of tweaking, maybe you can adjust the database access to better suit your needs. I hope this helps, but I don't have very high hopes...