I have a Django view which creates 500-5000 new database INSERTs in a loop. The problem is that it's really slow! I'm getting about 100 inserts per minute on Postgres 8.3. We used to run MySQL on lesser hardware (a smaller EC2 instance) and never had these kinds of speed issues.

Details: Postgres 8.3 on Ubuntu Server 9.04. Server is a "large" Amazon EC2 with database on EBS (ext3) - 11GB/20GB.

Here is some of my postgresql.conf -- let me know if you need more

shared_buffers = 4000MB
effective_cache_size = 7128MB

My python:

for k in kw:
    k = k.lower()
    p = ProfileKeyword(profile=self)
    logging.debug(k)
    p.keyword, created = Keyword.objects.get_or_create(keyword=k, defaults={'keyword': k})
    if not created and ProfileKeyword.objects.filter(profile=self, keyword=p.keyword).count():
        # checking created is just a small optimization to save some database hits on new keywords
        pass  # duplicate entry
    else:
        p.save()

Some output from top:

top - 16:56:22 up 21 days, 20:55,  4 users,  load average: 0.99, 1.01, 0.94
Tasks:  68 total,   1 running,  67 sleeping,   0 stopped,   0 zombie
Cpu(s):  5.8%us,  0.2%sy,  0.0%ni, 90.5%id,  0.7%wa,  0.0%hi,  0.0%si,  2.8%st
Mem:  15736360k total, 12527788k used,  3208572k free,   332188k buffers
Swap:        0k total,        0k used,        0k free, 11322048k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND                                                                                                                                            
14767 postgres  25   0 4164m 117m 114m S   22  0.8   2:52.00 postgres                                                                                                                                            
    1 root      20   0  4024  700  592 S    0  0.0   0:01.09 init                                                                                                                                                
    2 root      RT   0     0    0    0 S    0  0.0   0:11.76 migration/0                                                                                                                                         
    3 root      34  19     0    0    0 S    0  0.0   0:00.00 ksoftirqd/0                                                                                                                                         
    4 root      RT   0     0    0    0 S    0  0.0   0:00.00 watchdog/0                                                                                                                                          
    5 root      10  -5     0    0    0 S    0  0.0   0:00.08 events/0                                                                                                                                            
    6 root      11  -5     0    0    0 S    0  0.0   0:00.00 khelper                                                                                                                                             
    7 root      10  -5     0    0    0 S    0  0.0   0:00.00 kthread                                                                                                                                             
    9 root      10  -5     0    0    0 S    0  0.0   0:00.00 xenwatch                                                                                                                                            
   10 root      10  -5     0    0    0 S    0  0.0   0:00.00 xenbus                                                                                                                                              
   18 root      RT  -5     0    0    0 S    0  0.0   0:11.84 migration/1                                                                                                                                         
   19 root      34  19     0    0    0 S    0  0.0   0:00.01 ksoftirqd/1

Let me know if any other details would be helpful.

+1  A: 

What is the storage structure of the table you are inserting into? Any secondary indexes?

+1 - try dropping your indexes, inserting rows, then re-adding indexes.
nilamo
@cnemelkasr: nope, just the default indexes added by Django. Probably a few ForeignKey indexes as well...
erikcw
+5  A: 

One common reason bulk operations like this are slow is that each insert happens in its own transaction. If you can get all of them to happen in a single transaction, it could go much faster.

Ned Batchelder
+1 My guess too. @erikcw: try managing your transactions per http://docs.djangoproject.com/en/dev/topics/db/transactions/
Van Gale
This is almost guaranteed to be the answer. Committing each insert in its own transaction is necessarily slower than doing all the inserts within a single transaction.
wlashell
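The effect of grouping inserts into one transaction can be illustrated outside Django with Python's built-in sqlite3 module (a sketch only; the table and data are made up, and on a real Postgres disk each per-row commit additionally costs an fsync):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE keywords (keyword TEXT)")

rows = [("kw%d" % i,) for i in range(5000)]

# One transaction for the whole batch: a single commit at the end,
# instead of one commit per row.
with conn:  # the connection context manager commits once on success
    conn.executemany("INSERT INTO keywords (keyword) VALUES (?)", rows)

count = conn.execute("SELECT COUNT(*) FROM keywords").fetchone()[0]
print(count)  # 5000
```

On the Django side, the same grouping is done with the decorators described in the linked transaction docs (`commit_on_success`/`commit_manually` in Django of this era; `transaction.atomic` in 1.6 and later).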
+2  A: 

Firstly, ORM operations are always going to be slower than pure SQL. I once wrote an update to a large database in ORM code and set it running, but quit it after several hours when it had completed only a tiny fraction. After rewriting it in SQL the whole thing ran in less than a minute.

Secondly, bear in mind that your code here is doing up to four separate database operations for every row in your data set - the get in get_or_create, possibly also the create, the count on the filter, and finally the save. That's a lot of database access.

Since a maximum of 5000 objects is not huge, you should be able to read the whole dataset into memory at the start. Then a single filter can fetch all the existing Keyword objects in one go, saving a huge number of queries compared to get_or_create per row, and also avoiding the need to instantiate duplicate ProfileKeywords in the first place.

Daniel Roseman
+1 for pointing out that reading everything in at once is more efficient if you can, and for not doing an extra filter EVERY time.
Peter Recore
@Daniel Roseman: Can you expand a bit on "Then you can do a single filter to get all the existing Keyword objects..."? I actually have several million Keyword objects already in the database, so I don't want to read them all into memory. I'm inserting 5000 new objects at a time....
erikcw
I ended up using Keyword.objects.filter(keyword__contains=...) to read into memory the related keyword objects. The script now runs in less than 10 seconds!! I also added transaction savepoints....
erikcw
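The set arithmetic behind the accepted approach can be sketched with plain Python containers. All names and sample data here are hypothetical; in Django, each of the two "existing" sets would come from a single query scoped to the batch, e.g. `set(Keyword.objects.filter(keyword__in=wanted).values_list('keyword', flat=True))`, so the millions of other Keyword rows are never loaded:

```python
kw = ["django", "Postgres", "ec2", "django"]          # incoming batch (may repeat)
existing_keywords = {"postgres", "django", "linux"}   # one query, scoped to the batch
existing_profile_keywords = {"postgres"}              # one query, scoped to this profile

wanted = {k.lower() for k in kw}      # normalise once; the set dedupes for free

to_create = wanted - existing_keywords          # Keyword rows that must be INSERTed
to_link = wanted - existing_profile_keywords    # ProfileKeyword rows to save

print(sorted(to_create))  # ['ec2']
print(sorted(to_link))    # ['django', 'ec2']
```

Two queries and some set differences replace the four potential queries per row in the original loop; the remaining saves can then be wrapped in one transaction as suggested above.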