views: 295
answers: 4
I need to populate my database with a bunch of dummy entries (around 200+) so that I can test the admin interface I've made, and I was wondering if there's a better way to do it. I spent the better part of yesterday trying to fill it in by hand (i.e. by wrapping calls like my_model(title="asdfasdf", field2="laksdj"...) in a bunch of "for x in range(0, 200):" loops) and gave up because it didn't work the way I expected it to. I think this is what I need to use, but don't you need to have (existing) data in the database already for this to work?

+3  A: 

You can use fixtures for this purpose, and the loaddata management command.

One approach is to do it like this.

  1. Prepare your test database.

  2. Use dumpdata to create JSON export of the database.

  3. Put this in the fixtures directory of your application.

  4. Write your unit tests to load this "fixture".
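A dumped fixture is just a JSON list of serialized objects. A minimal sketch of what step 2 produces (the model and field names here are hypothetical):

```json
[
  {
    "model": "myapp.mymodel",
    "pk": 1,
    "fields": {
      "title": "asdfasdf",
      "field2": "laksdj"
    }
  }
]
```

Saved as, say, myapp/fixtures/test_data.json, it can then be loaded with `python manage.py loaddata test_data`.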

Matthew Schinckel
I just had to add some details.
S.Lott
A: 

I'm not sure why you require any serialization. As long as you have setup your Django settings.py file to point to your test database, populating a test database should be nothing more than saving models.

for x in range(0, 200):
    m = my_model(title=random_title(), field2=random_string(), ...)
    m.save()

There are better ways to do this, but if you want a quick test set, this is the way to go.
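The random_title() and random_string() helpers above aren't defined anywhere; here is one possible sketch using only the standard library (the names come from the answer, the implementations are assumptions):

```python
import random
import string

def random_string(length=10):
    # Build a random lowercase ASCII string of the given length.
    return ''.join(random.choice(string.ascii_lowercase) for _ in range(length))

def random_title(words=3):
    # Join a few random "words" and capitalize them like a title.
    return ' '.join(
        random_string(random.randint(3, 8)) for _ in range(words)
    ).title()
```

Anything that produces plausible-looking strings will do here; the point is just to make the 200 rows distinguishable in the admin.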

BrainCore
And you are running in the context of the shell, right? `python manage.py shell`
hughdbrown
+1  A: 

Django fixtures provide a mechanism for importing data on syncdb. However, this initial data population is often easier to do in Python code. The technique you outline should work, either via syncdb or via a management command. For instance, via syncdb, in myapp/management.py:

from django.db.models import signals

from myapp.models import MyModel

def init_data(sender, **kwargs):
    for i in range(1000):
        MyModel(number=i).save()

signals.post_syncdb.connect(init_data)

Or, in a management command in myapp/management/commands/my_command.py (run with `python manage.py my_command`):

from django.core.management.base import BaseCommand, CommandError

from myapp.models import MyModel

class Command(BaseCommand):
    # Django looks for a class named exactly "Command" in this module.
    def handle(self, *args, **options):
        if len(args) > 0:
            raise CommandError('need exactly zero arguments')

        for i in range(1000):
            MyModel(number=i).save()

You can then export this data to a fixture, or keep importing it via the management command. If you stick with the syncdb signal, you'll want to run init_data conditionally, so the data isn't imported again on subsequent syncdb calls. When a fixture isn't sufficient, I personally like to do both: create a management command to import the data, but have the first syncdb invocation do the import automatically. That way deployment is more automated, but I can still easily modify the initial data and re-run the import.
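The "run conditionally" guard can be sketched without Django; in this toy version a plain list stands in for the database table, where real code would instead check something like `MyModel.objects.exists()` before importing:

```python
def init_data(store):
    # Skip the import if data is already present, so repeated
    # syncdb runs don't create duplicate rows.
    if store:
        return 0
    for i in range(1000):
        store.append({'number': i})
    return len(store)

rows = []
first_run = init_data(rows)   # populates 1000 rows
second_run = init_data(rows)  # no-op: data is already there
```

The same shape works inside the post_syncdb handler: check for existing rows first, return early if any are found.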

Daniel
+3  A: 

Check out this app:

http://code.google.com/p/django-dilla/

Let's say you wrote your blog application (oh yeah, your favorite!) in Django. Unit tests went fine, and everything runs extremely fast, even those ORM-generated ultra-long queries. You've added several categorized posts and it's still as stable as a rock. You're quite sure the app is efficient and ready for live deployment. Right? Wrong.

Eeyore
This is exactly the sort of thing I was looking for. Thanks a bunch!
cornjuliox