I tried to do something like
ss = Screenshot(key=db.Key.from_path('myapp_screenshot', 123), name='flowers')
db.put([ss, ...])
It seems to work on my dev_appserver, but on live I get this traceback:
E 05-07 09:50PM 19.964 File "/base/data/home/apps/quixeydev3/12.341796548761906563/common/appenginepatch/appenginepatcher/patch.py", line 600, in put
E 05-07 09:50PM 19.964   result = old_db_put(models, *args, **kwargs)
E 05-07 09:50PM 19.964 File "/base/python_runtime/python_lib/versions/1/google/appengine/ext/db/__init__.py", line 1278, in put
E 05-07 09:50PM 19.964   keys = datastore.Put(entities, rpc=rpc)
E 05-07 09:50PM 19.964 File "/base/python_runtime/python_lib/versions/1/google/appengine/api/datastore.py", line 284, in Put
E 05-07 09:50PM 19.965   raise _ToDatastoreError(err)
E 05-07 09:50PM 19.965 InternalError: the new entity or index you tried to insert already exists
I happen to know only the numeric ID of an existing Screenshot entity I want to update; that's why I was constructing its key manually. Am I doing it wrong?
Update: I filed this as Google App Engine issue 3209.
Update 2: It looks like the GAE bug is actually extremely minor. It seems to be due to the fact that I called something like db.put([ss, ss2, ss]), i.e. the call fails when the list references the same model instance twice.
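Given that, an obvious workaround is to drop duplicate references before handing the list to db.put(). This helper is my own sketch, not part of the App Engine API; it dedupes by object identity, which matches the failure mode (the same instance appearing twice):

```python
def dedupe_entities(entities):
    """Return entities with repeated references to the same object removed,
    preserving order, so db.put() never writes the same entity twice in one RPC."""
    seen = set()
    unique = []
    for entity in entities:
        if id(entity) not in seen:
            seen.add(id(entity))
            unique.append(entity)
    return unique

# Usage: db.put(dedupe_entities([ss, ss2, ss]))  # ss is written only once
```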
Update 3: OK, I think I finally know what's going on here, and I'm updating this question because right now it's the only Google result for "the new entity or index you tried to insert already exists".
This InternalError seems to arise when the Datastore attempts two writes of a Bigtable row for the same entity key within the same RPC. That can happen if you manually give two entities the same key, or if you put() two new entities without specifying keys and the same ID gets allocated to both in parallel. In the latter case, the fix is to reserve IDs up front with db.allocate_ids().
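To illustrate the allocate_ids() approach, here is a sketch (it needs the App Engine runtime to actually run, and I'm using a plain Screenshot model with the default kind name rather than the 'myapp_screenshot' kind appenginepatch gave me above):

```python
from google.appengine.ext import db

class Screenshot(db.Model):
    name = db.StringProperty()

# Reserve a contiguous block of two IDs for the Screenshot kind up front,
# so parallel allocation can never hand the same ID to two new entities.
first, last = db.allocate_ids(db.Key.from_path('Screenshot', 1), 2)

# Assign the reserved IDs explicitly before putting both entities in one call.
ss1 = Screenshot(key=db.Key.from_path('Screenshot', first), name='flowers')
ss2 = Screenshot(key=db.Key.from_path('Screenshot', first + 1), name='trees')
db.put([ss1, ss2])
```

Since the IDs come out of a range the Datastore has promised never to auto-allocate, the two writes can no longer collide.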