I'm working on an application backed by Core Data. Right now, I save the managed object context whenever I add an entity to the context or delete one from it. I'm afraid this will hurt performance, so I was thinking of delaying the save. In fact, I could delay it all the way until the application terminates. Is it too risky to save the data only when the application is about to close? How often should I call save on the object context?

I was thinking of having a separate thread handle the save: it would wait on a semaphore. Every time any part of the application calls a helper/util method to save the Core Data store, it would decrement the semaphore. When the semaphore reaches zero, the "save thread" would perform a single save, reset the semaphore to, say, 5, and then sleep again.
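
Roughly what I have in mind, sketched below. The class, the thread setup, and `requestSave` are placeholders of mine, not Core Data API; I've also inverted the semaphore direction (callers signal, the thread waits `batchSize` times), but the effect is the same: one save per five requests.

```swift
import CoreData

// Placeholder sketch of the coalescing "save thread" idea: perform one
// save per `batchSize` requested saves. Nothing here is Core Data API
// beyond NSManagedObjectContext itself.
final class CoalescingSaver {
    private let context: NSManagedObjectContext
    private let batchSize = 5                          // one save per 5 requests
    private let pending = DispatchSemaphore(value: 0)

    init(context: NSManagedObjectContext) {
        self.context = context
        // Dedicated background thread: block until `batchSize` save
        // requests have arrived, then save once.
        Thread.detachNewThread {
            while true {
                for _ in 0..<self.batchSize { self.pending.wait() }
                self.context.performAndWait {
                    if self.context.hasChanges { try? self.context.save() }
                }
            }
        }
    }

    // Other parts of the app call this instead of saving directly.
    func requestSave() { pending.signal() }
}
```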

Any good recommendation? Thanks!

A: 

The best way, I think, is to save after every object. If something ever happens, such as a sudden crash, nothing will be lost.

One performance enhancement, if you are adding a lot of objects, is to batch: add all the objects to the context, then save. This is good, for example, if you are adding a lot of objects in a loop; see the sketch below. Your idea is similar, but there could be a long time between saves, during which the program could crash.
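
A minimal sketch of that pattern (the `Item` entity and its `name` attribute are hypothetical):

```swift
import CoreData

// Insert the whole batch, then save once at the end,
// rather than saving inside the loop on every iteration.
func importItems(_ names: [String], into context: NSManagedObjectContext) throws {
    for name in names {
        let item = NSEntityDescription.insertNewObject(forEntityName: "Item",
                                                       into: context)
        item.setValue(name, forKey: "name")
    }
    try context.save()   // one save for the whole batch
}
```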

I don't think saving after adding a single object would be that much of a performance problem. How big are your objects? Do they contain a lot of data?

UK-AL
NEVER save after every crash. That kills performance dead.
Marcus S. Zarra
Save after every crash? I think that's a typo. Surely if the user is dictating the saves by creating objects, the user could not create objects fast enough to slow down the application. If you think it slows the application down enough to justify the risk of not saving often enough, please provide evidence; all my benchmarks have been acceptable.
UK-AL
A: 

You should save frequently. The actual performance of the save operation has a lot to do with which persistent store type you're using. Since binary and XML stores are atomic, they need to be completely rewritten to disk on every save. As your object graph grows, this can really slow down your application. The SQLite store, on the other hand, is much easier to write to incrementally. So, while there will be some stuff that gets written above and beyond the objects you're saving, the overhead is much lower than with the atomic store types. Saves affecting only a few objects will always be fast, regardless of overall object graph size.
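
To make the distinction concrete, here is a sketch of where the store type gets chosen (the model lookup and file path are placeholders):

```swift
import CoreData

// Sketch: the cost of a save depends on the store type chosen here.
func makeCoordinator() throws -> NSPersistentStoreCoordinator {
    let model = NSManagedObjectModel.mergedModel(from: nil)!   // placeholder
    let psc = NSPersistentStoreCoordinator(managedObjectModel: model)
    let url = URL(fileURLWithPath: "/tmp/Example.sqlite")      // placeholder
    // SQLite store: writes incrementally, so small saves stay fast
    // even as the object graph grows.
    _ = try psc.addPersistentStore(ofType: NSSQLiteStoreType,
                                   configurationName: nil,
                                   at: url,
                                   options: nil)
    // The atomic store types (XML, binary) instead rewrite the whole
    // file on every save.
    return psc
}
```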

That said, if you're importing data in a loop, say, I would wait until the end of the complete operation to save rather than saving on each iteration. Your primary goal should be to prevent data loss. (I have found that users don't care for that very much!) Performance should be a close second. You may have to do some work to balance the frequency of saving against performance, but the solution you outline above seems like overkill unless you've identified a specific and significant performance issue.
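
If you do need a middle ground for a long import, one common pattern is to save every N objects; the chunk size, the `Item` entity, and its `name` attribute below are all illustrative:

```swift
import CoreData

// Balance data-loss risk against save overhead: checkpoint the
// import every `chunkSize` objects instead of on every iteration.
func importInChunks(_ names: [String],
                    into context: NSManagedObjectContext,
                    chunkSize: Int = 500) throws {
    for (i, name) in names.enumerated() {
        let item = NSEntityDescription.insertNewObject(forEntityName: "Item",
                                                       into: context)
        item.setValue(name, forKey: "name")
        if (i + 1) % chunkSize == 0 {
            try context.save()     // periodic checkpoint
            context.reset()        // optional: keeps memory use flat
        }
    }
    try context.save()             // final save for the remainder
}
```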

Alex
Thanks for the quick answer. My objects are not huge, but they are a bit complicated. The underlying store is SQLite, so as you said, it should be pretty okay. There's an additional issue I encountered recently: I create an entity object in the context and save it. If I then delete that entity a moment later (retrieved using a FetchedResultsController), I get an error that has something to do with internal consistency. I think it is because the context hasn't updated the object graph in memory?
Justin
I would add to this that you want to make your actual save frequency a variable or a `#define` so that once you are in testing you can fine-tune it and watch the results in Instruments.
Marcus S. Zarra
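
A minimal sketch of that tuning knob, in Swift rather than a `#define` (the constant name, its value, and the helper class are placeholders):

```swift
import CoreData

// Tunable save frequency: adjust this single constant while
// profiling in Instruments. Name and value are placeholders.
let saveEveryNChanges = 25

final class SaveThrottle {
    private let context: NSManagedObjectContext
    private var unsavedChanges = 0

    init(context: NSManagedObjectContext) { self.context = context }

    // Call after each insert/delete; saves once per saveEveryNChanges.
    func noteChange() throws {
        unsavedChanges += 1
        if unsavedChanges >= saveEveryNChanges {
            try context.save()
            unsavedChanges = 0
        }
    }
}
```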