In addition to, or instead of, threading, you could restrict your fetch to a small batch size, say 10 objects: fetch 10 objects, display them, then fetch the next 10, and so on. This keeps the interface responsive, and the user understands they are watching a progressive process.
Use -[NSFetchRequest setFetchLimit:] to restrict the number of objects returned per fetch and -[NSFetchRequest setFetchOffset:] to index the subsequent fetches.
From the Apple Docs on fetchOffset:
The default value is 0. This setting allows you to specify an offset at which rows will begin being returned. Effectively, the request will skip over the specified number of matching entries. For example, given a fetch which would normally return a, b, c, d, specifying an offset of 1 will return b, c, d, and an offset of 4 will return an empty array. Offsets are ignored in nested requests such as subqueries.

This can be used to restrict the working set of data. In combination with -fetchLimit, you can create a subrange of an arbitrary result set.
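A minimal sketch of that batching approach, assuming ARC, an existing NSManagedObjectContext, and an entity named "Person" with a "name" attribute (those names are placeholders for your own model, not anything from the question):

```objc
#import <CoreData/CoreData.h>

// Fetch one "page" of results at a time using fetchLimit + fetchOffset.
- (NSArray *)fetchBatchWithOffset:(NSUInteger)offset
                            limit:(NSUInteger)limit
                        inContext:(NSManagedObjectContext *)context
{
    NSFetchRequest *request = [[NSFetchRequest alloc] init];
    [request setEntity:[NSEntityDescription entityForName:@"Person"
                                   inManagedObjectContext:context]];

    // A stable sort order is needed for fetchOffset to page predictably.
    NSSortDescriptor *byName = [[NSSortDescriptor alloc] initWithKey:@"name"
                                                           ascending:YES];
    [request setSortDescriptors:[NSArray arrayWithObject:byName]];

    [request setFetchLimit:limit];    // e.g. 10 objects per batch
    [request setFetchOffset:offset];  // skip the objects already displayed

    NSError *error = nil;
    return [context executeFetchRequest:request error:&error];
}
```

You would call this with an offset of 0, display the results, then call it again with an offset of 10, 20, and so on until it returns an empty array.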
You might also want to look at your Core Data object graph design. 250 objects isn't a lot, and there shouldn't be a significant performance hit processing that many. You may have too much data crammed into one entity, so you have to fault in a lot of unneeded data to get some relatively trivial information.
For example, a common mistake is to add an attribute holding a great deal of data, such as an image, to a commonly accessed entity, such as a Person entity. This causes problems because to get the Person.name attribute, you also have to load in an image of hundreds of kilobytes.
A better design is to park large attributes in their own entities and link them to other entities via relationships. That way, the large data chunk is faulted in only when you explicitly traverse the relationship. In the example above, you would put the image in its own entity; then, when you want Person.name, you need only fault in the lightweight text.
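As a rough illustration of that split, here is what the two-entity layout might look like if you built the model in code. The entity, attribute, and relationship names (Person, ImageData, name, data, image, owner) are made up for the example; in practice you would usually define the same structure in Xcode's model editor:

```objc
#import <CoreData/CoreData.h>

// Two entities: Person holds the lightweight name, ImageData holds the blob.
NSEntityDescription *person = [[NSEntityDescription alloc] init];
[person setName:@"Person"];

NSEntityDescription *imageData = [[NSEntityDescription alloc] init];
[imageData setName:@"ImageData"];

NSAttributeDescription *name = [[NSAttributeDescription alloc] init];
[name setName:@"name"];
[name setAttributeType:NSStringAttributeType];

NSAttributeDescription *data = [[NSAttributeDescription alloc] init];
[data setName:@"data"];
[data setAttributeType:NSBinaryDataAttributeType];

// Person.image <--> ImageData.owner, both to-one.
NSRelationshipDescription *image = [[NSRelationshipDescription alloc] init];
[image setName:@"image"];
[image setDestinationEntity:imageData];
[image setMaxCount:1];

NSRelationshipDescription *owner = [[NSRelationshipDescription alloc] init];
[owner setName:@"owner"];
[owner setDestinationEntity:person];
[owner setMaxCount:1];

[image setInverseRelationship:owner];
[owner setInverseRelationship:image];

[person setProperties:[NSArray arrayWithObjects:name, image, nil]];
[imageData setProperties:[NSArray arrayWithObjects:data, owner, nil]];

NSManagedObjectModel *model = [[NSManagedObjectModel alloc] init];
[model setEntities:[NSArray arrayWithObjects:person, imageData, nil]];

// Fetching a Person now faults in only "name"; the image bytes stay on disk
// until person.image.data is actually accessed.
```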