views: 302
answers: 10
I have the following problem: in my dictionary iPhone app I need to save an array of strings which contains about 125,000 distinct words; this amounts to approx. 3.2 MB of data. The first time the app runs, I get this data from an SQLite db. As that query takes ages, I need to save the data somehow so it can be read faster on each subsequent launch. So far I've tried serializing the array and writing it to a file, and afterwards I tested writing directly to NSUserDefaults to see if there's any speed gain, but there is none. Either way it takes about 7 seconds on the device to load the data. It turns out that it's not reading the file (or NSUserDefaults) that takes all that time, but the deserialization:

     objectsForCharacters = [[NSKeyedUnarchiver unarchiveObjectWithData:data] retain];

Do you have any ideas on how I could store this data structure so that I can read it into memory faster?

A: 

Do you need to store/load all data at once?
Maybe you can just load the chunk of strings you need to display and load all other strings in the background.

caahab
No, that's not a solution, because I implement a binary-search-based search-as-you-type mechanism over that data structure and I need it to be fully functional from the start.
sshadoww
A: 

Perhaps you can load the data into memory in one thread and search it from another? You may not get search results instantly, but having some searches feel snappy may be better than having none at all while waiting for all the data to load.

Alex Reynolds
I'm already doing things asynchronous. Thanks anyway.
sshadoww
+1  A: 

Just keep the strings in a file on the disk, and do the binary search directly in the file.

So: you say the file is 3.2 MB. Suppose the format of the file is like this: key DELIMITER value PAIRDELIMITER

where key is a string, and value is the value you want to associate with it. The DELIMITER and PAIRDELIMITER must be chosen such that they don't occur in the values and keys. Furthermore, the file must be sorted on the key.

With this file you can do the binary search in the file itself. Suppose one types a letter: you go to the middle of the file and scan (forwards or backwards) to the first PAIRDELIMITER. Then check the key and see if you have to search upwards or downwards, and repeat until you find the key you need.
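As a minimal sketch of this idea (assuming `'\t'` as DELIMITER, `'\n'` as PAIRDELIMITER, and keys/values under 64 bytes; this is my illustration, not the answerer's actual code), the in-file binary search could look like this in C:

```c
#include <stdio.h>
#include <string.h>

/* Binary search directly in a file of sorted "key\tvalue\n" lines.
 * Returns 1 and copies the value into `out` on a hit, 0 otherwise. */
int file_lookup(const char *path, const char *wanted, char *out, size_t outsz)
{
    FILE *f = fopen(path, "rb");
    if (!f) return 0;
    fseek(f, 0, SEEK_END);
    long lo = 0, hi = ftell(f);          /* lo is always a line boundary */

    while (lo < hi) {
        long mid = lo + (hi - lo) / 2;
        long start = mid;
        /* back up from mid to the start of the current line */
        while (start > lo) {
            fseek(f, start - 1, SEEK_SET);
            if (fgetc(f) == '\n') break;
            start--;
        }
        char key[64], val[64];
        fseek(f, start, SEEK_SET);
        if (fscanf(f, "%63[^\t]\t%63[^\n]", key, val) != 2) break;

        int cmp = strcmp(wanted, key);
        if (cmp == 0) { snprintf(out, outsz, "%s", val); fclose(f); return 1; }
        if (cmp < 0)
            hi = start;                           /* search the lower half */
        else
            lo = start + (long)strlen(key) + (long)strlen(val) + 2; /* skip line */
    }
    fclose(f);
    return 0;
}
```

Each probe costs one seek plus a short backwards scan to the line start, so a lookup over the whole 3.2 MB file stays at O(log n) record reads.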

I'm betting this will be fast enough.

Toad
Can't do. It's mandatory for me to load all the data in memory, as I have to display it in a UITableView. (Since I haven't typed any letters yet, the result set is all the words in the dictionary.) That would've worked for a binary-search-based algorithm where you return a single result.
sshadoww
A: 

Are some words searched more frequently or repeatedly than others? Perhaps you can cache frequently searched terms in a separate database or other store. Load it in a separate thread as a searchable store, while you are loading the main store.

As for a data structure solution, you might look into a suffix trie to search for substrings in linear time. This will probably increase your storage requirements, though, which may affect your ability to implement this with an iPhone's limited memory and disk storage capabilities.

Alex Reynolds
Searching is not the problem. I'm searching in log(n) so I believe I beat any linear-search's ass.
sshadoww
+2  A: 

The UITableView is not really designed to handle tens of thousands of records. It would take a long time for a user to find what they want.

It would be better to load a portion of the table, perhaps a few hundred rows, as the user enters data, so that it appears they have all the records available to them (perhaps providing a label which shows the number of records left in their filtered view).

The SQLite db should be perfect for this job. Add an index to the words table and then select a limited number of rows from it to show the user some progress. Adding an index makes a big difference to the performance of even this simple table.

For example, I created two tables in an SQLite db and populated them with around 80,000 words

#Create and populate the indexed table
create table words(word);    
.import dictionary.txt words
create unique index words_index on words (word DESC);

#Create and populate the unindexed table
create table unindexed_words(word);
.import dictionary.txt unindexed_words

Then I ran the following queries and got the CPU time taken for each

.timer ON
select * from words where word like 'sn%' limit 5000;
...
>CPU Time: user 0.031250 sys 0.015625;

select * from unindexed_words where word like 'sn%' limit 5000;
...
>CPU Time: user 0.062500 sys 0.0312

The results vary, but the indexed version was consistently faster than the unindexed one.

With fast access to parts of the dictionary through an indexed table, you can bind the UITableView to the database using NSFetchedResultsController. This class takes care of fetching records as required, caches results to improve performance, and allows predicates to be easily specified.

An example of how to use the NSFetchedResultsController is included in the iPhone Developers Cookbook. See main.m

Robert Christie
`UITableView` only loads cells on demand.
Alex Reynolds
Has anyone benchmarked it? I'm curious to see what the real-world performance is really like. I'd imagine that the filtering would take some time, applying predicates on a large set, as opposed to scrolling to a nearby row.
Alex Reynolds
I'm running a UITableView with 130k rows and navigating through it works just fine.
sshadoww
Thanks for the feedback. I've added some notes on NSFetchedResultsController, which takes care of efficiently loading from an SQLite DB into a UITableView.
Robert Christie
A: 

I really don't think you're on the right path trying to load everything at once.

  • You've already determined that your bottleneck is the deserialization.

  • Regardless what the UI does, the user only sees a handful (literally) of search results at a time.

  • SQLite already has a robust indexing mechanism; there is likely no need to re-invent that wheel with your own indexing, etc.

IMHO, you need to rethink how you are using UITableView. It only needs a few screenfuls of data at a time, and you should reuse cell objects as they scroll out of view rather than creating a ton of them to begin with.

So, use SQLite's indexing and grab "TOP x" rows, where x is the right balance between giving the user some immediately-available rows to scroll through and not spending too much time loading them. Set the table's scroll bar scaling using a separate SELECT COUNT(*) query, which only needs to be updated when the user types something different.

You can always go back and cache aggressively after you deserialize enough to get something up on-screen. A slight lag after the first flick or typing a letter is more acceptable than a 7-second delay just starting the app.

richardtallent
OK, I understand your point of view, but that still isn't the solution to my problem. You CANNOT build a fast enough search-as-you-type mechanism based on SQL queries in an embedded environment such as this for this quantity of data (my language has 130k DISTINCT words), as you can try for yourself. I managed to implement this search-as-you-type algorithm with a divide-and-conquer approach that only does log(n) operations to give me the interval of indexes between which the words match my current search criteria. That works, and it works VERY FAST.
sshadoww
Following this premise, I need a mechanism to SAVE/LOAD the data structure that I run my algorithm on.
sshadoww
My advice regarding loading still applies, regardless of the data store: load the first few screenfuls for each letter of the alphabet, and display those for the user while you load the remainder in the background. That will get you past the first letter. In fact, if you're very clever about it, you could read the data for the first few screens of each two-letter combination first (AA, AB, AC, etc.), which will get you through two letters before your background load has to catch up.
richardtallent
A: 

I currently have a somewhat similar coding problem with a large amount of searchable strings. My solution is to store the prepared data in one large memory array, containing both the textual data and offsets as links. Meaning: I do not allocate objects for each item. This makes the data use less memory and also allows me to load & save it to a file without further processing.
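A hypothetical C sketch of that layout (names and the on-disk header format are my own invention, not the answerer's actual code): all strings sit back to back in one buffer with an offset table in front, so saving and loading is a couple of bulk fwrite/fread calls with no per-object allocation or deserialization.

```c
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

/* One large memory block: every word lives contiguously in `data`,
 * addressed through an offset table, so there is no per-string object. */
typedef struct {
    uint32_t  count;      /* number of words                       */
    uint32_t  data_size;  /* total size of the string buffer       */
    uint32_t *offsets;    /* offsets[i] = start of word i in data  */
    char     *data;       /* all words, NUL-terminated, contiguous */
} WordPack;

int pack_save(const WordPack *p, const char *path)
{
    FILE *f = fopen(path, "wb");
    if (!f) return 0;
    int ok = fwrite(&p->count, sizeof p->count, 1, f) == 1
          && fwrite(&p->data_size, sizeof p->data_size, 1, f) == 1
          && fwrite(p->offsets, sizeof *p->offsets, p->count, f) == p->count
          && fwrite(p->data, 1, p->data_size, f) == p->data_size;
    fclose(f);
    return ok;
}

int pack_load(WordPack *p, const char *path)
{
    FILE *f = fopen(path, "rb");
    if (!f) return 0;
    if (fread(&p->count, sizeof p->count, 1, f) != 1 ||
        fread(&p->data_size, sizeof p->data_size, 1, f) != 1) {
        fclose(f);
        return 0;
    }
    p->offsets = malloc(p->count * sizeof *p->offsets);
    p->data    = malloc(p->data_size);
    int ok = fread(p->offsets, sizeof *p->offsets, p->count, f) == p->count
          && fread(p->data, 1, p->data_size, f) == p->data_size;
    fclose(f);
    return ok;
}

/* O(1) access to word i -- pointer arithmetic only, no allocation. */
const char *pack_word(const WordPack *p, uint32_t i)
{
    return p->data + p->offsets[i];
}
```

The load path is just two `malloc` calls and two `fread` calls, which is why this kind of layout loads in a fraction of the time that unarchiving 125k string objects takes.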

Not sure if this is an option for you, since this is quite an obvious solution once you've realized that the object tree is causing the slowdown.

Thomas Tempelmann
How do you actually do that?
sshadoww
How? Well, I just write code in C that does it :) I've added a longer comment further down, since I was restricted to a short reply here.
Thomas Tempelmann
+1  A: 

Store your dictionary in Core Data and use NSFetchedResultsController to manage the display of these dictionary entries in your table view. Loading all 125,000 words into memory at once is a terrible idea, both performance- and memory-wise. Using the -setFetchBatchSize: method on your fetch request for loading the words for your table, you can limit NSFetchedResultsController to only handling the small subset of words that are visible at any given moment, plus a little buffer. As the user scrolls up and down the list of words, new batches of words are fetched in transparently.

A case like yours is exactly why this class (and Core Data) was added to iPhone OS 3.0.

Brad Larson
I agree, this solves my loading problem for displaying all the 125k words in a UITableView. What about my search-as-you-type mechanism? If I use Core Data as the storage solution for my dictionary I won't be able to run my log(N) search-as-you-type algorithm (since I won't have the data structure to run it on), and turning the search-as-you-type into a predicate-based fetched result will take much, much longer. Any ideas?
sshadoww
Why won't you be able to run the same search routine as you do with your SQLite database? If it requires a secondary hash table of some sort, you could use Core Data for that and perform queries against this hash table. All I know is that Apple themselves use Core Data for storing your contacts, with search-as-you-type enabled there. I recommend testing the performance first with standard queries as you type and seeing if that really is too slow for your needs. Core Data can surprise you, performance-wise.
Brad Larson
A: 

I use a large NSData memory block, then search through it. Well, there's more to it; it took me about two days to get it well optimized.

In your case I suspect you have a dictionary with a lot of words that have similar beginnings. You could prepare them on another computer in a format that both compacts the data and facilitates fast lookup. As a first step, the words should be sorted. With that, you can already perform a binary search on them for a fast lookup. If you store it all in one large memory area, you can do the search quite fast compared to how SQLite would search, I think.

Another way would be to see the words as a kind of tree: you have many thousands that begin with the same letter, so you divide your data accordingly. You could have an SQL table for each beginning letter of your set of words; that way, when you look up a word, you select one of the now-smaller tables depending on its first letter. This makes the amount that has to be searched much smaller, and you can do the same for the 2nd and 3rd letters as well, giving you quite fast access already.

Did this give you some ideas?

Thomas Tempelmann
A: 

Well, actually I figured it out myself in the end, but of course I thank you all for your quick and pertinent answers. To be concise, I will just say that Objective-C, like any other object-based programming language, is significantly slower than procedural programming languages due to introspection and other object-related overhead.
The solution was in fact to load all my data into a continuous chunk of memory using malloc (a char **), search in it on demand, and transform results into objects only then. This resulted in a 0.5 sec loading time (from file to memory) and reasonable (read: "fast") operations during execution. Thank you all again, and if you have any questions, I'm here for you. Thanks
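For illustration, the log(n) interval lookup described here could be sketched in C roughly like this (the function name is hypothetical; `words` is assumed to be a sorted `char **` index into the single malloc'd block): two binary searches return the half-open range of indexes whose words start with the current prefix.

```c
#include <string.h>

/* Two binary searches yield the half-open interval [*first, *last) of
 * indexes in the sorted `words` array whose entries begin with `prefix`,
 * in O(log n) string comparisons -- the search-as-you-type result set.
 * An empty prefix returns the whole array (no letters typed yet). */
void prefix_range(const char **words, size_t n, const char *prefix,
                  size_t *first, size_t *last)
{
    size_t plen = strlen(prefix);
    size_t lo = 0, hi = n;

    /* lower bound: first word whose leading plen chars are >= prefix */
    while (lo < hi) {
        size_t mid = lo + (hi - lo) / 2;
        if (strncmp(words[mid], prefix, plen) < 0)
            lo = mid + 1;
        else
            hi = mid;
    }
    *first = lo;

    /* upper bound: first word whose leading plen chars are > prefix */
    hi = n;
    while (lo < hi) {
        size_t mid = lo + (hi - lo) / 2;
        if (strncmp(words[mid], prefix, plen) <= 0)
            lo = mid + 1;
        else
            hi = mid;
    }
    *last = lo;
}
```

Feeding `[*first, *last)` straight to the UITableView as the visible row range gives the "all words shown until you type" behaviour without any per-keystroke SQL query.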

sshadoww