views:

1338

answers:

4

I have a classifieds website. Users can post ads, edit ads, view ads, etc.

Whenever a user posts an ad, I add a document to Solr. I don't know when to commit it, though. From what I have read, committing slows things down.

How should I do it? Autocommit every 12 hours or so?

Also, how should I do it with optimize?

Please give a detailed answer...

Thanks

+1  A: 

Try it first. It would be really bad if you avoided a simple and elegant solution just because you read that it might cause a performance problem. In other words, avoid premature optimization.

John
Optimization in this context means a specific Solr/Lucene operation that compacts the search index. It has nothing whatsoever to do with "premature optimization".
Joe23
+2  A: 

Actually, committing often and optimizing frequently make things really slow; both are heavy operations.

After a day of searching and reading, I found out the following:

1- Optimize causes the index to double in size while it is being optimized, and makes things really slow.

2- Committing after each add is NOT a good idea; it's better to commit a couple of times a day, and to run an optimize at most once a day.

3- Commit should be configured as "autoCommit" in the solrconfig.xml file, and tuned there according to your needs.
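For reference, the autoCommit section in solrconfig.xml lives under the update handler; a minimal sketch might look like the following (the threshold values here are assumptions you should tune for your own traffic, not recommendations):

```xml
<updateHandler class="solr.DirectUpdateHandler2">
  <autoCommit>
    <!-- commit after this many buffered documents... -->
    <maxDocs>1000</maxDocs>
    <!-- ...or after this many milliseconds, whichever comes first -->
    <maxTime>3600000</maxTime>
  </autoCommit>
</updateHandler>
```

With both limits set, Solr commits when either one is reached, so a burst of ads gets committed by `maxDocs` while a quiet period is still bounded by `maxTime`.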

Camran
+1  A: 

The way that this sort of thing is usually done is to perform commit/optimize operations on a Solr node located out of the request path for your users. This requires additional hardware, but it ensures that the performance penalty of the indexing operations doesn't impact your users. Replication is used to periodically shuttle optimized index files from the master node to the nodes that perform search queries for users.
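The master/search split described above is typically wired up with Solr's replication handler; a minimal sketch of the master side in solrconfig.xml might look like this (the `replicateAfter` value ties replication to the optimize command, and the exact file list is an assumption for your setup):

```xml
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="master">
    <!-- push a new index snapshot to slaves after each optimize -->
    <str name="replicateAfter">optimize</str>
  </lst>
</requestHandler>
```

The search nodes would carry a matching `slave` section pointing `masterUrl` at this handler, polling on an interval you choose.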

Paul Brown
+2  A: 

A little more detail on Commit/Optimize:

Commit: When you index documents to Solr, none of your changes will appear until you run the commit command. So the timing of the commit really depends on how quickly you want changes to appear on your site through the search engine. However, it is a heavy operation, so it should be done in batches, not after every update.
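The batching advice above can be sketched as a small helper that buffers adds and commits once per batch instead of per document. This is an illustration, not Solr API code: `send_batch` and `commit` are hypothetical callables standing in for HTTP POSTs to Solr's `/update` handler.

```python
class BatchIndexer:
    """Buffers documents and commits every `batch_size` adds.

    `send_batch` and `commit` are placeholders for the calls that
    would actually talk to Solr in a real deployment.
    """

    def __init__(self, batch_size, send_batch, commit):
        self.batch_size = batch_size
        self.send_batch = send_batch  # assumed: posts docs to /update
        self.commit = commit          # assumed: issues a Solr commit
        self.buffer = []

    def add(self, doc):
        """Queue one document; flush automatically when the batch fills."""
        self.buffer.append(doc)
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self):
        """Send any buffered documents and commit once for the batch."""
        if self.buffer:
            self.send_batch(self.buffer)
            self.commit()
            self.buffer = []
```

With a batch size of, say, 100, posting 100 ads costs one commit instead of 100; a final `flush()` on shutdown catches any leftovers.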

Optimize: This is similar to a defrag command for a hard drive. It reorganizes the index into fewer segments (increasing search speed) and removes any deleted (replaced) documents. Lucene index segments are write-once, so every time you re-index a document Solr marks the old document as deleted and creates a brand-new document to replace it. Optimize purges these deleted documents. You can see the searchable vs. deleted document counts by going to the Solr statistics page and comparing the numDocs and maxDocs numbers; the difference between the two is the number of deleted (non-searchable) documents still in the index.

Also, Optimize builds a whole NEW index from the old one and then switches to the new index when complete, so the operation requires double the disk space. You will therefore need to make sure that your index does not exceed 50% of your available hard drive space. (This is a rule of thumb; it usually needs less than 50% because deleted documents are not copied over.)

Index Server / Search Server: Paul Brown is right that the best design for Solr is a server dedicated and tuned to indexing, with the changes then replicated to the search servers. You can also set up the index server with multiple index endpoints.

e.g.: http://solrindex01/index1; http://solrindex01/index2

And since the index server does not serve search queries, you can give it a different memory footprint, index-warming configuration, etc.

Hope this is useful info for everyone.

JamesR