I have a PostgreSQL database. The table I need to index has about 20 million rows. When I try to index them all in one attempt (something like "select * from table_name"), I get a Java OutOfMemory error, even if I give the JVM more memory.

Is there any option in Solr to index the table part by part (e.g. execute the SQL for the first 1,000,000 rows and index it, then execute the SQL for the second million)?

Right now I am using a SQL query with LIMIT, but every time Solr finishes indexing a chunk I have to start it again manually.

UPDATE: OK, 1.4 is out now. No more OutOfMemory exceptions; it seems Apache has done a lot of work on DIH. Also, we can now pass parameters through the request and use them in our SQL selects. Wow!
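
To illustrate the request-parameter feature mentioned in the update, here is a rough sketch (the core URL, table, entity and field names are placeholders, not from the original post): anything passed on the /dataimport URL shows up in data-config.xml as ${dataimporter.request.<name>}, so the paging can be driven from the request instead of editing the config each time.

    <!-- data-config.xml (sketch) -->
    <dataConfig>
      <dataSource type="JdbcDataSource"
                  driver="org.postgresql.Driver"
                  url="jdbc:postgresql://localhost/mydb"
                  user="solr" password="secret"/>
      <document>
        <entity name="item"
                query="SELECT id, title, body FROM table_name
                       ORDER BY id
                       LIMIT ${dataimporter.request.limit}
                       OFFSET ${dataimporter.request.offset}">
          <field column="id"    name="id"/>
          <field column="title" name="title"/>
          <field column="body"  name="body"/>
        </entity>
      </document>
    </dataConfig>

Each chunk can then be pulled with something like /dataimport?command=full-import&clean=false&offset=1000000&limit=1000000.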

A: 

Do you have autoCommit or batchSize configured? If so, it might be this bug; try updating to trunk.

Mauricio Scheffer
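
(For context, a sketch of where those two settings live, with placeholder values: batchSize is an attribute on the JDBC data source in data-config.xml, while autoCommit is configured in solrconfig.xml under the update handler.)

    <!-- data-config.xml: batchSize on the JDBC data source -->
    <dataSource type="JdbcDataSource" driver="org.postgresql.Driver"
                url="jdbc:postgresql://localhost/mydb"
                user="solr" password="secret" batchSize="1000"/>

    <!-- solrconfig.xml: autoCommit inside the update handler -->
    <updateHandler class="solr.DirectUpdateHandler2">
      <autoCommit>
        <maxDocs>10000</maxDocs>
        <maxTime>60000</maxTime>
      </autoCommit>
    </updateHandler>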
A: 

Have you looked at using SolrJ as a client? While DIH is great, the tight coupling between Solr and your database means it can be hard to manipulate your data and work around issues.

With a SolrJ client you can iterate over your database in batches that you control, and then turn around and dump them directly into Solr. Also, using SolrJ's binary Java stream format instead of XML means that indexing your 20 million rows should go fairly quickly.

DIH is great, until you run into issues like this!

Eric Pugh
As far as I understand, SolrJ is a client for Java, right? But in my case I use Solr as an independent full-text search server, without a Java app.
Yurish
You are correct, SolrJ is a client for Java. However, there are many different clients, for Ruby, Python, .NET, etc. that you could use as well. The binary stream format unfortunately is Java-specific today.
Eric Pugh
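
For reference, a rough sketch of the batch-and-push approach described in this answer, using SolrJ and plain JDBC (the Solr URL, core, table and field names are placeholders; the class names follow current SolrJ rather than the 1.4-era API):

    import org.apache.solr.client.solrj.SolrClient;
    import org.apache.solr.client.solrj.impl.HttpSolrClient;
    import org.apache.solr.common.SolrInputDocument;

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.util.ArrayList;
    import java.util.List;

    public class BatchIndexer {
        public static void main(String[] args) throws Exception {
            int batchSize = 10_000;
            try (SolrClient solr = new HttpSolrClient.Builder(
                         "http://localhost:8983/solr/mycore").build();
                 Connection db = DriverManager.getConnection(
                         "jdbc:postgresql://localhost/mydb", "solr", "secret")) {

                long offset = 0;
                while (true) {
                    // Pull one slice of the table; LIMIT/OFFSET keeps memory bounded.
                    List<SolrInputDocument> docs = new ArrayList<>();
                    try (PreparedStatement ps = db.prepareStatement(
                            "SELECT id, title, body FROM table_name ORDER BY id LIMIT ? OFFSET ?")) {
                        ps.setInt(1, batchSize);
                        ps.setLong(2, offset);
                        try (ResultSet rs = ps.executeQuery()) {
                            while (rs.next()) {
                                SolrInputDocument doc = new SolrInputDocument();
                                doc.addField("id", rs.getString("id"));
                                doc.addField("title", rs.getString("title"));
                                doc.addField("body", rs.getString("body"));
                                docs.add(doc);
                            }
                        }
                    }
                    if (docs.isEmpty()) {
                        break;               // no more rows
                    }
                    solr.add(docs);          // push this batch to Solr
                    offset += docs.size();
                }
                solr.commit();
            }
        }
    }

The same loop works from any of the other client libraries; the point is simply that the caller, not DIH, decides how big each batch is.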
A: 

See the bit about "cursors" here; that might well help.

http://jdbc.postgresql.org/documentation/83/query.html

Richard Huxton
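
For completeness, the cursor trick from that page in Java/JDBC terms (table and column names are placeholders): with autocommit off and a fetch size set, the PostgreSQL driver reads from a server-side cursor a few thousand rows at a time instead of materialising the whole result set in memory.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class CursorRead {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://localhost/mydb", "solr", "secret")) {
                // Cursor-based fetching only happens when autocommit is off...
                conn.setAutoCommit(false);
                try (Statement st = conn.createStatement()) {
                    // ...and a fetch size is set; otherwise the driver loads everything at once.
                    st.setFetchSize(5_000);
                    try (ResultSet rs = st.executeQuery(
                            "SELECT id, title, body FROM table_name")) {
                        while (rs.next()) {
                            // hand each row (or a buffered batch of rows) to the indexer here
                        }
                    }
                }
                conn.commit();
            }
        }
    }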