Has anyone successfully used Tokyo Cabinet / Tokyo Tyrant with large datasets? I am trying to load a subgraph of the Wikipedia data source. After hitting about 30 million records, I see an exponential slowdown. This occurs with both the HDB and BDB databases. I adjusted bnum to 2-4x the expected number of records for the HDB case, which gave only a slight speed-up. I also set xmsiz to 1GB or so, but ultimately I still hit a wall.
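For reference, this is roughly how I'm tuning the hash database before the load (a minimal sketch; the file name "wikigraph.tch" and the exact bnum/xmsiz values are placeholders for my real settings):

    #include <tcutil.h>
    #include <tchdb.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void){
      TCHDB *hdb = tchdbnew();

      /* bnum at roughly 2-4x the expected record count (120M is a placeholder);
         HDBTLARGE allows the file to grow past 2GB */
      if(!tchdbtune(hdb, 120000000LL, -1, -1, HDBTLARGE)){
        fprintf(stderr, "tune error: %s\n", tchdberrmsg(tchdbecode(hdb)));
      }

      /* extra mapped memory, about 1GB */
      if(!tchdbsetxmsiz(hdb, 1LL << 30)){
        fprintf(stderr, "xmsiz error: %s\n", tchdberrmsg(tchdbecode(hdb)));
      }

      /* tuning has to happen before the open call */
      if(!tchdbopen(hdb, "wikigraph.tch", HDBOWRITER | HDBOCREAT)){
        fprintf(stderr, "open error: %s\n", tchdberrmsg(tchdbecode(hdb)));
        return 1;
      }

      /* ... bulk insert with tchdbput(hdb, kbuf, ksiz, vbuf, vsiz) ... */

      tchdbclose(hdb);
      tchdbdel(hdb);
      return 0;
    }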
It seems that Tokyo Tyrant is basically an in-memory database, and once you exceed xmsiz or your available RAM, you get a barely usable database. Has anyone else encountered this problem before? Were you able to solve it?