I've spent hours searching for examples of how to use the bsddb module and the only ones that I've found are these (from here):
data = mydb.get(key)
if data:
    doSomething(data)
#####################
rec = cursor.first()
while rec:
    print rec
    rec = cursor.next()
#####################
rec = mydb.first()  # the legacy interface starts with first(), not set()
while rec:
    key, val = rec
    doSomething(key, val)
    rec = mydb.next()
Does anyone know where I could find more (practical) examples of how to use this package?
Or would anyone mind sharing code that they've written themselves that used it?
Edit:
The reason I chose Berkeley DB was its scalability. I'm working on a latent semantic analysis of about 2.2 million web pages. A simple test run over 14 web pages generates around 500,000 records, so doing the math out, there will be about 78.6 billion records in my table.
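For reference, that extrapolation can be checked directly (all figures are the ones quoted above):

```python
# Extrapolate the 14-page test run to the full 2.2 million-page crawl
records_per_page = 500_000 / 14          # ~35,714 records per page
total = records_per_page * 2_200_000     # scale up to 2.2 million pages
print(round(total / 1e9, 1))             # ~78.6 (billion records)
```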
If anyone knows of another efficient, scalable database that I can access from Python, please let me know about it! (*lt_kije* has brought it to my attention that bsddb is deprecated in Python 2.6 and will be gone in 3.*)
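For what it's worth, the get-and-iterate patterns from the snippets above carry over almost unchanged to the standard library's `dbm` module, which survives into Python 3. A minimal sketch using `dbm.dumb` (chosen only because it needs no external library; the path and keys are illustrative, and it will not scale the way Berkeley DB does):

```python
# Minimal sketch of the same access patterns with stdlib dbm.dumb,
# as a stand-in for the deprecated bsddb module. Path/keys are made up.
import dbm.dumb
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), 'example_db')
db = dbm.dumb.open(path, 'c')      # 'c': create the file if it is missing
db[b'key1'] = b'value1'            # keys and values are bytes
db[b'key2'] = b'value2'

# Lookup, mirroring the mydb.get(key) pattern above
fetched = db[b'key1'] if b'key1' in db else None
print(fetched)                     # b'value1'

# Iteration, mirroring the cursor loop above
pairs = []
for key in sorted(db.keys()):
    pairs.append((key, db[key]))
    print(key, db[key])

db.close()
```

For the record counts described above, though, something built for that scale (e.g. a proper key-value store or an RDBMS) is probably a better fit than any dbm-style file.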