Several databases I've been looking at recently implement a virtual machine internally to perform their data reads and writes. For example, check out this article on SQLite's virtual machine, which they call the 'VDBE'. I'm curious what the benefits of such an architecture are. I would assume performance is one, but why would a virtual machine like this run faster? If anything, it seems that this extra layer could cause it to run slower. So perhaps it's for security? Or portability? Anyway, just curious about this.
views: 89
answers: 1
+1
A:
I think they provide a virtual machine to get a balanced trade-off: they do their work at an "assembly-like" level, where you gain acceptable speed without losing portability. One extreme is to interpret the high-level code (the SQL code**) directly as a high-level language; you lose speed but gain convenience. The other extreme is to produce platform-specific (native) code, which runs much faster than interpretation but is a lot of hassle for a widely deployed library that is expected to run anywhere ANSI C exists. Compiling to a portable bytecode and running it on a virtual machine sits between the two.
** It doesn't have to be SQL code, of course. I think an imperative representation is much better suited for execution; still, that representation is very high level compared to an "opcode".
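To make that middle ground concrete, here is a minimal sketch of the kind of stack-based bytecode interpreter I mean (the opcodes are made up for illustration and look nothing like SQLite's real VDBE instruction set):

    /* Hypothetical toy VM: a compact instruction format plus a small,
       portable dispatch loop written in plain C. */
    #include <stdio.h>

    enum opcode { OP_PUSH, OP_ADD, OP_PRINT, OP_HALT };

    struct instr { enum opcode op; int arg; };

    static void run(const struct instr *prog)
    {
        int stack[64];
        int sp = 0;

        for (;;) {
            switch (prog->op) {
            case OP_PUSH:  stack[sp++] = prog->arg;          break;
            case OP_ADD:   sp--; stack[sp - 1] += stack[sp]; break;
            case OP_PRINT: printf("%d\n", stack[--sp]);      break;
            case OP_HALT:  return;
            }
            prog++;
        }
    }

    int main(void)
    {
        /* Imagine this little program was produced by compiling something
           like "SELECT 2 + 3;" -- parsing happens once, and only this
           tight loop runs afterwards. */
        const struct instr prog[] = {
            { OP_PUSH, 2 }, { OP_PUSH, 3 }, { OP_ADD, 0 },
            { OP_PRINT, 0 }, { OP_HALT, 0 }
        };
        run(prog);
        return 0;
    }

The expensive analysis happens up front in the compiler; the part that actually executes is a small loop of portable C, which is roughly the position SQLite's VDBE occupies.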
AraK
2010-03-19 01:30:04
But why would you gain speed by using a VM model?
Marplesoft
2010-03-25 17:36:08
@Marplesoft: mainly because you save the time required to analyze the source language and its semantics (SQL in the case of SQLite). The compiler that translates the source language to VM bytecode does that work once, including some optimizations, and all that's left is to run the resulting bytecode as fast as possible.
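You can see this with SQLite's own C API (the table and statement below are just made up for illustration): sqlite3_prepare_v2() parses and compiles the SQL into VDBE bytecode once, and each subsequent sqlite3_step() only runs that bytecode:

    #include <stdio.h>
    #include <sqlite3.h>

    int main(void)
    {
        sqlite3 *db;
        sqlite3_stmt *stmt;

        if (sqlite3_open(":memory:", &db) != SQLITE_OK) return 1;
        sqlite3_exec(db, "CREATE TABLE t(x INTEGER)", 0, 0, 0);

        /* Compiled to VDBE bytecode once... */
        sqlite3_prepare_v2(db, "INSERT INTO t(x) VALUES (?)", -1, &stmt, 0);

        /* ...then executed many times; only the bytecode interpreter runs here. */
        for (int i = 0; i < 1000; i++) {
            sqlite3_bind_int(stmt, 1, i);
            sqlite3_step(stmt);
            sqlite3_reset(stmt);
        }

        sqlite3_finalize(stmt);
        sqlite3_close(db);
        return 0;
    }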
Eli Bendersky
2010-08-28 17:17:54