Well, I have seen commercial DBs get up to 2GB per minute on not particularly impressive hardware, and the standard open-source DBs (MySQL, Postgres, even SQLite) are not far behind.
For any volume of writes which will give a modern DB trouble, there are three things which will affect performance (none of which depends on the particular DB you choose).
One is basic design, particularly partitioning (spreading your DB over several physical disks) and minimising the number of indexes on the tables (for write performance, zero indexes is best! -- see the sketch below).
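If you want to see the index cost for yourself, here is a minimal sketch using SQLite (only because it needs no setup -- the table and column names are made up). The indexed run will be measurably slower, since every insert also has to maintain the B-tree:

```python
import sqlite3
import time

def timed_inserts(with_index: bool) -> float:
    """Insert 100k rows into a fresh in-memory table; return elapsed seconds."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE events (id INTEGER, payload TEXT)")
    if with_index:
        conn.execute("CREATE INDEX idx_events_id ON events (id)")
    rows = [(i, "x" * 100) for i in range(100_000)]
    start = time.perf_counter()
    conn.executemany("INSERT INTO events VALUES (?, ?)", rows)
    conn.commit()
    return time.perf_counter() - start

print(f"no index:   {timed_inserts(False):.2f}s")
print(f"with index: {timed_inserts(True):.2f}s")
```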
Two is log placement, or if possible log avoidance. Logging is the bottleneck in most RDBMSes. Making sure you are logging to dedicated fast disks is one way; turning off logging for the table (the mechanism varies by RDBMS, but most support it) is another, if you can afford to lose transactions.
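As a rough illustration, this is what turning logging off looks like in SQLite; the PRAGMA names are SQLite-specific, and other systems have their own knobs (MySQL's innodb_flush_log_at_trx_commit, Postgres's UNLOGGED tables), so treat this as a sketch rather than a recipe:

```python
import sqlite3

conn = sqlite3.connect("fast_writes.db")
# No rollback journal at all: a crash mid-write can corrupt the file,
# so this is only sane for data you can reload from elsewhere.
conn.execute("PRAGMA journal_mode = OFF")
# Don't fsync on commit; let the OS flush when it pleases.
conn.execute("PRAGMA synchronous = OFF")
```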
Three is hardware -- lots of memory and lots of fast disks to spread out your I/O load.
There are some exotic options out there if this is still not fast enough.
Buy a z/OS mainframe and run the venerable IMS/DB with the DEDB (Data Entry Database) feature; it is about four times faster than any other ACID DB. Or buy Oracle's in-memory DB option, TimesTen (originally developed at HP).
Another possibility, if you have some decent queuing software available, is to capture the data and immediately place it in a queue. You can then have one or more background processes pulling the data off the queue and doing the actual DB updates in the background (sketched below).
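Here is a toy sketch of that pattern. Python's in-process queue.Queue stands in for real queuing software (MQ, Kafka, whatever you have), and the table name is made up; the point is that the capture path just enqueues and returns, while a background thread batches the actual inserts and pays for one commit per batch instead of one per row:

```python
import queue
import sqlite3
import threading

work: queue.Queue = queue.Queue()  # stands in for real queuing software

def writer() -> None:
    """Background worker: drain the queue in batches, one commit per batch."""
    conn = sqlite3.connect("events.db")
    conn.execute("CREATE TABLE IF NOT EXISTS events (id INTEGER, payload TEXT)")
    while True:
        batch = [work.get()]                 # block until something arrives
        while not work.empty() and len(batch) < 500:
            batch.append(work.get_nowait())  # grab whatever else is waiting
        conn.executemany("INSERT INTO events VALUES (?, ?)", batch)
        conn.commit()
        for _ in batch:
            work.task_done()

threading.Thread(target=writer, daemon=True).start()

# The capture path just enqueues and returns immediately.
for i in range(10_000):
    work.put((i, "some captured payload"))

work.join()  # wait until the background writer has flushed everything
```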