We have an application that produces a database on disk. The database is made of thousands of files.
The application can have anywhere from 500 to 3000 file handles open at the same time. These handles are kept open, and data is written to them continuously.
Up until now, this has worked really well on a local hard drive, but when we tried moving the database to a shared disk, we ran into a lot of problems.
Is this simply a bad idea, or could it work if we changed the design of the database engine to open/close file handles on demand?
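To make the "on demand" idea concrete, here is a minimal sketch (in Python, just to illustrate; the class name, capacity, and append-mode choice are assumptions, not our actual engine): an LRU cache that caps the number of simultaneously open handles, closing the least recently used one and transparently reopening files as they are written to again.

```python
from collections import OrderedDict

class HandleCache:
    """Keep at most `capacity` files open; the least recently used
    handle is closed and the file is reopened on the next write."""

    def __init__(self, capacity=512):
        self.capacity = capacity
        self._handles = OrderedDict()  # path -> open file object

    def write(self, path, data):
        f = self._handles.pop(path, None)  # remove so re-insert marks it MRU
        if f is None:
            # At capacity: evict and close the least recently used handle.
            if len(self._handles) >= self.capacity:
                _, lru = self._handles.popitem(last=False)
                lru.close()
            # Append mode so a reopened file keeps its existing contents.
            f = open(path, "ab")
        self._handles[path] = f
        f.write(data)

    def close_all(self):
        for f in self._handles.values():
            f.close()
        self._handles.clear()
```

The question is whether this kind of design would behave acceptably on a shared disk, or whether the constant reopen churn just trades one problem for another.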
EDIT
At this time, we only have one client "connected" to the database.