The application I work on generates several hundred CSV files in a 15-minute window, and the back end of the application picks these files up and processes them (updating the database with their values). One problem is database locks.
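For context, the processing loop currently looks roughly like the sketch below. This is a simplification: the `DbClient` class and its `store_record` method are placeholders I made up for this question, standing in for our actual driver.

```python
import csv
import threading
from pathlib import Path

class DbClient:
    """Placeholder standing in for our real in-memory NoSQL driver;
    the real client serializes writes, which is where the locking shows up."""

    def __init__(self) -> None:
        self._lock = threading.Lock()
        self._records: list[dict] = []

    def store_record(self, record: dict) -> None:
        with self._lock:  # one lock acquisition per record
            self._records.append(record)

def process_file(db: DbClient, path: Path) -> None:
    """Parse one CSV file and push every row into the database."""
    with path.open(newline="") as f:
        for row in csv.DictReader(f):
            db.store_record(row)

def process_incoming(db: DbClient, incoming_dir: Path) -> None:
    """Runs every 15 minutes; several hundred files may be waiting."""
    for path in sorted(incoming_dir.glob("*.csv")):
        process_file(db, path)
        path.unlink()  # drop the file once its rows are stored
```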
What are the best practices for working with several thousand files so that we avoid lock contention and process them efficiently?
Would it be more efficient to merge them into a single file and process that, or to process one file at a time?
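To make the two options concrete, here is roughly what I mean, reusing the placeholder `DbClient` and `process_file` from the sketch above. The `bulk_store` method is an assumption about what a batch-write API might look like; I don't know yet whether our driver offers one.

```python
# Option A: merge everything from the window, then write once.
def process_window_merged(db: DbClient, incoming_dir: Path) -> None:
    records: list[dict] = []
    for path in sorted(incoming_dir.glob("*.csv")):
        with path.open(newline="") as f:
            records.extend(csv.DictReader(f))
    db.bulk_store(records)  # hypothetical batch API: one lock for the whole window

# Option B: keep the current shape, one file (one lock per record) at a time.
def process_window_per_file(db: DbClient, incoming_dir: Path) -> None:
    for path in sorted(incoming_dir.glob("*.csv")):
        process_file(db, path)
```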
What are some common best practices?
Edit: the database is not a relational DBMS. It's a NoSQL, object-oriented DBMS that works in memory.