I have a number of rather large binary files of fixed-length records, whose layout is described in a separate text file. Data files can be as large as 6 GB; the layout files (COBOL copybooks) are small, usually under 5 KB.
All the data files reside on a GNU/Linux server (although they were generated on a mainframe).
I need to provide the testers with the means to edit those binary files. There is a free product called RecordEditor (http://record-editor.sourceforge.net/), but it has two severe drawbacks:
It forces the testers to download the huge files over SFTP, only to upload them again after every slight change. Very inefficient.
It loads the entire file into working memory, which makes it useless for all but the relatively small data files.
What I have in mind is a client/server architecture based on Java:
The server would run as a permanent process, listening for editing requests coming from the client (a sketch of what such a protocol might look like follows the list). Requests would include things like
return the list of available files
lock a certain file for editing
modify this data in that record
return the n-th page of records
and so on…
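To make the idea concrete, here is a minimal sketch of what such a request message could look like on the wire. All type and field names are hypothetical, invented purely for illustration; a real implementation would pair something like this with a MINA codec (or whatever serialization the transport dictates):

```java
// Hypothetical request types for the editing protocol; the names are
// illustrative and not part of any existing library.
public enum RequestType {
    LIST_FILES,   // return the list of available files
    LOCK_FILE,    // lock a certain file for editing
    PUT_RECORD,   // modify this data in that record
    GET_PAGE      // return the n-th page of records
}

// A request as it might travel between client and server.
public final class EditRequest implements java.io.Serializable {
    public final RequestType type;
    public final String fileName;   // target data file, where applicable
    public final long recordIndex;  // used by PUT_RECORD
    public final int pageNumber;    // used by GET_PAGE
    public final byte[] payload;    // new record contents, if any

    public EditRequest(RequestType type, String fileName,
                       long recordIndex, int pageNumber, byte[] payload) {
        this.type = type;
        this.fileName = fileName;
        this.recordIndex = recordIndex;
        this.pageNumber = pageNumber;
        this.payload = payload;
    }
}
```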
The client could take any form (an RCP-based desktop application, which is my first candidate; an ncurses client on the same server; a web application in between...) as long as it is able to send requests to the server.
I've been exploring NIO (because of its buffers) and MINA (because of protocol transparency) to implement the scheme. However, before advancing this endeavor any further, I would like to collect your expert opinions.
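For what it's worth, the property that makes this tractable is that the records are fixed length: record i starts at byte i * recordLength, so the server can patch a single record in place without ever reading the whole 6 GB file. A rough sketch of that idea with NIO (the path and record length below are made up for the example):

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class RecordPatcher {

    // Overwrites one record of a fixed-length-record file in place.
    // Only recordLength bytes are written; the rest of the (possibly
    // multi-GB) file is never loaded into memory.
    public static void writeRecord(String path, int recordLength,
                                   long recordIndex, byte[] newRecord)
            throws IOException {
        if (newRecord.length != recordLength) {
            throw new IllegalArgumentException("record size mismatch");
        }
        try (FileChannel ch = FileChannel.open(Paths.get(path),
                StandardOpenOption.READ, StandardOpenOption.WRITE)) {
            long offset = recordIndex * (long) recordLength; // fixed-length layout
            // Advisory lock over just this record's byte range, so editors
            // working on other records are not blocked.
            try (FileLock lock = ch.lock(offset, recordLength, false)) {
                ByteBuffer buf = ByteBuffer.wrap(newRecord);
                while (buf.hasRemaining()) {
                    ch.write(buf, offset + buf.position());
                }
            }
        }
    }
}
```

Paging falls out of the same arithmetic: the n-th page of records is pageSize * recordLength bytes starting at byte n * pageSize * recordLength, which a FileChannel can read into a ByteBuffer directly.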
Is mine a reasonable way to frame the problem?
Is it feasible to do it using the language and frameworks I'm thinking of? Is it advisable?
Do you know of any patterns, blueprints, success stories, or open projects that resemble or relate to what I'm trying to do?