We have a very similar system. We collect data from manufacturing/analysis equipment, process the raw data with our algorithms, and store the results of that processing in a standard format in our Oracle database.
If you're using Matlab for your processing, it sounds like you're even more math-intensive than we are, yet we still chose to manage all the data processing in our external apps. Our schema looks roughly like this:
CREATE TABLE RawData
(
RawDataId NUMBER(22) DEFAULT NULL NOT NULL, -- PK
RawDataType VARCHAR2(50) DEFAULT NULL NOT NULL,
ExposureToolId NUMBER(22) DEFAULT NULL NULL,
EventTime DATE DEFAULT NULL NULL,
-- ... various columns describing event characteristics ...
Body BLOB DEFAULT NULL NOT NULL,
Length NUMBER DEFAULT 0 NULL
);
CREATE TABLE Analysis
(
AnalysisId NUMBER(22) DEFAULT NULL NOT NULL, -- PK
RawDataId NUMBER(22) DEFAULT NULL NOT NULL,
AnalysisName VARCHAR2(255) DEFAULT NULL NOT NULL,
AnalysisType VARCHAR2(255) DEFAULT NULL NOT NULL,
AnalyzeTime DATE DEFAULT NULL NULL,
Status CHAR(1) DEFAULT NULL NULL,
SentToServerDate DATE DEFAULT NULL NULL,
ServerId NUMBER(22) DEFAULT NULL NULL
);
CREATE TABLE DataSet
(
DataSetId NUMBER(22) DEFAULT NULL NOT NULL, -- PK
AnalysisId NUMBER(22) DEFAULT NULL NOT NULL,
DataSetName DATE DEFAULT sysdate NULL
);
CREATE TABLE Coefficient
(
DataSetId NUMBER(22) DEFAULT NULL NOT NULL, -- PK
IdentifierId NUMBER(22) DEFAULT NULL NOT NULL, -- PK
Coefficient FLOAT DEFAULT 0 NULL
);
Collected data from the tools goes into the RawData table, and at that time one or more header records are created in the Analysis table, indicating that the data needs processing and naming the analysis and the server that will do the work; the status is set to "Pending". The server is then sent a message to perform the work, and when it stores the results (in the Coefficient table, grouped into datasets), the status is set to "Done" (or "Error" if we hit a problem). When the server components start up, they query the Analysis table for any pending work; this ensures no work is lost in the event of a crash.
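In rough SQL terms, the lifecycle is sketched below. Since Status is CHAR(1), I'm assuming single-letter codes ('P'/'D'/'E') and an AnalysisSeq sequence purely for illustration; the exact values and names aren't the point, the Pending/Done/Error pattern is:

-- New work: create the header row in a Pending state (AnalysisSeq is illustrative)
INSERT INTO Analysis (AnalysisId, RawDataId, AnalysisName, AnalysisType, Status, SentToServerDate, ServerId)
VALUES (AnalysisSeq.NEXTVAL, :raw_data_id, :analysis_name, :analysis_type, 'P', SYSDATE, :server_id);

-- On server startup: pick up anything that was never finished
SELECT AnalysisId, RawDataId, AnalysisName, AnalysisType
FROM Analysis
WHERE ServerId = :server_id
AND Status = 'P';

-- Once the coefficients are stored: mark the header Done (or 'E' on error)
UPDATE Analysis
SET Status = 'D',
AnalyzeTime = SYSDATE
WHERE AnalysisId = :analysis_id;

Because the status only flips after the results are safely in the Coefficient table, a worker that crashes mid-job leaves its row marked Pending, and the startup query simply re-dispatches it.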
This has been our model for over ten years, and it works really well. It's easy to deploy and maintain different versions of the server modules; in my opinion, it's easier to manage executables this way than it is to compile packages on the Oracle server, but that could just be my bias - the conversion of raw to analyzed data just seems more "app-centric" than a database task.