The state of affairs with regard to Jet file sizes is a never-ending problem for me.
I am currently watching a piece of my own VBA code from Access database A as it does a series of single-record field updates using ADO to a table in Access database B (via an updatable-query reference in database A). The single field is a CHAR(8). With every 4 updates that go by, database B grows by about 8 Kbytes. There is no good excuse for that. The file growth also slows performance severely: with each growth, updates slow from about one per second (in a table of about 30-40K records, using single-record SQL lookups and no indexes anywhere) to one every 5-10 seconds.
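For anyone following along, the update pattern I am describing looks roughly like the sketch below. This is only an illustration of the pattern, not my actual code: the query name (qryTargetB) and the table and field names (tblSource, KeyField, NewCode, CodeField) are placeholders invented for the example.

    ' Rough sketch of the single-record ADO update loop described above.
    ' qryTargetB stands in for the updatable query in database A that
    ' references the table in database B; all names are illustrative.
    Dim cn As ADODB.Connection
    Dim rsSrc As ADODB.Recordset

    Set cn = CurrentProject.Connection   ' ADO connection within database A

    Set rsSrc = New ADODB.Recordset
    rsSrc.Open "SELECT KeyField, NewCode FROM tblSource", cn, _
               adOpenForwardOnly, adLockReadOnly

    Do Until rsSrc.EOF
        ' One single-record UPDATE per source row, issued through the
        ' updatable query so the change lands in database B's table.
        cn.Execute "UPDATE qryTargetB SET CodeField = '" & rsSrc!NewCode & "'" & _
                   " WHERE KeyField = " & rsSrc!KeyField, , adExecuteNoRecords
        rsSrc.MoveNext
    Loop

    rsSrc.Close
    Set rsSrc = Nothing
    Set cn = Nothing

Each Execute call touches exactly one CHAR(8) field in one record, and it is this pattern that makes database B balloon as described.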
Now, I admit, I did compact/repair database B prior to running this update code; perhaps if I had not done that, the performance would not have been this bad. Had the target field been of, say, type Memo, I would have expected this. But to carry out an update on a CHAR() field and get this result is simply not reasonable.
Most of the suggestions above (no particular criticism of any one solution intended) appear to be valid for applications with a relatively permanent arrangement, i.e. ones that talk to the same target databases all of the time. Mine is not so: I cannot alter the target database (database B), as it is generated and consumed by a vendor's tool that we use to export and import data from their application.
I understand and commend the writers above for coming up with solutions to users' problems. However, I cannot let it stand when poor software design and implementation get in the way of users using a product as they expect it to function.