views:

2075

answers:

8

I have a large MS Access application with a lot of computations in VBA code. When I run it, it eventually crashes because the file grows too large. A lot of intermediate tables and queries are created and subsequently deleted, but Access does not reclaim the space. I have diligently closed all intermediate recordsets and set all temporary objects to Nothing, but nothing helps. The only way I can get my code to run is to run part of it, stop, compact/repair the file, and then restart the code.

Isn't there a better way?

Thanks

A: 

Unfortunately, MS Access has problems when the database gets too large; the maximum size is 2 GB for an Access DB.

You may consider moving to SQL Server Express, VistaDB, etc.

Bramha Ghosh
The 2 GB limit makes sense, but it is likely not an Access limitation so much as an OS limitation. That is, since Access stores its data in a single file, a 2 GB file-size limit in the OS might be the point of failure.
JohnFx
I don't know of any recent version of Windows that has a 2GB file size limitation. It is true that it's not an *Access* limitation -- it's a Jet limitation, and it's hard-wired into Jet.
David-W-Fenton
A late addendum: certain file systems, like FAT32, have a 2GB file size limitation. But nobody with any sense uses FAT32 on Windows any more -- that went out with the death of the Win9x family of Windows versions.
David-W-Fenton
+3  A: 

What sizes are you dealing with? What is the error code when it crashes? I'd be surprised if it is simply because the file gets "too big", but I imagine there's a limit. It sounds from your description of all the temp stuff that there may be design improvements that would help.

EDIT: I expect you realize it's non-trivial to replace the database with something else - even if you try to keep whatever else is in the mdb besides the tables. Access querydefs are unique, Access SQL is non-standard and you'd be basically starting over.

Most Access applications I've seen have lots of opportunity for refactoring; and it's usually not that difficult if a) you understand the logic and the business rules, and b) you have a solid understanding of Access programming. But that would be more or less true for any alternatives. If I were you and you're a little short in either area, maybe you can get some help. But I'd try to rescue the Access app first.

There's also a suggestion from another poster about moving the tables into one or more attached MDBs. That's a solid, well-proven technique in general. But first I'd get a handle on what the real cause of the problem is.

le dorfier
+1 I agree that it sounds like the design could be improved.
Remou
I disagree with you. I've moved some fairly large MS Access DBs to MS SQL without a big problem (no, it's not trivial), but the vast majority of the querydefs, etc. work without modification (especially if they are linked to the SQL tables in the Access database). You aren't starting over,
CodeSlave
+5  A: 

You should be able to trigger the compact function from within your VBA code.

I had the snippet below bookmarked from a long time ago when I was doing Access work.

Public Sub CompactDB() 
    ' Drives the built-in "Compact and Repair Database..." menu item.
    ' Note: the menu captions are locale- and version-dependent, and
    ' compacting the currently open database will restart the application.
    CommandBars("Menu Bar").Controls("Tools").Controls("Database utilities").Controls("Compact and repair database...").accDoDefaultAction 
End Sub

You can call that from your code to work around the problem.

NOTE: you might also consider moving to a larger database system if you are hitting these kinds of scaling issues.
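Since a database cannot compact itself while it is open, another option is to keep the temporary tables in a separate .mdb and compact that file with DAO's DBEngine.CompactDatabase. This is only a sketch; the paths and file names are illustrative assumptions, and the source file must be closed by all users before compacting:

Public Sub CompactTempDb()
    ' Compact the (closed) temp database into a new file, then swap it in.
    Const SRC As String = "C:\Temp\Scratch.mdb"
    Const DST As String = "C:\Temp\Scratch_compacted.mdb"
    DBEngine.CompactDatabase SRC, DST
    Kill SRC            ' remove the bloated original
    Name DST As SRC     ' rename the compacted copy into place
End Sub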

Mitchel Sellers
A: 

According to http://office.microsoft.com/en-us/access/HP051868081033.aspx, Access 2003 and 2007 have a 2 GB limit. However, it's easy to move some or all the tables into a separate .mdb file and then link to those tables. It's good practice anyway to have two files, one for your data and one for all the macros, queries, and so on. You could even have multiple files if your table file gets near the 2 GB limit.
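Splitting like this is mostly a matter of moving the tables to a back-end file and then linking to them from the front end. A minimal sketch, assuming illustrative file and table names:

Public Sub LinkBackEndTable()
    ' Creates a linked table named "Customers" in the current database,
    ' pointing at the table of the same name in the back-end file.
    DoCmd.TransferDatabase acLink, "Microsoft Access", _
        "C:\Data\BackEnd.mdb", acTable, "Customers", "Customers"
End Sub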

Knox
A: 

I'd push the data over to MS SQL (the permanent data and the intermediate tables); and you can leave the code portion in MS Access for the time being.

This solves two big issues:

  1. The data will be inherently more stable/dependable (I can't tell you how many times I've had a corrupt MS Access database).
  2. Your Access database won't grow/change very much (it should reach an equilibrium once all the code has been run and compiled).

Both of these mean no more having to compress/repair the database; you can get a free version (the Express Edition) of MS SQL and it is not that hard to do.

CodeSlave
A: 

If you do not want to switch to SQL Server Express or similar, you could explore the following ideas:

  • Open a second, 'external' Access database (.mdb file) for all temporary tables. Put all temp data in that external file and throw the file away when you close your app. In your code you then work with both the 'CurrentDb' object and the external database, which you build at startup and connect to through a Jet, OLEDB, or ODBC connection.
  • Separate your permanent tables from your code and, when needed, bring the data into your local client file to build your temporary tables. This can be done, for example, by linking the external database to the local/client file using "DoCmd.TransferDatabase acLink". It can also be done by connecting to the permanent data through an OLEDB connection, opening the needed recordset(s), and saving them locally as XML files. Many other solutions can be implemented here.
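The first idea above can be sketched with DAO. The path, table, and field names here are illustrative assumptions:

Public Sub UseScratchDb()
    Dim tempDb As DAO.Database
    ' Create a throwaway database for intermediate results.
    Set tempDb = DBEngine.Workspaces(0).CreateDatabase( _
        "C:\Temp\Scratch.mdb", dbLangGeneral)
    tempDb.Execute "CREATE TABLE TempResults (ID LONG, Amount DOUBLE)", dbFailOnError
    ' ... run the intermediate computations against tempDb ...
    tempDb.Close
    Kill "C:\Temp\Scratch.mdb"   ' discard the temp file; the main file never grows
End Sub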
Philippe Grondier
A: 

The state of affairs with regard to Jet file sizes is interminably problematic for me.

I am currently watching a piece of my own VBA code in Access database A as it performs a series of single-record field updates, using ADO, against a table in Access database B (via an updatable-query reference in database A). The single field is a CHAR(8). With every 4 updates that go by, database B grows by about 8 KB. There is no good excuse for that. The file growth slows performance severely; with each growth, updates slow from about one per second (in a table of about 30-40K records, using single-record SQL lookups and no indexes anywhere) to one per 5-10 seconds. Now, I admit, I did compact/repair database B before running this update code; perhaps if I had not done that, the performance would not have been this bad. Had the target field been of, say, type Memo, I would have expected this. But getting this result from an update on a CHAR() field is simply not reasonable.

Most of the above (no particular criticism of any one solution intended) appear to be valid solutions for applications that use a relatively permanent business arrangement (talking to the same target databases all of the time). Mine does not . . . I cannot alter the target database (database B), as it is generated and consumed by a vendor's tool that we use to export and import data from their application.

I understand and commend the above writers for coming up with solutions to users' problems. However, I cannot let it stand when poor software design/implementation gets in the way of users using a product as the users expect it to function.

David Henninger
For me, the admission of updating one record at a time is a red flag. Are you sure it can't be done with a batch SQL UPDATE? Secondly, there is no surprise here. Jet/ACE stores records in variable-length fields. Changing the data in a field of non-fixed length changes the length of the record and requires rewriting it in a different data page. This is all by design, and really not very different from any database engine that has ever existed (except those from the days when disk space was far more restricted than when Jet was designed in the early 90s).
David-W-Fenton
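A set-based version of the per-record update suggested in the comment above might look like this; the table, field, and criteria names are purely illustrative:

Public Sub BatchUpdate()
    ' One UPDATE statement instead of a loop of single-record ADO updates.
    Dim db As DAO.Database
    Set db = CurrentDb
    db.Execute "UPDATE TargetTable SET CodeField = 'NEWVALUE' " & _
               "WHERE Status = 'PENDING'", dbFailOnError
End Sub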