views: 181 · answers: 6
Hi,

So, at my workplace, they have a huge Access file (used with MS Access 2003 and 2007). The file size is about 1.2GB, so it takes a while to open. We cannot delete any of the records, and we have 100+ tables (each month we create 4 more tables, don't ask!). How do I improve this, i.e. downsize the file?

thanks.

+5  A: 

You can do two things:

  • use linked tables
  • "compact" the database(s) every once in a while

The linked tables will not in and of themselves limit the overall size of the database, but they will "package" it into smaller, more manageable files. To look into this:

'File' menu + 'Get External data' + 'Linked tables'
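If you'd rather script it than click through the menu, a linked table can also be created in VBA with `DoCmd.TransferDatabase` (the file path and table names below are placeholders, not from the question):

```vba
' Link a table from an archive MDB into the current database.
' "C:\Archives\Sales2009.mdb" and "tblJanuary" are example names.
DoCmd.TransferDatabase _
    TransferType:=acLink, _
    DatabaseType:="Microsoft Access", _
    DatabaseName:="C:\Archives\Sales2009.mdb", _
    ObjectType:=acTable, _
    Source:="tblJanuary", _
    Destination:="tblJanuary_Linked"
```

The linked table behaves like a local one in queries, but its data stays in the archive file.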

Linked tables also have other advantages, such as allowing you to keep multiple versions of a data subset and to select a particular set by way of the Linked Table Manager.

Compacting the database reclaims space otherwise lost as various CRUD operations (Insert, Delete, Update...) fragment the storage. It also regroups tables and indexes, making searches more efficient. This is done with

  'Tools' menu + 'Database Utilities' + 'Compact and Repair Database...'
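Compacting can also be scripted, which helps if you want to run it on a schedule from a second Access instance or a standalone VBScript. A minimal sketch, assuming the target database is closed (paths are examples; `DBEngine.CompactDatabase` cannot compact the database it is running from):

```vba
' Compact a closed MDB into a new file, then swap the new file in.
Dim src As String, dst As String
src = "C:\Data\BigDatabase.mdb"           ' example path
dst = "C:\Data\BigDatabase_compact.mdb"   ' temporary output file
DBEngine.CompactDatabase src, dst
Kill src            ' remove the bloated original
Name dst As src     ' rename the compacted copy into place
```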
mjv
I have tried compacting the db, it made no diff :(
aZn137
@aZn137 The lack of size reduction after compacting the db may either be the result of an overly fragmented table (I've seen that) or a normal thing if the CRUD operations during the lifetime of the database have been such that they didn't produce significant amounts of fragmentation. Reloading the data into a brand-new database (and possibly separate ones, cf. linked tables) can be the only way of "fixing" an otherwise "uncompactable" database. Do make copies of the original, if only to protect against typos and other human-caused mishaps, not to mention MS-Access "blow-ups".
mjv
+1  A: 

You should really think about your db architecture. If there aren't any links between the tables, you could try to move some of them to another database (one db per year :) as a short-term solution.

svens
+2  A: 

You're really pushing up against the limits of MS Access there — are you aware that the file can't grow any larger than 2GB?

I presume you've already examined the data for possible space saving through additional normalization? You can "archive" some of the tables for previous months into separate MDB files and then link them (permanently or as needed) to your "current" database (in which case you'd actually be benefiting from what was probably an otherwise bad decision to start new tables for each month).

But, with that amount of data, it's probably time to start planning for moving to a more capacious platform.

Larry Lustig
+1 for good advice, but it's clear from the description that the schema is completely improperly designed -- adding 20 new tables each month is a dead giveaway! While upsizing will avoid the 2GB hard limit of Jet/ACE, it won't fix the underlying problem, which is that the design is simply WRONG.
David-W-Fenton
A: 

Well... listen to @Larry, and keep in mind that, in the long term, you'll have to find another database to hold your data!

But in the short term, I am quite disturbed by this "4 new tables per month" thing. 4 tables per month is 48 per year... That surely sounds strange to every "database manager" here. So please tell us: how many rows, how are they built, what are they for, and why do you have to build new tables every month?

Depending on what you are doing with your data, you could also think about archiving some tables as XML files (or even XLS?). This could make sense for "historic" data that do not have to be accessed through relations, views, etc. One good example would be the phone-call list collected from a PABX. Data can be saved as/loaded from XML/XLS files through ADODB recordsets or the TransferDatabase method.
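As a sketch of the XML route (the table and path names below are illustrative, not from the question), each historic table can be dumped with `Application.ExportXML` and then dropped; note the file won't actually shrink until the next compact:

```vba
' Export one historic monthly table to XML, then delete the table.
' "tblCalls_200901" and the target path are example names.
Application.ExportXML _
    ObjectType:=acExportTable, _
    DataSource:="tblCalls_200901", _
    DataTarget:="C:\Archive\tblCalls_200901.xml"
DoCmd.DeleteObject acTable, "tblCalls_200901"
' Run Compact and Repair afterwards to reclaim the freed space.
```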

Philippe Grondier
I checked, and it's not 4, but ~20+ tables per month. Again, don't ask.
aZn137
Well, each month we import new data from a mass file. That file is then processed into different groups, and each group gets its own table for that month. Some groups have a couple of records, some have tens or hundreds. Each record contains 10-20 fields.
aZn137
The structure you're describing is so wrong as to be worthy of The Daily WTF (http://dailywtf.com). You may not be able to solve the bloating problem without fixing the obviously wrong application design.
David-W-Fenton
@aZn137, according to what you are telling me, one temporary solution could then be to have XML files instead of tables in an Access file. These XML files can be generated depending on the views/data each group of users needs to see. And by the way, I do agree with @David saying you are facing an obvious application-design issue!
Philippe Grondier
A: 

Adding more tables every month is already a questionable practice, and seems suspicious as regards data normalisation.
If you do that, I suspect that your database structure is also sub-optimal regarding field sizes, data types and indexes. I would really start by double-checking those.

If you really have a justification for monthly tables (which I cannot imagine, again), why not have one back-end per month?
You could also have one main back-end with, say, three months of data online, and an archive db to which you transfer your older records.
I use that approach for transactions, with the main table holding about 650,000 records, and Access is very responsive.
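The "transfer older records to an archive" step can be done with a pair of queries, since Access SQL lets you target a table in another MDB with an `IN` clause. A rough sketch, where the table, field and file names are placeholders:

```vba
' Move transactions older than three months into an archive back-end.
' "tblTransactions", "TxnDate" and the archive path are example names.
Dim sql As String
sql = "INSERT INTO tblTransactions IN 'C:\Data\Archive.mdb' " & _
      "SELECT * FROM tblTransactions " & _
      "WHERE TxnDate < DateAdd('m', -3, Date())"
CurrentDb.Execute sql, dbFailOnError
CurrentDb.Execute "DELETE FROM tblTransactions " & _
      "WHERE TxnDate < DateAdd('m', -3, Date())", dbFailOnError
```

Run the insert and the delete inside the same maintenance routine (and compact afterwards), so the main back-end stays small.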

iDevlop
If the db is the one you refer to in your referential integrity question, I am not surprised you have problems. Its design is a mess.
iDevlop
I tried splitting the database; the thing took almost 2 hours, then gave me an error, "max count lock exceeded" or some junk...
aZn137
The error message you got should also give you a Help link that takes you to instructions on how to change a registry key that lets you raise the maximum number of locks.
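As an alternative to the registry edit David mentions, the lock limit can be raised for the current session only, from VBA, just before the bulk operation (the value shown is an arbitrary example):

```vba
' Temporarily raise Jet's MaxLocksPerFile for this session only.
' 200000 is an example value; the setting reverts when Access closes.
DBEngine.SetOption dbMaxLocksPerFile, 200000
```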
David-W-Fenton
For splitting, try this: copy the db to a local drive (and let the users know it is unavailable), then do your stuff in exclusive mode on the local drive. Afterwards, copy it back to the server.
iDevlop
I have tried modifying the registry (to allow 20,000). This time, it just froze. No error message, but no progress either.
aZn137
A: 

A couple of "grasping at straws" ideas:

Look at the data types for each column; you might be able to store some numbers as bytes, saving a small amount per record.

Look at the indexes and get rid of the ones you don't use. On big tables, unnecessary indexes can add a large amount of overhead.

I would +2^64 the suggestions about the database design being a bit odd, but nothing that hasn't already been said, so I won't labour the point.

Kevin Ross