views: 421
answers: 8
I have created an MS Access 2003 application, set up as a split front-end/back-end configuration, with a user group of about five people. The front end .mdb sits on a network file server, and it contains all the queries, forms, reports, and VBA code, plus links to all the tables in the back end .mdb and some links to ODBC data sources like an AS/400. The back end sits on the same network file server, and it just has the table data in it.

This was working well until I "went live" and my handful of users started coming up with enhancement requests, bug reports, etc. I have been rolling out new code by developing/testing in my own copy of the front-end .mdb in another network folder (which is linked to the same back-end .mdb), then posting my completed file in a "come-and-get-it" folder, alerting the users, and they go copy/paste the new front-end file to their own folders on the network. This way, each user can update their front end when they're at a 'stopping point' without having to boot everyone out at once.

I've found that when I'm developing now, sometimes Access becomes extremely slow. Like, when I am developing a form and attempt to click a drop-down on the properties box, the drop-down arrow will push in, but it will take a few seconds before the list of options appears. Or there's tons of lag in selecting & moving controls on a form. Or lots of keyboard lag.

Then, at other times, there's no lag at all.

I'm wondering if it's because I'm linked to the same back end as the other users. I did make a reasonable effort to set up the queries, forms, reports etc. with minimal record locking, if any at all, depending on the need. But I may have missed something, or perhaps there is some other performance issue I need to address.

But I'm wondering if there is an even better way for me to set up my own development back-end .mdb, so I can be testing my code on "safe" data instead of the same live data as the rest of the users. I'm afraid that it's only a matter of time before I corrupt some data, probably at the worst possible moment.

Obviously, I could just set up a separate back-end .mdb and manually reconfigure the table links in the front end every time, using the Linked Table Manager. But I'm hoping there is a more elegant solution than that.

And I'm wondering if there are any other performance issues I should be considering in this multi-user, split database configuration.

EDIT: I should have added that I'm stuck with MS Access (not MS-SQL or any other "real" back end); for more details see my comment to this post.

A: 

There are two rules for developing against live data:

The first rule is . . . never develop against live data. Not ever.

The second rule is . . . never develop against live data. Not ever.

You can programmatically change the bindings for linked tables, so you can write a macro to change your links when you're deploying a new version.

The application is slow because it's MS Access, and it doesn't like many concurrent users (where many is any number > 1).

Binary Worrier
Also put your Dev data mdb in a private folder on your local machine; then if you forget to change the linked table bindings, the new version just won't work
Binary Worrier
Let me get this straight, just so I'm clear. It's not a good idea to develop against live data, right? Shouldn't there be a third rule, like "If you develop against live data...."
Dave Nicks
like ". . . when you wake up and realise what you're doing and shout "Stop", then stop?" Yup, something like that :)
Binary Worrier
I completely agree with the don't-work-against-live-data rule. Indeed, I would change the background colour of the main menu to orange or red just for me. However, I have clients with 25 users happily running against an Access database. In 10 years they've only had 3 or 4 corruptions, with 2 of those because of a flaky Terminal Server/Citrix box. And they never lost data or required a backup to be restored. A few tables have upwards of 600K and 800K records. Yes, it should be upsized. But a Dilbert's PHB got it cancelled. I later found out he got escorted out by security guards.
Tony Toews
+1  A: 

Use VBA to unlink and re-link your tables to the new target when switching from dev to prod. It's been too many years for me to remember the syntax--I just know the function was simple to write.
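A minimal sketch of such a routine using DAO (the procedure name and example path are illustrative, not from the original answer):

```vba
' Re-point every linked Jet table at a different back-end MDB.
Public Sub RelinkTables(ByVal strBackEnd As String)
    Dim db As DAO.Database
    Dim tdf As DAO.TableDef

    Set db = CurrentDb()
    For Each tdf In db.TableDefs
        ' Linked Jet tables have a non-empty Connect string; skip ODBC links
        If Len(tdf.Connect) > 0 And Left$(tdf.Connect, 5) <> "ODBC;" Then
            tdf.Connect = ";DATABASE=" & strBackEnd
            tdf.RefreshLink   ' re-establish the link against the new file
        End If
    Next tdf
End Sub

' Usage, e.g. from the Immediate window:
'   RelinkTables "C:\Dev\BackEnd_dev.mdb"
```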

Or use MS-Access to talk to MS-Access through ODBC, or some other data connection that lives outside of the client mdb.

As with all file-based databases, you will eventually run into problems with peak usage or when you go over a small magical number somewhere between 2 and 30.

Also, Access tends to corrupt frequently, so backup, compact and repair need to be done on a frequent basis. 3rd-party tools used to exist to automate this task.

As far as performance goes, the data is being processed client-side, so you might want to use something like netmeter to watch how much data is going over the wire. The same principles about indexing and avoiding table scans apply to file-based DBs as well.

MatthewMartin
+1 For extra info, minus a gazillion for undoing years of therapy and bring all that stuff back to me . . . the horror . . . oh god the HORROR (weeps piteously)
Binary Worrier
I like the sound of this approach - I will have to do some digging to find out how to do the linking via VBA.
Dave Nicks
Access only tends to corrupt if networks are in poor shape anyhow. Or if on a wireless or WAN connection.
Tony Toews
The worst MS-Access corruption I ever dealt with had to do with using VBA and COM objects. The front end corrupted like crazy even though it was local, hence the common advice to split the back end and the front end.
MatthewMartin
@Tony Toews: I don't know much about Access corruption but I think David W. Fenton does and from his recent posts on SO I've learned that corruption tends to happen when someone leaves Name Autocorrect or Compact On Close turned on, edits MEMO columns' data with bound controls, uses lots of VBA in their frontend file, uses the dot operator (rather than the bang) when referring to a Form control... there are probably more that I haven't recalled. So as a casual observer it seems to me there are many routes to corruption that are non-obvious to a non-expert Access user.
onedaywhen
@onedaywhen Yes, David knows a lot about corruption. But then so do I. The Microsoft Access Corruption FAQ at http://www.granite.ab.ca/access/corruptmdbs.htm is my website. <smile> And I disagree with a lot of his comments. Name Autocorrect can cause problems but generally only for the developer. And I think MS has fixed a lot of the problems it caused. Compact On Close is not a good idea but doesn't cause corruption in and of itself. I've never had problems with editing Memo fields using bound controls. Lots of VBA? Some of my apps have 20K or even 70K lines of code.
Tony Toews
@onedaywhen Continued. I use the dot operator all the time in VBA code. Whenever possible. However, maybe David and I should start a completely new thread and duke it out. <smile>
Tony Toews
As I say elsewhere, causation is a difficult concept. Do you really believe that a welding machine on the other side of a wall is a *cause* of .mdb corruption? I don't see anything on your pages that identifies corruption in a .mdb as being caused by the Access database engine itself, so I think your repository is incomplete to say the least.
onedaywhen
I appreciate the humour, but could you rather challenge David in comments on SO when he makes these misstatements, please? I think this would have more impact.
onedaywhen
I'm seeing these comments long after they were posted, but there are a number of misstatements by @onedaywhen that need to be corrected. Name Autocorrect does not corrupt DATA, but it can easily lead to corruption of your VBA project. As I say below, I have never said that Compact on Close CAUSES corruption, nor that lots of VBA leads to corruption. These are all invented by @onedaywhen out of whole cloth, or due to his lack of comprehension of whatever it was I was saying at the time he developed these misapprehensions. I've dealt with other of his assertions in comments on another answer.
David-W-Fenton
In regard to the welding machine: electrical interference can cause unreliable network connections. It is the connection dropping that causes the corruption, but when that is caused by electrical interference from, say, a welder on the other side of the wall, I don't see much utility in *not* saying that the welding machine's operation caused the corruption. If it hadn't been there, the electrical interference wouldn't have been produced, and then the connection would not have dropped and the corruption would not have occurred. It's not the direct cause, but it's the ultimate cause.
David-W-Fenton
If the welding machines are the *cause* of the problem, then we should find which manufacturers' welding equipment causes Access databases to corrupt, the models or perhaps individual machines themselves, and see if we can get the Access team to 'patch' the welding machines to solve the problem?
onedaywhen
A: 

You might also find some of the answers to this question (how to extract schemas from Access) to be useful. Once you've extracted a schema using one of the techniques suggested, you gain a whole range of new options, like the ability to use source control on the schemas, as well as being able to easily build "clean" testing environments.

Edit to respond to comment: There's no easy way to source control an Access database in its native format, but schema files are just text files like any other. Hence, you can check them in and out of the source control software of your choice for easy version control/rollbacks.

Of course, it relies on you having a series of scripts set up to rebuild your database from the schema. Once you do, it's normally fairly trivial to create an option/alternative version that rebuilds it in a different location, allowing you to build test environments from any previous committed version of the schema. I hope that clarifies a bit!

mavnn
Dave Nicks
Added a bit more to hopefully explain my reasoning a bit better.
mavnn
Dave Nicks
Thanks for posting that link. I've added the solution I've been using to that thread as well.
Tony Toews
A: 

You need to understand that a shared mdb file for the data is not a robust solution. Microsoft would suggest that SQL Server or some other server based database would be a far better solution and would allow you to use the same access front end. The migration wizard would help you make the changeover if you wanted to go that way.

As another user pointed out, corruption will occur. It is simply a question of how often, not if.

To understand the performance issues, you need to understand that to the server, the mdb file with the data in it is simply that: a file. Since no code runs on the server, the server does not understand transactions, record locking, etc. It simply knows that there is a file that a bunch of people are trying to read and write simultaneously.

With a database system such as SQL Server, Oracle, DB2, MySQL, etc., the database program runs on the server and looks to the server like a single program accessing the database file. It is the database program (running on the server) that handles record locking, transactions, concurrency, logging, data backup/recovery and all the other nice things one wants from a database.

Since a database program designed to run on the server is designed to do that and only that, it can do it far better and more efficiently than a program like Access reading and writing a shared file (mdb).

JonnyBoats
Thanks for the insight on the back end. I wish I had a real DB I could run against. If there was a decent, open-source back end that could sit on a file server (not a dedicated db server, to which I don't have access rights) and serve a half-dozen users, maybe it would be worth looking into.
Dave Nicks
For a 5-user app, a properly-designed Access app will be fine, and there won't be any corruption, EVER, unless there are flakey components in your operating environment (hardware and/or software).
David-W-Fenton
...unless they leave Name Autocorrect or Compact On Close turned on, edit MEMO columns' data with bound controls, use lots of VBA in their frontend file, use the dot operator (rather than the bang) when referring to a Form control... and those are just a few of the corruption scenarios I recently learned from David W. Fenton ;-)
onedaywhen
I'm coming late to this (don't know how I could have missed these misleading comments), but I've never claimed that Compact on Close *CAUSES* corruption, only that it could cause you to lose data *if* it triggers when your database is corrupted. I have never said that editing Memo fields with bound controls CAUSES corruption, only that memo pointers are one of the most volatile types of data in Jet tables, and editing them unbound is safer *if* some condition occurs that could potentially corrupt them.
David-W-Fenton
I have never said at any point that lots of VBA leads to corruption, though if you don't take care of your VBA code by following the principles I outlined in my answer in this thread you are more prone to VBA project corruption.
David-W-Fenton
As to the dot vs. bang, there have been reports of repeated code corruption with the dot that disappears when the code is switched to use the bang. If you consider that with the dot operator Access is creating an implicit property wrapper around your control, a property that you have no control over and that can get out of synch with your actual control, you might hesitate -- do you want to depend on code you can't control functioning reliably 100% of the time? I don't consider better Intellisense to be worth that risk (I also don't think controls should be treated like they are properties).
David-W-Fenton
Seems you want to have things both ways. For example, if Compact on Close only causes data loss (a form of data corruption, no?) in a very narrow set of circumstances, why then waste your time recommending against it? That said, I offer sincere apologies if I've wrongly attributed things to you. My assertions were based on things I read that you wrote on stackoverflow, too long ago to find now. Please be aware that if I have misinterpreted what you have written then it was a genuine mistake and I was not motivated to mislead.
onedaywhen
+1  A: 

Many good suggestions from other people. Here's my 2 millicents' worth. My backend data is on a server, accessed through a drive mapping; in my case, the Y drive. Production users get the mapping through a login script using Active Directory. Then the following scenarios are easily done by batch file:

  • Develop against the local computer by doing a subst command in a batch file
  • Run reports against last night's data by pointing Y to the backup server (read only)
  • Run reports against end-of-month data by pointing to the right directory
  • Test against specialized scenarios by keeping a special directory
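For example, the dev-machine case above can be a one-line batch file (the folder path is illustrative):

```bat
:: dev-data.bat -- point Y: at a local copy of the back end instead of the server.
:: Run "subst Y: /D" first if Y: is already mapped by the login script.
subst Y: C:\Dev\AccessData
```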

In my environment (average 5 simultaneous users; 1000's of rows, not 10,000's), corruption has occurred, but it's rare and manageable. Only once in the last several years have we resorted to the previous day's backup. We use SQL Server for our higher volume stuff, but it's not as convenient to develop against, probably because we don't have a SQL admin on site.

Knox
I'm too far out of the IS group to get this one. The back end is on a mapped drive, although I'm using UNC naming for the links because it felt more robust (like, I was even able to copy the front end to my home PC and pull up data via VPN without mapping drives). But I don't know that I have rights to set up user scripts, unless you're talking a .BAT file or something. Plus it's getting out of my league.
Dave Nicks
if you wanted it to work this way, then set up the back end onto the mapped drive, rather than the UNC. This doesn't change the IT environment at all. Then for your own development, use a bat file only on your machine to map the normal letter drive using a SUBST command to a folder on your computer. This can be done with a BAT file. The SUBST command adds a letter drive, but to your own harddisk. Very easy and doesn't involve changing any rights, logins, scripts, or anything else. Doesn't need your server admin to do anything. Good luck in any case.
Knox
A few of my clients have had 600K and 800K rows in some tables. With years going between problems.
Tony Toews
And all of the swans I have ever observed have been white.
onedaywhen
+8  A: 

If all your users are sharing the front end, that's THE WRONG CONFIGURATION.

Each user should have an individual copy of the front end. Sharing a front end is guaranteed to lead to frequent corruption of the shared front end, as well as odd corruptions of forms and modules in the front end.

It's not clear to me how you could be developing in the same copy of the front end that the end users are using, since starting with A2000, that is prohibited (because of the "monolithic save model," where the entire VBA project is stored in a single BLOB field in a single record in one of the system tables).

I really don't think the problems are caused by using the production data (though it's likely not a good idea to develop against production data, as others have said). I think they are caused by poor coding practices and lack of maintenance of your front end code.

  1. turn off COMPILE ON DEMAND in the VBE options.

  2. make sure you require OPTION EXPLICIT.

  3. compile your code frequently, after every few lines of code -- to make this easy, add the COMPILE button to your VBE toolbar (while I'm at it, I also add the CALL STACK button).

  4. periodically make a backup of your front end and decompile and recompile the code. This is accomplished by launching Access with the /decompile switch, opening your front end, closing Access, opening your front end with Access (with the SHIFT key held down to bypass the startup code), then compacting the decompiled front end (with the SHIFT key held down), then compiling the whole project and compacting one last time. You should do this before any major code release.
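As a sketch, the decompile launch in step 4 looks like this from a command prompt (the Office install path and MDB path are illustrative; this layout is typical for Access 2003):

```bat
:: Launch Access with the undocumented /decompile switch, then follow the
:: open-with-SHIFT / compact / compile / compact sequence described in step 4.
"C:\Program Files\Microsoft Office\OFFICE11\MSACCESS.EXE" /decompile "C:\Dev\FrontEnd.mdb"
```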

A few other thoughts:

  1. you don't say if it's a Windows server. Linux servers accessed over SAMBA have exhibited problems in the past (though some people swear by them and say they're vastly faster than Windows servers), and historically Novell servers have needed to have settings tweaked to enable Jet files to be reliably edited. There are also some settings (like OPLOCKS) that can be adjusted on a Windows server to make things work better.

  2. store your Jet MDBs in shares with short paths. \\Server\Data\MyProject\MyReallyLongFolderName\Access\Databases\ is going to be much slower reading data than \\Server\Databases. This really makes a huge difference.

  3. linked tables store metadata that can become outdated. There are two easy steps and one drastic one to be taken to fix it. First, compact the back end, and then compact the front end. Those are the easy ones. If that doesn't help, completely delete the links and recreate them from scratch.

  4. you might also consider distributing an MDE to your end users instead of an MDB, as it cannot uncompile (which an MDB can).

  5. see Tony Toews's Performance FAQ for other generalized performance information.

David-W-Fenton
Wow, I think I just sat down to a four-course meal on making my Access application more efficient. To clarify, we are not all sharing the same front end file. I work on a front end in a dev folder, then launch it by copy/paste to a release folder, and each user copies/pastes it to their own file from which they work. This allows each user to migrate to new versions on their own timing. It's a Windows file server. The path to the back end is 67 chars long; I may be able to shorten it to 22. And I'll certainly look into the other suggestions. Thanks again!!
Dave Nicks
+1. Nice post.......
Mitch Wheat
+2  A: 

1) Relink Access tables from code http://www.mvps.org/access/tables/tbl0009.htm

Once I'm ready to publish a new MDE to the users I relink the tables, make the MDE and copy the MDE to the server.

2) I specifically created the free Auto FE Updater utility so that I could make changes to the FE MDE as often as I wanted and be quite confident that the next time someone went to run the app it would pull in the latest version. For more info, see the Auto FE Updater utility at http://www.granite.ab.ca/access/autofe.htm on my website; it keeps the FE on each PC up to date.

3) Now when working on site at a client's, I make the updates to the table structure after hours when everyone is out of the system. See HOW TO: Detect User Idle Time or Inactivity in Access 2000 (Q210297) at http://support.microsoft.com/?kbid=210297 and ACC: How to Detect User Idle Time or Inactivity (Q128814) at http://support.microsoft.com/?kbid=128814

However we found that the code which runs on the timer event must be disabled for the programmers. Otherwise weird things start happening when you're editing code.

Also print preview would sometimes not allow the users to run a menu item to export the report to Excel or others. So you had to right click on the Previewed report to get some type of internal focus back on the report so they could then export it. This was also helped by extending the timer to five minutes.

The downside to extending the timer to five minutes was that if a person stays in the same form and at the same control for considerable parts of the day, i.e. someone doing the same inquiries, the routine didn't realize that they had actually done something. I'll be putting in some logic sometime to reset this timer whenever they do something in the program.

4) In reference to another person's comment about scripts and such to update the schema, see Compare'Em at http://home.gci.net/~mike-noel/CompareEM-LITE/CompareEM.htm. While it has its quirks, it does create the VBA code to update tables, fields, indexes and relationships.

Tony Toews
Dave Nicks
A: 

If you want to update the back end MDB schema automatically when you release a new FE to the clients, then Compare'Em at http://home.gci.net/~mike-noel/CompareEM-LITE/CompareEM.htm will happily generate the VBA code needed to recreate an MDB, or the code to create the differences between two MDBs so you can do a version upgrade of the already existing BE MDB. It's a bit quirky but works.

I use it all the time.

Tony Toews
"It's a bit quirky but works" -- I'd be interested to learn how it handles a CHECK constraint whose definition exceeds 255 characters. The INFORMATION SCHEMA accessed via OLE DB always curtails this, ADOX doesn't recognize a CHECK as a constraint, DAO has was crippled in the Jet 4.0 era, etc. Speaking of which, does it support the Jet 4.0 feature set: WITH COMP text data types, BINARY data type?
onedaywhen
No idea. I always do my BE MDB updates using the tool's DAO collection VBA code.
Tony Toews
Ah, Jet 3.51 Access95 stuff. Never mind, I was wondering whether it had been updated for more recent features.
onedaywhen
And creating tables and fields using DAO and collections has been around since at least Access 2.0. What more recent features do you mean? You don't have anywhere near the feature set using ADOX or DDL.
Tony Toews
(yawn) Here we go again: for Jet 4.0 tables and fields SQL DDL + ADOX can create everything DAO can plus much that DAO cannot e.g. CHECK constraints, NO INDEX FKs, scale and precision for DECIMAL, fixed width text data type NCHAR(), WITH COMPRESSION text data types, BINARY data type, default values for PROCEDURE parameters, explicit seed and increment for IDENTITY autonumber, ... then there's ACE... really, I'd hope an Access MVP would know this stuff, available now for a decade. There are a few things DAO can do exclusively but not in the area of tables and columns.
onedaywhen
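To illustrate the point being made here, this is a sketch of Jet 4.0 DDL run through an ADO connection (the table, path and constraint names are hypothetical); these constructs are not available through DAO's collections or the default Access query UI:

```vba
' Jet 4.0-only DDL: IDENTITY(seed, increment), DECIMAL(precision, scale),
' and a CHECK constraint -- all require execution via an ADO/OLE DB connection.
Dim cn As Object
Dim strSql As String

Set cn = CreateObject("ADODB.Connection")
cn.Open "Provider=Microsoft.Jet.OLEDB.4.0;" & _
        "Data Source=C:\Dev\BackEnd_dev.mdb"

strSql = "CREATE TABLE Orders (" & _
         "OrderID IDENTITY(1000, 1) PRIMARY KEY, " & _
         "Qty DECIMAL(10, 2) NOT NULL, " & _
         "CONSTRAINT chkQty CHECK (Qty > 0))"
cn.Execute strSql
cn.Close
```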
Is there some more sarcasm here?
Tony Toews
No sarcasm in this one. I seem to have to constantly remind Access MVPs about the Jet 4.0 feature set and it gets boring. Could you put this on the Access MVPs site please? or perhaps your own? Then I could link to that (yes, I am serious) rather than list them out each time.
onedaywhen
If you think there are some things in the area of tables and columns that DAO can exclusively create then let me know and I'll see if they can be created using SQL DDL + ADOX. This is not sarcasm but a challenge that we both might enjoy and learn something from. (P.S. sincere thanks for picking me up on the sarcasm elsewhere, I've changed it to something I hope is more professional yet challenging for you i.e. how can Access MVPs influence the enhancement of the engine towards features that regular/power users of Access will want to use. Thanks for listening.)
onedaywhen