Has anyone tried including Visual FoxPro databases (version 7) in SVN? What were the advantages/disadvantages of including them? What is the best approach to handling a VFP database in SCM when there are rows that need to be included in source control?

+2  A: 

Whilst I haven't used SVN, I have used VFP with both VSS and Vault. With both of these, I manually add files to source control, rather than trying to use some form of integration within the Dev environment.

There are basically two ways you could approach this:

  1. Just manually add the .DBC, .DCT, .DCX and all of the .DBF, .FPT and .CDX files.
  2. Write a script from the database to create the structure (I use a modified version of GenDBCX), and script the creation of any data records you want to preserve in a program or class (a rough sketch of the data-record part follows below).
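
For the data-record half of option 2, a minimal sketch along these lines can work; the table and field names (status_codes, code, descrip) are placeholders, and GenDBCX already covers the structure half:

    * Hedged sketch: dump a small lookup table's rows as INSERT statements
    * so the records can live in source control as diffable text.
    LOCAL lcOut
    lcOut = ""
    USE status_codes IN 0 ALIAS codes
    SELECT codes
    SCAN
        lcOut = lcOut + [INSERT INTO status_codes (code, descrip) VALUES (] ;
            + ["] + ALLTRIM(codes.code) + [", "] + ALLTRIM(codes.descrip) + [")] ;
            + CHR(13) + CHR(10)
    ENDSCAN
    USE IN codes
    STRTOFILE(lcOut, "status_codes_data.prg")

Running the resulting .prg against an empty structure rebuilds the preserved rows, and the text file itself diffs cleanly between commits.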
kevinw
+4  A: 

Christof Wollenhaupt has a tool called "TwoFox" that does a good job converting DBCs and other Fox source files to XML -- the article describing it is http://www.foxpert.com/docs/cvs.en.htm. If you're just asking about dropping the DBF files into SVN, though, you can either import them as binary files and lose the ability to compare/merge between versions, or use CURSORTOXML (introduced in VFP 7, so it's available to you) to convert the DBFs to XML before checking them in.
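
If you go the CURSORTOXML route, a minimal sketch looks something like this (the table and output file names are only examples; you would wrap this in whatever pre-commit step suits you):

    * Hedged sketch: write a table out as element-centric XML so SVN can
    * diff and merge it as text. "customers" is a placeholder table name.
    USE customers IN 0 ALIAS customers
    SELECT customers
    * 1 = element-centric output, 512 = treat the second argument as a file name
    CURSORTOXML("customers", "customers.xml", 1, 512)
    USE IN customers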

SarekOfVulcan
+1  A: 

My setup:



  • Debian on a P4 workstation, running:
    • Subversion via Apache2
    • Trac with hooks into Subversion
    • regular nightly backups of both Subversion and the Trac database

Frankly, I don't check in the multi-megabyte databases that we have, because the repository would bloat to 20+ GB from that alone. We regularly have 1.6 GB tables (plus their memo and index files), and it's just not worth the hours wasted waiting for a 1-hour-plus commit of 20 GB of table changes. Instead, we clone the data from our production system and use that to "refresh" things, rebuilding our database container so it has fresh links to the tables. The "refresh" process is done about once a month and takes far less time, usually 40 minutes; contrast that with wasting hours every day.
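
For what it's worth, the re-linking part of a refresh can be scripted with a few commands; this is only a rough sketch with made-up names (devdata, customers), and the exact sequence depends on how the backlinks from the production DBC are handled:

    * Hedged sketch: point the dev database container at a freshly copied table.
    OPEN DATABASE devdata EXCLUSIVE
    REMOVE TABLE customers   && drop the stale reference, keep the file on disk
    FREE TABLE customers     && clear the backlink left by the source DBC
    ADD TABLE customers      && re-attach the refreshed table
    CLOSE DATABASES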

I haven't had a need to check data into the repository, even once. Schema management has been simplified by following a single rule for the time being: only refresh data after all patches meant for production have been pushed to production, which means the schema in both environments stays consistent. For now I can get away with this, although that will have to change in the future...

If you just need schema changes

If you find that you need to check in tables because you are trying to capture their schema rather than the data they contain, you might want to write a small tool that pumps the schema out into a text file and commit that to the repo, instead of shipping the kitchen sink off to be digested.
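
Such a tool doesn't have to be elaborate; here's a hedged sketch (the database name is a placeholder, and TwoFox/GenDBCX mentioned above do a far more complete job) that walks the DBC and writes every table's field list to a text file:

    * Hedged sketch: dump the field list of every table in a DBC to text
    * so the schema, not the data, is what gets committed.
    LOCAL lnTables, lnFields, lnT, lnF, lcOut
    LOCAL ARRAY laTables[1], laFields[1]
    lcOut = ""
    OPEN DATABASE devdata            && placeholder database name
    lnTables = ADBOBJECTS(laTables, "TABLE")
    FOR lnT = 1 TO lnTables
        USE (laTables[lnT]) IN 0 ALIAS curTable
        lnFields = AFIELDS(laFields, "curTable")
        lcOut = lcOut + laTables[lnT] + CHR(13) + CHR(10)
        FOR lnF = 1 TO lnFields
            lcOut = lcOut + "  " + laFields[lnF, 1] + " " + laFields[lnF, 2] ;
                + "(" + TRANSFORM(laFields[lnF, 3]) + "," + TRANSFORM(laFields[lnF, 4]) + ")" ;
                + CHR(13) + CHR(10)
        ENDFOR
        USE IN curTable
    ENDFOR
    STRTOFILE(lcOut, "schema.txt")
    CLOSE DATABASES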

If you absolutely need to check in data

If there is data in the tables that is critical to controlling program flow (data-as-code, not just data that the program processes), consider trimming the data down to the bare minimum required and checking in only the resulting stub tables, adding them manually to the repo. While Subversion will handle binary objects, you'll want to keep them small and commit them as rarely as possible so your repo doesn't bog down. Be sure to check in the individual tables you're after, and not just *.dbf wholesale, or you may be in for a rude shock when someone pushes several gigabytes of data into your repo because the working copy doesn't mask out all the tables.
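
Producing the stub can be as simple as copying out only the control rows; in this sketch the table, field, and output names (order_status, is_seed_row, stub_order_status) are invented, so adjust the FOR filter to whatever marks your data-as-code rows:

    * Hedged sketch: copy just the rows the program logic depends on into a
    * small stub table, and check in the stub rather than the full table.
    USE order_status IN 0 ALIAS fullTable
    SELECT fullTable
    COPY TO stub_order_status FOR is_seed_row WITH CDX
    USE IN fullTable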

Avery Payne