views:

308

answers:

6

I know there are posts asking how one stores third-party libraries in source control (such as this and this). While those have great answers, I still can't find the answer to this:

How do you store the binaries of third-party middleware/frameworks that need to alter your compiler / IDE for the library to work properly? Note: for my needs, I don't need to store the middleware source; I only store the header files / libs / JARs, so that everything is ready to be linked.

Typically, you simply link libraries to your app, and you are good. But what about middleware / frameworks that need more?

Specific examples:

  • Qt moc pre-processor.

  • ZeroC Ice Slice (.ice) compiler (similar to a CORBA IDL preprocessor).

Basically, these frameworks/middleware need to generate their own code before your application can link against them.
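
To make the extra step concrete, here is a rough sketch (file names and flags are illustrative only) of what has to happen before the ordinary compile/link can start:

    import subprocess

    def generate_sources():
        """Illustrative code-generation step these tools add before compilation."""
        # Qt: moc turns a Q_OBJECT header into a .cpp that must also be compiled.
        subprocess.check_call(["moc", "widget.h", "-o", "moc_widget.cpp"])
        # ZeroC Ice: slice2cpp turns a .ice interface into Hello.h / Hello.cpp.
        subprocess.check_call(["slice2cpp", "Hello.ice"])

    if __name__ == "__main__":
        generate_sources()  # only then can the normal build link against the generated code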

From the developer's point of view, ideally he just wants to check out and have everything ready to go. But the IDE/compiler will not be set up properly yet, so the compilation will fail.

What do you think?

+3  A: 

Back up everything, including the setup of the IDE, operating system, etc. This is what I do:

1) Store all 3rd party libraries in source control. I have a branch for all the libraries.

2) Back up the entire toolchain that was used to build, including every tool. Each tool is installed into the same directory on each developer's computer, which makes it simple to set up a developer's machine remotely.

3) This is the most hardcore: prepare one clean, perfect developer IDE setup, then make a VMware / VirtualPC image out of it. This will be useful when you can't seem to get the installers to work in the future.

I learned this lesson the painful way, because I often have to wade through Visual Studio 6 code that doesn't build properly.

Andrew Keith
I like your VirtualPC/VMware idea. I've been thinking about doing something like that to preserve the state of the build too. Thanks for the idea.
ShaChris23
+1 for the VMware image concept.
Bob Aman
+1  A: 

What about adding one step?

A NAnt script that is started from a .bat file. The developer would only have to execute one .bat file; the .bat file starts NAnt, and the NAnt script can be made to do anything you need.
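
The idea is that the .bat file is the single entry point and the script does the rest. As a rough sketch of that bootstrap step (shown in Python rather than NAnt, with purely hypothetical tool names and paths):

    import subprocess
    import sys

    # Each step is one command the bootstrap runs in order; adjust to taste.
    STEPS = [
        ["python", "tools/generate_middleware_code.py"],  # e.g. run moc / Slice compilers
        ["msbuild", "MyApp.sln", "/p:Configuration=Release"],
        ["python", "tools/run_unit_tests.py"],
    ]

    for step in STEPS:
        print("running:", " ".join(step))
        if subprocess.call(step) != 0:
            sys.exit("bootstrap failed at: " + " ".join(step))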

KyleLanser
Can a NAnt script help install Visual Studio add-ons like the Qt plugin, and automatically click the Next button for me?
ShaChris23
+1  A: 

This is actually a pretty subtle question. You're talking about how to manage features of the environment which are necessary in order to allow your build to proceed. In this case it's the top level of your code toolchain, but the problem can be generalised to include the entire toolchain, and even key aspects of the operating system.

In my place of work, we have various requirements of the underlying operating system before our code will successfully run. This includes machine-specific configurations as well as ensuring correct versions of system libraries and language runtimes are present. We've dealt with this by maintaining a standard generic build machine image which contains the toolchain requirements we need. We can push this out to a virgin machine and get a basic environment that contains the complete toolchain and any auxiliary programs.

We then use fsvs to version control any additional configuration, which can be layered on to specific groups of machines as needed.

Finally, we use custom scripts hooked in to our CI server (we use Hudson) to perform any pre-processing steps required for specific projects.
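
As a rough illustration of such a project-specific pre-processing hook (the paths and the choice of slice2cpp here are only an example, not our actual script), a Hudson job might run something like:

    import glob
    import os
    import subprocess

    def preprocess():
        """Regenerate middleware sources so a fresh checkout builds from scratch."""
        os.makedirs("generated", exist_ok=True)
        # Example: regenerate ZeroC Ice stubs for every Slice definition in the project.
        for ice_file in glob.glob("slice/*.ice"):
            subprocess.check_call(["slice2cpp", "--output-dir", "generated", ice_file])

    if __name__ == "__main__":
        preprocess()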

The main advantages of this approach for us are:

  1. We can build and deploy developer and production machines very easily (and have IT handle this side of the problem).
  2. We can easily replace failed machines.
  3. We have a known environment for testing (we install everything to a simulated 'production server' before going live).
  4. We (the software team) version control critical configuration details and any explicit pre-processing steps.
ire_and_curses
Hey, thanks, this is helpful.
ShaChris23
How do you "push this out to a virgin machine"? Is it simply deploying the image onto a newly minted machine?
ShaChris23
Yes. We're working on providing a minimal kernel that can be run off a CD or USB stick and automatically pulls down and installs the image to a new machine from the network, but we haven't moved to this model yet.
ire_and_curses
I have this idea; I don't know if it's too crazy. I was thinking of creating a script/program for each software release that will, on each checkout, automatically check the machine configuration (OS, compiler, IDE, IDE add-ons, etc.) and uninstall/install the appropriate tools. It would have to involve GUI automation as well (to complete some third-party installers like Qt's). Have you ever thought of this, or am I making it too complicated? The thing about my application is that it's a high-performance system, so we cannot just image a disk and run under VMware, because then it would be too slow.
ShaChris23
By the way, thanks for mentioning fsvs; I ought to look into that one.
ShaChris23
@ShaChris23: It's possible, but from scratch this is complicated. Automating steps like checking the machine configuration is quite a lot of work. The more general your machine configs, the more work. If you can get away with machine-specific configs, then this is a much simpler solution. GUI installer automation sounds like a nightmare to me. Personally, I would stay away from that if at all possible. It may be easily doable, but I have no experience in that respect. You can also look at leveraging a package management system like RPM or APT if you are going to have complicated dependencies.
ire_and_curses
It's not particularly machine-specific; it's more software-project-specific. By that I mean: for software project A to compile, it might require libraries/frameworks B, C, and D, while another software project K might require libraries/frameworks H, F, and C to compile.
ShaChris23
Thanks for mentioning RPM and APT.
ShaChris23
+2  A: 

I think a better solution is to make sure the build is self-contained and downloads all necessary software for itself unless you tell it otherwise. This is the way Maven works, and it is really handy. The downside is that it sometimes needs to download an application server or similar, which is highly impractical, but at least the build succeeds, and it becomes the new developer's responsibility to improve the build if needed.

This of course does not work well if your software needs attended installs, but I would try to avoid any such dependencies in any case. You can add alternative routes (e.g. the Ant script compiles the code if Eclipse hasn't done it yet). If this is not feasible, an alternative is to fail with a clear indication of what went wrong (e.g. "'CORBA_COMPILER_HOME' not set, please set it and try again").
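
A minimal sketch of the "fail with a clear indication" option (the variable name comes from the example above; everything else is made up):

    import os
    import sys

    def require_env(name, hint):
        """Abort the build with an actionable message if a required setting is missing."""
        value = os.environ.get(name)
        if not value:
            sys.exit("'%s' not set, please set it and try again (%s)" % (name, hint))
        return value

    corba_home = require_env("CORBA_COMPILER_HOME",
                             "path to the IDL compiler installation")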

All that said, the most complete solution is of course to ship everything with your app (i.e. OS, IDE, the works), but I doubt that is applicable in the general case; how would you feel about that kind of requirement for building a software product? It also limits people who want to adapt your software to new platforms.

disown
Thanks for your answer. It certainly gave me some helpful insights to think about.
ShaChris23
+1  A: 

Update: This doesn't really answer how to modify the IDE. It's just a sort of Maven replacement for C++/Python/Java. You shouldn't need to modify the IDE to build things; if you do, you need a different IDE, or a system that generates/modifies the IDE files for you. (See CMake for a cross-platform C/C++ project file generator.)


I've written a system (first in Ant/BeanShell at two different places, then rewritten in Python at my current job) where third-party packages are compiled separately (by someone), then stored and shared via HTTP.

Somewhat hurried description follows:

On startup, the build system looks through all modules in the repo and executes each module's setup target, which downloads the specific version of the third-party lib or app that the current code revision uses. These are then unzipped, PATH/INCLUDE etc. are extended (or, for small libs, the files are copied to a single directory for the current repo), and Visual Studio is launched with /useenv.
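
A condensed, hypothetical sketch of that flow (the real system is more elaborate; the module name, URL and directory layout here are invented, and the real thing uses wget/7z rather than Python's standard library):

    import os
    import subprocess
    import urllib.request
    import zipfile

    # %work% from the description below; holds the shared cache and unzip dirs.
    WORK = os.environ["WORK"]
    CACHE = os.path.join(WORK, "_cache")
    UNZIP = os.path.join(WORK, "_unzip")

    def setup(module, version, base_url="http://buildserver/thirdparty"):
        """Download and unpack one third-party module for the current revision."""
        os.makedirs(CACHE, exist_ok=True)
        os.makedirs(UNZIP, exist_ok=True)
        archive = "%s-%s.zip" % (module, version)
        cached = os.path.join(CACHE, archive)
        if not os.path.exists(cached):
            urllib.request.urlretrieve("%s/%s" % (base_url, archive), cached)
        target = os.path.join(UNZIP, "%s-%s" % (module, version))
        if not os.path.exists(target):
            with zipfile.ZipFile(cached) as z:
                z.extractall(target)
        # Extend the environment so the compiler and IDE can find the package.
        os.environ["INCLUDE"] = os.path.join(target, "include") + ";" + os.environ.get("INCLUDE", "")
        os.environ["PATH"] = os.path.join(target, "bin") + ";" + os.environ.get("PATH", "")

    setup("boost", "1.37")
    # /useenv makes Visual Studio pick up PATH/INCLUDE/LIB from this environment.
    subprocess.call(["devenv", "MyApp.sln", "/useenv"])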

Each module's file checks for the things it needs. If something needs installing and licensing, such as Visual Studio, Matlab or Maya, it must already be on the local computer; if it's not there, the cmd file will fail with a nice error message. This way, you can also check that the correct version is in there.

So there are a number of directories on the local disk involved. %work% needs to be set as a global environment variable, preferably pointing to a different disk than the system or the source checkout, at least if you're doing heavy C++.

  • %work% <- local store for all temp files, unzips, and each working copy's temp files
  • %work%/_cache <- downloaded zips (2 GB)
  • %work%/_local <- local zips (for development, or retrieved by other means while travelling)
  • %work%/_unzip <- unzips of files in _cache (10 GB)
  • %work%/_content <- textures/3D models and other big files (synchronized manually; this is 5 GB today, not suitable for version control either)
  • %work%/D_trunk/ <- store for the working copy checked out to d:/trunk
  • %work%/E_branches/v2 <- store for the working copy checked out to e:/branches/v2

So, if trunk uses Boost 1.37 and branches/v2 uses 1.39, both boost-1.39 and boost-1.37 reside in /_cache/ (as zips) and /_unzip/ (as raw files).

When starting Visual Studio via d:/trunk/BuildSystem/Visual Studio.cmd, INCLUDE points to /_unzip/boost-1.37, while when running e:/branches/v2/BuildSystem/Visual Studio.cmd, INCLUDE points to /_unzip/boost-1.39.
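
To illustrate how the same cache serves two checkouts with different Boost versions (a guess at the mechanism, not the actual code), the per-checkout launcher only has to point INCLUDE at a different unzip directory:

    import os

    WORK = os.environ["WORK"]

    # Each checkout declares the third-party versions it needs (in the real
    # system this lives in per-module config files inside the repo).
    CHECKOUTS = {
        "d:/trunk":       {"boost": "1.37"},
        "e:/branches/v2": {"boost": "1.39"},
    }

    def include_paths(checkout_root):
        wanted = CHECKOUTS[checkout_root]
        return [os.path.join(WORK, "_unzip", "%s-%s" % (lib, ver), "include")
                for lib, ver in wanted.items()]

    print(include_paths("d:/trunk"))        # .../_unzip/boost-1.37/include
    print(include_paths("e:/branches/v2"))  # .../_unzip/boost-1.39/include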

In the repo, only a small set of bootstrap binaries needs to be stored (i.e. wget and 7z).

We currently download about 2 GB of packed data, which unzips to 10 GB (PDB files are huge!), so keeping this out of source control is essential. Having this system allows us to keep the repo small enough to use a DVCS such as Mercurial (or Git) instead of SVN, which is very nice. (I'm thinking of using Mercurial's bigfiles extension, or file sharing, instead of a separately HTTP-served directory.)

It works flawlessly. Developers only need to check out, set an environment variable for their local cache, then run Visual Studio via a specific batch file in the repo. No unzipping, compiling, or anything. A new developer can set up his computer in no time. (Installing Visual Studio takes an order of magnitude more time.)

The first time on a new computer takes a while, but after that it's fast, only a few seconds. Downloads/unzips are shared on the local computer, so checking out additional branches/versions does not occupy more space. Working offline is also possible; you just need to get the zip files manually if new ones have been uploaded. (This mechanism is essential for testing new versions/compilations of third-party libraries.)

The basics are in a repo on Bitbucket, but it needs more work before it's ready for the public. Apart from docs and polish, I plan to:

  • extend it to use cmake instead of raw vcproj-files, to make it more cross-platform.
  • script the entire process from checkout/download of third-party packages to building and zipping them (including storing the download in a local repo) ... currently that's on my dev computer. Not good. Will fix. :)


As for moc, we use Qt's Visual Studio add-in, which stores the moc build rules in the .vcproj files. Works well. I do think that CMake is one of the best answers for this, though.

Marcus Lindblom
Can you elaborate on what you meant by "Qt's Visual Studio add-in, which stores this in the .vcproj files"?
ShaChris23
With the system you have implemented, how do you handle the case where different software in the repository needs different versions of the same library? E.g. software A might need precompiler version 3.0, while software B might need precompiler version 3.5, but the developer wants to check out both A and B on his machine.
ShaChris23
Qt VS add-in: http://qt.nokia.com/downloads/visual-studio-add-in. It is a Visual Studio specific plugin that manages Qt tasks. Specifically, when adding files with Q_OBJECT in them, it automatically creates build-rules to run moc on the header files, and includes the created moc_xxx.cpp files in the project.
Marcus Lindblom
I've added some explanation. Basically, in the repo there is a file for each third-party lib, such as Qt. As development progresses, this file is updated as we upgrade to new releases. For example, to upgrade, I put together a new zip, such as qt-4.5.2-x86-vc90.7z, test it locally and change our source, upload that file to the server, change the Qt module config to use that file, and commit. When other devs update, they need to restart Visual Studio using my bat file, which triggers a download of that zip and repoints PATH/INCLUDE so they get the new moc and new headers/libs to build with.
Marcus Lindblom
Oh, and I check for stuff that needs installing and licensing, such as Visual Studio, Matlab or Maya. If that's not there, the cmd-file will fail with a nice error message. This way, you can also check that the correct version is in there.
Marcus Lindblom
A: 

I would outsource the task of building the middleware to a specialized build server and only include the binary output as regular 3rd-party dependencies under source control.

Whether this strategy can be successfully applied depends on whether all developers need to be able to change the middleware code and recompile it frequently. But that issue could also be solved via a continuous integration server like TeamCity that allows developers to create private builds.

Your build process would look like the following:

  • A middleware repo containing the middleware code
  • A build server that builds the middleware
  • The middleware build output pushed to the project repository as 3rd-party references (a sketch follows)
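
A rough sketch of that last step (pulling the build server's output and vendoring it as a regular 3rd-party reference; the server URL, artifact layout and names are invented):

    import io
    import urllib.request
    import zipfile

    BUILD_SERVER = "http://teamcity.example.com/artifacts"  # hypothetical

    def vendor_middleware(name, version, dest="thirdparty"):
        """Fetch a middleware build artifact and unpack it for committing."""
        url = "%s/%s/%s-%s.zip" % (BUILD_SERVER, name, name, version)
        with urllib.request.urlopen(url) as response:
            archive = zipfile.ZipFile(io.BytesIO(response.read()))
        archive.extractall("%s/%s-%s" % (dest, name, version))

    vendor_middleware("our-middleware", "1.2.0")
    # The extracted headers/libs are then committed as a regular dependency.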
Johannes Rudolph