Where I work we need to rethink the way we develop software and keep track of each released version. Do you have any suggestions for solving our problems?

  1. We develop on Windows in C++ using VS 2005 (and C++ Builder for some interface work).

  2. We use Git, but in the worst way imaginable. We are somewhat open to moving to another source control system.

  3. We have 40+ in-house developed DLLs. Many of them are updated frequently.

  4. We have a few dramatically different projects that depend on those DLLs.

  5. We deliver 100+ systems a year, each of which requires custom configuration. The majority also require custom patches. We try as much as we can to bring those patches back into the main trunk, but forks are inevitable.

  6. If, a few years down the road, we have to update a client's system, we should be able to get back the code used for that release and all the environment parameters. We need a way to validate that this code matches the binaries on the client's system (a sketch of the kind of check we have in mind follows this list). Getting back the code should be as trivial as possible, and, with the possible exception of the compiler, we should have everything needed to compile after a few simple operations.

  7. A programmer should be able to release an update for a client's system without depending on any other programmer, no matter which project (DLL) the patch is in. He should be able to do it rapidly (in less than 30 minutes). That makes the concept of a single official release almost impossible.

  8. Exchanging code between developers working on the same project should be easy and fast.

  9. Considering our huge code base, we want to limit how much a developer has to recompile when he gets a patch (sharing binaries is a must).

  10. A developer should be able to switch from one client's system release or branch to another easily (it's common to have to work on more than one release at the same time).
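
To make requirement 6 concrete, here is a minimal sketch (in Python, with hypothetical paths) of the kind of release manifest we have in mind: hash every shipped DLL and record the hashes against the Git commit the release was built from, so the pairing can be verified years later.

    import hashlib
    import json
    import pathlib
    import subprocess

    def sha256_of(path: pathlib.Path) -> str:
        h = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    def write_release_manifest(release_dir: str, out_file: str) -> None:
        # Pair the binaries with the exact source revision they were built from.
        commit = subprocess.check_output(
            ["git", "rev-parse", "HEAD"], text=True).strip()
        dlls = sorted(pathlib.Path(release_dir).rglob("*.dll"))
        manifest = {"commit": commit,
                    "binaries": {d.name: sha256_of(d) for d in dlls}}
        pathlib.Path(out_file).write_text(json.dumps(manifest, indent=2))

    # Hypothetical paths; adapt to the real build layout:
    # write_release_manifest("build/Release", "release-manifest.json")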

Edit: We don't use makefiles so far, but that's something we are willing to consider. Everything is built using VS solutions.

+3  A: 

Git in itself is perfectly suited to managing a multitude of source code branches. However, maintaining those branches will always rest with the user and lies outside the scope of any version control system.

The only problem with Git is that it does not scale well when tracking compiled binary data over time. Binary data is mostly use-once, and the diff/patch machinery that matters for source code is irrelevant for compiled binaries. Instead, for each source code version in Git, create a .zip file containing a pre-compiled version of each DLL, and put those .zip files on a network share.
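
A publishing step along these lines could file each build under the commit it came from. A minimal sketch; the share path and build directory are assumptions:

    import pathlib
    import subprocess
    import zipfile

    SHARE = pathlib.Path(r"\\fileserver\prebuilt")  # hypothetical network share

    def publish_binaries(build_dir: str) -> pathlib.Path:
        # Key the archive by the commit the DLLs were built from.
        commit = subprocess.check_output(
            ["git", "rev-parse", "HEAD"], text=True).strip()
        archive = SHARE / (commit + ".zip")
        with zipfile.ZipFile(archive, "w", zipfile.ZIP_DEFLATED) as zf:
            for dll in pathlib.Path(build_dir).rglob("*.dll"):
                zf.write(dll, dll.name)
        return archive

A developer checking out commit X then fetches X.zip from the share instead of recompiling everything.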

Once you've done that, it sounds like you should invest time in making your build system efficient. The version control system can help here, but you are probably running into build problems anyway:

  • Your build system should recompile a DLL only when its source has changed or when the interface of a DLL it depends on has changed. How tricky this is depends on the language: C# DLLs have quite a strict interface, which makes this easy, while C has no interface to speak of (just add one #define to a source file, and everything might have to be recompiled). One way to detect an interface change is sketched after this list.
  • Your build system should reuse pre-compiled DLLs, which have to be stored somewhere; preferably not alongside the source code, since Git is not optimized for this.
  • Your build system should cope with branch switches. For example, Visual Studio leaves some files behind and does not always detect correctly when a full rebuild is needed.
  • Your build system might have to use a compiler from a fixed location instead of the compiler-of-the-day installed on each developer's PC. You might want to put the compiler under version control as well, or at least make this dependency explicit.
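
One possible way to detect interface changes on Windows is to hash the export table reported by Visual Studio's dumpbin tool and compare that fingerprint between builds. A rough sketch; the output parsing is a heuristic:

    import hashlib
    import subprocess

    def interface_fingerprint(dll_path: str) -> str:
        # dumpbin ships with Visual Studio; /exports lists exported symbols.
        out = subprocess.check_output(
            ["dumpbin", "/exports", dll_path], text=True)
        # Heuristic parse: keep only the symbol names from the
        # ordinal/hint/RVA/name table, so shifting addresses between
        # builds don't count as interface changes.
        names = sorted(parts[-1] for line in out.splitlines()
                       if (parts := line.split()) and parts[0].isdigit())
        return hashlib.sha256("\n".join(names).encode()).hexdigest()

Rebuild dependents only when the fingerprint recorded at the previous build differs from the current one. Note that for C this only catches changed export names, not changed struct layouts.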

In the end we rolled our own build system, whose speed was limited only by the time it took to stat() all the files involved when nothing had changed. However, it took some time to build. Things to consider when rolling your own:

  • First construct a dependency graph: each DLL depends on its source files and on other DLLs.
  • Use each file's modification time (mtime) as a kind of version stamp. Keep a cache of those mtimes, and treat a changed mtime as a changed file, updating the cache as you go.
  • When a file's mtime has changed, rebuild the DLL it belongs to. After rebuilding, check whether the DLL's interface has changed; if it has, rebuild all the DLLs that depend on it. This takes a graph traversal that processes each DLL only once (a toy version of this loop is sketched after this list).
  • Bonus points for compiling in parallel: since you know the dependency graph, you also know which DLLs can be built in parallel.
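
Putting those pieces together, a toy version of the whole loop might look like the following. A sketch only: the graph is hard-coded, and the rebuild and fingerprint steps are stubs for the real compiler invocation and the export-hash idea above.

    import os

    # Hypothetical two-DLL graph: app.dll links against core.dll.
    deps = {"app.dll": ["core.dll"], "core.dll": []}
    sources = {"app.dll": ["app.cpp"], "core.dll": ["core.cpp"]}
    mtime_cache = {}

    def rebuild(dll):
        pass  # placeholder: invoke the real compiler here

    def interface_fingerprint(dll):
        return ""  # placeholder: e.g. the dumpbin export hash shown earlier

    def source_changed(path):
        # A differing mtime is treated as "this file changed since last build".
        mtime = os.stat(path).st_mtime
        stale = mtime_cache.get(path) != mtime
        mtime_cache[path] = mtime
        return stale

    def build(dll, forced, done):
        if dll in done:
            return  # graph traversal: process each DLL only once
        for dep in deps[dll]:
            build(dep, forced, done)  # dependencies first (post-order)
        if dll in forced or any(source_changed(s) for s in sources[dll]):
            before = interface_fingerprint(dll)
            rebuild(dll)
            if interface_fingerprint(dll) != before:
                # Changed interface: force every DLL that links against us.
                forced.update(d for d, ds in deps.items() if dll in ds)
        done.add(dll)

    # forced, done = set(), set()
    # for dll in deps:
    #     build(dll, forced, done)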

And the good thing is that all of this is independent of the version control system, so the effort is not wasted. A simple approximation might even be enough to meet the under-30-minutes requirement. It depends.

Rutger Nijlunsing
Yes. I edited my question to reflect that this is also part of our problem.
Simon T.