Hi!

I will soon be maintaining a line of products containing variants of the same embedded software. Since I've been playing with git for a year and appreciate it very much, I'm likely to use it for source control.

There are several options I can see for maintaining the variants of the firmware, but none of them pleases me much. What are the best practices you apply in your own work?

Alternatives I can think of:

  • defines. Pre-processing. Pros: everything is always present in the source code, so it is harder to miss updating one of the products. Cons: harder to read. It might be OK while we have only two variants, but when it becomes four or more it will be a pain. It also seems harder to apply the DRY principle (Don't Repeat Yourself). (See the sketch after this list.)

  • one branch per product variant. When a change that applies to all products is made, it must be merged into each of the other product branches. Cons: if a commit contains both changes for all products and changes for one specific variant, there will be trouble. Of course, you can make sure that every commit contains only one kind of change: this-product-only or whole-family. But try forcing that on a team? Plus, plain merging would then no longer work; we would have to cherry-pick instead. Right?

  • a core repository as a submodule. Put all files that contain core functionality into a repository of their own; every product repository then contains a version of the core repository as a submodule. Cons: I can't see how there wouldn't eventually be variants of the core submodule too. Then we're in trouble again and back to defines or something equally bad. A core repository with branches? Then we're back to the previous alternative: a change that applies to all branches must be merged into each of them, and the merge drags the product-specific stuff along with it.

  • create a repository per module. For example, a repository for the display driver, another for the power management hardware, yet another for the user input interface, and so on. Pros: good modularity; make a new product by just picking the modules you need as submodules. Each submodule might have branches, if for example a variant uses the hardware in a different way. Cons: lots and lots of modules, each keeping track of a couple of files (an include file and a source file). A hassle. Someone makes an important update to some module? Then someone needs to merge the change into the other branches of that module, where appropriate, and someone must also update the submodule reference in every product repository. Quite some work, and we kind of lose the snapshot side of git.
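
For illustration, here is a minimal sketch of the defines alternative (the product names, macros, and the backlight feature are all invented for the example; the PRODUCT_* macro would come from the build system, e.g. as a -D compiler flag):

#if defined(PRODUCT_ALPHA)
#define BACKLIGHT_LEVELS 16   /* ALPHA has a PWM-driven backlight */
#elif defined(PRODUCT_BETA)
#define BACKLIGHT_LEVELS 2    /* BETA only has on/off control */
#else
#error No product selected
#endif

void backlight_set(int level)
{
    if (level >= BACKLIGHT_LEVELS)
        level = BACKLIGHT_LEVELS - 1;
    /* ... write 'level' to this product's backlight hardware ... */
}

With two products this stays readable; with four or more, every such spot grows another arm, which is exactly the readability pain described above.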

How do you do it, and how has it been working? Or how would you do it?

I have a feeling that I should get some experience with cherry-picking.

+6  A: 

I'd try to go for #defines as much as possible. With properly structured code you can minimize the impact on readability and keep repetition down.

At the same time, the #define approach can safely be combined with file splitting and branching; how much of each to apply depends on the nature of the codebase.
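
One way to keep that impact small (a sketch; all names are invented) is to confine every #ifdef to a single configuration header, so the rest of the sources read unconditionally:

/* variant_config.h -- the only file allowed to contain #ifdefs */
#ifndef VARIANT_CONFIG_H
#define VARIANT_CONFIG_H

#if defined(PRODUCT_ALPHA)
#define HAS_RTC     1
#define UART_BAUD   115200
#elif defined(PRODUCT_BETA)
#define HAS_RTC     0
#define UART_BAUD   9600
#else
#error No product selected
#endif

#endif /* VARIANT_CONFIG_H */

Application code then tests feature macros such as HAS_RTC rather than product names, so adding a fifth variant means touching one header instead of every source file.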

Michael Krelin - hacker
+4  A: 

I am not sure if that's "best practice", but the Scintilla project has used an approach for years that is still quite manageable. It has only one branch common to all platforms (mostly Windows/GTK+/Mac, but with variants for VMS, Fox, etc.).

In part, it uses the first option, which is quite common: defines manage the little platform-specific portions inside the sources, where it is impractical or impossible to share common code.
Note that this option is not available in some languages (e.g. Java).

But the main mechanism for portability is OO: it abstracts some operations (drawing, showing the context menu, etc.) and provides a Platform file (or several) per target with the concrete implementation.
The makefile compiles only the proper file(s) and relies on the linker to bring in the proper code.
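
In C terms, the same mechanism is a fixed interface header plus one implementation file per target, of which the build compiles and links exactly one (a sketch; the names are invented, not Scintilla's actual API):

/* platform.h -- the operations every target must provide */
#ifndef PLATFORM_H
#define PLATFORM_H
void platform_draw_pixel(int x, int y, int colour);
void platform_show_context_menu(void);
#endif

/* platform_alpha.c -- compiled and linked only for target ALPHA */
#include "platform.h"

void platform_draw_pixel(int x, int y, int colour)
{
    /* ... ALPHA-specific framebuffer access ... */
    (void)x; (void)y; (void)colour;
}

void platform_show_context_menu(void)
{
    /* ... ALPHA-specific menu handling ... */
}

A platform_beta.c would provide the same two functions for the other target; common code includes only platform.h and never knows which implementation it was linked against.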

PhiLho
OO would be nice, but this is embedded code in assembler, with not even a C compiler available. I understand that the concepts of OO can be applied anyway, though. Assembling and linking only the necessary files is an option, but I believe that two such files (one per variant) would be very similar, thus repeating one another.
Gauthier
Having 'compat' files, be it one, or one per unique platform, is I think a good idea.
Jakub Narębski
+2  A: 

I think that the appropriate answer depends in part on how radically different the variants are.

If there are small portions that are different, using conditional compilation on a single source file is reasonable. If the variant implementations are only consistent at the call interface, then it may be better to use separate files. You can include radically variant implementations in a single file with conditional compilation; how messy that is depends on the volume of variant code. If it is, say, four variants of about 100 lines each, maybe one file is OK. If it is four variants of 100, 300, 500 and 900 lines, then one file is probably a bad idea.

You don't necessarily need the variants on separate branches; indeed, you should only use branches when necessary (but do use them when they are necessary!). You can have the four files, say, all on a common branch, always visible. You can arrange for the compilation to pick up the correct variant. One possibility (there are many others) is compiling a single source file that knows which source variant to include given the current compilation environment:

#include "config.h"
#if defined(USE_VARIANT_A)
#include "variant_a.c"
#elif defined(USE_VARIANT_B)
#include "variant_b.c"
#else
#include "basecase.c"
#endif
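
For completeness, config.h would then be the one place that names the variant (alternatively, the macro can come straight from the compiler command line); a hypothetical example:

/* config.h -- selects the variant for this build */
#define USE_VARIANT_A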
Jonathan Leffler
I would much rather do this in the makefile; I dislike including C files with #include, since they would not show up in my makefile. I agree that there is no silver bullet; the solution depends on the variance of the variants :) What I am afraid of is starting with #defines and then watching the product family grow to 10 variants.
Gauthier
If you are generating makefiles (as opposed to using a version-controlled makefile), then go with the direct approach of listing the correct files in the makefile. This approach works OK (not necessarily recommended, but it does work) when the makefile is not dynamically generated. My fundamental message is "one size does not fit all" and different solutions are relevant depending on the details. If you think you may have 10 variant items, then design your system to work with as many variants as that. You don't say whether it is 10 binary decisions, or 1 of 10 alternatives, or something else.
Jonathan Leffler
So far I meant 1 of 10 alternatives. Thank God :) I do not understand why having version-controlled makefiles (one per variant) would invalidate this method?
Gauthier
It depends how you structure your makefiles. If you build one of ten different targets depending on the variant you need - and each variant has its own list of files needed to build it - then you are fine. It also depends whether you are always going to build all ten variants in the same build or only one at a time. I work on a system where the same makefile is used on 6 or 7 different platforms (not bothering to count the 32-bit vs 64-bit differences), but at any one time it is only building for a single platform, and the same target is used on each platform...
Jonathan Leffler
...There, it is a nuisance to have `make` work out which set of files is relevant to each platform - hence the inclusion technique is used in places. We also have a system using `#ifdef`; generally, there are many more `#ifdef` scenarios than `#include` scenarios. So your mileage will vary, as they say.
Jonathan Leffler
+3  A: 

You should strive to, as much as possible, keep each variant's custom code in its own set of files. Then your build system (Makefile or whatever) selects which sources to use based on which variant you are building.

The advantage to this is that when working on a particular variant, you see all of its code together, without other variants' code in there to confuse things. Readability is also much better than littering the source with #ifdef, #elif, #endif, etc.
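
As a sketch of what that looks like (file and function names invented), the conditional-compilation example shown under the question's list becomes one self-contained file per variant, and the build compiles exactly one of them:

/* backlight_alpha.c -- compiled and linked only for product ALPHA */
void backlight_set(int level);   /* shared interface, normally declared in a common header */

void backlight_set(int level)
{
    /* all of ALPHA's 16-level handling lives here; BETA's version
     * lives in backlight_beta.c and never appears in this file */
    if (level > 15)
        level = 15;
    /* ... write 'level' to ALPHA's backlight hardware ... */
}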

Branches work best when you know that in the future you will want to merge all of the code from the branch into the master branch (or other branches). It doesn't work as well for only merging some changes from branch to branch (although it can certainly be done). So keeping separate branches for each variant will probably not yield good results.

If you use the approach above, you don't need to resort to version-control tricks (such as per-variant branches) to support your code's organization.

Dan Moulding
+1 for the advice about branching only when planning to merge back in (although there are exceptions), and for the risk of using VCS tricks to support the code. See my comment to PhiLho about DRY with such custom code files.
Gauthier
When I'm working on a particular variant, I often want to see the other variants' code. This is a good use for code folding.
Jeanne Pindar
+1  A: 

You are in for a world of hurt!

Whatever you do, you need an automated build environment. At the very least you need some automatic way of building all the different versions of your firmware. I've had issues where fixing a bug in one version broke the build of a different version.

Ideally you would be able to load the different targets and run some smoke tests.

If you go the #define route, I would put the following at every place where the variant is checked, so the chain always ends in an #error:

#if defined(USE_VARIANT_A)   /* ...one #elif per other known variant... */
    /* variant-specific configuration */
#else
    #error You MUST specify a variant!
#endif

This will make sure all the files are built for the same variant during the build process.

Robert
Sure, the #error. And even: if several variant defines are defined, undefine all but one (or raise another #error).
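
A minimal sketch of that mutual-exclusion check, reusing the USE_VARIANT_* names from Jonathan Leffler's example:

/* fail the build if more than one variant is specified */
#if defined(USE_VARIANT_A) && defined(USE_VARIANT_B)
#error More than one variant specified - define exactly one!
#endif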
Gauthier