views: 84
answers: 5

I've heard more than one person say that if your build process is clicking the build button, then your build process is broken. Frequently this is accompanied by advice to use tools like make, CMake, nmake, MSBuild, etc. What exactly do these tools offer that justifies manually maintaining a separate configuration file?

EDIT: I'm most interested in answers that would apply to a single developer working on a ~20k line C++ project, but I'm interested in the general case as well.

EDIT2: It doesn't look like there's one good answer to this question, so I've gone ahead and made it CW. In response to those talking about Continuous Integration: yes, I understand completely that when you have many developers on a project, having CI is nice. However, that's an advantage of CI, not of maintaining separate build scripts. The two are orthogonal: for example, Team Foundation Build is a CI solution that uses Visual Studio's project files as its configuration.
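
As a concrete illustration of what the question is asking about, here is a minimal sketch of the one service make-style tools all provide: rebuilding a target only when its source has changed. The file names are hypothetical, and `cat` stands in for a real compiler invocation so the sketch stays self-contained:

```shell
#!/bin/sh
# Sketch of dependency-based rebuilding, the core thing make-style
# tools automate. File names are hypothetical; `cat` stands in for
# a real compiler invocation (e.g. g++ -c).
set -e

build() {
    src="$1"; out="$2"
    # rebuild only if the output is missing or older than the source
    if [ ! -e "$out" ] || [ "$src" -nt "$out" ]; then
        echo "building $out"
        cat "$src" > "$out"      # real rule would be: g++ -c "$src" -o "$out"
    else
        echo "up to date: $out"
    fi
}

echo 'int main() { return 0; }' > main.cpp
build main.cpp main.o   # first run: builds
build main.cpp main.o   # second run: nothing to do
```

A real Makefile expresses the same if-newer rule declaratively, once per target, and scales it across thousands of files.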

+1  A: 

If you have a hands-off, continuous integration build process, it's going to be driven by an Ant or make-style script. When changes are detected, your CI process will check the code out of version control onto a separate build machine, then compile, test, package, deploy, and produce a summary report.

duffymo
If you need to have a CI system, then of course you'd need to configure it lol. Not a bad answer, but I'm curious if there's something not CI related here.
Billy ONeal
+1  A: 

Let's say you have 5 people working on the same set of code, each making updates to the same set of files. You may click the build button and know that your code works, but what about when you integrate it with everyone else's? The only way you'll know is if you get everyone else's changes and try. That's easy to do once in a while, but it quickly becomes tiresome to do over and over again.

With a build server that does it automatically, the code is checked for everyone, all the time. Everyone always knows when something is wrong with the build and what the problem is, and no one has to do any work to figure it out. Small things add up: it may only take a couple of minutes to pull down the latest code and try to compile it, but doing that 10-20 times a day quickly becomes a waste of time, especially with multiple people doing it. Sure, you can get by without it, but it is so much easier to let an automated process do the same thing over and over again than to have a real person do it.

Here's another cool thing: our process is set up to test all the SQL scripts as well. You can't do that by pressing the build button. The server reloads snapshots of all the databases it needs to apply patches to, then runs the patches to make sure they all work, in the order they are supposed to run. The build server is also smart enough to run all the unit tests and automation tests and return the results. Making sure the code compiles is fine, but an automation server can handle many steps automatically that might take a person an hour to do.
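
The "run patches in order, stop on the first failure" step described above can be sketched in a few lines of shell. This is only an illustration: `apply_patch` is a placeholder for a real `sqlcmd`/`psql` invocation against a restored snapshot, and the patch file names are hypothetical:

```shell
#!/bin/sh
# Sketch of applying SQL patch files in a fixed order, aborting on
# the first failure (set -e). apply_patch is a placeholder; a real
# build server would run each file against a restored DB snapshot.
set -e

mkdir -p patches
printf 'CREATE TABLE a (id INT);\n'     > patches/001_init.sql
printf 'ALTER TABLE a ADD name TEXT;\n' > patches/002_alter.sql

: > applied.log
apply_patch() {
    echo "applying $1"
    echo "$1" >> applied.log   # record the order for inspection
}

# shell globs expand in lexical order, so numbered patches run in sequence
for p in patches/*.sql; do
    apply_patch "$p"
done
```

Because of `set -e`, a failing patch stops the run immediately, which is exactly the fail-fast behavior you want from an unattended build.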

Taking this a step further: if you have an automated deployment process along with the build server, deployment is automatic. Anyone who can press a button can run the process and move code to QA or production. This means a programmer doesn't have to spend time doing it manually, which is error-prone. When we didn't have the process, it was always a crapshoot whether everything would be installed correctly, and generally a network admin or a programmer had to do it, because they had to know how to configure IIS and move the files. Now even our most junior QA person can refresh the server, because all they need to know is which button to push.

Kevin
How does the build system fix that for you? EDIT: Nvm. So you're saying the main point is for CI stuff.
Billy ONeal
Yeah, pretty much like I did. What was new here?
duffymo
@Kevin: If you have a build system *more complicated* than clicking build, I understand completely. However, I've been told several times that even if it's as simple as clicking build, then your process is broken. @duffy: I don't see anything new here. Do you? :)
Billy ONeal
The thing is that by clicking build, you have to assume you have the most current version of the code. If you are the only person working on it, that's easy; if not, you have to coordinate to get it. With an automation server that's not necessary, because it will always get the most current version of the code. Our build server also checks 4 different branches all the time to make sure they all work, which is something a person can't do very easily.
Kevin
Suppose you need to get a new developer set up. Or maybe one of your boxes dies and you need to reinstall it. Or any number of other reasons you might need to set up a new build environment. If your IDE is your build process, a lot of that stuff can't be checked into source control, and you basically have to remember it. Whereas your makefile *can* be checked in and kept up to date.
Anon.
@Kevin: That does seem useful for teams. In my case, however, I do not have the resources to be running a dedicated build machine. It's not a bad answer to my original question -- I just don't have use for a CI server at this point, given the project is a one man show :)
Billy ONeal
@Anon: Why can't an IDE file be checked into source control? It's just an XML file. I blow away and clone my repository on a regular basis without problems and I don't currently use a build system like this.
Billy ONeal
It depends; it may not be beneficial for you. We have different needs: for us to build the MSIs etc. and get ready to deploy code, it takes about 30 steps all in all, such as generating scripts, setting up MSIs, and building config files. Can you do basic stuff by pressing build? Sure, but the advanced stuff will be much harder, or not possible at all. It all depends on what you need done. Now that I have a build server, would I go back? No, because now that I know what I can do with it, I never want to be limited by not having one again. The extra money spent will always be worth it to me.
Kevin
Here's a way to think about it: a build server on Linux probably costs about $400 for hardware. So if I'm a consultant and I bill at $50 an hour, it's paid for itself once it saves me 8 hours of time I could otherwise bill. Anything after that, I'm making money by essentially doing nothing but letting a machine sit there and do the work for me. Does this apply all the time? No, but I bet it will eventually pay off.
Kevin
@Billy: Just take a peek inside, say, a `.csproj` file. If you use any nonstandard libraries, either you need to specify their path in the project file, or ensure the IDE is set up correctly to find them. IDE configuration is not exactly something you can stick in source control.
Anon.
@Anon: With regard to paths to nonstandard libraries, you have the same issues with a build script.
Billy ONeal
+1  A: 

Aside from the continuous integration needs that everyone else has already addressed, you may also simply want to automate other aspects of your build process. Maybe it's something as simple as incrementing a version number on a production build, running your unit tests, resetting and verifying your test environment, or running FxCop or a custom script that automates a code review for corporate standards compliance. A build script is just a way to automate something in addition to your simple code compile. However, most of these sorts of things can also be accomplished via pre-compile/post-compile actions that nearly every modern IDE allows you to set up.
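
For instance, the version-number increment mentioned above can be a few lines of shell attached as a pre-build step. The header name and macro here are hypothetical, chosen just for illustration:

```shell
#!/bin/sh
# Sketch of auto-incrementing a build number as a pre-build step.
# version.h and BUILD_NUMBER are hypothetical names.
set -e

printf '#define BUILD_NUMBER 41\n' > version.h

# pull out the current number, bump it, and rewrite the header
old=$(sed -n 's/#define BUILD_NUMBER \([0-9]*\)/\1/p' version.h)
new=$((old + 1))
sed "s/BUILD_NUMBER $old/BUILD_NUMBER $new/" version.h > version.h.tmp
mv version.h.tmp version.h

grep BUILD_NUMBER version.h   # now reads: #define BUILD_NUMBER 42
```

Whether this lives in a makefile target or an IDE post-build action matters less than the fact that it is scripted at all, so it runs the same way every time.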

Truthfully, unless you have lots of developers committing to your source control system, or have lots of systems or applications relying on shared libraries and need to do CI, using a build script is probably overkill compared to simpler alternatives. But if you are in one of those aforementioned situations, a dedicated build server that pulls from source control and does automated builds should be an essential part of your team's arsenal, and the easiest way to set one up is to use make, MSBuild, Ant, etc.

mattmc3
A: 

The IDE build systems I've used are all usable from automated build / CI tools, so there is no need for a separate build script as such.

However on top of that build system you need to automate testing, versioning, source control tagging, and deployment (and anything else you need to release your product).

So you create scripts that extend your IDE build and do the extras.
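
Such a wrapper might look like the sketch below. All three functions are placeholders for real commands (e.g. the IDE build would be something like `msbuild MyApp.sln /p:Configuration=Release`, and the tag step a VCS command); only the orchestration pattern is the point:

```shell
#!/bin/sh
# Sketch of a script that reuses the IDE's own build and adds the
# extras around it. Every step here is a named placeholder for a
# real command-line tool.
set -e

ide_build()  { echo "ide build ok"; }     # placeholder IDE CLI build
run_tests()  { echo "tests ok"; }         # placeholder test runner
tag_source() { echo "tagged build-$1"; }  # placeholder VCS tag step

ide_build
run_tests
tag_source 42 > release.log
```

With `set -e`, a failed build or failed tests prevent the tag and deploy steps from ever running.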

Keith Nicholas
A: 

One practical reason why IDE-managed build descriptions are not always ideal has to do with version control and the need to integrate with changes made by other developers (ie. merge).

If your IDE uses a single flat file, it can be very hard (if not impossible) to merge two project files into one. The file may use a text-based format like XML, but XML is notoriously hard to handle with standard diff/merge tools. The fact that people use a GUI to make edits also makes it more likely that you end up with unnecessary changes in the project files.

With distributed, smaller build scripts (CMake files, Makefiles, etc.), it can be easier to reconcile changes to project structure just like you would merge two source files. Some people prefer IDE project generation (using CMake, for example) for this reason, even if everyone is working with the same tools on the same platform.

BennyG
Hmm.. yes but the IDE files are getting created in any case. Nobody wants to spend all day writing code without code completion features.
Billy ONeal
I know plenty of developers using text editors without code completion. In any case, my point was that IDE files don't have to be version controlled if you use an IDE-neutral build description file. You can version control that file and let developers generate the projects for their IDE of choice.
BennyG