views:

1250

answers:

13

I've been designing a compiler framework (targeting .NET) for a while now and I've been thinking more and more about deprecating the command line interface. A lot of my compiler's flexibility comes from the ability to define custom pipeline elements (to handle DSLs, macros (which have their own DSL to define), etc) and the command line ends up very verbose and tedious. This also makes it more difficult to design a NAnt task that will expose all of the functionality in a simple way.

The alternative I've been toying around with is actually making the NAnt task load the compiler module and call it directly, so the command line is a nonissue. This will remove the overhead of invoking the compiler -- a non-negligible amount of time when you're doing a lot of compilation, as it has to initialize for each and every compilation -- and allow you to more easily define custom pipelines.
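The overhead argument can be illustrated with Python's py_compile, which is a real library-level compiler API (the .NET equivalents would of course differ); the point is that a build task can compile in-process rather than spawning a compiler for each file:

```python
# Illustration: calling a compiler as a library instead of shelling out.
# py_compile is Python's real in-process compiler API; a .NET analogue
# is hypothetical here, but the shape of the call is the same idea.
import os
import py_compile
import tempfile

# Write a tiny source file to compile.
src = os.path.join(tempfile.mkdtemp(), "hello.py")
with open(src, "w") as f:
    f.write("print('hello')\n")

# Compile in-process: no compiler process is launched per file,
# so per-invocation startup cost is paid only once by the build tool.
out = py_compile.compile(src, cfile=src + "c", doraise=True)
print(os.path.exists(out))  # True: bytecode file was produced
```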

My question is this: Why hasn't this been done before? It seems a natural progression to more tightly integrate the compiler with the build system. Assuming you keep the command line interface around, there are few downsides and a lot of benefits. For that matter, has anyone done this already?

Edit: To clarify, I'm not proposing the complete abolition of the compiler command line interface, just treating it as the legacy interface.

Further edit: A lot of people have responded in favor of the command line interface, which has led me to think more about what can be done to improve it. Is there a reason why accepting basic info -- output filename, language to compile with, possibly source files (if there aren't custom steps to be put onto their compilation), etc -- over the command line and then taking pipeline info and such over STDIN would be an issue? This seems like a nice middle ground that would be easy to use from most any existing build system.
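As a sketch of how that split could look, here is a hypothetical driver (the flag names and pipeline format are invented purely for illustration) that takes the basics on the command line and reads the pipeline description from stdin, e.g. via a heredoc in a build script:

```python
# Sketch of the proposed middle ground: simple options on the command
# line, structured pipeline configuration on stdin. The option names
# and config format here are invented for illustration only.
def parse_invocation(argv, config_text):
    """argv carries the simple flags; config_text (read from stdin)
    carries the pipeline description, one step per line."""
    opts = {"out": None, "lang": None, "sources": []}
    args = iter(argv)
    for a in args:
        if a == "-o":
            opts["out"] = next(args)
        elif a == "-lang":
            opts["lang"] = next(args)
        else:
            opts["sources"].append(a)
    # Pipeline steps arrive over stdin, e.g. via a heredoc in a script.
    opts["pipeline"] = [ln.strip() for ln in config_text.splitlines() if ln.strip()]
    return opts

demo = parse_invocation(
    ["-o", "app.dll", "-lang", "csharp", "main.cs"],
    "parse\nexpand-macros\nemit-il\n",
)
print(demo["out"], demo["pipeline"])  # app.dll ['parse', 'expand-macros', 'emit-il']
```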

A: 

One thing that immediately springs to mind is scripting/automation. I'm not very familiar with the .NET world, but I'd imagine that not everyone wants to use NAnt, but they may still want to be able to create scripts that include compilations (e.g., nightly builds).

Einar
For cases like that, you can use the command line interface or directly use the compiler APIs via PowerShell (something I plan on implementing, despite personally disliking PowerShell). Thanks for the input :)
Cody Brocious
+4  A: 

I'd say the command line interface is so ubiquitous, and will remain so, because of the many build systems that rely on it. So many projects rely on makefiles or scripts and batch files to build and test that alternative ways of driving the compiler are rarely even considered. Personally, I've always been a fan of command-line building rather than hiding things in the IDE.

By the way, you might want to take a look at MSBuild. Don't quote me, I might be wrong, but what it does sounds kind of similar to what you'd like to do via NAnt.

Chris Charabaruk
This is just off the top of my head, but what would you think about a compiler that took its basic info (language to compile, output format and filename, possibly source files) on the command line and then accepted pipeline info and such over stdin, so you can use heredocs in your scripts/makefiles?
Cody Brocious
I don't entirely grok.
Chris Charabaruk
Rather than integrating tightly with the build system or adding a bunch of hard-to-read options to the command line, pass the complex data over STDIN. Allow the user to specify what pipeline compiles each file and so on. This way the command line interface is still usable, and it makes things simpler.
Cody Brocious
You mean like a response file, except treated specially on STDIN rather than listed in the command line. I don't know, it sounds more complicated than really needed. Besides, it's not like anyone writes out every single used option in build scripts. Environment vars exist for a reason.
Chris Charabaruk
+2  A: 

Tightly coupling the compiler with the build system seems like a good idea, as long as you're happy with the build system in question. If a coder wants to use your compiler with, say, SCons, an in-house script, or even traditional make, it doesn't work out so well.

Another thing to think about is the initial mental overhead. If someone wants to try out your compiler, he doesn't necessarily want to learn your build system at the same time. Software that's easy to start using has a greater chance to catch on: just look at Rails.

Edit: Your new proposal to take extra information over STDIN seems reasonable. Just be very careful to make a clear, consistent distinction between the kinds of options set on the command line versus those piped in later. This is an API issue, not a UI issue, so any future changes would be expensive.

skymt
I've considered both of these points. My idea on the first is that the API to the compiler should be simple and well documented so it can be implemented from any build system. To the second, the command line interface should be easy to use in the beginning, even if it's limiting for complex cases.
Cody Brocious
+1  A: 

I believe the Java compiler already has a non-command-line interface (com.sun.tools.javac) which Ant uses for compilation. It's neat, and should be supported by more compilers.

ETA: Re pipeline: I think a command-line option that specifies the use of such parameter-passing over stdin would be very helpful, especially in getting over the command-line length limit. And you can use it to send non-textual data too.

Chris Jester-Young
IIRC there are interfaces in .NET for compiling C# and VB, although I'm unsure whether MSBuild uses them. But the idea certainly is there!
Chris Charabaruk
A: 

Some like the possibility of compiling from a command file (or shell script or equivalent) without needing to be in another application, be it an IDE or build tool. For example, the Delphi compiler comes as both a command line compiler and one integrated within the IDE. They both share the same source code, though.

François
+2  A: 

If you provide the ability for decent scripting of tasks, I'd go with a GUI-only approach. However, I'm not even sure it's possible. How could you possibly cover everything and anything that some moderately complex batch file can do?

I can invoke wget or svn to fetch some files or code before the build. I can invoke a command line e-mail tool to send me the results, and a screenshot :) I can also invoke InnoSetup to build the setup.exe and a command line FTP client to upload the resulting binary package to my release server. But this example is just the tip of the iceberg.

To state simply: build is not always initiated by the human user.

Milan Babuškov
Well, that's why I've primarily focused on ease of use from something like a NAnt build script, where you can still do such invocations. That said, check out my comment on Chris Charabaruk's answer please, as it seems like it could be a middle ground.
Cody Brocious
+2  A: 

For complex developer-oriented toolsets it is poor design to remove options.

That kind of approach leads to a monolithic application that railroads the developer into following a handful of predefined processes.

Best to keep the tools loosely coupled and focus on making each tool robust and with a well-defined interface.

That way, the total can be greater than the sum of its parts.

Ed Guiness
As I stated in my post, I'm not planning on killing off the command line interface, as it's a valuable tool. I just want to improve usability by adding what I see might be a much more flexible invocation style. I'm also open to ways of making command line-based compilation more flexible.
Cody Brocious
OK, so you're not planning on killing the command line immediately, but isn't treating it as legacy the first step down that road? Look at the architecture of PowerShell in contrast.
Ed Guiness
I really don't see ever killing it as likely. It's useful and usable everywhere. I've added an addendum to my question that I'd love some input on. I clearly need to consider this more before I move forward.
Cody Brocious
A: 

"My question is this: Why hasn't this been done before? It seems a natural progression to more tightly integrate the compiler with the build system. Assuming you keep the command line interface around, there are few downsides and a lot of benefits. For that matter, has anyone done this already?"

This has been done before. Try separating the command line VB6 compiler from the link stage (it can be done with nasty hacks, but that is a whole other story). And it was bad news (well, for me anyway).

With a command line interface you can add other object modules and resources.

A command line interface provides a "standard" interface that any and all systems can interact with.

With a command line you can write batch files and makefiles and have complete system builds that are unattended, zipped and/or copied, and whatever else without further interaction.

David L Morris
I agree with this, but at the same time there's no reason you can't do automated builds with NAnt and similar systems -- I do so on check-in on a number of projects. That said, I'd love your input on the addendum to my question. Thanks :)
Cody Brocious
I am not quite sure what you are advocating. One of my points was that the command line is standard. I want my compilers to run, not wait for some input (which is what STDIN implies). Why not the command line? Just run as normal and hide the window?
David L Morris
+1  A: 

It has been done before. For instance, there are currently some plans to move gcc in that direction. One of the main advantages is a more flexible interface to the compiler's state, making it useful for editors and IDEs that don't want to reimplement half a C compiler just to provide useful source introspection.

[Edit:] Here's an interview discussing this, giving some more details about the benefits of the approach, and possibilities it may enable.

Brian
Ah, that's great. I'm going to give that a look and pull in some ideas. I knew someone had to have worked on this before :) Thanks!
Cody Brocious
+17  A: 

Some answers here take the tone that we're stuck with the command line, and that's just the way it is. I disagree. If we pass compiler options via the command line (XML configuration file, anyone?), then almost every standard tool can interface with the compiler.

We can integrate different things more easily. Different compilers on different platforms may have different command line parameters for disabling warnings and so on, but passing file names and the like is basically the same. A sophisticated config file probably wouldn't be.

The command line is also more orthogonal than GUI tools (The Pragmatic Programmer explains really well why, and why that's so important). With a shell you can accomplish things it wasn't explicitly designed to do. With a GUI you usually have to explicitly build support for something if you want it to be possible.

phjr
+1  A: 

I have an automated build process which involves many different steps: compiling, obfuscation, signing, zipping, and backup. Command line support is crucial in all of these.

Rob
A: 

I'm using Eclipse for both C++ and Java, and I'm quite happy with the integration. The compiler's command line output becomes a list of issues to fix, where I can reach each problematic piece of code with a double click.

Thorsten79
+14  A: 

I'm surprised nobody has mentioned the Unix philosophy.

This is the Unix philosophy: Write programs that do one thing and do it well. Write programs to work together. Write programs to handle text streams, because that is a universal interface.

Obviously the output of the compiler won't be text; it'll be object code. However, it's nice that the inputs to it (the command line arguments, and the source code) are. That allows the "programs to work together" part of the Unix philosophy to function so ubiquitously. Want to make a program to auto-generate source code? Sure, use any language you like, as long as it can write text files. Want to make a tool (i.e., a build system) that runs the compiler? All it needs to do is generate text.

By the same token, it's nice to have all the other programs in binutils produce text as their output. I've had to script nm and objdump on more than one occasion. In one case I was able to detect configuration problems automatically as part of the build process that previously were only detectable at runtime on an embedded system. Detecting the problems at runtime wasted 20 minutes of an engineer's time if they knew what they were doing. If they didn't, it'd waste hours of their time before they showed the problem to someone who'd seen it before.
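A minimal sketch of that kind of scripting, assuming nm's usual three-column `address type name` output (the symbol names below are invented for illustration):

```python
# Sketch of scripting a binutils tool's text output: scan nm-style
# output for a symbol whose presence indicates a misconfiguration.
# The "address type name" column layout matches what nm prints;
# the symbol names themselves are made up for this example.
nm_output = """\
0000000000001139 T main
0000000000004010 B debug_heap_enabled
0000000000001150 T init_board
"""

def find_symbol(nm_text, name):
    """Return the nm type letter for `name`, or None if absent."""
    for line in nm_text.splitlines():
        parts = line.split()
        if len(parts) == 3 and parts[2] == name:
            return parts[1]
    return None

# Fail the build early if a debug-only symbol leaked into the image.
print(find_symbol(nm_output, "debug_heap_enabled"))  # B (found in .bss)
```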

KeyserSoze