views: 1308

answers: 11

So I found out that C(++) programs actually don't compile to plain "binary" (I may have gotten some things wrong here, in that case I'm sorry :D) but to a range of things (symbol table, os-related stuff,...) but...

  • Does assembler "compile" to pure binary? That means no extra stuff besides resources like predefined strings, etc.

  • If C compiles to something else than plain binary, how can that small assembler bootloader just copy the instructions from the HDD to memory and execute them? I mean if the OS kernel, which is probably written in C, compiles to something different than plain binary - how does the bootloader handle it?

edit: I know that assembler doesn't "compile" because it only has your machine's instruction set - I didn't find a good word for what assembler "assembles" to. If you have one, leave it here as comment and I'll change it.

+1  A: 

They compile to a file in a specific format (COFF for Windows, etc), composed of headers and segments, some of which have "plain binary" op codes. Assemblers and compilers (such as C) create the same sort of output. Some formats, such as the old *.COM files, had no headers, but still had certain assumptions (such as where in memory it would get loaded or how big it could be).

On Windows machines, the OS's bootstrapper is in a disk sector loaded by the BIOS, and both of these are "plain". Once the OS has loaded its loader, it can read files that have headers and segments.

Does that help?

Steven Sudit
+1  A: 

To answer the assembly part of the question, assembly doesn't compile to binary as I understand it. Assembly === binary. It translates directly: each assembly operation has a binary code that directly matches it, and each register has a binary address.

That is, unless Assembler != Assembly and I'm misunderstanding your question.

Daniel Bingham
That's true in most cases. Some assembly languages have pseudo-operations, which are sort of like macros.
Paul Nathan
Assembler !== binary. In assembler you can use symbolic names, labels and so on, which have no direct representation in binary; they need to be replaced by actual numbers. If you add some code before a label, then that label has to be moved to some other address. Assembler is a simple programming language which translates directly to binary, but it is not binary itself.
MBO
Almost directly. The same opcode compiles to different binary depending on details such as how the data is addressed. Likewise, even an assembler will sneak in prefix operators as needed. So while there is a very, very close relationship, they're not quite 1:1.
Steven Sudit
I think you are a bit confused about registers. You are correct that there is a one-to-one correspondence between an assembler opcode and a machine code instruction, however.
anon
@Paul Nathan: Good point. Macro-assemblers are a step closer to compilers.
Steven Sudit
@Neil: You're right to point out that registers, by definition, don't have addresses, as they're not in memory. However, on architectures with a large number of general-purpose registers (many RISC CPU's), we can be forgiven for thinking of the register number as an address "of sorts".
Steven Sudit
It depends a bit on what assembler you use, though most assemblers these days are macro assemblers, offering a bit more.
wich
@Neil, that would be between an assembly mnemonic and a cpu opcode, or machine instruction.
wich
@wich You are correct there.
anon
Assembler (a human-readable macro language which is translated to machine code) != Assembly (the binary file generated by common language infrastructure compilers, where each operation has a binary string). I think you may have a misunderstanding.
Pete Kirkham
A: 

C(++) (unmanaged) really compiles to plain binary. The OS-related stuff consists of BIOS and OS function calls; they're different for each OS, but still binary.
1. Assembler compiles to pure binary, but, as strange as it gets, it is less optimized than C(++)
2. The OS kernel, as well as the bootloader, is also written in C, so no problems here.

Java, Managed C++, and other .NET stuff compile into some pseudocode (MSIL in .NET), which makes it cross-OS and cross-platform, but requires a local interpreter or translator to run.

alemjerus
Every "fact" in this answer is wrong.
anon
Assembler is as optimized as you make it. C++, managed or otherwise, normally compiles into complex executables with headers and segments, not plain binary. The BIOS and the early parts of the OS are plain binary.
Steven Sudit
Neil - why not correct it then?
Mr-sk
"microcode" is a completely misleading word to use when you're referring to "intermediate code" -- and intermediate code is actually considered "binary" (probably not *native* binary).
Mehrdad Afshari
Plain binary? Everything stored on a hard drive is binary, that statement is meaningless.
jsoverson
"Assembler compiles to pure binary, but, as strange as it gets, it is less optimized than C(++)" What is that even supposed to mean? There are misleading issues with this accepted answer.
ThePosey
jsoverson: In this context, "plain binary" refers to opcodes without the headers and segments.
Steven Sudit
ThePosey: My guess is that they're trying to say that assemblers don't optimize code, whereas compilers typically do (when not in debug mode). Not claiming their answer was clear or correct, just that they might have been thinking of the right thing.
Steven Sudit
A: 

As I understand it, a chipset (CPU, etc.) will have a set of registers for storing data, and understand a set of instructions for manipulating these registers. The instructions will be things like 'store this value to this register', 'move this value', or 'compare these two values'. These instructions are often expressed in short human-grokable alphabetic codes (assembly language, or assembler) which are mapped to the numbers that the chipset understands - those numbers are presented to the chip in binary (machine code.)

Those codes are the lowest level that the software gets down to. Going deeper than that gets into the architecture of the actual chip, which is something I haven't gotten involved in.

Laizer
True, but not an answer to the asked question.
Andrew Medico
I was aiming for the 'does machine code compile to binary' side of the question. Tried to paint the relationship, rather than just saying 'not really'.
Laizer
+1  A: 

There are two things that you may mix here. Generally there are two topics:

  • binary machine code, packaged in some executable format

  • assembly, or some other intermediate format

The latter may compile to the former in the process of assembly. Some intermediate formats are not assembled, but executed by a virtual machine. In the case of C++ it may be compiled into CIL, which is assembled into a .NET assembly, hence there may be some confusion.

But in general, C and C++ are usually compiled into binary, or in other words, into an executable file format.

Kornel Kisielewicz
The thing to remember is that the CIL is contained inside a COFF executable.
Steven Sudit
+20  A: 

Let's take a C program.

When you run 'gcc' or 'cl' on the C program, it will go through these stages:

  1. Preprocessor lexing (#include, #ifdef, trigraph analysis, encoding translations, comment management, macros...)
  2. Lexical analysis (producing tokens and lexical errors).
  3. Syntactical analysis (producing a parse tree and syntactical errors).
  4. Semantic analysis (producing a symbol table, scoping information and scoping/typing errors).
  5. Output into assembly (or another intermediate format)
  6. Optimization of assembly (as above). Probably in ASM strings still.
  7. Assembling of the assembly into some binary object format.
  8. Linking of the assembly into whatever static libraries are needed, as well as relocating it if needed.
  9. Output of final executable in ELF or COFF format.

In practice, some of these steps may be done at the same time, but this is the logical order.

Note that there's a 'container' of ELF or COFF format around the actual executable binary.

You will find that a book on compilers (I recommend the Dragon book, the standard introductory book in the field) will have all the information you need and more.

As Marco commented, linking and loading is a large area and the Dragon book more or less stops at the output of the executable binary. To actually go from there to running on an operating system is a decently complex process, which Levine in Linkers and Loaders covers.

I've wiki'd this answer to let people tweak any errors/add information.

Paul Nathan
Hmm, the Dragon book is mostly about parsing. I'd recommend "Linkers and Loaders" by Levine, http://www.iecc.com/linker/ which is also available on the web.
Marco van de Voort
Linkers and loaders is also a good book.
Paul Nathan
Actually, in the "logical" order, lexical analysis occurs before preprocessing, because the preprocessor operates on a stream of tokens. That's how it is defined in the C standard, and that is also how it happens in modern versions of gcc (when the preprocessor was rewritten and turned into a lexing library).
Thomas Pornin
Thomas: Interesting! I am out of date
Paul Nathan
C standard, 5.1.1.2 suggests that traditional lexing is logically separate from preprocessor lexing.
Paul Nathan
A: 

There's plenty of answers above for you to look at, but I thought I'd add these resources, which will give you a flavour of what happens. Basically, on Windows and Linux, people have tried to create the tiniest executable possible: on Linux an ELF, on Windows a PE.

Both run through what can be removed and why, and how you can use an assembler to construct ELF files by hand, without the -f elf style options that do it for you.

Hope that helps.

Edit - you could also take a look at the assembly for a bootloader, like the one in TrueCrypt http://www.truecrypt.org or "stage1" of GRUB (the bit that actually gets written to the MBR).

Ninefingers
+9  A: 

There are different phases in translating C++ into a binary executable. The language specification does not explicitly state the translation phases. However, I will describe the common translation phases.

Source C++ To Assembly or Intermediate Language

Some compilers actually translate the C++ code into an assembly language or an intermediate language. This is not a required phase, but helpful in debugging and optimizations.

Assembly To Object Code

The next common step is to translate the assembly language into object code. The object code contains machine code with relative addresses and open references to external subroutines (methods or functions). In general, the translator puts as much information into an object file as it can and leaves everything else unresolved.

Linking Object Code(s)

The linking phase combines one or more object codes, resolves references and eliminates duplicate subroutines. The final output is an executable file. This file contains information for the operating system and relative addresses.

Executing Binary Files

The Operating System loads the executable file, usually from a hard drive, and places it into memory. The OS may convert relative addresses into physical locations. The OS may also prepare resources (such as DLLs and GUI widgets) that are required by the executable (which may be stated in the Executable file).

Compiling Directly To Binary

Some compilers, such as the ones used in Embedded Systems, have the capability to compile from C++ directly to executable binary code. This code has physical addresses instead of relative addresses and does not require an OS to load it.

Advantages

One of the advantages of these phases is that C++ programs can be broken into pieces, compiled individually and linked at a later time. They can even be linked with pieces from other developers (a.k.a. libraries). This allows developers to only compile the pieces in development and link in pieces that are already validated. In general, the translation from C++ to object code is the time-consuming part of the process. Also, a person doesn't want to wait for all the phases to complete when there is an error in the source code.

Keep an open mind and always expect the Third Alternative (Option).

Thomas Matthews
That was really interesting when we had 100k words of memory, but is it still an advantage nowadays, or more of an artefact? A compilation granularity that utilized available memory better (e.g. to avoid repeated header reparsing, relatively slow disk I/O, or even just binary startup time) would be more in line with modern requirements.
Marco van de Voort
+1  A: 

To answer your questions, please note that this is subjective, as there are different processors, different platforms, and different assemblers and C compilers. In this case I will talk about the Intel x86 platform.

  1. Assemblers do not compile to pure binary; they produce raw machine code organized into segments such as data, text and bss, to name but a few, and this is called object code. The linker then steps in and adjusts the segments to make the result executable, that is, ready to run. Incidentally, the default output when you compile using gcc is 'a.out', which is shorthand for Assembler Output.
  2. Boot loaders have a special directive defined. Back in the days of DOS, it would be common to find a directive such as .org 100h, which marks the assembler code as the old .COM variety, before .EXE took over in popularity. Also, you did not need an assembler to produce a .COM file; using the old debug.exe that came with MS-DOS did the trick for small, simple programs. .COM files did not need a linker and were in a straight ready-to-run binary format. Here's a simple session using DEBUG.
1:*a 0100
2:* mov AH,07
3:* int 21
4:* cmp AL,00
5:* jnz 010c
6:* mov AH,07
7:* int 21
8:* mov AH,4C
9:* int 21
10:*
11:*r CX
12:*10
13:*n respond.com
14:*w
15:*q

This produces a ready-to-run .COM program called 'respond.com' that waits for a keystroke and does not echo it to the screen. Notice, at the beginning, the usage of 'a 0100', which shows that the instruction pointer starts at 100h; that is a feature of .COM files. This old script was mainly used in batch files to wait for a response without echoing it. The original script can be found here.

Again, in the case of boot loaders, the code is converted to a flat binary format; there was a program that used to come with DOS, called EXE2BIN, whose job was converting the raw object code into a format that can be copied onto a bootable disk for booting. Remember, no linker is run against the assembled code, as the linker is for the runtime environment and sets up the code to make it runnable and executable.

When booting, the BIOS expects code to be at segment:offset 0x7c00, if my memory serves me correctly. The code (after being EXE2BIN'd) starts executing, then the bootloader relocates itself lower down in memory and continues loading by issuing int 0x13 to read from the disk; it switches on the A20 gate, enables DMA, and switches to protected mode, since the BIOS runs in 16-bit mode. The data read from the disk is loaded into memory, and then the bootloader issues a far jump into the loaded code (likely to be written in C). That is in essence how the system boots.

Ok, the previous paragraph sounds abstract and simplified, and I may have missed something out, but that is how it is in a nutshell.

Hope this helps, Best regards, Tom.

tommieb75
+11  A: 

C typically compiles to assembler, just because that makes life easy for the poor compiler writer.

Assembly code always assembles (not "compiles") to relocatable object code. You can think of this as binary machine code and binary data, but with lots of decoration and metadata. The key parts are:

  • Code and data appear in named "sections".

  • Relocatable object files may include definitions of labels, which refer to locations within the sections.

  • Relocatable object files may include "holes" that are to be filled with the values of labels defined elsewhere. The official name for such a hole is a relocation entry.

For example, if you compile and assemble (but don't link) this program

#include <stdio.h>
int main() { printf("Hello, world\n"); }

you are likely to wind up with a relocatable object file with

  • A text section containing the machine code for main

  • A label definition for main which points to the beginning of the text section

  • A rodata (read-only data) section containing the bytes of the string literal "Hello, world\n"

  • A relocation entry that depends on printf and that points to a "hole" in a call instruction in the middle of a text section.

If you are on a Unix system a relocatable object file is generally called a .o file, as in hello.o, and you can explore the label definitions and uses with a simple tool called nm, and you can get more detailed information from a somewhat more complicated tool called objdump.

I teach a class that covers these topics, and I have students write an assembler and linker, which takes a couple of weeks, but when they've done that most of them have a pretty good handle on relocatable object code. It's not such an easy thing.

Norman Ramsey
Most C compilers compile directly to relocatable machine code; it is faster to skip the slow textual step. Some (like 16-bit compilers capable of .COM files) can generate non-relocatable code directly. One could argue, though, that in compilers which generate machine code directly, the assembler is a relatively separate part.
Marco van de Voort
Relocatable code is not a requirement of C, and many platforms don't use it.
Potatoswatter
Is there any script for your course available online?
Lothar
@Lothar my course is online at http://www.cs.tufts.edu/comp/40. For past years, see my home page. For obvious reasons the answers are not online.
Norman Ramsey
A: 

You have a lot of answers to read through, but I think I can keep this succinct.

"Binary code" refers to the bits that feed through the microprocessor's circuits. The microprocessor loads each instruction from memory in sequence, doing whatever they say. Different processor families have different formats for instructions: x86, ARM, PowerPC, etc. You point the processor at the instruction you want by giving it the address of the instruction in memory, and then it chugs merrily along through the rest of the program.

When you want to load a program into the processor, you first have to make the binary code accessible in memory so it has an address in the first place. The C compiler outputs a file in the filesystem, which has to be loaded into a new virtual address space. Therefore, in addition to binary code, that file has to include the information that it has binary code, and what its address space should look like.

A bootloader has different requirements, so its file format might be different. But the idea is the same: binary code is always a payload in a larger file format, which includes at a minimum a sanity check to ensure that it's written in the correct instruction set.

C compilers and assemblers are typically configured to produce static library files. For embedded applications, you're more likely to find a compiler which produces something like a raw memory image with instructions beginning at address zero. Otherwise, you can write a linker which converts the output of the C compiler into whatever else you want.

Potatoswatter