tags:
views: 198
answers: 7

Hi, I just want to ask: is there any special C standard for MCUs? I ask because so far, when I programmed something under Windows, it didn't matter which compiler I used. If I had a C99 compiler, I knew what I could do with it.

But recently I started programming microcontrollers in C, and I was shocked that, even though it is still C in its basics (loops, variable declarations and so on), there is some syntax I have never seen in C for desktop computers. Furthermore, the syntax changes from version to version. I use the AVR-GCC compiler, and in previous versions you used functions for port I/O, whereas in the new version you can handle a port like a variable.

So I just wondered: what defines which functions have to be implemented in a compiler, and how, for it still to be called C? Thanks.

+8  A: 

Embedded systems are weird and sometimes have exceptions to "standard" C.

From system to system you will have different ways to do things like declare interrupts, or define what variables live in different segments of memory, or run "intrinsics" (pseudo-functions that map directly to assembly code), or execute inline assembly code.

But the basics of control flow (for/if/while/switch/case) and variable and function declarations should be the same across the board.

in previous versions, you used functions for port I/O; now you can handle a port like a variable in the new version.

That's not part of the C language; that's part of a device support library. That's something each manufacturer will have to document.

Jason S
+1 If you write C code that conforms to the C standard, it should compile with any C compiler, embedded or otherwise. That being said, most embedded device vendors will ship extensions or libraries that make using their devices easier. These are *not* standardized and vary from vendor to vendor and version to version. If you want to keep your code easily portable between platforms, use standard C as much as possible and try to keep all vendor-specific features abstracted away as much as possible.
bta
+2  A: 

The vast majority of the standard C language is common with microcontrollers. Interrupts do tend to have slightly different conventions, although not always.

Treating ports like variables is a result of the fact that the registers are mapped to locations in memory on most microcontrollers, so by writing to the appropriate memory location (defined as a variable with a preset location in memory), you set the value on that port.

Al
What is the circumstance when interrupt handling *doesn't* 'have slightly different conventions'?
Will Dean
On some systems, the "real" interrupt handler is generated by the linker and run-time library. That interrupt handler will save all registers and then call a specified routine whose calling convention is the same as any other.
supercat
+1  A: 

As previous contributors have said, there is no standard as such, mainly due to different architectures.

Having said that, Dynamic C (sold by Rabbit Semiconductor) is described as "C with real-time extensions". As far as I know, the compiler only targets Rabbit processors, but there are useful additional keywords (e.g. costate, cofunc, waitfor), some real peculiarities (e.g. #use mylib.lib instead of #include mylib.h - and no linker), and several omissions from ANSI C (e.g. no file-scope static variables).

It's still described as 'C' though.

MikeJ-UK
+3  A: 

I have never seen a C compiler for a microcontroller which did not have some controller-specific extensions. Some compilers are much closer to meeting ANSI standards than others, but for many microcontrollers there are tradeoffs between performance and ANSI compliance.

On many 8-bit microcontrollers, and even some 16-bit ones, accessing variables on a stack frame is slow. Some compilers will always allocate automatic variables on a run-time stack despite the extra code required to do so, some will allocate automatic variables at compile time (allowing variables that are never live simultaneously to overlap), and some allow the behavior to be controlled with a command-line option or #pragma directives. When coding for such machines, I sometimes like to #define a macro called "auto" which gets redefined to "static" if it will help things work faster.

Some compilers have a variety of storage classes for memory. You may be able to improve performance greatly by declaring things to be of suitable storage classes. For example, an 8051-based system might have 96 bytes of "data" memory, 224 bytes of "idata" memory which overlaps the first 96 bytes, and 4K of "xdata" memory. Variables in "data" memory may be accessed directly. Variables in "idata" memory may only be accessed by loading their address into a 1-byte pointer register. There is no extra overhead accessing them in cases where that would be necessary anyway, so idata memory is great for arrays. If array q is stored in idata memory, a reference to q[i] will be just as fast as if it were in data memory, though a reference to q[0] will be slower (in data memory, the compiler could pre-compute the address and access it without a pointer register; in idata memory that is not possible). Variables in xdata memory are far slower to access than those in other types, but there's a lot more xdata memory available.

If one tells an 8051 compiler to put everything in "data" by default, one will "run out of memory" if one's variables total more than 96 bytes and one hasn't instructed the compiler to put anything elsewhere. If one puts everything in "xdata" by default, one can use a lot more memory without hitting a limit, but everything will run slower. Best is to place frequently-used variables that will be directly accessed in "data", frequently-used variables and arrays that are indirectly accessed in "idata", and infrequently-used variables and arrays in "xdata".

supercat
+1  A: 

Wiring has a C-based language syntax. Perhaps you might want to see what makes it as such.

Christian Sciberras
+3  A: 

The C language assumes a von Neumann architecture (one address space for all code and data), which not all architectures actually have, but which most desktop/server class machines do have (or at least present with the aid of the OS). To get around this without making horrible programs, C compilers (with help from the linker) often support extensions that aid in making use of multiple address spaces efficiently. All of this could be hidden from the programmer, but it would often slow down and inflate programs and data.

As far as how you access device registers goes -- on different desktop/server class machines this is very different as well, but since programs written to run under common modern OSes for these machines (Mac OS X, Windows, the BSDs, or Linux) don't normally access hardware directly, this isn't an issue. There is OS code that has to deal with these issues, though. This is usually done by defining macros and/or functions that are implemented differently on different architectures, or that even have multiple versions on a single system, so that a driver can work for a particular device (such as an Ethernet chip) whether it is on a PCI card or a USB dongle (possibly plugged into a USB card plugged into a PCI slot), or directly mapped into the processor's address space.

Additionally, the C standard library (libc) makes more assumptions than the compiler (and the language proper) about the system that hosts the programs using it. These assumptions just don't make sense when there isn't a general-purpose OS or filesystem: fopen makes no sense on a system without a filesystem, and even printf might not be easily definable.

As far as what avr-gcc and its libraries do -- there's a lot that goes into how this is done. The AVR is a Harvard architecture with memory-mapped device control registers, special function registers, and general purpose registers (memory addresses 0-31), and a different address space for code and constant data. This already falls outside of what standard C assumes. Some of the registers (general, special, and device control) are accessible via special instructions for things like flipping single bits, and reading/writing some multi-byte registers is a multi-instruction operation that implicitly blocks interrupts for the next instruction (so that the second half of the operation can happen). These are things that desktop C programs don't have to know anything about, and since avr-gcc comes from regular gcc, it didn't initially understand all of these things either. That meant that the compiler wouldn't always use the best instructions to access control registers, so:

*(DEVICE_REG_ADDR) |= 1; // set BIT0 of control register REG

would have turned into:

temp_reg = *DEVICE_REG_ADDR;
temp_reg |= 1;
*DEVICE_REG_ADDR = temp_reg;

because the AVR generally has to have things in its general purpose registers to do bit operations on them, though for some memory locations this isn't true. avr-gcc had to be altered to recognize that when the address of a variable used in certain operations is known at compile time and lies within a certain range, it can use different instructions to perform those operations. Prior to this, avr-gcc just provided you with some macros (that looked like functions) containing inline assembly to do this (using the single-instruction implementations that gcc now emits). If they no longer provide the macro versions of these operations then that's probably a bad choice, since it breaks old code; but allowing you to access these registers as though they were normal variables, once the ability to do so efficiently and atomically was implemented, is good.

nategoose
von Neumann, not "von Newman". (http://en.wikipedia.org/wiki/John_von_Neumann)
Jason S
I don't think any C implementation should particularly care whether the architecture is Von Neumann or not, since direct casting between function pointers and data pointers is forbidden, and I don't think there's any requirement that they be the same size. To be sure, on a Von Neumann machine one may be able to do some totally non-portable tricks to generate code on the fly, but nothing in the C standard assumes such a thing, and many modern platforms would forbid it anyhow.
supercat
@supercat: You're right on that. Von Neumann is a little too strict. Harvard should strictly fit standard C, except that you are often able to store constant data within the code section, and pointers to that data aren't usable in the same way as regular data pointers. `printf("%s", str);` won't work if `str` is in the code area. Even more noticeably incompatible with C is when the stack and data are in different address spaces.
nategoose
@nategoose: Some C implementations require different pointer types for different memory spaces, but nearly all the ones I've seen offer a read-only "universal" pointer type (sometimes it's a byte larger than other pointer types). I'm not sure why C requires the execution stack to be in the same address space as data; I think systems would probably be more secure if it weren't (recursion requires a stack in data space, to be sure, but I know of nothing that would prevent return addresses from being stored on a different stack).
supercat
@supercat: The last sentence of the first paragraph addressed this. This would be done by having the universal pointer be something like `struct universal_pointer { enum data_area type; union { void * __stack stk; void * __data data; void * __code code; } u; };` and either including all the code to access each type inline with every access through one of those, calling a function that contains that code, or, in the case of repeated access operations (like memcpy or strlen), figuring out which code to call repeatedly. Some systems do use different data and return stacks (at least in part).
nategoose
+7  A: 

is there any special C standard for MCU?

No, there is the ISO C standard. Because many small devices have special architectural features that need to be supported, many compilers support language extensions; for example, because the 8051 has bit-addressable RAM, a _bit data type may be provided. It also has a Harvard architecture, so keywords are provided for specifying different memory address spaces, which an address alone does not resolve since different instructions are required to address those spaces. Such extensions will be clearly indicated in the compiler documentation. Moreover, extensions in a conforming compiler should be prefixed with an underscore; however, many compilers provide unadorned aliases for backward compatibility, whose use should be deprecated.

when I programmed something under Windows OS, it didn't matter which compiler I used.

Because the Windows API is standardized (by Microsoft), and it runs only on x86, so there is no architectural variation to consider. That said, you may still see FAR and NEAR macros in APIs; those are a throwback to 16-bit x86 with its segmented addressing, which also required compiler extensions to handle.

that even its still C in its basics, like loops, variables creation and so,

I am not sure what that means. A typical microcontroller application has no OS, or only a simple kernel, so you should expect to see a lot more 'bare metal' or 'system-level' code, because there are no extensive OS APIs and device driver interfaces to do lots of work under the hood for you. All those library calls are just that: they are not part of the language; it is the same C language, just put to different work.

there is some syntax type I never seen in C for desktop computers.

For example...?

And furthermore, the syntax is changing from version to version.

I doubt it. Again; for example...?

I use the AVR-GCC compiler, and in previous versions you used functions for port I/O; now you can handle a port like a variable in the new version.

That is not down to changes in the language or compiler, but more likely to simple 'preprocessor magic'. On the AVR all I/O is memory-mapped, so if, for example, you include the device support header, it may contain a declaration such as:

#define PORTA (*((volatile char*)0x0100))

you can then write:

PORTA = 0xff ;

to write 0xff to the memory-mapped register at address 0x100. You can just take a look at the header file and see exactly how it does it.

The GCC documentation describes target specific variations; AVR is specifically dealt with here in section 6.36.8, and in 3.17.3. If you compare that with other targets supported by GCC, it has very few extensions, perhaps because the AVR architecture and instruction set were specifically designed for clean and efficient implementation of a C compiler without extensions.

So, I just wondered, what defines what functions and how have to be implemented into compiler to be still called C?

It is important to realise that the C language is a distinct entity from its libraries, and that functions provided by libraries are no different from the ones you might write yourself - they are not part of the language - so it can be C with no library whatsoever. Ultimately, library functions are written using the same basic language elements. You cannot expect the level of abstraction present in, say, the Win32 API to exist in a library intended for a microcontroller. You can in most cases expect at least a subset of the C standard library to be implemented, since it was designed as a systems-level library with few target hardware dependencies.

I have been writing C and C++ for embedded and desktop systems for years and do not recognise the huge differences you seem to perceive, so I can only assume that they are the result of a misunderstanding of what constitutes the C language. The following may help:

Clifford