tags:
views: 766
answers: 10

Here is the question: what did C (K&R C) look like? The question is about the first ten or twenty years of C's life.

I know -- well, I heard it from a prof at my uni -- that C didn't have the standard libraries that we get with ANSI C today. They used to write I/O routines in wrapped assembly! The second thing is that the K&R book is one of the best books ever for a programmer to read. This is what my prof told us :)

I would like to know more about good ol' C. For example, what major differences do you know about compared to ANSI C, and how did C change programmers' minds about programming?


Just for the record, I am asking this question after reading mainly these two papers:

They are about C++, I know! That's why I want to know more about C, because these two papers are about how C++ was born out of C. I am now asking how it looked before that. Thanks, Lazarus, for pointing me to the 1st edition of K&R, but I am still keen to know more about C from SO gurus ;)

+3  A: 

Speaking from personal experience, my first two C compilers/dev environments were DeSmet C (16-bit MS-DOS command line) and Lattice C (also 16-bit MS-DOS command line). DeSmet C came with its own text editor (see.exe) and libraries -- non-standard functions like scr_rowcol() positioned the cursor. Even then, however, there were certain functions that were standard, such as printf(), fopen(), fread(), fwrite() and fclose().

One of the interesting peculiarities of the time was that you had a choice between four basic memory models -- S, P, D and L. Other variations came and went over the years, but these were the most significant. S was the "small" model, 16-bit addressing for both code and data, limiting you to 64K for each. L used 20-bit addressing, formed by combining a 16-bit segment register (shifted left four bits) with a 16-bit offset register, limiting you to 1024K of address space. Of course, in a 16-bit DOS world, you were confined to a physical limitation of 640K. P and D were compromises between the two modes, where P allowed for 20-bit (in practice 640K) code addressing and 64K data, and D allowed for 64K code and 20-bit (640K) data addressing.
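As a rough, modern-C sketch (my addition, not part of the original answer), this is where the 20-bit / 1024K figure comes from: the 16-bit segment is shifted left four bits and the 16-bit offset is added to it.

#include <stdio.h>

/* Illustrative only: map a real-mode 8086 segment:offset pair to its
   20-bit physical address (segment << 4, plus offset). */
unsigned long phys_addr(unsigned int seg, unsigned int off)
{
    return ((unsigned long)seg << 4) + (unsigned long)off;
}

int main(void)
{
    /* The highest reachable address is just over 1MB. */
    printf("0xFFFF:0xFFFF -> 0x%05lX\n", phys_addr(0xFFFF, 0xFFFF));
    return 0;
}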

Bob Kaufman
Thankfully, Lattice decided to make the experience as Unix-like as possible, helped by the fact that MS-DOS 2.0 did the same. And when Microsoft licensed the Lattice compiler, they did the same.
kdgregory
Note that the memory models were a Microsoft invention. Unix C never had such monstrosities.
Jay
Actually, they were an Intel invention. Mainstream Unix has had the luxury of running on processor architectures with contiguous (and typically large) address spaces: PDP-11, VAX, 68000, 80386, and so on. Unless you were going to limit yourself to a 64k data segment, you needed to have some mechanism to address the rest of the 1M in an 8086. I haven't used the early Intel C compiler (which I don't believe came from either Lattice or Microsoft), but suspect it had some sort of explicit segmentation control.
kdgregory
+3  A: 

Wikipedia has some information on this topic.

Nathan Taylor
Thanks Nathan. The Wikipedia page is really brief, though, if I want to know more about the subject.
AraK
Sure thing, I noticed that as well.. just after I posted. :(
Nathan Taylor
+6  A: 

Have a look at the 'home page' for the K&R book at Bell Labs, in particular the heading "The history of the language is traced in 'The Development of the C Language', from HOPL II, 1993".

Lazarus
+10  A: 

Well, for a start, there was none of that function prototype rubbish. main() was declared thus:

/* int */ main(c,v)
int c;
char *v[];
{
    /* Do something here. */
}

And there were none of those fancy double-slash comments either. Nor enumerations. Real men used #define.
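For instance (a small sketch of my own, not from the original answer), where today you would reach for an enum, old code spelled the constants out with the preprocessor, and all commentary had to use /* */ because // did not exist yet:

/* Old style: named integer constants via the preprocessor.
   The ANSI-and-later alternative would be:
       enum colour { RED, GREEN, BLUE };                  */
#define RED   0
#define GREEN 1
#define BLUE  2

is_warm(c)      /* implicit int return, old-style parameter list */
int c;
{
    return c == RED;
}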

Aah, brings a tear to my eyes, remembering the good old days :-)

paxdiablo
the good ole days, where everything was worse, and therefore better.
Stefano Borini
Actually the return types were omitted too. Early C always returned one word. And an int argument was default, so a simple "main(c,v)char*v[];{/*...*/}" was typical.
Andy Ross
I'd forgotten that bit, @Andy.
paxdiablo
"Real men used #define." today the use such fancy things such as const variables or inline functions or, god forbid, _lambdas_That reminds me, I'd like to have a lambada now :-)
Johannes Rudolph
+3  A: 

Here is one example of the code that changed with ANSI C for the better:

double GetSomeInfo(x)
int x;
{
    return (double)x / 2.0;
}

int PerformFabulousTrick(x, y, z)
int x, y;
double z;
{
    /* here we go */
    z = GetSomeInfo(x, y); /* argument matching?  what's that? */
    return (int)z;
}
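For contrast, here is a sketch (my addition, not part of the original snippet) of the same pair with ANSI prototypes; the bogus two-argument call now fails to compile instead of silently passing rubbish:

double GetSomeInfo(int x)
{
    return (double)x / 2.0;
}

int PerformFabulousTrick(int x, int y, double z)
{
    (void)y;                /* unused in this version */
    z = GetSomeInfo(x);     /* GetSomeInfo(x, y) would now be a compile-time error */
    return (int)z;
}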
plinth
You forgot the parens around the return argument. Some pre-ANSI compilers actually required them. ;)
Bob Kaufman
Never particularly liked the parens, myself. It always bothered me how they seemed to be the accepted style at the time, despite the fact that so many sources asserted "avoid unnecessary parens". Some compilers implemented return() as a function call, which simply added to the confusion.
Bob Kaufman
+2  A: 

I first started working with C on VAX/VMS in 1986. Here are the differences I remember:

  • No prototypes -- function definitions and declarations were written as
    int main() /* no void to specify empty parameter list */
    {
      void foo(); /* no parameter list in declaration */
      ...
    }
    ...
    void foo(x,y)
      int x;
      double y;
    {
      ...
    }
  • No generic (void) pointer type; all of the *alloc() functions returned char * instead (which is part of why some people still cast the return value of malloc(); with pre-ANSI compilers, you had to);

  • Variadic functions were handled differently; there was no requirement for any fixed arguments, and the header file was named differently (varargs.h instead of stdarg.h; see the sketch just after this list);

  • A lot of stuff has been added to math.h over the years, especially in the C99 standard; '80s-vintage C was not the greatest tool for numerical work;

  • The libraries weren't standardized; almost all implementations had a version of stdio, math, string, ctype, etc., but the contents were not necessarily the same across implementations.
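To illustrate the old varargs.h interface mentioned above, here is a sketch from memory (my addition; details varied a little between implementations, and you would need a pre-ANSI compiler or a compatibility header to build it today). The function takes va_alist as its only parameter, declares it with va_dcl, and va_start takes just the va_list:

#include <varargs.h>

/* Old-style variadic function: no fixed parameters at all.
   By convention here, the first variable argument is a count. */
int sum(va_alist)
va_dcl                      /* note: no semicolon after va_dcl */
{
    va_list ap;
    int count, total = 0;

    va_start(ap);           /* only one argument, unlike the ANSI va_start */
    count = va_arg(ap, int);
    while (count-- > 0)
        total += va_arg(ap, int);
    va_end(ap);
    return total;
}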

John Bode
I was a VAX/VMS guy, too. In Unix- and PC-lands, returning 0 from main() means everything is good and normal. But VMS interpreted the return value as a standard error code, where odd values were good and even ones were bad. Ported programs invariably exited with "Access Violation" messages, because that's what 0 mapped to in the system message tables. Even with ANSI, you couldn't `return EXIT_SUCCESS`, you had to use `exit(EXIT_SUCCESS);` and then add a `/*NOTREACHED*/` comment for lint and finally `return 0;` to satisfy the compiler warning.
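In code, the idiom described reads roughly like this (a reconstruction, not code from the original comment):

#include <stdlib.h>

int main(void)
{
    /* ... program body ... */
    exit(EXIT_SUCCESS);   /* on VMS this maps to a proper "success" status */
    /*NOTREACHED*/        /* tells lint the following line cannot be reached */
    return 0;             /* quiets the compiler's missing-return warning */
}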
Adrian McCarthy
<varargs.h> was quite a late development; there were a variety of other much more dubious techniques used prior to that.
Jonathan Leffler
+2  A: 

Look at the code for the Version 6 Unix kernel - that was what C looked like!

See Lions' Commentary on UNIX 6th Edition (Amazon).

Also, it would be easier if you told us your age - your profile says you're 22, so you're asking about code prior to 1987.

Also consider: The Unix Programming Environment from 1984.

Jonathan Leffler
Thanks, that seems like a great resource to follow :)
AraK
A: 

I started using C in the early 1980s. The key difference I've seen between now and then was that early C did not have function prototypes, as someone noted. The earliest C I ever used had pretty much the same standard library as today. If there was a time when C didn't have printf or fwrite, that was before even my time! I learned C from the original K&R book. It is indeed a classic, and proof that technically sophisticated people can also be excellent writers. I'm sure you can find it on Amazon.

Jay
A: 

You might glance at the obfuscated C contest entries from the time period you are looking for.

quillbreaker
+2  A: 

While for obvious reasons the core language came before the library, if you get hold of a first edition copy of K&R, published in 1978, you will find the library very familiar. Also, C was originally used for Unix development, and the library hooked into the I/O services of the OS. So I think your prof's assertion is probably apocryphal.

The most obvious difference is the way functions were defined:

VOID* copy( dest, src, len )
    VOID* dest ;
    VOID* src ;
    int len ;
{
   ...
}

instead of:

void* copy( void* dest, void* src, int len )
{
    ... 
}

for example. Note the use of VOID; K&R C did not have a void type, and typically VOID was a macro defined as int. Needless to say, to allow this to work, the type checking in early compilers was permissive. From a practical point of view, the ability of C to validate code was poor (largely through lack of function prototypes and weak type checking), and hence the popularity of tools such as lint.
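A hypothetical sketch of the sort of macro being described (the exact spelling varied from codebase to codebase, and the function name below is made up):

#ifndef VOID
#define VOID int        /* no real void type before ANSI C */
#endif

VOID free_list();       /* "returns nothing" -- really an ignored int */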

In 1978 the definition of the language was the K&R book itself. In 1989 the language was standardised by ANSI (and later by ISO); the 2nd edition of K&R was revised to describe ANSI C, and is no longer regarded as the language definition. It is still the best book on C IMO, and a good programming book in general.

There is a brief description on Wikipedia which may help. Your best bet is to get hold of a first edition copy of K&R; however, I would not use it to learn C. Get a 2nd edition for that.

Clifford