I've always wondered this and still haven't found the answer. Whenever we use `cout` or `printf`, how exactly does that text get printed on the screen? How does it come out the way it does? (Probably quite a vague question here; I'll work with whatever you give me.) So basically, how are those functions made? Is it assembly? If so, where does that begin? This brings on more questions, like how on earth have they made the OpenGL/DirectX functions?

Break it down, people, break it down. :)

A: 

Crack open the source to glibc and see for yourself.

Short answer: a lot of C code, occasionally sprinkled with some assembler.

JUST MY correct OPINION
+1  A: 

Well, they go through a bunch of library functions, and eventually end up calling a write() system call, that sends the data to the appropriate file descriptor, which then causes it to turn up in a read() call in the terminal emulator (or command window shell, if this is Windows). The terminal/shell causes that data to be painted on the screen, probably by way of a bunch more system calls to send it to the graphics system.

Windows and Unix/Linux terminology is quite different, especially the concept of a shell is not at all the same thing in each. But the use of read() and write() calls is pretty similar in both cases.

System calls are special functions that cause the kernel to do specific things. How they're implemented is pretty magical and very dependent on what sort of processor you have, but usually it's by executing a trap or software-interrupt instruction that deliberately hands control to the kernel, which works out what was requested and tidies up afterwards.
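As a minimal sketch of where that chain of library functions bottoms out on a POSIX system (the bare-metal details vary by platform), everything above printf eventually boils down to something like this direct call to the write() wrapper:

    /* Minimal sketch: skipping printf/cout and calling the POSIX
       write() wrapper directly; file descriptor 1 is stdout. */
    #include <unistd.h>

    int main(void) {
        const char msg[] = "hello from write()\n";
        write(1, msg, sizeof msg - 1); /* the kernel takes it from here */
        return 0;
    }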

Andrew McGregor
+18  A: 
Norman Ramsey
+1 You beat me to it. :)
vladr
Then the whole font stuff adds a couple of dozen even more complex steps!
Martin Beckett
... unless your adapter is in good ol' text mode, that is. ;)
vladr
I think you've left out the most interesting/convoluted/murky part: terminal handling. To this day, it's still mostly a mystery to me.
Alex B
thanks for your answer, that process sounds quite difficult...
sil3nt
Where would I be able to find more information on the different sections you've listed there?
sil3nt
@sil3nt: The place to go might be a good textbook on operating systems. I haven't taught the subject in almost 15 years, and I am out of touch about what the good books are, but I think people still like Andy Tanenbaum's work in this area.
Norman Ramsey
cheers, thanks.
sil3nt
A: 

The magic really happens in the device driver. The OS presents an interface for application programmers to hook into. What they write gets massaged somewhat (e.g. buffered) and then sent to the device. The driver then takes the common representation and transforms it into signals the particular device can understand, so ASCII gets displayed in some reasonable format on the console, or in a PDF file, or on a printer, or on disk, in the form appropriate for that device. Try sending something other than ASCII (or UTF-8) that the driver does not understand and you will see what I am talking about.

For things the OS cannot handle (special graphics cards for example) the app writes the data directly to device memory. This is how something like DirectX works (to drastically oversimplify).

Each device driver is different. But each is the same in terms of how they interface with the OS, at least for each class of device (disk, NIC, keyboard, etc).
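To make the "massaged somewhat (e.g. buffered)" part concrete, here is a small sketch of the stdio buffering that sits between your printf call and the OS; BUFSIZ and the exact flush points are implementation details, but the shape is the same everywhere:

    /* Sketch of stdio's user-space buffering: with full buffering,
       output sits in the buffer until it fills or is flushed, and
       only then is handed to the OS (and on to the device driver). */
    #include <stdio.h>

    int main(void) {
        static char buf[BUFSIZ];
        setvbuf(stdout, buf, _IOFBF, sizeof buf); /* force full buffering */
        printf("still sitting in the buffer...");
        fflush(stdout); /* now the OS, and eventually the driver, see it */
        return 0;
    }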

Lance Diduck
+16  A: 

So basically, how are those functions made? Is it assembly? If so, where does that begin? This brings on more questions, like how on earth have they made the OpenGL/DirectX functions.

Those functions can be written in assembly or in C; it doesn't change much (and, anyway, you can do in C virtually anything you can do in assembly). The magic ultimately happens at the interface between software and hardware -- how you get there from printf and cout << can be as trivial as a few pointer operations (see the 286 example below, or read about cprintf further down), or as complex as going through multiple layers of diverse system calls, possibly even over a network, before eventually hitting your display hardware.

Imagine the following scenarios:

  1. I dig up my old 286 from under the dust and fire up MS-DOS; I compile and run the following program in real mode:

    void main(void) {
      /* Far pointer to the VGA color text buffer: segment 0xB800,
         offset 0x0000 (in real-mode Borland C the top 16 bits of the
         literal become the segment, the bottom 16 bits the offset) */
      long far* pTextBuf = (long far*)0xB8000000L;
      /* Poor man's gotoxy+cprintf imitation -- display "C:" in
         silver-on-black letters in the top-left corner of the screen:
         the bytes 'C' (0x43), attribute 0x07, ':' (0x3a), attribute 0x07
         pack little-endian into the long 0x073a0743 */
      *pTextBuf = 0x073a0743L;
    }
    
  2. I am connecting with my laptop's Windows HyperTerminal to my serial port, which is hooked up with a cable to the back of a SUN box, giving me access to the SUN box's console. From that console I ssh into another box on the network, where I run my program, piping its printf output through more. That output has traveled through a pipe into more, then through an SSH pseudo-tty across the network to my SUN box, from there over the serial cable to my laptop, and finally through Windows' GDI text-drawing functions before appearing on my screen.

Adding more detail to Norman's answer, hopefully more in the direction of your original question:

  • printf and cout << usually perform writes to stdout -- typically buffered writes, but that has not always been the case
    • back in the day, various compiler vendors (Borland, Microsoft), especially on DOS, provided you with functions like cprintf, which wrote directly to video memory without making any system calls, memcpy-style (see my 286 example above) -- more on that further down
  • writing to stdout is a system call, be it write under *nix, WriteFile or WriteConsole under Windows, INT 21h with AH=09h under DOS, etc.
  • the advantage of going through the stdout abstraction is that it allows the operating system to do some internal plumbing and perform redirection (be it to a tty descriptor, to a pipe, to a file, to a serial port, to another machine via a socket etc.)
    • it also indirectly makes it possible for multiple applications' stdouts to coexist on the same screen, e.g. in different windows -- something that would be much harder if each application tried to write directly to video memory on its own (like cprintf did on DOS, which nobody would call a true or usable multi-tasking operating system by today's standards.)
  • nowadays, a graphical application such as your rxvt console window application, PuTTY telnet/ssh client, Windows console, etc. will:
    • read your application's stdout:
      • from a tty descriptor (or equivalent) in the case of rxvt or of the Windows console
      • from a serial port if you are using something like Realterm to connect to an embedded system or to an older SUN box console
      • from a socket if you are using PuTTY as a telnet client
    • display the information by rendering it graphically, pixel by pixel, into the graphical application's window buffer/device context/etc.
      • this is typically done through yet another layer of abstraction and system calls (such as GDI, OpenGL etc.)
      • the pixel information ultimately ends up in a linear frame buffer, that is, a dedicated memory range (back in the days of 8 MHz CPUs, well before AGP, this area could reside in system RAM; nowadays it can be megabytes and megabytes of dual-ported RAM on the video card itself)
      • the video card's output stage (the RAMDAC, in the analogue days) would periodically read the frame buffer memory range (e.g. 60 times a second when your VGA adapter was set to 60 Hz), scanline after scanline (possibly doing palette lookups too), and transmit it to the display as analogue or digital electrical signals
  • back in the day, or even today when you boot your *nix box in single-user mode or go full-screen in a Windows console, your graphics adapter is actually in text mode
    • instead of a linear frame buffer, one (be it the cprintf implementation or the OS) writes to the much smaller 80x25 or 80x50 etc. text buffer array, where (e.g. in the case of VGA) only two bytes are needed per character cell: the character value (1 byte, e.g. A) plus its color attributes (1 byte) -- that is, its foreground (4 bits, or 3 bits + brightness bit) and background (4 bits, or 3 bits + blink bit) colors
    • for each pixel on each scanline, the RAMDAC (see the sketch after this list):
      • would keep track of which text column and which text row that pixel belongs to
      • would look up that column/row position's character value and attributes
      • would look up the character value in a simple bitmap font definition
      • would check whether the pixel being rendered should be foreground or background according to the character's glyph bitmap, and what color that would be based on the character attributes at that position
      • would possibly flip the foreground and background on even seconds if the blink bit was set, or if the cursor is showing at the current position
      • would draw the pixel
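To make that per-pixel walk concrete, here is a minimal sketch in C of the lookup the text-mode hardware performs (the font8x16 table is hypothetical, and the real RAMDAC did all this in silicon, not in C):

    #include <stdint.h>

    /* Hypothetical 8x16 bitmap font: one byte per glyph scanline. */
    extern const uint8_t font8x16[256][16];

    /* Given a text cell (character value + attribute byte) and a pixel
       position (px, py) within its 8x16 glyph, return the 4-bit color
       index to emit for that pixel. */
    uint8_t text_mode_pixel(uint8_t ch, uint8_t attr, int px, int py) {
        uint8_t glyph_row = font8x16[ch][py];    /* one scanline of the glyph */
        int is_fg = (glyph_row >> (7 - px)) & 1; /* MSB is the leftmost pixel */
        uint8_t fg = attr & 0x0F;                /* bits 0-3: foreground color */
        uint8_t bg = (attr >> 4) & 0x07;         /* bits 4-6: background color */
        return is_fg ? fg : bg;                  /* bit 7 (blink) ignored here */
    }

The blink bit and the cursor overlay are left out; they would simply swap fg and bg on alternating ticks, as described above.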

Start at the History of Video Cards and GPU pages on Wikipedia for a more in-depth look at how we got where we are today.

Also look at How GPUs Work and How Graphic Cards Work.

Cheers, V.

vladr
Start a command prompt, go full-screen (Alt+Enter), and type `debug`; then, at the `-` prompt, type `f b800:0000 640 23 2e` to fill the top 10 rows with yellow-on-green pound (`#`) signs. (debug takes hex: 0x640 = 1600 bytes = 10 rows x 80 cells x 2 bytes; 0x23 is `#`, and attribute 0x2e is yellow on green.)
vladr