So basically, how are those functions made? Is it assembly? If so, where does that begin? This brings up more questions, like how on earth they have made the OpenGL/DirectX functions.
Those functions can be assembly or C, it doesn't change much (and, anyway, you can do in C virtually anything you can do in assembly.) The magic ultimately happens at the interface of software and hardware -- how you get there from printf and cout << can be as trivial as a few pointer operations (see the 286 example below, or read about cprintf further down), or as complex as going through multiple layers of diverse system calls, possibly even going over networks, before eventually hitting your display hardware.
Imagine the following scenarios:
I dig up my old 286 from under the dust and fire up MS-DOS; I compile and run the following program in real mode:
void main(void) {
    /* Far pointer to the VGA color text buffer at segment 0xB800, offset 0 */
    long far *pTextBuf = (long far *)0xB8000000L;
    /* Poor man's gotoxy+cprintf imitation -- display "C:" (0x43, 0x3A) in
       silver-on-black letters (attribute 0x07) in the top-left corner */
    *pTextBuf = 0x073A0743L;
}
I am connecting with my laptop's Windows HyperTerminal to my serial port, which is hooked up with a cable to the back of a SUN box, through which I can access my SUN box's console. From that console I ssh into another box on the network, where I run my program which does printf, piping its output through more. The printf output has traveled through a pipe into more, then through an SSH pseudo-tty over the network to my SUN box, from there through the serial cable onto my laptop, and through Windows' GDI text-drawing functions before finally appearing on my screen.
Adding more detail to Norman's answer, hopefully more in the direction of your original question:
printf and cout << usually perform writes to stdout -- typically buffered writes, but that has not always been the case:
- back in the day, various compiler vendors (Borland, Microsoft), especially on DOS, provided you with functions like cprintf, which wrote directly to video memory without making any system calls, memcpy-style (see my 286 example above) -- more on that further down
- writing to stdout is a system call, be it write under *nix, WriteFile or WriteConsole under Windows, INT 21h function 09h under DOS, etc.
- the advantage of going through the stdout abstraction is that it allows the operating system to do some internal plumbing and perform redirection (be it to a tty descriptor, to a pipe, to a file, to a serial port, to another machine via a socket, etc.)
- it also indirectly makes it possible to have multiple applications' stdouts coexist on the same screen, e.g. in different windows -- something that would be much harder to do if each application tried to write directly to video memory on its own (like cprintf did on DOS -- not what would be called today a true or usable multi-tasking operating system.)
- nowadays, a graphical application such as your rxvt console window application, PuTTY telnet/ssh client, Windows console, etc. will:
- read your application's stdout:
- from a tty descriptor (or equivalent) in the case of rxvt or of the Windows console
- from a serial port if you are using something like Realterm to connect to an embedded system or to an older SUN box console
- from a socket if you are using PuTTY as a telnet client
- display the information by rendering it graphically, pixel by pixel, into the graphical application's window buffer/device context/etc.
- this is typically done through yet another layer of abstraction and system calls (such as GDI, OpenGL etc.)
- the pixel information ultimately ends up in a linear frame buffer, that is, a dedicated memory range (back in the days of 8MHz CPUs, well before AGP, this area could reside in system RAM, nowadays it could be megabytes and megabytes of dual-port RAM on the video card itself)
- the video card (through what used to be called a RAMDAC) would periodically read the frame buffer memory range (e.g. 60 times a second when your VGA adapter was set to 60Hz), scanline after scanline (possibly doing palette lookups too), and transmit it to the display as either analogue or digital electrical signals
- back in the day, or even today when you boot your *nix box in single-user mode or go full-screen in a Windows console, your graphics adapter is actually in text mode
- instead of a linear frame buffer, one (be it the cprintf implementation or the OS) writes to the much smaller 80x25 or 80x50 etc. text buffer array, where (e.g. in the case of VGA) only two bytes are necessary to encode each character: its value, such as A or ▒ or ♣ (1 byte), as well as its color attributes (1 byte) -- that is, its foreground (4 bits, or 3 bits + brightness bit) and background colors (4 bits, or 3 bits + blink bit)
- for each pixel on each scanline, the RAMDAC:
- would keep track of which text column and which text row that pixel belongs to
- would look up that column/row position's character value and attributes
- would look up the character value in a simple bitmap font definition
- would see whether the pixel being rendered, in the character value's glyph bitmap definition, should be set to foreground or background, and what color that would be based on the character attribute at that position
- would possibly flip the foreground and background on even seconds, if the blink bit was set or the cursor is showing and is at the current position
- would draw the pixel
Start at the History of Video Cards and GPU pages on Wikipedia for a more in-depth look at how we got where we are today.
Also look at How GPUs Work and How Graphic Cards Work.
Cheers,
V.