tags:

views:

132

answers:

4

How do people make GUIs? I mean, what is the basic building block or principle they use to draw visual components on the screen, like KDE, Gnome, etc.? Are there any simple examples of how to draw something like a rectangle on the screen by dealing directly with the hardware?

I am using a PC for those who are asking about my platform.

A: 

You probably need a graphics library such as, for example, OpenGL.

For direct hardware interaction, you probably need to do something like assembly, which is completely computer specific.

thyrgle
I know there are DirectX, OpenGL and others, but I want to understand the principle itself: the very low-level principle of graphical user interfaces.
Lettuce
@Downvoter: Why?
thyrgle
@Lettuce: That's dependent on platform.
thyrgle
@Downvoters: Please, I am curious to know what is wrong with my answer.
thyrgle
I doubt you would use OpenGL for a normal GUI. A contemporary fancy one, maybe, but not for simple boxes, lines, and triangles.
EboMike
@EboMike: You would probably use what the vendor provides, but what do you think they made it with? They probably wrote their own implementation of something like OpenGL and then built on top of that, so we could use their frameworks to create a GUI.
thyrgle
@EboMike It's quite common for graphics libraries to take advantage of the GPU, building on top of OpenGL or DirectX. You can normally draw lines and boxes a lot faster building on top of those.
nos
A: 

If you are willing to look through a lot of source code, you might look at Mesa 3D, an open source implementation of the OpenGL specification.

Thomas
+2  A: 

The simple answer is bitmaps; in fact, the same applies to fonts on terminals in the early days.

The original GUIs, like Xerox PARC's Alto GUI, were based on bitmap displays; the graphics were drawn with simple bitmap drawing tools and graphics libraries, using simple geometry to determine shapes like circles, squares, rectangles, etc., and then mapping them to display pixels.

Today's GUIs are the same, except with additional software and hardware that have sped up and improved the process and the performance of these GUIs.

The fundamental mapping of bits (e.g. 10101010) to pixels depends on the display hardware, but at a simplistic level you would provide a display buffer in memory and simply populate its bytes with the display data.

So for a basic monochrome bitmap, you'd draw a shape by providing the bits that represent it. For example, here is a simple 8x8-pixel button:

01111110
10000001
10000001
10111101
10111101
10000001
10000001
01111110

This is easier to see if I render it with # and SPACE instead of 1 and 0:

 ###### 
#      #
#      #
# #### #
# #### #
#      #
#      #
 ###### 

As a bitmap image it would look like this: http://i.imgur.com/i7lVQ.png (I know it's a bit small :) but this is the sort of scale we would've begun at, when GUIs were first developed.)
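That bit-to-character expansion can be sketched in Python (a minimal sketch; it assumes MSB-first packing, i.e. bit 7 of each byte is the leftmost pixel):

```python
# Each row of the 8x8 button is one byte; bit 7 is the leftmost pixel.
BUTTON = [0b01111110,
          0b10000001,
          0b10000001,
          0b10111101,
          0b10111101,
          0b10000001,
          0b10000001,
          0b01111110]

def render(rows, width=8):
    """Expand each row byte into '#' (bit set) and ' ' (bit clear)."""
    lines = []
    for row in rows:
        line = ''.join('#' if row & (1 << (width - 1 - i)) else ' '
                       for i in range(width))
        lines.append(line)
    return '\n'.join(lines)

print(render(BUTTON))
```

Swap the row bytes for other values and you have a different glyph; this is essentially how early bitmap fonts worked too.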

If you had a more complex display (e.g. 24-bit color), you'd provide each pixel using a 24-bit number.

Obviously some bitmaps cannot be drawn by hand like we've done above (for example, the border of a window). This is where geometry comes in handy: we can use simple functions to determine the pixel values required to draw a rectangle, or any other simple shape, and then build from there.
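As a sketch of that idea, here is a hypothetical `draw_rect` that computes the outline pixels of a rectangle in a flat display buffer (one byte per pixel for simplicity; real monochrome hardware would pack eight pixels per byte):

```python
def draw_rect(buf, width, x, y, w, h):
    """Set the outline pixels of a w-by-h rectangle with top-left (x, y).
    buf is a flat bytearray, one byte per pixel (1 = lit, 0 = dark)."""
    for i in range(x, x + w):
        buf[y * width + i] = 1            # top edge
        buf[(y + h - 1) * width + i] = 1  # bottom edge
    for j in range(y, y + h):
        buf[j * width + x] = 1            # left edge
        buf[j * width + x + w - 1] = 1    # right edge

WIDTH, HEIGHT = 16, 8
buf = bytearray(WIDTH * HEIGHT)           # toy display buffer
draw_rect(buf, WIDTH, 2, 1, 12, 6)
for row in range(HEIGHT):
    print(''.join('#' if buf[row * WIDTH + c] else ' ' for c in range(WIDTH)))
```

Circles, lines, and filled shapes follow the same pattern: compute which (x, y) pairs belong to the shape, then set the corresponding bytes in the buffer.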

Once you are able to draw graphics in this way on a display, you then hook a drawing loop onto a system interrupt to keep the display up to date (you redraw the display very often, depending on your system's performance). This way you can also handle interaction from user devices, e.g. a mouse.
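The drawing loop can be sketched like this (a hypothetical sketch: `redraw` and `handle_events` are placeholder callbacks, and a real system would be driven by a timer or vertical-blank interrupt rather than a plain counted loop):

```python
def run_frames(redraw, handle_events, n_frames):
    """Run a fixed number of poll-then-repaint cycles."""
    for _ in range(n_frames):
        handle_events()   # poll input devices (mouse, keyboard)
        redraw()          # repaint the display buffer
        # a real loop would now wait for the next refresh interrupt

frames = []
run_frames(lambda: frames.append('draw'), lambda: None, 3)
```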

Back in the early days, even before Xerox PARC's Alto, a number of early computer systems had vector-based displays; these made up an image by drawing lines on a CRT, treating the screen as a Cartesian plane. However, these displays never saw mainstream use, except perhaps in some early video games like Asteroids and Tempest.

slomojo
So how can you implement this on a computer? I mean, how can I really display this on the screen if I want to do that? Any example in any language will be fine.
Lettuce
This could be done in assembly language. At an extremely basic level you would allocate a buffer (a region of memory) that is sent to the display system (in our example, a mono display); we'd call that a display buffer. Then we can do various operations on the display buffer to render shapes or bitmap designs.
slomojo
Of course, nowadays you'd never NEED to touch anything as low-level as this, and there are many different ways to build GUIs. Building a full GUI from these very basic levels would be a pretty dumb thing to do, but you could do a number of exercises at this low level to understand the principles and mechanics at work.
slomojo
If you want to just play with bitmap rendering, it's possible in Python - http://pyx.sourceforge.net/examples/bitmap/index.html - or in ActionScript 3 (using the flash.display.BitmapData and flash.display.Bitmap classes, although I think only in 24-bit). Most languages provide ways to draw bitmaps.
slomojo
Vector graphics displays like Tektronix terminals (http://en.wikipedia.org/wiki/Tektronix_4010) saw widespread use in the 70's-early 80's, at least in the scientific market.
David Gelhar
Yep, they certainly were never mainstream though.
slomojo
+5  A: 

Well okay, let's start at the bottom. You have a monitor that displays an image. This image is a matrix of pixels, say, 1600x1200 pixels at 24-bit depth.

The monitor knows what to display from the video adapter. The video adapter knows what to display through the "frame buffer", which is a big block of memory that - in this example - contains 1600 * 1200 pixels, usually with 32 bits per pixel in contemporary cards.

The frame buffer is often accessible to the CPU as a big block of memory that it can poke into directly, and some adapters have GPUs that can render things into the frame buffer themselves, like shaded, textured triangles, so the CPU just sends commands through a "command buffer", telling it what to draw and where.
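Poking a pixel into such a frame buffer boils down to computing a byte offset from the pixel coordinates. A hedged sketch, modeling the frame buffer as a bytearray and assuming 4 bytes per pixel with `pitch` bytes per scanline (real hardware may pad each scanline for alignment, so the pitch can exceed width * 4):

```python
def put_pixel(fb, pitch, x, y, color):
    """Write one 32-bit pixel (color is 4 bytes, e.g. BGRA) at (x, y)."""
    offset = y * pitch + x * 4   # skip y scanlines, then x pixels
    fb[offset:offset + 4] = color

WIDTH, HEIGHT = 1600, 1200
PITCH = WIDTH * 4                  # no padding in this toy example
fb = bytearray(PITCH * HEIGHT)     # 1600x1200 @ 32bpp, about 7.3 MiB
put_pixel(fb, PITCH, 10, 20, b'\xff\x00\x00\xff')
```

Everything higher up (lines, rectangles, text, BitBlt) is ultimately a batch of writes like this, done either by the CPU or by the GPU on its behalf.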

Then you have the operating system, which loads a hardware driver that communicates with the video adapter.

The operating system usually offers functions to write to the frame buffer. Win32, for example, has lots of functions like BitBlt, Line, Text, etc. These end up talking to the driver.

Then you have something like Java, which renders its own graphics, typically using functions provided by the operating system.

EboMike
So learning any tool that can deal with the frame buffer would give me the freedom to make my own GUIs, right?
Lettuce
It still depends on which layer you want to work at. Are you writing your own operating system? Your own kernel? You cannot be platform-independent (unless you write your own kernel from scratch; have fun!). That's why we have things like Win32, Java, or Qt: to let you be platform-independent, despite the fact that you need a custom driver for every graphics adapter out there, and that each operating system has its own way of exposing the driver to you.
EboMike