Hello!

I am a student of Computer Science and have learned many of the basic concepts of what is going on "under the hood" while a computer program is running. But recently I realized that I do not understand how software events work efficiently.

In hardware, this is easy: instead of the processor "busy waiting" to see if something happened, the component sends an interrupt request.

But how does this work in, for example, a mouse-over event? My guess is as follows: if the mouse sends a signal ("moved"), the operating system calculates its new position p, then checks what program is being drawn on the screen, tells that program position p, then the program itself checks what object is at p, checks if any event handlers are associated with said object and finally fires them.

That sounds terribly inefficient to me, since a tiny mouse movement equates to a lot of CPU context switches (which I learned are relatively expensive). And then there are dozens of background applications that may want to do stuff of their own as well.

Where is my intuition failing me? I realize that even "slow" 500 MHz processors do 500 million operations per second, but it still seems like too much work for such a simple event.

Thanks in advance!

A: 

By what criteria do you determine that it's too much? It's as much work as it needs to be. Mouse events happen in the millisecond range. The work required to get it to the handler code is probably measured in microseconds. It's just not an issue.

Marcelo Cantos
I think he's asking exactly for some "quantification" of the workload required, to understand whether it's too much or not. (Obviously it's not, since it does work, but it's nicer when you have some numbers to look at.)
nico
+2  A: 

My understanding is as follows:

Every application/window has an event loop that is fed by OS interrupts; the mouse move arrives there. As far as I know, every window has its own queue/message pump (in Windows, since 3.1).

Every window contains controls. The window passes these events on to its controls, and each control determines whether the event is meant for it.

So it's not necessary to "compute" which item is drawn under the mouse cursor: first the window, then the control, determines whether the event is for them.
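A minimal sketch of that idea (hypothetical names, Python used purely for illustration; real toolkits do this in C with OS-filled message queues, e.g. via GetMessage/DispatchMessage on Windows):

```python
from collections import deque

class Control:
    """A rectangular control that decides for itself whether an event hits it."""
    def __init__(self, name, x, y, w, h):
        self.name, self.x, self.y, self.w, self.h = name, x, y, w, h
        self.hovered = False

    def hit_test(self, px, py):
        # The control, not the OS, checks whether the point falls inside it.
        return self.x <= px < self.x + self.w and self.y <= py < self.y + self.h

class Window:
    """Owns an event queue (which the OS would fill) and a list of controls."""
    def __init__(self, controls):
        self.queue = deque()     # the OS appends events here on interrupts
        self.controls = controls

    def dispatch(self, event):
        kind, x, y = event
        for control in self.controls:    # offer the event to each control
            control.hovered = control.hit_test(x, y)

    def run_once(self):
        # One turn of the event loop: drain the queue, dispatching each event.
        while self.queue:
            self.dispatch(self.queue.popleft())

win = Window([Control("button", 10, 10, 30, 20)])
win.queue.append(("mousemove", 15, 12))  # OS posts a mouse-move at (15, 12)
win.run_once()
print(win.controls[0].hovered)  # → True
```

In a real toolkit `run_once` would block in an OS call until an event arrives, so nothing busy-waits while the mouse is still.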

Julian de Wit
+5  A: 

Think of events like network packets, since they're usually handled by similar mechanisms. Now consider: your mouse sends at most a couple of hundred packets a second, and they're around 6 bytes each. That's nothing compared to the bandwidth capabilities of modern machines.
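To put rough numbers on that (the 200-per-second rate and 6-byte reports are the ballpark figures from above, not exact specs):

```python
packets_per_second = 200   # upper end of a typical mouse report rate
bytes_per_packet = 6       # approximate size of one mouse report

bytes_per_second = packets_per_second * bytes_per_packet
print(bytes_per_second)    # → 1200, i.e. about 1.2 kB/s of mouse traffic

# As a fraction of a modest 100 MB/s bus, in percent:
print(bytes_per_second / 100e6 * 100)  # → 0.0012
```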

In fact, you could make a responsive GUI where every mouse motion literally sent a network packet (86 bytes including headers) on hardware built around 20 years ago: X11, the fundamental GUI mechanism for Linux and most other Unixes, can do exactly that, and frequently was used that way in the late 80s and early 90s. When I first used a GUI, that's what it was, and while it wasn't great by current standards, given that it was running on 20 MHz machines, it really was usable.

Andrew McGregor
A: 

You're pretty much right, though mouse events occur at a fixed rate (e.g. a USB mouse on Linux delivers events 125 times a second by default, which really is not a lot), and the OS or application may further merge mouse events that are close in time or position before sending them off to be handled.
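That merging (often called event coalescing) can be as simple as collapsing a run of queued mouse-move events into the most recent one; a sketch, with a made-up tuple representation for events:

```python
def coalesce_moves(events):
    """Collapse consecutive mouse-move events, keeping only the newest
    position, while leaving other event types (clicks, keys) in order."""
    out = []
    for ev in events:
        if ev[0] == "move" and out and out[-1][0] == "move":
            out[-1] = ev          # replace the stale position with the new one
        else:
            out.append(ev)
    return out

queue = [("move", 1, 1), ("move", 2, 2), ("move", 3, 3),
         ("click", 3, 3), ("move", 4, 4)]
print(coalesce_moves(queue))
# → [('move', 3, 3), ('click', 3, 3), ('move', 4, 4)]
```

The handler then only ever sees the latest position instead of every intermediate one, which is why a slow application doesn't drown in backlogged mouse moves.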

nos