This question is relevant to an application I'm currently working on, but I don't have much to go on in terms of finding an answer. The basic question is: what is the delay (or how can I estimate it) between the moment a user presses a mouse button and the moment the event is posted to the window's message queue, where it becomes available for retrieval via the GetMessage or PeekMessage functions?

Are we talking about an order of nanoseconds, microseconds, or milliseconds? Let's assume there is only one process running, so it has the CPU essentially to itself, minus whatever tasks the operating system must perform. As I see it, the OS first receives an interrupt from the device, stores the event details in some internal structure, determines which process/window has the focus, and finally places the event onto that window's message queue. Is this a correct overview, or am I missing some significant steps?

Finally, is there a better alternative to message queues for getting input when performance (even a 1 ms delay) matters? A long time ago I remember implementing my own interrupt handlers for processing keyboard input in DOS. I don't know if something similar is possible these days, especially from user-mode applications, but is there still a way to remove some of the overhead associated with passing messages around?