views: 104
answers: 3

Consider a system with 100-plus interrupt sources from various sensors, any of which may fire at the same time. How can the software be designed to handle them efficiently?

+7  A: 

It depends if you're optimizing for latency or throughput.

Since you asked about efficiency, I'll guess you're looking at throughput. In that case, one tried-and-true pattern is to have the interrupt handlers read the sensors, queue a command and state, and return immediately.

A non-interrupt software thread then picks the commands off the queue and dispatches events to handlers. This minimizes your task-switch time. You can use domain-specific logic to combine commands, discard commands that are no longer relevant, and so on.

This is essentially how windowing systems work. Each mouse click, mouse movement, keyboard press, etc. results in a command being queued. The windowing system picks the commands off and calls a corresponding handler. There's extensive logic for throwing out commands that are not relevant by the time they are picked off the queue, for combining commands, and for expediting them.

Network stacks use the same model. Packets are queued by the network level, then a main loop picks them off and uses an inversion of control model to process each packet.

Rob
@Rob: Interesting to know the approach followed by network stacks to process packets.
S.Man
@sman: Used to do that for a living at Hewlett Packard. One interesting thing we found is that the task-switch time for an RTOS just killed us. Ended up with a very simple architecture: each layer would parse its header, then call into the next. Interrupt handlers would simply empty the hardware buffer, queue the packet, and set a "needs attention" semaphore.
Rob
+2  A: 

The rule of thumb is that interrupt handlers should do as little as they possibly need to do to handle the interrupt. Keep them "as short as possible".

For example, if your device has to receive messages on a serial port and respond to them: the UART serial RX interrupt handler should just read the incoming byte and store it in a buffer (while guarding against buffer overflow). That's it. A main-loop task should later process the data in the buffer and build any response in a separate buffer, so it can be transmitted by a serial TX interrupt handler.
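The RX side of that pattern can be sketched as a ring buffer shared between the ISR and the main loop. This is a hedged sketch: on real hardware the byte would come from a UART data register and the function would be registered as an interrupt vector; here the byte is passed in as a parameter so the logic can be shown standalone.

```c
/* Sketch of a "keep the ISR short" UART RX handler: store the byte,
   guard against overflow, return. All protocol handling lives in the
   main loop, which calls uart_read(). Names are illustrative. */
#include <stdint.h>

#define RX_BUF_LEN 64

static volatile uint8_t rx_buf[RX_BUF_LEN];
static volatile unsigned rx_head, rx_tail;

/* ISR body: on real hardware, `byte` would be read from the UART's
   RX data register. No parsing or response logic here. */
void uart_rx_isr(uint8_t byte)
{
    unsigned next = (rx_head + 1) % RX_BUF_LEN;
    if (next != rx_tail) {       /* guard: don't overwrite unread data */
        rx_buf[rx_head] = byte;
        rx_head = next;
    }                            /* else: byte dropped; a real driver
                                    would count this as an overrun */
}

/* Main-loop side: returns 1 and stores a byte in *out if one is
   available, 0 if the buffer is empty. */
int uart_read(uint8_t *out)
{
    if (rx_tail == rx_head)
        return 0;
    *out = rx_buf[rx_tail];
    rx_tail = (rx_tail + 1) % RX_BUF_LEN;
    return 1;
}
```

The main loop then feeds bytes from `uart_read` into the protocol state machine at its leisure, so a burst of RX interrupts never blocks other handlers for longer than one buffer store.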

In the past, I've seen embedded software where the interrupt handler did the entire communication-protocol handling. It worked, but the handler took a long time to run, delaying other interrupt handlers and increasing the risk that they would not process their events in time.

Craig McQueen
+2  A: 

If your system really does have hundreds of interrupt sources, efficiency may not be the only problem. You may have to do a "holdoff analysis" to make sure you won't fail requirements in the worst case.

First, measure the worst case time for each ISR. Then, for each interrupt X:

  1. Determine the deadline: what is the maximum time that can elapse between interrupt X occurring and disaster (losing data, missing a communications window, etc.)?
  2. Determine the worst-case set of other ISRs that can hold off servicing interrupt X. Depending on the priority structure of your processor, you may have to consider interrupts that occur just before X, and ones that occur while X is pending.
  3. Add up the times of all the ISRs identified in step 2. If the sum is greater than the deadline, you need to redesign.

Redesign can include making the ISRs faster, adjusting FIFO lengths, changing the frequency of interrupts (gathering more data less often, or vice versa), or adjusting sequences so that certain interrupts are guaranteed not to occur simultaneously. There is no one-size-fits-all strategy (although faster ISRs are almost always a good thing).

AShelly