First, I'd like to establish that the acceptable end-to-end latency for a real-time system in the financial world is less than 200 ms. Okay, here's what I'm after: in the design of real-time systems there are "design patterns" (or techniques) that increase performance (i.e. reduce processing time, improve scalability, etc.).

An example of what I'm after is the use of GUIDs instead of sequential numbers for allocating primary keys. The rationale for GUIDs is that each handler has its own primary key generator and never has to "consult" the others. This allows processing to happen in parallel and permits scaling.
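As a rough sketch of that idea (my own illustration in C++, not from the question; the KeyGenerator type is hypothetical, and a real system would use a proper UUID library rather than this simplified 128-bit random id):

#include <cstdint>
#include <cstdio>
#include <random>

// Each handler owns its own generator; there is no shared counter and no
// coordination between handlers.
struct KeyGenerator {
    std::mt19937_64 rng{std::random_device{}()};

    // Format 128 random bits in GUID layout. Not a standards-compliant
    // UUIDv4 (no version/variant bits); it only illustrates independent
    // key generation.
    void next(char out[37]) {
        std::uint64_t hi = rng(), lo = rng();
        std::snprintf(out, 37, "%08llx-%04llx-%04llx-%04llx-%012llx",
                      (unsigned long long)(hi >> 32),
                      (unsigned long long)((hi >> 16) & 0xFFFF),
                      (unsigned long long)(hi & 0xFFFF),
                      (unsigned long long)(lo >> 48),
                      (unsigned long long)(lo & 0xFFFFFFFFFFFFULL));
    }
};

int main() {
    KeyGenerator gen;        // in practice, one per handler/thread
    char key[37];
    gen.next(key);
    std::printf("%s\n", key);
}

Because every handler can mint keys locally, inserts from many handlers never contend on a central sequence.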

Here are some more. I'll try and add to the list when I'm able to.

I bow to the collective wisdom of the community. Thanks heaps!

+2  A: 

For general real-time system work, the classic rule is to go after variability and kill it. Truly hard real-time means using static schedules, streamlined operating systems, efficient device drivers, and rock-hard priorities. No dynamic or adaptive machinery is feasible if you really want computation X to finish within a known time bound T.
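As a small illustration of the "rock-hard priorities" point (my own sketch, assuming a Linux/POSIX target; the priority value 80 is arbitrary), you would pin the time-critical thread to a fixed SCHED_FIFO priority so the scheduler never adapts it:

#include <pthread.h>
#include <sched.h>
#include <cstdio>

int main() {
    sched_param param{};
    param.sched_priority = 80;   // fixed value inside SCHED_FIFO's priority range
    int err = pthread_setschedparam(pthread_self(), SCHED_FIFO, &param);
    if (err != 0)
        std::fprintf(stderr, "pthread_setschedparam failed: %d (needs privileges)\n", err);
    // ... run the computation that must finish within the known bound T ...
}

Static schedules and stripped-down drivers follow the same principle: remove every source of timing variability you do not control.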

I guess what you mean here is not really real-time in that respect, and I guess the system is a bit more complicated than reading sensors, computing a control loop, and activating actuators. Some more details would be nice, to know what the constraints are here.

jakobengblom2
+1  A: 

You've already mentioned Event-Driven Architecture; I'd suggest you have a look at Staged Event-Driven Architecture (SEDA).

A stage is essentially a queue of events plus a function that operates on each event. The "unconventional" thing about this architecture is that each stage can run in its own thread, and the functions typically need asynchronous I/O, etc. Arranging programs this way is awkward at first, but it allows for all kinds of magic - like QoS, tweaked scheduling, etc.
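As a rough sketch of a single stage (my own illustration, not code from Welsh's work; the Stage and Event names are assumptions), each stage owns a queue plus a worker thread that drains it:

#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

// Hypothetical event type; a real system would carry richer payloads.
struct Event { std::string payload; };

// One SEDA-style stage: a queue of events and a handler run on its own thread.
class Stage {
public:
    explicit Stage(std::function<void(Event&)> handler)
        : handler_(std::move(handler)), worker_([this] { run(); }) {}

    ~Stage() {
        { std::lock_guard<std::mutex> lk(m_); done_ = true; }
        cv_.notify_one();
        worker_.join();
    }

    void enqueue(Event e) {
        { std::lock_guard<std::mutex> lk(m_); q_.push(std::move(e)); }
        cv_.notify_one();
    }

private:
    void run() {
        for (;;) {
            std::unique_lock<std::mutex> lk(m_);
            cv_.wait(lk, [this] { return done_ || !q_.empty(); });
            if (q_.empty()) return;          // shutting down and fully drained
            Event e = std::move(q_.front());
            q_.pop();
            lk.unlock();
            handler_(e);   // the stage's work; typically forwards to the next stage's queue
        }
    }

    std::function<void(Event&)> handler_;
    std::queue<Event> q_;
    std::mutex m_;
    std::condition_variable cv_;
    bool done_ = false;
    std::thread worker_;
};

In a full pipeline each handler pushes its result onto the next stage's queue, and because every stage has its own thread and queue you can apply per-stage scheduling, admission control, or QoS policies.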

See Welsh's Berkeley dissertation and his web site. You might also look at Minor Gordon's project (from Cambridge UK) called yield. He had some very good results. The project may seem geared towards Python at first, but it can be used for pure C++ as well.

ceretullis
+1  A: 

As basic as it may sound, most line-of-business applications are filled with redundant calculations; eliminate them. Refactoring calculations is the backbone of optimization patterns. Every time a processing loop appears, you have to ask:

What within this loop is calculated that would produce the same result if computed outside the loop? As a basic example:

for (int i = 0; i < x / 2; i++)
  // do something

Here you can safely compute x/2 before the loop and reuse that value (modern compilers already take care of such trivial optimizations).
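Just to make the hoisted form explicit (same loop, nothing beyond the rule stated above):

int half = x / 2;    // computed once, before the loop
for (int i = 0; i < half; i++)
  // do something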

To see the ramifications of this simple rule, consider the same idea applied to database queries. To avoid an INNER JOIN of two tables just to fetch a highly recurrent field, you can violate the normalization rules and duplicate that field in the table related to the one that holds the value. This avoids repetitive join processing and can free up parallelization, as only one of the tables needs to be locked in transactions. Example:

Queries against the Client table need the client's discount recurrently, but the discount is stored in the client type table.
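A minimal sketch of the before/after shapes, written as C++ structs purely for illustration (the ClientType/Client names and the discount field are my own, not from the answer):

#include <string>

// Before: the discount lives only on the client-type row, so reading a
// client's discount needs a join between Client and ClientType.
struct ClientType { int id; double discount; };
struct Client     { int id; int client_type_id; std::string name; };

// After (denormalized): the discount is duplicated onto the client row.
// The recurring query now touches one table, at the cost of keeping the
// copy in sync whenever the client type's discount changes.
struct ClientDenormalized {
    int id;
    int client_type_id;
    std::string name;
    double discount;   // copy of ClientType::discount
};

The trade-off is exactly the one described above: faster, lock-friendlier reads in exchange for extra write-time bookkeeping.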

Caerbanog
A: 

Don't "fix" anything unless you know for sure that it's "broken".

The first thing I'd do is tune the blazes out of that program that has to run fast. I would use my favorite technique. Then, chances are, there will be enough wiggle room to fool around with architecture.

Mike Dunlavey