views: 441
answers: 6

I have an embedded device (Technologic TS-7800) that advertises real-time capabilities, but says nothing about 'hard' or 'soft'. While I wait for a response from the manufacturer, I figured it wouldn't hurt to test the system myself.

What are some established procedures to determine the 'hardness' of a particular device with respect to real time/deterministic behavior (latency and jitter)?

Being at college, I have access to some pretty neat hardware (good oscilloscopes and signal generators), so I don't think I'll run into any issues in terms of testing equipment, just expertise.

+5  A: 

With that kind of equipment, it ought to be fairly easy to sync the o-scope to a steady clock, produce a spike each time the real-time system produces an output, and see how much that spike varies from center. The less the variation, the greater the hardness.

Robert Harvey
+5  A: 

To clarify Bob's answer maybe:

Use the signal generator to generate a pulse at some varying frequency; a random distribution across some range would be best.

Use the signal generator (trigger signal) to start the scope.

The RTOS has to respond, do its thing, and send an output pulse.

Feed the RTOS output into input 2 of the scope.

Set the scope to persist/collect mode. Get the scope to start on A and stop on B, if you can.

In an ideal world, get it to measure the distribution for you; a LeCroy would. Start with a much slower trace than you would expect, because you need to be able to see the slow outliers. You'll be able to see the distribution.

Assuming a normal distribution, the SD of the response-time variation is the SOFTNESS. (This won't really happen in practice, but if you don't get outliers it is reasonably useful.) If there are outliers of large latency, then the RTOS is NOT very hard: it does not meet deadlines well and is unsuitable for hard real-time work.

Many RTOS-like things have a good left edge to the curve, sloping down like a 1/f curve; that's indicative of combined jitters. The thing to look out for is spikes of slow response at the right end of the trace. If there are no outliers, keep repeating the experiment with faster traces to get a good image of the slope. That should be good for some speculative conclusions in your paper.

If, for your application, a delta of, say, 1 µs is okay and you measure 0.5 µs, it's all cool.

Anyway, you can publish the results (quite possibly in the academic sense, and certainly on the web).

Link from this Question to the paper when you've written it.

Tim Williscroft
Thanks for the extra detail. I'll let you know what comes of all this, probably as another answer to this question.
drhorrible
SD doesn't say much if you don't know the distribution. A characteristic of a non-real-time system is that the task usually takes, say, 0.5 µs, but sometimes a whole second; the SD can be very low if the one-second spikes happen rarely, but the actual performance won't be acceptable even for soft real time.
ima
IMA: edited to correct the impression that I expect normal results.
Tim Williscroft
I would completely agree with this answer if the question were about some hardware signal processor. But speaking of the TS-7800 and other embedded computers: RT problems don't come from megahertz jitter; they come from the OS deciding to write something to flash memory, or interrupting to handle some low-priority task on a different port.
ima
A: 

I understand being a geek, but using an oscilloscope to test a computer with Ethernet/USB/other digital ports and a HUGE internal state (RAM) is both ineffective and unreliable.

Instead of watching waveforms, you can connect any PC to the output port and run a proper statistical analysis.

The established procedure (if the input signal is analog by nature) is to test the system against several characteristic inputs — traditionally spikes, step functions, and sine waves of different frequencies — and measure the phase shift and variance for each input type. The worst case is then used in the specifications of the system.

Again, if you are using standard ports, you can easily generate those on a PC. If the input is truly analog, a separate DAC, or simply a good sound card, would be needed.

Now, that won't say anything about the OS being real-time: it could be running vanilla Linux or even Win CE and still produce good, stable results in those tests if the hardware is fast enough.

So, you need to simulate heavy and varying loads on the processor, memory, and all ports, let it heat up and eat memory for a few hours, and then repeat the tests. If latency stays constant, it's hard real-time. If latency doesn't increase above an acceptable limit under any load and input signal type, it's soft. Otherwise, it's advertisement.

P.S.: The implication is that even for critical systems you don't actually need hard real-time if you have the hardware.

ima
Can you link to any of these established procedures (code or description)? What are the 'standard' ports?
drhorrible
http://www.merriam-webster.com/dictionary/standard
ima
Actually, interrupt latency for "fast" hardware is shocking, and modern PC hardware is NOT designed to do real time. Your measurement regime assumes the measuring PC can perform measurements in real time. In practice, the jitter experienced makes this unlikely for the sort of times where RTOS considerations matter. Interrupt latency is approximately 20 µs for PC hardware; with an OS, approximately 50-60 µs. Test gear like a scope is designed to sample at steady rates of 1 gigasample per second or higher; even cheap scopes do 100 megasamples per second, with no jitter.
Tim Williscroft
No, it assumes you know your testing device's (PC's) latency and jitter characteristics, so you can correct for them when you collect a large enough sample of data.
ima
Do you realize that the TS-7800 is not very different from a PC in this respect? It's a board with a 500 MHz ARM processor, DDR memory, flash, and USB: basically, a hand-held computer in a different casing. It's not capable of producing even noise at those frequencies. All we need is a testing device that can process the signal at a sufficiently faster rate, which a PC is.
ima
Thanks, ima, for _not_ answering any of my questions. Instead, you chose to say all the other answers are wrong, and then, when I asked for clarification, you gave me the non-technical definition of a very common word.
drhorrible
I gave you an answer, even if you don't like it. The dictionary perfectly explains what "standard ports" are, and the procedure is generally described in the answer itself. As for links, I hope you are not banned from Google? Well, good luck oscilloscoping a device where an FPGA is connected to a 50 MHz bus and the contents of those 128 MB of memory can drastically change all characteristics.
ima
So, on the one hand, you (correctly) say that I have a complex machine I'm trying to test, but on the other hand, it should have a single standard port to test? As far as I can tell, it has many ports: USB, RS-232, RS-485, SATA, GPIO, all of which follow a standard. The definition you linked doesn't mention ports, so seriously, which one is "the" output port? As for links, if I had found enough via Google, I wouldn't have come here. So give me a lmgtfy.com link; I don't care. It seems like you do know an answer, or I wouldn't keep responding to you.
drhorrible
Bingo. Those ports are standard ports. But one would expect _you_ to know what the output port is. If you haven't decided which ports you will use for input and output, what are you going to test? Oh, never mind, save yourself the generous courtesy of responding to me.
ima
OK, now are you going to answer my question, or continue toying with me?
drhorrible
+1  A: 

Hard real-time has more to do with how your software works than with the hardware on its own. When asking if something is hard real-time, the question must be applied to the complete system (hardware, RTOS, and application). This means hard versus soft real-time is a system design issue.

Under loading exceeding the specification, even a hard real-time system will fail (hopefully with a proper failure indication), while a soft real-time system with low loading can give hard real-time results. How much processing must happen in time, and how much pre/post-processing can be performed, is the real key to hard versus soft real-time.

In some real-time applications, some data loss is not a failure; it just needs to stay below a certain level, again a system criterion.

You can generate inputs to the board and have a small application count them and check at what level data starts to be lost. But that gives you a rating specific to that system running that application. As soon as you start doing more processing, your computational load increases, and you now have a different hard real-time limit.

Running a bare-bones scheduler, this board will give great, predictable hard real-time performance for most tasks. Running a full RTOS with a heavy computational load, you will probably only get soft real-time.

Edit after comment
The most efficient and easiest way I have used to measure my software's performance (assuming you use a scheduler) is to use a free-running hardware timer on the board and timestamp the start and end of my cycle. Or, if you run a full RTOS, timestamp your acquisition and transition. Save your max time and run an average on the values over a second. If your average is around 50% and your max is within 20% of your average, you are OK. If not, it is time to refactor your application. As your application grows, the cycle time will grow. You can monitor the effect of all your software changes on your cycle time.

Another way is to use a hardware timer to generate a cyclical interrupt. If you are in time, reset the interrupt. If you miss the deadline, have the interrupt handler signal a failure. This will only give you a warning once your application is taking too long, but it relies on hardware and interrupts, so you can't miss it.

These solutions also eliminate the need to hook up a scope to monitor the output, since the timing information can be displayed in any kind of terminal by a background task. If it is easy to monitor, you will monitor it regularly, solving timing problems as soon as they are introduced rather than at the end.

Hope this helps

Gerhard
Would it be possible to develop a procedure that can be run at different stages of application development? I want to have a baseline and monitor the system as we write the software. We won't have actual hardware peripherals until much later in the project; until then, we will have them all emulated by a desktop PC. Overall, I clearly don't have much experience in this area, but I think the constraints are tight. We have a task that must run at 100-250 Hz, with fresh sensor data before execution, and the resulting actuator commands must be sent before the next cycle.
drhorrible
Dr Horrible, you should build a stress test while evaluating the board in the first place. It doesn't have to do the right thing, just the right amount of computing and branching. It helps if you already know how to solve the problem.
Tim Williscroft
+1  A: 

I have the same board here at work. It's a slightly-modified 2.6 Kernel, I believe... not the real-time version.

I don't know that I've read anything in the docs yet that indicates that it is meant for strict RTOS work.

San Jacinto
BTW, you can call support. I've gotten "Grant" 3 times now. He's pretty helpful.
San Jacinto
Use the gate array for the hard realtime bit. That's what it's for.
Tim Williscroft
That's what we realized. Now using a different board, running VxWorks.
drhorrible
+1  A: 

I think that this is not a hard real-time device, since it runs no RTOS.

swegi
That's what we realized. Now using a different board, running VxWorks
drhorrible