views: 168

answers: 3

I'd like to hear various opinions on how to safely use C++ in mission-critical real-time applications.

More precisely, it is probably possible to create a macro/template/class library for safe data manipulation (guarding against overflows, having division by zero produce infinity, or allowing division only for special "non-zero" data types), arrays with bounds checking and foreach loops, safe smart pointers (similar to boost::shared_ptr, for instance), and even a safe multithreading/distributed model (message passing and lightweight processes like the ones defined in the Erlang language).
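
A minimal sketch of what one such building block might look like. The NonZero wrapper and safe_div are hypothetical names invented for illustration (they are not part of any existing library); bounds-checked access is shown with std::array::at(), which throws std::out_of_range instead of invoking undefined behaviour:

    #include <array>
    #include <stdexcept>

    // Hypothetical "non-zero" wrapper: the invariant (value != 0) is checked
    // once at construction, so any division through it cannot divide by zero.
    template <typename T>
    class NonZero {
    public:
        explicit NonZero(T v) : value_(v) {
            if (value_ == T{}) throw std::invalid_argument("NonZero: value is zero");
        }
        T get() const { return value_; }
    private:
        T value_;
    };

    // Division is only offered for a NonZero denominator.
    template <typename T>
    T safe_div(T numerator, NonZero<T> denominator) {
        return numerator / denominator.get();
    }

    double example() {
        std::array<double, 4> a{{1.0, 2.0, 3.0, 4.0}};
        return safe_div(a.at(2), NonZero<double>(2.0));  // a.at(5) would throw
    }

Whether exceptions are acceptable at all in a given real-time context is a separate design decision; a variant returning an error code or a sentinel value would follow the same pattern.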

We could then prohibit some dangerous C/C++ constructs such as raw pointers, some raw types, the native "new" operator, and native C/C++ arrays (for the application programmer, not for the library writer, of course). Ideally, we would create a special preprocessor/checker; at the very least we need some formal checking procedure that can be applied to the sources with a tool or manually by a reviewer.

So, my questions:

1) Are there any existing libraries/projects that implement such an idea? (Embedded C++ is apparently not what I am looking for.)

2) Is it a good idea at all? Might it be useful only for prototyping some other hypothetical language? Or is it totally unusable?

3) Any other thoughts (or links) on this matter are also welcome.

Sorry if this question is not actually a question, off-topic, a duplicate, etc., but I haven't found a more appropriate place to ask it.

+6  A: 

For good rules on how to write C++ for mission-critical real-time applications, have a look at the Joint Strike Fighter coding standards. Many of the rules there are based on the MISRA C coding standards, which I believe are proprietary. PC-Lint is a C++ code checker with rule sets along the lines of what you want (including the MISRA rules). I believe you can customize your own rules as well.

gregg
Thank you for the references; if it's actually possible to customize Lint, that's an interesting possibility.
Thanks for the link.
John Dibling
+2  A: 

We use C++ in mission-critical real-time applications, although I suppose we have it easy (in theory) because we only have to provide real-time guarantees as good as the hardware our clients use. Thus, sufficient profiling lets us get by without mlockall() or stack pre-loading or any other RT traditions. As for the language itself, I think everyday modern C++ coding practices (ones that discourage C concepts) are entirely sufficient to write robust applications that can be used in RT contexts, given 21st-century hardware.

Unit tests and QA should be the main focus of effort, instead of in-house libraries that duplicate existing language features.

Cubbi
Maybe you are right... it may be better to emphasize capabilities that don't yet exist, such as a distributed/multithreading synchronization and memory model. Overloading +, -, /, * and so on gives no guarantee against a bad programmer writing a wrong program :) Maybe...
@user396672: we used to have some such special libraries, and they did make sense back in 1992 when C++ was immature, but they became unmaintainable over the years while both the standard library and generic libraries such as Boost became more efficient and much better understood. But yes, our priority-inheriting message-passing library is all our own.
Cubbi
+2  A: 

If you're writing critical high-performance real-time S/W in C++, you probably need every microsecond you can get out of the hardware. As such, I wouldn't necessarily suggest implementing all the extra checks you mentioned, at least not the ones with overhead implications for program execution. You can obviously mask floating-point exceptions to prevent divide-by-zero from crashing the program.
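
As a point of reference, here is a minimal sketch using the standard <cfenv> facilities. On most desktop platforms floating-point exceptions are masked by default, so 1.0/0.0 quietly yields infinity and merely sets a sticky status flag that can be inspected later; the exact trap-masking behaviour (and whether #pragma STDC FENV_ACCESS is honoured) is platform- and compiler-specific, so treat this as an illustration rather than a hardened recipe:

    #include <cfenv>
    #include <cstdio>

    int main() {
        std::fenv_t saved;
        std::feholdexcept(&saved);   // save the FP environment, clear flags, and
                                     // install non-stop (masked) mode where supported

        volatile double zero = 0.0;
        double r = 1.0 / zero;       // IEEE 754: yields +inf, raises the FE_DIVBYZERO flag

        if (std::fetestexcept(FE_DIVBYZERO))
            std::printf("divide-by-zero occurred, result = %f\n", r);

        std::fesetenv(&saved);       // restore the original FP environment
        return 0;
    }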

Some observations:

  • Peer review all code (possibly with multiple reviewers). This will go a long way toward improving quality without requiring lots of runtime checks.
  • DO make use of diagnostic tools and of asserts that are compiled out of release builds.
  • Do make use of simulation systems to test on non-embedded hardware.
  • C++ was specifically designed without things like bounds checking for performance reasons.

In general I don't suggest arbitrarily restricting the language, although making use of RAII and smart pointers should have minimal overhead and provide a nice benefit.
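
To make the overhead distinction concrete, here is a short sketch using the C++11 standard equivalents of the Boost pointers mentioned above (the Sensor type is invented purely for illustration): unique_ptr gives RAII at essentially raw-pointer cost, while shared_ptr pays for shared ownership with a separate control block and reference counting:

    #include <memory>
    #include <vector>

    struct Sensor {                      // hypothetical resource type
        explicit Sensor(int id) : id(id) {}
        int id;
    };

    // Exclusive ownership: no control block, no reference counting.
    std::unique_ptr<Sensor> make_sensor(int id) {
        return std::unique_ptr<Sensor>(new Sensor(id));   // std::make_unique is C++14
    }

    // Shared ownership: an extra control block and (typically atomic) ref counts.
    void shared_ownership_example() {
        std::shared_ptr<Sensor> primary = std::make_shared<Sensor>(42);
        std::vector<std::shared_ptr<Sensor>> observers;
        observers.push_back(primary);    // both handles keep the Sensor alive
    }                                    // Sensor destroyed when the last shared_ptr goes away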

Someone else pointed out that if you want Ada, just use Ada.

Mark B
Smart pointers, unfortunately, do have overhead. shared_ptr, for instance, uses an intermediate object (the control block) that actually refers to the object we intend to point to. RAII, you are right, has almost no overhead. Thank you for your answer; I'm really not sure about the necessity of formalized restrictions... and I asked the question because I'm not sure :)
@user396672 That's why Stroustrup prefers unique_ptr over shared_ptr so much: http://www2.research.att.com/~bs/C++0xFAQ.html#std-shared_ptr
Cubbi