views:

91

answers:

1

People like Alexander Stepanov and Sean Parent advocate a formal and abstract approach to software design.
The idea is to break complex systems down into a directed acyclic graph (DAG) and to hide cyclic behaviour inside nodes that represent it.
Parent gave presentations at BoostCon and at Google (the slides from BoostCon introduce the approach on p. 24; there is also a video of the Google talk).

While I like the approach and think it's a necessary development, I have trouble imagining how to handle subsystems with amorphous behaviour.
Imagine, for example, a common pattern for state machines: an interface that all states support, with the differing behaviour in the concrete implementations of the states.
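To make that pattern concrete, here is a minimal sketch in C++ (all names - State, On, Off, Machine - are made up for illustration):

```cpp
#include <cassert>
#include <memory>

// Hypothetical example of the pattern: an interface all states support,
// with the differing behaviour in the concrete state implementations.
struct State {
    virtual ~State() = default;
    virtual bool active() const = 0;
    virtual std::unique_ptr<State> next() const = 0;  // state transition
};

struct On : State {
    bool active() const override { return true; }
    std::unique_ptr<State> next() const override;  // defined below, needs Off
};

struct Off : State {
    bool active() const override { return false; }
    std::unique_ptr<State> next() const override { return std::make_unique<On>(); }
};

std::unique_ptr<State> On::next() const { return std::make_unique<Off>(); }

// The machine only knows the interface; behaviour varies per concrete state.
struct Machine {
    std::unique_ptr<State> state = std::make_unique<Off>();
    void toggle() { state = state->next(); }
    bool active() const { return state->active(); }
};
```

The point is that which code runs on `toggle()` depends on the current state object, not on anything visible in the machine itself.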

How would one solve that?
Note that I am just looking for an abstract approach.

I can think of hiding that behaviour behind a node and defining different sub-DAGs for the states, but that complicates the design considerably if you want to influence the behaviour of the main DAG from within a sub-DAG.

+1  A: 

Your question is not clear. Please define "amorphous subsystems".

You are "just looking for an abstract approach", but then you seem to want details about an implementation in a conventional programming language ("common pattern for state-machines"). So, what are you asking for? How to implement nested finite state machines?

Some more detail will help the conversation.

For a real abstract approach, look at something like Stream X-Machines:

... The X-machine model is structurally the same as the finite state machine, except that the symbols used to label the machine's transitions denote relations of type X→X. ...

The Stream X-Machine differs from Eilenberg's model, in that the fundamental data type

X = Out* × Mem × In*,

where In* is an input sequence, Out* is an output sequence, and Mem is the (rest of the) memory.

The advantage of this model is that it allows a system to be driven, one step at a time, through its states and transitions, while observing the outputs at each step. These are witness values, that guarantee that particular functions were executed on each step. As a result, complex software systems may be decomposed into a hierarchy of Stream X-Machines, designed in a top-down way and tested in a bottom-up way. This divide-and-conquer approach to design and testing is backed by Florentin Ipate's proof of correct integration, which proves how testing the layered machines independently is equivalent to testing the composed system. ...
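As a toy illustration (my own sketch, not code from the quoted source), the fundamental type and one processing function could look like this in C++:

```cpp
#include <cassert>
#include <deque>
#include <functional>
#include <vector>

// Toy sketch of the fundamental Stream X-Machine type X = Out* x Mem x In*
// (the names and the single processing function are illustrative only).
struct X {
    std::vector<int> out;  // Out*: the output sequence emitted so far
    int mem = 0;           // Mem: the machine's memory
    std::deque<int> in;    // In*: the remaining input sequence
};

using Phi = std::function<void(X&)>;  // a processing relation of type X -> X

// One transition label: consume one input symbol, update the memory and
// emit a witness output that can be observed at this step.
Phi accumulate() {
    return [](X& x) {
        int sym = x.in.front();
        x.in.pop_front();
        x.mem += sym;
        x.out.push_back(x.mem);  // witness value for this step
    };
}
```

Driving the machine one step at a time and checking `x.out` after each step is exactly the kind of observable, step-wise testing the quoted passage describes.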

But I don't see how the presentation is related to this. He seems to speak about a quite mainstream approach to programming, nothing similar to X-Machines. Anyway, the presentation is quite confusing and I have no time to watch the video right now.

First impression of the talk, reading the slides only

The author touches haphazardly on numerous fields/problems/solutions, apparently without recognizing it: from peopleware (for example, the psychology of programming), to software engineering (for example, software product lines), to various programming techniques.

How the various parts are linked and what exactly he is advocating is not clear at all (I'm accustomed to reading just the slides, and they usually follow a logical sequence).

If/when I see the video, I will update this answer.

MaD70
Thanks for the input, I will read through it after work. Quickly, regarding the common pattern: it is just a quick example resulting from my C++ background.
Georg Fritzsche
Well, I hadn't thought the language-agnosticism through - I guess this approach only makes sense in languages that support generic programming. Additionally, it comes from C++ and seems heavily influenced by the direction the STL started. While your links are interesting, they don't really help me with my question.
Georg Fritzsche
Also, I think approaches like Hoare logic are unpopular for a reason - I know of proven micro-kernels, but nothing of relevant size. At least the direction the STL took became somewhat popular, even though there is no way to really check the modelled concepts at compile time.
Georg Fritzsche
Well, in the first part I was really trying to understand your question; I was trying to frame it with well-understood concepts (hence my hint at FSMs/stream X-machines). Sorry then, but I don't understand your question yet. W.r.t. my comments on the slides: Hoare logic is not the best option; aPToP (a Practical Theory of Programming) is much simpler. For programming languages with sophisticated type systems the difficulties vary.
MaD70
Anyway, they are only examples. My contention is: why invent a totally new vocabulary when previous options exist? I think that, for comparable formal reasoning power, concepts don't seem any simpler.
MaD70
Hm, I'll have to read up on aPToP then. With my missing theoretical background (applied computer science), I tend to see most theoretical approaches as overkill, unsuitable for bigger projects. The approach from the presentation resulted from formalizing practical considerations, which to me is more natural. Might be just prejudice on my part, though :)
Georg Fritzsche
I find the entire C++ effort misguided, and these C++ "concepts" in line with it: an exercise in complication. Proof of this is, for example, in our discussion on C++ parsing - see Ira Baxter's comment (http://stackoverflow.com/questions/1520734/what-libraries-are-available-for-parsing-c-to-extract-type-information/1598058#1598058): *the rules for this* [constructing the symbol table of a C++ program] *occupy the bulk of the 600-page reference manual*. With these premises, I can only imagine the effort needed to analyze and prove properties of a C++ program (it is already difficult for a program in C).
MaD70
However, I have found this paper by Hehner useful for motivating my interest in aPToP: *What's wrong with formal programming methods?* (http://www.cs.toronto.edu/~hehner/wrong.pdf). In it he compares three well-known formalisms - Hoare logic, Dijkstra's predicate transformer semantics and Jones's Vienna Development Method (VDM) - with his own aPToP.
MaD70
Also, from the aPToP book preface: *The theory in this book is simpler than any of those just mentioned.* [the same three theories mentioned above] *In it, a specification is just a boolean expression. Refinement is just ordinary implication. This theory is also more general than those just mentioned, applying to both terminating and nonterminating computation, to both sequential and parallel computation, to both stand-alone and interactive computation.*
MaD70
Note in section **4.0.4 Programs** an important concept, often misunderstood: **A program is a specification** *of computer behavior; for now, that means it is a boolean expression relating prestate and poststate. Not every specification is a program. A program is an **implemented specification**, that is, a specification for which an implementation has been provided, so that a computer can execute it.* That is, a program written in a programming language (PL) is a formal specification **anyway**. The quality of such a PL (a formal notation) is crucial for reasoning formally about such a specification.
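A trivial example of that notion (my own, not from the book): take the specification x' = x + 1, a boolean expression relating prestate and poststate; the one-line program below is an implemented specification satisfying it.

```cpp
#include <cassert>

// The specification: a boolean expression relating prestate and poststate.
// Here S(x, x') holds iff x' == x + 1.
bool spec(int pre, int post) { return post == pre + 1; }

// A program: an implemented specification a computer can execute.
int program(int x) { return x + 1; }
```

Every (pre, program(pre)) pair satisfies the specification, while the specification itself says nothing about how the poststate is computed.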
MaD70
Of course a PL is not sufficient for expressing **all** specifications, i.e. it cannot express non-implementable specifications, which need to be refined into programs.
MaD70
I updated my answer with a link to Hoare's CSP, to avoid giving the impression that Hoare has not produced anything since the '60s.
MaD70
I'll try to get into wrong.pdf/aPToP at the weekend. I know C++ is not a language people like to formally reason about in detail, but then again I think of the flexibility and expressiveness it has, which lets me work around quite a few limitations I bump into in most other languages I have ventured into. Hm, but maybe languages that are too expressive simply don't work when one is also aiming for complex applications and correctness.
Georg Fritzsche
Unfortunately aPToP currently has no tool support (you need to encode its laws in a theorem prover) and is not integrated with a programming language. ATS (compiles to C), Qi (compiles to Common Lisp) and Epigram (interpreter) are complete solutions. ATS seems more practical (if you don't program in Lisp) and is in active development. With ATS you can go from "usual" static typing (well, usual if you are used to ML or Haskell) to full programming with theorem proving. The performance is near C (at least in the single-core version). I agree that we are not completely there yet, but...
MaD70
.. we are near to a usable infrastructure for formal reasoning about programs. Note that with fully verified libraries and formally specified APIs, much of the cost of formal reasoning is shifted away from application developers.
MaD70
W.r.t. flexibility and expressiveness: no, I don't think so. Standard ML, Haskell, ATS, and so on, are powerful programming languages in which you can express whatever program succinctly. The difference is that they were well-thought-out PLs from the start, not accretions on C syntax and semantics.
MaD70