views:

64

answers:

5

I'm looking for tools that help me evaluate the performance of a software architecture. For this specific project I need to model a distributed system of modest size, comparable to message-oriented middleware (MOM). Based on that model I'd like to measure the system's performance under certain conditions. The tool(s) should also help me make decisions about how a change to the architecture would affect the performance of the system.

Here's an example question (staying with the MOM analogy) that I'd like to be able to answer:
How would the throughput (measured in messages/s) of the whole system change if the persistence layer were changed from an SQL back end to some fancy new NoSQL back end with eventual consistency? In a simplified model, the component that needs to make something persistent (i.e. write to the DB) has an operation that is delayed by X ms until the persistence provider acknowledges. If the persistence back end is changed so that the acknowledgement is instant, that delay would drop to Y ms. How would decreasing this delay affect the throughput of the system?
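To make the question concrete: under the (strong) assumption that every message blocks on a serial persistence acknowledgement and nothing else limits the system, the acknowledgement delay alone bounds throughput. A back-of-envelope sketch in Python, with hypothetical values X = 20 ms and Y = 2 ms:

```python
# Back-of-envelope bound: with one component writing serially and each
# message blocked for `delay_ms` until the persistence layer acknowledges,
# at most 1000 / delay_ms messages can complete per second.
def max_throughput(delay_ms: float) -> float:
    return 1000.0 / delay_ms

print(max_throughput(20.0))  # hypothetical X = 20 ms -> 50.0 msg/s
print(max_throughput(2.0))   # hypothetical Y = 2 ms  -> 500.0 msg/s
```

In a real system some other stage becomes the bottleneck once the persistence delay shrinks, which is exactly what a proper model should reveal.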

Note that I'm primarily interested in ready-to-use software products or modeling techniques rather than research material, but feel free to mention noteworthy academic resources nevertheless.

A: 

Though primarily meant for networking research, the ns-3 simulator could be used to model and simulate your application; how well it fits probably depends on how network-centric your application is. ns-3 has an Application class in its object model that is meant to model everything above TCP/UDP. You could write a very simplified version of your application logic that only sends gibberish over the network and introduces delays here and there for specific operations. ns-3 also offers good tracing facilities.

paprika
A: 

A simulation framework like SimPy could be valuable for modeling and simulating the behavior of the system. In contrast to something like ns-3 you don't have ready-made parts available, but you are not restricted to network-centric simulation.

With this approach you have all the freedom in the world for modeling, but changing parts of your model can be very time consuming if you don't start with a good object model: it's probably a good idea to use generic concepts like "channels" for communication between components rather than connecting components directly/explicitly. OO concepts and best practices apply.
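To illustrate the approach, here is a minimal stdlib-only discrete-event sketch in the spirit of SimPy (SimPy itself is not used, and the producer rate, persistence delays, and horizon are made-up numbers): messages arrive at a fixed rate, a single persister serializes the writes, and we compare throughput for two acknowledgement delays.

```python
def simulate(interarrival_ms: float, persist_delay_ms: float,
             horizon_ms: float) -> float:
    """Tiny discrete-event model: a message arrives every `interarrival_ms`,
    a single persister handles messages one at a time, and each write blocks
    for `persist_delay_ms`. Returns completed messages per second."""
    persister_free_at = 0.0
    completed = 0
    i = 0
    while True:
        arrival = i * interarrival_ms
        start = max(arrival, persister_free_at)   # wait for the persister
        finish = start + persist_delay_ms         # blocking acknowledgement
        if finish > horizon_ms:
            break
        persister_free_at = finish
        completed += 1
        i += 1
    return completed / (horizon_ms / 1000.0)

# Persister-bound: a 20 ms ack delay caps the system at 50 msg/s ...
print(simulate(5.0, 20.0, 10_000.0))  # -> 50.0
# ... while with a 2 ms delay the arrival rate (200 msg/s) is the limit.
print(simulate(5.0, 2.0, 10_000.0))   # -> 200.0
```

With SimPy proper you would express the same thing as processes yielding timeouts on shared resources, which makes it much easier to swap components in and out.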

paprika
A: 

This master's thesis evaluates several architecture description languages (ADLs) and their applicability to evaluating the performance of an architecture. It concludes that current ADLs do not support the evaluation of non-functional attributes in terms of performance predictions. The thesis also introduces a piece of software called SAPE (Software Architecture Performance Evaluation) that -- as its name already suggests -- is meant to help with evaluating the performance aspects of a software architecture. It seems this software is not available anywhere online, though.

paprika
A: 

This paper provides an overview of several approaches to deriving performance models from formal software architecture specifications. The specification language used in most methodologies is UML; the performance models include queueing networks (QN) and their extensions, Extended Queueing Networks (EQN) and Layered Queueing Networks (LQN), as well as Stochastic Timed Petri Nets (STPN), Stochastic Process Algebras (SPA), and simulation models.
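For a flavor of the queueing-network approach: the classic M/M/1 formulas already answer delay-vs-throughput questions analytically. A small sketch (the arrival and service rates are made-up numbers):

```python
def mm1_metrics(arrival_rate: float, service_rate: float):
    """Steady-state M/M/1 metrics: utilization rho, mean number in the
    system L, and mean response time W (Little's law: L = lambda * W)."""
    assert arrival_rate < service_rate, "queue must be stable"
    rho = arrival_rate / service_rate
    L = rho / (1.0 - rho)
    W = 1.0 / (service_rate - arrival_rate)
    return rho, L, W

# Hypothetical numbers: 50 msg/s arriving at a persister that can serve
# 100 msg/s (i.e. a 10 ms mean write time).
rho, L, W = mm1_metrics(50.0, 100.0)
print(rho, L, W)  # -> 0.5 1.0 0.02
```

EQN/LQN models chain many such service centers together, which is what lets them predict how a faster persistence layer propagates through the whole system.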

paprika
+1  A: 

The example you give is more of a change in the design and implementation, not the architecture. Sure, the NoSQL implementation might be faster and increase throughput overall, but it's implementation performance that you'd be measuring.

I'd suggest that the performance of an architecture is based more on the number of components involved and how they are arranged - and it depends on where you draw the line between "architecture" and "design" (and implementation detail).

Roger Sessions has spent a lot of time looking at the impact of complexity in IT systems (Service-Oriented Architecture in particular). Personally, I suspect there's merit in the idea that a more complex architecture might not be as efficient and therefore not as fast.

I'm not sure you can really test the "performance" of an architecture - from the point of view that it exists only "on paper". Aircraft that look perfect on paper have been known to kill test pilots.

In terms of software, I'm aware that various modeling systems have functionality that lets you run through a process and locate bottlenecks; the only one I know of that specifically does this is ProVision (but there are probably others).

Adrian K