Modeling functionality vs. model execution.

Sooner or later, every system modeler comes across the concept of “model execution”. During the 1990s, “executable system models” seemed to be all the rage among forward-thinking systems engineers (like me), right up there with “object-oriented systems engineering”! A number of system modeling tools explicitly supported the notion of executable system models, notably Ascent Logic Corporation’s RDD-100 and Vitech Corporation’s CORE. The author personally used RDD-100 on several programs, and is also familiar with CORE. Both are based on the Alford/Long SREM methodology, which provides sufficient rigor that the models are machine interpretable, and thus executable. Behavior is represented using behavior diagrams (RDD-100) or enhanced functional flow block diagrams (EFFBDs) (CORE), and is managed separately from the system structure. Model execution then imposes the physical constraints of the structure on the execution of the designed behavior. Note that these tools were developed before UML or SysML were available.

Proponents of “model execution” claim that it provides the only way to check the dynamic consistency of the model. Executing the model in a tool like RDD-100 or CORE provided a way to animate the behavior diagrams, consistent with the resources and constraints imposed by the physical structure as modeled. This quickly exposed race conditions, starved resources, and lockouts or logic errors. It did not, however, accurately model overall system performance, nor was it intended to! An executable system model is still a descriptive model, not an analytical model. This is an important distinction… when detailed design or environmental details are added to a system model, it loses its value as a well-balanced, clearly bounded system design framework! System model execution is not a substitute for a robust analysis plan, and will not take the place of a modeling and simulation effort!
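To make the dynamic-consistency point concrete, here is a minimal sketch, in Python and purely illustrative (nothing here is taken from RDD-100 or CORE; all names are invented), of stepping a toy behavior model against a shared resource. The lopsided contention between the two functions only becomes visible when the model is actually run:

```python
from dataclasses import dataclass

@dataclass
class Resource:
    name: str
    capacity: int
    in_use: int = 0

@dataclass
class Function:
    name: str
    needs: Resource
    hold_steps: int       # steps the function keeps the resource once acquired
    remaining: int = 0
    blocked: int = 0      # steps spent waiting -- the contention indicator

def step(functions):
    """Advance simulated time one step, granting resources greedily in order."""
    for f in functions:
        if f.remaining > 0:                      # currently executing
            f.remaining -= 1
            if f.remaining == 0:
                f.needs.in_use -= 1              # release on completion
        elif f.needs.in_use < f.needs.capacity:
            f.needs.in_use += 1                  # acquire and start
            f.remaining = f.hold_steps
        else:
            f.blocked += 1                       # resource contention

bus = Resource("data_bus", capacity=1)
fns = [Function("telemetry", bus, hold_steps=5),
       Function("command", bus, hold_steps=1)]

for _ in range(50):
    step(fns)

for f in fns:
    print(f"{f.name}: blocked {f.blocked} of 50 steps")
# The long-running 'telemetry' function monopolizes the bus, so 'command'
# spends most of the run blocked -- a dynamic problem, not a structural one.
```

A static look at these two functions and one bus would show nothing wrong; it is the scheduling dynamics that expose the imbalance.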

Consider a dramatic example of this kind of misapplication: one presenter at an early RDD-100 National User’s Group conference described how he successfully modeled the complete set of low-level network bus protocols and simulated a multi-node computer network… all on a tool that doesn’t even compile! He seemed quite proud of the fact that each simulation run took about a week! He could have done the same job in OPNET in a matter of minutes.

It’s important at this point to distinguish between “executable models” and “code generation” (the generation of code, which may in turn execute). In the RDD-100/CORE tradition, the term “executable model” implies that a comprehensive simulation environment is included in the tool and available for use by the systems engineer/modeler. In fact, RDD-100 did not generate code at all, and it did not compile… it ran interpretively in a huge Smalltalk image file, and was extremely inefficient from a computational perspective! There are a large number of UML-based tools that can generate code. Just because they can generate code does NOT mean that they can build “executable system models”! The simulation environment and the initial conditions must also be available and easily manageable by the modeler before I would declare any tool capable of building executable system models.

So here is the author’s assessment of the “burden” of model execution: in addition to a semantically correct system behavioral model, the tool (and the modeler) must also support the following (a toy sketch of this scaffolding appears after the list):

  • A simulation environment, including a means of keeping track of simulation time and resources.
  • A stimulus generator or input file.
  • A visualization/animation capability… not just animating the diagrams, but a way to track how values change over time, including any outputs. Sometimes, animation of a mockup HSI is important.
  • A way to monitor resource constraints/utilization based on how behavior has been allocated to structure.
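As a toy illustration of that scaffolding (hypothetical Python, not drawn from any real tool), here are all four items in their most minimal possible form:

```python
import csv, heapq, io

# Item 2: a stimulus "input file" of (time, signal, value) rows.
STIMULUS = io.StringIO(
    "time,signal,value\n"
    "0,power_on,1\n"
    "3,mode_select,2\n"
    "7,power_on,0\n"
)

# Item 1: a simulation environment -- an event queue ordered by sim time.
events = []
for row in csv.DictReader(STIMULUS):
    heapq.heappush(events, (int(row["time"]), row["signal"], int(row["value"])))

state = {"power_on": 0, "mode_select": 0}
trace = []        # Item 3: a time-stamped trace of value changes (the raw
                  # material for any animation/visualization capability).
cpu_ticks = 0     # Item 4: crude resource monitoring -- pretend each event
                  # costs one tick on a shared "cpu" resource.

while events:
    t, signal, value = heapq.heappop(events)
    state[signal] = value
    trace.append((t, signal, value))
    cpu_ticks += 1

for t, signal, value in trace:
    print(f"t={t:2d}  {signal} -> {value}")
print(f"shared 'cpu' resource consumed {cpu_ticks} ticks")
```

Even at this absurdly small scale, every line of that harness is machinery built around the behavior model rather than part of it, which is exactly the burden being described.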

The Rhapsody folks have promised to incorporate a simulation engine into the tool, but the author has not yet seen it operate. It is almost certain to be based on the tool’s existing code generation capability. MagicDraw and Artisan Studio also claim to have model execution capability. Experience has shown that holding to a goal of model execution significantly restricts how behaviors can be represented in these tools… for example, functional hierarchy is impossible in Rhapsody if you want to generate code (or, presumably, execute).

One could logically ask why MATLAB, Simulink, or Extend couldn’t be used to provide system model execution… clearly they can, but one must question their ability to adequately represent an abstract descriptive system model. Simulink has improved significantly over the years, and it handles abstraction far better than it used to. If clear segregation of form and function is important, however, the author doubts that these system-level simulation tools are up to the task yet. This is an area for further inquiry.
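For readers unfamiliar with the phrase, here is what “segregation of form and function” looks like in model terms (a hypothetical Python illustration, not any tool’s actual schema):

```python
# Form (structure) and function (behavior) live in separate trees,
# related only through an explicit allocation map.
structure = {"spacecraft": ["power_subsystem", "comms_subsystem"]}

functions = {"root": ["provide_power", "transmit_telemetry"]}

# The allocation map is the ONLY place the two trees meet; reallocating a
# function means editing one entry, not restructuring either tree.
allocation = {
    "provide_power": "power_subsystem",
    "transmit_telemetry": "comms_subsystem",
}

for fn, component in allocation.items():
    print(f"{fn} is allocated to {component}")
```

In a block-diagram simulation tool, a block typically serves as both the structural element and its behavior at once, so this separation tends to be lost, and trade studies that move functions between components become rework rather than re-mapping.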

If model execution is so good, why would anyone NOT want to make their system models executable? In a word: time. It takes a great deal of time to take an already useful descriptive system model and make it animate properly. One of the conclusions reached during the CC&D Pilot project was that getting a system model to animate took just as long as building it in the first place. Of course, this was using a code generation tool, not an executable system modeling tool, which significantly extended the effort required. Derek Hatley, while teaching a class in 1994, made it clear that he considered model execution a waste of time. He argued that race conditions and lockouts can be discovered by simple static analysis, without the need for a simulation. This may be true, but it is certainly compelling to see the diagrams animate, and to generate event traces directly from the model.
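Hatley’s static-analysis argument is easy to illustrate: a classic lockout (deadlock) can be found by searching for a cycle in a “waits-for” graph derived from the model, with no simulation run at all. The Python below is a sketch with invented model data, not anyone’s actual analysis tool:

```python
# Each function holds one resource and requests another: (holds, requests).
allocations = {
    "navigate":  ("gps_receiver", "data_bus"),
    "downlink":  ("data_bus", "transmitter"),
    "calibrate": ("transmitter", "gps_receiver"),
}

# Build waits-for edges: A -> B when A requests a resource that B holds.
holders = {held: fn for fn, (held, _req) in allocations.items()}
waits_for = {fn: holders.get(req) for fn, (_held, req) in allocations.items()}

def find_cycle(graph):
    """Follow edges from each node; revisiting a node on the path is a cycle."""
    for start in graph:
        path, node = [], start
        while node is not None:
            if node in path:
                return path[path.index(node):]
            path.append(node)
            node = graph.get(node)
    return None

cycle = find_cycle(waits_for)
print("potential deadlock:", " -> ".join(cycle) if cycle else "none found")
# navigate -> downlink -> calibrate: each waits on a resource the next holds,
# a lockout found purely by inspection of the allocation data.
```

Even so, as noted above, a printed cycle listing is rarely as persuasive to a review audience as watching the diagrams animate themselves into a deadlock.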

I often hear the criticism that “SysML doesn’t execute”, which is perfectly true. SysML was not inherently designed to execute, nor was it designed to calculate (see the parametrics section). It was designed to be compatible with emerging UML standards for executable semantics, such as Foundational UML (fUML) and its associated action language (Action Language for fUML, a.k.a. “ALF”). Some SysML tools are beginning to incorporate fUML, but it has yet to be leveraged to provide SysML model execution.

There is hope that a SysML model could be linked or transformed in a way that accommodates execution in Simulink or Extend. While this may eventually be possible, most attempts so far have involved manually re-building the SysML model in the other tool, which quickly leads to model maintenance and configuration difficulties. Rhapsody, for example, provides a way to incorporate Simulink modules as blocks in a SysML model, but that’s really not the same thing.

So, should you invest in making your SysML model executable? There are certainly advantages, not the least of which is that it will motivate your modeling team! It also provides a good milestone for model completion and maturity. It is strongly advised, however, to keep “model execution” from becoming an analysis activity in its own right; if you do execute, do it in as abstract (even unrealistic) a way as is tolerable. Leave realism to the Modeling & Simulation experts!
