The Model Driven Software Network

Raise your level of abstraction

MDD meets TDD: mapping requirements as model test cases

Executable models, as the name implies, are models that are complete and precise enough to be executed. One of the key benefits is that you can evaluate your model very early in the development life cycle. That allows you to ensure the model is generally correct and satisfies the requirements even before you have committed to a particular implementation platform.

One way to perform early validation is to automatically generate a prototype that non-technical stakeholders can play with and (manually) confirm the proposed model does indeed satisfy their needs (like this).

Another, less obvious, way to benefit from executable models from day one is automated testing.
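For example (a hypothetical sketch; the Order class and the requirement are invented here, and in this approach the same test would be written at the model level rather than in Java), a requirement such as "an order can only be shipped after it has been paid" maps directly to a test against the bare domain model, with no database, UI or target platform involved:

```java
import static org.junit.Assert.*;
import org.junit.Test;

// Hypothetical domain model: no persistence, no UI, no platform concerns.
class Order {
    enum Status { OPEN, PAID, SHIPPED }
    private Status status = Status.OPEN;

    Status getStatus() { return status; }

    void pay() { status = Status.PAID; }

    void ship() {
        // Requirement: an order can only be shipped after it has been paid.
        if (status != Status.PAID)
            throw new IllegalStateException("order must be paid before shipping");
        status = Status.SHIPPED;
    }
}

public class OrderRequirementsTest {
    @Test
    public void paidOrderCanBeShipped() {
        Order order = new Order();
        order.pay();
        order.ship();
        assertEquals(Order.Status.SHIPPED, order.getStatus());
    }

    @Test(expected = IllegalStateException.class)
    public void unpaidOrderCannotBeShipped() {
        new Order().ship(); // must be rejected per the requirement
    }
}
```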


Comment by Rafael Chaves on November 16, 2011 at 8:09

Andreas, while I fully agree that users care about usability, ease of use, performance, etc., I don't think that is useful for deciding what is in the problem domain. While those things do need to be provided, they are highly dependent on the technical choices made when implementing the solution, and hence are indeed in the realm of the technical stakeholders.

 

When deciding whether something is technical vs. problem domain, one technique that I (implicitly) use is to think about how the problem would be described or addressed without computers altogether. Whatever that description has in common with a computer-based solution is part of the domain; everything else is technical drudgery.

Comment by Andreas Leue on November 14, 2011 at 16:46

Ok, thanks for clarifying; I think I understand the approach.

 

As for Tx etc., I would just extend the boundary between domain and technical a little, into that "interaction" sphere. I agree concerning actual race conditions (since the promise of a DBMS is indeed to guarantee that you can pretend everything is serial and ignore the conflicts). But the transaction boundaries are important (do you have to re-enter two pages of data, or just one entry?), as are the consequences of conflicts. I mean, most of the music of an inventory system plays in the conflicts: the system must not only behave correctly, but also be user-friendly in all these situations, must provide guidance, etc. I can't see why this shouldn't be regarded as problem domain; I wouldn't let a technician (alone) decide about that (dumping a DB exception is a correct solution, but not an acceptable one). The same holds for all those "interaction" aspects (traditional "UI logic").

 

Comment by Rafael Chaves on November 14, 2011 at 15:12

Andreas, I'd say the approach of focusing on domain concerns and postponing any technically-oriented discussions/decisions is not about technical constraints, but a matter of process, aiming to deliver software that does what the customer expects it to do, in a timely manner.

Domain experts hold the domain knowledge; the technical team knows how to build software (regardless of the domain). The goal is to allow developers to learn as much about the domain as possible, as fast as possible, come up with a solution, and have that solution validated by domain experts (the technical folks cannot do that by themselves). The target architecture plays no role in this process, and as such can and should be left out.

Translating that solution, specified in terms of the problem domain, to a technical architecture is something that developers can do with their eyes closed (that is their realm), and need no help with.

 

Re: are transactions domain or technical? I'd still think transactions are not relevant in (non-technical) problem domains. At a technology-independent level there are no race conditions; atomicity, consistency, isolation and durability are a given, just as if there were only one user using the system at all times, the OS never needed upgrades, the storage never got full, and the network never went down (because those things don't exist at that level).

Comment by Andreas Leue on November 14, 2011 at 9:06

> > ... prototype and "real" solution...
> ...can leave out many aspects...

I was just wondering whether this is a conscious decision, say like "I want to limit these tests, in the sense of unit testing, to decrease complexity", or whether it is a necessity driven by technical constraints, like "it takes too long to create the full system, so it's faster feedback to use a prototype".

> ...ACID transactions, security, usability, performance, reliability...
> > ...not a pure implementation detail...

Depending on the project (say, a non-trivial inventory management system), it's all about keeping the data in the DB consistent, and the DB consistent with reality. Transactional behaviour is intertwined with reality and visible ("first make a tick on that row in the sheet here, then press OK in that wizard...").

But to be clear, that's not an argument against unit testing and separation of concerns; it is just meant to illustrate that in some cases these issues belong to the domain.

> > ...tests...sequences of actions following paths through the system...
> ...why ...can't ...be encoded as test cases that hit the back-end API...
> ...end-to-end in a single test case... rather ...different test cases...

It's more a practical decision than a theoretically founded one. Basically, the measure of the quality of a test is: if it helps to detect errors, it is a good test. I once heard a good guiding principle (I have forgotten the source): "Don't think: it will work, so I do not have to test it; instead think: if it really works, why not test it with confidence?" (almost always such tests will reveal problems).

So we write unit tests during development, but additionally write end-to-end tests, if resources allow, by different persons than the ones who wrote the unit tests. (Even more: we do "recording", i.e. after the system has been evaluated by humans, the results of tests are stored and then checked automatically on the CI server, which gives additional confidence and detects errors, even if it is not really going by the book of TDD/unit tests.)

Regarding the back-end API: all of these levels contribute to the functioning of the system: the core business logic level (BLC, core classes), the business logic interaction level (BLI, as we call it, containing factories, workspaces, domain transactions, etc.), and the abstract/virtual user interface level (VUI, enriched with security, filtering of visibility, etc.).
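(Roughly, in Java, just to illustrate how I picture the three levels; the interfaces and operations here are invented for the example:)

```java
// Core business logic (BLC): plain domain operations, no actors, no UI.
interface InventoryCore {
    void receive(String sku, int quantity);
    int stockOf(String sku);
}

// Business logic interaction (BLI): factories, workspaces, domain transactions;
// stateful interaction on top of the core, still UI-independent.
interface InventoryInteraction {
    void beginStocktake();                 // a stateful "domain transaction"
    void countItem(String sku, int counted);
    void commitStocktake();
}

// Abstract/virtual user interface (VUI): the core enriched with security and
// visibility filtering, but still independent of any concrete frontend renderer.
interface InventoryView {
    boolean mayEdit(String user, String sku);
    Iterable<String> visibleSkus(String user);
}
```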

> ...core...can/should... be fully modeled...ignoring any actors...
> ...tests should be written against...

Corresponding to the separation into BLC, BLI and VUI, I advocate testing all three levels.

Specifically, the BLI (interaction) layer contains strongly domain-related concerns which are actor-related, too. It is a level in between the UI and the BLC (core); it contains e.g. stateful transactions (typically modelled as state machines) and aspects traditionally modelled as "UI logic", but since these are closely related to the business logic, all UI-independent parts should be separated from the UI.

I can imagine this part being described and tested within TextUML, too, and I think it would be beneficial to do so.

> > ...test case models...oomodels.org...
> ... get lost navigating around...usability/documentation...

Ok... thank you very much for that honest feedback. That should definitely not be the first impression the site gives.

Andreas

Comment by Rafael Chaves on November 12, 2011 at 7:32

Argh, Ning just silently discarded a good part of my post. I will be briefer now...

> An open issue is how tasks can be automated, and at which level. It could be done by again recording the task on a (sub-)UI level, but other approaches should be considered.

I'd say the core of an application can (and should) be fully modeled while still ignoring any actors of the system, but providing an abstract interface, and that is what tests should be written against. Providing an interface for a specific actor (like an end user) is important, but that can be dealt with independently from the core of the application (and as such is not a requirement for assessing the correctness of the application).
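In code terms (my own illustration; the interface and names are invented), that could look like this: the core exposes an abstract interface with no notion of actors, the tests bind to that interface, and an actor-specific UI can later bind to the same interface without affecting the tests:

```java
import static org.junit.Assert.*;
import org.junit.Test;

// The core exposes an abstract interface: no notion of which actor calls it.
interface Library {
    void addBook(String title);
    void lend(String title);
    boolean isAvailable(String title);
}

// A trivial in-memory realization, enough to exercise the core in tests.
class InMemoryLibrary implements Library {
    private final java.util.Map<String, Boolean> available = new java.util.HashMap<>();
    public void addBook(String title) { available.put(title, true); }
    public void lend(String title) { available.put(title, false); }
    public boolean isAvailable(String title) { return available.getOrDefault(title, false); }
}

// Tests bind to the abstract interface; a UI for a concrete actor (an end
// user, another system) can later bind to the same interface unchanged.
public class LibraryTest {
    @Test
    public void lentBookIsNoLongerAvailable() {
        Library library = new InMemoryLibrary();
        library.addBook("Moby Dick");
        library.lend("Moby Dick");
        assertFalse(library.isAvailable("Moby Dick"));
    }
}
```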

> How do we achieve test case coverage?

Yeah, states and transitions make sense for state machines, just as lines of code and branches do for imperative behavior.
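(As a toy illustration of tracking that kind of coverage; the class and names are invented, not any existing tool:)

```java
import java.util.*;

// Toy coverage tracker: given the states and transitions a state machine
// declares, record what a test run actually reached and report the gap.
class StateMachineCoverage {
    private final Set<String> declaredStates;
    private final Set<String> declaredTransitions;
    private final Set<String> visitedStates = new HashSet<>();
    private final Set<String> firedTransitions = new HashSet<>();

    StateMachineCoverage(Set<String> states, Set<String> transitions) {
        this.declaredStates = states;
        this.declaredTransitions = transitions;
    }

    void visited(String state) { visitedStates.add(state); }
    void fired(String transition) { firedTransitions.add(transition); }

    Set<String> unreachedStates() {
        Set<String> missing = new HashSet<>(declaredStates);
        missing.removeAll(visitedStates);
        return missing;
    }

    Set<String> unfiredTransitions() {
        Set<String> missing = new HashSet<>(declaredTransitions);
        missing.removeAll(firedTransitions);
        return missing;
    }
}
```

After driving the model through its test cases, a non-empty unreachedStates() or unfiredTransitions() points directly at behaviour that was never exercised.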

> I browsed your site and visited the test case models (tutorials); I wanted to point you to our open repository oomodels.org

Thanks. I must confess I don't quite get that site, and always get lost navigating around. Maybe it could benefit from some work on usability/documentation?

 

Cheers,

 

Rafael

Comment by Rafael Chaves on November 12, 2011 at 7:20

Lots of interesting points, Andreas, that is material for many threads. :)

> What I don't get is why you distinguish between prototype and "real" solution; isn't that the same, just with a different maturity? If not, isn't the prototyping environment then "just another execution environment"?

Prototypes can leave out many aspects that are essential to a production-ready implementation. A prototype may completely ignore (or provide suboptimal support for) many aspects (say, ACID transactions, security, usability, performance, reliability, etc.) and still provide everything that is necessary for non-technical stakeholders to evaluate the solution from the PoV of the domain. I am not 100% sure which point in the original post you are referring to, though.

> And as a sidenote, I think (db) transactions are not a pure implementation detail, but affect the interaction with a model even on an abstract level.

Not sure I agree - could you clarify? The same model could be realized as an application that supports ACID transactions, or as one that does not, without making any changes to the model, so that suggests it does not need to be reflected in the model.
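To illustrate (a sketch with invented names; the transaction hooks are placeholders, not a particular API): the modeled behaviour is written as if execution were serial and reliable, and one realization may wrap it in transactions while another calls it directly, with the model untouched either way.

```java
// The modeled behaviour: written as if execution were serial and reliable.
class Account {
    private int balance;
    void deposit(int amount) { balance += amount; }
    int getBalance() { return balance; }
}

// One possible realization wraps every modeled operation in a (here: stubbed)
// transaction; another realization could call the model directly instead.
class TransactionalAccount {
    private final Account model = new Account();

    void deposit(int amount) {
        beginTransaction();
        try {
            model.deposit(amount);
            commit();
        } catch (RuntimeException e) {
            rollback();
            throw e;
        }
    }

    // Placeholders standing in for whatever the target platform provides.
    private void beginTransaction() { }
    private void commit() { }
    private void rollback() { }
}
```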

> Some time ago, we experimented with automatic test case generation, so at least some CRUD operations could be tested with no effort. But that is rather useless on the project level, since it tests the generator itself (...)

Totally agreed - the most valuable test cases can be applied to the model as well, and those need to be manually specified. There is some interesting work around automatic generation of test cases (probing boundary conditions, etc.), but those techniques could be applied to models as well.

However, I do see value in generating (functional, not CRUD) 3GL tests from the modeled tests - to provide an additional layer of confidence in the generated solution.

> Much of the high level (not unit) tests we do in reality are recurring sequences of actions following paths through the system, changing the instances here and there (...)

Not sure why those can't just be encoded as test cases that hit the back-end API. I am not sure, though, about the value of testing end-to-end in a single test case. I'd rather have different test cases, one for each important transition in the flow (adding items to the cart, checking out, performing payment).
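(Sketched in JUnit with invented classes, that splitting could look like:)

```java
import static org.junit.Assert.*;
import org.junit.Test;

// Minimal invented domain, just enough to carry the flow.
class Cart {
    private final java.util.List<String> items = new java.util.ArrayList<>();
    void add(String item) { items.add(item); }
    int size() { return items.size(); }
    Purchase checkout() { return new Purchase(items.size()); }
}

class Purchase {
    private final int itemCount;
    private boolean paid;
    Purchase(int itemCount) { this.itemCount = itemCount; }
    int itemCount() { return itemCount; }
    void pay() { paid = true; }
    boolean isPaid() { return paid; }
}

// One focused test case per important transition in the flow, each building
// just the state it needs, instead of one long end-to-end script.
public class CheckoutFlowTest {
    @Test
    public void addingAnItemGrowsTheCart() {
        Cart cart = new Cart();
        cart.add("book");
        assertEquals(1, cart.size());
    }

    @Test
    public void checkingOutCapturesTheCartContents() {
        Cart cart = new Cart();
        cart.add("book");
        assertEquals(1, cart.checkout().itemCount());
    }

    @Test
    public void payingMarksThePurchasePaid() {
        Cart cart = new Cart();
        cart.add("book");
        Purchase purchase = cart.checkout();
        purchase.pay();
        assertTrue(purchase.isPaid());
    }
}
```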

> One thing we found is great is to have state machines for your classes, where all operations on the class are transitions of the state machine (behaviour modelling). It is a really powerful tool with respect to code quality.

Totally. The fact that state machines are not a first-class citizen in our languages of choice forces us into hacks and brittle code to handle the dynamic aspects of programs. H. S. Lahman eloquently argues for the importance of state machines in software development in his great book, Model Based Development. TextUML still does not support state machines, but that is in the plans.

> In the end, I think we want to have some integration with workflows/processes of one kind or another. They provide a good glue between requirements analysis, maybe use cases, actual automated workflows, and test paths through the system.

I too am curious about how to relate DDD using models and process modeling. But I already have my hands/head full addressing the technical aspects of MDD and how developers do their work, so for now I am willingly limiting the involvement of non-technical people to evaluating prototypes. At some point I should get there (BPM vs. MDD), though.

> An open issue is how tasks can be automated, and at which level. It could be done by again recording the task on a (sub-)UI level, but other approaches should be considered.

I am may

Comment by Andreas Leue on November 11, 2011 at 23:47

Hi Rafael,

yes, testing with/on models is an interesting and rewarding issue. Some remarks and thoughts:

  • What I don't get is why you distinguish between prototype and "real" solution; isn't that the same, just with a different maturity? If not, isn't the prototyping environment then "just another execution environment"? And as a sidenote, I think (db) transactions are not a pure implementation detail, but affect the interaction with a model even on an abstract level.
  • Speaking of automation, how can models help us? Some time ago, we experimented with automatic test case generation, so at least some CRUD operations could be tested with no effort. But that is rather useless on the project level, since it tests the generator itself, nothing else; and if it can produce a working instance browser and editor in one case, it can do so a second time, so that's not news.
  • Much of the high level (not unit) tests we do in reality are recurring sequences of actions following paths through the system, changing the instances here and there. What we did is to automate these with "abstract UI" tests, i.e. scripts that are recorded and run below the frontend renderer, but on top of an abstract interaction layer (our VUI). That's ok, but it's still work to record them, and I think we can do better with models.
  • One thing we found is great is to have state machines for your classes, where all operations on the class are transitions of the state machine (behaviour modelling); see the sketch after this list. It is a really powerful tool with respect to code quality. You can instrument all states and transitions with invariants, pre- and postconditions, and we really detected a couple of errors very early which would otherwise have required more intensive testing. They are used at runtime as well as during the execution of test scripts.
  • In the end, I think we want to have some integration with workflows/processes of one kind or another. They provide a good glue between requirements analysis, maybe use cases, actual automated workflows, and test paths through the system. Normal processes (e.g. BPMN) need to be augmented with a) assertions in between the activities and b) automated task execution. Regarding (a), UBPML (ubpml.org) is to me the language of choice, since it integrates processes with objects in certain states, which is exactly what is needed. The one and same diagram can be used from analysis to testing to runtime execution. It should be easy to integrate that with your TextUML approach.
  • An open issue is how tasks can be automated, and at which level. It could be done by again recording the task on a (sub-)UI level, but other approaches should be considered.
  • How do we achieve test case coverage? The state machines and the workflows provide a good base for that. At least each state of a state machine should have been reached and each transition fired, and similarly for each task and path in the process. With a good analysis in the beginning, you gain confidence if the process is tested in that way (we've done this "by hand" in a current project, and at least the confidence part is ok; the amount of work was "improvable").
  • I browsed your site and visited the test case models (tutorials); I wanted to point you to our open repository oomodels.org; it contains a variety of unit test case models - you are very welcome to use them. All you need to do is write a download converter template (see example), and you can download all models in the repository to your engine. If you're interested, I'm glad to help you do this.
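A rough Java sketch of the state-machine style mentioned in the list above (the Shipment class and its checks are invented for illustration): every operation is a transition, guarded by pre- and postcondition checks that fail fast when the protocol is violated.

```java
// Sketch: all operations on the class are transitions of its state machine,
// instrumented with pre- and postconditions (invented example).
class Shipment {
    enum State { PACKED, SENT, DELIVERED }
    private State state = State.PACKED;

    void send() {
        require(state == State.PACKED, "can only send a packed shipment");
        state = State.SENT;
        ensure(state == State.SENT, "send must end in SENT");
    }

    void deliver() {
        require(state == State.SENT, "can only deliver a sent shipment");
        state = State.DELIVERED;
        ensure(state == State.DELIVERED, "deliver must end in DELIVERED");
    }

    private static void require(boolean condition, String message) {
        if (!condition) throw new IllegalStateException("precondition: " + message);
    }
    private static void ensure(boolean condition, String message) {
        if (!condition) throw new IllegalStateException("postcondition: " + message);
    }
}
```

The same checks serve double duty: they catch protocol violations at runtime and act as assertions while test scripts drive the machine through its states and transitions.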

Andreas
