The Model Driven Software Network

Raise your level of abstraction

Last October 7, Steven Kelly, a member of this network, pointed to research on UML productivity by W. J. Dzidek, E. Arisholm and L. C. Briand, which shows little benefit of a UML-driven project over the same project developed in "pure Java".
Do you agree that UML performs this poorly? Are there flawed assumptions in that research? Does that comparison affect all of UML, and all model-based development?
Productivity is probably the most expected benefit of modelling. Does this imply that nothing changes by following UML?
You can see Steven's note at http://www.metacase.com/blogs/stevek/blogView?showComments=true&...
The research is available at http://www.ipd.uka.de/Tichy/uploads/folien/149/DzidekArisholmBriand...


Replies to This Discussion

I called Steven on this at the conference since I knew there were several people in the audience who had been productive with UML-based model-driven development. Thanks for bringing this new research to our attention.
If you're using UML as intended, you won't get a significant increase in productivity. The empirical studies last century showed results from -10% to +15%. The reason is simple: drawing a class and filling in attributes and operation signatures is no faster than typing the same information in a class skeleton.
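For instance, a hypothetical Account class sketched below carries exactly the information a UML class symbol does: a name, attributes and operation signatures. Whether you draw it or type it, the effort is roughly the same (Java skeleton shown purely as an illustration):

    // Same information content as a UML class box: name, attributes, operation signatures.
    public class Account {
        private String owner;    // attribute
        private double balance;  // attribute

        public void deposit(double amount) { }          // operation signature, body still to be written
        public double getBalance() { return balance; }  // operation signature
    }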

However, if you just use the UML syntax, but change the semantics so a "class" means a lot more, you can generate lots of code from a little input. The most common example is a database app, where a "class" maps to a relational table, plus a normal class, plus the O-R mapping between them, plus a basic form & list UI (either web or desktop GUI). We have an example of that for Google Gears in the Web Application Example in MetaEdit+.
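As a rough sketch of what that kind of generation can produce (hypothetical names and output, not what the MetaEdit+ example actually emits), a single "Customer" element could expand into a JPA-annotated entity carrying the O-R mapping, plus the matching table DDL, with the form and list UI generated on top of the same element:

    // Hypothetical generated output for one "Customer" element in such a model.
    import javax.persistence.Column;
    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.Id;
    import javax.persistence.Table;

    @Entity
    @Table(name = "CUSTOMER")   // O-R mapping: the "class" doubles as a relational table
    public class Customer {

        @Id
        @GeneratedValue
        private Long id;

        @Column(name = "NAME", length = 80)
        private String name;

        @Column(name = "EMAIL", length = 120)
        private String email;

        // The "normal class" part: plain accessors.
        public Long getId() { return id; }
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
        public String getEmail() { return email; }
        public void setEmail(String email) { this.email = email; }
    }
    // Matching DDL derived from the same model element:
    // CREATE TABLE CUSTOMER (ID BIGINT PRIMARY KEY, NAME VARCHAR(80), EMAIL VARCHAR(120));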

Another approach is to use UML Class Diagrams for the skeletons, but some other language for the meat of the application. That's the approach taken in the real-time and executable UML worlds, with either some kind of extended state diagram or textual action language giving the bulk of any productivity increase. Productivity there is based on how closely this other language maps to the needs of your own domain; whether it is still UML is more a marketing decision that a fact. After all, executable UML existed long before UML, in Shlaer-Mellor.

Finally, you can also get some productivity by partially turning UML into a DSM language using profiles and custom generators. The amazing thing to me is not that this can succeed, but how little it succeeds in practice. Of course it will be a hack and a lot of work, fighting against the tool all the way, but why do so many such languages simply fail to raise the level of abstraction - and hence productivity? If people have empirical experiments showing productivity increases with such languages, plus the time taken to build the languages, generators and tool support, I'd love to see them.
My personal view on this topic:
I'd say this paper is not conclusive at all (for instance, I've seen it used in discussions both to defend and to attack UML, so the numbers it shows are ambiguous enough to be used in both directions).

Therefore, I think we need many more empirical studies (with replication) to have a real basis for the discussion. And, btw, productivity should take into account the maintenance phase.

I also think that there are some activities and/or domains in the development process in which nobody can dispute that modeling plus code generators solve the problem faster than manually writing code. I'm thinking of SQL DDL scripts. Even companies that do not believe in MDD use some kind of database tool to generate the "create table" scripts from some kind of UML or ER model. This area has been studied in depth, and there are many tools able to generate 100% of the code, so practitioners are comfortable using them. We should get to the point where we have the same situation for more aspects of the software.
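A minimal sketch of that kind of 100% generation (hypothetical, and not how any particular tool does it): walk a table description that would come from the ER/UML model and print the corresponding "create table" script.

    import java.util.LinkedHashMap;
    import java.util.Map;

    // Minimal sketch: emit a CREATE TABLE script from a table description
    // that would normally be read from the ER/UML model.
    public class DdlGenerator {

        static String createTable(String table, Map<String, String> columns) {
            StringBuilder sql = new StringBuilder("CREATE TABLE " + table + " (\n");
            int i = 0;
            for (Map.Entry<String, String> col : columns.entrySet()) {
                sql.append("  ").append(col.getKey()).append(' ').append(col.getValue());
                sql.append(++i < columns.size() ? ",\n" : "\n");
            }
            return sql.append(");").toString();
        }

        public static void main(String[] args) {
            Map<String, String> columns = new LinkedHashMap<>();
            columns.put("ID", "BIGINT PRIMARY KEY");
            columns.put("NAME", "VARCHAR(80)");
            System.out.println(createTable("CUSTOMER", columns));
        }
    }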
Jordi,
I agree with you regarding the scarcity of studies. Anyway, in this case there is a good starting point: the explicit proposal to test not a clean model, but a change process. This should be the better scenario for modelling, and surprisingly there are no large differences.
At first sight there seems to be some "contamination": rarely is a live project with a history of modifications readable from the source code alone. Here the system is well documented in the Java code, a scenario that is hard to see in practice.
...and the UML models are in sync with the source code, also an uncommon scenario!
Right, Steven.
What I want to focus on is the quality of that research. I'm confident that the difference between coding and generating from models grows as the life of the project lengthens. By then, the fourth developer to take over the source code shares neither the knowledge nor the philosophy of the original team. The code will differ widely, and the system will become patched. This scenario was not tested.
That said, I should declare that I don't use UML, nor do I write code directly, but I'm convinced that modeling tools must perform better.
In some ways I'm surprised at such poor performance, and I wonder whether it is tool dependent. I can't believe that "drawing" models could be just a side activity.
I too have my doubts about this study.

- They claim that the system under development is non-trivial. I'm sorry, but a system containing 6 packages and 50 classes does not qualify for me as "non-trivial". Just as a comparison, the UML model at my current client contains about 5000 packages and 40000 elements, of which about 10000 are classes. As the study shows, using UML was especially beneficial in a "first-time" scenario, where the developer doesn't know the system yet. When dealing with a "real" system of significant size, developers will more often than not find themselves in this "first-time" scenario.

- The study talks about developers reading and updating the UML model. It does not mention technical, functional or business analysts at all. At most of my clients the developers weren't even allowed to update the UML model; that was the privilege of the analysts. The developers only read (parts of) the UML model and used it as the specification of the system to build. This study does not investigate the impact of using UML on the analysis stage of the process, but focuses only on development.

From what I've experienced at clients, it is always beneficial to use UML (or another modelling language): helpful in the creation phase, but above all in the maintenance phase of a system.
Agreed, Geert
You are pointing to the main flaw in the experiment. It's again a small test, not a robust one. I wrote about it some days ago in Spanish: http://cuartageneracion.blogspot.com/2009/11/criticas-uml.html
I haven't read the research paper yet (it's being printed right now), but I have a feeling that it depends on the situation you are in.
If your team consists of very experienced, efficient programmers, switching to UML as a design approach, and maybe even using it for code generation, won't bring much of a productivity gain. That's partly due to resistance to change, and partly because these people are not willing to work with (just) the generators and platforms provided by the UML tool vendors. As we have heard and said on numerous occasions, the latter are suboptimal in most situations anyway.

On the other hand, in environments with less experienced developers I have seen productivity improvements even from only using UML as a drawing aid. By having these people create UML diagrams rather than start coding immediately when they get their requirements and assignment, it is possible to force them to think before they act (acting = programming in this case). Here you will also face some resistance to change, but less experienced, younger developers are more easily convinced by their superiors - after all, they are techies, not eloquent sales people. I've seen examples where a developer assigned to an 8-week task spent a week longer than estimated on creating a design specification in this manner, giving up a week of coding time to compensate. In the end they finished up to 10 days early, and in one case we found only one bug in one developer's software - and that was the kind that only shows up after integration of software components.

So, even UML can help, but the current flow of MDSD and DSL driven approaches may in the end be more effective - if only because the focus is on productivity, whereas with UML the focus was mainly on getting to speak a common language. Actually, attempts by tool vendors to implement MDSD and code generation with UML have been part of the damage UML's image has had to endure. Not everything is OMG's fault ;-)
+1

Angelo Hulshout wrote:
> [..]
> So, even UML can help, but the current flow of MDSD and DSL driven
> approaches may in the end be more effective - if only because the focus
> is on productivity, whereas with UML the focus was mainly on getting to
> speak a common language. Actually, attempts by tool vendors to implement
> MDSD and code generation with UML have been part of the damage UML's
> image has had to endure.

Whilst not in a position to speak for Steve Kelly, I'm sure he would agree vehemently. It should all be about productivity. Sadly, it became more about justifying the high price of commercial tools.

I recently did some consultancy for a large client who have bought into Model Driven Development, with strong sponsorship at the highest level of the organisation.

Their approach centres on a well-known, respected UML tool and its recommended process. Depressingly, I don't believe they're getting /any/ productivity gain compared to traditional coding.

I suspect it's this sort of example that Steve Kelly was highlighting.

> Not everything is OMG's fault ;-)

Suppose I'd disagree to an extent there. The OMG sticks stoically to their position of "defining standards" without requiring reference implementations. It's been an abject failure.

- CORBA failed because there was no agreement on the wire protocol, and therefore interoperability among different vendors' ORBs wasn't there;
- MDA failed because there was no real, practical, usable, commonly understood implementation that enabled people to realise the purported benefits;
- MOF failed because it was supposed to be a language for defining languages but is really a language for describing UML;

- The attempts to formalise UML semantics failed because the spec is riddled with bugs and holes - er, sorry, areas 'left open to interpretation'.

- QVT. Enough said.

I could go on. The only real success is the UML notation, which provides a reusable syntax and nothing more. So perhaps the lack of MDSD adoption isn't entirely the OMG's fault, but I'd contend they're one of the biggest culprits.

- Scott.
