Raise your level of abstraction
I've read a number of posts regarding DB schemas being driven by UML models. I am wondering how useful UML is in generating non-structural parts of the application. What about activity diagrams and sequence diagrams? Does anyone have experience in generating more than just the class shell from UML? I have found this question intriguing enough to set up a six-question survey. I would appreciate it if anyone could spare 2 minutes to complete it. I look forward to feeding results back to the forum if there are any.
First, I'd like to point out that MDx is not a complete solution, but plays a changing role in the process of finding a complete solution. To illustrate this, compare these three scenarios:
- MDx around 1990: one model, a fixed metamodel, some horrifically complex templates, a very limited resulting application
- MDx around 2010: various models, varying metamodels, few transformations, more or less complex templates, more or less configurable resulting applications
- MDx around 2030: a small set of compatible models, a small set of standardized metamodels, some domain- or project-specific customisations, and a model compiler or interpreter that is very well crafted and mature, whose details are known only to a few experts and are no longer of interest to most developers, much like operating systems or 3GL compilers today
That said, I can give you a snapshot of the approaches my company uses at present. It differs from where we came from, from what we will do in the future, and from what others prefer.
From left (business) to right (code) our chain looks like this:
The first step translates a domain-specific model (DSM, e.g. for insurance) into a universal model (UM), a neutral class model.
This step reduces complexity by constraining language features and mapping domain terminology to universal terminology.
It is comparable to the MDA PIM/PSM transformation.
The second step translates the UM into an architecture-specific model (ASM). The ASM is more specific than the UM, but not as specific as a PSM: it contains information about the business logic layer architecture, but no information about specific user interfaces or databases.
Practically, this reduces complexity by splitting the MDA PIM-to-PSM transformation into two parts, each of which becomes more manageable as a result.
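To make the shape of this two-stage chain concrete, here is a minimal sketch in Java. All names (DsmElement, Dsm2Um, the "Policy"/"Contract" mapping, etc.) are hypothetical illustrations, not the actual tool's API:

```java
// Hypothetical sketch of the DSM -> UM -> ASM chain described above.

interface Transformation<S, T> {
    T transform(S source);
}

// Domain-specific model element, e.g. from the insurance domain.
class DsmElement {
    final String domainTerm;
    DsmElement(String domainTerm) { this.domainTerm = domainTerm; }
}

// Universal model element: domain terminology mapped to neutral terminology.
class UmElement {
    final String universalTerm;
    UmElement(String universalTerm) { this.universalTerm = universalTerm; }
}

// Architecture-specific model element: adds business-logic-layer information,
// but stays neutral with respect to UI and database.
class AsmElement {
    final String universalTerm;
    final String layer;
    AsmElement(String universalTerm, String layer) {
        this.universalTerm = universalTerm;
        this.layer = layer;
    }
}

class Dsm2Um implements Transformation<DsmElement, UmElement> {
    public UmElement transform(DsmElement src) {
        // Illustrative terminology mapping: "Policy" is a domain term,
        // "Contract" its universal counterpart.
        return new UmElement(src.domainTerm.replace("Policy", "Contract"));
    }
}

class Um2Asm implements Transformation<UmElement, AsmElement> {
    public AsmElement transform(UmElement src) {
        // Enrich with layer information from the business logic architecture.
        return new AsmElement(src.universalTerm, "BusinessLogicCore");
    }
}
```

The point of the split is visible in the types: Dsm2Um only knows about terminology, Um2Asm only about architecture, so each stays small.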
For the above two transformations, DSM2UM and UM2ASM, we use a technology called "Object Construction Plans" (OCP). This is a universal, open-source tool which reads XML descriptions of object networks and instantiates such networks, vaguely comparable to the Spring framework's DI/IoC container.
The OCP tool provides many modern features for handling complexity (modularisation, object orientation, on-demand compilation etc.); see http://xocp.org/quickstart/toc for an overview.
It is applied to M2M transformations by writing "OOViews", i.e. specifying the target of the transformation declaratively as an object structure that refers to a different source object structure.
This reduces complexity, since these OOViews declare the metamodel of the source as well as the transformation itself in one and the same description, which is very convenient but still formally precise.
In addition, the metamodel of the target structure is automatically derived from Java code, which is also very convenient.
So instead of two metamodels, two corresponding Java class models, plus a transformation, you have just one Java class model plus an OOView declaration, which reduces the number of artefacts you have to manage.
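The idea of deriving a metamodel from Java code rather than maintaining it separately can be sketched with plain reflection. This is only my illustration of the principle, not the tool's actual mechanism; the method and the Contract class are invented:

```java
import java.lang.reflect.Field;
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: derive a simple metamodel (field name -> type name) from a Java
// class via reflection, so the class itself is the single source of truth.
class MetamodelDerivation {
    static Map<String, String> deriveMetamodel(Class<?> type) {
        Map<String, String> meta = new LinkedHashMap<>();
        for (Field f : type.getDeclaredFields()) {
            meta.put(f.getName(), f.getType().getSimpleName());
        }
        return meta;
    }
}

// Example target class whose metamodel we derive.
class Contract {
    String holder;
    int coverage;
}
```

Changing a field on Contract automatically changes the derived metamodel, which is exactly the artefact reduction described above.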
The next step is to transform the ASM into source code, which we call "Generic Business Logic" (GBL). This is a traditional code generation step, but it does not create a complete (fixed) application. Instead, code is generated for the Business Logic Core (BLC) layer, containing POJOs; for a so-called Business Logic Interaction (BLI) layer, containing factories, workspaces and domain transactions; and for adapter classes that connect to the user interface and to the database.
Here, too, complexity is removed, since the templates are not concerned with details of specific infrastructure (UI, DB, communications).
The technology we use for the ASM-to-GBL transformation is an open-source generator, see http://oogenerator.org/quickstart/toc (the documentation is only partially translated into English and not yet complete). The generator software is mature and sophisticated; you can write very readable, compact, powerful and manageable templates with it. To us, these features are extremely important, since it makes a real difference when you have to manage a large amount of transformation code.
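As a toy version of this generation step, here is a template-like function that emits a POJO for the Business Logic Core layer. The names are made up for illustration; the real generator's templates are far more capable:

```java
import java.util.Map;

// Toy ASM-to-GBL step: render a BLC-layer POJO source file from a class
// name and a map of field names to types.
class PojoGenerator {
    static String generatePojo(String className, Map<String, String> fieldsByName) {
        StringBuilder sb = new StringBuilder();
        sb.append("public class ").append(className).append(" {\n");
        fieldsByName.forEach((name, type) ->
            sb.append("    private ").append(type).append(' ').append(name).append(";\n"));
        sb.append("}\n");
        return sb.toString();
    }
}
```

Note what the template does not know about: no UI, no DB, no communications code, which is exactly where the complexity reduction mentioned above comes from.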
Finally, the generic UI and DB information has to be interpreted at runtime. This is definitely not an easy task, and the renderers and DB connection code are indeed challenging.
The advantage is that you only have to write this code once, since these components are universal. So, complexity is reduced by removing it from each individual project and concentrating it in a single product.
Hope this helps,
It seems to depend a lot on the tools applied. For example, at the OOP conference a few years ago there was a talk (Ströbele, OOP 2005) stating that it took 25 man-years to build a UML tool with Eclipse using some of its frameworks. Obviously, most companies can't invest that much in tool development. Another factor is how well the domain can be defined and narrowed down.
My experiences show quite different figures. For example, Panasonic has reported that it took 15 days (http://www.dsmforum.org/events/DSM07/papers/safa.pdf), Polar says 7-8 days (http://www.dsmforum.org/events/DSM09/Papers/Karna.pdf), Nokia Siemens Networks mentions 5 days (http://www.metacase.com/cases/architectureDSMatNSN.html), Sandvik stated two days, and a few others can be seen here: http://www.metacase.com/blogs/jpt/blogView?showComments=true&en...
Finally, I would like to emphasize that, as in all software development, the initial development time is not the only thing that counts, since the DSL and related generators almost always need to be updated (the domain changes, we learn how to model and generate better, etc.). If tools guarantee that older models still open and work even when the language is updated, the savings in tool development are again significant. Contrast this with tools that simply crash when the language is changed even a bit, or that ask developers to start grepping through earlier models :-(
There are several MDx related tasks:
1 - design of transformation language(s)
2 - implement transformation tool(s)
3 - design modelling language(s)
4 - implement transformations
5 - design your app
6 - implement your app
If you do all of these in one project, it's most probably overkill^10. There might be specific scenarios where you are able to do 3-6, but in my observation this is not the majority of projects. In typical enterprise scenarios you should focus on 5/6 and maybe do some lighter DSL work on 3/4. Developing a general-purpose 3/4 will definitely at least "stress" your budget; developing a reasonable general-purpose chain 1-4 may easily consume years and requires experience. It's out of scope for customer projects; it's product development.
Yes! It's not impossible to tackle those remaining 20% of the code.
But I think that if you try to generate that remaining 20%, the net effort put into the MDD stack would be unjustified for the potential benefits; it would really eclipse or overshadow the benefits of MDD. MDD is best at generating rote and repetitive code patterns; where the innovative part of the code begins, MDD breaks down. I would assert that we should not try to apply MDD to the parts of the software that are innovative rather than patterned.
Currently I am working on a small web application (as part of my master's thesis) and I am using UML activity diagrams to model behaviour. But even for my small application I had to create a lot of different stereotypes which are applied to actions.
While I think it is possible to model all necessary behaviour with this approach, it may not be worth it, since one has to create a lot of stereotypes to cover all the individual actions.
Regarding action semantics:
Do tools provide built-in generators which would generate, let's say, Java code from the action language, or do developers have to write the transformation rules from action semantics to the target platform themselves, just as they do for the static parts?
For action semantics, code generation is not as trivial as it is when generating code from structural models only. I am not sure the existing template engines can be used - I know I couldn't.
I believe the established executable UML products provide built-in support for code generation, so this is addressed internally. For AlphaSimple, we are currently using built-in templates for the structural aspects and generating behavior with Java code. We may change this to expose the templates so users can customize the structure-based generation, but the behavior-based generation will in principle remain internal/fixed. Once support for the first target platform is complete, though, we will look into opening it up.
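To illustrate why behavior generation tends to be hand-written code rather than a template: translating even a single action-language statement is a small compiler step, not a fill-in-the-blanks expansion. Here is a deliberately tiny, hypothetical translator for assignment actions of the form "target := expr"; real executable-UML tools handle a full action language internally:

```java
// Toy behavior-generation step: translate a hypothetical action-language
// assignment ("x := a + b") into the equivalent Java statement.
class ActionTranslator {
    static String translateAssignment(String action) {
        String[] parts = action.split(":=");
        if (parts.length != 2) {
            throw new IllegalArgumentException("unsupported action: " + action);
        }
        return parts[0].trim() + " = " + parts[1].trim() + ";";
    }
}
```

Even this trivial case needs parsing and error handling, which is why plugging it into a text-template engine is awkward.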
I'd be interested in what kinds of stereotypes you used in your diagrams; imho the key to successfully generating UI & behaviour is to make the "right" abstraction cuts in your target architecture; it does not suffice to just take the "traditional UI/behaviour code" and try to model it in activity diagrams.
As for action semantics: our framework allows specifying code snippets in various languages and plugging translators into the system; but at present we only use Java snippets, do very few macro expansions, and use Java as the target platform, so the need for translation is limited in this scenario.
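The "pluggable translator" idea can be sketched as a small registry keyed by snippet language. The interface and class names here are invented for illustration, not our framework's actual API; note how the Java-to-Java case degenerates to identity, which is why the need for translation is limited when snippets and target platform coincide:

```java
import java.util.HashMap;
import java.util.Map;

// A translator turns a code snippet in some source language into
// target-platform (here: Java) code.
interface SnippetTranslator {
    String sourceLanguage();
    String translate(String snippet);
}

// When the snippet language equals the target platform, translation is identity.
class JavaPassThrough implements SnippetTranslator {
    public String sourceLanguage() { return "java"; }
    public String translate(String snippet) { return snippet; }
}

// Registry into which translators for other languages can be plugged.
class TranslatorRegistry {
    private final Map<String, SnippetTranslator> byLanguage = new HashMap<>();

    void register(SnippetTranslator t) {
        byLanguage.put(t.sourceLanguage(), t);
    }

    String translate(String language, String snippet) {
        SnippetTranslator t = byLanguage.get(language);
        if (t == null) {
            throw new IllegalArgumentException("no translator for " + language);
        }
        return t.translate(snippet);
    }
}
```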
I hope to provide a summary of the stereotypes I used and their functions later on, once I have written my thesis, along with a conclusion about them. As you said, the right abstraction is a key point.
But it will take a while, so I hope I will remember to do it once I am done.