The Model Driven Software Network


FYI, Stanford is opening a free online course named Model Thinking. I think this is a great course for building a foundational understanding of modeling in general, and of how to think in models.

Back in my university days, I took a course on Software Engineering, covering its whole process (from requirements gathering to design, implementation, deployment, and finally maintenance). During the course, I was taught UML as my first modeling language (along with StarUML, a free UML modeling tool). However, after the course, the only thing we could apply was the knowledge of the software development process (from requirements gathering to maintenance). The UML part was more difficult to apply, because of the limitations of the tools:

  • Manual synchronization between artifacts: if your source code changes, you have to manually update all the related artifacts (from requirements to deployment documents) to match the current reality.

  • Limited round-trip engineering capability: round-trip engineering is a wonderful thing. With it, you can write code to create the actual work and have an overview of the big picture for design at the same time. UML Lab is an example: UML Lab round-trip engineering. However, the modeling tools (Sparx Enterprise Architect, UML Lab, Visual Paradigm) only support the object-oriented style, so they are hard to apply in other domains. It is good to see that Enterprise Architect supports modeling and transformation of databases (although database vendors already provide high-level design tools) and EJB (not really necessary).

  • The transformation rules and the modeling languages (like UML and BPMN) are unnecessarily complicated. People use the notations in different ways, and they have to invest significant time to learn them in order to use them in a reasonable way. Most people currently model and design by writing specifications directly, adding diagrams where needed, because that is the simplest way and carries no overhead.

Since, in the model-driven way, developers have to do too much work beyond their actual job, Agile was created. As you already know, the basic idea in Agile is to bond with the customer as closely as possible, to step back from them while the devs are working, and finally to deliver a usable product iteratively in an evolutionary manner (which means the next iteration evolves from the previous one). Agile is generally good. However, taken to an extreme like XP, I don't think it's a good practice. In XP, activities such as modeling, design, and documentation are considered more of a burden than what they deliver. Here is an example of how an XP expert thinks documentation is almost valueless: Much Ado About Nothing: Documentation. Personally, I currently have to port a TCP/IP server written in C++ using the Win32 API to Linux. The comments were extremely helpful to me, and they made my days easier.

In XP, people consider the source code to be the design and the model, and the tests to be the specification; hence Test-Driven Development. The downside, I think, is that this is only sufficient for small scopes. Even at a small scope, if a requirement has too many dependencies on other requirements, how can it scale without an upfront design? Otherwise you end up in a refactoring nightmare.
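To make the "tests as specification" idea concrete, here is a minimal JUnit 4 sketch; the Account class and its methods are hypothetical, purely for illustration. The test names and assertions read like a specification of the behaviour:

    import static org.junit.Assert.assertEquals;
    import static org.junit.Assert.assertFalse;

    import org.junit.Test;

    // The tests double as the specification of Account.
    public class AccountSpecTest {

        @Test
        public void depositIncreasesBalance() {
            Account account = new Account();    // balance starts at 0
            account.deposit(100);
            assertEquals(100, account.getBalance());
        }

        @Test
        public void withdrawalMayNotExceedBalance() {
            Account account = new Account();
            account.deposit(50);
            assertFalse(account.withdraw(80));  // the "spec": refuse overdrafts
            assertEquals(50, account.getBalance());
        }
    }

    // A minimal, hypothetical Account so the sketch is self-contained.
    class Account {
        private int balance = 0;
        void deposit(int amount) { balance += amount; }
        boolean withdraw(int amount) {
            if (amount > balance) return false;
            balance -= amount;
            return true;
        }
        int getBalance() { return balance; }
    }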

So, there's no silver bullet that lets us avoid planning. A wise man once said: "Plans are worthless, but planning is everything". The thing is: if planning is so important, and in our case models and modeling let us explore the domain concepts and processes while the implementation is a more detailed reflection of the plans, why do so many people in the software industry resist it? Who would build a house without a blueprint anyway? (Do you trust those who do?) People in other fields (such as mechanical engineering) consider technical drawings a very important step before shaping the actual material. And yet we are stuck with something like UML? Maybe our environment is too dynamic for documents to keep up (I'm not sure about this)?

Or maybe the reason is the adoption of models and modeling in the software world? Probably, after taking "Model Thinking", we can conclude that modeling is important, but its implementation (UML) does not deliver what is expected.

Replies to This Discussion

I can't follow how challenges in reverse engineering can represent limitations of model-driven development. IMHO, model-driven and round-trip engineering are antagonistic ideas. If models drive, code is clearly a derived artifact. The mapping flows one way only; there is no economically interesting way to map from code to model. And you don't go and edit derived artifacts (generated source code); that is crazy talk.
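To illustrate the one-way mapping, here is a minimal sketch (the "model" is just an entity name plus a field map; all names are made up, not any real tool's API). The model is the master; the emitted Java source is a derived artifact, stamped as such:

    import java.util.LinkedHashMap;
    import java.util.Map;

    // Minimal one-way model-to-code sketch: a trivial "model" (an entity
    // name plus typed fields) is compiled into Java source. The output is
    // a derived artifact, so it is marked as generated and not editable.
    public class TinyModelCompiler {

        static String generate(String entity, Map<String, String> fields) {
            StringBuilder src = new StringBuilder();
            src.append("// GENERATED FROM MODEL - DO NOT EDIT\n");
            src.append("public class ").append(entity).append(" {\n");
            for (Map.Entry<String, String> f : fields.entrySet()) {
                src.append("    private ").append(f.getValue())
                   .append(" ").append(f.getKey()).append(";\n");
            }
            src.append("}\n");
            return src.toString();
        }

        public static void main(String[] args) {
            Map<String, String> fields = new LinkedHashMap<>();
            fields.put("name", "String");
            fields.put("age", "int");
            System.out.println(generate("Customer", fields));
        }
    }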

If you are trying to understand an existing code base with models, sure, reverse engineering may have its value, but that has nothing to do with model-driven.

More on this in a couple of old posts:

Myths that give model-driven development a bad name

On code being model – maybe not what you think

Thumbs up! :)

One of the diagrams I like is the component diagram. I know we all know what it is, but I'll still give an example.

The component diagram is where I can organize things at a higher level: the components and how they are going to interact via their supplied interfaces. I then define the structure of each component with a class diagram. In practice, a component is just an interface (an interface in Java, or a .h file in C/C++) through which external components interact with the bunch of other classes it indirectly includes. With round-trip engineering, I can see the relationships between classes and components, and thus the "architecture", which provides a higher-level view.
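For example, a hypothetical Java sketch (not taken from any tool; all names invented): the interface is the component's published surface, and the classes behind it stay hidden from other components.

    // The component's published interface: the only thing other components see.
    public interface OrderingComponent {
        String placeOrder(String productId, int quantity);
    }

    // Hidden behind the interface: the classes the component wires together.
    class InventoryService {
        void reserve(String productId, int quantity) { /* stub */ }
    }

    class BillingService {
        String charge(String productId, int quantity) {
            return "receipt: " + quantity + " x " + productId;
        }
    }

    class OrderingComponentImpl implements OrderingComponent {
        private final InventoryService inventory = new InventoryService();
        private final BillingService billing = new BillingService();

        @Override
        public String placeOrder(String productId, int quantity) {
            inventory.reserve(productId, quantity);
            return billing.charge(productId, quantity);
        }
    }

    // Usage from another component:
    class Demo {
        public static void main(String[] args) {
            OrderingComponent orders = new OrderingComponentImpl();
            System.out.println(orders.placeOrder("book-42", 2));
        }
    }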

With round-trip engineering, we can see the "big picture" of the code structure immediately, because the code is mapped to the diagram on the fly: think about the structure a bit, change it, and let the tool map it back to code automatically, rather than having to switch to another program and wait for it to reverse-engineer. With this method, I can carry out implementation and design in "parallel", without separating the two as in the traditional way.

I agree that a model and its representation (UML being one of them) are different things. However, creating a model means extra work before jumping to the actual code, plus the burden of updating the model afterwards. In practice, I think modeling needs a medium/heavy visual representation to support it, rather than plain textual documents describing the model.

The current UML tools rely on the mouse too much. The time spent moving the mouse around, dragging and dropping, and adjusting the diagram to look nice outweighs the benefits of visual modeling. Textual UML may look like a better solution, but if you have to write its script at that level of detail, wouldn't people rather write the real code? For that reason, in my opinion, for modeling to benefit a development process it needs strong tools that support synchronization between the various artifacts, so that developers are freed from taking care of outdated artifacts. Previously, I only gave the example of synchronization between the code and the design view; we need synchronization between all the related artifacts, from requirements down to implementation. For example, if a requirement changes, the tool should be able to trace down to the related artifacts (related requirements, related design documents, related source code that needs to change) and suggest where in the source code the changes need to be mapped back. Some tools like Enterprise Architect support traceability, which is helpful, but it doesn't really reduce much of the manual work.

It is possible to achieve the idea I described above. One example is Eclipse. It has a very nice plugin named Mylyn, where I can download task tickets from project management systems such as Bugzilla, Trac, and Rational Team Concert. Each task in Mylyn has a thing called a context, which narrows your project view from thousands of files down to the set of files related to that task. You can share the context by attaching it to the actual web-based ticket, so if there is a bug or a requirement change, people can quickly address the issue directly in the source code. For languages like Java, Mylyn can even limit the scope down to the method level: only the relevant methods in a given file are visible. I haven't tried it, but I've heard people even share debug stack traces. Imagine if you could share the code architecture.

In my current situation, my colleagues are very hesitant about, even against, the idea of modeling upfront, because of the limitations I stated (and thanks to Agile for pushing things further in that direction). Personally, I can't code confidently if I haven't had enough time to study and understand the domain; otherwise, sooner or later I end up in refactoring hell. People who won't plan before writing code run into this problem, unless they have tons of experience with the thing they're going to create.

Note: I forgot to add the source link where I got the component diagram. It's from actifsource's blog: http://www.actifsource.com/

Take a look at ABSE (http://www.abse.info). It's an approach where the model is the source, and it is heavily based on reuse. It's textual, but version 2.0 of its specification will have diagramming too. The models are always trees, made of "Atoms" (reusable assets). ABSE supports traceability from requirements to deployment. It has some other unique features you can find on the site.

AtomWeaver is an IDE that implements ABSE. It manages a "live" ABSE model. The next version (April/May) will have diagramming and multi-tree support.

With multiple trees on a project you can have a tree for requirements, another for your system, another for issue tracking, another for documentation, and so on. You can easily go back and forth between linked Atoms, effectively going from requirements to source code, or vice-versa, if necessary.

Unfortunately, ABSE and AtomWeaver are both young, and there's still a lot to be implemented, but you can already try a lot of things with the current version.

Very interesting! I like their ideas. Gonna play with it now.

It's obvious that you missed the whole point of Rafael's comment. Rafael is describing a paradigm where the models are the code, and the models represent a higher level of abstraction than can be embodied in Java, C, C++, etc. In this view, the 3GLs (Java, C, C++, etc.) are to the model what assembly language is to a 3GL. You use a model compiler to generate the lower-level code, and you don't even consider round-tripping a desirable option. In the Shlaer-Mellor (now Executable UML) camp, we have been doing this for over 20 years!

The real problem with UML is that it is often mistaken for a method rather than just a standardized notation. There are many methods out there that promote a lot of time-wasting activities, so UML often gets demonized. It's just a notation, i.e., not responsible for its usage.

"Model-driven" means the models are in the driver's seat, not the lower-level code. Combine a subset of the UML with the Executable UML method, and you have a programming language.

The issue of time-wasting modeling activities is well known among industrial practitioners of MDD. IMHO, there are two reasons behind it:

- use of a general-purpose methodology and notation where a domain-specific method and DSLs are more appropriate (this leads to verbose modeling with the former, whereas concise modeling is possible with the latter)

- current MDD tools do not allow customization of user interaction (the issue of e.g. excessive mouse clicks, but also of how modeling elements are manipulated). In my experience, user interaction is also very much domain-specific.

I would like to stress that today even the best DSM tools (aka language workbenches), which allow efficient customization of modeling concepts, have a hardcoded "model" of how the user interacts with the model. IMO a proper MDD tool is DSMI (domain-specific modeling and interaction).

That's obviously the ideal: everything should be tailored to best support development in that domain: language, notation, generation, interaction, persistence, multi-user support, integration with code IDE, integration with other modeling languages, scalability for large models etc.

However, doing that to the full extent requires each modeling tool to be built from scratch, with its own requirements analysis, design, hand-coded implementation, testing, etc. Obviously, for all but the largest numbers of users and the most static sets of requirements, it makes more sense to accept some generic tool functionality in order to get a modeling tool faster.

It's also not necessarily the case that somebody building their first modeling tool by hand will come up with better solutions to the inevitable compromises you make in designing interaction, performance etc. In fact, the metamodeler is often pretty poor at suggesting changes in interaction, as indeed are modelers: when you query their suggestion, they too realise that it wouldn't be a good solution in general for them, even if it felt like a great idea for that one task of one person on one day. Of course this isn't to say that existing tools are perfect, far from it!

I think a language workbench should apply the principles of DSM to itself: people experienced at building modeling tools should make the generic modeling tool parts, and offer facilities that make it easy for people experienced in a particular domain to specify the language they want, and get a good tool out.

My experience is that the more a language workbench can raise the level of abstraction of language creation away from the code, the more the resulting language will improve the productivity of its users. A good language gives a much better improvement in productivity than saving some clicks in a poorer language. The language workbenches that focus on allowing low-level changes have not been as successful at raising the productivity of as many modelers by as much as the workbenches that focus on raising the level of abstraction of modeling tool creation.

Of course I did not mean total adaptation of a tool to domain needs. Such tools have already existed for a long time: they are the language workbenches that focus on allowing low-level changes. However, the development cost of this freedom is prohibitively high to keep up with unavoidable DSL evolution, so I do not consider them modern MDE tools.

Instead of trying to accomplish everything, MDE/DSM tools do best to tackle the next most painful bottleneck. IMHO this bottleneck is the way users interact with their models. An implication is that at least tool dialogs, toolbars, contextual menus, custom actions on model elements, and the binding of custom actions to menus and key-shortcuts should be customizable.

The important difference from the "low-level" language workbenches is that interaction customization should be modelled with proper DSLs and implemented by the MDE tool, without low-level coding.
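As a purely hypothetical sketch (no real tool's API; every name is invented), such an interaction model might declare a custom action bound to a contextual menu and a key-shortcut. It is expressed here as plain Java only for illustration; in a real tool it would be a DSL:

    // Hypothetical interaction model, expressed in Java for illustration.
    // It declares WHAT is customized (action, menu, shortcut) and leaves
    // HOW to the MDE tool, without low-level coding.
    public class InteractionModelSketch {

        record CustomAction(String name, String appliesTo,
                            String contextMenu, String shortcut) { }

        public static void main(String[] args) {
            CustomAction promote = new CustomAction(
                "Promote to Interface",  // action name
                "ClassElement",          // model element it applies to
                "Refactor",              // contextual menu it appears in
                "Ctrl+Alt+P");           // key binding
            System.out.println(promote);
        }
    }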

Interesting discussion. 

Coincidentally, we've been working lately on a modelling frontend for our tool, built with the tool itself, i.e. we're generating it. And in fact all the issues we had had with customer projects before popped up in this project too, from the UI down to the DB. Somehow it was to be expected.

Indeed, we consider the UI issue as very important for acceptance and usability.

Moreover, I think there is (unfortunately!) not a single bottleneck; rather, the "Business/IT gap" needs to be filled thoroughly. I consider the following ingredients non-negligible:

  • a universal, neutral intermediate model (technically neutral as well as neutral with respect to the application domain)
  • a powerful backend behind it (M2UR: model 2 usable runtime ;-) )
  • a stack (!) of DSLs on top of the universal model, allowing modelling at all levels in parallel (i.e. the 80% of easier issues handled very nicely and domain-specifically, and the 20% of complicated ones handled without restrictions at their respective levels)
  • social components, since a business user might simply not want to interact with a machine interface in principle, howsoever capable it might be

Andreas, I agree with you on all points. There is not a single bottleneck, and filling the Business/IT gap (and, more generally, the problem/solution gap, as MDE is applied in more contexts) is the ultimate goal. My replies were made in the more limited scope of DSL development and tool support for that task.

Also, thank you for sharing your experiences with the UI issues. I recognize these as well. In one extreme case, I had a business user who simply refused to interact with a DSL due to its usability awkwardness. Instead he went back to the old tools (an office package), but this time with insights and new concepts learned from the DSL.
