The Model Driven Software Network

Raise your level of abstraction

Hello everybody,

just joined MDSN - looks like there's a lot of useful reading available here.

Short self-introduction:

Computer science major; 20 years ago I got into developing quite a few data-driven applications with a similar scenario, like:

one or more enterprise data structures defined (branch, department, ..., employee) and/or (warehouse, block, ..., part) and/or (lists of processes, remunerations, customers, etc.), which were kept in database tables. There were CRUD operations defined on those related tables, and there were some domain-specific operations too. Plus various reports. Pretty common and usual stuff.

I started with some generalization attempts: a business object consists of a data container (a DB table), view(s), form(s) and relations to other objects. It could all be expressed in plain text. CRUD operations could be executed by precompiled code - no actual generation here. The UI of an object was interpreted by pre-written code as well - no code generation here either. One-to-many and many-to-many relations were again interpreted at run time. Only custom logic had to be manually coded. So that is how it started. I could elaborate if needed.
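
To give a rough idea of what I mean, here is a minimal sketch in Java. The class and method names are invented for this post only - this is not the actual Datalator code, just an illustration of "declarative definition plus generic interpreter, no code generation":

    import java.util.*;

    // Illustrative sketch only, not the real Datalator code.
    // A "business object" is just a declarative definition: a table name,
    // its columns, and its relations to other objects. Generic, precompiled
    // code interprets the definition at run time - nothing is generated.
    public class BusinessObjectSketch {

        static final class ObjectDef {
            final String table;
            final List<String> columns;
            final Map<String, String> relations; // relation name -> target object

            ObjectDef(String table, List<String> columns, Map<String, String> relations) {
                this.table = table;
                this.columns = columns;
                this.relations = relations;
            }
        }

        // Generic CRUD interpreter: builds SQL from the definition instead of
        // generating per-object code.
        static String buildSelect(ObjectDef def) {
            return "SELECT " + String.join(", ", def.columns) + " FROM " + def.table;
        }

        static String buildInsert(ObjectDef def) {
            String placeholders = String.join(", ", Collections.nCopies(def.columns.size(), "?"));
            return "INSERT INTO " + def.table + " (" + String.join(", ", def.columns)
                    + ") VALUES (" + placeholders + ")";
        }

        public static void main(String[] args) {
            ObjectDef employee = new ObjectDef(
                    "employee",
                    List.of("id", "name", "department_id"),
                    Map.of("department", "department")); // one-to-many, resolved by the runtime

            System.out.println(buildSelect(employee));
            System.out.println(buildInsert(employee));
        }
    }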

That approach evolved into something I cannot categorize and I'm looking for help and opinions here.

Today the highest level of abstraction is the same: a business object is a table, plus UI(s), plus CRUD operations, plus relations to other objects, plus an API to implement custom data processing. An application is a set of such objects, and it most likely could be expressed in UML or the like. Although it is not.

There was a custom language before; today an app definition is hidden behind the objects' Visual Properties Manager(s). The app definition is contained in the relational DB as a set of text-based object definitions. There is server-side code waiting for client code to connect. That is how it works.

The client supplies credentials and, if they are valid, receives back from the server an "entry" object's definition from the database - usually the Main Menu (a table). The client interprets that definition and creates the UI. A human user makes a gesture (keyboard or mouse; a touch interface is not supported yet) and activates one of that menu's links. Another object is loaded from the server, and so on. Menus are immutable objects - they are easy. When a mutable table is loaded from the server, both the client (grids, views, forms) and the server (sessions, locks) support CRUD and custom operations (e.g. delete every odd record in the table) in an ACID fashion. All clients are synchronized in real time - meaning all content changes are instantly propagated to all relevant sessions. There is a gesture language to visually define the parameters of complex requests. SQL gets generated and executed by the server part, and the results are passed to the relevant clients/sessions. There is a Java API (the architecture is language agnostic) which is used on loaded objects on the client side only (!) by mimicking human gestures: "load object"/"edit object" and so on. The server is never restarted.
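
Roughly, the client-side flow looks like the sketch below. Again this is only an illustration with invented interfaces and names (Server, loadDefinition, and so on), not the real protocol - the point is that the client only ever receives text-based definitions and interprets them:

    import java.util.*;

    // Illustrative sketch of the client-side flow only.
    public class ClientFlowSketch {

        interface Server {
            String login(String user, String password);          // returns a session token
            String loadDefinition(String token, String objectId); // text-based definition from the DB
        }

        // The client never receives code, only definitions; it interprets them
        // and builds the UI (menu, grid, form) from the text.
        static void run(Server server) {
            String token = server.login("alex", "secret");
            String current = "MainMenu";                          // the immutable "entry" object
            Deque<String> history = new ArrayDeque<>();           // kept for "back" navigation

            // One user gesture = load another object and interpret it in turn.
            for (String gesture : List.of("open:Customers", "open:Orders")) {
                history.push(current);
                current = gesture.substring("open:".length());
                String definition = server.loadDefinition(token, current);
                renderFromDefinition(definition);                 // grids, views, forms
            }
        }

        static void renderFromDefinition(String definition) {
            System.out.println("interpreting definition: " + definition);
        }

        public static void main(String[] args) {
            run(new Server() {
                public String login(String u, String p) { return "token"; }
                public String loadDefinition(String t, String id) { return "table:" + id; }
            });
        }
    }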

The client part contains the Properties Manager code, which allows defining new objects and redefining existing ones without leaving a live application. Team development is welcome.

There are client-side processors which allow linking a table cell to images, PDFs, some HTML. More processors - more data types could be supported. Finally, there are pure UI objects, with no data containers defined. They work pretty much like a Java Canvas. After being loaded they call user code, which has access to other objects in the database. That allows creating UIs beyond grids and forms - up to multiplayer games - with custom programming, of course.
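
The processor idea, as a minimal sketch (the interface and class names here are made up for this post, not the actual API): adding a new data type means registering one more processor, the generic grid code never changes.

    // Illustrative sketch of the "cell processor" idea.
    public class CellProcessorSketch {

        // A processor knows how to turn a raw cell value into something viewable.
        interface CellProcessor {
            boolean supports(String contentType);
            void open(String cellValue); // e.g. show an image, a PDF, some HTML
        }

        static final class ImageProcessor implements CellProcessor {
            public boolean supports(String contentType) { return contentType.startsWith("image/"); }
            public void open(String cellValue) { System.out.println("showing image " + cellValue); }
        }

        static final class HtmlProcessor implements CellProcessor {
            public boolean supports(String contentType) { return contentType.equals("text/html"); }
            public void open(String cellValue) { System.out.println("rendering HTML " + cellValue); }
        }

        public static void main(String[] args) {
            java.util.List<CellProcessor> processors =
                    java.util.List.of(new ImageProcessor(), new HtmlProcessor());
            String contentType = "image/png";
            for (CellProcessor p : processors) {
                if (p.supports(contentType)) { p.open("photos/part-42.png"); break; }
            }
        }
    }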

That thingy has been working in a production environment for 3 years now. It is extremely productive and helpful. But I'm struggling to classify it. What is it? A RAD RIA tool? It does generate some SQL, it allows live app design like web CMSs do, it is based on a generalization of an SQL table... I'm more an experimenting practitioner than an academician.

Please help me define what that Datalator thing is, and what place it could hope to find in the computing world. I would be much obliged for reviews and opinions. Sorry for the hastily prepared story.

The site where I try to describe/market the Datalator is, well, www.datalator.com aka www.fancydata.com aka www.moderngroupware.com.

I'm no good with web sites, forgive me. Please comment.

Replies to This Discussion

Hi Alex,

my personal view on the issues is like this: in software engineering we're experiencing a phase shift from 3GL, 3-tier manual programming to some next level, which has not yet been reached and whose contours are only slowly emerging (it is likely model based, I guess). Unlike preceding developments, say from no-tier to 3-tier or from procedural to OO, the present shift is a heavy one, very nonlinear, complex. Maybe you can compare it to something like the forming of cells in biology from cell ingredients - a transition which could not just take place, but required lots of crucial prerequisites to be available.

The same with software: no single engineering or development process improvement helps us get out of the local minimum we're in, the "big mess" of expensive, ill-fitting, slowly developed systems.

Many, many improvements have been introduced, many bright ideas, but - as my guess goes - a critical mass of them is required simultaneously to leapfrog the gap.

Worse: it is not only an improvement of all the parts we are using; some substantial refactoring of our core abstractions is required. Just to give one example: with SOA we have the problem that our 3-tier bundles are rather worthless: either you rip all the UI tiers off, integrate your services, and build new UIs on top of them, or you can only combine all the UIs in a very stupid fashion.

That said, to me it sounds like you have done some hard work on many fronts in this arena of development between 3GL-old-style and next-generation-new-style. Classification? I don't know. It has two sides: a marketing one, and another one of deeper understanding. The marketing part seems rather easy. Just look vaguely at the technologies you're using, and you might end up with something like model-based agile social superflexible cloud-capable xy. The other part seems more difficult, and it might be the case that a really appropriate classification will only be possible retrospectively, by future software historians :)

All best and good luck with your system,

Andreas

Thank you Andreas,

>The marketing part seems rather easy

well, to me it is the most difficult part and not all that interesting. My particular combination of known technologies is available and useful today. My guess is that the interesting part is the long-term selection of big-scale software components (like chips or cells). Everything else is just there to support the lifecycle of those cells. That support system (IDE, messaging, UI, DB) is very lightweight and pretty simple compared to containers like EJB. The concept is proven and there are a lot of ways to improve it, but that's not a job for a single developer. To build a company from scratch to promote and apply it would take a long time. Too long. I just grew another company out of its infancy - an 8-year-old transport firm (escadron.ca). That said, I'd rather "pass the flame" if somebody might be interested.

The much more interesting part is, as you put it, "to get out of the local minimum we're in". That is my second favorite :-). Again, being an experimenter, I did some work with "immersive programming": starting from auto-definition of the relevant modelling "slang", converting it into 3D shapes and manipulating them in a 3D environment - objects and algorithms and data. A developer team would work in 3D goggles in virtual reality, creating object shapes, moving them around and building extensive soft-scapes. Different roles are supported (architect, linguist, coders...)

I haven't gotten far with the implementation east of the curtain, and it doesn't look like I'm going to finish it here on the west side. No complaints - just a curious observation :-).

For a long time I haven't had understanding people to speak with; thank you, Ladies and Gentlemen, for your attention. I have to do more reading here...

> the interesting part is long-term selection of big-scale software components (like chips or cells). Everything else is just to support lifecycle of those cells

In general I do agree, though I think it's not clear on which abstraction level those cells will exist. Around 1998 we started an in-house initiative for something we called a "well defined component" back then. The idea was to gather all the information that is needed to define a component that can be used in various contexts. We tried to realise it on the 3GL (OO) level, as well as on the container level (EJBs became famous). It failed, and it was clear we needed to reach some modelling level. To this day, I still haven't seen such a complete spec, even on the model level. But we're getting closer. So I think that the old and powerful idea of truly pluggable parts will require a sound model-driven infrastructure first. But that's all gut feeling, not a solid theory ;-)

Hi Alex,

Nice to see your story here, this was a surprise to me. Your system is a good instance of what I call a model-driven application (MDApp), or a model-driven information system (MDIS).

The key principle for achieving an MDApp is what I call the model-driven mechanism (MDM). Thus, in general, we have a new type or class of system:

"For a system, if all of its functions and behaviors can be defined, controlled and changed through MDM in runtime, then it is a full model-driven system (MDS). In brief, MDS is a class of system that the main functions and behaviors are controlled or realized with MDM. (Yu 2005)"(from my presentation: Model-Driven Mechanism in Information Systems and Enterprise Engine...)

Note that there have been some other usages of the phrase 'model driven information systems', that is, it has been used roughly to refer to systems which were developed with such MDA/MDD(tm) approaches. IMHO that is ambiguous and inappropriate: in fact we can also use any MDA/MDD approach to develop an MDS, although so far most of the systems developed with MDE/MDSD approaches (as you would see in the MDE community) were not MDSs.

This is a "cool" topic yet ;-)

I'm trying to write something to introduce this. Some related essays can be retrieved at http://thinkinmodels.wordpress.com/tag/mdapp/

Have a nice weekend!

TY,

Much agreed, see also comment above.

To some degree, code generation (M2T) serves just as a kind of intermediate technology used to climb up to the next abstraction level, since it allows easy experimentation. It has served that purpose before (early assemblers and early OO precompilers were nothing but code generators). To another degree, I think that for the current "phase shift" this will not be fully true, and CG will stay around for some parts.

Hi Alex,

There are a few examples of environments that are similar to the environment you describe: Mendix and OutSystems. The similarities in their approach to, and the ingredients of, agile application development include the domain model, business logic, user interface model, etc. These are modelled (in DSLs). Other generic aspects (e.g. storage, CRUD operations, web-based implementation of the UI, etc.) are not modelled explicitly, but are either interpreted or code-generated; their architecture reminds me of that of DDD. As far as I know, there is no classification. If you read about Mendix or OutSystems, you will find terms like RAD, agile platform, PaaS, which is still pretty generic. If I had to classify such systems, I would use the following categories: business modeller, model interpreter, code generator, DDD, CIM, DSL, CASE, enterprise, web-based, PaaS.
Hope it helps. Good luck.
Thank you Andreas, Mr. Yu and Andriy. The most interesting and supportive responses. I've done some reading on MDS, but I'm light years behind you in understanding and terminology. I do have a few naive questions, which I'd like to ask here instead of spending days digging out the answers myself. I'm a one-man company - just not enough resources. May I? But before asking, I'll answer first.
@Andriy: thank you for helping to classify my Datalator. OutSystems and Mendix - those systems are built around the abstraction of a "lonely table". There are many similar tools which can build and support a table, but not a "society" of related tables. Definition and use of relations is left to hand-coding. I'm trying to work with relations in a purely visual and purely declarative way. My "videos" link below contains two video responses to the famous OutSystems screencast "How to build an Application in 5 mins" - I do it in 2, and with related tables.
That was in 2009, and it may well be that something has changed since...
@Mountriver: thank you for the response and for the good weekend wishes - I actually had a good one :-). I like your definition of a "full model-driven system". I'm still having trouble applying the term "model" to the pretty general abstraction of a "table-based" system. Does that mean that my particular model is a system of related tables?
@Andreas: thank you for "future software historians" and for praising "such a complete spec, even on model level". I feel better knowing that some people can recognize the complex "MD" mechanics behind the facade of a particular application.
 
I'd be very pleased to hear about RAD systems in the domain of data-driven applications where table relations can be defined and controlled as easily as a particular table is controlled.
 
For those who might be interested, I have a few "raw" screencasts which are not ready for publishing - not polished at all. Here they are: www.fancydata.com/videos.html. Forgive me my narration.
 
And here is my question: in UML or any other modeling language, how well are data structures handled? I know there are pictograms for "database" or "table", but what would the simplest "bubble-sort" algorithm look like in a good contemporary modeling language? Variables, arrays, vectors, pointers, strings and such? I realize that processes or directives are easy to represent, but data are usually substituted by their identifiers. IMHO that breaks the balance between the representation of data and of algorithms in the same model.
In other words, is MetaEdit (for example) able to model itself in a pure "ModelToCode" fashion?
 
Thank you all for reading and answering.

When I got the primary idea, more than a dozen years ago, and chose 'model driven', the inspiration came from the so-called "table-driven" approach for apps :-)

As for 'model'... the schemata are models, as well as the data. The point is what the models model (the business or your system? in black or white? and so on) and how the models work (MDA/MDD or MDM/MDS? design time or run time? etc. - the difference is the key, and it is huge).

In general, UML is more or less apt for modelling everything you want, even a bubble sort, why not. But I think the key issue here is the "abstraction level", i.e. why you would want to do that on a certain level, and how easy it is to do so.

One can distinguish at least these levels, from lowest to highest:

- Relational DB level, where tables reside

- OO 3GL programming; the RDB is abstracted away with the help of an OO-RDB mapper; on the OO level there are only objects, including arrays, vectors etc.; a table is something like a collection of objects, more or less; relations are typically modelled as navigable attributes, sometimes as objects as well

- Modelling Level I (typically technical models, often domain models too), where objects are objects, but relations are first-class citizens; code has traditionally been treated here a little like a stepchild, which is partly why so few "fully model generated" projects have succeeded. The Executable UML initiative tries to improve that by defining a standard for describing code on that level, but the computational model behind it still relies on an object world where runtime classes correspond 1:1 to model classes (more or less)

- Modelling Level II, where code is not attached to classes in such a 1:1 fashion anymore; instead, code is divided into fine snippets, related to a set of contexts (e.g. BL instance context, factory context, possibly UI contexts, state contexts), corresponding more to business rules than to traditional code

Of course, the matter of code in conjunction with such levels is an ongoing debate; we're far from common agreement, not to mention standardisation. And of course, in almost all practical projects there is, to a certain percentage, a need to break out of the abstraction level and incorporate some dirty code from below.

Now what about "bubble-sorting"? This is something I would want to do at the OO 3GL level, encapsulate it, and refer to such functionality in abstract form from the higher levels. E.g., it might suffice to say that some relation is sorted according to attributes x, y, z; how it is done is a matter of the interpretation system (plus maybe some hints).
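
To make that concrete, a tiny sketch in plain Java (the names are invented for this post): the model level only carries the declaration of the sort attributes, and a generic piece of runtime code interprets it; the algorithm itself stays at the 3GL level.

    import java.util.*;

    // Rough illustration only: one merely *declares* that a relation is
    // sorted by attributes x, y, z; the interpretation system turns that
    // declaration into an actual sort.
    public class DeclarativeSortSketch {

        // The "model": just a list of attribute names to sort by.
        static void sortBy(List<Map<String, String>> rows, List<String> attributes) {
            Comparator<Map<String, String>> cmp = (a, b) -> {
                for (String attr : attributes) {
                    int c = a.get(attr).compareTo(b.get(attr));
                    if (c != 0) return c;
                }
                return 0;
            };
            rows.sort(cmp); // how it is sorted is the runtime's business
        }

        public static void main(String[] args) {
            List<Map<String, String>> employees = new ArrayList<>(List.of(
                    Map.of("name", "Ivanov", "department", "B"),
                    Map.of("name", "Smith", "department", "A")));
            sortBy(employees, List.of("department", "name"));
            System.out.println(employees);
        }
    }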

Alex, 

Mendix and OutSystems have evolved significantly in recent years. The current versions abstract away from tables. Instead, the user models domain objects and the relationships among them. There is no code with generic functionality attached to the objects; the code is derived from the context of the domain objects. [This is a short and incomplete summary.]

As for the "bubble-sorting" question, I absolutely agree with Andreas. I would like to add that it is not necessary to model everything: "bubble-sorting" is a well-known algorithm that is best implemented either in the interpreter or in the code generated from a model. What needs to be modeled is declaration that a data set be sorted by a particular algorithm.

Can you explain what you mean in your last question about MetaEdit+?

@Mountriver: sorry, I lost you... have to do more reading. Will do and respond.

@Andreas:

>UML is more or less apt for modelling everything you want, even a bubble sort, why not

When I tried to play with flow charts, trying to produce code from a schema of, say, a "bubble sort", I stopped dead because I could not work with variables. There are flow charts (http://en.wikipedia.org/wiki/File:LampFlowchart.svg) and there are dataflow diagrams (http://en.wikipedia.org/wiki/File:Data_Flow_Diagram_Example.jpg).

I felt those were the trunk and the tail of an elephant, and I started looking for a diagramming metaphor to feasibly represent the whole elephant. A bubble chart is ugly and less readable than the code it represents.

The big idea was to build a visual tool (why visual is another discussion) which is general enough to represent itself. Like an IDE that lets you work on its own code. Like a doctor who must be healthy enough to treat patients :-).

I cannot find right now the statement that 2 basic shapes - the if-block and the directive-block - could represent any code of any program. Correct me if I'm wrong - I was in that territory a while ago. So we do have 2 amino-blocks of computer programs' DNA; we just do not know yet how to handle huge DNA dimensions. So we evolve methodologies: 3GL languages, OOP and now code generation :-). We apply different levels of abstraction - that is a way to approach the problem too, right?
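
Just to show what I mean by those amino-blocks, here is a bubble sort written so that every step is either a plain directive (an assignment or a swap) or an if-decision, glued together by repetition - nothing else is needed:

    // Illustration: each step below is either a directive or an if-block.
    public class BubbleSortBlocks {

        static void bubbleSort(int[] data) {
            boolean swapped = true;                 // directive
            while (swapped) {                       // repeat the block below
                swapped = false;                    // directive
                for (int i = 0; i + 1 < data.length; i++) {
                    if (data[i] > data[i + 1]) {    // if-block
                        int tmp = data[i];          // directive
                        data[i] = data[i + 1];      // directive
                        data[i + 1] = tmp;          // directive
                        swapped = true;             // directive
                    }
                }
            }
        }

        public static void main(String[] args) {
            int[] data = {5, 1, 4, 2, 8};
            bubbleSort(data);
            System.out.println(java.util.Arrays.toString(data));
        }
    }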

My belief is that we have to understand how to work with the smaller blocks (bubble...) before generalizing further. I did produce first schemas for simple sorts, with data and code, and I guess I could express the tool itself in those terms. I haven't finished the experiments though.

Why do we create those abstraction levels in the first place? To deal with navigation inside a huge body of linear code, which is abstract enough by itself. A program is not a linear, one-dimensional set of code lines, right? But we stubbornly express it as text, and afterwards we create complex methodologies to deal with that non-optimal representation.

What if, in the first place, we expressed a program as 2D or 3D shapes on a surface, or on multi-tiered surfaces? Something like a SimCity game, only with basements and multi-storey buildings. With neighbourhoods (custom code), factories (DB engines) and communications between them. And we put the Google bird's eye view on top :-) - maybe all the problems with comprehending the whole picture, categorization issues, encapsulation, abstraction levels - they just go away, replaced by a magnification level? I'm against flat diagramming - UML or not - although it is better than text-based coding. I'm telling you this because I have played with nice 3D softscapes and felt the difference... And the human world is 3D; we perceive 3D and think in 3D much more productively - evolution, right? "You may say I'm a dreamer..." :-). But that Datalator thing of mine is working fine - so I do produce some sober judgement. Sometimes. I hope.

Andreas wrote above about the mess we are in. I'd rather not tell you gentlemen how to approach the problems you have been working on for decades. But we cannot deny the effects of overexposure either. It might not be the case, and it might be :-) I'm a guy of practice. I'd rather build a 3D diagramming "model-to-executable-code" tool if somebody cared to hire me to do that :-). I'll do it anyway, but much more slowly without support - recreational style.

@Andriy: on MetaEdit: after learning that such a tool exists, I'm full of hope to rebuild my humble Datalator. I know there are a lot of improvements to be made. I read that it saves tons of time. But after looking into it, I do not feel it could help. I do not see how it could deal with diagramming the many simple "bubble-sorts" which my program consists of. Am I right? I cannot afford to do the full set of diagramming and then pass it to my expert developer, which is me. Or am I missing something here?

Mendix: from the screens and examples I did not see how I could create a simple app to track, say, my CRM efforts. I need a table with events (calls, emails, etc.). I need to keep names and categories of contacts. Call them objects or tables, derive or attach code - I do not see that I can simply create a container for an event and relate it to a container for categories of events. Without coding... Again, that is my first impression. Am I mistaken?

Gentlemen, I deeply appreciate your input; I just got excited - it turns out code generation has been my second favorite for 3 decades :-). I probably have to keep it in a more academic way, right? Anyway, thank you.

 

Should I rephrase (at least for myself):

a) data and code on the same motherboard (playground, whatever)

b) the ability to self-express is a must

c) 3D is better than flat

Back to my silver mines...

> ...maybe all the problems [...] just go away, replaced by a magnification level?

It is a matter of quality on different abstraction levels, not pure quantity and size (scale). Abstraction is the opposite of linear scaling, I'd say.

> ...nice 3D softscapes [...]"You may say I'm a dreamer..."

Not at all :-) I once (nearly 20 years ago) dreamed the same dream (possibly you'll like it, Alex: CAP). I'm convinced we will work like that one day, but it may still take some time.

Nevertheless: even in 3D, abstraction levels are inevitable (compare the real world)

 

> The big idea was to build a visual tool [...] general enough to represent itself.

We've recently done just that with our own tool, though that does not imply that we do all basic programming in the tool. Instead, we modelled a modeller, and both rely on a core library containing the "3GL implementation fundament" - no need to describe that on the model level.
