Wednesday, February 1, 2012
Dreaming of an Eclipse Plugin-Store...
A while ago, I wrote a little tool for exporting UML-like diagrams of Java classes and packages to OmniGraffle. I blogged about that tool and, according to my statistics, it got downloaded over 100 times. I also announced that the tool would stop working in 2012, which it actually did. As I wrote in the announcement of the tool, I wanted to use the feedback to decide whether or not to continue developing it.
So, what is the feedback after almost six months? There were a few comments, and about 150 downloads according to my logs. Although I added a donate button and a Flattr button to my blog, I received no money at all. The natural consequence would be to stop developing (and providing) the tool.
Today, someone posted a comment because the tool had stopped working (just as announced). The commenter also wrote that he needs the tool to create some diagrams. Hmm... As I'm trying to be a good guy, I published an update of the tool working until June 2012.
I won't complain about people not giving any feedback or money voluntarily. Instead, I'm wondering how to make the tool available for a small amount of money. Actually, I don't know of any Eclipse tools sold for a couple of dollars/euros, except Log4E. Most tools are either freely available, or they are really expensive. Log4E comes in two versions: a free community edition and a so-called "Pro" version for only €7,50. I purchased that tool a long time ago, more to support the author than because I urgently needed the additional features of the Pro version. However, since Eclipse does not support this kind of "business model", it is rather complicated to install the license key (and keep it up to date with new Eclipse installations), and I figure managing licenses and payments is time-consuming for the author as well.
Apple, and also Google, have successfully created systems enabling authors of software to make (a little) money by selling their products very easily. Although I don't like the Apple way of approving software, I'm wondering if some kind of "Eclipse PluginStore" would be a good idea. People spend a lot of money on "apps", including a lot of small games. If buying a commercial Eclipse plugin were as simple as purchasing an iPhone/Android app, would people do that? And how many programmers would publish their tools then? Maybe combining that kind of store with a BugStore (see "Should We Pay for Eclipse Bug Fixes?" for a summary of a discussion that took place in April 2010) would be a good idea as well...
As long as there is no Eclipse PluginStore available: How do you sell your tools?
Thursday, December 22, 2011
GEF3D goes Git, Maven/Tycho, and Hudson
Standing on the shoulders of giants
Abstract: Setting up a continuous integration build based on Git, Maven/Tycho, and Hudson is surprisingly easy. I assume that this is no real news for most readers. However, I was very skeptical about that, especially because of all the project dependencies. So, this posting is meant for readers hesitating to set up an automatic build system because they think it would be too complicated -- just as I thought until... well, until now.
One of the things on my todo list for 2011 was to set up a continuous integration build for GEF3D. I did set up such a system several years ago, using some XML files describing a module and its dependencies, and an XSL transformation generating Ant scripts based on these module descriptions. In order to run nightly builds, cron was used -- yeah, good old times. I remember trying Maven back then, and I was so disappointed that I wrote my own tools. Due to this experience, I was quite wary of setting up a build system for GEF3D.
In the beginning, Miles helped me to set up a Buckminster-based build. He had some experience with this tool because his build system for the AMP project is based on Buckminster as well. Since Miles had switched to Git, he suggested that the GEF3D project switch as well. This would simplify the setup, as we would only have to deal with one version control system. I filed a bug report for moving GEF3D from SVN to Git. Since I had read in the Git migration guide that moving to Git is also a good opportunity to refactor the structure of the project, we decided to introduce folders separating plugins, features, examples, and so on. Somehow my report got forgotten... As it was very complicated to configure the Buckminster build, and due to other things, I didn't push it either.
Migrate to Git
So, as the year is reaching its end, I decided to give it another try -- despite all these unknown tools such as Buckminster, Git, and Hudson. I started with Git. The first giants I have to give kudos to are the Eclipse Git guys, and Stefan, who wrote a nice blog posting about how he moved CDO to Git. The basic idea of Stefan's approach is really simple:
- Create a Git repository locally and use svn2git to migrate the code. Note that there exist different svn2git tools: I used https://github.com/nirvdrum/svn2git, while Stefan used https://github.com/schwern/svn2git.git! The import is a single command, in my case
svn2git https://dev.eclipse.org/svnroot/technology/org.eclipse.gef3d --authors users.txt
users.txt is a list of the committers, with one line username = firstname lastname <email> per committer. Very simple, indeed. Since GEF3D is not that big, the migration took only 10 minutes, and the created Git repository has a size of about 9 MB.
- Then I refactored the project structure, simply done via the command line: git mv is your friend here. (Kristian helped me with little problems, as I'm a Git rookie as well ;-) ).
- Commit all changes to the local Git repository, pack the ".git" repository, upload it to developer.eclipse.org and let the webmaster unpack it at git.eclipse.org.
Maven/Tycho
During my research about some Git questions, I stumbled over the GMF Tooling project (well, I have known that project for a long time, but I never cared about its "internal" structure). It uses the same project structure as the GEF3D team had decided to use for GEF3D, and it uses Tycho. Although I had my (bad) experiences with Maven, I gave Tycho a try. And again, I was very much surprised at how easy it is to set up a complete build system with Maven/Tycho. I used the GMF Tooling poms as a template, and after a couple of minutes (not hours!) I had a build system which could build most parts of GEF3D. Kudos to the Tycho giants! Besides some special Eclipse packaging things, e.g., telling Maven how to handle an Eclipse plugin (and probably much more hidden under the hood), one really nice feature is the ability to use p2 repositories as Maven repositories. GEF3D has dependencies on GEF, GMF, and EMF. To resolve these dependencies, I only had to define a single repository:

<repository>
  <id>Galileo</id>
  <layout>p2</layout>
  <url>http://download.eclipse.org/releases/galileo</url>
</repository>

This is so cool!
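To give an impression of how little configuration this takes, here is a minimal parent pom sketch along these lines. The group/artifact ids and the Tycho version are made up for illustration (the real GEF3D poms were derived from the GMF Tooling ones), so treat this as a sketch, not the actual build file:

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <groupId>org.example.gef3d</groupId> <!-- hypothetical id -->
  <artifactId>parent</artifactId>
  <version>0.1.0-SNAPSHOT</version>
  <packaging>pom</packaging>

  <properties>
    <tycho-version>0.13.0</tycho-version> <!-- assumed version -->
  </properties>

  <!-- a p2 repository used as a Maven repository -->
  <repositories>
    <repository>
      <id>Galileo</id>
      <layout>p2</layout>
      <url>http://download.eclipse.org/releases/galileo</url>
    </repository>
  </repositories>

  <build>
    <plugins>
      <!-- enables the Tycho packaging types such as eclipse-plugin -->
      <plugin>
        <groupId>org.eclipse.tycho</groupId>
        <artifactId>tycho-maven-plugin</artifactId>
        <version>${tycho-version}</version>
        <extensions>true</extensions>
      </plugin>
    </plugins>
  </build>
</project>
```

The individual plugin and feature modules then only declare the parent and their Tycho packaging type; all dependency resolution happens against the p2 repository.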
Unfortunately, LWJGL (the OpenGL wrapper library used by GEF3D) does not provide a p2 repository, but an old-style update site instead. That is, its update site provides only the site.xml file, and no p2 metadata. As it happens, I'm the guy maintaining the LWJGL update site build script. It is an Ant-based build, added to the overall LWJGL build system. Since LWJGL does not use Maven, and no Eclipse at all, I could not rely on Tycho or the p2 publisher to build the p2 metadata. In order to keep the overhead for the LWJGL project low, I wrote my own Ant task creating the missing p2 metadata for an old-style update site. If you ever need something like this, the code of this Ant task is available from the LWJGL SVN -- it is a plain Java Ant task without any Eclipse dependencies. It can also be used standalone in order to create the content.xml/jar and artifact.xml/jar from a bunch of plugins, features, and a site.xml. At the moment, the official LWJGL update site is not updated yet, and I'm using a personal mirror for GEF3D. But Brian, who maintains the LWJGL update site, will probably update it soon. Remark: The LWJGL update site provides the LWJGL plugin, which basically bundles LWJGL as an Eclipse plugin. Additionally, source and documentation bundles are provided, as well as a tool bundle with an information view (showing the OpenGL settings of your graphics card) and a library for easily configuring standalone LWJGL apps. And thank you very much, Brian, for maintaining the update site at lwjgl.org!
I also had to fight with Maven and Tycho to get the source and documentation bundles built (it seems as if some tiny things changed when Tycho moved to Eclipse, so you have to compare the settings in the documentation with actual poms). Thanks to Chris' Minerva project, and the GMF Tooling project, I could solve these issues. The Minerva project also demonstrates how to configure tests (simple JUnit tests, and plugin tests with SWTBot) -- and it was easy to configure this for GEF3D as well. Although I'm still curious about Buckminster, I was really surprised how well Maven/Tycho works. And since it is already working, I won't switch to Buckminster. However, I could imagine that if you have special requirements, it would be easier to configure them with Buckminster. I'm currently tutoring a student setting up a Buckminster build system for a research project -- I'm looking forward to comparing the results.
Hudson
Eventually, I had to set up a Hudson job for the GEF3D build. Miles had already prepared that job, and I only had to configure GEF3D's Git repository, and Maven. I tried this first on a locally installed Hudson ("installed" sounds like a lot of work; actually it is only downloading a war and starting Hudson via java -jar hudson.war). Again, setting up a Hudson job for a Maven-based build system is really easy. All you have to do is specify your code repository (Git in my case) and the parameters passed to Maven (which usually is "clean install"). That's it. Well, at the moment I have some problems building the Javadoc API reference, as the Javadoc at hudson.eclipse.org apparently behaves a little bit differently from the Javadoc on my local machine. But at the moment I can ignore that problem, and I'm sure this can be solved soon.
Summary
I was really surprised at how easy it was to migrate from SVN to Git, to set up a build system with Maven/Tycho, and to configure the job with Hudson. As a matter of fact, it was so easy that I will probably use Git, Maven/Tycho, and Hudson for new projects right from the start (I know, that's what all the agile guys tell you to do... but I didn't dare to actually do it). I was particularly surprised at how well Maven works with Eclipse thanks to Tycho -- the Tycho team did a really great job here! According to Leonard, there's a crack in everything... and I'm a little bit nervous about configuring special requirements with Maven, such as integrating code transformation tools. I've found some blog posts about getting Xtext/Xtend to work with Maven -- seems as if there are reasons to be nervous... but that's how the light gets in :-D
Sunday, August 21, 2011
Java To OmniGraffle
When I have to create diagrams for documenting some Java code, I used to manually draw a UML-like class diagram with OmniGraffle. This is an error-prone process, and a boring one as well. So, I tried to find a better solution. Since I didn't find any existing tool, I wrote a small Eclipse plugin myself. It automatically generates OmniGraffle class diagrams from existing Java code.
Its usage is very simple: Open or create a drawing in OmniGraffle. Then switch back to Eclipse and select "Create OmniGraffle Diagram" from the context menu of a package in the package explorer, as shown in Figure 1. Configure the output, as shown in Figure 2. The plugin will scan the package and add a class diagram of this package to the frontmost drawing opened in OmniGraffle. Figure 3 shows the result created by the plugin without any manual changes. It is a visualization of the package "ReverseLookup" of GEF3D. The output can be configured in several ways:
- Getter and setters can be omitted
- Methods implementing or overriding methods of interfaces or classes already shown in the diagram can be omitted as well
- In order to better see relations between classes, you can force all associations to be drawn, even if they would be filtered out by the scope filter.
Tip: In order to manually change the diagram, you may want to have a look at my collection of UML shapes at Graffletopia.
You can install the plugin via the update site:
In the preferences, you can set the default configuration settings and define the name of your OmniGraffle installation (however, the plugin tries to find the latest installed version automatically).
Last but not least: Of course, this plugin is only available on Mac OS X, since OmniGraffle is a native Mac application. The communication between Eclipse and OmniGraffle is done via AppleScript, which is very easy thanks to Peter Friese's blog post.
(At Stack Overflow, someone estimated that a tool for creating OmniGraffle diagrams from Eclipse UML2-based models would require 18 months of development effort. Well, I needed less than 18 hours. But I only convert Java packages to class diagrams... ;-) ).
Update 2011-11-01:
- Besides packages, selected types and subpackages can be visualized.
- The context of visualized types can be visualized as well. That is, types on which the selected types depend, such as super classes, can be rendered in addition to the selected classes. These context types are rendered in gray.
- The default package is now handled as well (see comment by mathpup).
Wednesday, June 29, 2011
It's full of classes!
When I presented GEF3D in the past, people often asked me whether it scales, that is, whether a large number of items can be displayed. Well, the following screencast, inspired by Kubrick's great movie, shows a flight through the JDK. That is, each of the 1,000 packages contained in the JDK is visualized as a plane in 3D space. On each plane, the classes are displayed---in total, more than 20,000 classes and interfaces are shown that way. Since the whole demo is more or less a performance test, the classes are not really laid out yet. Also, only intra-package generalizations and implementations are shown so far.
The flight is sometimes a little bit bumpy. However, flying through 20,000 elements is more or less the worst case. Usually, the camera is moved in a specific area, and only sometimes a tracking shot may be used to "fly" to the next interesting area. As you will notice at the very beginning of the video, the camera is moving quite smoothly. Well, there is still room for improvement ;-)
Note that the video does not only demonstrate the overall performance of GEF3D, but also some of its features:
- the whole flight through the package tube is a single GEF3D tracking shot
- note the high quality font rendering
- level-of-detail (LOD) techniques are implemented in two ways:
- fonts are either rendered as texture or vector font, depending on the distance of the text to the camera
- packages are painted empty, with only the name of the package, or with their content, depending on the distance to the camera. This kind of LOD technique is not part of GEF3D yet, but it can easily be added.
- actually, you see 1,000 GEF editors, combined into a single 3D multi editor
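The second LOD strategy mentioned above (painting packages empty, with only their name, or with full content, depending on the camera distance) can be sketched in plain Java as a simple threshold function. The class, method, and threshold values below are made up for illustration and are not GEF3D API:

```java
// Hypothetical sketch of a distance-based level-of-detail policy:
// the further away a package plane is, the less detail is painted.
enum Detail { EMPTY, NAME_ONLY, FULL_CONTENT }

class LodPolicy {

    // assumed thresholds, in arbitrary world units
    static final float FAR = 4000f;
    static final float MID = 1500f;

    static Detail detailFor(float distanceToCamera) {
        if (distanceToCamera > FAR) return Detail.EMPTY;
        if (distanceToCamera > MID) return Detail.NAME_ONLY;
        return Detail.FULL_CONTENT;
    }
}
```

A renderer would then call such a policy per figure each frame and pick the cheap representation for distant planes, which is what keeps a scene with 20,000 elements responsive.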
Thursday, June 9, 2011
When your MWE2 workflow is not working...
Problems instantiating module
- Error message:
- Error message in console:
1 [main] ERROR mf.mwe2.launch.runtime.Mwe2Launcher - Problems instantiating module ... ... Caused by: org.eclipse.emf.mwe.core.ConfigurationException: The platformUri location '......' does not exist
- Possible fix:
- Fix projectName in the MWE2 file. Ensure that the line var projectName = ".." matches the actual project name; this line is present in Xtext-related MWE2 files.
Couldn't find module with name
- Error message:
- Error message in console:
ERROR mf.mwe2.launch.runtime.Mwe2Launcher - Couldn't find module with name ...
- Possible fix:
- Create missing src-gen folder.
Workflow definition is ignored
- Error message:
- None. However, the selected workflow is completely ignored. It seems as if another workflow is executed.
- Possible fix:
- Ensure the module name of the workflow, that is, the first line in the MWE2 file (module ..), is unique.
Couldn't resolve reference to JvmType 'Workflow'.
- Error message:
- Error in MWE2 workflow file:
Couldn't resolve reference to JvmType 'Workflow'.
When you try to run the workflow, the following message appears:
Please put bundle 'org.eclipse.mwe2.launch' on your project's classpath.
- Possible fix:
- Ensure Plug-in Dependencies are correctly added to classpath.
- If your project is not an OSGi/Plug-in project, this can be fixed by converting the (Java) project to a Plug-in project.
- As stated in the dialog, ensure that 'org.eclipse.mwe2.launch' is listed in the plug-in dependencies.
Weird errors when generating the parser etc.
- Error message:
- When running the Xtext MWE2 workflow to generate the code from your grammar, weird errors occur indicating problems in your grammar.
- Possible fix:
- Increase memory in MWE2 runtime configuration.
Disclaimer: This is more of a personal note, and some problems may have been fixed in the meantime. Feel free to tell me if I got something wrong here :-)
Monday, March 14, 2011
Implement toString with Xtext's Serializer
The default toString method generated by EMF produces output like this:
my.dsl.impl.SomeElement@67cee792 (attr1: SomeValue)
Well, this is not too bad. However, it looks completely different from my DSL syntax, which may look like this:
SomeValue { The content; }
Especially for debugging and logging, I prefer that DSL-like output. Since Xtext not only generates a parser for reading such a text, but also a serializer for creating the text from a model, I was wondering whether that mechanism could be used for the toString method as well. (Actually, Henrik Lindberg pointed out the serializer class -- thank you, Henrik!)
In the following, I describe how to do that. Actually, this is a little bit tricky, and it will cover several aspects of Xtext and the generation process:
- use the generated serializer for formatting a single model element
- tweak the generation process in order to add a new method
- define the body of the newly added method
We will do that by adding a post processor Xtend file, which adds a new operation to the DSL model elements. The body of the operation is then added using ecore annotations. But first, we will write a static helper class implementing the toString method using the serializer.
Use the serializer
Xtext provides a serializer class, which is usually used for writing a model to an Xtext resource. The Serializer class (in org.eclipse.xtext.parsetree.reconstr) provides a method serialize(EObject obj), which returns a String---this is exactly what we need. This class requires a parse tree constructor, a formatter, and a concrete syntax validator. Thanks to Google Guice, we do not have to bother about these things: Xtext generates everything required to create a nicely configured serializer for us. All we need is the Guice injector for creating the serializer:

Injector injector = Guice.createInjector(new my.dsl.MyDslRuntimeModule());
Serializer serializer = injector.getInstance(Serializer.class);
Now we could simply call the serialize method for a model element (which is to be an element of the DSL):
String s = serializer.serialize(eobj);
Since this may throw an exception (when the eobj cannot be successfully serialized, e.g., due to missing values), we encapsulate this call in a try-catch block. Also, we create a helper class providing a static method, and we use a static instance of the serializer.
Since this helper class is only to be used by the toString methods in our generated implementation, we put it into the same package.
package my.dsl.impl;

import org.eclipse.emf.ecore.EObject;
import org.eclipse.xtext.parsetree.reconstr.Serializer;

import com.google.inject.Guice;

public class ToString {

    private static Serializer SERIALIZER = null;

    private static Serializer getSerializer() {
        if (SERIALIZER == null) { // lazy creation
            SERIALIZER = Guice.createInjector(new my.dsl.MyDslRuntimeModule())
                .getInstance(Serializer.class);
        }
        return SERIALIZER;
    }

    public static String valueOf(EObject eobj) {
        if (eobj == null) {
            return "null";
        }
        try {
            return getSerializer().serialize(eobj);
        } catch (Exception ex) {
            // fall back:
            return eobj.getClass().getSimpleName() + '@' + eobj.hashCode();
        }
    }
}
Post processing
Now we have to implement the toString() method of our model classes accordingly. That is, instead of the default EMF toString method, we want to call our static helper method for producing the String. A generic solution, which can be applied not only for adding the toString method but for all kinds of operations, is to use a post processor extension (written in Xtend) to add new operations to the generated Ecore model. The overall mechanism is described in the Xtext documentation. We have to write an Xtend extension matching a specific naming convention: <name of DSL>PostProcessor.ext. In our example, that would be MyDslPostProcessor.
The easy thing is to add a new operation to each classifier:
import ecore;
import xtext;

process(GeneratedMetamodel this) :
    this.ePackage.eClassifiers.addToStringOperation();

create EOperation addToStringOperation(EClassifier c):
    ... define operation ... ->
    ((EClass)c).eOperations.add(this);
For defining the operation, we need:
- the return type of the operation
- the body of the operation
The return type is an EString (which will result in a simple Java String). In EMF, we have to set the type via EOperation.setEType(EClassifier). That is, we need the classifier of EString. With Java, this would be no problem: EcorePackage.eINSTANCE.getEString().
Unfortunately, we cannot directly access static fields from Xtend. At least, I do not know how that works. Fortunately, we can substitute EcorePackage.eINSTANCE by calling a static method of EcorePackageImpl. This static method can then be defined as a JAVA extension in Xtend:
EPackage ecorePackage(): JAVA org.eclipse.emf.ecore.impl.EcorePackageImpl.init();
Note that we return an EPackage instead of the EcorePackage. I assume this is necessary because we use the EMF metamodel contributor, and EcorePackage is not available then. We can now set the EString classifier as the return type of the operation: setEType(ecorePackage().getEClassifier("EString"))
Now, we need the body of the operation. Ecore does not directly support the definition of a body, that is, there is no field in EOperation for setting the body. Fortunately, we can exploit annotations for defining the body. The default EMF generator templates look for annotations marked with the source value "http://www.eclipse.org/emf/2002/GenModel". The key of the annotation must be "body", and the value of the annotation is then used as the body of the operation. In the body, we simply call our static helper method for producing the DSL-like string representation.
The complete post processor extensions looks as follows:
import ecore;
import xtext;
process(GeneratedMetamodel this) :
this.ePackage.eClassifiers.addToStringOperation();
EPackage ecorePackage():
JAVA org.eclipse.emf.ecore.impl.EcorePackageImpl.init();
create EOperation addToStringOperation(EClassifier c):
setName("toString") ->
setEType(ecorePackage().getEClassifier("EString")) ->
eAnnotations.add(addBodyAnnotation(
'if (eIsProxy()) return super.toString(); return ToString.valueOf(this);')) ->
((EClass)c).eOperations.add(this);
create EAnnotation addBodyAnnotation(EOperation op, String strBody):
setSource("http://www.eclipse.org/emf/2002/GenModel") ->
createBody(strBody) ->
op.eAnnotations.add(this);
create EStringToStringMapEntry createBody(EAnnotation annotation, String strBody):
setKey("body")->
setValue(strBody) ->
annotation.details.add(this);
If you (re-)run the GenerateMyDSL workflow, the EMF toString() implementations are replaced by our new version. You can test it in a simple standalone application (do not forget to call doSetup in order to configure the injector):
public static void main(String[] args) {
    MyDslStandaloneSetup.doSetup();
    MyElement e = MyDslFactory.eINSTANCE.createElement();
    e.setAttr1("Test");
    e.setAttr2("Type");
    System.out.println(e.toString());
}
Closing remarks
You probably do not want to really replace all toString methods with the serializer output, as this would create rather long output in case of container elements. In that case, you can add the new operation only to selected classifiers, or use the (generated) Switch-class to further customize the output.
Although the solution looks straightforward, it took me some time to solve some hidden problems and work around others:
- How to create the serializer using the injector -- and how to create the injector in the first place
- How to access a static Java method from Xtend without too much overhead. Would be great if static fields could be accessed from Xtend directly.
- How to use the post processor with the JavaBeans metamodel contributor: when I switched to the JavaBeans metamodel, my extension did not get called anymore.
- I'm still wondering where "EStringToStringMapEntry" is defined. I "copied" that piece of code from a snippet I wrote a couple of months ago, and I have forgotten how I found that solution in the first place.
- Sorry, but I have to say it: The Xtend version 1.0.1 editor is crap (e.g., error markers of solved problems do not always get removed). But I've heard there should be a better one available in version 2 ;-)
Friday, March 4, 2011
Traverse DAGs with Xtend
1) unsorted search traversing directed connections
2) depth-first search traversing directed connections
3) breadth-first search traversing directed connections
4) traverse a directed connection in counter-direction
5) unsorted search, traversing directed connections in counter-direction
6) depth-first search traversing directed connections in counter-direction
7) breadth-first search traversing directed connections in counter-direction
Example: a type hierarchy.
For the examples, I use a "real" world example: a type hierarchy. Our (very simple) model looks like that (in pseudo Java code):

Type {
    String name;
    Collection<Type> supers;
}

Let's also assume a container in which all types are stored, e.g.

Collection<Type> allTypes;

In OCL, you can even retrieve all types by a simple query; however, we often have some kind of container defined in our model anyway (and we use Xtend here ;-) ).
For testing the code, I have defined a concrete example. The following type hierarchy visualizes some type instances, the super types attributes are drawn as connections:

B, C, and D are direct subtypes of A. E is a subtype of B. F is a subtype of B and C (we allow multiple inheritance ;-) ), and so on.

Preliminary remarks:
- I haven't looked into any algorithm book to find the best algorithm (whatever "best" means). The solutions below are simply the ones I implemented when I needed them, without much thinking about the algorithms. If you know a better solution, please let me know!
- I'm using create extensions for simplicity and performance reasons here. If you have small (or mid-sized) models, I'd assume this would be ok.
- Although Xtend is quite similar to OCL, the algorithms will probably not work with OCL, as OCL is side-effect free (and I modify collections in the algorithms, which will not work that way with OCL).
Super type queries (or: traverse directed connection)
Since we store the supertypes in the model, the easiest query is to retrieve the direct supertypes of a type. E.g., A.supers returns an empty list; F.supers returns B, C. Now, let's calculate the transitive closure of super types. This is very easy with Xtend:
1) Transitive closure of super types, unsorted:
create Set[Type] superTypesTransitive(Type type):
    this.addAll(type.supers) ->
    this.addAll(type.supers.superTypesTransitive());
We use a set in order to avoid duplicates, which would be added in case of multiple inheritance. This solution is straightforward. If you are not used to OCL syntax, you may be a little bit confused by the implicit syntax for the collect operation: type.supers.superTypesTransitive() returns the result of superTypesTransitive() for all elements of type.supers.
J.superTypesTransitive() will return F, B, C, A, just as expected. (J.superTypesTransitive() is just the more OO-like way of writing superTypesTransitive(J). Frankly, I don't know if it is more readable, but it definitely looks cooler ;-)). The returned collection is not ordered, and often enough this is sufficient. However, sometimes we need the collection to be ordered. There are two often-used strategies for traversing a tree or DAG: depth-first search and breadth-first search. We will implement both.
2) Transitive closure of super types with depth first search order:
create List[Type] superTypesTransitiveDFS(Type type):
    type.supers.forAll(s|
        this.add(s) ->
        this.addAll(s.superTypesTransitiveDFS().reject(e|this.contains(e)))
        != null);
As we need a sorted collection, we have to use a list instead of a set. However, we now have to reject duplicates in our code. Since Xtend does not provide a loop statement, I have used the collection operation forAll here. Since the forAll operation expects a boolean expression inside, the !=null part "casts" our chained expression into a boolean expression. If you know of a nicer solution, please let me know.
J.superTypesTransitiveDFS() will return F, B, A, C.
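The same depth-first traversal can be sketched in plain Java (again with the minimal, hypothetical Type class): each direct supertype is added first, then its own supertypes are visited before moving on to the next sibling.

```java
import java.util.*;

class Type {
    final String name;
    final List<Type> supers = new ArrayList<Type>();
    Type(String name) { this.name = name; }
}

class Dfs {
    // Depth-first transitive closure of supertypes.
    static List<Type> superTypesTransitiveDFS(Type type) {
        List<Type> result = new ArrayList<Type>();
        visit(type, result);
        return result;
    }

    private static void visit(Type type, List<Type> result) {
        for (Type s : type.supers) {
            if (!result.contains(s)) { // reject duplicates
                result.add(s);
                visit(s, result);      // descend before the next sibling
            }
        }
    }
}
```

With the example hierarchy (J below F, F below B and C, B and C below A), this yields F, B, A, C, matching the Xtend result above.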
3) Transitive closure of super types with breadth first search order:
create List[Type] superTypesTransitiveBFS(Type type):
    let todo = new java::util::ArrayList :
    todo.addAll(type.supers) ->
    bfsSuper(this, todo);

private Void bfsSuper(List[Type] result, List[Type] todo):
    if todo.isEmpty then
        Void
    else
        result.add(todo.first()) ->
        todo.addAll(todo.first().supers.reject(e|todo.contains(e) || result.contains(e))) ->
        todo.remove(todo.first()) ->
        bfsSuper(result, todo);
The breadth-first search is a little bit more complicated, as we need a helper list and a helper method. Note that we cannot create an Xtend type of List here; instead we have to use the Java ArrayList, which is available when we use the JavaBeans metamodel in the project's Xpand/Xtend settings.
J.superTypesTransitiveBFS() will return F, B, C, A.
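A plain Java version of the breadth-first variant looks like this; the explicit todo queue corresponds to the helper list in the Xtend solution (Type is again the hypothetical stand-in class):

```java
import java.util.*;

class Type {
    final String name;
    final List<Type> supers = new ArrayList<Type>();
    Type(String name) { this.name = name; }
}

class Bfs {
    // Breadth-first transitive closure of supertypes: direct
    // supertypes first, then their supertypes, level by level.
    static List<Type> superTypesTransitiveBFS(Type type) {
        List<Type> result = new ArrayList<Type>();
        Deque<Type> todo = new ArrayDeque<Type>(type.supers);
        while (!todo.isEmpty()) {
            Type t = todo.removeFirst();
            if (result.contains(t)) continue; // reject duplicates
            result.add(t);
            for (Type s : t.supers) {
                if (!result.contains(s) && !todo.contains(s)) {
                    todo.addLast(s);
                }
            }
        }
        return result;
    }
}
```

For the example hierarchy this yields F, B, C, A: both direct supertypes of F are emitted before their common supertype A.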
Sub type queries (or: traverse connections in counter direction)
So, we have three different queries for super types. Now we want to write the very same queries for sub types. Unfortunately, the sub type information is not directly stored in the model but must be derived instead. First, we write a simple query for the direct sub types:
4) Computes direct sub types (navigate in counter-direction):
create Set[Type] subTypes(Type type, Collection[Type] allTypes):
    this.addAll(allTypes.select(k|k.supers.contains(type)));
This solution navigates the supers references stored in the model in the counter direction. Frankly, I needed some time to figure it out (as I'm more an imperative and OO guy ;-) ), and I was really surprised how short (one line) it is. If you have to write that in Java, you will need a lot more lines. So, if you ever wondered why to use a special transformation language -- this is at least one argument.
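As a rough comparison, a plain Java version of the subTypes query could look like this (hypothetical Type class as before); even this small query needs an explicit loop and an explicit result collection:

```java
import java.util.*;

class Type {
    final String name;
    final Set<Type> supers = new HashSet<Type>();
    Type(String name) { this.name = name; }
}

class SubTypeQuery {
    // Direct subtypes of 'type': every known type that lists
    // 'type' among its direct supertypes.
    static Set<Type> subTypes(Type type, Collection<Type> allTypes) {
        Set<Type> result = new HashSet<Type>();
        for (Type k : allTypes) {
            if (k.supers.contains(type)) {
                result.add(k);
            }
        }
        return result;
    }
}
```

Note that, just as in the Xtend version, a container with all known types has to be passed in, since the counter direction of the supers reference is not stored in the model.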
Now, let's compute the transitive closure:
5) Transitive closure of sub types, unsorted:
create Set[Type] subTypesTransitive(Type type, Collection[Type] allTypes):
    this.addAll(allTypes.select(k|k.superTypesTransitive().contains(type)));
Note that this query does not rely on the subTypes extension, but only on the super type queries. Since we use a Set, we do not have to take care of duplicates. Again: a Java solution would be much longer.

A.subTypesTransitive(allTypes) will return B, C, E, G, H, I, J, D, F.
Now, let's implement the depth first search for sub types:
6) Transitive closure of sub types with depth first search order:
create List[Type] subTypesTransitiveDFS(Type type, Collection[Type] allTypes):
    type.subTypes(allTypes).forAll(s|
        this.add(s) ->
        this.addAll(s.subTypesTransitiveDFS(allTypes).reject(e|this.contains(e)))
        != null);
Since we have implemented an extension for subTypes, the solution is quite similar to the super type depth-first search algorithm.
A.subTypesTransitiveDFS(allTypes) will return B, E, F, I, J, C, D, G, H. Note that we cannot simply use the unsorted solution with a list instead of a set, as this would not result in a depth-first search order (in our case, it would return something like B, C, D, E, F, I, J, G, H). The same is true for superTypesTransitiveDFS, by the way. Last but not least, the breadth-first search for sub types:
7) Transitive closure of sub types with breadth first search order:
create List[Type] subTypesTransitiveBFS(Type type, Collection[Type] allTypes):
    let todo = new java::util::ArrayList :
    todo.addAll(type.subTypes(allTypes)) ->
    bfsSub(this, todo, allTypes);

private Void bfsSub(List[Type] result, List[Type] todo, Collection[Type] allTypes):
    if todo.isEmpty then
        Void
    else
        result.add(todo.first()) ->
        todo.addAll(todo.first().subTypes(allTypes).reject(e|todo.contains(e) || result.contains(e))) ->
        todo.remove(todo.first()) ->
        bfsSub(result, todo, allTypes);
This algorithm also resembles the one for super types. By the way: I didn't find the let expression explained in the Xtend documentation (however, some examples use it). Did I miss it, or is there really more OCL in Xtend than the docs tell?

A.subTypesTransitiveBFS(allTypes) will return B, C, D, E, F, G, H, I, J, which is easily validated, as this is the order of the types shown in the little figure.