Thursday, December 22, 2011

GEF3D goes Git, Maven/Tycho, and Hudson

or
Standing on the shoulders of giants

Abstract: Setting up a continuous integration build based on Git, Maven/Tycho, and Hudson is surprisingly easy. I assume that this is no real news for most readers. However, I was very skeptical about that, especially because of all the project dependencies. So, this posting is meant for readers hesitating to set up an automatic build system because they think it would be too complicated, just as I thought until... well, until now.

One of the things on my todo list for 2011 was to set up a continuous integration build for GEF3D. I did set up such a system several years ago, using some XML files describing a module and its dependencies, and an XSL transformation generating Ant scripts based on these module descriptions. In order to run nightly builds, cron was used -- yeah, good old times. I remember trying Maven back then, and I was so disappointed that I wrote my own tools. Due to this experience, I was quite wary of setting up a build system for GEF3D.

In the beginning, Miles helped me to set up a Buckminster-based build. He had some experience with this tool because his build system for the AMP project is based on Buckminster as well. Since Miles had switched to Git, he suggested that the GEF3D project switch as well. This would simplify the setup, as we would only have to deal with one version control system. I filed a bug report for moving GEF3D from SVN to Git. Since I had read in the Git migration guide that moving to Git is also a good opportunity to refactor the structure of the project, we decided to introduce folders separating plugins, features, examples, and so on. Somehow my report got forgotten... As it was very complicated to configure the Buckminster build, and due to other things, I didn't push it either.

Migrate to Git

So, as the year is reaching its end, I decided to give it another try -- despite all these unknown tools such as Buckminster, Git, and Hudson. I started with Git. The first giants I have to give kudos to are the Eclipse Git guys, and Stefan, who wrote a nice blog posting about how he moved CDO to Git. The basic idea of Stefan's approach is really simple:
  1. Create a Git repository locally and use svn2git to migrate the code. Note that there exist different svn2git tools. I used https://github.com/nirvdrum/svn2git, while Stefan used https://github.com/schwern/svn2git.git! The import is a single command, in my case
    svn2git https://dev.eclipse.org/svnroot/technology/org.eclipse.gef3d --authors users.txt
    users.txt is a list of the committers, with username = first name surname <email>. Very simple, indeed. Since GEF3D is not that big, the migration took only 10 minutes, and the created Git repository has a size of about 9 MB.
  2. Then I refactored the project structure, simply done via the command line: git mv is your friend here. (Kristian helped me with some small problems, as I'm a Git rookie as well ;-) ).
  3. Commit all changes to the local Git repository, pack the ".git" repository, upload it to developer.eclipse.org and let the webmaster unpack it at git.eclipse.org.
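For reference, a users.txt mapping file is plain text with one committer per line; the names and addresses below are made up for illustration:

```
jdoe = John Doe <john.doe@example.org>
msmith = Mary Smith <mary.smith@example.org>
```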
Stefan had to do much more work, as he had to create four repositories by extracting things from the existing CDO project, which is much bigger than the GEF3D project, of course. You will find the GEF3D Git repository at git://git.eclipse.org/gitroot/gef3d/org.eclipse.gef3d.git

Maven/Tycho

During my research on some Git questions, I stumbled over the GMF tooling project (well, I have known that project for a long time, but I didn't care about its "internal" structure). It uses the same project structure as the GEF3D team had decided on for GEF3D, and it uses Tycho. Although I had my (bad) experiences with Maven, I gave Tycho a try. And again, I was very much surprised at how easy it is to set up a complete build system with Maven/Tycho. I used the GMF tooling poms as a template, and after a couple of minutes (not hours!) I had a build system which could build most parts of GEF3D. Kudos to the Tycho team, another giant! Besides some special Eclipse packaging things, e.g., telling Maven how to handle an Eclipse plugin (and probably much more hidden under the hood), one really nice feature is the ability to use p2 repositories as Maven repositories. GEF3D has dependencies on GEF, GMF, and EMF. To resolve these dependencies, I only had to define a single repository:
<repository>
    <id>Galileo</id>
    <layout>p2</layout>
    <url>http://download.eclipse.org/releases/galileo</url>
</repository>  
This is so cool!
Unfortunately, LWJGL (the OpenGL wrapper library used by GEF3D) does not provide a p2 repository, but an old-style update site instead. That is, its update site provides only the site.xml file, and no p2 metadata. As it happens, I'm the guy maintaining the LWJGL update site build script. It is an Ant-based build, added to the overall LWJGL build system. Since LWJGL does not use Maven, and no Eclipse at all, I could not rely on Tycho or the p2 publisher to build the p2 metadata. In order to keep the overhead for the LWJGL project low, I wrote my own Ant task creating the missing p2 metadata for an old-style update site. If you ever need something like this, the code of this Ant task is available from the LWJGL SVN -- it is a plain Java Ant task without any Eclipse dependencies. It can also be used standalone in order to create the content.xml/jar and artifacts.xml/jar from a bunch of plugins, features, and a site.xml. At the moment, the official LWJGL update site has not been updated yet, and I'm using a personal mirror for GEF3D. But Brian, who maintains the LWJGL update site, will probably update it soon.

Remark: The LWJGL update site provides the LWJGL plugin, which basically bundles LWJGL as an Eclipse plugin. Additionally, source and documentation bundles are provided, as well as a tool bundle with an information view (showing the OpenGL settings of your graphics card) and a library for easily configuring standalone LWJGL apps. And thank you very much, Brian, for maintaining the update site at lwjgl.org!

I also had to fight with Maven and Tycho to get the source and documentation bundles built (it seems as if some tiny things were changed when Tycho moved to Eclipse, so you have to compare the settings in the documentation with actual poms). Thanks to Chris' Minerva project and the GMF tooling project, I could solve these issues. The Minerva project also demonstrates how to configure tests (simple JUnit tests, and plugin tests with SWTBot) -- and it was easy to configure this for GEF3D as well. Although I'm still curious about Buckminster, I was really surprised at how well Maven/Tycho works. And since it is already working, I won't switch to Buckminster. However, I could imagine that if you have special requirements, it would be easier to configure them with Buckminster. I'm currently tutoring a student setting up a Buckminster build system for a research project -- I'm looking forward to comparing the results.

Hudson

Eventually, I had to set up a Hudson job for the GEF3D build. Miles had already prepared that job, and I only had to configure GEF3D's Git repository, and Maven. I tried this first on a locally installed Hudson ("installed" sounds like a lot of work; actually, it is only downloading a war and starting Hudson via java -jar hudson.war). Again, setting up a Hudson job for a Maven-based build system is really easy. All you have to do is specify your code repository (Git in my case) and the parameters passed to Maven (which usually are "clean install"). That's it. Well, at the moment I have some problems building the Javadoc API reference, as the Javadoc at hudson.eclipse.org apparently behaves a little bit differently from the Javadoc on my local machine. But at the moment I can ignore that problem, and I'm sure it can be solved soon.

Summary

I was really surprised at how easy it was to migrate from SVN to Git, to set up a build system with Maven/Tycho, and to configure the job with Hudson. As a matter of fact, it was so easy that I will probably use Git, Maven/Tycho, and Hudson for new projects right from the start (I know, that's what all the agile guys tell you to do... but I didn't dare to actually do it). I was particularly surprised at how well Maven works with Eclipse thanks to Tycho -- the Tycho team did a really great job here! According to Leonard, there's a crack in everything... and I'm a little bit nervous about configuring special requirements with Maven, such as integrating code transformation tools. I've found some blog posts about getting Xtext/Xtend to work with Maven -- seems as if there are reasons to be nervous... but that's how the light gets in :-D

Sunday, August 21, 2011

Java To OmniGraffle

If you ask developers using Mac OS X about their favorite diagramming tool, you will often get the same answer: OmniGraffle. I also like OmniGraffle very much, and I'm still wondering what makes this tool so much better than all the GEF-based editors.

When I had to create diagrams for documenting some Java code, I used to manually draw a UML-like class diagram with OmniGraffle. This is an error-prone process, and a boring one as well. So, I tried to find a better solution. Since I didn't find any existing tool, I wrote a small Eclipse plugin myself. It automatically generates OmniGraffle class diagrams from existing Java code.

Its usage is very simple: Open or create a drawing in OmniGraffle. Then switch back to Eclipse and select "Create OmniGraffle Diagram" from the context menu of a package in the package explorer, as shown in Figure 1. Configure the output, as shown in Figure 2. The plugin will scan the package and add a class diagram of this package to the frontmost drawing opened with OmniGraffle. Figure 3 shows the result created by the plugin without any manual changes. It is a visualization of the package "ReverseLookup" of GEF3D.
Fig. 1: Context menu entry

Fig. 2: Configuration Dialog
Fig 3: Created class diagram
At this moment, the plugin can only create class diagrams for a single package. Update: The plugin can create class diagrams for selected types, packages, and sub-packages. Besides, the context of the selected classes, that is, types on which the selected types depend, can be visualized as well. Attributes are drawn as associations if possible; parameterized collection types are recognized and replaced by 0..* associations. You can configure the output with some switches. Besides filtering members based on their scope, I have added some "filters" I often use when I draw diagrams manually:
  • Getters and setters can be omitted
  • Methods implementing or overriding methods of interfaces or classes already shown in the diagram can be omitted as well
  • In order to better see the relations between classes, you can force all associations to be drawn, even if they would be filtered out by the scope filter.
The newly created shapes are initially drawn using OmniGraffle's hierarchical layout algorithm.

Tip: In order to manually change the diagram, you may want to have a look at my collection of UML shapes at Graffletopia.

You can install the plugin via the update site:

http://jevopi.de/updatesite/de.jevopi.JavaToOmniGraffle
This is an alpha version, which will expire in 2012. If you find this plugin useful, flattr me! Or leave me a comment below. Depending on the feedback, I will continue developing the plugin -- or not ;-)

In the preferences, you can set the default configuration settings and define the name of your OmniGraffle installation (however, the plugin tries to find the latest installed version automatically).

Last but not least: Of course, this plugin is only available on Mac OS X, since OmniGraffle is a native Mac application. The communication between Eclipse and OmniGraffle is done via AppleScript, which is very easy thanks to Peter Friese's blog post.
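The AppleScript bridge can be sketched in plain Java by handing a script to the osascript command-line tool. This is only a minimal illustration of the mechanism, not the plugin's actual code; the application name in the script is an assumption:

```java
import java.util.Arrays;
import java.util.List;

public class AppleScriptBridge {

    // Builds the command line for running an AppleScript one-liner via osascript.
    static List<String> buildCommand(String script) {
        return Arrays.asList("osascript", "-e", script);
    }

    // Executes the script; this only works on Mac OS X, where osascript exists.
    static void run(String script) throws Exception {
        new ProcessBuilder(buildCommand(script)).inheritIO().start().waitFor();
    }

    public static void main(String[] args) {
        // Hypothetical script telling OmniGraffle to come to the front:
        String script = "tell application \"OmniGraffle Professional 5\" to activate";
        System.out.println(buildCommand(script));
    }
}
```

On a Mac, calling run(script) instead of just printing the command would actually activate the application.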

(At Stack Overflow, someone estimated that a tool for creating OmniGraffle diagrams from Eclipse UML2 based models would require 18 months of development effort. Well, I needed less than 18 hours. But I only convert Java packages to class diagrams... ;-) ).

Update 2011-11-01:
  1. Besides packages, selected types and sub packages can be visualized.
  2. The context of visualized types can be visualized. That is, types on which the selected types depend, such as super classes, can be rendered in addition to the selected classes. These context types are rendered in gray.
  3. The default package is now handled as well (see comment by mathpup).

Wednesday, June 29, 2011

It's full of classes!

"The thing's hollow---it goes on forever---and---oh my God!---it's full of stars!" (Arthur C. Clarke: 2001: A Space Odyssey)

When I presented GEF3D in the past, people often asked me if it scales, that is, if a large number of items can be displayed. Well, the following screencast, inspired by Kubrick's great movie, shows a flight through the JDK. That is, each of the 1,000 packages contained in the JDK is visualized as a plane in 3D space. On that plane, the classes are displayed---in total, more than 20,000 classes and interfaces are shown that way. Since the whole demo is more or less a performance test, the classes are not really laid out yet. Also, only intra-package generalizations and implementations are shown so far.



The flight is sometimes a little bit bumpy. However, flying through 20,000 elements is more or less the worst case. Usually, the camera is moved in a specific area, and only sometimes a tracking shot may be used to "fly" to the next interesting area. As you will notice at the very beginning of the video, the camera moves quite smoothly there. Well, there is still room for improvement ;-)

Note that the video does not only demonstrate the overall performance of GEF3D, but also some of its features:
  • the whole flight through the package tube is a single GEF3D tracking shot
  • note the high quality font rendering
  • level-of-detail (LOD) techniques are implemented in two ways:
    • fonts are either rendered as texture or vector font, depending on the distance of the text to the camera
    • packages are painted empty, with only the name of the package, or with their content, depending on the distance to the camera. This kind of LOD technique is not part of GEF3D yet, but it can easily be added.
  • actually, you see 1,000 GEF editors, combined into a single 3D multi editor

Thursday, June 9, 2011

When your MWE2 workflow is not working...

MWE2 is a kind of Ant for model-related tasks. Xtext uses MWE2, and I also use it to run my own Xpand generator templates. It's a nice tiny tool, and one of the nice things is that it runs in its own JVM, so you can easily extend MWE2 with new components which reside in your project. When MWE2 is started, the project settings, i.e., the classpath (including all information from the plugin dependencies), are passed to the workflow. Unfortunately, this may produce problems which are really hard to find, mostly because the error messages do not really tell you where to find the actual problem. Here is a list of some problems I have run into several times (using Eclipse 3.6 and Xtext/Xpand 1.0.1). Note that all the problems mentioned and fixed below may be caused by other reasons, requiring other fixes.

Problems instantiating module

Error message:
Error message in console:
1    [main] ERROR mf.mwe2.launch.runtime.Mwe2Launcher  - Problems instantiating module ...
...
Caused by: org.eclipse.emf.mwe.core.ConfigurationException: The platformUri location '......' does not exist 
Possible fix:
Fix projectName in MWE2 file.
I ran into this problem after renaming a project. Ensure that the project name defined in your MWE2 file
var projectName = ".."
matches the actual project name. This line is present in Xtext-related MWE2 files.

Couldn't find module with name

Error message:
Error message in console:
ERROR mf.mwe2.launch.runtime.Mwe2Launcher  - Couldn't find module with name ...
Possible fix:
Create missing src-gen folder.
I ran into this problem after checking out a project from a code repository. Since we do not add generated code to the repository, the src-gen folder was not added to the repository. Hence, it was not present after checking out the project. However, it is configured in the build.properties. This seems to lead to a problem when the classpath is computed, so that the src folder is not added to the classpath either. Since the MWE2 file resides in the src folder, it is not found and, consequently, the module is not found. I was able to fix this problem by simply creating the src-gen folder. In order not to cause this problem again, I have added the src-gen folder to the repository and ignore only its content.
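A common way to implement the last step (keeping the folder in the repository while ignoring its generated content) is a .gitignore file inside src-gen itself, containing something like:

```
*
!.gitignore
```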

Workflow definition is ignored

Error message:
None. However, the selected workflow is completely ignored. It seems as if another workflow is executed.
Possible fix:
Ensure that the module name of the workflow, that is, the first line in the MWE2 file
module ..
is unique.
This seems to be a typical copy-and-paste error. And indeed, another workflow actually is executed. Although one can run an MWE2 file via "Run as../MWE2 workflow", the launcher does not directly call the actual workflow file. Instead, the name of the module is read, and then the internal representation of this module is executed. If you define two or more workflows with the same module name, only one of these modules is actually present (there seems to be some kind of map from module name to module).
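The shadowing effect can be illustrated with a plain Java map. This is not the launcher's actual code, just a sketch of the presumed registration semantics; the file names are made up:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ModuleRegistry {

    // Registers workflow files by module name, as the launcher seems to do.
    static Map<String, String> register(String[][] modulesAndFiles) {
        Map<String, String> registry = new LinkedHashMap<>();
        for (String[] entry : modulesAndFiles) {
            // put() replaces any earlier file registered under the same module name
            registry.put(entry[0], entry[1]);
        }
        return registry;
    }

    public static void main(String[] args) {
        // Two workflow files, both starting with "module workflow":
        Map<String, String> registry = register(new String[][] {
                { "workflow", "GenerateDslA.mwe2" },
                { "workflow", "GenerateDslB.mwe2" } });
        System.out.println(registry); // {workflow=GenerateDslB.mwe2} -- the first file is shadowed
    }
}
```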

Couldn't resolve reference to JvmType 'Workflow'.

Error message:
Error in MWE2 workflow file:
Couldn't resolve reference to JvmType 'Workflow'.
When you try to run the workflow, the following message appears:
Please put bundle 'org.eclipse.mwe2.launch' on your project's classpath.
Possible fix:
Ensure Plug-in Dependencies are correctly added to classpath.
This error can be caused by at least one of the following problems:
  • your project is not an OSGi/Plug-in project. This can be fixed by converting the (Java) project to a Plug-in project.
  • as printed in the dialog, ensure that 'org.eclipse.mwe2.launch' is listed in the plug-in dependencies
While these are obvious reasons for the error, in my case I had configured the project as a plug-in project and 'org.eclipse.mwe2.launch' was defined in the dependency list. However, it still didn't work. This may be because I had renamed the project, and probably something had gone wrong. I noticed the problem only because the Package Explorer didn't show the entry "Plug-in Dependencies". I assume there may be some corrupt cache entries. I was only able to fix this problem by creating a new plug-in project and moving the content of the broken project into the new one.

Weird errors when generating the parser etc.

Error message:
When running the Xtext MWE2 workflow to generate the code from your grammar, weird errors occur indicating problems in your grammar.
Possible fix:
Increase memory in MWE2 runtime configuration.
I stumbled over this problem twice, and it is really annoying, since it is very hard to find the reason. If the parser generator runs out of memory, it crashes at some arbitrary position and randomly creates error messages. Since it usually happens in the parser generator, the error messages always indicate problems in your grammar, although your grammar may be perfectly ok. So, if you get grammar problems which are not obviously real grammar problems, make sure to set the VM argument "-Xmx1024m" in the MWE2 runtime configuration.

Disclaimer: This is more a personal note, and some problems may be fixed in the meantime. Feel free to tell me if I got something wrong here :-)

Monday, March 14, 2011

Implement toString with Xtext's Serializer

Xtext uses EMF to generate the model API of the abstract syntax tree (or graph) of a DSL. For all implementation classes, the toString() method is generated. For a simple model element, this default implementation returns a string looking similar to this:

my.dsl.impl.SomeElement@67cee792 (attr1: SomeValue)

Well, this is not too bad. However, this looks completely different from my DSL syntax, which may look like this:

SomeValue { 
    The content;
}

Especially for debugging and logging, I prefer that DSL-like output. Since Xtext not only generates a parser for reading such a text, but also a serializer for creating the text from a model, I was wondering if that mechanism could be used for the toString method as well. (Actually, Henrik Lindberg pointed out the serializer class -- thank you, Henrik!)

In the following, I describe how to do that. Actually, this is a little bit tricky, and it will cover several aspects of Xtext and the generation process:
  • use the generated serializer for formatting a single model element
  • tweak the generation process in order to add a new method
  • define the body of the newly added method

We will do that by adding a post processor Xtend file, which adds a new operation to the DSL model elements. The body of the operation is then added using Ecore annotations. But first, we will write a static helper class implementing the toString method using the serializer.

Use the serializer

Xtext provides a serializer class, which is usually used for writing a model to an Xtext resource. The Serializer class (in org.eclipse.xtext.parsetree.reconstr) provides a method serialize(EObject obj), which returns a String---this is exactly what we need. This class requires a parse tree constructor, a formatter, and a concrete syntax validator. Thanks to Google Guice, we do not have to bother about these things. Xtext generates everything required to create a nicely configured serializer for us. What we need is the Guice injector for creating the serializer:

Injector injector = Guice.createInjector(new  my.dsl.MyDslRuntimeModule());
Serializer serializer = injector.getInstance(Serializer.class);

Now we could simply call the serialize method for a model element (which is to be an element of the DSL):
String s = serializer.serialize(eobj);

Since this may throw an exception (when eobj cannot be successfully serialized, e.g., due to missing values), we encapsulate this call in a try-catch block. Also, we create a helper class providing a static method. We also use a static instance of the serializer.
Since this helper class is only to be used by the toString methods in our generated implementation, we put it into the same package.

package my.dsl.impl;

import org.eclipse.emf.ecore.EObject;
import org.eclipse.xtext.parsetree.reconstr.Serializer;
import com.google.inject.Guice;

public class ToString {
 private static Serializer SERIALIZER = null;

 private static Serializer getSerializer() {
  if (SERIALIZER == null) { // lazy creation
   SERIALIZER = Guice.createInjector(new my.dsl.MyDslRuntimeModule())
        .getInstance(Serializer.class);
  }
  return SERIALIZER;
 }

 public static String valueOf(EObject eobj) {
  if (eobj==null) {
   return "null";
  }
  try {
   return getSerializer().serialize(eobj);
  } catch (Exception ex) { // fall back:
   return eobj.getClass().getSimpleName()+'@'+eobj.hashCode();
  }
 }

}

Post processing

Now we have to implement the toString() method of our model classes accordingly. That is, instead of the default EMF toString method, we want to call our static helper method for producing the String.

A generic solution, which can be applied not only for adding the toString method but for all kinds of operations, is to use a post processor extension (written in Xtend) to add new operations to the generated Ecore model. The overall mechanism is described in the Xtext documentation. We have to write an Xtend extension matching a specific naming convention: <name of DSL>PostProcessor.ext. In our example, that would be MyDslPostProcessor.

The easy thing is to add a new operation to each classifier:
import ecore;
import xtext;

process(GeneratedMetamodel this) :
 this.ePackage.eClassifiers.addToStringOperation();

create EOperation addToStringOperation(EClassifier c):
    ... define operation ... ->
 ((EClass)c).eOperations.add(this);

For defining the operation, we need:
  • the return type of the operation
  • the body of the operation

The return type is an EString (which will result in a simple Java String). In EMF, we have to set the type via EOperation.setEType(EClassifier). That is, we need the classifier of EString. With Java, this would be no problem: EcorePackage.eINSTANCE.getEString().
Unfortunately, we cannot directly access static fields from Xtend. At least, I do not know how that works. Fortunately, we can substitute EcorePackage.eINSTANCE by calling a static method of EcorePackageImpl. This static method can then be defined as a JAVA extension in Xtend:

EPackage ecorePackage(): 
 JAVA org.eclipse.emf.ecore.impl.EcorePackageImpl.init();

Note that we return an EPackage instead of the EcorePackage. I assume this is necessary because we use the EMF metamodel contributor, and EcorePackage is not available then. We can now set the EString classifier as the return type of the operation: setEType(ecorePackage().getEClassifier("EString"))

Now, we need the body of the operation. Ecore does not directly support the definition of a body, that is, there is no field in EOperation for setting the body. Fortunately, we can exploit annotations for defining the body. The default EMF generator templates look for annotations marked with the source value "http://www.eclipse.org/emf/2002/GenModel". The key of the annotation must be "body", and the value of the annotation is then used as the body of the operation. In the body, we simply call our static helper method for producing the DSL-like string representation.

The complete post processor extensions looks as follows:
import ecore;
import xtext;

process(GeneratedMetamodel this) :
 this.ePackage.eClassifiers.addToStringOperation();

EPackage ecorePackage(): 
 JAVA org.eclipse.emf.ecore.impl.EcorePackageImpl.init();


create EOperation addToStringOperation(EClassifier c):
 setName("toString") ->
 setEType(ecorePackage().getEClassifier("EString")) ->
 eAnnotations.add(addBodyAnnotation(
  'if (eIsProxy()) return super.toString(); return ToString.valueOf(this);')) ->
 ((EClass)c).eOperations.add(this);

create EAnnotation addBodyAnnotation(EOperation op, String strBody):
 setSource("http://www.eclipse.org/emf/2002/GenModel") ->
 createBody(strBody) ->
 op.eAnnotations.add(this);
 
create EStringToStringMapEntry createBody(EAnnotation annotation, String strBody): 
 setKey("body")->
 setValue(strBody) ->
 annotation.details.add(this);

If you (re-)run the GenerateMyDSL workflow, the EMF toString() implementations are replaced by our new version. You can test it in a simple standalone application (do not forget to call doSetup in order to configure the injector):

public static void main(String[] args) {
 MyDslStandaloneSetup.doSetup();
 MyElement e = MyDslFactory.eINSTANCE.createElement();
 e.setAttr1("Test");
 e.setAttr2("Type");
 System.out.println(e.toString());
}


Closing remarks


You probably do not want to really replace all toString methods with the serializer output, as this would create rather long output in case of container elements. In that case, you can add the new operation only to selected classifiers, or use the (generated) Switch-class to further customize the output.

Although the solution looks straightforward, it took me some time to solve some hidden problems and get around others:
  1. How to create the serializer using the injector -- and how to create the injector in the first place
  2. How to access a static Java method from Xtend without too much overhead. Would be great if static fields could be accessed from Xtend directly.
  3. How to use the post processor with the JavaBeans metamodel contributor. When I switched to the JavaBeans metamodel, my extension didn't get called anymore.
  4. I'm still wondering where "EStringToStringMapEntry" is defined. I "copied" that piece of code from a snippet I wrote a couple of months ago, and I have forgotten how I found that solution in the first place.
  5. Sorry, but I have to say it: The Xtend version 1.0.1 editor is crap (e.g., error markers of solved problems do not always get removed). But I've heard there should be a better one available in version 2 ;-)

Friday, March 4, 2011

Traverse DAGs with Xtend

Like OCL, Xtend (a sublanguage of the Xpand project for model queries) provides some really powerful collection operations. These operations allow you to easily retrieve elements from arbitrary models, i.e., graphs. Finding a single element in a graph is often very simple with these operations. However, when a collection is to be returned (one which is not stored in some attribute), this might be a little bit more complicated, especially if the order matters. In the following, I have assembled some examples showing how to traverse a directed acyclic graph (DAG) with some common traversal strategies:

1) unsorted search traversing directed connections
2) depth-first search traversing directed connections
3) breadth-first search traversing directed connections
4) traverse a directed connection in counter-direction
5) unsorted search, traversing directed connections in counter-direction
6) depth-first search traversing directed connections in counter-direction
7) breadth-first search traversing directed connections in counter-direction

Actually, traversing a connection in counter-direction is not really possible. What I mean by that is that the query is to return the element at the non-navigable end of the connection. E.g., for a given directed connection x->y, x is to be returned for a given y.

Example: a type hierarchy.

For the examples, I use a "real" world example: a type hierarchy. Our (very simple) model looks like this (in pseudo Java code):
Type {
    String name;
    Collection<Type> supers;
}
Let's also assume a container in which all types are stored, e.g.
Collection<Type> allTypes;
In OCL, you could even retrieve all types by a simple query; however, we often have some kind of container defined in our model anyway (and we use Xtend here ;-) ).
For testing the code, I have defined a concrete example. The following type hierarchy visualizes some type instances; the super type attributes are drawn as connections: we have a base type A. B, C, and D are direct subtypes of A. E is a subtype of B. F is a subtype of B and C (we allow multiple inheritance ;-) ), and so on.

Preliminary remarks:

  • I haven't looked into any algorithm book to find the best algorithm (whatever "best" means). The solutions below are simply the ones I implemented when I needed them, without much thinking about the algorithm. If you know a better solution, please let me know!
  • I'm using create extensions for simplicity and performance reasons here. If you have small (or mid-sized) models, I'd assume this would be ok.
  • Although Xtend is quite similar to OCL, the algorithms will probably not work with OCL, as OCL is side-effect free (and I modify collections in the algorithms, which will not work that way with OCL).

Super type queries (or: traverse directed connection)

Since we store the super types in the model, the easiest query is to retrieve the direct super types of a type. E.g., A.supers returns an empty list; F.supers returns B, C. Now, let's calculate the transitive closure of super types. This is very easy with Xtend:

1) Transitive closure of super types, unsorted:

create Set[Type] superTypesTransitive(Type type):
 this.addAll(type.supers) ->
 this.addAll(type.supers.superTypesTransitive());

We use a set in order to avoid duplicates, which would be added in case of multi-inheritance. This solution is straight forward. If you are not used to OCL syntax, you may be a little bit confused by the implicit syntax for the collect operation: type.supers.superTypesTransitive() returns superTypesTransitive() for all type.supers elements.
J.superTypesTransitive() will return F,B,C,A, just as expected. (J.superTypesTransitive() is just a more OO-like way of writing superTypesTransitive(J). Frankly, I don't know if it is more readable, but it definitely looks cooler ;-)). The returned collection is not ordered, and often enough this is sufficient. However, sometimes we need the collection to be ordered. There are two frequently used strategies for traversing a tree or DAG: depth first search and breadth first search. We will implement both.
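For comparison, here is a plain-Java sketch of the same unsorted closure (a minimal stand-in Type class is included; note that, unlike a create extension, this version recomputes and caches nothing):

```java
import java.util.*;

class SuperClosureSketch {
    static class Type {
        final String name;
        final List<Type> supers = new ArrayList<>();
        Type(String n, Type... s) { name = n; supers.addAll(Arrays.asList(s)); }
        @Override public String toString() { return name; }
    }

    // Unsorted transitive closure of super types; a Set avoids the duplicates
    // introduced by multi-inheritance, just as in the Xtend version.
    static Set<Type> superTypesTransitive(Type type) {
        Set<Type> result = new LinkedHashSet<>();
        for (Type s : type.supers) {
            result.add(s);
            result.addAll(superTypesTransitive(s));
        }
        return result;
    }

    public static void main(String[] args) {
        Type a = new Type("A"), b = new Type("B", a), c = new Type("C", a);
        Type f = new Type("F", b, c), j = new Type("J", f);
        System.out.println(superTypesTransitive(j)); // [F, B, A, C]
    }
}
```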

2) Transitive closure of super types with depth first search order:

create List[Type] superTypesTransitiveDFS(Type type):
 type.supers.forAll(s|
    this.add(s)->
    this.addAll(s.superTypesTransitiveDFS().reject(e|this.contains(e))) 
    !=null); 

As we need a sorted collection, we have to use a list instead of a set. However, we now have to reject duplicates in our own code. Since Xtend does not provide a loop statement, I have used the collection operation forAll here. Since forAll expects a boolean expression inside, the !=null part "casts" our chained expression into a boolean expression. If you know of a nicer solution, please let me know.
J.superTypesTransitiveDFS() will return F,B,A,C.
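A rough Java equivalent of the depth-first variant (again with a minimal stand-in Type class; duplicates are rejected explicitly, as in the Xtend code):

```java
import java.util.*;

class SuperDfsSketch {
    static class Type {
        final String name;
        final List<Type> supers = new ArrayList<>();
        Type(String n, Type... s) { name = n; supers.addAll(Arrays.asList(s)); }
        @Override public String toString() { return name; }
    }

    // Depth-first transitive closure of super types; a List keeps the
    // traversal order, so duplicates must be rejected by hand.
    static List<Type> superTypesTransitiveDFS(Type type) {
        List<Type> result = new ArrayList<>();
        for (Type s : type.supers) {
            if (!result.contains(s)) result.add(s);
            for (Type t : superTypesTransitiveDFS(s))
                if (!result.contains(t)) result.add(t);
        }
        return result;
    }

    public static void main(String[] args) {
        Type a = new Type("A"), b = new Type("B", a), c = new Type("C", a);
        Type f = new Type("F", b, c), j = new Type("J", f);
        System.out.println(superTypesTransitiveDFS(j)); // [F, B, A, C]
    }
}
```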

3) Transitive closure of super types with breadth first search order:

create List[Type] superTypesTransitiveBFS(Type type):
 let todo = new java::util::ArrayList :
  todo.addAll(type.supers) -> 
  bfsSuper(this, todo);
  
private Void bfsSuper(List[Type] result, List[Type] todo):
 if todo.isEmpty then
  Void
 else
  result.add(todo.first()) ->
  todo.addAll(todo.first().supers.reject(e|todo.contains(e) || result.contains(e))) -> 
  todo.remove(todo.first()) ->
  bfsSuper(result, todo);  

The breadth first search is a little bit more complicated, as we need a helper list and a helper method. Note that we cannot instantiate an Xtend List here; instead we have to use the Java ArrayList, which is available when we use the JavaBeans metamodel in the project's Xpand/Xtend settings.
J.superTypesTransitiveBFS() will return F,B,C,A.
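In plain Java the helper method can be replaced by an explicit work queue; a sketch (stand-in Type class included):

```java
import java.util.*;

class SuperBfsSketch {
    static class Type {
        final String name;
        final List<Type> supers = new ArrayList<>();
        Type(String n, Type... s) { name = n; supers.addAll(Arrays.asList(s)); }
        @Override public String toString() { return name; }
    }

    // Breadth-first transitive closure of super types, using an explicit
    // work queue instead of the recursive helper of the Xtend version.
    static List<Type> superTypesTransitiveBFS(Type type) {
        List<Type> result = new ArrayList<>();
        Deque<Type> todo = new ArrayDeque<>(type.supers);
        while (!todo.isEmpty()) {
            Type t = todo.removeFirst();
            result.add(t);
            for (Type s : t.supers)
                if (!todo.contains(s) && !result.contains(s))
                    todo.addLast(s);
        }
        return result;
    }

    public static void main(String[] args) {
        Type a = new Type("A"), b = new Type("B", a), c = new Type("C", a);
        Type f = new Type("F", b, c), j = new Type("J", f);
        System.out.println(superTypesTransitiveBFS(j)); // [F, B, C, A]
    }
}
```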

Sub type queries (or: traverse connections in counter direction)

So, we have three different queries for super types. Now we want to write the very same queries for sub types. Unfortunately, the sub type information is not directly stored in the model but must be derived instead. First, we write a simple query for the direct sub types:

4) Computes direct sub types (navigate in counter-direction):

create Set[Type] subTypes(Type type, Collection[Type] allTypes):
 this.addAll(allTypes.select(k|k.supers.contains(type)));

This solution simply scans all types and selects those whose supers attribute contains the given type. Frankly, I needed some time to figure it out (as I'm more of an imperative and OO guy ;-) ) and I was really surprised how short (one line) it is. If you had to write that in Java, you would need a lot more lines. So, if you ever wondered why to use a special transformation language -- this is at least one argument.
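To substantiate that comparison, here is roughly what the same direct-sub-type query looks like spelled out in plain Java (Type is again a minimal stand-in for the model class):

```java
import java.util.*;

class SubTypesSketch {
    static class Type {
        final String name;
        final List<Type> supers = new ArrayList<>();
        Type(String n, Type... s) { name = n; supers.addAll(Arrays.asList(s)); }
        @Override public String toString() { return name; }
    }

    // Direct sub types of 'type': scan all types and check their supers
    // attribute -- the hand-written loop replacing Xtend's one-line select.
    static Set<Type> subTypes(Type type, Collection<Type> allTypes) {
        Set<Type> result = new LinkedHashSet<>();
        for (Type k : allTypes)
            if (k.supers.contains(type))
                result.add(k);
        return result;
    }

    public static void main(String[] args) {
        Type a = new Type("A");
        Type b = new Type("B", a), c = new Type("C", a), d = new Type("D", a);
        List<Type> allTypes = List.of(a, b, c, d);
        System.out.println(subTypes(a, allTypes)); // [B, C, D]
    }
}
```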
Now, let's compute the transitive closure:

5) Transitive closure of sub types, unsorted:

create Set[Type] subTypesTransitive(Type type, Collection[Type] allTypes):
 this.addAll(allTypes.select(k|k.superTypesTransitive().contains(type)));

Note that this query does not rely on the subTypes extension, but only on the super type queries. Since we use a Set, we do not have to take care of duplicates. Again: A Java solution would be much longer.
A.subTypesTransitive(allTypes) will return B,C,E,G,H,I,J,D,F.
Now, let's implement the depth first search for sub types:

6) Transitive closure of sub types with depth first search order:

create List[Type] subTypesTransitiveDFS(Type type, Collection[Type] allTypes):
 type.subTypes(allTypes).forAll(s|
        this.add(s)->
        this.addAll(s.subTypesTransitiveDFS(allTypes).reject(e|this.contains(e)))
    !=null);

Since we have implemented an extension for subTypes, the solution is quite similar to the super type depth first search algorithm.
A.subTypesTransitiveDFS(allTypes) will return B,E,F,I,J,C,D,G,H. Note that we cannot simply use the unsorted solution with a list instead of a set, as this would not result in a depth first search order (in our case, it would return something like B,C,D,E,F,I,J,G,H). The same is true for superTypesTransitiveDFS, by the way. Last but not least, the breadth first search for sub types:

7) Transitive closure of sub types with breadth first search order:

create List[Type] subTypesTransitiveBFS(Type type, Collection[Type] allTypes):
 let todo = new java::util::ArrayList :
  todo.addAll(type.subTypes(allTypes)) -> 
  bfsSub(this, todo, allTypes);
  
private Void bfsSub(List[Type] result, List[Type] todo, Collection[Type] allTypes):
 if todo.isEmpty then
  Void
 else
  result.add(todo.first()) ->
  todo.addAll(
    todo.first().subTypes(allTypes).
        reject(e|todo.contains(e) || result.contains(e))) -> 
  todo.remove(todo.first()) ->
  bfsSub(result, todo, allTypes); 

This algorithm also resembles the one for super types. By the way: I didn't find the let expression explained in the documentation of Xtend (however, some examples use it). Did I miss it, or is there really more OCL in Xtend than the docs tell?
A.subTypesTransitiveBFS(allTypes) will return B,C,D,E,F,G,H,I,J, which is easily validated, as this is the order of the types shown in the little figure.
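To double-check the traversal orders, the whole example hierarchy can be rebuilt in plain Java. Note that the posting does not spell out the edges for G through J, so they are reconstructed here from the reported traversal orders:

```java
import java.util.*;

class HierarchySketch {
    static class Type {
        final String name;
        final List<Type> supers = new ArrayList<>();
        Type(String n, Type... s) { name = n; supers.addAll(Arrays.asList(s)); }
        @Override public String toString() { return name; }
    }

    // Direct sub types, derived by scanning all types (see above).
    static Set<Type> subTypes(Type type, Collection<Type> allTypes) {
        Set<Type> result = new LinkedHashSet<>();
        for (Type k : allTypes)
            if (k.supers.contains(type)) result.add(k);
        return result;
    }

    // Breadth-first transitive closure of sub types via a work queue.
    static List<Type> subTypesTransitiveBFS(Type type, Collection<Type> allTypes) {
        List<Type> result = new ArrayList<>();
        Deque<Type> todo = new ArrayDeque<>(subTypes(type, allTypes));
        while (!todo.isEmpty()) {
            Type t = todo.removeFirst();
            result.add(t);
            for (Type s : subTypes(t, allTypes))
                if (!todo.contains(s) && !result.contains(s))
                    todo.addLast(s);
        }
        return result;
    }

    public static void main(String[] args) {
        Type a = new Type("A");
        Type b = new Type("B", a), c = new Type("C", a), d = new Type("D", a);
        Type e = new Type("E", b), f = new Type("F", b, c);
        Type g = new Type("G", d), h = new Type("H", d); // edges inferred
        Type i = new Type("I", f), j = new Type("J", f); // edges inferred
        List<Type> allTypes = List.of(a, b, c, d, e, f, g, h, i, j);
        System.out.println(subTypesTransitiveBFS(a, allTypes));
        // [B, C, D, E, F, G, H, I, J]
    }
}
```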

Tuesday, February 8, 2011

Extract Xtext Project Wizard

Besides a nice parser and a powerful editor, Xtext can also generate a project wizard and a generator plugin. In this little posting I will explain how to extract the project wizard related code from the generated UI plugin -- and explain why you would want to do that in the first place. First of all, what does the project wizard do? The project wizard is useful to set up an initial project for your DSL, e.g., for creating initial templates and workflow files, and to configure the project and its dependencies. The nice thing about the Xtext generated wizard is that the template files are simply created using Xpand. It is very easy to provide custom templates and other stuff by simply modifying the Xpand file. You will find that Xpand template in the UI plugin if you have enabled the wizard fragment in your DSL generator workflow:
// project wizard (optional)
fragment = projectWizard.SimpleProjectWizardFragment {
 generatorProjectName = "${projectName}.generator"
 modelFileExtension = file.extensions
}
This code is commented out by default. Once it is enabled, Xtext generates a nice project wizard. This project wizard is found in the generated ui project. Let's say my language is called sample.mydsl; then the wizard is created in sample.mydsl.ui, together with the powerful editor.

Now, why would I want to extract this wizard from the ui plugin? Actually, because I want to use the wizard during development of my language. Sounds silly... OK, here is the longer explanation: I find the wizard extremely useful, especially if I have a project which uses code generation. Although you can use any kind of code generation technology, you will probably often use Xpand and Xtend, as they are nicely integrated with Xtext. That is, Xtext can also generate a generator plugin with some template files and, most notably, a workflow file (MWE2).

Actually, the generated generator plugin looks almost identical to a plugin project created by the generated project wizard. That is, both the generator plugin and a DSL project created by the project wizard contain a folder src/model with a sample model, and also a sample MWE2 generator file. Also, both projects contain a src-gen folder, and the plugin dependencies are set accordingly. This is no coincidence: If you use code generation for your DSL, and if you use Xpand to perform the generation, using an MWE2 workflow to trigger the generation is a very easy solution. Alternatively, you would have to write a new action or something, but the MWE2 workflow is much easier to set up.

The model provided in the generator plugin is usually only used for testing purposes. That is, you write your Xpand (and Xtend) templates in the generator plugin, and you can quickly test them by applying them to the models provided in the generator's model folder. Later, when the generator plugin is installed, the user of the generator won't see the Xpand templates in the workspace; however, they can be accessed by the MWE2 workflow of the DSL project.
During development, you probably run into the same situation as I did: You write the templates, then some things change in your DSL and you have to adjust the templates, or new features have to be implemented, or you have forgotten some weird constellations, or you haven't written templates for all model elements. Most importantly: You will probably have several different models, and you do not want the generated code to end up in your generator plugin (as it should only define the templates, and should not contain weird generated Java or whatever files). In my case, other guys on the project write the DSLs and I have to maintain the templates. I rarely use the generated DSL editor myself, but I often have to use the generator (and adjust the templates).

Since I needed an extra project for each different case, I found myself copying the generator plugin (without the templates, but with my nicely configured MWE2 workflow) over and over again. I always had to adjust the plugin dependencies and so on. Well, a wizard would be nice in that situation. Now, you see my point? The Xtext generated project wizard is exactly what I needed -- but I need it in an Eclipse instance in which I have neither my DSL editor nor the DSL parser installed, as this is the very instance in which I develop these things. Still, the wizard would come in quite handy there.

So, the idea is as follows: I extracted the wizard into a separate plugin. The wizard plugin has no dependencies to my DSL, so it can be installed without my DSL plugins. However, the generator, i.e. the MWE2 workflow, requires all my DSL stuff. But this is no problem, as the generator plugin is an open project in my development workspace -- thus the MWE2 workflow (not the plugin code, but the workflow) can access the project. Here is how to extract the project wizard (it is not too complicated, but this may save you some minutes):
assumption
You have an existing Xtext project, which I will call "sample.mydsl" in the following, and, of course, a generator plugin "sample.mydsl.generator" created for your DSL. Inside the generator, you have configured the MWE2 workflow.
generate project wizard
Enable Project Wizard Fragment in your DSL workflow, e.g. in "GenerateMyDSL.mwe2" of your DSL project:
// project wizard (optional) 
fragment = projectWizard.SimpleProjectWizardFragment {
 generatorProjectName = "${projectName}.generator" 
 modelFileExtension = file.extensions
}
Now run the workflow "GenerateMyDSL.mwe2", and then disable the project wizard fragment again (as we do not want to end up with two wizards).
create wizard plugin
Create a new plugin project (e.g., sample.mydsl.ui.wizard) with an Activator, and check "This plug-in will make contributions to the UI".
Important: Use "sample.ui.wizard" as package for the Activator (without mydsl), in order to retrieve the same package names as in the Xtext generated ui project.
move wizard code to new plugin
Move the following classes from the generated ui plugin into the wizard -- this is the actual "extraction" of the wizard:
from src-gen:
  • sample.ui.wizard.MyDSLNewProjectWizard
  • sample.ui.wizard.MyDSLProjectCreator
from src:
  • sample.ui.wizard.MyDSLProjectInfo
  • sample.ui.wizard.MyDSLNewProject.xpt
and copy the following classes from the generated ui plugin into the wizard plugin, put them into the wizard package:
  • sample.ui.MyDSLUiModule
from src-gen:
  • sample.ui.MyDSLExecutableExtensionFactory
Also copy the plugin.xml from the ui plugin into the wizard plugin and remove everything except the last extension with point "org.eclipse.ui.newWizards"; adjust the class name of the extension factory according to the class in your wizard project (you will have to add a ".wizard" to the fully qualified name).
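For orientation, the remaining extension in the wizard plugin's plugin.xml will look roughly like this (the IDs, names, and category shown here are illustrative; keep the values Xtext generated for your DSL and only adjust the factory's package):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<?eclipse version="3.4"?>
<plugin>
   <extension point="org.eclipse.ui.newWizards">
      <wizard
            id="sample.mydsl.ui.wizard.MyDSLNewProjectWizard"
            name="MyDSL Project"
            class="sample.ui.wizard.MyDSLExecutableExtensionFactory:sample.ui.wizard.MyDSLNewProjectWizard"
            category="org.eclipse.xtext.projectwiz"
            project="true">
      </wizard>
   </extension>
</plugin>
```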
configure wizard project
In the Manifest, set the singleton directive to true (if not already set) and add the missing plug-in dependencies, e.g.
  
 org.eclipse.xtext.ui,
 org.eclipse.ui.editors;bundle-version="3.5.0",
 org.eclipse.ui.ide;bundle-version="3.5.0",
 org.eclipse.xtext.ui.shared,
 org.eclipse.ui,
 org.eclipse.xtext.builder,
 org.antlr.runtime,
 org.eclipse.core.runtime,
 org.eclipse.core.resources,
 org.eclipse.xtend,
 org.eclipse.xpand
Depending on your project, you may have to add other dependencies as well, e.g. de.itemis.xtext.typesystem if you use the great Xtext type system by Markus Völter.
adjust the copied and moved java files
  • MyDSLUiModule is to be replaced completely:
     
     public class MyDSLUiModule extends AbstractGenericModule {
     
     private AbstractUIPlugin plugin;
    
    
     public MyDSLUiModule(AbstractUIPlugin plugin) {
      this.plugin = plugin;
     }
     
     public void configureLanguageName(Binder binder) {
      binder.bind(String.class).annotatedWith(Names.named(Constants.LANGUAGE_NAME)).toInstance("sample.MyDSL");
     }
     
     public void configureFileExtensions(Binder binder) {
      binder.bind(String.class).annotatedWith(Names.named(Constants.FILE_EXTENSIONS)).toInstance("MyDSL");
     }
     
     @Override
     public void configure(Binder binder) {
      super.configure(binder);
      binder.bind(AbstractUIPlugin.class).toInstance(plugin);
      binder.bind(IDialogSettings.class).toInstance(plugin.getDialogSettings());
     }
     
     
     // contributed by org.eclipse.xtext.ui.generator.projectWizard.SimpleProjectWizardFragment
     public Class<? extends org.eclipse.xtext.ui.wizard.IProjectCreator> bindIProjectCreator() {
      return MyDSLProjectCreator.class;
     }
    }
    
  • MyDSLProjectCreator can be reused, however you may want to add some dependencies to the getRequiredBundles method, depending on the dependencies of your generated classes:
      
    @Override
    protected List<String> getRequiredBundles() {
     List<String> result = Lists.newArrayList(super.getRequiredBundles());
     result.add(DSL_GENERATOR_PROJECT_NAME);
     result.add("org.eclipse.jface.text");
     result.add("org.eclipse.jdt.core");
     result.add("org.eclipse.equinox.common");
     result.add("org.eclipse.core.runtime");
     return result;
    }    
    
    Actually, this list is the list of required bundles of your generated code. That is, if you generate Java code which requires a plugin "my.super.plugin", you have to add the dependency here.
  • MyDSLNewProjectWizard: you probably have to fix a problem in getProjectInfo, simply change the fully qualified name MyDSLProjectInfo to a simple name, as we have moved the info into the same package.
  • MyDSLExecutableExtensionFactory: fix the class name of the plugin activator, and change getInstance().getInjector("..") to getDefault().getInjector()
  • Activator: Add initialization of Guice injector and add an injector attribute to the wizard's activator:
      
    public class Activator extends AbstractUIPlugin {
        ..
        Injector injector;
    
     public Injector getInjector() {
      return injector;
     }
    
     @Override
     public void start(BundleContext context) throws Exception {
      super.start(context);
      plugin = this;
    
      injector = Guice.createInjector(
      // Wizard:
       Modules.override(new MyDSLUiModule(this))
       // Workspace etc.:
        .with(new org.eclipse.xtext.ui.shared.SharedStateModule()));
    
     }
     ..
    }
    
remove obsolete code from ui plugin
  • remove method bindIProjectCreator in AbstractMyDSLUiModule
  • remove the wizard extension point definition from the ui plugin.xml, as you probably do not want to have two wizards.
You can now run the wizard in the Eclipse runtime (i.e. the Eclipse started from within your initial, vanilla installation). Of course, you can modify the Xpand template; e.g., I have simply copied and pasted the workflow of my generator plugin into that Xpand file. You may add other adjustments as well.

Now you can export the wizard as a plugin and install it into the dropins folder of your Eclipse installation. After restarting that instance, the wizard is available in the workspace in which you develop your DSL. As it has no dependencies to the DSL, you can create new projects, which are configured as you specified above. The workflow works, although it probably has dependencies to your DSL (e.g., it uses the generated DSL parser), as the DSL project is an open project in your workspace.

Side effect: You have actually extracted the Guice infrastructure as it is used by Xtext generated editors. So, even if you do not use your wizard as often as expected, you may have learned some stuff about this better-than-factory-pattern technology.

Friday, January 21, 2011

Clickable logging messages

Probably everyone is using logging. The dirty way is to use System.out/err; a better solution is to use the JDK logging or some logging library, such as log4j (or a facade, e.g., slf4j) -- and of course Log4E to generate the logger declaration and other logging code. For rich clients and small tools, I usually use the JDK logging, as I do not have to add another library and the logging is only used for development purposes. For web (or other server) applications, in which logging is an important monitoring tool, other log libraries may be a better choice. Anyway, when using the JDK logging facilities, the log message is written to the Eclipse console view by default. A simple log message usually looks like this:
Jan 14, 2011 4:36:54 PM my.project.MyClass bar
INFO: Demo
This is ok; however, I usually do not need the date and time. Also, IMHO two lines for a single message waste too much of my console view space. I have written a small formatter which produces the following output (scroll to the right to see the whole output):
I Demo                     at my.project.MyClass.bar(MyClass.java:88)
That is, the date and time are omitted (as they are usually not needed for development purposes), and instead of printing the full logging level, abbreviations are used (I = Info, W = Warning, S = Severe, and so on). For shorter messages, the location information is printed on the same line, with some tabs between message and location for better readability. The best thing about this format is that the Eclipse console recognizes the location and makes it available as a link, which directly opens the location in the source code where the log message was produced. If you configure the console preferences to use gray for standard error text (as which the log messages are interpreted due to the location format), you will see something like this:
W Test Warning             at my.project.LoggingTest.main(LoggingTest.java:32)
I Test Info                at my.project.LoggingTest.main(LoggingTest.java:36)
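The actual formatter lives in the delofo project; a minimal sketch of a java.util.logging Formatter producing a similar clickable one-line format might look like this (class and method names here are my own, not delofo's):

```java
import java.util.logging.*;

// Demo wiring; the real delofo formatter lives in the eclipselabs.org project.
class DevFormatterDemo {
    public static void main(String[] args) {
        Logger logger = Logger.getLogger("demo");
        logger.setUseParentHandlers(false); // avoid double output via root handler
        Handler handler = new ConsoleHandler();
        handler.setFormatter(new DevFormatter());
        logger.addHandler(handler);
        logger.info("Demo"); // one line, with a clickable location at the end
    }
}

// Hypothetical sketch of a one-line formatter similar to the one described.
class DevFormatter extends Formatter {
    @Override
    public String format(LogRecord record) {
        // Abbreviate the level: INFO -> I, WARNING -> W, SEVERE -> S, ...
        String level = record.getLevel().getName().substring(0, 1);
        String message = formatMessage(record);
        // Walk the stack to build an "at pkg.Class.method(Class.java:NN)"
        // suffix, which the Eclipse console turns into a clickable link.
        String location = "";
        for (StackTraceElement e : Thread.currentThread().getStackTrace()) {
            String cn = e.getClassName();
            if (!cn.startsWith("java.") && !cn.startsWith("jdk.")
                    && !cn.startsWith("sun.")
                    && !cn.equals(DevFormatter.class.getName())) {
                location = "at " + cn + "." + e.getMethodName()
                        + "(" + e.getFileName() + ":" + e.getLineNumber() + ")";
                break;
            }
        }
        return String.format("%s %-24s %s%n", level, message, location);
    }
}
```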
Actually, I have been using this formatter for quite some years now. Today I created an eclipselabs.org project called delofo (short for Developer Logging Formatter) with this formatter. I've chosen eclipselabs.org because the formatter is optimized for the Eclipse console view, although the project does not provide a plugin and can be used without Eclipse. In the project wiki you will find an installation guide explaining how to globally install the formatter. By installing the formatter globally, you do not have to modify your projects in any way.