Tag: Java

  • JAXB puzzles with Java 11

    JAXB, which stands for Java Architecture for XML Binding, is a library that can be used to persist Java objects as XML documents. XML is a text-based format suitable for representing complex, hierarchical data. JAXB can unmarshal an XML document directly into objects whose classes are defined by the developer, rather than forcing the developer to parse a generic abstract representation, such as the Document Object Model (DOM), into domain-specific
    objects. JAXB can also marshal objects into XML documents. This can be used to persist data in XML, a text-based, platform-independent format, as opposed to alternatives like Java object serialization.

    For an introduction to JAXB, see for example Guide to JAXB.

    Object mapping is not specific to JAXB. The Jackson library can be used to parse JSON documents directly into objects, while SnakeYaml can do the same for YAML. These libraries are not essential but very handy. Instead of just parsing the XML, JSON or YAML files into an abstract tree that then needs to be traversed by tedious and repetitive code, they directly map the data to business objects, performing some degrees of validation in the process.

    However, these libraries become quite problematic when they stop working after an upgrade to a newer Java version. This is exactly what happened with JAXB past Java 8. Fortunately, solutions exist, but they don’t always work.

    This post summarizes the known problems and proposes solutions. We first start with the classic dependency issues due to removal of JAXB from JavaSE, then go on with a less common issue that can arise in applications involving multiple class loaders.

    A somewhat known but not fully solved problem

    There are many, sometimes misleading, posts and forum questions about JAXB.

    These posts lead me to believe that not enough effort was spent addressing the issue. This could be explained by many developers moving away from XML in favor of other formats like JSON or YAML. XML is a bit too verbose, and many parsers must be explicitly configured to disable external entities in order to avoid security vulnerabilities. JSON is a much simpler format, but it doesn’t support comments by default. YAML is a bit less verbose than JSON and supports comments, but its indentation-based syntax can be misleading.

    In our case, we were stuck with XML-based projects that our component had to keep loading. Migrating to JSON or YAML would break backward compatibility, and providing migration tools would have been as tedious as parsing the XML documents without JAXB. Maybe a migration tool would be easier to write in Python, but such an offline tool would add complexity. If we ever switch project formats, the Java code needs to be able to load both the old XML-based projects and the new ones; an offline tool to convert XML to something else is not an option.

    Compilation errors due to JAXBContext not found

    The first thing that happens after migrating a JAXB-enabled program from Java 8 to a newer version is a compilation error, because JAXBContext cannot be found anymore. JAXB was turned into a module in Java 9, and that module is not resolved by default. Although this can be worked around, it is better to explicitly add a JAXB library as a third-party dependency, since JAXB was removed completely in Java 11.

    Besides the JAXB API itself, the program needs an implementation. The most common JAXB implementation is the one from Glassfish.

    Here is an example of Maven dependencies for JAXB.

    <dependency>
        <groupId>jakarta.xml.bind</groupId>
        <artifactId>jakarta.xml.bind-api</artifactId>
        <version>2.3.3</version>
    </dependency>
    <dependency>
        <groupId>org.glassfish.jaxb</groupId>
        <artifactId>jaxb-runtime</artifactId>
        <version>2.3.6</version>
        <scope>runtime</scope>
    </dependency>

    There are more recent versions of JAXB, but they change the package names of the classes. The 2.x line is the closest to what was provided in Java 8.

    Setting the scope to runtime for jaxb-runtime is not strictly necessary, but it is a good idea to help IDEs propose meaningful code completion. No code should refer directly to JAXB implementation classes; code should only interact with JAXB through its public API. This allows the implementation to be swapped if need be.
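
    With the dependencies above in place, a minimal round trip through the public API looks like the following sketch. The Person class, its fields and the sample values are invented for the example; only javax.xml.bind types from the 2.x API line are used.

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.bind.JAXBContext;
import javax.xml.bind.JAXBException;
import javax.xml.bind.Marshaller;
import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import javax.xml.bind.annotation.XmlRootElement;

public class JaxbRoundTrip {

    // Hypothetical bound class, invented for this example.
    @XmlRootElement(name = "person")
    @XmlAccessorType(XmlAccessType.FIELD)
    public static class Person {
        public String name;
        public int age;
    }

    public static void main(String[] args) throws JAXBException {
        JAXBContext jc = JAXBContext.newInstance(Person.class);

        Person p = new Person();
        p.name = "Alice";
        p.age = 30;

        // Marshal the object to XML...
        StringWriter xml = new StringWriter();
        Marshaller m = jc.createMarshaller();
        m.marshal(p, xml);

        // ...then unmarshal it back into an object.
        Person back = (Person) jc.createUnmarshaller()
                .unmarshal(new StringReader(xml.toString()));
        System.out.println(back.name + " " + back.age);
    }
}
```

    Note that the code only ever touches the API artifact; the Glassfish runtime is discovered at run time, which is exactly why the runtime scope works.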

    javax.xml.bind.JAXBException: Implementation of JAXB-API has not been found on module path or classpath.

    The main cause is the absence of a JAXB implementation on the class path. This can usually be solved by adding a dependency to your build file; see above for a Maven example.

    If the error persists, the next step is obviously to look at the cause in the stack trace. There is a common but misleading one:

    Caused by: java.lang.ClassNotFoundException: com.sun.xml.internal.bind.v2.ContextFactory

    Inspecting the Glassfish JAXB implementation JAR, one finds that the factory class is com.sun.xml.bind.v2.ContextFactory, without the "internal" part. It is then tempting to believe that JAXBContext.newInstance() uses the wrong default factory class. However, this is not the problem, at least for JAXB API 2.3.3. I had to dig into the source code of JAXBContext to verify this.

    JAXBContext.newInstance(), from the JAXB API, applies several strategies to search for a JAXB implementation, and it falls back on the default class present only in Java 8 if everything else fails. If a JAXB implementation is present on the class path, JAXBContext.newInstance() should find it and not fall back on the missing Java 8 default.

    Class loader intricacies

    What if, despite the fact that you checked, double-checked, triple-checked, and asked other developers to do the same, the JAXB dependencies are correct and your program keeps complaining that the JAXB implementation cannot be found? Checking again, as many forum posts suggest, won’t help in such cases. Trying different versions of the JAXB artifacts could help, but doing so blindly is likely to lead nowhere.

    In a nutshell, you need to make sure JAXBContext.newInstance() is called at a place where the context class loader is correctly set up. Calling JAXBContext.newInstance() from a worker thread spawned by a Java ExecutorService or ForkJoinPool can cause issues. It is better to construct the JAXBContext instances you need at the beginning of your program’s execution, store them in static variables, and reuse them instead of recreating them again and again. Your program will benefit from an almost free performance boost when loading from or saving to XML, and you will be less likely to run into JAXB issues. The JAXB problems, if any, will pop up fast, right at application startup, rather than later on when the application receives requests.
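
    A minimal sketch of this caching approach follows; the Person class is invented for the example. The JAXBContext is built once, in a static initializer that normally runs on the main thread, and is then shared.

```java
import javax.xml.bind.JAXBContext;
import javax.xml.bind.JAXBException;
import javax.xml.bind.annotation.XmlRootElement;

public class XmlSupport {

    // Hypothetical bound class, invented for this example.
    @XmlRootElement
    public static class Person {
        public String name;
    }

    // Built once, at class-initialization time, typically on the main
    // thread where the context class loader is correctly set up.
    private static final JAXBContext CONTEXT;
    static {
        try {
            CONTEXT = JAXBContext.newInstance(Person.class);
        } catch (JAXBException e) {
            // Fail fast at startup rather than later, under load.
            throw new ExceptionInInitializerError(e);
        }
    }

    // JAXBContext is thread-safe, so sharing this instance is fine;
    // create a fresh Marshaller/Unmarshaller per use instead.
    public static JAXBContext context() {
        return CONTEXT;
    }

    public static void main(String[] args) {
        // Every caller gets the same cached instance.
        System.out.println(context() == context());
    }
}
```

    Failing in the static initializer surfaces any JAXB misconfiguration at startup, which is the behavior argued for above.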

    Understanding the reason for this is not obvious and requires digging into Java class loaders. When a Java application requires a new class or resource, it uses a class loader to search for it. Most class loaders look for resources at well-defined locations known as the class path. Locations can be directories, archive files (with a .jar extension) or URLs to archive files (the contents are downloaded as needed).

    Simple applications started with the java command line, running a main method from a class, have a single class loader. This loader is created at startup, using a class path coming from the CLASSPATH environment variable or passed on the command line with the -cp option. Java applications started with the -jar option also have a single class loader. If JAXB is verified to be on the class path, the application should work correctly.
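
    This can be checked with a few lines of plain Java: in a simple java launch, the main thread’s context class loader is the application (system) class loader, the same one that sees everything on the class path, JAXB included.

```java
public class LoaderCheck {
    public static void main(String[] args) {
        ClassLoader context = Thread.currentThread().getContextClassLoader();
        ClassLoader system = ClassLoader.getSystemClassLoader();
        // In a plain `java` launch both are the application class loader,
        // so anything on the class path can be located through either.
        System.out.println(context == system);
    }
}
```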

    Problems arise when the Java Virtual Machine deals with multiple class loaders. Spring Boot applications can have more than one class loader if they create multiple application contexts. Even running a Java program through the Maven Exec plugin results in a second class loader being created for the execution.

    When there are multiple class loaders, a question should come up: how does JAXBContext.newInstance() choose a class loader to search for a JAXB implementation? There are unfortunately multiple possibilities, and the one actually used is not necessarily the one you expect.

    1. JAXBContext.newInstance() has overloads accepting an explicit class loader. In that case, that class loader is used to search for the implementation. But being in control of the class loader doesn’t necessarily mean you know the correct one to pass. Moreover, not all forms of newInstance accept a class loader; for instance, the one taking the XML classes to bind doesn’t.
    2. Some developers, including me, could think that JAXBContext.newInstance() uses the class loader that loaded JAXBContext itself. This class loader can be retrieved easily, using JAXBContext.class.getClassLoader(). If the JAXB API and the JAXB implementation are on the same class path, the class loader that loaded JAXBContext should be able to find the implementation. But JAXBContext.newInstance() gets its class loader another way. See 3.
    3. Using the current thread’s context class loader. This can be retrieved using Thread.currentThread().getContextClassLoader(). The source code confirms that this is how JAXBContext.newInstance() locates the JAXB implementation.

    Wrapper code such as the Maven Exec plugin or Spring Boot that creates a custom class loader will take care of setting that class loader as the current thread’s context class loader, using Thread.currentThread().setContextClassLoader(). However, what if the wrapped code uses other threads?

    If the wrapped code creates a Thread of its own, using new Thread(), the new thread inherits the context class loader from its parent. However, things are different when threads from a pool are reused. Facilities such as ExecutorService and ForkJoinPool can create threads, and there is no guarantee that these threads, which can be reused across multiple parent threads, will have the expected context class loader. They may end up with the default class loader, and that class loader could be unable to locate the JAXB implementation, unless it is baked into the Java standard library, as in Java 8.
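
    The difference between plain threads and pool threads can be demonstrated with standard-library code alone. In this sketch, an empty anonymous class loader stands in for a wrapper’s custom loader; everything else is plain java.util.concurrent.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ContextLoaderDemo {
    public static void main(String[] args) throws Exception {
        // Marker loader standing in for a wrapper's custom class loader.
        ClassLoader custom =
                new ClassLoader(ClassLoader.getSystemClassLoader()) {};

        // Pool whose worker thread is created BEFORE the custom loader
        // is installed on the main thread.
        ExecutorService earlyPool = Executors.newSingleThreadExecutor();
        earlyPool.submit(() -> {}).get(); // force worker creation now

        Thread.currentThread().setContextClassLoader(custom);

        // A thread created with new Thread() inherits the parent's
        // context class loader.
        final ClassLoader[] inNewThread = new ClassLoader[1];
        Thread t = new Thread(() ->
                inNewThread[0] = Thread.currentThread().getContextClassLoader());
        t.start();
        t.join();

        // The pre-existing pool worker keeps whatever loader it was
        // created with, no matter which thread submits tasks to it.
        ClassLoader inPoolThread = earlyPool.submit(
                () -> Thread.currentThread().getContextClassLoader()).get();
        earlyPool.shutdown();

        System.out.println("inherited=" + (inNewThread[0] == custom)
                + " poolKept=" + (inPoolThread == custom));
    }
}
```

    The pool worker never sees the custom loader, which is exactly the situation a JAXBContext.newInstance() call can end up in when made from a reused worker thread.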

    This explains why a program that worked well with Java 8 starts to exhibit JAXB errors in Java 9 and onward.

    Possible solutions

    1. As outlined above, the ideal solution is to make sure JAXBContext.newInstance() is only called from the main thread, and that the JAXBContext instances are cached and reused. JAXBContext is thread-safe; only the Marshaller and Unmarshaller instances created by JAXBContext are not.
    2. If the above fails, one quirky possibility is to temporarily set the context class loader to something that can locate the JAXB implementation, then restore it. Here is an example.
    ClassLoader oldContextClassLoader = Thread.currentThread().getContextClassLoader();
    JAXBContext jc;
    try {
       Thread.currentThread().setContextClassLoader(JAXBContext.class.getClassLoader());
       jc = JAXBContext.newInstance(MyXMLObject.class);
    } catch (JAXBException e) {
       // Do something, this is a checked exception.
       // Lazy developers sometimes do this and that's not ideal.
       // If you do this, at least pass the thrown exception as the cause.
       throw new RuntimeException("An error occurred", e);
    } finally {
       Thread.currentThread().setContextClassLoader(oldContextClassLoader);
    }

    Another, very bad, solution may be to directly call the ContextFactory.newInstance() method from the JAXB implementation. This is not great, because it ties your code to a specific implementation, but it is at least better than getting rid of JAXB altogether.

  • Ubuntu 16.04 almost killed my current HTPC setup

    Yesterday, I tried to upgrade my HTPC running Ubuntu 14.04 to the new LTS, 16.04. That almost went smoothly, but some glitches happened at the end, and some changes prevented my Minecraft FTB server from starting again. The problems are now solved, but for a while I was wondering if I would ever get this working again.

    I had two hopes for this upgrade: getting an intermittent, awful audio glitch fixed and having the ProjectM visualization work again. From time to time, when I start the playback of a video file, I hear an awful, super loud distortion instead of the soundtrack. I then have to restart playback. Usually that’s enough; sometimes I have to restart it twice. Fortunately, audio doesn’t go crazy during playback. The ProjectM visualization started to fail, I think, with Kodi 16. It just doesn’t kick in, leaving me with a blank screen. At least Kodi doesn’t crash or freeze, as some versions of XBMC did when ProjectM was unable to access the Internet reliably.

    CloneZilla failing to start

    The week before the upgrade, I wanted to back up the SSD of my HTPC using CloneZilla, in case problems happened. I used an old version I had burned on a CD, because I thought this 2009 HTPC wouldn’t boot from USB sticks. Well, that old version, although working on my main PC, failed to start on my HTPC. It simply froze without any clue of what was happening. Before downloading the new version and burning it on a CD, I noticed that my external USB hard drive showed up in the boot options when pressing F8 at computer startup. I thus tried to boot from my CloneZilla USB stick, which runs a more recent version, and that worked. I don’t know if my HTPC was always able to boot off USB; maybe this capability was added by a BIOS upgrade. That was a good thing, and it allowed me to perform my backup.

    Dist-upgrade or clean install?

    Several people on forums recommend performing a clean install, claiming that too many things changed from one version to the other. That may be true in some cases, and it’s probably the safest route, but unfortunately, a clean install doesn’t always detect the drives to mount, requiring time-consuming modifications to /etc/fstab (with copy/pasting of drive UUIDs). I would then have to figure out which packages were previously installed and reinstall them. I also have a couple of Cron jobs performing automatic backups of my Minecraft worlds that I would need to recreate.

    Instead of doing that, I tried to use the Update Manager to perform a dist-upgrade. Unfortunately, by default, the tool won’t go from one LTS to the other: you have to go all the way through 14.10, 15.04 and 15.10 before reaching 16.04! Each dist-upgrade would have taken at least two hours, making this process really painful nonsense. Instead, I tried calling update-manager -d and got the option to go directly from 14.04 to 16.04!

    During the installation, I thought that if the power supply of this relatively old system died during the process, the system would probably be unrecoverable, requiring a backup restore or a clean install. Ouch! Luckily, no such thing happened.

    TeXLive broken

    During the dist-upgrade, I got some error messages because the updated TeXLive-related packages couldn’t be configured properly. Why is TeXLive installed on this HTPC? I don’t remember exactly. I don’t need to compile any LaTeX document on this machine, so this didn’t seem like an issue at all. I just asked the installer to ignore the errors and noted to myself to delete the TeXLive packages after the upgrade, to be sure not to run into issues if, for some obscure reason, I wanted to compile a LaTeX document later on.

    Failed dist-upgrade

    Unfortunately, the dist-upgrade aborted with an error, with no accurate information, just a message telling me that the dist-upgrade had failed. Argh! The system couldn’t shut down or reboot anymore, even when running sudo reboot from the command line. I was so frustrated that I considered shutting down this machine, which has caused me issue after issue for more than seven years, and never turning it back on again. If I hadn’t been able to recover from this failure, I could however have restored my CloneZilla image after taking a break from this catastrophic upgrade. In other words, not everything was lost.

    I tried pressing the power button a couple of times; the screen became blank and remained blank for a few seconds, then the stupid machine rebooted. At least the broken Ubuntu installation started up to the GUI. Assuming the main issue was the TeXLive glitch, I opened a terminal and tried to remove the TeXLive package: sudo apt-get remove texlive. This failed: apt-get was reporting errors about the TeXLive-related packages that weren’t configured. I tried to remove the package using dpkg, which complained that texlive wasn’t an installed package. I then searched for the packages using apt-cache pkgnames tex, and ended up removing tex-common. That got rid of the incorrectly configured packages and unblocked apt-get.

    After this, I ran apt-get update, then apt-get dist-upgrade. That installed a couple of additional packages. Then I ran apt-get autoremove to remove the obsolete packages. This, hopefully, completed the dist-upgrade. I also rebooted to make sure the system could still boot after that.

    OpenJDK 8 causing issues

    This HTPC runs a Minecraft world my friend and I share. We log less and less often onto that map, because my friend rarely plays and I am currently focusing on Agrarian Skies 2 rather than the old FTB Monster pack the map runs on. But I am considering starting a map on the FTB Infinity Expert Skyblock pack after I’m done (or completely blocked) with Agrarian Skies 2, and I would like to run it on a server with an auto-backup strategy in place and the possibility for friends to join in if they want. I thus wanted to keep the possibility of running Minecraft servers on my HTPC.

    Now, when I started the FTB Monster server, I was greeted with a meaningless ConcurrentModificationException. I may be able to retrieve the stack trace, but it is a bit pointless, referring repeatedly to nonsensical internal class names. OK, this is probably broken because of Java 8 and won’t get fixed unless I upgrade the mod pack, which will either force me to start from scratch on a new map, or require hours and hours of work to convert the map, which would be quite damaged after the upgrade. In particular, the switch to the Applied Energistics 2 mod will destroy my logistics network so thoroughly that it will require a complete redesign and rebuild. This will be even worse than the switch of Thermal Expansion and IC2 that occurred when I migrated (painfully) from Unleashed to Monster.

    Simple solution: run this under OpenJDK 7. That’s simple under Windows; unfortunately, there is no OpenJDK 7 package on apt-get for Ubuntu 16.04! Maybe I could have fiddled with PPAs or installed Oracle’s JDK outside of the apt-get packaging system, but what’s the point of having a packaging system if it requires so many workarounds? I also thought about running the server in a Docker container built from an image providing Java 7, but that’s a bit convoluted and could cause other issues. Who knows if the server would behave well in a Docker container? It probably would, but that remains to be tested.

    Fortunately, I figured out a way to patch the installation by adding a new JAR to the mods folder. The JAR comes from http://ftb.cursecdn.com/FTB2/maven/net/minecraftforge/lex/legacyjavafixer/1.0/legacyjavafixer-1.0.jar and was recommended by a forum post on http://support.feed-the-beast.com/t/cant-start-crashlanding-server-unable-to-launch-forgemodloader/6028. Installing the JAR fixed the issue and allowed me to start the server!

    Totally unexpected, very frustrating

    In order to test my Minecraft server, I started the FTB Launcher on my Ubuntu 16.04 main computer. From the launcher, I started the FTB Monster pack: crash. OpenJDK 8, again. I had to apply the JAR patch on my client as well. I did it (instead of fiddling to manually install JDK 7) and that worked. I was able to log on to my server and enter my world. However, as soon as I pressed F12 to go full screen, the screen went blank and everything froze. No way to get out of the game by switching desktops, no way to kill the game window with ALT-F4. I would once again have had to go to another machine, SSH into my main computer, kill the JVM, fail, and try again with kill -9. Instead, I just rebooted the machine, tried with Windows, and that worked. My Minecraft setup was correct; the client just requires a different video card or driver to work reliably on Ubuntu. But I changed from the onboard Intel HD to an NVIDIA GeForce add-on card in 2013 for exactly that reason. Having to switch graphics cards back and forth from one Ubuntu version to another is total nonsense to me.

    Kodi is gone

    I don’t know exactly how it happened, but Kodi, the new name of XBMC, was removed during the upgrade. Just reinstalling it was simple and enough to fix this. Kodi still works fine for music and video playback. The ProjectM visualization is still broken, though, but that’s not a big deal. I haven’t heard the audio distortion since the upgrade, but it’s too early to tell if it’s gone for good.

    Conclusion

    For now, I’m not sure it was worth it, but at least it didn’t break things. The main functionalities of my HTPC are still there: the Minecraft server runs, I was able to listen to YouTube videos, Kodi works for music and videos, and SSH works properly. I’ll have to see if other surprises are awaiting me.

  • Groovy + Maven + Eclipse = headache

    Java is a general-purpose programming language that has matured over more than ten years. It provides a solid platform on which many third-party libraries (of varying quality and complexity, of course) were developed. Maven is one of several ways large Java projects can be described formally and built automatically. Maven manages dependencies a lot better than the traditional way of bundling everything together in a large archive, and it aims at simplifying and unifying the build process, although advanced configuration quickly drifts into XML nightmares. Then comes Eclipse, an integrated development environment. Although far from perfect, Eclipse has been a time saver for me, especially when it comes time to search large code bases and refactor code. Eclipse is designed for Java, and it has a plugin, called M2Eclipse, to integrate well with Maven. We can safely state that Java, Maven and Eclipse play well together.

    Then comes Groovy, a language built on top of Java. Source code in the Groovy language is compiled into byte-code like Java is, and that byte-code can run on the same virtual machine as regular Java programs, except that Groovy programs need a set of runtime classes and the generated byte-code has more indirections compared to the output of a traditional Java compiler. As a Java extension, we would expect Groovy to play well with Maven and Eclipse. In practice, I found this not to be exactly the case.

    I experienced what follows with Eclipse Kepler, Groovy 2.2 and Maven 3. Things may be better with older or newer versions; that remains to be seen.

    Groovy and Eclipse, almost well but…

    The first time you try to write a Groovy program in Eclipse, you will notice that there is absolutely no IDE support for the language. You won’t get any code assist, and Eclipse will not compile or run Groovy code for you. You need to install an extension to get Groovy support: the Groovy Eclipse plugin. The plugin works relatively well, but it has a couple of annoying drawbacks.

    First, code completion works in random, erratic ways. I sometimes get tired of it and turn it off. For example, I had a variable of type String. I knew it was a String, and the IDE had a way to know too, because I declared the type of the variable in my code (in Groovy, you can use variables without declaring their types). However, when I asked for completions after typing to, I was offered toUpperCase() but not toLowerCase(). This was completely arbitrary.

    When running a Groovy script, the arguments in the launch configuration get prepopulated with a list of standard entries that you must not delete. If you want to pass your own arguments to your script, you have to append them at the end of what the Groovy extension inserted in the Arguments box, and you need to be careful not to delete the predefined entries when you replace your custom arguments.

    Debugging Groovy code in Eclipse is like playing Russian roulette. Sometimes you can print the contents of a variable, sometimes you cannot; you don’t know when it will fail or why. Sometimes you can expand an object and see its fields; sometimes the + icon is not there and you cannot expand, again for no obvious reason. Execution may step into closures or may not; you don’t know, at least I didn’t. You can work around this by putting breakpoints in the closures, but when you step out of a closure, you end up in strange places deep inside Groovy’s internals. Conditional breakpoints never worked at all, so I had to constantly pollute my code with insane if (someCondition) println("Bla") statements and be careful to remove all the junk after I was done debugging.

    Error messages are sometimes cryptic. If you are unlucky enough, you can even manage to get an internal error from the Groovy Eclipse compiler! I was getting one in one of my classes and had to disable static type checking for that class to get rid of it.

    On Monday, August 4th 2014, things went completely south after I upgraded my build to Groovy 2.3. Everything was working fine with Maven on the command line. Eclipse was compiling the code fine. I set up the project to use Groovy 2.3 and there was no issue. However, when running the project, I was getting the following runtime error.

    Conflicting module versions. Module [groovy-all is loaded in version 2.2.1 and you are trying to load version 2.3.6

    I looked at my POM file, analyzed the Maven dependencies with both mvn dependency:tree and Eclipse, found no Groovy artifact except the 2.3.6 one, verified my PATH to make sure only Groovy 2.3 was on it, checked the Eclipse preferences many, many times, and restarted Eclipse several times, to no avail. There seems to be something in the Groovy Eclipse plugin hard-coded for Groovy 2.2, even when the compiler is set to 2.3!

    Any search on Google yields results about Grails and Spring, as if nobody uses Groovy alone anymore, only with other frameworks. Nobody else seems to be having this issue.

    Maven + Groovy = fire hazard!

    Maven relies on plugins to perform its tasks, so the ability to build something with Maven depends on the quality of the plugins. There is unfortunately no official, well-known, well-tested and stable plugin to build Groovy code with Maven. The page Choosing your build tool gives a good idea of what is currently available.

    First I read about GMaven, but I quickly learned it was not maintained anymore, so I didn’t try it. Then I read that the Groovy Eclipse Compiler was the recommended one. I was a bit reluctant, thinking this was a piece of hackery that would pull in a bunch of dependencies from Eclipse, resulting in a heavyweight solution. But it was in fact well isolated and just the compiler; no need to pull in the whole Eclipse core!

    The Groovy Eclipse Compiler worked well for me for a couple of months. However, yesterday, things went south all of a sudden. First, there were compilation errors in my project that would not show up in Eclipse but appeared when compiling with Maven. These were error messages related to static type checking. After fixing them, compilation went well, but all of a sudden, at runtime, I was getting a class-not-found error about ShortTypeHandling. I read that this class was introduced in Groovy 2.3, while my project was using Groovy 2.2. Digging further, it seemed that the Groovy Eclipse Compiler was pulling in Groovy 2.3 and compiling the code against it, while the code was executed with Groovy 2.2. In principle this should not cause any problem, but it seems that in Groovy, byte-code is not fully compatible between versions!

    I tried updating my dependency on the Groovy Eclipse Compiler in the hope that it would fix the issue. However, that traded my ShortTypeHandling error for stack overflows. It happened that the clone() method of one of my classes was calling super.clone(), which is perfectly normal. But Groovy was doing something nasty that caused super.clone() to recursively call the clone() of my subclass! This resulted in an infinite loop causing the stack overflow.

    I found this issue to be even more intricate after I tried to compile my code on JDK 8 and found it to work correctly. In other words, the JDK was affecting how the Groovy Eclipse Compiler was building the byte-code! On JDK 7, something would corrupt the byte-code, causing the stack overflow errors, while on JDK 8, everything would go fine!

    I then tried updating the compiler once more, to the latest and greatest. Things compiled, but I was back at square one with the ShortTypeHandling error! So no matter what I tried, Maven was unable to build the project anymore.

    I was about to give up on Maven and use a batch file to call Groovy directly, but that would have meant a lot of fiddling with the class path. I was not happy at all with that solution.

    Then I found out about the GMavenPlus plugin. I tried it and it worked like a charm! The plugin uses the Groovy artifact defined in the project’s dependencies rather than hard-coding its own version of Groovy. It uses the official Groovy compiler API rather than its own, so things get compiled the same way as with the Groovy Ant task or the standalone groovyc compiler. GMavenPlus saved my day yesterday, freeing me from a lot of hassle.

    Is it worth it?

    I’m not sure at all. I ran into several problems with Groovy that would deserve a separate post. The integration difficulties with Maven and Eclipse make me believe it is better to just use Java directly. JDK 8 introduced lambda expressions that fulfill part of what Groovy is trying to implement in its own special way. For projects that really need a true scripting language, there are already several available, like Python, which is built from the ground up for scripting.

  • Groovy? Not so sure…

    Yesterday, I told myself it would be worth trying the Groovy programming language for a project at Nuance. I figured it would let me generate and manipulate XML more easily and spare me repetitive constructs. Rather than writing code to perform the same operation on each item of a list, classify items according to certain criteria in order to apply a specific treatment to each class of items, open files, and so on, I would be able to concentrate more on the program’s logic and avoid wasting a lot of time writing repetitive boilerplate and debugging it. Well, for the moment, it’s exactly the opposite! Here is why.

    • The day started badly with a connection problem with my monitor. At home, my Nuance laptop is connected to my monitor through a mini-HDMI to HDMI adapter, an HDMI cable going into an HDMI switch, then an HDMI to DVI cable going into the monitor. Well, I had no image anymore, even though my personal computer, also connected to the switch and powered on for reasons that don’t matter here, was displaying the Ubuntu desktop. This had happened a few times before, and unplugging and replugging the mini-HDMI cable had been enough. This time, it was in vain. It finally worked after trying another mini-HDMI adapter! I still don’t know if the adapter is really at fault, because my Raspberry Pi also refused to output HDMI last night. So it may be the damn switch, in which case I should ideally replace my monitor with one that has several HDMI inputs. But most computer monitors have a single DVI or HDMI input.
    • L’installation de mon environnement Groovy a posé des difficultés. Je n’ai eu aucun mal à installer Groovy lui-même, à mettre en place le plugin Groovy pour Eclipse, mais après, les problèmes ont commencé. Je me suis vite rendu compte qu’il valait mieux créer un nouveau projet distinct dans Eclipse pour cette nouvelle tâche, pas seulement pour éviter d’introduire des difficultés dans les builds à cause de Groovy mais aussi par souci de séparation correcte du code. Sans cela, Eclipse indiquait que le compilateur Groovy du projet, dicté par le fichier POM de Maven, ne correspondait pas au compilateur utilisé par défaut dans Eclipse. Il fallait alors modifier les propriétés du build, dans Eclipse, et il n’y avait pas de synchronisation avec le fichier POM de Maven, donc modifier le fichier POM risquait de nécessiter de refaire le paramétrage du build, et toute personne désireuse de consulter mon code dans Eclipse aurait elle aussi à paramétrer le build.
    • Trouver comment configurer mon fichier POM pour que Maven puisse gérer mon satané projet Groovy n’a pas été une mince affaire. Le plugin GMaven qui semblait devoir faire ce travail est discontinué, sans aucune alternative convainquante pour le remplacer! Le seul candidat est un plugin Groovy-Eclipse-Compiler qui me semble un joli hack utilisant le compilateur d’Eclipse en arrière-plan pour compiler du Groovy! Mais bon, c’est tout ce qu’on a alors on essaie. Eh bien il me fallut copier/coller plusieurs blocs de code dans mon fichier POM et ça ne fonctionnait même pas pour les raisons suivantes!
    • Eclipse s’est d’abord plaint qu’il y avait deux installations de Groovy dans le classpath. J’ai dû exclure celle en provenance d’un projet dépendant; c’était la 1.8 et je voulais partir avec la 2.0. Après, eh bien encore cette erreur de correspondance du compilateur: mon projet voulait Groovy 2.1, Eclipse avait la 2.0! Il m’a fallu utiliser une version antérieure de Groovy-Eclipse-Compiler, et trouver le bon numéro de version a demandé des recherches à plus finir.
    • Après tous ces efforts, eh bien Eclipse est devenu affreusement lent et gelait à tout bout de champ. Cela a fini par des erreurs à propos de mémoire insuffisante puis un plantage. Par chance, le comportement était plus normal après le redémarrage d’Eclipse.
    • Ensuite, le développement a véritablement commencé. D’abord, le plugin Groovy d’Eclipse souffre de problèmes lorsque vient le temps de proposer des noms de classes, méthodes et propriétés. Parfois, il trouve un nom, parfois pas, et c’est très arbitraire. Par exemple, j’avais une variable de type String (chaîne de caractères), et Groovy avait l’information à propos du type (à noter que ce n’est pas toujours le cas vu la nature dynamique de Groovy). Eh bien Eclipse localisait la méthode toLowerCase() mais pas toUpperCase()! La complétion de noms de classes fonctionnait parfois, mais elle n’ajoutait pas toujours l’importation nécessaire si bien qu’après coup, j’avais des erreurs indiquant que la classe récemment référencée n’était pas trouvable, devais sélectionner sa référence et appuyer sur CTRL-SHIFT-M pour ajouter l’importation. Ça fonctionnait parfois, parfois pas, il fallait alors appuyer plusieurs fois!
    • D’autres difficultés surgirent en raison de ma connaissance embryonnaire du langage Groovy. Par exemple, je me suis emmêlé les pinceux avec la notation pour construire un tableau associatif. Il ne faut pas utiliser [a:b, c:d]; ça ne va pas fonctionner, le compilateur va se plaindre de l’absence des variables b et d. Il faut plutôt utiliser [a: »b », c: »d »] ou encore [« a »: »b », « c »: »d »]. Mais pourtant, GroovySH va bêtement afficher [a:b, c:d] si on lui demande de montrer le tableau! Déclarer une variable de type List<?> ne fonctionnait pas: il fallait que j’utilise simplement List; en Java, cela déclenche un avertissement comme quoi c’est un type brut. Mais si je déclarais List[] ou List<?>[], eh bien j’avais un avertissement à propos du type brut! Il faut utiliser des listes au lieu des tableaux ou bien ne pas déclarer de type du tout. Mais je trouve ça plus clair de donner le type, surtout pour les arguments d’une fonction!
    • J’ai été bien choqué quand j’ai voulu créer une classe avec des champs et y générer des accesseurs, car la fonction d’Eclipse pour le faire n’était pas disponible en Groovy. Je me suis alors rappelé qu’il existe des annotations pour indiquer à Groovy de générer ces accesseurs automatiquement. Eh bien je n’arrivais pas à retrouver ces annotations dans la documentation et des recherches sur Internet me donnèrent à des indices pour bâtir une transformation d’AST personnalisée permettant de le faire!!! Bon sang! Par chance, il suffisait de déclarer mes champs sans modificateur d’accès pour que Groovy ait l’intelligence de les traiter comme des propriétés et alors définir les accesseurs.
    • Outre les problèmes syntaxiques, il y a aussi eu des difficultés d’API. Jusqu’à ce que je trouve la documentation du GDK, indiquant quelles méthodes Groovy ajoute à Java, je n’arrivais pas à savoir facilement comment appliquer une transformation sur tous les items d’une liste (collect peut le faire), s’il était possible d’ouvrir un fichier texte en UTF-8 avec un seul appel de méthode plutôt que construire le FileInputStream, puis le InputStreamReader, et enfin le BufferedReader, etc..
    • J’ai aussi eu des difficultés avec le débogueur qui s’est remis à se plaindre chaque fois que je définissais un point d’arrêt conditionnel. J’avais beau vérifier et revérifier l’expression de la condition, tout était OK. Pourtant, j’avais cette maudite erreur. J’ai encore été obligé de modifier le code temporairement après quoi le point d’arrêt fonctionnait, mais Eclipse n’arrivait pas à trouver le code source de la classe, dans un projet importé par dépendance Maven qui était pourtant dans mon espace de travail Eclipse! Il m’a fallu indiquer l’emplacement explicitement puis j’ai enfin pu déboguer le code. C’est possible que ce soit ça qui ait brisé les points d’arrêt conditionnels.
    • J’ai eu des erreurs d’exécution à la pelle! Le code compilait, semblait beau, mais à l’exécution, j’avais des problèmes à propos de méthodes ou de propriétés inexistantes. Cette fois-ci, ce n’était pas Groovy, ni Maven, ni Eclipse mais bien mon code; il fallait corriger les petites erreurs. Certaines erreurs ont été difficiles à corriger, surtout celles qui ont surgi quand je me suis mis à utiliser le MarkupBuilder de façon un peu exotique pour construire mon XML de façon dynamique. Oh là là! La documentation de Groovy n’explique pas très bien comment fonctionne le builder; c’est un fichier en progression. Mais pourquoi placer une page sur un site web pour simplement écrire, pendant deux ans, work in progress ou coming soon? Je ne me souviens plus exactement d’où j’ai eu les indices pour comprendre ce qui se passe, peut-être dans le chapitre sur les DSL de Groovy in Action. Le problème ici était que mon fichier XML n’était pas statique: je devais générer un élément <task> pour chaque tâche de mon application et y injecter des attributs au besoin, pas toujours tous les attributs! Par chance, la chose a été possible et peut être étendue pratiquement à l’infini.
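    For the record, the Groovy-Eclipse-Compiler setup mentioned above boils down to a POM fragment along these lines (a sketch only; the version numbers are illustrative, and they were precisely the moving parts that caused the mismatches):

    ```xml
    <!-- Sketch: delegate compilation to the Groovy-Eclipse compiler.
         Versions shown are examples and must match the project's Groovy version. -->
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-compiler-plugin</artifactId>
      <configuration>
        <compilerId>groovy-eclipse-compiler</compilerId>
      </configuration>
      <dependencies>
        <dependency>
          <groupId>org.codehaus.groovy</groupId>
          <artifactId>groovy-eclipse-compiler</artifactId>
          <version>2.8.0-01</version>
        </dependency>
        <dependency>
          <groupId>org.codehaus.groovy</groupId>
          <artifactId>groovy-eclipse-batch</artifactId>
          <version>2.1.5-03</version>
        </dependency>
      </dependencies>
    </plugin>
    ```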
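    On the Java side, by the way, the one-call file reading I was looking for in the GDK does exist since Java 7’s NIO.2. A small self-contained sketch (file name and contents are mine):

    ```java
    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.Arrays;
    import java.util.List;

    public class Utf8ReadDemo {
        public static void main(String[] args) throws IOException {
            // Create a small UTF-8 file so the example is self-contained
            Path file = Files.createTempFile("demo", ".txt");
            Files.write(file, Arrays.asList("héllo", "wörld"), StandardCharsets.UTF_8);

            // One call replaces the FileInputStream/InputStreamReader/BufferedReader chain
            List<String> lines = Files.readAllLines(file, StandardCharsets.UTF_8);
            System.out.println(lines); // [héllo, wörld]

            Files.delete(file);
        }
    }
    ```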
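    Since the last bullet is about dynamically generated <task> elements, here is a hedged Java analogue of that MarkupBuilder usage (the element and attribute names are invented for illustration): one <task> element per task, setting only the attributes that actually apply.

    ```java
    import java.io.StringWriter;
    import java.util.Arrays;
    import java.util.LinkedHashMap;
    import java.util.List;
    import java.util.Map;
    import javax.xml.parsers.DocumentBuilderFactory;
    import javax.xml.transform.OutputKeys;
    import javax.xml.transform.Transformer;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.dom.DOMSource;
    import javax.xml.transform.stream.StreamResult;
    import org.w3c.dom.Document;
    import org.w3c.dom.Element;

    public class DynamicXmlDemo {
        public static void main(String[] args) throws Exception {
            // Each map holds the attributes of one task; not every task has every attribute
            Map<String, String> build = new LinkedHashMap<>();
            build.put("name", "build");
            build.put("priority", "high");
            Map<String, String> deploy = new LinkedHashMap<>();
            deploy.put("name", "deploy"); // no priority attribute for this task
            List<Map<String, String>> tasks = Arrays.asList(build, deploy);

            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder().newDocument();
            Element root = doc.createElement("tasks");
            doc.appendChild(root);
            for (Map<String, String> attrs : tasks) {
                Element task = doc.createElement("task");
                attrs.forEach(task::setAttribute); // inject only the attributes present
                root.appendChild(task);
            }

            // Serialize the DOM to a string
            Transformer tf = TransformerFactory.newInstance().newTransformer();
            tf.setOutputProperty(OutputKeys.OMIT_XML_DECLARATION, "yes");
            StringWriter out = new StringWriter();
            tf.transform(new DOMSource(doc), new StreamResult(out));
            System.out.println(out);
        }
    }
    ```

    The conditional part is just ordinary Java control flow over the attribute maps, which is essentially what the dynamic MarkupBuilder code was doing in Groovy, only more verbosely.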

    In short, what an ordeal! One may wonder whether all this was worth it. I wonder too. I believe learning Groovy will prove useful, so I plan to continue this exploration. For the moment, I could not measure the exact contribution of Groovy, Eclipse and Maven to this rather unpleasant experience.

    Keep in mind that I had a lot of difficulties with Eclipse itself, including problems with class and method name completion and conditional breakpoints that are sometimes buggy, but at least no crashes, not under Windows anyway. Under Linux, I have had endless crashes. Yet the alternatives to Eclipse are rather limited: NetBeans, which does not even handle the POM file of our project at Nuance properly; IntelliJ, whose free edition is crippled (you NEVER know when you will hit a roadblock demanding the paid version!!!); and then text editors like Notepad++, Emacs, Vi, etc. These editors are excellent, I have to admit, but they are not up to the task of managing a large Java or Groovy project with many classes.