Wednesday, May 19, 2010

Implementing Date Support with Quickfix using Xtext

Intro

Now that Xtext is at 1.0 RC1, I thought it was time to start using more of the new features for Eclipse b3. One of the features I wanted to add was support for time stamps in a nice way in the editor. Internally, a time stamp is naturally stored as a java.util.Date, so there is never a question about the exact UTC time it represents. When editing, however, you may want to use some other format (if you are not copying an actual timestamp, you may want to write something like 'feb 10, 11:00:00am').

The issue is that 'feb 10, 11:00:00am' in the source text carries no time zone information, the name of the month may not be in English, and so on. For the source to be valid everywhere, the date format and time zone used would have to be fully specified and stored in the source. I chose a middle ground where the editor understands the more human-friendly formats and offers to convert them to a format that can always be parsed.

All of this may not be all that interesting in itself, but it gave me the opportunity to try some features of Xtext that I had not used before. The rest of this post describes my first iteration of the implementation, and it shows some Xtext techniques like:

  • Using an ecore data type in the grammar
  • A Date value converter
  • Overriding the SyntaxErrorMessageProvider
  • Providing a quick fix for a ValueConverterException

The Grammar

The first step is to define the grammar rules that involve a time stamp:

import "http://www.eclipse.org/emf/2002/Ecore" as ecore
Entity : "timestamp" '=' TIMESTAMP ;
TIMESTAMP returns ecore::EDate : STRING ;

This simply declares that a language element 'Entity' has a 'timestamp'. The TIMESTAMP rule declares that it returns an ecore::EDate. Luckily we don't have to state more than the import of ecore to make use of it in our language. Also in our favour is that EDate is already declared in ecore; if this had been a data type not found in ecore, we would have needed to create a model containing the definition of the data type. As that is not the case here, we can move on to the value converter.
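
With this grammar in place, a source line using the fully specified timestamp format handled by the value converter below might look like this (the value shown is purely an illustration):

timestamp = "20100210110000+0000"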

Date Value Converter

This is almost boilerplate code, but there are some interesting details. Here is the converter method:
@ValueConverter(rule = "TIMESTAMP")
public IValueConverter<java.util.Date> TimestampValue() {
  return new AbstractNullSafeConverter<Date>() {

    @Override
    protected String internalToString(Date value) {
      SimpleDateFormat fmt = new SimpleDateFormat("yyyyMMddHHmmssZ");
      fmt.setTimeZone(TimeZone.getTimeZone("UTC"));
      return '"' + fmt.format(value) + '"';
    }

    @Override
    protected Date internalToValue(String string, AbstractNode node) throws ValueConverterException {
      // strip the surrounding quotes
      string = string.substring(1, string.length() - 1);

      // First choice: if it is a timestamp string, use it.
      try {
        // Allow non UTC strings since they are fully qualified with offset and can thus
        // be parsed by anyone.
        SimpleDateFormat fmt = new SimpleDateFormat("yyyyMMddHHmmssZ");
        fmt.setTimeZone(TimeZone.getTimeZone("UTC"));
        return fmt.parse(string);
      }
      catch(ParseException e) {
        // ignore and fall through to the default locale format
      }
      // Second choice - the java default format for the locale.
      // Needs special processing as it probably does not contain a TZ in the string.
      try {
        // try the default locale style of Date Time and see if it parses
        DateFormat.getDateTimeInstance().parse(string);
        // If this parsed, it is not likely that the default format is the full
        // format with timezone offset, so flag this as a special error :)
        // that is fixable.
        // Although simple, it makes sense from a user perspective; a time in
        // local format can be entered and transformed to a timestamp.
        throw new ValueConverterException("Not in timestamp format", node, new NonUTCTimestampException());
      }
      catch(ParseException e) {
        DateFormat fmt = DateFormat.getDateTimeInstance();
        String defaultFormat = (fmt instanceof SimpleDateFormat)
            ? ((SimpleDateFormat) fmt).toLocalizedPattern()
            : "the default format for the locale";
        throw new ValueConverterException("Not in valid format: Use 'yyyyMMddHHmmssZ' or " + defaultFormat +
            ". Parse error: " + e.getMessage(), node, null);
      }
    }
  };
}

The code first tries to convert the string entered by the user using the wanted timestamp format. If this fails, an attempt is made to parse it with the default format. If that works, we know the source text (most likely) does not have the correct time zone information in it, and we want to offer a quick fix to convert the format. But how can that be done? The ValueConverterException does not allow us to specify a 'diagnostic code' that a quick fix could use to detect this particular problem, and the ValueConverterException class is also final (in the 1.0RC1 release at least), so the only option is to use a marker exception as the cause (in this case NonUTCTimestampException).
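
The marker exception carries no information of its own. A minimal sketch of what it could look like (the exact package placement and class body are my own illustration, not taken from the b3 sources):

// Marker exception used only as the 'cause' of a ValueConverterException so that
// the syntax error message provider can recognize the non-UTC timestamp case.
public class NonUTCTimestampException extends Exception {
  private static final long serialVersionUID = 1L;
}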

The final attempt to convert (again using the preferred timestamp format) is there simply to catch the error (it could have been remembered from the first attempt).

As you will see later, the design can be improved further by supplying, in the marker exception, the actual format that was used to successfully parse the entered text, but I left that for a later iteration.

Note that the error message includes the two valid formats as feedback to the user in case the entered text was unparsable. It would be easy to try several formats.
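
For completeness: the @ValueConverter method above lives in the value converter service class for the DSL. A minimal sketch of how that is typically wired up in Xtext (the class name BeeLangTerminalConverters is my own placeholder, and I am assuming the standard Xtext base class and binding method rather than showing the actual b3 code):

// Declarative converter service that holds the TimestampValue() method shown above.
public class BeeLangTerminalConverters extends DefaultTerminalConverters {
  // ... TimestampValue() from above goes here ...
}

// In the DSL's runtime Guice module, bind the specialized service:
public Class<? extends IValueConverterService> bindIValueConverterService() {
  return BeeLangTerminalConverters.class;
}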

Overriding the Syntax Error Message Provider

The default SyntaxErrorMessageProvider is a class that hands out SyntaxErrorMessage instances describing a problem occurring in a particular context. In my case I just wanted to add handling of a ValueConverterException that has my special non-UTC exception as its cause.

Here it is

public class BeeLangSyntaxErrorMessageProvider extends SyntaxErrorMessageProvider {

  @Override
  public SyntaxErrorMessage getSyntaxErrorMessage(IValueConverterErrorContext context) {
    if(!(context.getValueConverterException().getCause() instanceof NonUTCTimestampException))
      return super.getSyntaxErrorMessage(context);
    return new SyntaxErrorMessage(context.getDefaultMessage(), IBeeLangDiagnostic.ISSUE_TIMESTAMP__NON_UTC);
  }
}

As you can see, this is straightforward: simply return a SyntaxErrorMessage with a diagnostic code (a static string) that I called IBeeLangDiagnostic.ISSUE_TIMESTAMP__NON_UTC. At this point, none of the new code (except the value conversion) is in effect, and a bit of magic is needed to make it kick in.
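
The diagnostic code itself is nothing more than a string constant. A minimal sketch (the constant's value below is my own placeholder):

public interface IBeeLangDiagnostic {
  // Issue code connecting the syntax error with its quick fix.
  String ISSUE_TIMESTAMP__NON_UTC = "org.eclipse.b3.issue.timestamp.non.utc"; // placeholder value
}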

Xtext makes good use of Google Guice dependency injection. In addition to standard Guice, there is also the more advanced, so-called 'polymorphic dispatching'. This means that even if it is not apparent from the Guice module Xtext generates for a DSL that something can be bound to a specialized class, it is still just as easy to bind almost anything by simply adding a method.

Here is the part that was added to the Guice module for my DSL:

public Class<? extends ISyntaxErrorMessageProvider> bindISyntaxErrorMessageProvider() {
  return BeeLangSyntaxErrorMessageProvider.class;
}

This means that whenever the Xtext runtime wants an implementation of the ISyntaxErrorMessageProvider, it will now get an instance of the specialized class shown earlier.

The Quick Fix

The final part of the puzzle is to provide the quick fix. There really is not much to say but to show the code:

@Fix(IBeeLangDiagnostic.ISSUE_TIMESTAMP__NON_UTC)
public void transformDate(final Issue issue, IssueResolutionAcceptor acceptor) {
  acceptor.accept(
    issue, "Convert to timestamp", "Converts the valid Date/Time to a fully specified time", null,
    new IModification() {
      public void apply(IModificationContext context) throws Exception {
        IXtextDocument xtextDocument = context.getXtextDocument();
        String dateString;
        dateString = xtextDocument.get(issue.getOffset(), issue.getLength());
        if(dateString.length() <= 2)
          return; // something is wrong, it should be at least ""
        dateString = dateString.substring(1, dateString.length() - 1);
        // try to convert and throw exception if it fails.
        Date date = DateFormat.getDateTimeInstance().parse(dateString);

        // reformat as timestamp using UTC
        SimpleDateFormat fmt = new SimpleDateFormat("yyyyMMddHHmmssZ");
        fmt.setTimeZone(TimeZone.getTimeZone("UTC"));
        dateString = '"' + fmt.format(date) + '"';

        xtextDocument.replace(issue.getOffset(), issue.getLength(), dateString);
      }
    });
}

This is pretty much boilerplate code for a quick fix (when generating a DSL with Xtext, there is a sample that shows how it is done). The code above simply parses the source string using the same default format as the value converter, turning it into a timestamp in the correct format. It then replaces the string in the input text.

An improvement would be to pass the date format used in the 'Issue' (it is possible to pass data with a diagnostic code), but I have not yet looked into how to do this with the SyntaxError class.

A big thank you to Sebastian Zarnekow at Itemis for pointing me in the right direction.

Sunday, May 9, 2010

The b3 aggregator

The Eclipse b3 Aggregator is based on and part of the Eclipse b3 project. Eclipse b3 provides a versatile and adaptable framework supporting build, assembly and deployment processes. It supports a rich set of use cases. One of those - the aggregation of repositories - is the focus of the b3 Aggregator tool.

The Eclipse b3 Aggregator combines repositories from various sources into a new aggregated p2 repository. It can also be configured to produce a hybrid p2/Maven2 repository. There are many situations where using aggregated repositories is a good solution; here are some examples:

  1. Projects want to provide convenient access to their products - Installation instructions requiring the user to visit several repos for a complete install are not uncommon. An aggregated repo for all those locations provides a convenient one-stop-shop strategy. The aggregation can perform mirroring of all consumed p2 repos or selectively provide indirection via a composite repo.
  2. Organizations or teams want control over internally used components - It may be necessary to have gated access to relevant/"blessed" p2 repos where an organizational "healthcheck" has been performed prior to internal distribution. Furthermore, internally used aggregated repos can provide a common basis for all organizational users (i.e. for both IDE distribution as well as for content used when building internal applications).
  3. Increase repository availability - by aggregating and mirroring what is used from multiple update sites into internally controlled servers.
  4. Distributed Development Support - an overall product repository is produced by aggregating contributions from multiple teams.
  5. Owners of a p2 repo for a given project may not be in a position to host all required or recommended components due to licensing issues - Buckminster's SVN support can serve as an example here, as it requires components available in the main Eclipse p2 repo as well as third-party components. Hence users have to visit several repos for a complete install.

The b3 Aggregator is focused on supporting these specific requirements, and it plays an important role in the full scope of the b3 project. The Aggregator is, however, also used in scenarios outside the traditional "build domain", and this is reflected in the user interface, which does not delve into the details of "building" and should therefore be easy to use by non-build experts.

Functional Overview

The b3 Aggregator performs aggregation and validation of repositories. The input to the aggregator engine (that tells it what to do) is a b3aggr EMF model. Such a model is most conveniently created using the b3 Aggregator editor, which provides both editing and interactive execution of aggregation commands. The editor is based on a standard EMF "tree and properties view" style editor where nodes are added and removed to form a tree, and the details of nodes are edited in a separate properties view. Once a b3aggr model has been created, it is possible to use the command line / headless aggregator to perform aggregation (and other related commands). (Note that since the b3aggr model is "just an EMF model", it can be produced via EMF APIs, transformation tools, etc., and thus supports advanced use cases.)
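
As an illustration of that last point, here is a minimal sketch of producing a .b3aggr resource programmatically using only standard EMF APIs; the b3-specific factory and class names are deliberately left as hypothetical placeholders in comments, since I am not showing the actual generated b3aggr API:

import org.eclipse.emf.common.util.URI;
import org.eclipse.emf.ecore.resource.Resource;
import org.eclipse.emf.ecore.resource.ResourceSet;
import org.eclipse.emf.ecore.resource.impl.ResourceSetImpl;
import org.eclipse.emf.ecore.xmi.impl.XMIResourceFactoryImpl;

public class CreateAggregationModel {
  public static void main(String[] args) throws Exception {
    ResourceSet resourceSet = new ResourceSetImpl();
    // Register a resource factory for the b3aggr file extension (standalone usage).
    resourceSet.getResourceFactoryRegistry().getExtensionToFactoryMap()
        .put("b3aggr", new XMIResourceFactoryImpl());
    Resource resource = resourceSet.createResource(URI.createFileURI("example.b3aggr"));
    // The root object would come from the generated b3aggr factory, e.g. (hypothetical):
    // Aggregation aggregation = AggregatorFactory.eINSTANCE.createAggregation();
    // resource.getContents().add(aggregation);
    resource.save(null);
  }
}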

The model mainly consists of Contributions (specifications of what to include from different repositories) and Validation Repositories (repositories that are used when validating, but that are not included in the produced aggregation, i.e. they are not copied). The model also contains specifications of various processing rules (exclusions, transformation of names, etc.) and of Contacts: individuals or mailing lists to inform when processing fails.

Here are some of the important features supported by the b3 Aggregator in Eclipse 3.6M7:

  • p2 and maven2 support — the aggregator can aggregate from and to both p2 and maven2 repositories.
  • Maven2 name mapping support — names in the p2 domain are automatically mapped to maven2 names using built-in rules. Custom rules are also supported.
  • Mirroring — artifacts from repositories are mirrored/downloaded/copied to a single location
  • Selective mirroring — the aggregator can produce an aggregation consisting of a mix of references to repositories and mirrored repositories.
  • Cherry picking — it is possible to pick individual items when the entire content of a repository is not wanted. Detailed picking is supported as well as picking transitive closures like a product, or a category to get everything it contains/requires.
  • Pruning — it is possible to specify mirroring based on version ranges. This can be used to reduce the size of the produced result when historical versions are not needed in the aggregated result.
  • Categorization — categorization of installable units is important to the consumers of the aggregated repository. Categories are often chosen by repository publishers in a fashion that makes sense when looking at a particular repository in isolation, but when repositories are combined it can be very difficult for the user to understand what the categories relate to. An important task for the constructor of an aggregation is to organize the aggregated material in an easily consumable fashion. The b3 aggregator has support for category prefixing, category renaming, addition of custom categories, as well as adding and removing features in categories.
  • Validation — the b3 aggregator validates the aggregated result to ensure that everything in the repository is installable.
  • Blame Email — when issues are found during validation the aggregator supports sending emails describing the issue. This is very useful when aggregating the results of many different projects. Advanced features include specifying contacts for parts of the aggregation, which is useful in large multi-layer project structures where issues may relate to the combination of a group of projects rather than one individual project; someone responsible for the aggregation itself should be informed about these cross-project issues. The aggregator supports detailed control over email generation, including handling of mock emails when testing aggregation scripts.

Documentation

The b3 Aggregator documentation is available on the Eclipse Wiki.

Wednesday, May 5, 2010

Migrating b3 from Xtext 0.8 to 1.0 nightly > M6

Until there is migration documentation, my experiences of migrating the Eclipse b3 project from Xtext 0.8 (~M4) to a 1.0 nightly (> M6) may be of value to others. I did this by first migrating to M6 and then to the nightly, so that I had a state to roll back to in case the nightly failed me completely.

Migrating to 1.0 M6 version

Merge of o.e.xtext.ui.common and o.e.xtext.ui.core into o.e.xtext.ui

Almost everything that was in either ui.common or ui.core is now simply in ui - all that is needed is to change the imports and to update any dependencies on the two merged bundles to the new bundle. (See below for some additional changes.)
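
For example, if the bundle manifest uses Require-Bundle, the dependency change could look roughly like this (a sketch; version constraints and your other entries are omitted):

Require-Bundle: org.eclipse.xtext.ui.common,
 org.eclipse.xtext.ui.core

changed to

Require-Bundle: org.eclipse.xtext.ui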

Label Provider

The label provider changed more than just being moved to ui.

import org.eclipse.xtext.ui.common.DefaultLabelProvider;
changed to

import org.eclipse.xtext.ui.label.DefaultEObjectLabelProvider;


and my label provider is now derived from this class instead.
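
A minimal sketch of what such a label provider can look like after the change (the class name BeeLangLabelProvider and the commented-out label method are my own illustration, not the actual b3 code; depending on the exact Xtext version the superclass may also want a delegate injected via a constructor):

import org.eclipse.xtext.ui.label.DefaultEObjectLabelProvider;

public class BeeLangLabelProvider extends DefaultEObjectLabelProvider {

  // Label methods are picked up by polymorphic dispatch, e.g. a hypothetical
  // method returning the text for some model element type:
  // String text(Entity entity) {
  //   return "Entity: " + entity.getName();
  // }
}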

Proposal Provider

References to
import org.eclipse.b3.ui.AbstractBeeLangProposalProvider;
changed to

import org.eclipse.b3.ui.contentassist.AbstractBeeLangProposalProvider;


New Structure

After running the MWE workflow, I got 4 new packages in my DSL project with the suffix ".ui". These packages contained the corresponding classes found in the existing packages without the ".ui" suffix. I moved/merged my code over to the new packages and deleted the old ones.

UI Module change

The UIModule for my DSL had to change to the following signature and constructor:

public class BeeLangUiModule extends org.eclipse.b3.ui.AbstractBeeLangUiModule {

  public BeeLangUiModule(AbstractUIPlugin plugin) {
    super(plugin);
  }
}


The new structure has the UIModule in a new package (with ".ui" suffix), and the Activator (also in a new package "...ui.internal") uses the UIModule in this new package.

Mwe workflow change

I had to replace the JavaScopingFragment with the ImportURIScopingFragment in the workflow:

<fragment class="org.eclipse.xtext.generator.scoping.JavaScopingFragment"/>

changed to

<fragment class="org.eclipse.xtext.generator.scoping.ImportURIScopingFragment"/>

since the JavaScopingFragment no longer exists. I don't know if the ImportURIScopingFragment is what I want, but I had to pick one.


Converting Java Strings

Strings.convertFromJavaString now has an extra boolean argument, useUnicode, which should be set to true to process \uXXXX escapes (I did set it to true). I use this method in some terminal converters.
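
In a terminal converter the call then changes along these lines (just a sketch; the surrounding converter code is not shown):

// before:
// String value = Strings.convertFromJavaString(string);
// after, with useUnicode = true so that \uXXXX escapes are processed:
String value = Strings.convertFromJavaString(string, true);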


Changes in plugin.xml

A manual merge of all changes from plugin.xml_gen into my plugin.xml (basically changes related to the use of "ui" in package names) was required.


Manifest change

The manifest file needed an update since the activator is now in a different package:

Bundle-Activator: org.eclipse.b3.ui.internal.BeeLangActivator

(using "ui" in the package name)

Migrating to latest nightly

The 42 Easter Egg

The method

protected void configureImportantInformation(IEditStrategyAcceptor acceptor)

has been dropped from DefaultAutoEditStrategy. It was only there to block an easter egg (typing 42 displayed a funny comment about 'the meaning of life'), but the easter egg and the method both seem to be gone in the nightly.

Serialization

I have not done much with b3 serialization yet, so the required changes were small. I only needed to add a single method:

public class BeeLangGrammarSerialization implements ITransientValueService

needs an implementation of the method

public boolean isCheckElementsIndividually(EObject owner, EStructuralFeature feature)

I added one that returns false, which hopefully is the same as the default.
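
Concretely, the added method is trivial (a sketch based on the description above):

public boolean isCheckElementsIndividually(EObject owner, EStructuralFeature feature) {
  // Returning false here, which hopefully matches the previous default behavior.
  return false;
}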

Guice from Orbit

com.google.guice (from Itemis) changed to com.google.inject from Orbit (dependencies changed, and my launch configurations needed to be updated).

Open Issues

I had attached commands to the popup menu that should appear over the editor's outline (and they did in 0.8 M4), but this stopped working and I was waiting on some wisdom from the Xtext gurus...
It turned out to be a temporary issue with plugin.xml changes not taking effect. After a restart and a clean build it now works just like before.

Syntax highlighting has changed, and I was trying to figure out how it works now... It stopped working because I forgot to move things over to the new UIModule (as described above).

Summary

All in all, the migration was quite painless. Knowing that changes were going to take place in several of the services, I have only minimal implementations in many places (the default, or just a few lines to fix something glaring) while waiting for the 1.0 release and new documentation. If you have a lot of code and are using everything "to the hilt" in 0.7.2, you may want to wait for the official release and the documentation.

I will update this article as I find more things that need to be changed, or if I find that I changed something in error.


Tuesday, May 4, 2010

Buckminster 3.6 New & Noteworthy

The Helios release of Buckminster has the following new and noteworthy features available in 3.6M7:

  • Support for Git - uses, updates, or clones git repository as needed
  • Headless JUnit and EclEmma (code coverage) support
  • Comprehensive documentation available - introduction, examples, and reference. Download PDF, 250 pages, includes descriptions of the new features described here.
  • Graphical dependency visualizer - resolutions can be viewed and navigated/filtered with a Zest based viewer
  • Much improved target platform support - using new features in PDE to automatically manage/materialize target platform config
  • Provisioning and management of API baseline
  • New EMF based editors for MSPEC and RMAP - much easier to use than editing XML
  • Reader type for Project Set (.psf) files - makes it easy to integrate or migrate projects that are using .psf files to describe where the source is
  • p2 repository size reduction to 1/3 using improved pack200 support
  • OmniVersion support - the support for non-OSGi versions has been changed to use the p2 OmniVersion implementation for increased flexibility - backwards compatible with the Buckminster version-type/version scheme used in earlier versions.
  • Qualifier generator using Build Identifier - use a property to control the content of a version qualifier
  • LDAP style filters on RMAP providers, CQUERY advisors, and MSPEC nodes - makes it possible to parameterize more things, reduces the need for multiple slightly different copies of these files.
  • Smart version range generation for feature 'includes' - heuristics result in natural choices
  • Support for category.xml files - the new PDE mechanism for categorizing the result in a p2 repository is supported
  • Headless 'install JRE' support
  • Better defaults often render the MSPEC unnecessary - automatic materialization to the Target Platform for binaries often removes the need to use an MSPEC.
  • Using new p2 API, p2 'pure' reader, and using separate p2 agent - reduces risk of contamination of the running instance's p2 data.

Monday, April 26, 2010

Eclipse Build Systems in Perspective

It is easy to get confused over the "build system options" available when developing with Eclipse - there are JDT and ANT, PDE (interactive), and PDE/Build (headless) - to start with the "classics". Then there is Athena, an elaboration on the also classic 'releng' system used to build Eclipse itself, and then the newer Buckminster. The two latest additions to the family of build-related projects at Eclipse are Eclipse b3 (2009) and Tycho (proposed 2010).

With this blog post I want to put Eclipse build technology in perspective.

So, what do all these technologies do?

The classics

JDT - builds java source, interactively, under control of the project and workspace preferences.

PDE - builds OSGi components and the Eclipse specific extensions (features, products, etc.) under the control of PDE preferences. PDE consists of both the interactive parts (incremental builds, export commands, dialogs etc.), and the headless logic that performs the actual work. This logic is also made available as ANT tasks.

PDE/Build - generates ANT scripts that are then executed in headless fashion to perform the build. The generated ANT scripts make use of the same ANT tasks used by the interactive PDE.

If we stop there for a moment - this was the level of support for building provided by Eclipse a few years back. Although PDE does a very good job of building things interactively, it is fair to say that the headless PDE/Build has been the source of much pain and frustration. To complete the picture: at this time there was also the "releng" system in use at Eclipse, which only ran on the Eclipse servers.

Improving on PDE

While PDE/Build has more or less remained unchanged, there have been many different approaches to providing good support for headless builds. Some took the scripting route, improving on the releng base builder (Athena), some wrote their own scripts, others started modeling PDE (like this project in the PDE incubator), and some wrote better and easier to use script generators like pluginbuilder.

Eclipse Buckminster

When the Eclipse Buckminster project was introduced in 2005 it focused on additional things: when using generated scripts and technologies external to the IDE, there are often issues caused by differences in how things are built (i.e. it works just fine in the IDE but breaks in the build). An important goal for Buckminster was (and still is) to provide exactly the same build interactively as on the servers. This means that Buckminster has a tight integration with the builders running in a workspace, made available in an efficient packaging for headless execution. Another important design decision was to use the existing information (i.e. the metadata used by PDE in the IDE) without generation of scripts and without round-trip engineering, instead using advice/decoration to modify discovered metadata when the original information is not enough.

In addition to the important philosophical difference, Buckminster also provides unique support for materialization of a workspace, automatic provisioning of a target platform, running JUnit tests, EclEmma code coverage, Hudson integration, and much more.

Note that when Buckminster builds PDE related material (Buckminster can build other things as well), it calls on the same PDE logic that is used when building with interactive PDE.

Tycho

Tycho is a set of Maven plugins that provide building of OSGi and Eclipse related components (features, plugins, RCP products, etc.) and is an alternative to PDE suitable for those that have a Maven centric setup. Tycho does not use the original PDE logic.

Eclipse b3

The b3 project is about making it easy to work with build systems: discovering and modernizing or integrating existing build systems should be just as easy as building metadata-rich components interactively or in continuous integration fashion. Eclipse b3 starts at the opposite end of the spectrum from "which compiler to use" or which metadata dialect is used to describe components.

Eclipse b3 does this by providing EMF based models for build (i.e. components, their relationships, versions, types, etc.), expressions (i.e. the processing in the form of tasks, builders, compilers, etc.), and p2 (with support for aggregation, re-categorization, mirroring, maven metadata publishing, and more). It also provides a concrete syntax in the form of a DSL implemented with Xtext, which gives a rich text editing environment, and an evaluator that makes it possible to run b3 scripts.

As an example, there is nothing in Eclipse b3 that restricts it to using the original PDE logic to build the PDE related artifacts; it could just as well make use of Tycho's alternative way of building the same things.

The very first way Eclipse b3 will be building things is by using Buckminster as the execution engine. A small and easy to understand b3 script will drive the entire build — combining the ease of use of the b3 DSL with the proven, stable builds provided by Buckminster.

Monday, April 19, 2010

Eclipse b3 - a success at Eclipsecon

As you may have seen, the Eclipse b3 project is about creating a new generation of Eclipse technology to simplify software build and assembly. There has been lots of activity since the project was created, and there was much interest in b3 at Eclipsecon - so, here is a status update.

In case you did not know: a seminar on b3 was held at Eclipse Summit Europe 09 where the initial ideas were presented and discussed. A lot of new ideas about how people want to work with builds were generated, and there was lots of positive feedback on the original ideas. As always, some darlings were also completely killed in the process (the message that XPath queries are anything but easy to understand was received).

The feedback told us that these things are important:

  • Ease of use
  • Based on Modeling
  • Debuggable
  • Flexible / Extensible

Armed with all that input, the time between ESE and Eclipsecon 2010 was devoted to developing a version of b3 that demonstrates the ideas, with a focus on ease of use, while not sacrificing flexibility or the capability to deal with real-world complexities when building. At Eclipsecon we reached the first milestone of b3, consisting of:

  • A build ecore model
  • A process/expression ecore model
  • A p2 ecore model with aggregation and rewrite support
  • A concrete syntax implemented with Xtext (i.e. a feature-rich Eclipse editor and much more).
  • An evaluator (i.e. making it possible to run the build scripts).
  • Documentation of the concrete syntax
  • A website with links to all b3 related information (documentation, for developers, etc).

The b3 interest at Eclipsecon 2010 was huge - the room was packed, not everyone could get in, and of those that did, close to 80% liked the presentation (i.e. voted +1). To the two individuals who voted -1: I am hoping you put some comments on your votes so I know what you did not like (it simply has to be 'lack of chairs' :)).

For the next milestone (around the Helios release) we are adding concrete things to b3:

  • Use a b3 script to drive a Buckminster build
  • Publish b3 build units to a p2 repository - i.e. author installable units

As always, I would love to hear your questions and comments. I have already received quite a few questions regarding the relationship between b3 and other Eclipse-related build technologies (Buckminster, Athena, PDE/Build, and now the Tycho proposal), and that will be the topic of my next blog post about b3.

Monday, May 11, 2009

"Chester the test-data molester" comes to town

Introduction

"Chester the test-data molester" is a http test server that delivers various error scenarios consistently.

In the p2 project we have the need to test various communication error scenarios, such as when a web server reports illegal last-modified dates, reports the wrong file size, redirects endlessly, hits internal server errors, or when the connection is made via a "hotel style" payment service, etc. This was a real pain to set up in an ad-hoc manner, so I decided to create a small Equinox-based HTTP test server that is now ready for use.

The testserver has already been invaluable in finding transport-related issues. I thought it was worth writing this short introduction as it may help others test and fix error reporting issues in RCP apps with a custom p2 user interface, those that need to replicate p2 problems where for security or practical reasons it is not possible to access the real repositories, as well as those that simply need an HTTP server that can create various error scenarios consistently.

How to get it

The testserver resides in the p2 CVS repository - org.eclipse.equinox.p2.testserver. To use it you need to check it out, as well as the org.eclipse.equinox.http bundle. (If you use the p2 team project sets in the p2 releng project you will get everything you need). There is a launch configuration in the testserver project that starts the testserver on "localhost:8080". (You can change the port in the launch configuration if you want).

Basic Services

The testserver starts some basic testing services on the following paths:

  • /timeout[/anything] - will wait 10 minutes and then produce no response
  • /status/nnn[/anything] - returns html content with the http response status code set to nnn, e.g. /status/500 for an internal server error
  • /redirect/nnn[/location] - redirects nnn times and then redirects to location (a path on the testserver). If no location is given, an HTML page with a message is generated as the final redirect. Examples:
    • /redirect/3/status/500 - redirects 3 times and then generates a 500 - internal error.
    • /redirect/3 - redirects 3 times and produces a message
    • /redirect/30 - redirects 30 times, and will trigger "too many redirects error" in most configurations
  • /never[/anything] - has basic authentication turned on, but will not accept any username/password as valid.
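
As a quick smoke test that the server is up, any of the paths above can be exercised with a few lines of standard Java (this assumes the launch configuration mentioned earlier is running on localhost:8080):

import java.net.HttpURLConnection;
import java.net.URL;

public class TestServerSmokeTest {
  public static void main(String[] args) throws Exception {
    URL url = new URL("http://localhost:8080/status/500");
    HttpURLConnection connection = (HttpURLConnection) url.openConnection();
    System.out.println("Response code: " + connection.getResponseCode()); // expected: 500
    connection.disconnect();
  }
}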

Content Delivery

The testserver also has content available - there is an index.html in the project, as well as some p2 repository test data. The testserver has also "mounted" the eclipse updates 3.4 repository on different paths, with different types of wrappers/"molestors" in place.

This set of paths goes to the testserver bundle's web content - i.e. index.html and some p2 test data:
  • /public/... - normal access
  • /private/... - same as public but requires login with "Aladdin" and password "open sesame"
  • /truncated/... - truncates files by delivering less content than stated in length e.g. /truncated/index.html
  • /molested/... - returns garbage instead of real content in the later part of the file e.g. /molested/index.html
  • /decelerate/... - delivers content chopped up in small packets with delays, e.g. /decelerate/index.html (interesting to watch in Firefox, which renders content as it comes in).

This set of paths has mounted http://download.eclipse.org/eclipse/updates/3.4 with various "molestors":
  • /proxy/private/... - requires login with "Aladdin" and password "open sesame"
  • /proxy/public/... - unmolested access (useful in redirects)
  • /proxy/decelerate/... - chops up content and delays delivery
  • /proxy/decelerate2/.... - chops up content and delays the last 20% of the delivery
  • /proxy/truncated/... - truncates all files
  • /proxy/molested/... - generates gibberish for later part of all files
  • /proxy/modified/... - delivers various errors in "last modified" (see below)
    • .../zero/... - all times are returned as 0
    • .../old/... - all times are very old
    • .../now/... - all times are the same as the request time
    • .../future/... - all times are in the future (which is illegal in HTTP)
    • .../bad/... - the time is not a date at all - the client should throw an error
  • /proxy/length/... - delivers various content length errors (see below)
    • .../zero/... - length is reported as 0 (but all content written to stream)
    • .../less/... - less than the correct size is reported (all content written)
    • .../more/... - double the correct size is reported (but only available content is written)
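
The paths that require authentication can be exercised with plain HTTP Basic authentication; here is a minimal sketch using only standard JDK classes (the path below is one of the protected ones listed earlier, and the server is assumed to be running on localhost:8080):

import java.net.Authenticator;
import java.net.HttpURLConnection;
import java.net.PasswordAuthentication;
import java.net.URL;

public class PrivateAccessExample {
  public static void main(String[] args) throws Exception {
    // Supply the credentials that the /private/... paths expect.
    Authenticator.setDefault(new Authenticator() {
      protected PasswordAuthentication getPasswordAuthentication() {
        return new PasswordAuthentication("Aladdin", "open sesame".toCharArray());
      }
    });
    URL url = new URL("http://localhost:8080/private/index.html");
    HttpURLConnection connection = (HttpURLConnection) url.openConnection();
    System.out.println("Response code: " + connection.getResponseCode()); // 200 when the login is accepted
    connection.disconnect();
  }
}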

Get in touch

I had fun writing this - I mean, how often do you get to write classes called "Molestor"? :) I hope you find the testserver useful, and if you would like it to perform other forms of content maiming and mutilation, please contribute by submitting patches to the "p2" project, marking the issues with the text [testserver].