Tuesday, March 10, 2009

Enabling Peer-to-Peer BitTorrent Downloads with Azureus

by Jacobus Steenkamp
06/22/2007
The type of traffic distribution on the Internet today is quite different from the type you might have encountered only a few years ago. In the past, the vast majority of internet bandwidth was used to transfer character streams (in most cases HTML) over either HTTP or HTTPS. This trend has changed over the past few years, with a great deal of bandwidth (33 to 50 percent by some estimates) now being used to distribute large files over peer-to-peer connections. BitTorrent is one of the more popular protocols being used for peer-to-peer file transfers, and enabling your Java applications to use this protocol has never been easier.

Peer-to-peer networks rely primarily on the bandwidth and hardware of the participants in the network rather than on a relatively small set of centralized servers. It is, therefore, much cheaper in terms of bandwidth and energy costs for the content provider to distribute large files using a peer-to-peer network rather than through the traditional client-server approach. There are already quite a few examples of peer-to-peer networking being put to use in the industry:

Blizzard's World of Warcraft game uses the BitTorrent protocol to send game updates to clients.
The BitTorrent protocol is often used to distribute free and open source software. OpenOffice.org and popular Linux distributions often offer the option of downloading their software using BitTorrent.
The BBC has recently announced that it will be making hundreds of episodes available over peer-to-peer file sharing networks. By opting to use a peer-to-peer paradigm to distribute its content, the BBC can reach a large audience without the need to invest vast amounts of money in building a server infrastructure.
The BitTorrent Protocol
The BitTorrent Protocol, which was designed and first implemented by Bram Cohen in 2001, is arguably the most popular and efficient peer-to-peer protocol currently in use.

To start sharing a file (or set of files) using BitTorrent, the first peer, or initial seeder, creates a torrent file that contains all the metadata required by clients to start downloading the shared file. This typically includes the name of the shared file (or files), the number of pieces the file has been broken down into, the checksum of each of the pieces, and the location of the tracker server, which serves as a central point that coordinates all the connected peers. Unlike the rest of the traffic in a BitTorrent peer group (or swarm), communication with the tracker server is usually performed over HTTP.

Given a torrent file, a BitTorrent client would typically start off by connecting to the tracker server and getting the details of all other peers on the network. It would then start requesting pieces of the shared file from the rest of the swarm and use the checksum values in the torrent file to validate the received data. This BitTorrent process is very nicely illustrated on Wikipedia.
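To make the validation step concrete, here is a minimal, Azureus-independent sketch of how a client might check a single downloaded piece against the 20-byte SHA-1 digest stored in the torrent metadata; the class and method names are illustrative only:

import java.security.MessageDigest;
import java.util.Arrays;

//Illustrative helper: a piece is accepted only if the SHA-1 hash of the
//received bytes matches the digest recorded for it in the torrent file.
public class PieceValidator {

    public static boolean isPieceValid(byte[] pieceData, byte[] expectedSha1) throws Exception {
        MessageDigest sha1 = MessageDigest.getInstance("SHA-1");
        byte[] actualDigest = sha1.digest(pieceData);
        return Arrays.equals(actualDigest, expectedSha1);
    }
}

A client that detects a mismatch simply discards the piece and requests it again from another peer.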

Azureus
Due to the openness of the BitTorrent protocol, numerous compatible BitTorrent clients have been implemented in a variety of programming languages and computing platforms. Out of all the options out there, Azureus, which is implemented using Java and SWT, has proven itself to be one of the more popular and feature-rich clients available. In fact, Azureus is the second most downloaded application on SourceForge's all-time top downloads list. One can argue that Azureus's popularity makes it one of the most successful consumer-targeted Java desktop applications in the world.

In addition to being a great BitTorrent client, Azureus also contains functionality to create torrent files and to set up a tracker server and an initial seeder. In the rest of this article we will look at how you can leverage these features in your own applications and take advantage of the cost benefits that peer-to-peer file distribution offers.

Getting Started with the Azureus API: A Simple Torrent File Downloader
In this section, we are going to implement a simple command-line application based on the Azureus API (or engine) to download a data file using the BitTorrent protocol. The URL of the torrent file will be passed in at the command line.

public class SimpleStandaloneDownloader {
    ...
    private static AzureusCore core;
    ...
    public static void main(String[] args) throws Exception {

        //Set the default root directory for the Azureus engine.
        //If not set, it defaults to the user's home directory.
        System.setProperty("azureus.config.path", "run-environment/az-config");
        ...
        String url = null;
        ...
        url = args[0];
        ...
        core = AzureusCoreFactory.create();
        core.start();
        ...
        System.out.println("Attempting to download torrent at : " + url);

        File downloadedTorrentFile = downloadTorrentFile(new URL(url));

        System.out.println("Completed download of : " + url);
        System.out.println("File stored as : " + downloadedTorrentFile.getAbsolutePath());

        File downloadDirectory = new File("downloads"); //Destination directory
        if (downloadDirectory.exists() == false) downloadDirectory.mkdir();

        //Start the download of the torrent
        GlobalManager globalManager = core.getGlobalManager();
        DownloadManager manager = globalManager.addDownloadManager(downloadedTorrentFile.getAbsolutePath(),
                downloadDirectory.getAbsolutePath());

        DownloadManagerListener listener = new DownloadStateListener();
        manager.addListener(listener);
        globalManager.startAllDownloads();
    }
}
The singleton AzureusCore instance is the central axis on which the whole Azureus API revolves. After creating it (using the AzureusCoreFactory) and starting it, you are ready to start using its functionality. It should be noted that AzureusCore spawns its own Threads internally and generally runs asynchronously to the rest of the application.

After the torrent file has been downloaded from the passed-in URL using the downloadTorrentFile() method, the torrent is submitted to Azureus's GlobalManager instance, which is responsible for managing downloads. The DownloadManager that gets returned by the addDownloadManager() method can be used to retrieve a wealth of statistics on the download, including the data send rate and the number of connected peers. In this example we have registered a DownloadManagerListener instance (implemented by the DownloadStateListener class) to track when the torrent data file has started downloading and to print the completed percentage to the command line.

private static class DownloadStateListener implements DownloadManagerListener {
    ...
    public void stateChanged(DownloadManager manager, int state) {
        switch (state) {
            ...
            case DownloadManager.STATE_DOWNLOADING:
                System.out.println("Downloading....");
                //Start a new daemon thread to periodically check
                //the progress of the download and print it out
                //to the command line
                Runnable checkAndPrintProgress = new Runnable() {

                    public void run() {
                        try {
                            boolean downloadCompleted = false;
                            while (!downloadCompleted) {
                                AzureusCore core = AzureusCoreFactory.getSingleton();
                                List managers = core.getGlobalManager().getDownloadManagers();

                                //There is only one download in the queue.
                                DownloadManager man = (DownloadManager) managers.get(0);
                                System.out.println("Download is " +
                                        (man.getStats().getCompleted() / 10.0) +
                                        " % complete");
                                downloadCompleted = man.isDownloadComplete(true);
                                //Check the progress every 10 seconds
                                Thread.sleep(10000);
                            }
                        } catch (Exception e) {
                            throw new RuntimeException(e);
                        }
                    }
                };

                Thread progressChecker = new Thread(checkAndPrintProgress);
                progressChecker.setDaemon(true);
                progressChecker.start();
                break;
            ...
        }
    }

    public void downloadComplete(DownloadManager manager) {
        System.out.println("Download Completed - Exiting.....");
        AzureusCore core = AzureusCoreFactory.getSingleton();
        try {
            core.requestStop();
        } catch (AzureusCoreException aze) {
            System.out.println("Could not end Azureus session gracefully - " +
                    "forcing exit.....");
            core.stop();
        }
    }
    ...
}

GMF: Beyond the Wizards

by Jeff Richley
07/11/2007
In today's development environment, users expect to be able to visualize data, configuration, and even the processes of a system. For this reason, they use tools to communicate requirements visually with stakeholders and subject matter experts. Think for a moment about UML: it takes a very complex set of data and represents it visually to simplify the communication of software requirements and design. Likewise, there are potential visual tools for describing workflows, data mining, server management, and many other business processes. These tools are able to boost productivity and reduce cost, which is obviously a win-win situation.

Historically, writing these tools has been very time consuming and reserved for those GUI gurus that are well above mere mortals. However, that barrier has been broken down for us by the folks working on the Eclipse Graphical Modeling Framework (GMF).

You may be wondering, "What is GMF and what can it do for me?" GMF is a framework that takes a set of configuration files (a domain model, a graphical definition, and a tool definition), puts them all in a blender, and, poof, magic: out comes a professional-looking Eclipse plug-in. Not only does it generate most of the functionality that you have designed, it also gives you many freebies such as printing, drag-and-drop, save to image, and customization. Once you have completed the plug-in and all of its handy features, you can then distribute it to your user base for widespread use. There are even features of the Eclipse Plug-in Development Environment (PDE) for creating a distribution site that will help with the nightmare of keeping all of those clients up-to-date.

If you've done any UI programming at all, you realize just how much feature-prone (read: bug-prone) coding this eliminates. All of the MVC setup, layout management, property listeners, and the like are generated for you. The mundane, cookie-cutter work is generated, which allows you to concentrate on the fun and creative parts of your projects.

Tutorials that show you how to get started with GMF jump right into the wizards that are provided as part of the SDK. The wizards and dashboard that are used to develop GMF applications are very powerful. With the exception of your data model, all of the configuration files can be generated from wizards. I am all for wizards, but I tend to go by the motto "Don't generate what you don't understand." Let's take a look under the covers of the wizards, in particular, ecore, gmfgraph, gmftool, and gmfmap.

The domain model, ecore/genmodel files, is the starting place for development of most Eclipse-based applications. The basic development pattern for EMF is to model your domain objects and have EMF generate your entire model code base, including beans and glue code. EMF is not discussed in depth in this article, but resources are listed at the end.

The graphical and tooling definitions are straightforward. The graphical side is a list of figures, described in gmfgraph files, which will be used in the diagram to display classes from the domain model. The gmftool file is a tooling definition that defines what text you want to display on the tool palette and the button's tool tip.

The final step is to tell GMF how all of these pieces work together by creating a gmfmap file. This is the glue that brings the other three configuration files together by telling GMF what action to take when a tool is selected, what classes are to be created, and what figures to render when those classes are added to the diagram. Once everything is wired together, generate a gmfgen file and application code, fire up a test instance of Eclipse, and test out your new application.

Now that we have talked about what GMF applications are and have a general idea of the steps involved in making them, let's take a look at a sample application that models managing a coffee shop. The beginning functionality allows you to add managers and employees, as well as associate a manager to the employees that she is responsible for. This is a fairly handy little tool, but it would be even better if we could add coffee machines to the shop. After all, this is a coffee shop and we need to make hot dirty brown water, right?

Let's fire up Eclipse to see the original plug-in and then we will add a coffee machine into the mix. Once you have added the projects to Eclipse, run the sample application (see Figure 1).


Figure 1. Running an Eclipse plug-in

Create a new coffee shop diagram by selecting File->New->Other->Examples->Coffee Diagram. This will give you a brand new diagram to play around with (Figure 2). Go ahead, add a manager or two, a few employees, and wire the managers with their employees. Once you have created a diagram, save it — in fact, keep it for later when you have wired in the coffee machines.


Figure 2. Sample coffee shop diagram

Now that you have the original set up and working, let's add the ability to create instances of the CoffeeMachine class. The steps for adding a creation action will be:

Define the figure for display
Define the creation tool for the tool palette
Map the creation tool, display figure, and the backing model class
Defining the Figure for Display
Let's first look at creating figures for displaying the CoffeeMachine for your store. Open the coffee.gmfgraph file and poke around to see what is inside (Figure 3). There are four main types of elements in the hierarchy that you need to understand:

Figure Gallery: Shapes for the application
Nodes: Graphical representations of the domain model
Diagram Labels: Labels for the Nodes that give helpful feedback to the user
Connections: Lines denoting relationships between graphical elements

Figure 3. View of the coffee.gmfgraph file

The first step in defining the diagram is to create a figure for the editor to use. Right-click on the Figure Gallery and select New Child->Rectangle (or any other shape that suits your fancy). Select the newly created Rectangle and look at the Properties view. The one line item that must be filled in, at this point, is the Name field. Let me give you a sage word of advice when it comes to naming elements: make sure you name your elements so that they are easily identifiable. One mistake that I made was to use vague names that looked very similar to other elements. You will be very happy in the mapping phase if you stay consistent. One naming convention that I typically use is to suffix the model element's name with Figure or Diagram. Pick a method that works for you, but once picked, stick with it.

For a good user experience, we would like a figure label to tell what type of model is being displayed. To add a label that shows that the rectangle is actually a coffee machine, right-click on the CoffeeMachineFigure that you just created and select New Child->Label. In the Properties view, give the new Label a name; sticking with the naming convention, it would be something like CoffeeMachineFigureLabel. The Text field denotes what will be displayed on the figure label when it is drawn in the editor. Enter a phrase that would help your user know that it is a coffee machine, such as "<- Coffee Machine ->". Once again, pick a standard way of denoting figures and stick with it; this will go a long way for your users.

In order for GMF to display a model's representation in a diagram, there needs to be a Node to map it to. Create a Node by right-clicking the Canvas and selecting New Child->Nodes Node. This configuration is very straightforward; give it a name and select the figure you want it to use when displaying.

The next step is to make a Diagram Label node. This element will add text to a diagram figure for user feedback. Right-click on the Canvas and select New Child->Labels->Diagram Labels. There are two properties to complete here: Name and Figure. Sticking with our naming conventions, name the new Diagram Label CoffeeMachineDiagramLabel. The Figure is the element from the Figure Gallery to use for display. Select the CoffeeMachineFigureLabel from the drop down list.

There you have it, a finished gmfgraph definition file for adding a CoffeeMachine to a diagram.

Introduction to JavaFX Script

by Anghel Leonard
08/01/2007
What Is JavaFX?
In the spring of 2007 Sun released a new framework called JavaFX. This is a generic name because JavaFX has two major components, Script and Mobile, and, in the future, Sun will develop more components for it.

The core of JavaFX is JavaFX Script, which is a declarative scripting language. It is very different from Java code, but has a high degree of interoperability with Java classes. Many JavaFX Script classes are designed to make implementing Swing and Java 2D functionality easier. With JavaFX Script you can develop GUIs, animations, and cool effects for text and graphics using only a few straightforward lines of code. And, as a plus, you can wrap Java and HTML code in JavaFX Script.

The second component, JavaFX Mobile, is a platform for developing Java applications for portable devices. It will eventually be a great platform for JavaFX Script, but for now is largely irrelevant to the content of this article.

Some Examples of JavaFX Applications
Before we start learning a new language, let's see some examples of JavaFX code. A good resource for examples can be found at the official JavaFX site. To download the examples, please click on JavaFX Script 2D Graphics Tutorial. After the download is complete just double-click the tutorial.jnlp file. In a few seconds you should see something like Figure 1 (if you don't see this image, then you have to configure Java Web Start for the .jnlp extension).


Figure 1. Running the tutorial.jnlp tutorial

Take your time looking over these examples and the source code. There are many interesting effects that can be obtained with just a few JavaFX lines.

If you are still skeptical about the utility of JavaFX, take a look at these two demos; they are partial re-creations of the StudioMoto and Tesla Motors sites. You can download the demos from Project OpenJFX by clicking JavaFX Script Studiomoto Demo and JavaFX Script Tesla Demo. They require Java Web Start in order to run, but depending on your machine configuration they may start automatically, or you may have to find and run the downloaded .jnlp file.

Download and Install JavaFX
If you are interested in learning to develop JavaFX applications, then you should know that there are at least three methods for working with JavaFX. Also, it is important to know that JavaFX applications are not browser-based. The simplest and quickest method is based on a lightweight tool called JavaFXPad. The major advantage of using this tool is that you can almost immediately see the effect of the changes you are making in the editor. You can download this tool from Project OpenJFX by clicking JavaFX Script JavaFXPad Demo. Again, running this requires Java Web Start (see Figure 2).


Figure 2. Running the JavaFXPad editor

Another way to work with JavaFX is to use the JavaFX Script Plug-in for NetBeans 5.5 or a JavaFX Script Plug-in for Eclipse 3.2 (of course, before downloading and installing any of these plug-ins you must have NetBeans 5.5 or Eclipse 3.2 already installed).

If you decide to start with the JavaFX plug-in for NetBeans 5.5, the instructions on Project OpenJFX for JavaFX for NetBeans will help you. Similarly, if you want to use the JavaFX plug-in for Eclipse, then go to JavaFX for Eclipse. Notice that all the examples from this article were tested with JavaFX plug-in for NetBeans 5.5, but should work in any of the other listed methods.

Testing the Hello World Application with JavaFX Plug-In for NetBeans 5.5
As always when learning a new language, we have to write the obligatory Hello World application:

Listing 1
import javafx.ui.*;
import java.lang.System;

Frame {
    centerOnScreen: true
    visible: true
    height: 50
    width: 350
    title: "HelloWorld application..."
    background: yellow
    onClose: operation() {System.exit(0);}
    content: Label {
        text: "Hello World"
    }
}
To develop and run this simple example in NetBeans 5.5 follow these steps:

Launch NetBeans 5.5.
From the main menu select File -> New Project.
In the New Project window, select the General category and Java Application project (click Next).
In the New Java Application window, type "FXExample" in the Project Name text field.
In the same window use the Browse button to select the location of the project.
Uncheck the "Set as main project" and "Create main class" checkboxes (click Finish).
Right-click on the FXExample -> Source Packages and select New -> File/Folder.
In the New File window select the Other category and the JavaFX File file type (click Next).
In the New JavaFX File window, type "HelloWorld" for File Name and "src" for Folder (click Finish).
Copy the code from Listing 1 and paste it in HelloWorld.fx.
Right-click on the FXExample project and select Properties.
In the Project Properties – FXExample, select the Run node from the Categories pane.
In the Arguments text field, type "Hello World" (click OK).
Right-click on FXExample project and select Run Project option.
If everything works, you should see a frame like in Figure 3:


Figure 3. Running the Hello World application in NetBeans 5.5

Now you have the software support for developing and running any JavaFX application.

JavaFX Syntax
Before starting with JavaFX, let's go over some of the fine points of the syntax. If you are already familiar with the syntax of the Java language, most of this will look very familiar, but some of it is quite different.

JavaFX Primitive Types
JavaFX supports four primitive types: String (for java.lang.String), Boolean (for java.lang.Boolean), Number (for java.lang.Number), and Integer (for byte, short, int, long, and BigInteger).

JavaFX Variables
A JavaFX variable is declared by using the var keyword. See the following examples:

var x:Number = 0.9;
var name:String = "John";
var y:Integer = 0;
var flag:Boolean = true;

var numbers:Number = [1,2,3,4,5];

What's the Matter with JMatter?

by Eitan Suez
08/21/2007


It has been approximately a year since I wrote my first article on JMatter, and a year is a long time for a successful open source project. Many things have changed and I'd like to give you an update. My last article was an introduction to JMatter; it's time we tackled something more advanced.

Allow me to begin with a very brief, orienting description of JMatter.

JMatter proposes that you, the developer of a small business application, concern yourself primarily with the business logic or the domain in question. For example, say we're developing a solution for a school, perhaps to administer or manage a curriculum. Alternatively, perhaps we're trying to write a system to better manage parts at an automotive shop, or perhaps we're dealing with real estate properties for sale. You get the picture.

JMatter further proposes that you consider most software development tasks that are not directly related to the business domain (such as persistence, writing the user interface, authentication, deployment, and more) as plumbing: it's someone else's job. In fact it's JMatter's job.

Applications developed with JMatter sport user interfaces built on top of the Java Swing toolkit. They are deployed over Java Web Start. For persistence, JMatter leverages Hibernate Core, and is therefore compatible with any database system supported by Hibernate.

To give you further insight into the nature of this framework, let's walk through the construction of a non-trivial JMatter application.

Let's Build an App!
The JMatter framework comes with a half dozen demonstration applications that are designed to teach various aspects of the framework.

For this article, let's develop an application that illustrates some of JMatter's object-oriented capabilities. Whether we've attended it or not, many of us are familiar with the JavaOne conference in San Francisco. Let us then develop an application for managing the JavaOne conference. This application somewhat resembles the Sympster demo application that comes with JMatter. A complete application with all use cases is, of course, a little beyond the time and space that we have for this article, so we'll build the foundation for such an application. I'll let you be the judge of the degree of leverage JMatter provides.

Initial Modeling
I happened to have a copy of the brochure for JavaOne 2006 underneath a stack of papers on my desk. After perusing it, I made the following observations:

JavaOne is a conference, an event, where many talks are given. There seem to be a number of different types of events such as Technical Sessions (TS), which are the meat of the conference. Let's not forget Keynote speeches, and the popular Birds of a Feather (BOF) sessions at night.

Both the BOFs and technical sessions have a unique code such as TS-1234 or BOF-2332, while Keynote sessions do not. BOFs and TSs are also categorized by track, and there appear to be five tracks: Java SE, Java EE, Java ME, Tools, and Cool Stuff. All talks have a speaker, a topic, and a description.

Some speakers are distinguished as rock star speakers, some are Java champions, and some are both. Let's call such accolades Speaker Recognitions.

Typically, a distinction is made between the definition of a talk and the scheduling of a specific talk at a specific time and location. This distinction doesn't appear to be necessary for this application.

Finally, talks are scheduled for different rooms. We might want to keep track of the seating capacity for each room, which would be important if we wanted to manage registration for specific talks.

Here, then is a tentative initial model for our application: Talk (with subclasses: Keynote, BOF, and Technical Session), Speaker (and Speaker Recognition), Room, and Track. Let's go ahead and throw in an additional categorization for a talk: a Talk Level (perhaps with three levels: Beginner, Intermediate, and Advanced) to help us ascertain the expertise level expected of attendees.
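Before building anything with JMatter, it can help to see that model as plain Java. The following is only an illustrative sketch of the classes just described; it is not JMatter-generated code, and the field names are my own assumptions:

import java.util.Date;
import java.util.Set;

//Illustrative domain sketch only; the real JMatter project would follow
//the framework's own conventions for types and associations.
abstract class Talk {
    String topic;
    String description;
    Speaker speaker;
    TalkLevel level;     // Beginner, Intermediate, or Advanced
    Room room;
    Date scheduledAt;
}

class Keynote extends Talk { }

class TechnicalSession extends Talk {
    String code;         // e.g., TS-1234
    Track track;         // Java SE, Java EE, Java ME, Tools, or Cool Stuff
}

class BOF extends Talk {
    String code;         // e.g., BOF-2332
    Track track;
}

class Speaker {
    String name;
    Set<SpeakerRecognition> recognitions;   // rock star speaker, Java champion
}

class SpeakerRecognition { String label; }
class Room { String name; int seatingCapacity; }
class Track { String name; }
class TalkLevel { String name; }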

Creating Our Project
Download JMatter from http://jmatter.org/ and unzip (or untar) the distribution. Assuming you've got Ant installed, from the command line, cd into jmatter/ and issue the following command to create a new project:

ant new-project-ui


Figure 1. GUI for creating new JMatter projects

Give your project a name (JavaOneMgr). You have the choice of creating either a standalone or a dependent project. In standalone projects, all the necessary dependencies are bundled into your project. It doesn't matter too much which you pick here. Dependent projects are simpler to work with if you're making changes to both your project and the underlying framework.

After creating your project, quit this little app and cd to ../JavaOneMgr, your new project's base directory (feel free to move your new project to another parent directory). The project is already equipped with a build file and base directory structure.

Project Directory Structure and Configuration
The project's directory structure is fairly self-explanatory:

src/: This is where JMatter will expect to find your source code.
test/: Place any JUnit tests you write in this directory.
resources/: This directory contains a variety of application resources. The images/ folder is where you place various image resources: a splash screen and icons representing your model objects that will be used by the JMatter's user interface. hibernate.properties is where you configure your application's database connection (among other Hibernate-related concerns). Some model metadata can be specified in the file model-metadata.properties (more from Chapter 11 of JMatter's documentation); the application's localization resources are also located here.
doc/: Place any documentation specific to your application in this directory.
For standalone projects, you will also find a lib/ folder containing all of your application's dependencies. Dependent projects' build files reference dependencies in your JMatter installation.

You'll be using the generated Ant build file to compile your code, generate your database schema, test run your application, run unit tests, and, when your application is ready, to produce the artifacts necessary to deploy it over Java Web Start.

To configure your project with an IDE, you typically must communicate these pieces of information:

Where your source code is located (specify the src/ folder)
Where to output compiled code (to match the Ant build file, specify build/classes, though we'll typically use the build file for compilation)
Where dependencies are located (for dependent projects, that would be all the jars in jmatter/lib/runtime and the directory jmatter/build/classes)
JMatter requires Java SE version 5 or higher.

We're going to start coding soon, so go ahead and configure your working environment to your tastes.

Schemaless Java-XML Data Binding with VTD-XML

Limitations of Schema-based XML Data Binding
XML data binding APIs are a class of XML processing tools that automatically map XML data into custom, strongly typed objects or data structures, relieving XML developers of the drudgery of DOM or SAX parsing. In order for traditional, static XML data binding tools (e.g., JAXB, Castor, and XMLBeans) to work, developers assume the availability of the XML schema (or its equivalent) for the document. In the first step, most XML data binders compile XML schemas into a set of class files, which the calling applications then include to perform the corresponding "unmarshalling."
However, developers dealing with XML documents don't always have their schemas on hand. And even when the XML schemas are available, slight changes to them (often due to evolving business requirements) require class files to be generated anew. Also, XML data binding is most effective when processing shallow, regular-shaped XML data. When the underlying structure of XML documents is complex, users still need to manually navigate the typed hierarchical trees, a task which can require significant coding.
Most limitations of XML data binding come from its rigid dependency on XML schemas. Unlike many binary data formats, XML is intended primarily as a schemaless data format flexible enough to represent virtually any kind of information. For advanced uses, XML is also extensible: applications may use only the portion of the XML document that they need. Because of XML's extensibility, Web services and SOA applications are far less likely to break in the face of changes.
The schemaless nature of XML has subtle performance implications for XML data binding. In many cases, only a small subset of an XML document (as opposed to the whole data set) is necessary to drive the application logic. Yet the traditional approach indiscriminately converts entire data sets into objects, producing unnecessary memory and processing overhead.
Binding XML with VTD-XML and XPath
Motivation
While the concept of XML data binding has essentially remained unchanged since the early days of XML, the landscape of XML processing has evolved considerably. The primary purpose of XML data binding APIs is to map XML to objects; the presence of XML schemas merely helps lighten the coding effort of XML processing. In other words, if mapping XML to objects is sufficiently simple, you not only don't need schemas, but you have a strong incentive to avoid them because of all the issues they introduce.
As you probably have guessed by looking at the title of this section, the combination of VTD-XML and XPath is ideally suited to schemaless data binding.
Why XPath and VTD-XML?
There are three main reasons why XPath lends itself to our new approach. First, when properly written, your data binding code only needs approximate knowledge (e.g., topology, tag names, etc.) of the XML tree structure, which you can determine by looking at the XML data; XML schemas are no longer mandatory. Furthermore, XPath allows your application to bind the relevant data items and filter out everything else, avoiding wasteful object creation. Finally, the XPath-based code is easy to understand, simple to write and debug, and generally quite maintainable.
But XPath still needs the parsed tree of XML to work. Superior to both DOM and SAX, VTD-XML offers a long list of features and benefits relevant to data binding, some of which are highlighted in the following list.
High performance, low memory usage, and ease of use: A SAX parser uses a constant amount of memory regardless of document size, but it doesn't export the hierarchical structure of XML, which makes it difficult to use; it doesn't even support XPath. A DOM parser builds an in-memory tree, is easier to use, and supports XPath, but it is also very slow and incurs exorbitant memory usage. VTD-XML pushes the XML processing envelope to a whole new level. Like DOM, VTD-XML builds an in-memory tree and is capable of random access, but it consumes only about 1/5 the memory of DOM. Performance-wise, VTD-XML not only outperforms DOM by 5x to 12x, but is also typically twice as fast as SAX with a null content handler (its maximum performance). The benchmark comparison can be found here.
Non-blocking XPath implementation: VTD-XML also pioneers incremental, non-blocking XPath evaluation. Unlike traditional XPath engines that return the entire evaluated node set all at once, VTD-XML's AutoPilot-based evaluation returns a qualified node as soon as it is evaluated, resulting in unsurpassed performance and flexibility. For further reading, please visit http://www.devx.com/xml/Article/34045.
Native XML indexing: VTD-XML supports a native XML index format that allows your applications to run XPath queries without parsing.
Incremental update: VTD-XML is the only XML processing API that allows you to update XML content without touching irrelevant parts of the XML document (See this article on devx.com), improving performance and efficiency from a different angle.
Process Description
The process for our new schemaless XML data binding roughly consists of the following steps.
Observe the XML document and write down the XPath expressions corresponding to the data fields of interest.
Define the class file and member variables to which those data fields are mapped.
Refactor the XPath expressions in step 1 to reduce navigation cost.
Write the XPath-based data binding routine that does the object mapping. XPath 1.0 allows XPath to be evaluated to four data types: string, Boolean, double and node set. The string type can be further converted to additional data types.
If the XML processing requires the ability to both read and write, use VTD-XML's XMLModifier to update XML's content. You may need to record more information to take advantage of VTD-XML's incremental update capability.
A Sample Project
Let me show you how to put this new XML binding into action. This project, written in Java, follows the steps outlined above to create simple data binding routines. The first part of this project creates read-only objects that are not modified by the application logic. The second part extracts more information so that the XML document can be updated incrementally. The last part adds VTD+XML indexing to the mix. The XML document I use in this example looks like the following:

<CATALOG>
    <CD>
        <TITLE>Empire Burlesque</TITLE>
        <ARTIST>Bob Dylan</ARTIST>
        <COUNTRY>USA</COUNTRY>
        <COMPANY>Columbia</COMPANY>
        <PRICE>10.90</PRICE>
        <YEAR>1985</YEAR>
    </CD>
    <CD>
        <TITLE>Still Got the Blues</TITLE>
        <ARTIST>Gary More</ARTIST>
        <COUNTRY>UK</COUNTRY>
        <COMPANY>Virgin Records</COMPANY>
        <PRICE>10.20</PRICE>
        <YEAR>1990</YEAR>
    </CD>
    <CD>
        <TITLE>Hide Your Heart</TITLE>
        <ARTIST>Bonnie Tyler</ARTIST>
        <COUNTRY>UK</COUNTRY>
        <COMPANY>CBS Records</COMPANY>
        <PRICE>9.90</PRICE>
        <YEAR>1988</YEAR>
    </CD>
    <CD>
        <TITLE>Greatest Hits</TITLE>
        <ARTIST>Dolly Parton</ARTIST>
        <COUNTRY>USA</COUNTRY>
        <COMPANY>RCA</COMPANY>
        <PRICE>9.90</PRICE>
        <YEAR>1982</YEAR>
    </CD>
</CATALOG>
Read Only
The application logic is driven by CD record objects between 1982 and 1990 (non-inclusive), corresponding to the XPath expression "/CATALOG/CD[YEAR < 1990 and YEAR > 1982]". The class definition (shown below) contains four fields, corresponding to the title, artist, price, and year of a CD.

public class CDRecord {
    String title;
    String artist;
    double price;
    int year;
}
The mapping between the object members and their corresponding XPath expressions is as follows:
The TITLE field corresponds to "/CATALOG/CD[YEAR < 1990 and YEAR > 1982]/TITLE."
The ARTIST field corresponds to "/CATALOG/CD[YEAR < 1990 and YEAR > 1982]/ARTIST."
The PRICE field corresponds to "/CATALOG/CD[YEAR < 1990 and YEAR > 1982]/PRICE."
The YEAR field corresponds to "/CATALOG/CD[YEAR < 1990 and YEAR > 1982]/YEAR."
The XPath expressions can be further refactored (for efficiency reasons) as follows:
Use "/CATALOG/CD[YEAR < 1990 and YEAR > 1982]" to navigate to the CD node.
Use "TITLE" to extract the TITLE field (a string).
Use "ARTIST" to extract the ARTIST field (a string).
Use "PRICE" to extract the PRICE field (a double).
Use "YEAR" to extract the YEAR field (an integer).

Introduction to Amazon S3 with Java and REST

by Eric Heuveneers 11/08/2007
Introduction
Amazon Simple Storage Service (S3) is a service from Amazon that allows you to store files in reliable remote storage for a very competitive price; it is becoming very popular. S3 is used by companies to store photos and videos of their customers, to back up their own data, and more. S3 provides both SOAP and REST APIs; this article focuses on using the S3 REST API with the Java programming language.

S3 Basics
S3 handles objects and buckets. An object corresponds to a stored file. Each object has an identifier, an owner, and permissions. Objects are stored in a bucket. A bucket has a unique name that must be compliant with Internet domain naming rules. Once you have an AWS (Amazon Web Services) account, you can create up to 100 buckets associated with that account. An object is addressed by a URL, such as http://s3.amazonaws.com/bucketname/objectid. The object identifier is a filename or a filename with a relative path (e.g., myalbum/august/photo21.jpg). With this naming scheme, S3 storage can appear as a regular file system with folders and subfolders. Notice that the bucket name can also be the hostname in the URL, so your object could also be addressed by http://bucketname.s3.amazonaws.com/objectid.
S3 REST Security
S3 REST resources are secure. This is important not just for your own purposes, but also because customers are billed depending on how their S3 buckets and objects are used. An AWSSecretKey is assigned to each AWS customer, and this key is identified by an AWSAccessKeyID. The key must be kept secret and will be used to digitally sign REST requests. S3 security features are:
Authentication: Requests include AWSAccessKeyID
Authorization: Access Control List (ACL) could be applied to each resource
Integrity: Requests are digitally signed with AWSSecretKey
Confidentiality: S3 is available through both HTTP and HTTPS
Non-repudiation: Requests are time-stamped (combined with integrity, this provides proof of a transaction)
The signing algorithm is HMAC/SHA1 (Hashing for Message Authentication with SHA1). Implementing a String signature in Java is done as follows:

private javax.crypto.spec.SecretKeySpec signingKey = null;
private javax.crypto.Mac mac = null;
...
// This method converts AWSSecretKey into crypto instance.
public void setKey(String AWSSecretKey) throws Exception
{
    mac = Mac.getInstance("HmacSHA1");
    byte[] keyBytes = AWSSecretKey.getBytes("UTF8");
    signingKey = new SecretKeySpec(keyBytes, "HmacSHA1");
    mac.init(signingKey);
}

// This method creates S3 signature for a given String.
public String sign(String data) throws Exception
{
    // Signed String must be BASE64 encoded.
    byte[] signBytes = mac.doFinal(data.getBytes("UTF8"));
    String signature = encodeBase64(signBytes);
    return signature;
}
...
Authentication and signature have to be passed in the Authorization HTTP header like this:

Authorization: AWS <AWSAccessKeyID>:<Signature>
The signature must include the following information:
HTTP method name (PUT, GET, DELETE, etc.)
Content-MD5, if any
Content-Type, if any (e.g., text/plain)
Metadata headers, if any (e.g., "x-amz-acl" for ACL)
GMT timestamp of the request formatted as EEE, dd MMM yyyy HH:mm:ss
URI path such as /mybucket/myobjectid
Here is a sample of a successful S3 REST request/response to create the "onjava" bucket:

Request:
PUT /onjava HTTP/1.1
Content-Length: 0
User-Agent: jClientUpload
Host: s3.amazonaws.com
Date: Sun, 05 Aug 2007 15:33:59 GMT
Authorization: AWS 15B4D3461F177624206A:YFhSWKDg3qDnGbV7JCnkfdz/IHY=
Response:
HTTP/1.1 200 OK
x-amz-id-2: tILPE8NBqoQ2Xn9BaddGf/YlLCSiwrKP+OQOpbi5zazMQ3pC56KQgGk
x-amz-request-id: 676918167DFF7F8C
Date: Sun, 05 Aug 2007 15:30:28 GMT
Location: /onjava
Content-Length: 0
Server: AmazonS3
Notice the delay between the request and response timestamps? The request Date was issued after the response Date. This is because the response date comes from the Amazon S3 server. If the difference between the request and response timestamps is too large, then a RequestTimeTooSkewed error is returned. This is another important feature of S3 security; it isn't possible to roll your clock too far forward or back and make things appear to happen when they didn't.
Note: Thanks to ACLs, an AWS user can grant read access to objects to anyone (anonymous). Signing is then not required, and objects can be addressed (especially for download) with a browser. This means that S3 can also be used as a hosting service to serve HTML pages, images, videos, and applets; S3 even allows granting time-limited access to objects.
Creating a Bucket
The code below details the Java implementation of "onjava" S3 bucket creation. It relies on packages java.net for HTTP, java.text for date formatting, and java.util for time stamping. All these packages are included in J2SE; no external library is needed to talk to the S3 REST interface. First, it generates the String to sign, then it instantiates the HTTP REST connection with the required headers. Finally, it issues the request to the s3.amazonaws.com web server.

public void createBucket() throws Exception
{
    // S3 timestamp pattern.
    String fmt = "EEE, dd MMM yyyy HH:mm:ss ";
    SimpleDateFormat df = new SimpleDateFormat(fmt, Locale.US);
    df.setTimeZone(TimeZone.getTimeZone("GMT"));

    // Data needed for signature
    String method = "PUT";
    String contentMD5 = "";
    String contentType = "";
    String date = df.format(new Date()) + "GMT";
    String bucket = "/onjava";

    // Generate signature
    StringBuffer buf = new StringBuffer();
    buf.append(method).append("\n");
    buf.append(contentMD5).append("\n");
    buf.append(contentType).append("\n");
    buf.append(date).append("\n");
    buf.append(bucket);
    String signature = sign(buf.toString());

    // Connection to s3.amazonaws.com
    HttpURLConnection httpConn = null;
    URL url = new URL("http", "s3.amazonaws.com", 80, bucket);
    httpConn = (HttpURLConnection) url.openConnection();
    httpConn.setDoInput(true);
    httpConn.setDoOutput(true);
    httpConn.setUseCaches(false);
    httpConn.setDefaultUseCaches(false);
    httpConn.setAllowUserInteraction(true);
    httpConn.setRequestMethod(method);
    httpConn.setRequestProperty("Date", date);
    httpConn.setRequestProperty("Content-Length", "0");
    String AWSAuth = "AWS " + keyId + ":" + signature;
    httpConn.setRequestProperty("Authorization", AWSAuth);

    // Send the HTTP PUT request.
    int statusCode = httpConn.getResponseCode();
    if ((statusCode / 100) != 2)
    {
        // Deal with S3 error stream.
        InputStream in = httpConn.getErrorStream();
        String errorStr = getS3ErrorCode(in);
        ...
    }
}
Dealing with REST Errors
Basically, all HTTP 2xx response status codes indicate success, while the others (3xx, 4xx, 5xx) report some kind of error. Details of the error are available in the HTTP response body as an XML document. REST error responses are defined in the S3 developer guide. For instance, an attempt to create a bucket that already exists will return:

HTTP/1.1 409 Conflict
x-amz-request-id: 64202856E5A76A9D
x-amz-id-2: cUKZpqUBR/RuwDVq+3vsO9mMNvdvlh+Xt1dEaW5MJZiL
Content-Type: application/xml
Transfer-Encoding: chunked
Date: Sun, 05 Aug 2007 15:57:11 GMT
Server: AmazonS3

<Error>
    <Code>BucketAlreadyExists</Code>
    <Message>The named bucket you tried to create already exists</Message>
    <RequestId>64202856E5A76A9D</RequestId>
    <BucketName>awsdownloads</BucketName>
    <HostId>cUKZpqUBR/RuwDVq+3vsO9mMNvdvlh+Xt1dEaW5MJZiL</HostId>
</Error>
Code is the interesting value in the XML document. Generally, this can be displayed as an error message to the end user. It can be extracted by parsing the XML stream with SAXParserFactory, SAXParser and DefaultHandler classes from org.xml.sax and javax.xml.parsers packages. Basically, you instantiate a SAX parser, then implement the S3ErrorHandler that will filter for Code tag when notified by the SAX parser. Finally, return the S3 error code as String:

public String getS3ErrorCode(InputStream doc) throws Exception
{
    String code = null;
    SAXParserFactory parserfactory = SAXParserFactory.newInstance();
    parserfactory.setNamespaceAware(false);
    parserfactory.setValidating(false);
    SAXParser xmlparser = parserfactory.newSAXParser();
    S3ErrorHandler handler = new S3ErrorHandler();
    xmlparser.parse(doc, handler);
    code = handler.getErrorCode();
    return code;
}

// This inner class implements a SAX handler.
class S3ErrorHandler extends DefaultHandler
{
    private StringBuffer code = new StringBuffer();
    private boolean append = false;

    public void startElement(String uri, String ln, String qn, Attributes atts)
    {
        if (qn.equalsIgnoreCase("Code")) append = true;
    }

    public void endElement(String url, String ln, String qn)
    {
        if (qn.equalsIgnoreCase("Code")) append = false;
    }

    public void characters(char[] ch, int s, int length)
    {
        if (append) code.append(new String(ch, s, length));
    }

    public String getErrorCode()
    {
        return code.toString();
    }
}
A list of all error codes is provided in the S3 developer guide. You're now able to create a bucket on Amazon S3 and deal with errors. Full source code is available in the resources section.
File Uploading
Upload and download operations require more attention. S3 storage is unlimited, but it allows a maximum of 5 GB per object transfer. An optional content MD5 check is supported to make sure that the transfer has not been corrupted, although an MD5 computation on a 5 GB file will take some time even on fast hardware.
S3 stores the uploaded object only if the transfer completes successfully. If a network issue occurs, the file has to be uploaded again from the start. S3 doesn't support resuming or partial updates of object content. That's one of the limits of the first "S" (Simple) in S3, but the simplicity also makes dealing with the API much easier.
When performing a file transfer with S3, you are responsible for streaming the objects. A good implementation will always stream objects; otherwise they will accumulate in Java's heap, and with S3's limit of 5 GB per object, you could quickly be seeing an OutOfMemoryError.
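To illustrate that advice, here is a minimal streaming sketch (it is not the implementation referenced in the resources section, and it needs the usual java.io and java.net imports). Using setFixedLengthStreamingMode tells HttpURLConnection to send the body as it is read instead of buffering it, so the file never has to fit in the heap; the Date and Authorization headers would be set exactly as in createBucket() above, and error handling is omitted:

public void streamObject(File file, String bucketAndKey) throws Exception
{
    // bucketAndKey is something like "/onjava/video.avi".
    URL url = new URL("http", "s3.amazonaws.com", 80, bucketAndKey);
    HttpURLConnection httpConn = (HttpURLConnection) url.openConnection();
    httpConn.setDoOutput(true);
    httpConn.setRequestMethod("PUT");
    // Stream the request body rather than buffering it in memory.
    // The int variant caps this sketch at 2 GB; newer JDKs add a long overload.
    httpConn.setFixedLengthStreamingMode((int) file.length());

    InputStream in = new FileInputStream(file);
    OutputStream out = httpConn.getOutputStream();
    byte[] buffer = new byte[8192];
    int read;
    while ((read = in.read(buffer)) != -1) {
        out.write(buffer, 0, read);
    }
    out.close();
    in.close();

    int statusCode = httpConn.getResponseCode();   // 200 on success
}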
An example of a good upload implementation is available in the resources section of this article.
Beyond This Example
Many other operations are available through the S3 APIs:
List buckets and objects
Delete buckets and objects
Upload and download objects
Add meta-data to objects
Apply permissions
Monitor traffic and get statistics (still a beta API)
Adding custom meta-data to an object is an interesting feature. For example, when uploading a video file, you could add "author," "title," and "location" properties, and retrieve them later when listing the objects. Getting statistics (IP address, referrer, bytes transferred, time to process, etc.) on buckets could be useful too to monitor traffic.
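In the REST API, such metadata travels as x-amz-meta-* request headers on the upload. Here is a minimal sketch; the header names and values are purely illustrative, and recall from the security section that x-amz-* headers must also be folded into the string that gets signed:

// Custom metadata is attached as x-amz-meta-* headers when the object is PUT.
httpConn.setRequestProperty("x-amz-meta-author", "Eric Heuveneers");
httpConn.setRequestProperty("x-amz-meta-title", "Holiday video");
httpConn.setRequestProperty("x-amz-meta-location", "Brussels");
// The same headers are returned on later GET or HEAD requests for the object.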
Conclusion
This article introduced the basics of the Amazon Simple Storage Service REST API. It detailed how to implement bucket creation in Java and how to deal with S3 security principles. It showed that HTTP and XML skills are needed when developing with the S3 REST API. Some S3 operations could be improved (especially for upload), but overall Amazon S3 rocks. To go beyond what was presented in this article, you could check out the Java S3 tools available in the resources section.
References and Resources
Source code: Source code for this article
SOAP: Simple Object Access Protocol
REST: REpresentational State Transfer
S3 APIs: Amazon S3 Developer Guide
HMAC: Keyed-Hashing for Message Authentication (RFC 2104)
S3 forum: S3 forum for developers
S3 upload applet: A Java applet to upload files and folders to S3
Java S3 toolkit: An S3 toolkit for J2SE and J2ME provided by Amazon
Jets3t: Another Java toolkit for S3
Eric Heuveneers is a software developer and an IT consultant with more than eight years of experience. His main skills are in Java/JEE and open source solutions.

Using XML and Jar Utility API to Build a Rule-Based Java EE Auto-Deployer

by Colin (Chun) Lu 11/16/2007
Introduction
Today, Java EE application deployment is a common task, but not an easy one. If you have ever been involved in deploying a Java EE application to a large enterprise environment, no doubt you have faced a number of challenges before you click the deploy button. For instance, you have to figure out how to configure JMS, data sources, database schemas, data migrations, third-party products like Documentum for web publishing, dependencies between components and their deployment order, and so on. Although most of today's application servers support application deployment through their administrative interfaces, the deployment task is still far from being a one-button action.

In the first few sections of this article, I will discuss some of the challenges of Java EE deployment. Then I will introduce an intelligent rule-based auto-deployer application, and explain how it can significantly reduce the complexity of Java EE system deployment. I will also give a comprehensive example of how to build XML rules using the XStream utility library, how to extend and analyze standard Java EE packaging (EAR), and how to perform a complex deployment task just by pushing one button.
Challenge 1: Package Limitations
A Java EE application is packaged as an enterprise application archive (EAR) file. The Java EE specification defines the format of an EAR file as depicted in Figure 1.
Figure 1. Standard Java EE EAR file structure
A standard EAR file meets the basic requirements for packaging an application, as most web-based Java EE applications are composed solely of web and/or EJB modules. However, it lacks the capability to package more advanced Java EE application modules. For example, the following modules are often used in a Java EE application deployment, but cannot be declared in a standard EAR file:
JDBC Connection Pool and DataSource objects
JMS ConnectionFactory and Destination objects
JMX MBeans
SQL statements
Other resource files
Most Java EE applications require data sources, schema changes, data migrations, and JMS configurations. Today, these components have to be manually configured and deployed via an administration interface provided by the implementation vendor. This is typically the responsibility of the system administrator.
Challenge 2: Deployment Order and Dependencies
Another challenge for the application deployer is that he has to know the deployment dependencies and follow the exact order when deploying the multiple pieces that make up one application.
A large Java EE application may have complex dependencies on other deployments. For example, corresponding database tables must be created before an application can be deployed; a JDBC data source must be configured ahead of a JMS server. In these situations, the deployer first has to coordinate with the application architect and developers to find out the deployment requirements and dependencies, and then make a detailed deployment plan. This process is not very efficient; we need a better solution.
Solution
How can we help a deployer survive these challenges? Is there a way to simplify this complex deployment process? A possible solution is to use vendor-proprietary capabilities to extend your EAR to be more intelligent. For example, WebLogic Server supports packaging JDBC and JMS modules into an EAR file, and the WebLogic Deployer can deploy your application as well as application-scoped JDBC and JMS modules in one action. Isn't that useful? Wait a second, there are still limitations:
Tightly coupled - By doing this, your EAR is dependent on one vendor's application server. Your application has to be packaged according to the vendor's specification. This means that if you want your product to be deployed across different application servers (or if your product needs to support multiple application servers), you have to maintain multiple EARs.
Complicated packaging - Since you have to follow specifications of different vendors, the application packaging is going to be very complicated and hard to understand.
Hard to maintain - For one application, you need to maintain different versions of EARs for different application servers, or even for different versions of the same application server.
Not a true one-button deployment - Since this type of deployment leverages vendor-specific tools, it cannot support deployment tasks that are not supported by the application server. For example, one application may need to execute SQL statements to build schemas and load reference data, or upload configurations to an LDAP server to expose its service endpoints.
A practical solution is to make an intelligent XML rule-based auto-deployer by extending the Java EE packaging.
A Rule-Based Auto-Deployer
This solution has three main parts:
Tool: deployment XML rule generator using XStream
Packaging: extend EAR packaging to include the rule XML document using Ant
Deployer: EAR analyzer and auto-deployer using Java's Jar Utility API
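As a taste of the deployer piece, here is a minimal sketch of pulling a rule document out of an EAR with Java's Jar utility API (java.util.jar). The entry name deployment-rules.xml is a hypothetical choice for this sketch; the actual name is decided by the packaging step:

import java.io.InputStream;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;

public class EarAnalyzer {

    public static InputStream findRuleDocument(String earPath) throws Exception {
        JarFile ear = new JarFile(earPath);                                  // an EAR is just a jar file
        JarEntry rules = ear.getJarEntry("META-INF/deployment-rules.xml");   // hypothetical entry name
        if (rules == null) {
            return null;          // no rule document packaged; fall back to a plain deployment
        }
        return ear.getInputStream(rules);                                    // hand the XML to the rule parser
    }
}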
The suggested deployment work flow is illustrated in Figure 2.
Figure 2. Deployment work flow
Case Study
Let's think about the deployment of a Service Order Processing application to a WebLogic server. Here are the deployment tasks that need to be done:
Configure a JDBC connection pool and data source for manipulating the order processing data.
Execute SQL statements for database objects creation (tables, triggers, and reference data, etc.).
Configure a JMS queue to dispatch service order requests.
Upload system properties (e.g., the URL and JNDI name of the JMS queue for order processing) to an LDAP server.
Finally, deploy the application to an application server.
1. Deployment Tool: XML Rule Generator using XStream
The first step is to generate an XML rule document from a plan created by the application assembler.
Step 1: Define a deployment plan
To define a deployment plan, the application assembler discusses the deployment requirements with developers and architects. For the sample service order processing system, a deployment plan is defined below:

DataSource,t3://localhost:7001,NONXA,jdbc/testDS,colin,password,jdbc:oracle:thin:@localhost:1521:localdb,oracle.jdbc.driver.OracleDriver
SQL,t3://localhost:7001,jdbc/testDS,sql/testDS.sql
JMS,t3://localhost:7001,PTP,testJmsServer,testJmsRes,jmsTestConnFactory,jms/conn,testQueue,jms/testQueue
LDAP,ldapread-server.com,489,cn=one_button_deployment,o=system_configuration,ldif/test.ldif
APPLICATION,t3://localhost:7001,SOManager,Release v1.0
Step 2: Use the Deployment Tool to generate an XML document from the plan
After the plan is defined, the application assembler runs the deployment tool application to feed in the plan and generate the XML rule document.
The sample application is shown in Figure 3.
Figure 3. Sample deployment tool
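To give a flavor of how the tool can turn a plan line into an XML rule document, here is a minimal XStream sketch. The rule class, its fields, and the resulting XML layout are assumptions made for this illustration, not the article's actual rule schema:

import com.thoughtworks.xstream.XStream;

//Illustrative rule bean only; a real tool would define one class per rule
//type (DataSource, SQL, JMS, LDAP, APPLICATION) parsed from the plan lines above.
class DataSourceRule {
    String serverUrl = "t3://localhost:7001";
    String jndiName = "jdbc/testDS";
    String driver = "oracle.jdbc.driver.OracleDriver";
    String dbUrl = "jdbc:oracle:thin:@localhost:1521:localdb";
}

public class RuleGenerator {
    public static void main(String[] args) {
        XStream xstream = new XStream();
        xstream.alias("datasource-rule", DataSourceRule.class);   // friendlier element name
        String xml = xstream.toXML(new DataSourceRule());         // serialize the rule to XML
        System.out.println(xml);
        // The packaging step would then write this document into the EAR,
        // for example as the META-INF/deployment-rules.xml entry used in the earlier sketch.
    }
}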

Introducing Raven: An Elegant Build for Java

by Matthieu Riou 12/05/2007
Rationale
There's a first step that every single Java project has to go through: setting up a build system. And often before that, choosing a build system. There hasn't been much improvement in the Java world in this area for quite a while; Ant 1.1 was released in July 2000 and Maven was created in 2004.

This lack of innovation could seem strange: for non-trivial projects (which most end up being after some time), writing build scripts can take a lot of time. Given that builds usually aren't shipped to users, the time spent on their maintenance can seem like time lost... but a sub-optimal build system will actually make you lose time. So down the road, the best build is the one that saves you the most time when writing, debugging, and maintaining your scripts.
I started working on Raven because I was deeply dissatisfied with the solutions available in the Java world. And from what I've heard from other developers, I'm not the only one.
Now I'm going to say something controversial: both Ant and Maven have their strengths and weaknesses, but these tools are just toys compared to a full scripting environment. Think conditions, loops, exceptions, complex data structures. Most of all, think of all the details that you forgot to think about, all the little quirks and peculiarities that appear on most projects. What is going to be most powerful to solve these problems, a simple XML grammar or a full and powerful scripting language (not to mention Turing complete)? Would you rather write copy source, target or 3 lines of XML? And what fallback do you have when you're not within the boundaries imposed by the tool?
Getting Practical
Raven is based on the Ruby dynamic language and its most prominent build tool, Rake. Don't worry: you don't have to know either to read this article or to start using Raven; you can learn little by little, starting simple. Rake itself is a little bit like Ant, in that it lets you define tasks and the dependencies between them. Only its syntax is much sweeter. For example:

task "hello" do
  puts "Hello"
end

task "world" => "hello" do
  puts "World"
end
If you have Rake installed, put this in a file named Rakefile and execute rake world from a command prompt in the same directory as the file. It will do what you would expect. Note that the syntax could be even more terse by using { ... } blocks on one line instead of do ... end, but this demonstrates the most common case, where you'll have more than one line of code in your task body. And you can put pretty much any Ruby code within the task block (and even Java code, as we'll be using JRuby): rely on external libraries, instantiate objects, and call methods. Your build can be as simple or as complex as you need.
The limitation is that Rake only provides very generic tasks that just wrap some classic Ruby code but don't do anything much by themselves. You have to tell them what to do in the nested code. That's where Raven shines. To make the life of Java developers easier, Raven adds a set of predefined tasks that implement the most common needs for a Java build system, like compiling or resolving jar dependencies. Just like a library, but instead of being a set of classes and methods, it's a set of reusable tasks that are well-suited for Java builds. So, all the tasks you're going to see in the rest of this article are implemented by Raven.
But wait, I haven't told you how to install anything. The quickest way to get started is to use Raven packaged with JRuby (a Ruby interpreter written in Java); everything necessary is bundled in it.
Download the Raven distribution prepackaged with JRuby.
Unzip it on your disk and set the environment variable JRUBY_HOME to this location.
Add %JRUBY_HOME%\bin to your PATH environment variable.
Check your installation by typing jruby -v in a command window.
For a more complete installation using the native Ruby interpreter (it's much faster to start up), see the Raven web site.
A Simple Project
To show you how to use Raven, I'm going to start with a simple but still real-world example: building Apache Commons Net. The Apache Commons Net library implements the client side of many network protocols like FTP or SMTP. Right now, their build is based on Ant and is mildly complex, so it's a pretty good candidate for me to present Raven.
Raven being just a set of specific tasks (plus a bit more, but we'll see that later), the whole build is still directed by Rake. So, all of the code I'm going to show is part of a file named Rakefile that should be placed at the root of the Commons Net unpacked source distribution. When you start Rake, it always looks for that script.
This first snippet demonstrates initialization and dependency handling:

require "raven"
require "rake/clean"

CLEAN.include ["target", "dist"]

dependency "compile_deps" do |task|
  task.deps << "oro-oro"
end

The first two lines load Raven and a Rake subtask for cleaning. The require command in Ruby is a bit like import, only it can load either a whole library (like Raven) or a single file. The third line tells Rake which directories should be removed by the clean task. The final three lines demonstrate the usage of the Raven dependency task. Commons Net depends on the Jakarta ORO library, so we're adding a dependency on it. It's just about listing which set of libraries will be needed. Calling the task (by executing rake compile_deps) will actually trigger the library download from a default Raven repository, and depending on it will propagate a proper classpath, as we'll see later. Also note that you can specify more than one library at a time and give version numbers (Raven uses the latest by default). All of these library declarations are valid within a dependency task:

task.deps << ["springframework-spring-core", { "taglibs-standard" => "1.1.2" }]
task.deps << ["axis2-kernel", "axis2-codegen"]

The provided name should follow the Maven naming of groupId and artifactId separated by a dash. Browse the Raven repository to see which libraries are available. Partial names can also be provided when there's no ambiguity. Now that we're done with dependencies, let's see what compilation would look like:

javac "compile" => "compile_deps" do |task|
  task.build_path << "src/java"
end

jar "commons-net.jar" => "compile"
The javac task is another of the tasks that Raven provides, and what it does is pretty simple to understand. The => notation declares the prerequisite on the dependencies; from this, Raven can automatically compute the classpath for you. Notice that we are also setting the build path. It needs to be explicit because Commons Net has its sources under src/java. If they were under src/main/java, no additional configuration would be needed, making this the sweet one-liner:

javac "compile" => "compile_deps"
Finally, once compilation is done, the previous snippet also packages everything in a jar. That's the role of the jar task. The produced jar file is directly named like the task, minimizing the number of parameters.
With everything I've explained so far, you should end up with a 10-line Rakefile located at the root of the Commons Net source distribution. To run the build, just execute rake commons-net.jar and everything should get built in a target directory. You could also add a default task so that just running rake would build your jar:

task "default" => "commons-net.jar"
Some More
Compiling and packaging is nice, but it's usually only the first step in a build. For example, the Commons Net Ant script also handles tests and Javadoc. How would you do this with Raven? Once again, it's pretty simple:

junit "test" => ["compile", "compile_deps", "test_deps"] do |task|
  task.build_path << "src/test"
  task.test_classes << "**/*Test.java"
end

javadoc "jdoc" do |task|
  task.build_path << "src/java"
end

You probably don't need much of an explanation to understand what this does. Just note that the settings inside the tasks are there because the Commons Net directory structure doesn't follow the Raven defaults. If the tests were located under src/test/java and the test classes followed the Test* pattern, the tasks would just be empty. There are a few other tasks that I won't detail much more here, but that you should know of, in case you need one of them:

jar_source: Builds a jar file containing your project sources.
war: Builds a WAR file from your compiled classes and the additional web application resources located under src/main/webapp.
lib_dir: Creates a library directory and places all your project dependencies in it, making it very easy to construct a classpath for your command scripts (bat or sh).

On the Shoulders of Giants
To be complete, our real-life example should include a way to build a distribution. The original Commons Net build has a dist task and, even if it didn't, distribution is a pretty common use case, perhaps even the most common. So, how would you go about doing it with Raven? Well, errr, you don't. There's nothing in Raven to help you build distributions. You see, there's no real standard way to make a distribution; it really depends on what you want to include. But don't worry, you're not left alone here. As I mentioned at the beginning of this article, Raven is built on top of Rake, which itself runs in a full Ruby interpreter. So our dist task is just going to be a simple Rake task:

lib_dir("dist:libs" => "compile_deps") do |task|
  task.target = "dist/lib"
end

task "dist" => ["commons-net.jar", "dist:libs"] do
  cp ["LICENSE.txt", "NOTICE.txt", "target/commons-net.jar"], "dist"
  File.open("dist/README.txt", "w") { |f| f << "Built on #{Time.now}" }
end

The first line of code demonstrates the usage of the lib_dir task that I explained previously. Then comes the interesting bit. The dist task is a standard Rake task; it only checks for its prerequisites and executes the code body afterward. Here I'm just making sure that the jar has been built and the libraries are included in a lib sub-directory. The rest is pure and simple Ruby. Rake pre-includes a Ruby module that handles all basic file operations: things like cp (copy), mv (move), mkdir (make directory), or rm (remove). That's pretty handy in a build, where you typically do a lot of file manipulation. So, the first line in my task block copies the license, the notice file, and the produced jar into the distribution directory. The cp method, just like most of the others, accepts arrays. The second line demonstrates how you would go about tweaking some file content. I'm creating a new README file (the "w" flag means a new file) and adding a simple timestamp to it. Don't be put off by the #{..} syntax inside the string; it's just a way to place the result of a computation or a variable value inside a string (the equivalent of "Built on " + new Date().toString()). Typically you would append that type of information to your README using the w+ flag, but Commons Net doesn't have a README, so I'm just creating a new one here. With the dist task, our build is complete; I've shown you everything that was needed to replace the original Ant script. We've reduced a 170-line build to a 20-line one. That's almost 10 times less code to maintain. But to drive my point a little further, let me give you one last example that demonstrates the usage of a control structure:

MODULES = ["web", "business", "persistence"]

MODULES.each do |mod|
  javac "#{mod}:compile" => ["#{mod}_deps", "common_deps"]
end
This would create a compilation task for each module in a given list. No need to repeat, just iterate. You can even create a method and call it from a task with specific parameters. These are very basic things when you're programming, but something we've lost with most current build tools.
I hope you're now starting to see how much power being based on a scripting language like Ruby gives to Raven. You have a pretty strong and terse basis with the set of Java-specific tasks provided by Raven; simple cases are very simple to write. For everything that doesn't fit in the framework, you have an elegant safety net (in place of a plugin framework).
Other Choices
Raven isn't the only one of its kind; it's my answer to the build problem and to the dissatisfaction I had with the currently available tools. Others have come up with their own solutions born of the same frustrations, and I don't pretend that my solution will be the best for everybody. So, there are a couple of alternatives built on the same foundation as Raven, namely Rake, but with a different philosophy.
The first alternative would be Antwrap. I wouldn't actually consider it a replacement for Raven, so much as a very good complement. It lets you reuse all existing Ant tasks that have already been created, but with a much nicer syntax than XML. So, you could use Raven for everything that's already included and Antwrap when an existing Ant task does what you're looking for, all within the same script.
The second tool is Buildr. It's an Apache Incubator project and completely overlaps with Raven, so it could be a total replacement. The difference is in the philosophy: Raven is imperative, asking you to write how to build your project; Buildr is more declarative, letting you specify what your build looks like. Said differently, those of you who prefer the style of Ant over Maven will prefer Raven, while those who are more drawn to the Maven model will probably find Buildr more appealing. And I don't see this as a problem; software is also a matter of preference and taste, and you should just use the tool that makes you most comfortable.
Conclusion
In this article, you've learned how to write a build script for an existing Java project using Raven. You've seen how to handle dependencies, compile, package, and do all the tasks necessary to most Java software builds. However, there's much more to Raven than what I've explained in those lines, especially in the dependency management area. I encourage you to continue exploring, using the Raven web site and book (see references) to discover more. And hopefully you'll find interest in Rake and the Ruby language as well.
Beyond Raven, I hope you'll start being more demanding of your build system; a rich scripting environment should be the minimum. Too much time has been wasted writing XML.
Resources
The source for the Rakefile detailed in this article.
Raven distribution, download the pre-packaged JRuby one for easy installation.
Apache Commons Net to download the source distribution built in the article.
Raven's web site, with more information and examples.
The Raven Book, a definitive reference.
Rake documentation.
Antwrap
Buildr
Matthieu Riou has been a consultant, freelancer, developer, and engineer for a wide variety of companies. He's also a Vice President at the Apache Software Foundation and has founded several open source projects.

Introducing [fleXive] - A Complementary Approach to Java EE 5 Web Development

by Markus Plesser and Daniel Lichtenberger 05/01/2008
The daily bread and butter of an architect or developer dealing with web applications usually consists of a great many repetitive tasks. These start with setting up a development environment, choosing and downloading libraries (or letting tools like Maven download them), creating basic build scripts, and wiring up all necessary components. After some time, a naked skeleton for a web application is ready and waiting for further coding. While these steps are easy and can be efficiently handled by automation tools, other tasks like managing users, choosing a viable form of persistence (file-based, JDBC, Hibernate, JPA, etc.), and implementing security for your sensitive data will still require a lot more time and effort.
There are many solutions out there that deal with some of these issues, but in most cases with drawbacks. Ruby on Rails, for example, is great and works well, but may not have corporate penetration, especially if a Java or .Net platform is already a company standard. We won't delve into the .Net world -- it is a quite different situation from your typical Java environment -- but looking at Java and especially Java EE, a web application will in most cases use JSF as its web framework, and the choice of persistence framework will usually be Hibernate or JPA (which in some application servers is implemented using Hibernate). Depending on the use of scaffolding tools, you'll soon have some very basic forms to create, read, edit, and delete data instances.
So far it has been pretty straightforward -- now imagine you also need authorization and authentication -- not only to be able to use (and hence see) data from your application, but even more to restrict access in a finer-grained way than the usual "all or nothing" approach. You'll soon end up coding your own custom-tailored mini security framework, maybe based on established open source libraries like OSUser or Acegi coupled with some JAAS code.
Over the years, the authors have done the same tasks over and over again. We learned a lot -- in particular about the capabilities of, and the effort needed to integrate, various libraries, as well as their major advantages and drawbacks -- and came up with a list of requirements for a framework:
Built-in security, from authentication to fine-grained authorization
Datatypes with inherent support for multiple languages
Versioning
Hierarchical data structures
Support for workflows
Every little bit of the framework should be scriptable
No vendor or technology lock-in
Interoperability with other applications
Figure 1. [fleXive] core components
At its heart, [fleXive] is a pure Java EE 5 application. The core is made up of EJB3 beans that share common state and configuration through a clustered cache (with out-of-the-box support for JBoss Cache 2.x and pluggable interfaces that could be used for other providers like GigaSpaces or Coherence), while the web layer is based on JSF using Facelets, RichFaces/Ajax4jsf, and the Dojo toolkit. As a persistence alternative to JPA/Hibernate (which can of course be used as well), [fleXive] comes with its own persistence implementation offering advantages like integrated ACL-based security, versioning, support for multilingual data types, inheritance, and reuse. The persistence framework is not intended as an object-relational mapper; rather, it provides generic objects whose instance data is accessible through XPath-like statements or by traversing object graphs.
All these so-called engines (implemented as Enterprise JavaBeans) can be used in your project. [fleXive] supports you by creating application skeletons in which you just have to implement your business logic and can use some of the pre-made JSF user interface components, while giving you the freedom to use whichever Java EE 5-compatible library you wish.
Figure 2. [fleXive] support for writing applications
A big advantage of using [fleXive] is the powerful and extensible backend application, where you can model your data structures, manage users and security, visually create queries, store search results in so-called briefcases, or edit your data instances.
While designed and written from scratch, [fleXive] builds on mature, proven concepts dating back to 1999. Originally intended as a framework for content management systems, it grew into a feature-rich, multi-purpose framework incorporating state-of-the-art open source projects and tools.
Not everything is perfect yet, and some features (like import/export and web service support) are still in the works, but the majority of the framework is very stable and solid and will soon be ready for production use. Since we at UCS (unique computing solutions gmbh), the company sponsoring [fleXive] and responsible for its development, believe in open source and "give and take," we decided to release the whole framework under the LGPL v2.1 or higher.
A backend application showcasing most of [fleXive]'s features, built on top of the framework, is licensed under the GPL v2 or higher. It helps you visually manage most aspects of [fleXive], like defining data structures, building queries, managing users and security, etc.
And while we are currently the only ones maintaining and extending [fleXive], we certainly do hope for positive feedback, feature requests, and helping hands when it comes to development and documentation from you, the community, to make [fleXive] a valuable choice for upcoming web applications.
We tried not to reinvent the wheel, but to make it easier and faster to develop web applications using up-to-date technology, provide means to extend the framework using plugins, and provide a backend administration application that is ready to use and can easily be adapted to your needs.
Current development snapshots and "Release Candidate 1" are available for download at http://www.flexive.org/download; the final release will hopefully follow soon, once [fleXive] is feature-complete and more or less bug-free. For further information, please have a look at the roadmap.

Does Enterprise Development Have to Be Painful?

by chromatic 02/28/2008
Despite the buzz about social networking, mashups, collaborative filtering, machine learning, and everything else grouped under the convenient label of Web 2.0, writing business software seems to be business as usual: push messages around, present data entry screens, produce reports, and occasionally make people's work easier by automating repetitive tasks. I fled corporate IT in 2000, believing that business software—especially "enterprise software"—is bulky, complex, and uninteresting.
It can be. Enterprise-wide software must be reliable and fault-tolerant. That's not simple or easy or even fun to build. Unless you have the time and resources and talent to write and maintain and deploy your own completely custom software (who does?), you use generalized software packages and adapt them to your business. Only the generality of such a framework offers the potential for customization... at the cost of complexity.
Recently, Tim O'Reilly spoke at SAP's Tech Ed Conference. He found inspiration in subsequent conversations, and wrote SAP as a Web 2.0 Company?. SAP Labs invited other O'Reilly folks to see what they're working on and to ask for advice on how to engage the large community of SAP users, developers, and consultants more effectively.
I went there, and met Will Gardella (see SAP's Composition on Grails). His work convinced me that my perception of SAP and its software was incomplete. While there's still necessary complexity in producing robust, reliable, business-wide and business-critical software, writing that software does not have to be an exercise in tedium. Will, Moya Watson, and the other people I met actually live that idea.
The team at SAP made me an offer. If I would give their software a fair try and write about my experience installing it, learning it, and building a couple of modest sample applications, they'd give me all of the support I wanted. We decided that the right approach was to explore the software behind Will's Groovy on Grails, so I agreed to install and explore the SAP development environment called SAP NetWeaver Composition Environment, or SAP NetWeaver CE.
Why does this matter?
Business software isn't going away. If you're a consultant or a small ISV, you probably make money writing, customizing, and maintaining software of this sort. Maybe your platform isn't J2EE or ABAP, but learning an extra tool and platform gives you and your customers more options.
Most of the components in this stack are at least open standards. Some are free and open source software. You can interact with a SAP installation through SOAP/WSDL, with Groovy, and as a J2EE provider. SAP NetWeaver CE itself is an Eclipse-based IDE. These are well-established and well-understood technologies, not a proprietary concrete jungle.
It's good to learn something new. I haven't done serious Java development in several years. Most of my recent programming is low-level, cross-platform C code. Stretching my brain and switching away from my Vim, GCC, Valgrind, and GDB habits helps me grow as a developer.
Good development habits and good ideas come from all over. SAP NetWeaver CE and some of its tools encourage a nice separation of concerns that, applied well, appears to allow a rapid yet robust approach to developing and deploying applications. I've built MVC applications in several languages, but it's nice to see it encouraged as well as it is here.
First Approach
My initial impression was, "This software sounds great, if you're an expert already." Will and Moya have built impressive systems, but they're experienced SAP insiders. I'd have to start from zero, relying only on a decade of experience building software, mostly in different realms.
I'd long heard that installing and configuring SAP was complex. Thus, downloading and installing the SAP NetWeaver Composition Environment was my first milestone. Once I'd accomplished that, I could survey the landscape and review my initial impressions. Even discovering what I needed to download took some time, so I gave myself a week. I relied heavily on Armand Wilson, a consultant within SAP, for advice over email (and once, via telephone and a shared desktop) to resolve at least one troublesome problem.
None of the machines in my office were suitable installation candidates. I convinced O'Reilly IT to loan me a spare ThinkPad with a 1.5 GHz Centrino CPU and 2 GB of RAM—and, most important, a fresh Windows XP installation. A virtual machine image will, apparently, not do the trick, even on a monster multi-core 64-bit Ubuntu development box.
Installing the SAP Server
Armand told me to download the SAP NetWeaver CE Trial Version from SAP NetWeaver Composition Environment Downloads on the SAP Developer Network (SDN). This file is really big; it's a RAR file more than a gigabyte in size. I never successfully completed a download on the ThinkPad due to a combination of wireless networking and server cancellations.
After several abortive attempts, I downloaded the file on a Linux machine thanks to curl and resuming downloads, extracted the archive there, and used rsync to copy all of the files from the Linux machine to the Windows machine.
This gave me a directory that included an HTML file called Start. I launched the HTML file and skimmed the instructions. An installation link in the sidebar prompted me to download or save an executable file named sapinst.exe. I launched it myself from JavaEE\CE71_03_IM_WIN_I386_ADA\sapinst.exe.
Unfortunately, the whole directory path had spaces in it, so the installer refused to run. I moved the top folder to C:\ and this time the installer launched successfully. It offered only a few prompts: accept the license, specify a SAP system ID (I kept the default of CE1).
The next step asked for my JCE unlimited strength jurisdiction policy archive. I didn't have one, so the installer refused to proceed. I found the JCE on Sun's Java downloads site. I extracted the JAR file from the ZIP, and gave the installer its path. That didn't work either. When I gave the installer the full path to the ZIP file, it proceeded.
Next, it asked for a master password for the server. This step gave me some trouble. The installer rejected my first attempt, a strong password with non-alphanumeric characters. I wondered if it only allowed alphanumerics, then finally read the password directions and realized that my password was one character too short. I've spent too much time working around bad password systems to trust that any password system could actually work well.
I wrote down the password. I had a feeling I'd use it later.
The installer then scanned my system and helpfully reported that I had 2047 megabytes of memory and the minimum recommendation is 2048. I risked it, as I couldn't find a spare 1 MB stick and an empty slot in the ThinkPad. The installer purred through all 33 installation phases.
I told Armand that I thought I'd completed things on my own. He asked the innocent question, "Is the SAP server running?" He told me to launch the management console to verify that both little icons in the left tree under the SAP Root and CE1 were green. They looked green to me, but when we looked in the Process List entry under both icons, neither service was actually running.
After more scrambling, I noticed that I had two SAP management consoles running. I closed both consoles and waited for a moment, then launched only one console and attempted to start both services. Twenty minutes later, they had both started. Step one was complete. With a better download system than I have—and existing hardware—you should be able to install an SAP server in two hours. Read the installation requirements better than I did, and you should have no trouble.
Installing the SAP NetWeaver Composition Environment with IDE
Step two was to install the developer components, including the Eclipse-based IDE. I went to the same SAP download page as before and downloaded only the Developer Studio, which seemed slim at 680 megabytes. This was a mistake. I needed the 1.2 gigabyte Composition Environment download. The smaller download lacks the Composition Environment plugins for Eclipse. Unfortunately, I only discovered this when I started to build applications, and I found no good way to install the plugins separately. I had to uninstall and reinstall, but that only took a few clicks and some time.
The downloading process was again painful, but the Linux/rsync approach worked fine. Installation proceeded until the installer tried to find JDK 1.5.0_06 or better on my machine.
I knew I had one installed, but I couldn't find it either. After another trip to Sun's download page, I had installed the entire JDK. I even included the optional parts I knew I didn't need, as my instincts had already led me astray enough.
The installer ran for an hour and then wanted to connect to the Internet to perform more updates. I let it update everything.
With the proper package downloaded and installed, I was ready to write applications—starting by working through the example code Armand provided.
Conclusions
The two most difficult parts of the installation process, for me, were almost entirely external. One was getting the right hardware, and that's because of a little scramble in our IT department right before the Christmas holidays. The other problem was getting a reliable download onto the ThinkPad. If I had a much faster Internet connection, or if I were more familiar with the Windows tools for managing long, potentially-interrupted downloads, that process might have been easier too. I spent most of a work-week downloading the software.
Both installations were time-consuming, but not troublesome. Paying more attention to the installation instructions, particularly the dependencies, would have saved me time. I did search SDN for some error messages and workarounds to see if I could find solutions for any problems I encountered, but I seem to have avoided any serious troubles not of my own making. Even the one problem I had with SDN (an invalid download link from a Wiki page) saw a very quick fix from Moya Watson.
I have a lot to learn to write applications with SAP NetWeaver CE, but I'm past the first hurdles, and that gives me a lot of confidence that things will make a lot of sense in context from here. While the download-and-go score is much less than the simple aptitude install build-essential I normally use on a new machine, the immediate out-of-the-box capabilities are greater. Writing software this way may be much easier than I thought.
chromatic promotes free and open source software for O'Reilly's Open Technology Exchange.

Does Enterprise Development Have to Be Painful? (Part Two)

by chromatic 05/07/2008
As I mentioned in Does Enterprise Development Have to be Painful, Part One, I've been exploring the world of enterprise software development with SAP NetWeaver Composition Environment (after this, SAP NetWeaver CE), as part of a challenge from SAP Labs to see how much I could accomplish with minimal training and direction (though with the offer of assistance from one or two of their consultants if I managed to get myself completely stuck).
I decided that my best approach would be to build a simple, self-contained application with their system, writing as little code as possible and using as many of their tools as I could. I settled on a tiny task-tracking application, in which a task has a due date, a description, and an associated category. The entire application consists of two models, their business logic, and a user interface. SAP NetWeaver CE provides plenty of tools to build, manage, and deploy these types of business objects and their relationships, so I thought this would be a good basic experiment. This is a standard CRUD-style application, where the code needs to Create, Read, Update, and possibly Delete data.
Is it too basic? Perhaps; if this were the only type of application I ever built, SAP's tools are definitely overkill. I don't need clustering or monitoring or failsafe deployment and rollback to keep track of what I need to do in a day. However, it was the minimal application I could imagine that exercised most of the parts of the system that a real application would actually use. In building and deploying the task tracker, I performed the work that a real team would perform when building and deploying a much larger application. I just didn't have to invest several months to design and build such an application. My design took me a day, and I figured that building the application myself should take a couple of ideal calendar days.
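For orientation, here is a minimal sketch of that data model as plain Java classes. The Task and Category names come from the article, but these classes are illustrative stand-ins only, not the business-object beans that the Composition Environment generates from the models.

import java.util.Date;

// Illustrative stand-ins for the two modeled business objects.
class Category {
    String name;

    Category(String name) {
        this.name = name;
    }
}

class Task {
    Date dueDate;          // a task has a due date...
    String description;    // ...a description...
    Category category;     // ...and an associated category

    Task(Date dueDate, String description, Category category) {
        this.dueDate = dueDate;
        this.description = description;
        this.category = category;
    }
}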
What's in SAP NetWeaver CE
SAP NetWeaver CE has two main parts. The first is a server component that represents the large database, cluster, services registry, user management, and central configuration of an enterprise-wide installation. For the most part, I ignored it except to make sure it was running and to perform a few configurations. The second part of the system is an IDE built on Eclipse. If you're at home in Eclipse or another IDE and don't mind performing some visual modeling instead of writing heaps of code yourself, you will find the IDE very comfortable.
This modeling was simple. Although I like opening a text editor and writing some declarative code to tell an object-relational mapper the structure of my database (or to make that tool generate my schema for me), the model design tool in the IDE was easy to use. I didn't have to think about creating tables or choosing column types or optimizing data for JOIN operations. If the abstraction holds through my application's lifecycle, I won't have to worry about versioning or migrating data between schema changes.
Declaring my Task and Category models was as easy as creating new business objects and selecting from menus of available attribute types. Although mousing around was probably slower than typing the corresponding short declarations in a text editor, there's enough metadata slinging happening behind the scenes that I didn't perceive any mild inefficiencies in the UI; it was doing enough of the other work for me.
This was the easy part of the process, and with the help of one of the built-in tutorials, I had two models built and associated very quickly.
Modeling Business Objects
The word model should make you think that these models contain business logic. They do. However, this is where I first ran into trouble. Models have operations -- business logic -- and the IDE gives you an easy way to declare them. For example, I wanted an operation which returned a list of all open tasks and another operation that returned a list of all tasks for a given date. You can create an operation which filters the entire collection of model items on a particular attribute, but apart from creating some metadata (I assume hidden somewhere) and adding a method stub to the generated Java Bean for your model, nothing else happens. You have to write code that uses the appropriate SAP Java APIs to perform this filtering. The help system has some information on how to write query filters, though it is unclear. (Likewise, the tutorial example provided is missing code and writes to a deprecated API.)
As with the basic model structure, you model operations in the IDE, selecting the input and output types (both provided by the Composite Application Framework and modeled explicitly on your own) as well as any exceptions that the operation might throw. The IDE generates methods on your model beans for you, but only signatures and empty implementations that return null. It's up to you to implement the rest of the code. One of my initial experiments was to create an operation that returned a collection of all of the open tasks by filtering out all tasks with an open status. I originally modeled it believing that taking the status type as an input parameter was the right approach, but it appears that creating a non-parameterized filter in the body of the method is correct.
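For a sense of what such an operation body boils down to, here is a minimal plain-Java sketch of a non-parameterized "open tasks" filter. It deliberately avoids the SAP CAF query API (which, as noted above, is thinly documented), so the Status enum and method names are illustrative assumptions rather than the generated bean's actual signatures.

import java.util.ArrayList;
import java.util.List;

// Illustrative only: a hard-coded (non-parameterized) filter over task instances,
// standing in for the logic the generated operation stub leaves empty.
class OpenTaskFilter {

    enum Status { OPEN, DONE }

    static class Task {
        String description;
        Status status;

        Task(String description, Status status) {
            this.description = description;
            this.status = status;
        }
    }

    // Return every task whose status is OPEN; the criterion is fixed in the body.
    static List<Task> findOpenTasks(List<Task> allTasks) {
        List<Task> open = new ArrayList<Task>();
        for (Task t : allTasks) {
            if (t.status == Status.OPEN) {
                open.add(t);
            }
        }
        return open;
    }

    public static void main(String[] args) {
        List<Task> tasks = new ArrayList<Task>();
        tasks.add(new Task("Write article", Status.OPEN));
        tasks.add(new Task("File expenses", Status.DONE));
        System.out.println(findOpenTasks(tasks).size() + " open task(s)");
    }
}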
Producing a UI
I set aside the notion of finding the most correct and purest design in favor of getting the back-end model to communicate with a front-end UI, specifically through the use of Visual Composer. Visual Composer is a UI-builder with intelligent widgets configurable almost entirely through a drag-and-drop interface built with SVG and other web technologies. There's no code required. Visual Composer can consume web services if you have them deployed properly, which means you need to produce a valid WSDL file and publish it somewhere that Visual Composer can access it.
I had trouble with this step. There are several different ways to expose business models as web services. The models' context menus, available from the project navigator, give you the option of exposing them directly. You can also model services with their own operations, apart from your business models. I assumed that providing an application service would be the proper approach; however, all of the tutorials and documentation I saw described only the configuration of application services and again gave very little information about what the body of the generated methods should contain. I'm comfortable writing business logic, but I didn't find a good reference for the types of operations most often found within these methods, nor for the preferred and current APIs provided for performing this logic.
Although the generated business models all have CRUD methods provided to create, read, update, and delete business model instances, the generated application service has no operations by default. I didn't see an easy way to link in the operations of the business models. Presumably it is possible to expose those operations directly, or to wrap them in the application service. I can understand the organizational principle of modeling business objects and providing different API bundles for different types of applications, but the enforced striation seemed excessive for my very simple purposes. (In larger projects, it's likely very important.)
I decided to expose the Task model's operations directly as a web service. Configuring and registering this web service with my SAP server for Visual Composer's consumption was the most complex part of this process. My contact, Armand, walked me through testing the web service from the IDE (which launches a web service browser), configuring the web service inside the IDE, deploying the service to the SAP NetWeaver server, and creating a destination for the web service in the SAP NetWeaver Administrator. At that point, we restarted Visual Composer, and I was able to see my web service as a data component within Visual Composer. Since then, I've learned that you can right-click on the services search widget within Visual Composer to refresh the services cache without restarting the system.
After all of that, building a UI with Visual Composer was simple. Visual Composer presents a few menus of widgets, including buttons, table lists, and input boxes. Because the WSDL describes the remote calls and argument types, you can easily connect a UI widget with the proper parameters so that input and output display properly. You can consume several web services in a single form; one view of the UI shows logical relationships and data flow between widgets and services, and the other is a layout view, which allows you to rearrange the actual view of the UI.
Yet More Than One Afternoon
With everything working together, I had finally achieved my write-test-debug cycle. Even though my actual code is minimal, my web services are small, and my operations are few, the cycle is not fast. My under-powered laptop running the SAP server, the IDE, and Visual Composer takes several minutes to generate, build, and deploy a new version of my web service to the J2EE server, and Visual Composer takes a few minutes to start. The effective cycle of experimentation is by no means instantaneous or cheap. If you're interested in performing similar experiments, I cannot recommend highly enough browsing through an existing non-trivial application to get a feel for how the components connect. The better your understanding of the pieces and their relationships, the less time you'll have to spend backtracking and redeploying to correct your mistakes. Experimenting on your own from scratch is very time-consuming. I also recommend a high-powered development machine -- or better yet, a separate machine for the server and another for the development station.
Having built a trivial application, I see the power of this system. It took me much more than an afternoon to put things together correctly the first time, but reproducing my results even on a new project will be easier. Except for the initial system configuration and deploying the web service, the only difficult or time-consuming steps of the process were those for which the available documentation is skimpy or absent.
My next task is to bundle the application for deployment and distribution. That's the subject of my next article.
chromatic promotes free and open source software for O'Reilly's Open Technology Exchange.