
Sunday, November 23, 2008

The Supply Chain of Continuous Integration

When I was first introduced to Continuous Integration, I viewed it as a black box with a well defined interface. It was the same kind of throw-it-over-the-wall mentality that some people have with testing: when the code is "done", give it to the testing shop and mark the checkbox complete. It may or may not return with feedback attached.

I got a different perspective on how CI should work, however, while working on a CI team. More than a little of this new perspective is probably due to working in a company that believes in and practices agile development and scrum management. With agile, we are much better at involving testers in day-to-day activities rather than relying on the cold baton hand-off as sprint review time rolls near.

What got me thinking about a CI supply chain concept was the story from The World Is Flat about how UPS has evolved from a package delivery company into an integral part of many companies' day-to-day business operations. They started with a core business of picking up and shipping packages. But it turns out there are inefficiencies to this bolted-on approach. For the shipping to be efficient for both UPS and the contracting company, the entire supply chain had to be prepped in advance, both pre- and post-shipment. The visibility into business processes had to be bidirectional.

I think the same is true with continuous integration. It comes back to the old computer adage GIGO: Garbage In, Garbage Out. Consider a team that writes plenty of code but no tests: what is the value of CI for them? The feedback is minimal. To get the most value out of an automated build system, there must be some forethought into what kind of quality feedback you want to see, and then into what teams can do to define and integrate the right reporting tools into the build.

For example, even after automated tests are written and incorporated into the build, you may want feedback on other quality measures of your code. In our case, we wanted Checkstyle and PMD reporting on our Java code. We use maven as our build tool, so adding the reporting to our builds was simple. But then the question becomes: what coding standards do we want to compare against? What PMD rulesets represent a sane minimum that teams can deal with and learn from at the same time?
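
As a concrete starting point, a sane-minimum ruleset might look something like this sketch (the ruleset name is made up; the referenced rulesets ship with PMD's standard distribution):

<?xml version="1.0"?>
<ruleset name="team-minimum">
  <description>
    A starter set teams can live with while they learn: obvious bugs,
    dead code, and import hygiene only.
  </description>
  <rule ref="rulesets/basic.xml"/>
  <rule ref="rulesets/unusedcode.xml"/>
  <rule ref="rulesets/imports.xml"/>
</ruleset>

Starting small keeps the initial violation count manageable; stricter rules can be phased in once teams are comfortable.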

So now a team dedicated to providing CI services is recommending quality reports to both developers and stakeholders AND helping to define the rules that code should be compared against. My first reaction to this approach, since it deviates from the traditional CI definition, was a sense of invasiveness. Upon further reflection, I have embraced the blurred lines between teams for several reasons. First, it breaks artificial team boundaries and keeps communication lines open. CI is no longer a black box; it is a visible, value-contributing component of system development. Second, developers won't (at least not consistently across an organization) take the time to inject QA into their processes. A CI team, however, does have the time to take the first steps and get the ball rolling. The conversation is more constructive when you have something in place actually working than when there is just a bunch of talk about what could be.

I think a simple diagram emphasizes the point. If "A" is a development team, "B" is a CI team, and "C" represents the stakeholders of the solution being developed, then the shaded areas pinpoint a missed opportunity unless you adopt the supply chain analogy. These are not hard control points, but juicy overlap, waiting to be optimized as AB and BC work together.

So far, I've talked about the pre-shipment benefit leading into CI. I now believe the supply chain post-CI is equally important. That is, where does the feedback go, and how do you make it so easy that there is no reason not to use it? In our case, this encompassed two audiences with two different needs.

1) Developers. They need the low-level detail: test results, test coverage, Checkstyle and PMD reports.

2) Stakeholders (Project Managers, Solution Owners, Scrum Masters). They need to know that the developer teams are using the system. What is the source control commit frequency? How long do CI builds stay broken? How long are the automated builds taking? In short, the interest is in whether they are getting their money's worth on the CI investment and whether the teams "get it". It provides them with just enough information to start asking the right questions of the right people.

So the supply chain from CI into management visibility, in our case, ended up being an enterprise-level portal aggregating project CI metrics into an at-a-glance view of how well their agile teams are performing. This could be stoplight charts on current CI conditions or simple graphs of build times or number of tests plotted over the last 30 days.

As always seems to be the case, the benefit of this proposed supply chain approach is increased communication.

Again, this initially seemed to stray from traditional core CI functionality, but in reality it is simply providing the visibility into the process that scrum promises. Warts and all.

Monday, April 21, 2008

Maven SNAPSHOT Traceability

So your scrum team is creating non-unique SNAPSHOT versioned artifacts throughout your sprint. How do you trace that SNAPSHOT version back to a baseline of any QA relevance (build number, subversion revision, datetime stamp)?


<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-antrun-plugin</artifactId>
  <executions>
    <execution>
      <phase>generate-resources</phase>
      <configuration>
        <tasks>
          <tstamp>
            <format property="now" pattern="MM/dd/yyyy hh:mm" unit="hour" />
          </tstamp>
          <property name="build.version" value="${version} (private-${now}-${user.name})" />
          <property name="hudson.build" value="hudson-${BUILD_NUMBER}, subversion-${SVN_REVISION}" />

          <!-- put the version file into the exploded webapp -->
          <echo message="The build id is: ${build.version}" />
          <mkdir dir="target/${project.build.finalName}/" />
          <echo file="target/${project.build.finalName}/version.properties">version=${build.version} ${hudson.build}
</echo>
        </tasks>
      </configuration>
      <goals>
        <goal>run</goal>
      </goals>
    </execution>
  </executions>
</plugin>

...

<profiles>
  <profile>
    <id>release</id>
    <properties>
      <!-- for releases, just use the POM version. -->
      <build.version>${version}</build.version>
      <hudson.build></hudson.build>
    </properties>
  </profile>
</profiles>


This little piece of magic creates a version.properties file that contains some valuable information.

version=1.1.0-SNAPSHOT (private-04/21/2008 02:23-jblack), hudson-453, subversion-1124

When you do a formal release, specify the -Prelease profile to have this file simply hold the pom version.

For the plugin configuration above, this file gets put in the root of a war project's webapp directory, suitable for immediate viewing from your browser!

The mad props for this idea go to Kohsuke. :)

Tuesday, December 4, 2007

Continuous Integration Strategies (Part III)

How do you get your code to talk to you? Continuous integration is all about automated feedback. Beyond test reports and self-describing code, there are many techniques and tools you can use to find out whether you are producing "quality" code.

When you pick the right reports and integrate them into the software build, the level of effort required to use these tools becomes very low (and we all know that developers are legendarily lazy).

Ideally, you should not have to remember to ask, the code should tell.

It's like being the parent of a teenager. At the dinner table you ask,

"So, how was school today?"
"I dunno."

"What did you learn?"
"Nothin'"

"Did anything interesting happen?"
"I dunno."

"How did you do on your history test"
(shrug)

You would rather he came home with the enthusiasm of a kindergartener, eager to tell you about his day as soon as he gets home. With no effort on your part to ask, he just gushes unprompted,

"Dad!! Guess what happened today at school?? It was so cool, I got an A+ on my spelling test!!"
That's what continuous integration can do for you.

So, recalling the computer adage Garbage In, Garbage Out, we have carefully picked a select few reports that we feel give us a good measure of quality and integrated them into our build so that the feedback from CI is meaningful. Although several CI engines (we currently use Hudson) have plugins to generate quality reports such as unit test results and test coverage, we feel it is important that developers have full access to run all the same reports that CI will run. Since we run maven, it is easy to plug the reports of interest into the <reporting> section of the pom (a sketch of that section follows the rundown below). That keeps the report configurations in source control along with the code.

So which reports do we run? Here's the rundown:

  • Checkstyle. Validates source code against coding standards and reports any violations. We customized the ruleset, packaged it in a versioned jar and deployed it to our maven repository. Additionally, we forced it to run as part of every build in the validate phase, and configured it to fail the build if violations are found. It's possible for an individual project to override the custom ruleset, but we don't encourage that.
  • PMD. Performs design time analysis of the source code against a standard ruleset. We customized the ruleset, packaged it in a versioned jar and deployed it to our maven repository. Additionally, we forced it to run as part of every build in the validate phase, and configured it to fail the build if violations are found.
  • Test Results (surefire). This is a no-brainer. You want to see the results of your tests.
  • Code Coverage (cobertura). Shows branch and line coverage for your source files. I think the key to using this metric is not to set a percentage that the project team must meet, but to be smart about interpreting the trends. "Metrics are meant to help you think, not to do the thinking for you."
  • Javadoc. Creates the API documentation for your project.
  • Dashboard. Aggregates maven multiproject build reports into a single report page. This is critical so that developers and stakeholders don't have to hunt and drill down page after page to find the meaningful metrics you worked so hard to set up.
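
Wired into the pom, that rundown might look something like the sketch below (the plugin list is abridged, and the ruleset file names are placeholders for the ones resolved from our versioned rules jars):

<reporting>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-checkstyle-plugin</artifactId>
      <configuration>
        <!-- resolved from our shared, versioned ruleset jar -->
        <configLocation>our-checkstyle-rules.xml</configLocation>
      </configuration>
    </plugin>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-pmd-plugin</artifactId>
      <configuration>
        <rulesets>
          <ruleset>our-pmd-rules.xml</ruleset>
        </rulesets>
      </configuration>
    </plugin>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-surefire-report-plugin</artifactId>
    </plugin>
    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>cobertura-maven-plugin</artifactId>
    </plugin>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-javadoc-plugin</artifactId>
    </plugin>
  </plugins>
</reporting>

The fail-the-build-on-violations behavior mentioned above lives in the <build> section rather than <reporting>; for Checkstyle it is roughly this (PMD is wired up the same way via its check goal):

<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-checkstyle-plugin</artifactId>
      <executions>
        <execution>
          <!-- run on every build, before compilation -->
          <phase>validate</phase>
          <goals>
            <goal>check</goal>
          </goals>
        </execution>
      </executions>
      <configuration>
        <configLocation>our-checkstyle-rules.xml</configLocation>
        <failOnViolation>true</failOnViolation>
      </configuration>
    </plugin>
  </plugins>
</build>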


In addition, make full use of Maven's pom to declare all the sections that feed the default generated site, including:

  • <scm> Source control management. New team members, for example, will need to know the subversion URL for checkout.
  • <developers> Identifies subject matter experts and feeds developer activity reports.
  • <ciManagement> Identifies the CI engine being used and the URL to see the live status and force new builds.
  • <issueManagement> Identifies the issue tracking system. I think there are maven plugins that will map issues to source control commits, providing bi-traceability of code to requirements.
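
Filled in, those sections are only a few lines each. A sketch, with placeholder host names and ids:

<scm>
  <connection>scm:svn:https://svnhost/svn/myproject/trunk</connection>
  <developerConnection>scm:svn:https://svnhost/svn/myproject/trunk</developerConnection>
  <url>https://svnhost/svn/myproject/trunk</url>
</scm>
<ciManagement>
  <system>hudson</system>
  <url>https://buildhost/hudson/</url>
</ciManagement>
<issueManagement>
  <system>jira</system>
  <url>https://jirahost/browse/MYPROJECT</url>
</issueManagement>
<developers>
  <developer>
    <id>jblack</id>
    <name>John Black</name>
    <roles>
      <role>developer</role>
    </roles>
  </developer>
</developers>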


Also, don't forget, for maven multiproject builds, to review the Dependency Convergence report. It shows all dependencies for all projects along with the versions of those dependencies. This will help you find dependencies you are inadvertently using multiple versions of.

Once you have these reports baked in, make the results easy to find. Use your CI engine to generate the maven site on a nightly basis and publish the results to a web server where developers or project stakeholders can find them.

After you spend the time and effort to identify which reports you want and get them configured and working correctly, make it repeatable by creating an archetype template project. This sets up a model pom.xml and project directory structure for new projects right off the bat. When it's there from the start, with no effort on the team's part, good things happen.

With a little up-front effort, your code (with a little help from continuous integration) can talk to you. How do you use CI to reveal code quality? I would be interested in hearing your strategies.

Wednesday, October 31, 2007

Continuous Integration Strategies (Part II)

Your CI environment is reporting a broken build. Now what?

I would like to stress that the faster you jump on the problem, the easier it is to solve. The changeset will be smaller, and the person who most likely committed the offending code will have the changes he made freshly in mind.
It is a good policy that your team does not commit any additional changes to source control until the build is fixed.
At times, it is too easy to ignore a broken build message, whether it is an email notification, a flashing red light, or a lava lamp. Sometimes the team will assume someone else is working the issue. I always recommend immediate and active communication that the problem is being worked, so that positive control is maintained.

There were some additional words of wisdom that recently circulated here that I would like to share with you. Props to Chad for nailing the importance of the entire team owning CI and having active communication about its status. The emphasis is mine.

Make sure you're at least running unit tests before you commit. You can also have a buddy immediately update and build if you want feedback before waiting for Cruise Control to build. Also, after you commit, watch your email for a notification that the build has broken. If you're not going to be around, use a check-in buddy as mentioned previously.

If you've just committed a change and receive a build failure notification email, look into what's causing the problem asap. If it's a quick fix, just make the change and re-commit. Optionally reply to the build failure notification so that the team is aware you're putting in the fix. If it's not a quick fix, reply to the build failure notification stating that you're working the issue; again so the team is aware.

...The build is everyone's responsibility.

If you see that the build is remaining broken for a period of time, take it upon yourself to investigate. Find out if anyone is working the issue. If not, try to identify the problem and notify the party responsible so that the build can be fixed quickly. Now if you can't figure out what's wrong and identify who is responsible, take it upon yourself to fix the issue. If you are too busy, find someone that can. If you can't figure it out, ask for help. Use it as an opportunity to learn something. When you do finally find the problem, let the responsible party know what happened. Then they can learn something as well.

To sum it up, there shouldn't be any duration of time where the build is broken but isn't being looked into. While everyone should be watching after they commit to see if they've broken the build, it won't always get caught. You shouldn't be saying to yourself "I didn't break it so it's not my job to fix it".
And that's the word.

Friday, October 19, 2007

Continuous Integration Strategies (Part I.I)

At the end of each successful continuous integration build and test suite, we label the workspace with a certified build tag within source control. This allows for bi-traceability from build sequence number to the tag name for QA purposes. Additionally, we can also do a simple lookup on the build number in Hudson to get a subversion revision number.

Below are two examples of how we have accomplished this.

Maven 1 with CruiseControl and CVS:

We wrote a custom jelly goal that called ant's cvs task.

<goal name="nct:createcertifiedtag">
<ant:cvs command="tag certified-build-${label}" />
</goal>

Notice the "label" property. Cruisecontrol provides maven that property to use at runtime with the value set to the build number. We use this custom goal at the end of the cruisecontrol project's maven goal element:

<maven projectfile="${PROJECT_ROOT}/project.xml" goal="clean install nct:createcertifiedtag" />

Maven 2 with Hudson and Subversion:

In the maven pom.xml (or the parent pom.xml), specify all the source control details so things become easier later. For example, our build includes:

<scm>
  <connection>scm:svn:https://svnhost/svn/sto/trunk</connection>
  <developerConnection>scm:svn:https://svnhost/svn/sto/trunk</developerConnection>
  <url>https://svnhost/svn/sto/trunk</url>
</scm>

Hudson provides maven a "hudson.build.number" property at runtime, populated with the build number. We use it by referencing it on the Goals line in the Hudson job configuration. Additionally, we made an improvement over calling the external 'svn' process by using the maven 2 SCM plugin:
clean install scm:tag -Dtag=certified-build-${hudson.build.number} 


[update: ${hudson.build.number} seems to be buggy. I have successfully used ${BUILD_NUMBER} in its place]

Thursday, October 18, 2007

Continuous Integration Strategies (Part I)

Continuous integration is a powerful concept, usually associated with only compilation and unit testing. However, there is additional benefit to be had if you look beyond unit testing. I would like to present some strategies I have tried that allow full suites of tests to be run in orderly stages, from unit tests to integration tests to acceptance tests. For this series, unit tests are defined as single-class tests with no external dependencies on network or container resources, integration tests are white box tests of class interactions, and acceptance tests are black box system tests.

This first post on the subject deals with strategies using maven and cruisecontrol. Later posts will move on to maven 2 and hudson.

First off, a lesson learned. When we first migrated from ant to maven, we were not sure how best to configure cruisecontrol to handle CI. Our code base is a large selection of components that comprise a toolset of capabilities. There are many small projects that build on each other, so there are many dependencies on our own artifacts. In fact, from a maven point of view, we could build our toolset with a single, rather large, multiproject build.

It seemed logical to map each component maven project (itself a multiproject consisting of api + implementations + tests) to a cruisecontrol project. That presented a nice one-to-one view of the system on the build status page. Each project was independently triggered via cvs commits. This seemed to work for a while, but it became clear that it was highly unstable, because commits spanning multiple cruisecontrol projects would trigger the builds in an unpredictable order, causing the build to break or tests to fail.

The lesson learned, and the correction we took, was to not fight maven but to let it determine the build order from start to finish. So we created a single cruisecontrol project, pointed it at the top-most maven project.xml with the goal multiproject:install and the property -Dmaven.test.failure.ignore=true. Then, for each component project we wanted test status granularity on, we created a cruisecontrol project that ran a custom maven plugin that scanned the test-results directory and failed that project if test failures were found. Additionally, that cruisecontrol project also used <merge> to aggregate maven's test-results files so our developers could drill down and see which test failed and the details why.



As a quick aside, Hudson has very nice maven integration that mimics (and improves on) this kind of setup automatically.
Our next step was to enable a controlled progression of testing, where all unit tests would run first, followed by integration tests only if all unit tests passed. This was accomplished in three steps: 1) one maven multiproject build responsible for compiling and unit testing, 2) a custom test failure check plugin (basically find + grep) serving as the go/no-go gate, then 3) another maven multiproject build running only integration tests. This three-step orchestration was handled by a custom maven plugin running a mix of jelly and shell scripting.
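
A rough sketch of that go/no-go gate as a jelly goal, assuming surefire's plain-text result files and using made-up goal and property names (our actual plugin differs in the details):

<goal name="gcp:check-test-failures">
  <!-- collect any test result files that recorded a failure -->
  <ant:fileset dir="${CC_HOME}/checkout/gcp" id="failedTests">
    <ant:include name="**/target/test-reports/*.txt"/>
    <ant:contains text="FAILED"/>
  </ant:fileset>
  <ant:pathconvert property="failedList" refid="failedTests" pathsep="${line.separator}"/>
  <!-- fail this cruisecontrol project if anything matched -->
  <j:if test="${!empty(failedList)}">
    <ant:fail message="Test failures found: ${failedList}"/>
  </j:if>
</goal>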

The different types of tests were in different directories, as maven subprojects, under the component, so we were able to use maven.multiproject.includes and maven.multiproject.excludes on the directory names to achieve steps 1) and 3) above.

To facilitate the including and excluding for the unit test pass, the ~/build.properties included these properties:

maven.multiproject.includes=**/project.xml
maven.multiproject.excludes=project.xml,*/project.xml,**/inttest/project.xml,**/tck/project.xml

For the integration test pass, the ~/build.properties included these properties:

gcp.integration.multiproject.includes=**/inttest/project.xml
gcp.integration.multiproject.excludes=**/tck/project.xml
and the plugin goal to actually run the integration tests looked like this:
  <goal name="gcp:integration-tests">
<j:set var="usethese" scope="parent" value="${gcp.integration.multiproject.includes}"/>
<j:set var="notthese" scope="parent" value="${gcp.integration.multiproject.excludes}"/>
<j:set var="thisgoal" scope="parent" value="test:test"/>
${systemScope.put('maven.multiproject.includes', usethese)}
${systemScope.put('maven.multiproject.excludes', notthese)}
${systemScope.put('goal', thisgoal)}
<maven:maven descriptor="${CC_HOME}/checkout/gcp/project.xml" goals="multiproject:goal"/>
</goal>
The crazy jelly maneuvering required to get the includes and excludes properties to stick after the call to maven:maven is a story for another day. (If you want a sneak peek, however, start here.) The concept of dynamically using maven properties to properly set up the integration test run should still be clear.

Hopefully, this first post in a series will help you think about ways to get more out of your CI environment. I'm curious to know what you think about the strategy we took and I invite you to share how you accomplish CI for your projects.

Wednesday, October 10, 2007

Hudson, At Your Continuous Integration Service


I love Hudson. I have previously been a CruiseControl fan, but no longer. Hudson just works and has some excellent integration features with Maven2 and Subversion.

Generally, Hudson does the kind of stuff you would expect from a CI engine:

  1. Easy installation: Just java -jar hudson.war, or deploy it in a servlet container. No additional install, no database.
  2. Easy configuration: Hudson can be configured entirely from its friendly web GUI with extensive on-the-fly error checks and inline help. There's no need to tweak XML manually anymore, although if you'd like to do so, you can do that, too.
  3. Change set support: Hudson can generate a list of changes made into the build from CVS/Subversion. This is also done in a fairly efficient fashion, to reduce the load of the repository.
  4. RSS/E-mail Integration: Monitor build results by RSS or e-mail to get real-time notifications on failures.
  5. JUnit/TestNG test reporting: JUnit test reports can be tabulated, summarized, and displayed with history information, such as when it started breaking, etc. History trend is plotted into a graph.
  6. Distributed builds: Hudson can distribute build/test loads to multiple computers. This lets you get the most out of those idle workstations sitting beneath developers' desks.
  7. Plugin Support: Hudson can be extended via 3rd party plugins. You can write plugins to make Hudson support tools/processes that your team uses.
Then the cool stuff kicks in!

Since each build has a persistent workspace, we can go back in time to see that workspace to trace what happened. This means Hudson can do after-the-fact tagging and has permanent links to all builds, including "latest build"/"latest successful build", so that they can be easily linked from elsewhere.

I really like the matrix style jobs that you can create. For a matrix build, you can specify the JDK version as one axis and a slave Hudson (distributed builds) as another axis. Add to that a third axis of arbitrary property/values pairs that your build understands and can act on (e.g. think, maven -PmyProfile or -Dapp.runSystemTests=true or -Ddatabase.flavor=mysql). This is powerful stuff right out of the box to support multiple compilers, OSes, etc., without having to create a new job for each configuration.

For maven 2 projects, Hudson will autodiscover the <modules> in a multiproject build and list them as sub-jobs in Hudson, complete with their own status and viewable workspaces. Project artifacts are linked to from the build status pages.

The plugins are also starting to become plentiful. There are plugins for JIRA and Trac integration, code violations charting, publishing builds to Google Calendar!, and many more. The one with the most potential, I think, is the Jabber plugin. Not only can it send IM notifications to an individual or a group, but the newest (cvs head) version comes with a bot that you can interact with to schedule builds, get project statuses, and monitor the build queue.

As always, the small things mean a lot. An example I appreciate in Hudson is watching the build in progress scroll by in the browser. With CruiseControl, you would have to log in to the build box and "tail -f" a log file to get the same real-time information.

I'm not a groovy fan yet, but I was blown away when I found a built-in groovy console right there in the web UI! You can use it for troubleshooting and diagnostics of your builds or plugins.

All in all, I am really, really impressed with Hudson as a product, and with the support and development going on around it. There is a new version released literally every week. It makes me, for the first time, want to contribute to an OSS project. This is good stuff. Check it out.

Friday, August 17, 2007

Using JBI To Keep An Eye on Continuous Integration

I'm a big fan of Continuous Integration. We thrive on it at work, getting constant feedback on our code integration. As part of a bigger company effort, we wanted to be able to create team dashboards showing CI health (server up, building, not broken too long, etc.). The teams here mostly use CruiseControl, but we also have a few teams using Hudson and Luntbuild.

So what's an easy way to keep tabs on 3 different build systems? RSS of course!

CruiseControl publishes an RSS feed, Hudson publishes an ATOM feed, and Luntbuild recently added RSS and ATOM feeds (committed, but not distributed yet, as of 1 Aug 2007).

And I don't want to write any code to aggregate these feeds together.

Enter JBI, Open-ESB, and the RSS Binding Component (BC).

Start by downloading the latest Open-ESB/Glassfish bundle. Start up Netbeans. To subscribe to multiple RSS feeds via the RSS BC, we need an RSS provider and an RSS consumer composite application.

Create the provider BPEL module by creating a new Netbeans project (New Project > Service Oriented Architecture > BPEL Module). Name it CIProviderBpelModule. Now we need to import two xml schemas into our project (rssbcext.xsd and wsaext.xsd). Follow the steps outlined here to do the imports.

The WS-Addressing extension schema is used to have access to the element EndpointReferenceList, which we'll use to feed the RSS feed URLs into the system via a SOAP request.

Create two WSDLs, one for http and the other for rss, with the New WSDL Document wizard.

On the Name and Location step, name the rss WSDL "rssciprovider" and import the rssbcext.xsd schema, By File, with a prefix of "rssbcext". On the Concrete Configuration section, make sure RSS is selected as the Binding Type.

On the Name and Location step, name the http WSDL "httpci" and import the wsaext.xsd schema, By File, with a prefix of "wsaext". On the Concrete Configuration section, make sure SOAP is selected as the Binding Type.

At this point both WSDLs should validate (Alt+Shift+F9) correctly. This will make sure all schemas are imported correctly.

Open the httpci.wsdl and navigate to the request message part1. Click the element and then, in the Properties pane of Netbeans, change the element attribute from type="xsd:string" to element="wsa:EndpointReferenceList". Do the same for the reply message part1. (Make sure you pick element and not type. Thanks James!)

Open the rssciprovider.wsdl and navigate to the request message part1. Change the type to the wsaext element EndpointReferenceList as above. You can also remove the reply message and the output from the operation and binding, as this will be an In-Only message. For the binding operation, change the input to . For the service port, change the rss:input to . This correlationId is important and will match up to a correlationId in our rssciconsumer.wsdl.

Validate the WSDLs and then Process Files > New > BPEL Process. Give it the name "rssProviderBpelProcess". Drag and drop the two WSDLs into the process flow diagram; this will create partner links. Name them "httpPartnerLink" and "rssProviderPartnerLink". Swap Roles for the rssProviderPartnerLink to "Partner Role".

From the Palette pane, drag and drop Receive, Assign, Invoke, Assign and Reply operations onto the BPEL flow. Edit Receive1 to point to the httpPartnerLink and create an input variable. Edit Invoke1 to point to rssProviderPartnerLink and create an input variable. Edit Reply1 to point to httpPartnerLink and create an output variable. Click Assign1 and using the BPEL Mapper pane at the bottom of Netbeans drag a line from HttpciOperationIn.part1 to RSSciOperationIn.part1 (ignore data types don't match warning).

For the SOAP response, we will just hardcode something to acknowledge the RSS provider is subscribed. Click Assign2 and using the BPEL Mapper create a String Literal with a value of "Done.". Drag a line from the String Literal to HttpciOperationOut.part1.

Validate the BPEL file.

Create a new Composite Application project "CIProviderCA" and add the JBI Module project CIProviderBpelModule to it. Clean and build.

Halfway there!

Create the consumer BPEL module by creating a new Netbeans project named CIConsumerBpelModule. Import the rssbcext.xsd schema into the project.

Create a rssciconsumer.wsdl with the rssbcext.xsd schema imported as before. Make sure "RSS" is selected as the Binding Type. Edit the wsdl and change the message part element to element="rssbcext:EntryList". Change the operation input to . Change the service port address to .

Create a fileci.wsdl with the rssbcext.xsd schema imported as before. Make sure "File" is selected as the Binding Type. Edit the wsdl and change the message part element to element="rssbcext:EntryList". Remove all output references; In-Only again. Change the operation input to .

Validate the WSDLs and then create a new BPEL Process named "rssConsumerBpelProcess". Drag and drop the two WSDLs into the process flow diagram; this will create partner links. Name them "filePartnerLink" and "rssConsumerPartnerLink". Swap Roles for the filePartnerLink to "Partner Role".

From the Palette pane, drag and drop Receive, Assign, Invoke operations onto the BPEL flow. Edit Receive1 to point to the rssConsumerPartnerLink and create an input variable. Edit Invoke1 to point to filePartnerLink and create an input variable. Click Assign1 and using the BPEL Mapper drag a line from RssciconsumerOperationIn.part1 to FileciOperationIn.part1 (ignore data types don't match warning).

Validate the BPEL file.

Create a new Composite Application project "CIConsumerCA" and add the JBI Module project CIConsumerBpelModule to it. Clean and build.

Start glassfish and deploy both JBI Composite Applications to it.

In the provider CA project, create a new test case, pointing it to the httpci.wsdl and the httpciOperation. Sweeeeeeeeet. Edit the test case input. Each EndpointReference only needs the Address element to be valid. Add as many EndpointReferences as you need to the EndpointReferenceList. Run the test.
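
The edited test input might look something like this sketch (the wsa namespace URI and feed URLs are placeholders for whatever your wsaext.xsd and build servers actually use):

<wsa:EndpointReferenceList xmlns:wsa="http://www.w3.org/2005/08/addressing">
  <wsa:EndpointReference>
    <!-- a CruiseControl RSS feed -->
    <wsa:Address>http://buildbox/cruisecontrol/rss</wsa:Address>
  </wsa:EndpointReference>
  <wsa:EndpointReference>
    <!-- a Hudson feed -->
    <wsa:Address>http://buildbox/hudson/rssAll</wsa:Address>
  </wsa:EndpointReference>
</wsa:EndpointReferenceList>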

Look in C:\Temp (or whatever directory the file service port referenced) and you should see a ci-feeds.xml file with an aggregation of all the continuous integration RSS/ATOM feeds in it.