Apache Karaf Cellar
Version 2.3.3
Apache Karaf Cellar Manual
Table of contents
  1. Overview
  2. User Guide
  3. Architecture Guide

Overview

Karaf Cellar Overview

Apache Karaf Cellar is an Apache Karaf sub-project which provides clustering support between Karaf instances.

Cellar allows you to manage a cluster of several Karaf instances, providing synchronization between instances.

Here is a short list of Cellar features:

  * synchronization of bundles, features, and configuration across the cluster nodes
  * cluster group support, to target synchronization to a subset of nodes
  * optional OBR support, OSGi event broadcasting, and DOSGi (distributed OSGi) support
  * cloud discovery of cluster nodes

User Guide

Introduction

Karaf Cellar use cases

The first purpose of Cellar is to synchronize the state of several Karaf instances (named nodes).

It means that all resources modified (installed, started, etc.) on one Karaf instance will be propagated to all the other
nodes.
Concretely, Cellar broadcasts an event to the other nodes when you perform an action.

The nodes list can either be discovered (using multicast or unicast), or explicitly defined (using a list of hostnames or IP
addresses with port numbers).

Cellar is able to synchronize:

  * bundles
  * configurations
  * features
  * optionally, OSGi events (cellar-event feature) and OBR artifacts (cellar-obr feature)

The second purpose is to provide a Distributed OSGi runtime. It means that using Cellar, you are able to call an OSGi
service located on a remote instance. See the Transport and DOSGi section of the user guide.

Cellar network

Cellar relies on Hazelcast (http://www.hazelcast.com), an in-memory data grid implementation.

You have full access to the Hazelcast configuration (in etc/hazelcast.xml), allowing you to specify the network
configuration.

In particular, you can enable or disable multicast support and choose the multicast group and port number.

You can also configure the interface and IP address Cellar binds to, and the port number used by Cellar:

    <network>
        <port auto-increment="true">5701</port>
        <join>
            <multicast enabled="true">
                <multicast-group>224.2.2.3</multicast-group>
                <multicast-port>54327</multicast-port>
            </multicast>
            <tcp-ip enabled="false">
                <interface>127.0.0.1</interface>
            </tcp-ip>
            <aws enabled="false">
                <access-key>my-access-key</access-key>
                <secret-key>my-secret-key</secret-key>
                <region>us-east-1</region>
            </aws>
        </join>
        <interfaces enabled="false">
            <interface>10.10.1.*</interface>
        </interfaces>
    </network>

By default, a Cellar node starts on network port 5701; each new node uses an incremented port number.

Cross topology

This is the default Cellar topology. Cellar is installed on every node, and each node has the same role.

It means that you can perform actions on any node; they will be broadcast to all the other nodes.

Star topology

In this topology, even if Cellar is installed on all nodes, you perform actions only on one specific node (the "manager").

To do that, the "manager" is a standard Cellar node, and event production is disabled on all the other nodes
(cluster:producer-stop on all "managed" nodes).

This way, only the "manager" sends events to the other nodes (which are still able to consume and handle them), but no event can
be produced on the managed nodes.
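
For instance (node names here are only an example), with node1 as the "manager" and node2/node3 as managed nodes, you would run on each managed node:

karaf@node2> cluster:producer-stop
karaf@node3> cluster:producer-stop

You can check the state of a node's producer with the cluster:producer-status command.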

Installation

This chapter describes how to install Apache Karaf Cellar into your existing Karaf based installation.

Pre-Installation Requirements

As Cellar is a Karaf sub-project, you need a running Karaf instance.

Check in etc/config.properties of your Karaf instance that the following property is set:

org.apache.aries.blueprint.synchronous=true

Karaf Cellar is provided as a Karaf features descriptor. The easiest way to install it is to have an internet
connection from the running Karaf instance.

Building from Sources

If you intend to build Karaf Cellar from the sources, the requirements are:

Hardware:

Environment:

  * Java SE Development Kit 6 or greater
  * Apache Maven 3.0.3 or greater

Note: Karaf Cellar requires Java 6 to compile, build and run.

Building on Windows

This procedure explains how to download and install the source distribution on a Windows system.

  1. From a browser, navigate to http://karaf.apache.org/sub-projects/cellar/download.html.
  2. Select the desired distribution.

    For a source distribution, the filename will be similar to: apache-karaf-cellar-x.y-src.zip.

  3. Extract Karaf Cellar from the ZIP file into a directory of your choice. Please remember the restrictions concerning illegal characters in Java paths, e.g. !, % etc.
  4. Build Karaf Cellar using Maven 3.0.3 or greater and Java 6.

    The recommended method of building Karaf Cellar is the following:

    cd [cellar_install_dir]\src
    

    where [cellar_install_dir] is the directory in which Karaf Cellar was uncompressed.

    mvn
    
  5. Proceed to the Deploy Cellar chapter.

Building on Unix

This procedure explains how to download and install the source distribution on a Unix system.

  1. From a browser, navigate to http://karaf.apache.org/sub-projects/cellar/download.html.
  2. Select the desired distribution.

    For a source distribution, the filename will be similar to: apache-karaf-cellar-x.y-src.tar.gz.

  3. Extract the files from the tarball file into a directory of your choice. For example:
    gunzip apache-karaf-cellar-x.y-src.tar.gz
    tar xvf apache-karaf-cellar-x.y-src.tar
    

    Please remember the restrictions concerning illegal characters in Java paths, e.g. !, % etc.

  4. Build Karaf Cellar using Maven:

    The preferred method of building Karaf Cellar is the following:

    cd [cellar_install_dir]/src
    

    where [cellar_install_dir] is the directory in which Karaf Cellar was uncompressed.

    mvn
    
  5. Proceed to the Deploy Cellar chapter.

Deploy Cellar

This chapter describes how to deploy and start Cellar into a running Apache Karaf instance. This chapter
assumes that you already know Apache Karaf basics, especially the notion of features and shell usage.

Registering Cellar features

Karaf Cellar is provided as a Karaf features XML descriptor.

Simply register the Cellar feature URL in your Karaf instance:

karaf@root> features:addurl mvn:org.apache.karaf.cellar/apache-karaf-cellar/2.3.2/xml/features

Now you have Cellar features available in your Karaf instance:

karaf@node1> features:list|grep -i cellar
[uninstalled] [2.3.2          ] cellar-core                   karaf-cellar-2.3.2 Karaf clustering core
[uninstalled] [2.5            ] hazelcast                     karaf-cellar-2.3.2 In memory data grid
[uninstalled] [2.3.2          ] cellar-hazelcast              karaf-cellar-2.3.2 Cellar implementation based on Hazelcast
[uninstalled] [2.3.2          ] cellar-config                 karaf-cellar-2.3.2 ConfigAdmin cluster support
[uninstalled] [2.3.2          ] cellar-features               karaf-cellar-2.3.2 Karaf features cluster support
[uninstalled] [2.3.2          ] cellar-bundle                 karaf-cellar-2.3.2 Bundle cluster support
[uninstalled] [2.3.2          ] cellar-shell                  karaf-cellar-2.3.2 Cellar shell commands
[uninstalled] [2.3.2          ] cellar-management             karaf-cellar-2.3.2 Cellar management
[uninstalled] [2.3.2          ] cellar                        karaf-cellar-2.3.2 Karaf clustering
[uninstalled] [2.3.2          ] cellar-dosgi                  karaf-cellar-2.3.2 DOSGi support
[uninstalled] [2.3.2          ] cellar-obr                    karaf-cellar-2.3.2 OBR cluster support
[uninstalled] [2.3.2          ] cellar-event                  karaf-cellar-2.3.2 OSGi events broadcasting in clusters
[uninstalled] [2.3.2          ] cellar-cloud                  karaf-cellar-2.3.2 Cloud blobstore support in clusters
[uninstalled] [2.3.2          ] cellar-webconsole             karaf-cellar-2.3.2 Cellar plugin for Karaf WebConsole

Starting Cellar

To start Cellar in your Karaf instance, you only need to install the Cellar feature:

karaf@root> features:install cellar

You can now see the Cellar components (bundles) installed:

karaf@node1> la|grep -i cellar
[  55] [Active     ] [Created     ] [   30] Apache Karaf :: Cellar :: Core (2.3.2)
[  56] [Active     ] [Created     ] [   31] Apache Karaf :: Cellar :: Utils (2.3.2)
[  57] [Active     ] [Created     ] [   33] Apache Karaf :: Cellar :: Hazelcast (2.3.2)
[  58] [Active     ] [Created     ] [   40] Apache Karaf :: Cellar :: Shell (2.3.2)
[  59] [Active     ] [Created     ] [   40] Apache Karaf :: Cellar :: Config (2.3.2)
[  60] [Active     ] [Created     ] [   40] Apache Karaf :: Cellar :: Bundle (2.3.2)
[  61] [Active     ] [Created     ] [   40] Apache Karaf :: Cellar :: Features (2.3.2)
[  62] [Active     ] [Created     ] [   40] Apache Karaf :: Cellar :: Management (2.3.2)

And Cellar cluster commands are now available:

karaf@root> cluster:<TAB>
cluster:config-list           cluster:config-proplist       cluster:config-propset        cluster:consumer-start        cluster:consumer-status       cluster:consumer-stop         cluster:feature-install      cluster:features-list
cluster:feature-uninstall     cluster:group-create          cluster:group-delete          cluster:group-join            cluster:group-list            cluster:group-quit            cluster:group-set             cluster:handler-start
cluster:handler-status        cluster:handler-stop          cluster:list-nodes            cluster:ping                  cluster:producer-start        cluster:producer-status       cluster:producer-stop
...

Cellar nodes

This chapter describes the Cellar nodes manipulation commands.

Nodes identification

When you install the Cellar feature, your Karaf instance automatically becomes a Cellar cluster node,
and hence tries to discover the other Cellar nodes.

You can list the known Cellar nodes using the cluster:node-list command:

karaf@node1> cluster:node-list
   ID                               Host Name              Port
* [vostro.local:5701             ] [vostro.local        ] [ 5701]

The leading * indicates the Karaf instance on which you are logged in (the local node).

Testing nodes

You can ping a node to test it:

karaf@node1> cluster:node-ping vostro.local:5701
PING vostro.local:5701
from 1: req=vostro.local:5701 time=67 ms
from 2: req=vostro.local:5701 time=10 ms
from 3: req=vostro.local:5701 time=8 ms
from 4: req=vostro.local:5701 time=9 ms

Nodes sync

Cellar allows nodes to 'sync' state. It currently covers features, configs, and bundles.

For instance, if you install a feature (eventadmin for example) on node1:

karaf@node1> features:install eventadmin
karaf@node1> features:list|grep -i eventadmin
[installed  ] [2.3.1 ] eventadmin                    karaf-2.3.1

You can see that the eventadmin feature has been installed on node2:

karaf@node2> features:list|grep -i eventadmin
[installed  ] [2.3.1 ] eventadmin                    karaf-2.3.1

Feature uninstallation works in the same way. Basically, Cellar synchronization is completely transparent.

Configuration is also synchronized.
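
For instance (the PID and property names here are purely illustrative), a configuration created on node1 also appears on node2:

karaf@node1> config:edit my.project
karaf@node1> config:propset key value
karaf@node1> config:update

karaf@node2> config:edit my.project
karaf@node2> proplist
  service.pid = my.project
  key = value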

Cellar groups

You can define groups in Cellar. A group allows you to define specific nodes and resources that
work together. Nodes outside a group do not need to be synchronized with the changes of
a node within that group.

By default, the Cellar nodes go into the default group:

karaf@node1> cluster:group-list
   Group                  Members
* [default             ] [vostro.local:5701* ]

As with the node list, the * indicates the local node.

New group

You can create a new group using the group-create command:

karaf@root> cluster:group-create test

For now, the test group doesn't have any nodes:

karaf@node1> cluster:group-list
   Group                  Members
* [default             ] [vostro.local:5701* ]
  [test                ] []

You can use the cluster:group-join, cluster:group-quit, and cluster:group-set commands to add/remove a node to/from a cluster group.

For instance, to add the local node to the test cluster group:

karaf@node1> cluster:group-join test
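
In the same way (mirroring the group-join syntax above), cluster:group-quit removes the local node from a cluster group:

karaf@node1> cluster:group-quit test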

The cluster:group-delete command deletes the given cluster group:

karaf@node1> cluster:group-delete test

Group configuration

You can see the configuration PID associated with a given group, for instance the default group:

karaf@root> cluster:config-list default
PIDs for group:default
PID                                     
org.apache.felix.fileinstall.3e4e22ea-8495-4612-9839-a537c8a7a503
org.apache.felix.fileinstall.1afcd688-b051-4b12-a50e-97e40359b24e
org.apache.karaf.features               
org.apache.karaf.log                    
org.apache.karaf.features.obr           
org.ops4j.pax.logging                   
org.apache.karaf.cellar.groups          
org.ops4j.pax.url.mvn                   
org.apache.karaf.jaas                   
org.apache.karaf.shell  

You can use the cluster:config-proplist and cluster:config-propset commands to list, add, and edit the cluster configuration.

For instance, in the test group, we don't have any configuration:

karaf@root> cluster:config-list test
No PIDs found for group:test

We can create a tstcfg config in the test group, containing a name=value property:

karaf@root> cluster:config-propset test tstcfg name value

Now, we have this property in the test group:

karaf@root> cluster:config-list test
PIDs for group:test
PID                                     
tstcfg                                  
karaf@root> cluster:config-proplist test tstcfg
Property list for PID:tstcfg for group:test
Key                                      Value
name                                     value

Group nodes

You can define a node as a member of one or more groups:

karaf@root> cluster:group-join test node1.local:5701
  Node                 Group
  node1:5701 default
* node2:5702 default
  node1:5701 test

The node can be local or remote.

Now, the nodes of a given group inherit all the configuration defined in the group. This means that
node1 now knows the tstcfg configuration because it's a member of the test group:

karaf@root> config:edit tstcfg
karaf@root> proplist
  service.pid = tstcfg
  name = value

Group features

Configuration and features can be assigned to a given group.

karaf@root> cluster:features-list default
Features for group:default
Name                                                  Version Status 
spring-dm                                               1.2.1 true 
kar                                                     2.3.1 false
config                                                  2.3.1 true
http-whiteboard                                         2.3.1 false
application-without-isolation                             0.3 false 
war                                                     2.3.1 false
standard                                                2.3.1 false
management                                              2.3.1 false
transaction                                               0.3 false 
jetty                                         7.4.2.v20110526 false 
wrapper                                                 2.3.1 false
jndi                                                      0.3 false 
obr                                                     2.3.1 false
jpa                                                       0.3 false 
webconsole-base                                         2.3.1 false
hazelcast                                               1.9.3 true 
eventadmin                                              2.3.1 false
spring-dm-web                                           1.2.1 false 
ssh                                                     2.3.1 true
spring-web                                      3.0.5.RELEASE false 
hazelcast-monitor                                       1.9.3 false 
jasypt-encryption                                       2.3.1 false
webconsole                                              2.3.1 false
spring                                          3.0.5.RELEASE true 
karaf@root> cluster:features-list test
Features for group:test
Name                                                  Version Status 
webconsole                                              2.3.1 false
spring-dm                                               1.2.1 true 
eventadmin                                              2.3.1 false
http                                                    2.3.1 false
war                                                     2.3.1 false
http-whiteboard                                         2.3.1 false
obr                                                     2.3.1 false
spring                                          3.0.5.RELEASE true 
hazelcast-monitor                                       1.9.3 false 
webconsole-base                                         2.3.1 false
management                                              2.3.1 true
hazelcast                                               1.9.3 true 
jpa                                                       0.3 false 
jndi                                                      0.3 false 
standard                                                2.3.1 false
jetty                                         7.4.2.v20110526 false 
application-without-isolation                             0.3 false 
config                                                  2.3.1 true
spring-web                                      3.0.5.RELEASE false 
wrapper                                                 2.3.1 false
transaction                                               0.3 false 
spring-dm-web                                           1.2.1 false 
ssh                                                     2.3.1 true
jasypt-encryption                                       2.3.1 false
kar                                                     2.3.1 false

Now we can "install" a feature for a given cluster group:

karaf@root> cluster:feature-install test eventadmin
karaf@root> cluster:feature-list test|grep -i event
eventadmin                                     2.3.1 true

Below, we see that the eventadmin feature has been installed on this member of the test group:

karaf@root> features:list|grep -i event
[installed  ] [2.3.1 ] eventadmin                    karaf-2.3.1

OBR Support

Apache Karaf Cellar is able to "broadcast" OBR actions between cluster nodes of the same group.

Enable OBR support

To enable Cellar OBR support, the cellar-obr feature must first be installed:

karaf@root> features:install cellar-obr

Of course, if the Cellar core feature is already installed, you can use it to install the cellar-obr feature on all
nodes of the same cluster group:

karaf@root> cluster:feature-install group cellar-obr

The Cellar OBR feature will install the Karaf OBR feature which provides the OBR service (RepositoryAdmin).

Register repository URL in a cluster

The cluster:obr-add-url command registers an OBR repository URL (repository.xml) in a cluster group:

karaf@root> cluster:obr-add-url group file://path/to/repository.xml
karaf@root> cluster:obr-add-url group http://karaf.cave.host:9090/cave/repo-repository.xml

The OBR repository URLs are stored in a cluster distributed set, so nodes that join the cluster group later also register these URLs:

karaf@root> cluster:obr-list-url group
file://path/to/repository.xml
http://karaf.cave.host:9090/cave/repo-repository.xml

When a repository is registered in the distributed OBR, Cellar maintains a distributed set of the bundles available on the
OBR of a cluster group:

karaf@root> cluster:obr-list group
  NAME                                                                                   SYMBOLIC NAME                                                 VERSION
[Apache ServiceMix :: Specs :: Java Persistence API 1.4                               ] [org.apache.servicemix.specs.java-persistence-api-1.1.1     ] [1.9.0.SNAPSHOT          ]
[camel-jms                                                                            ] [org.apache.camel.camel-jms                                 ] [2.9.0.SNAPSHOT          ]
[camel-example-spring-javaconfig                                                      ] [org.apache.camel.camel-example-spring-javaconfig           ] [2.8.1.SNAPSHOT          ]
[Apache ServiceMix :: Features :: Examples :: WSDL First OSGi Package :: CXF BC Bundle] [wsdl-first-cxfbc-bundle                                    ] [4.4.0.SNAPSHOT          ]
[camel-dozer                                                                          ] [org.apache.camel.camel-dozer                               ] [2.9.0.SNAPSHOT          ]
[OPS4J Pax Web - Extender - Whiteboard                                                ] [org.ops4j.pax.web.pax-web-extender-whiteboard              ] [1.0.6                   ]
[OPS4J Pax Web - Runtime                                                              ] [org.ops4j.pax.web.pax-web-runtime                          ] [1.0.6.SNAPSHOT          ]
[camel-mina                                                                           ] [org.apache.camel.camel-mina                                ] [2.9.0.SNAPSHOT          ]
[camel-jackson                                                                        ] [org.apache.camel.camel-jackson                             ] [2.9.0.SNAPSHOT          ]
[camel-example-route-throttling                                                       ] [org.apache.camel.camel-example-route-throttling            ] [2.9.0.SNAPSHOT          ]

When you remove a repository URL from the distributed OBR, the bundles' distributed set is updated:

karaf@root> cluster:obr-remove-url group http://karaf.cave.host:9090/cave/repo-repository.xml

Deploying bundles using the cluster OBR

You can deploy a bundle (and that bundle's dependent bundles) using the OBR on a given cluster group:

karaf@root> cluster:obr-deploy group bundleId

The bundle ID is the symbolic name, viewable using the cluster:obr-list command. If you don't provide the version, the OBR deploys the latest version
available. To provide the version, use a comma after the symbolic name:

karaf@root> cluster:obr-deploy group org.apache.servicemix.specs.java-persistence-api-1.1.1
karaf@root> cluster:obr-deploy group org.apache.camel.camel-jms,2.9.0.SNAPSHOT

The OBR will automatically install the bundles required to satisfy the bundle dependencies.

The deploy command doesn't start bundles by default. To start the bundles just after deployment, you can use the -s option:

karaf@root> cluster:obr-deploy -s group org.ops4j.pax.web.pax-web-runtime

OSGi Event Broadcasting support

Apache Karaf Cellar is able to listen to all OSGi events on the cluster nodes, and broadcast each event to the other nodes.

Enable OSGi Event Broadcasting support

OSGi Event Broadcasting is an optional feature. To enable it, you have to install the cellar-event feature:

karaf@root> features:install cellar-event

Of course, if Cellar is already installed, you can use Cellar itself to install the cellar-event feature on all nodes:

karaf@root> cluster:feature-install group cellar-event

OSGi Event Broadcast in action

As soon as the cellar-event feature is installed, Cellar listens to all OSGi events and broadcasts them to all nodes of the same cluster group.
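
For instance, a bundle on one node can post an event through the standard OSGi EventAdmin service; with cellar-event installed, that event is broadcast to the other nodes of the cluster group. Here is a minimal sketch (the class, topic name, and property are illustrative, not part of Cellar):

import java.util.HashMap;
import java.util.Map;

import org.osgi.service.event.Event;
import org.osgi.service.event.EventAdmin;

// Illustrative publisher: posts an event locally; with cellar-event installed,
// Cellar broadcasts it to the other nodes of the same cluster group.
public class GreetingPublisher {

    private EventAdmin eventAdmin; // injected, e.g. via Blueprint

    public void setEventAdmin(EventAdmin eventAdmin) {
        this.eventAdmin = eventAdmin;
    }

    public void publish() {
        Map<String, Object> properties = new HashMap<String, Object>();
        properties.put("message", "hello from node1");
        // "org/example/greeting" is a hypothetical topic name
        eventAdmin.postEvent(new Event("org/example/greeting", properties));
    }
}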

Cellar cloud discovery

Cellar relies on Hazelcast in order to discover cluster nodes. This can happen either by using multicast or by unicast (specifying the IP address of each node).
Unfortunately, multicast is not allowed by most IaaS providers, and specifying all the IP addresses is not very flexible, since in most cases they are not known in advance.

Cellar solves this problem using a cloud discovery service powered by jclouds.

Cloud discovery service

Most cloud providers offer, among other services, cloud storage. Cellar uses this cloud storage, via jclouds, to store the IP address of each node so that Hazelcast can find them.
This approach is also called a blackboard: each node registers itself in a common storage, so that the other nodes know of its existence.

Installing Cellar cloud discovery service

To install the cloud discovery service, simply install the appropriate jclouds provider and then the cellar-cloud feature. The rest of this section uses Amazon S3 as an example, but it applies to any provider supported by jclouds.

features:install cellar-cloud

Once the feature is installed, you need to create a configuration that contains the credentials and the type of the cloud storage (aka blobstore).
To do so, add a configuration file under etc named org.apache.karaf.cellar.cloud-<provider>.cfg with the following information:

provider=aws-s3 (this varies according to the blobstore provider)
identity=<the identity of the blobstore account>
credential=<the credential/password of the blobstore account>
container=<the name of the bucket>
validity=<the amount of time an entry is considered valid; after that time, the entry is removed>

After creating the file, the service will check for new nodes. If new nodes are found, the Hazelcast instance configuration is updated and the instance is restarted.

Architecture Guide

Architecture Overview

The core concept behind Karaf Cellar is that each node can be part of one or more groups. A group provides the node with
distributed memory for keeping data (e.g. configuration, features information, etc.)
and a topic which is used to exchange events with the rest of the group's nodes.

Each group comes with a configuration, which defines which events are to be broadcast and which are
not. Whenever a local change occurs on a node, the node reads the setup information of all the
groups that it belongs to and broadcasts the event to the groups that have whitelisted the specific event.

The broadcast operation happens via a distributed topic provided by the group. For the groups
that the broadcast reaches, the distributed configuration data will be updated so that nodes
that join in the future can pick up the change.

Supported Events

There are 3 types of events:

  * configuration events
  * features events
  * bundles events

For each of the event types above, a group may be configured to enable synchronization and to provide
a whitelist/blacklist of specific event IDs.

For instance, the default group is configured to allow synchronization of configuration. This means that
whenever a change occurs via the Config Admin to a specific PID, the change is passed to the distributed
memory of the default group and is also broadcast to all other default group nodes using the topic.

This happens for all PIDs except org.apache.karaf.cellar.node, which is marked as blacklisted
and will never be written to or read from the distributed memory, nor broadcast via the topic.

The user can add or remove any PID to or from the whitelist/blacklist.
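
As an illustration, these whitelist/blacklist entries live in the cluster groups configuration (the org.apache.karaf.cellar.groups PID listed earlier). The snippet below only assumes a <group>.<resource>.whitelist/blacklist.<direction> naming pattern; check your own etc/org.apache.karaf.cellar.groups.cfg for the exact property names and shipped defaults:

# assumed example, not the shipped defaults
default.config.whitelist.inbound = *
default.config.whitelist.outbound = *
default.config.blacklist.inbound = org.apache.karaf.cellar.node
default.config.blacklist.outbound = org.apache.karaf.cellar.node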

The role of Hazelcast

The idea behind the clustering engine is that, for each unit that we want to replicate, we create an event,
broadcast the event to the cluster, and store the unit's state in a shared resource, so that the rest of the
nodes can look it up and retrieve the changes.

For instance, say we want all nodes in our cluster to share configuration for the PIDs a.b.c and x.y.z. On node
"Karaf A" a change occurs on a.b.c. "Karaf A" updates the shared repository data for a.b.c and then notifies
the rest of the nodes that a.b.c has changed. Each node looks up the shared repository and retrieves the changes.

The architecture as described so far could be implemented using a database or a shared filesystem as the shared
resource, and polling instead of multicasting events. So why use Hazelcast?

Hazelcast fits in perfectly because it offers:

  * automatic discovery of cluster nodes (multicast or unicast)
  * distributed topics, used to broadcast events
  * distributed collections (maps, sets, lists), used as the shared memory

In other words, Hazelcast allows us to set up a cluster with zero configuration and no dependency on external
systems such as a database or a shared file system.

See the Hazelcast documentation at http://www.hazelcast.com/documentation.jsp for more information.
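
To make this concrete, here is a minimal, self-contained sketch of the two Hazelcast primitives Cellar builds on: a distributed map used as shared memory and a distributed topic used to broadcast events. It assumes the Hazelcast 2.x API shipped with this Cellar version; the map and topic names are illustrative, not the ones Cellar actually uses.

import java.util.Map;

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.ITopic;
import com.hazelcast.core.Message;
import com.hazelcast.core.MessageListener;

public class HazelcastPrimitivesSketch {

    public static void main(String[] args) {
        // start a Hazelcast member with the default configuration
        HazelcastInstance instance = Hazelcast.newHazelcastInstance(null);

        // distributed map: the shared repository where a node stores its state
        Map<String, String> sharedConfig = instance.getMap("illustrative.config.map");
        sharedConfig.put("a.b.c/key", "value");

        // distributed topic: used to broadcast change events to the other nodes
        ITopic<String> topic = instance.getTopic("illustrative.event.topic");
        topic.addMessageListener(new MessageListener<String>() {
            public void onMessage(Message<String> message) {
                System.out.println("Received: " + message.getMessageObject());
            }
        });
        topic.publish("a.b.c changed");
    }
}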

Design

The design works with the following entities:

  * OSGi listeners, registered in the Service Registry, which catch local events
  * events, which are forwarded to a Hazelcast distributed topic
  * an event dispatcher, which routes the events received on the topic to the appropriate handler
  * event handlers, which process the received events
  * commands and results, a special kind of event used for request/response exchanges

The OSGi specification uses the Events and Listener paradigm in many situations (e.g. ConfigurationEvent
and ConfigurationListener). By implementing such a Listener and exposing it as an OSGi service in the Service
Registry, we can be sure that we are "listening" for the events that we are interested in.

When the listener is notified of an event, it forwards the Event object to a Hazelcast distributed topic. To keep
things as simple as possible, we keep a single topic for all event types. Each node has a listener
registered on that topic and forwards all events it receives to the event dispatcher.

When the Event Dispatcher receives an event, it looks up an internal registry (in our case the OSGi Service Registry)
to find an Event Handler that can handle the received Event. The handler found receives the event and processes it.
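
The following is a minimal sketch of the listener side of this design, not Cellar's actual classes: an OSGi ConfigurationListener catches local configuration changes and hands them to a producer that publishes to the distributed topic. The EventProducer and ClusterConfigurationEvent types are hypothetical stand-ins:

import org.osgi.service.cm.ConfigurationEvent;
import org.osgi.service.cm.ConfigurationListener;

// Sketch only: catches local ConfigAdmin events and forwards them to the
// cluster through a producer, mirroring the design described above.
public class ConfigurationEventForwarder implements ConfigurationListener {

    private final EventProducer producer;

    public ConfigurationEventForwarder(EventProducer producer) {
        this.producer = producer;
    }

    public void configurationEvent(ConfigurationEvent event) {
        // forward the PID of the changed configuration to the cluster
        producer.produce(new ClusterConfigurationEvent(event.getPid()));
    }

    // hypothetical stand-ins, included only to keep the sketch self-contained
    public interface EventProducer {
        void produce(Object event);
    }

    public static class ClusterConfigurationEvent {
        private final String pid;

        public ClusterConfigurationEvent(String pid) {
            this.pid = pid;
        }

        public String getPid() {
            return pid;
        }
    }
}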

Broadcasting commands

Commands are a special kind of event. They imply that when they are handled, a Result event will be fired
containing the outcome of the command. For each command, we have one result per recipient. Each command
contains a unique id (unique across all cluster nodes, created from Hazelcast). This id is used to correlate
the request with the result. Each successfully correlated result is added to the list of results
on the command object. If the list gets full, or if 10 seconds have elapsed since the command was executed, the
list is moved to a blocking queue from which the result can be retrieved.

The following code snippet shows what happens when a command is sent for execution:

public Map<Node, Result> execute(Command command) throws Exception {
    if (command == null) {
        throw new Exception("Command store not found");
    } else {
        // store the command to correlate it with the result
        commandStore.getPending().put(command.getId(), command);
        // create a timeout task and schedule it
        TimeoutTask timeoutTask = new TimeoutTask(command, commandStore);
        ScheduledFuture timeoutFuture = timeoutScheduler.schedule(timeoutTask, command.getTimeout(), TimeUnit.MILLISECONDS);
    }
    if (producer != null) {
        // send the command to the topic
        producer.produce(command);
        // retrieve the result list from the blocking queue
        return command.getResult();
    }
    throw new Exception("Command producer not found");
}