Thursday, March 21, 2013

How to Decide: Single Queue or Multiple Queues?


Introduction


Typically, a Service Oriented Architecture (SOA) system has multiple, different services receiving the same type of messages. For example, a purchase order can trigger either the creation or the modification of an order, which may be handled by different business flows connecting to different back-end systems. Ideally, you provide to the outside world one coarse-grained service for order creation, another coarse-grained service for order modification, and so on, so that an edge application, such as an order-capturing application, can call these coarse-grained services accordingly.

But challenges arise when the only interfaces to the external edge application or applications are message queues. This immediately raises the question: should I use the same queue or different queues for these messages, which are of the same type but intended for different services? What are the pros and cons? What are the technical challenges and best practices for each approach?

This blog post tries to tackle these challenges. There is no single solution that fits all situations. Rather, we will examine the pros and cons of each solution from different angles.

Edge Applications

The first angle to look at is, of course, the edge application that initiates the messages.

Some edge applications can only communicate with a single queue for various types of business transactions. In this situation you have no choice but to opt for the single-queue approach. Fortunately, the message headers or content typically contain the routing information, so that the messages can be routed to the matching services downstream.

Message Routing

When you have multiple services, the first thing you probably want to ensure is that each service receives only the messages that belong to it and that no cross-routing happens.

In the dedicated-queue approach, since each service uses a message adapter to consume messages from its own queue, there is no need to route messages from the queues to the services.

In contrast, the shared-queue approach requires the system to be able to route the messages to the services they belong to. The routing logic has to be based on the incoming messages themselves. If there is no routing information embedded in the messages, you cannot use the shared-queue approach.
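For example, with a JMS-based adapter you can carry the routing information in a message property and let each consumer filter on it with a message selector. A hypothetical sketch of the consuming service's JmsAdapter *.jca activation spec (the queue name, property name, and value are assumptions, not from the original system):

    <!-- Consumer of "create" messages on a shared queue (names are hypothetical) -->
    <activation-spec className="oracle.tip.adapter.jms.inbound.JmsConsumeActivationSpec">
      <property name="DestinationName" value="jms/SharedOrderQueue"/>
      <property name="MessageSelector" value="OrderAction = 'CREATE'"/>
      <property name="PayloadType" value="TextMessage"/>
    </activation-spec>

With this in place, the producer must set the OrderAction property on each message; the adapter then consumes only the matching messages and leaves the rest on the queue for other services.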

Please note that a mid-process receive via the message adapter faces the same routing challenge as the initial receive. Despite a common misunderstanding, the correlation mechanism for mid-process receives cannot help the adapter route messages to the correct composite.

Operation Management

From the perspective of operational management complexity, the shared-queue approach has an advantage over the dedicated-queue approach. Obviously, you need to manage multiple queues in a dedicated-queue approach, compared to managing a single queue in a shared-queue approach. This is usually the main reason why people choose shared-queue over dedicated-queue.

To ease the complexity of managing dedicated queues, you may want to name the queues after the services they belong to.

Both the shared-queue and dedicated-queue approaches require you to answer the same question: should you run the message queue(s) on the same servers as the services, or on their own servers? I recommend running the message queue or queues in a server cluster separate from the services, so that queues and services do not compete with each other for system resources. This separation also makes diagnosing performance and high-availability issues easier.

Separation of Concerns

From the perspective of separation of concerns, the dedicated-queue approach has advantages over the shared-queue approach.

The concerns we can separate by using dedicated queues include:
  • Performance profile and requirements
  • High-availability requirements
  • Service release cycles
Now let's look into each of them, to see how the dedicated-queue approach addresses the separation of concerns differently from the shared-queue approach.

Performance profiles and requirements

First is the performance profile and requirements. Some services may have much higher transaction volumes than others. Some services may involve much larger message sizes. The performance management of the message queues is very different across these performance profiles and requirements. By separating the queues, each queue can be managed individually with minimal impact on the others.

High-availability requirements

Different high-availability requirements also lead to different configurations of the message queues. Some services are more mission-critical than others and therefore need to be highly available, which may require whole server migration accompanied by a disaster recovery strategy. Others may be able to sustain more downtime. The requirements for the HA infrastructure underlying the message queues for these services can be very different. Separating these infrastructures can help you save resources: you only need to invest the resources and capacity in some of the message queues, not all of them.

Service release cycles

When services share the same message queue but have different release cycles, you need to consider whether the software release cycles of services can interfere with each other.

In the dedicated-queue approach, adding a new service requires the creation of a new queue, but will not interrupt the existing services. The same goes for renewing/redeploying an existing service.

In the shared-queue approach, adding a new service or renewing an existing service could cause interruption to the existing services. These interruptions could happen in the following ways:
  • If you use message selectors, the message selection criteria are built into each service independently. Adding a new service typically only involves defining the message selection criteria within the scope of the new service, without impacting others. However, there could be exceptional cases where you also need to update the message selection criteria of the existing services, especially if there are changes to the semantics of the messages.
  • If you use a message routing service, adding a new service may also require updating the centralized routing service so that it can route messages to the new service. Updating the routing service always causes a temporary interruption to the existing services, because all services depend on it.
  • If you use a message routing service, renewing an existing service in the shared-queue approach could cause interruptions to other services as well. For example, if there happens to be a message in the queue intended for a service that has been brought down, the router service will have no destination to route to and will encounter errors. To mitigate this risk, you may want to set up an error queue to which the router service redirects such messages.
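To illustrate the message-selector case above: each service subscribes to the shared queue through its own *.jca file with its own selector, so adding a new service normally means deploying one new selector without touching the existing ones. A hypothetical sketch (the property name and values are assumptions):

    <!-- Existing service, consuming "create" messages (hypothetical names) -->
    <property name="MessageSelector" value="OrderAction = 'CREATE'"/>

    <!-- New service, added later in its own *.jca file, without redeploying the first -->
    <property name="MessageSelector" value="OrderAction = 'MODIFY'"/>

The caveat from the first bullet still applies: if the semantics of OrderAction change, both selectors may need to be revisited together.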

Conclusion

By now, you should be equipped with a framework for evaluating the pros and cons of the dedicated-queue and shared-queue approaches, and the best practices for each.

Tuesday, December 4, 2012

Configure Oracle SOA JMSAdapter to Work with WLS JMS Topics

A WebLogic JMS Topic typically runs in a WLS cluster, as do the SOA composites that receive its messages. In some situations the two clusters are the same, while in others they are separate. The composites in the SOA cluster are subscribers to the JMS Topic in the WebLogic cluster. Since the nature of a JMS Topic is to distribute the same copy of each message to all of its subscribers, two questions arise immediately when it comes to load balancing the JMS Topic messages across the SOA composites:

  1. How to ensure that the SOA cluster members receive different messages instead of the same (duplicate) messages, even though they are all subscribers to the Topic?
  2. How to make sure the messages are evenly distributed (load balanced) to SOA cluster members?

Here I am going to walk you through how to configure the JMS Topic, the JmsAdapter connection factory, and the composite so that you receive one copy of the messages per composite, or more accurately, one copy per *.jca file per composite.

1. The typical configuration

In this typical configuration, we achieve load balancing of JMS Topic messages to JmsAdapters by configuring a partitioned distributed topic along with sharable subscriptions. You can refer to the documentation for an explanation of PDTs. And this blog post does a very good job of visually explaining how this combination of configurations achieves message load balancing among clients of JMS Topics.

Our job is to apply this configuration in the context of SOA JMS Adapters. Doing so involves the following steps:
  • Step A. Configure the JMS Topic to be UDD and PDT, at the WebLogic cluster that houses the JMS Topic
  • Step B. Configure the JCA Connection Factory with the proper FactoryProperties, at the SOA cluster
  • Step C. Reference the JCA Connection Factory and define a durable subscriber name, at the composite's JmsAdapter (or the *.jca file)

Here are more details of each step:

Step A. Configure JMS Topic to be UDD and PDT

You do this at the WebLogic cluster that houses the JMS Topic.
 
You can follow the instructions in the Administration Console Online Help to create a Uniform Distributed Topic. If you use the WebLogic Console, then on the same administration screen you can set "Distribution Type" to "Uniform" and the Forwarding Policy to "Partitioned", which makes the JMS Topic a Uniform Distributed Destination and a Partitioned Distributed Topic, respectively.




Step B: Configure ServerProperties of JCA Connection Factory

You do this step at the SOA cluster.

This step makes the JmsAdapter that connects to the JMS Topic through this JCA Connection Factory act as a certain type of "client".

When you configure the JCA Connection Factory for the JmsAdapter, you define the list of properties in the FactoryProperties field, as a semicolon-separated list:

ClientID=myClient;ClientIDPolicy=UNRESTRICTED;SubscriptionSharingPolicy=SHARABLE;TopicMessageDistributionAll=false

You can refer to Section 8.4.10, "Accessing Distributed Destinations (Queues and Topics) on the WebLogic Server JMS," of the Adapter User Guide for the meaning of these properties.

Please note:
  • Except for ClientID, the other properties, namely ClientIDPolicy=UNRESTRICTED, SubscriptionSharingPolicy=SHARABLE, and TopicMessageDistributionAll=false, are all default settings for the JmsAdapter's connection factory. Therefore you do NOT have to specify them explicitly. All you need to do is specify the ClientID.
  • The ClientID is different from the subscriber ID that we will discuss in the later steps. To keep it simple, just remember that you need to specify the ClientID and make it unique per connection factory.
Here is the example setting:







Step C. Reference the JCA Connection Factory and define a durable subscriber name, at composite's JmsAdapter (or the *.jca file)

In the following example, the value 'MySubscriberID-1' was given as the value of property 'DurableSubscriber':
    <adapter-config name="subscribe" adapter="JMS Adapter" wsdlLocation="subscribe.wsdl" xmlns="http://platform.integration.oracle/blocks/adapter/fw/metadata">
      
      <connection-factory location="eis/wls/MyTestUDDTopic" UIJmsProvider="WLSJMS" UIConnectionName="ateam-hq24b"/>
      <endpoint-activation portType="Consume_Message_ptt" operation="Consume_Message">
        <activation-spec className="oracle.tip.adapter.jms.inbound.JmsConsumeActivationSpec">
          <property name="DurableSubscriber" value="MySubscriberID-1"/>
          <property name="PayloadType" value="TextMessage"/>
          <property name="UseMessageListener" value="false"/>
          <property name="DestinationName" value="jms/MyTestUDDTopic"/>
        </activation-spec>
      </endpoint-activation>
    
    </adapter-config>
    

You can set the durable subscriber name either in the composite's JmsAdapter wizard, or by directly editing the JmsAdapter's *.jca file within the composite project.


2.The "atypical" configurations:

For some systems, there may be restrictions that prevent the aforementioned "typical" configuration from being applied. For example, some deployments may be required to configure the JMS Topic as a Replicated Distributed Topic rather than a Partitioned Distributed Topic. We discuss those scenarios here:

Configuration A: The JMS Topic is NOT PDT

In this case, you need to define the message selector 'NOT JMS_WL_DDForwarded' in the adapter's *.jca file to filter out those "replicated" messages.
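Applied to the Step C example above, the selector would appear as one more activation-spec property in the subscribe.jca file. This is a sketch only; the MessageSelector property name follows the JmsAdapter activation-spec convention, and the other values are carried over from the earlier example:

    <activation-spec className="oracle.tip.adapter.jms.inbound.JmsConsumeActivationSpec">
      <property name="DurableSubscriber" value="MySubscriberID-1"/>
      <property name="MessageSelector" value="NOT JMS_WL_DDForwarded"/>
      <property name="PayloadType" value="TextMessage"/>
      <property name="UseMessageListener" value="false"/>
      <property name="DestinationName" value="jms/MyTestUDDTopic"/>
    </activation-spec>

WebLogic sets the JMS_WL_DDForwarded property on the copies it forwards between the members of a replicated distributed topic, so this selector keeps only the original, locally published messages.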

Configuration B. The ClientIDPolicy=RESTRICTED

In this case, you need separate connection factories for different composites. More accurately, you need a separate connection factory for each *.jca file of the JmsAdapter.

References: