Wednesday, November 9, 2011

Skipping an unnecessary step

Say you have a simple BPEL process that gets messages from a JMS queue through a JMSAdapter, transforms the message, and then calls a downstream service via SOAP.

For such a simple BPEL process, is there any optimization that can be done for performance?

The answer is yes. And the low-hanging fruit can be picked from the layer between the inbound JMSAdapter and the BPEL process.

The Problem

Just open up the WSDL file of the inbound adapter. You will notice something like the following:

<portType name="Consume_Message_ptt">
  <operation name="Consume_Message">
    <input message="tns:ExpenseRecord_msg"/>
  </operation>
</portType>

Notice that there is only an input element but no output element? This means that the adapter is sending a one-way message to BPEL.
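For contrast, a request-response (two-way) operation would also declare an output message, something like the following sketch (the message and operation names here are hypothetical):

```xml
<portType name="Process_Expense_ptt">
  <operation name="Process_Expense">
    <input message="tns:ExpenseRecord_msg"/>
    <!-- the presence of an output element makes this a request-response operation -->
    <output message="tns:ExpenseResult_msg"/>
  </operation>
</portType>
```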

One-way message is one of the interaction patterns between the caller and the BPEL process that Oracle BPEL supports. When a BPEL process is initiated by an adapter, the adapter acts more or less like a regular SOAP caller to the BPEL process, except that calls go through a more efficient native binding protocol instead of SOAP. As such, the interaction between the two could be any of the supported interaction patterns. But when you wire this adapter to BPEL in JDeveloper, JDeveloper by default chooses the "one-way message" interaction pattern. Hence the WSDL file above.

By default, incoming one-way messages are saved in the delivery service database table dlv_message (it is the invoke_message table in 10G). These requests are later acquired by Oracle BPEL Server worker threads and delivered to the targeted BPEL process. This two-phase interaction can be illustrated as follows:





Figure 1. Default Setting of One-Way Interaction (oneWayDeliveryPolicy=async)


Separating message sending from message processing, and saving the message to the database, makes sense in a lot of cases. The biggest benefit is that it allows the sender of the message to "fire and forget": swiftly send the message and be done with it, even though the actual processing of the message may take a longer time. One-way interaction is like email: with one click of a button you send an email, while it may take the recipient an hour to read and act on it.

The other benefit is message reliability: even if the SOA server goes down before processing the message, the sender can rest assured that the message is reliably saved in the database, so it can still be processed once the server is up again.

But the same benefits may not make sense for an inbound JMSAdapter.

First, the adapter probably doesn't care about fire-and-forget. What matters is usually the overall speed or throughput of send+process, not just "send".

Second, the message was already saved in the inbound message queue before being picked up. It does not need to be saved in another place for the sake of reliability.

When the benefits of having separate threads to send and process the message are less meaningful to the adapter, the cost stands out more: it introduces extra latency by saving the message to the database. In short, this interaction pattern pays the cost without enough benefit.

The Improvement

If we combine the two threads and disable the insert into the database table, the load against the database is reduced, the latency of the insert is eliminated, and the extra work the container was doing to manage it is skipped. Skipping this layer may lead to improved response times.

It takes two steps to achieve the improvement, and these two steps must be done in tandem.

Improvement Step 1: Disabling saving of one-way message to database

In SOA 10.1.3.x, you set deliveryPersistPolicy=off.immediate in the bpel.xml of the BPEL process in question.

<BPELSuitcase>
  <BPELProcess id="ProcessFulfillmentOrderBillingBRMCommsAddSubProcess" src="ProcessFulfillmentOrderBillingBRMCommsAddSubProcess.bpel">
    ...
    <configurations>
      ...
      <property name="deliveryPersistPolicy" encryption="plaintext">off.immediate</property>
    </configurations>
  </BPELProcess>
</BPELSuitcase>

In SOA 11G, you should set the bpel.config.oneWayDeliveryPolicy=sync in the composite.xml of the composite application.

...
<component name="myBPELServiceComponent">
  ...
  <property name="bpel.config.oneWayDeliveryPolicy">sync</property>
</component>

For details of how the oneWayDeliveryPolicy works, please refer to Chapter 13, "Oracle BPEL Process Manager Performance Tuning," of the Oracle® Fusion Middleware Performance and Tuning Guide. The 10G equivalent is http://docs.oracle.com/cd/B32110_01/core.1013/b28942/tuning_bpel.htm

For details of how to set this property in 11G, please refer to Appendix C, "Deployment Descriptor Properties," of the Oracle® Fusion Middleware Developer's Guide for Oracle SOA Suite. For 10G, please refer to the Oracle® Application Server Performance Guide 10g Release 3 (10.1.3.1.0).



Figure 2. Results of Improvement Step 1 (oneWayDeliveryPolicy=sync)

Improvement Step 2: Modifying the threads to increase concurrency

Now you have done step 1 to skip saving the one-way message to the database. But just by looking at the picture above, you may already notice a problem.

By skipping the saving of the one-way message to the database table, the adapter threads become the sole control of concurrency. The famous BPEL Invoke Threads and engine threads no longer apply. This essentially means the application becomes single threaded, and the throughput may drop.

That is why setting deliveryPersistPolicy=off.immediate (in 10G) or oneWayDeliveryPolicy=sync (in 11G) should be done in tandem with increasing the number of adapter threads. Otherwise, you will likely see a drop in throughput. Optimal values for adapter threads should be determined through performance testing.



Figure 3. Results of Improvement Step 2 (oneWayDeliveryPolicy=sync with Adapter Threads)

For the JMSAdapter in 11G, you set the adapter threads with the adapter.jms.receive.threads property on the binding.jca element in the composite.xml:
<service name="dequeue" ui:wsdlLocation="dequeue.wsdl">
  <interface.wsdl interface="http://xmlns.oracle.com/pcbpel/adapter/jms/textmessageusingqueues/textmessageusingqueues/dequeue%2F#wsdl.interface(Consume_Message_ptt)"/>
  <binding.jca config="dequeue_jms.jca">
    <property name="adapter.jms.receive.threads" type="xs:string" many="false">10</property>
  </binding.jca>
</service>

For 10G BPEL, you set the JMSAdapter receiver threads in the bpel.xml of the BPEL process:
<activationAgents>
  <activationAgent className="…" partnerLink="MsgQueuePL">
    ...
    <property name="adapter.jms.receive.threads">5</property>
  </activationAgent>
</activationAgents>

For 10G ESB, you set the JMSAdapter receiver threads in the *.esbsvc file of OESB:
<service name="ListenForNewEmployees" ...>
  ...
  <endpointProperties>
    <property name="adapter.jms.receive.threads" value="10"/>
  </endpointProperties>
</service>

The ways to configure adapter threads vary between adapters and product versions. Please refer to the documentation for details. I will have another post on this subject soon.

More Benefits

After setting oneWayDeliveryPolicy=sync (or deliveryPersistPolicy=off.immediate for 10G) AND configuring multiple adapter threads, you achieve shorter latency and higher throughput.

But besides the gain in performance, you can also enjoy the following side benefits:

1. Ease of throttling

When oneWayDeliveryPolicy=async, two layers control the incoming rate of messages into the BPEL service engine: the adapter threads AND the BPEL Invoke Threads (see Figure 1 above). And because Invoke Threads are shared among BPEL processes running on the same JVM, you can't throttle one process without affecting another.

Now, by setting adapter threads along with oneWayDeliveryPolicy=sync, you only need to throttle the incoming load with the adapter threads, which affect only the SOA component that the adapter calls. You can throttle each adapter individually.

2. Avoiding a Message Backlog Flooding the SOA Server

If there is an outage of the SOA servers and the adapter stops dequeuing from the inbound message queue, the enterprise application on the other side of the queue may continue to enqueue. That builds up a large backlog in the inbound message queue.

By the time the SOA servers are restarted, with oneWayDeliveryPolicy=async, the large backlog of messages in the queue may flood the SOA servers, saturating all the Invoke Threads and leaving none to work for other BPEL processes. In contrast, there is no such worry if you throttle the incoming load to the BPEL process by setting oneWayDeliveryPolicy=sync along with adapter threads.

3. Ease of Monitoring:

Rather than checking both the inbound message queue AND the dlv_message/invoke_message table in the database for the flow rate of messages into the BPEL service engine, you have one place to monitor: the inbound message queue.

Oracle SOA 10G Tricks: Optimize calling path between ESB and BPEL

In SOA 10G, it is a good practice to have BPEL and ESB co-located in the same JVM. Doing so not only improves performance but also allows a JTA global transaction to propagate between the two. However, these advantages will not fully materialize, or may disappear entirely, if the wrong protocol is chosen when these two components call each other.

1. BPEL calling ESB

When developers write BPEL processes that call ESB, many don't think about the protocols used in the calls. The fact of the matter is, if you don't pay attention, you may end up making the call via SOAP instead of a more efficient protocol, even if the ESB endpoint is running on the same JVM. The problems with having the call go through SOAP are:

Problem 1: It breaks the JTA transaction. Often you want a JTA global transaction to propagate from BPEL to ESB.
Problem 2: It is less performant. The call may first have to go to the HTTP load balancer and then be routed back to the same JVM from which it was initiated.

Here is the syntax of making BPEL -> ESB invocation native:

Original partner link binding in bpel.xml:
<partnerLinkBinding name="PLESBSalesOrderOrchastrationEBS">
  <property name="wsdlLocation">http://xxx:7777/esb/wsil/AIASystem/EBS/SalesOrderOrchastrationEBS?wsdl</property>
</partnerLinkBinding>


Should be changed to:
<partnerLinkBinding name="PLESBSalesOrderOrchastrationEBS">
  <property name="wsdlLocation">http://xxx:7777/esb/wsil/AIASystem/EBS/SalesOrderOrchastrationEBS?wsdl</property>
  <property name="preferredPort">__esb_{partnerlink}_{porttype}</property>
</partnerLinkBinding>



You should get the actual value of __esb_{partnerlink}_{porttype} from the ESB's WSDL file http://xxx:7777/esb/wsil/AIASystem/EBS/SalesOrderOrchastrationEBS?wsdl
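That WSDL declares the port name in its service section. It will look something like the following sketch (the names shown here are illustrative; copy the actual port name from your own WSDL):

```xml
<!-- Fragment of the ESB WSDL; names below are illustrative -->
<service name="SalesOrderOrchastrationEBS_Service">
  <!-- this port name is the value to put in the preferredPort property -->
  <port name="__esb_PLESBSalesOrderOrchastrationEBS_SalesOrderOrchastrationEBS"
        binding="tns:SalesOrderOrchastrationEBSBinding">
    ...
  </port>
</service>
```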


2. ESB calling BPEL

On the other hand, when you have ESB call BPEL in the same JVM, there are three possibilities for the calling protocol:
A. SOAP
B. RMI
C. EJB local interface.

The EJB local interface typically delivers better performance than RMI (which involves JNDI look-up, serialization, etc.), and RMI often delivers better performance than SOAP. Both RMI and the EJB local interface allow the JTA transaction context to be propagated (as long as both ESB and BPEL are in OC4J), while SOAP breaks the JTA transaction boundary, as we mentioned earlier.

For the best performance while also preserving the transaction context, you typically want to choose the EJB local interface. In SOA 10g terms, this is called "local invocation".

To choose the right protocol, you start at design time. You want to choose the BPEL endpoint from the auto-generated "BPEL services" in the ESB designer. If you create an "external service" using the WSDL of BPEL, the designer will choose SOAP as the communication protocol. To verify whether the BPEL endpoint is treated as a "BPELService" or an "External Service", you can simply open the *.esbsvc file for that BPEL endpoint. You should be able to find "BPELService" in the esbsvc file.

Now, once the designer creates the BPEL endpoint as a BPELService, the default protocol would be RMI.

You can then further optimize this call by changing the protocol from RMI to the local interface. The latter means ESB calls BPEL as if one EJB were calling another EJB in the same JVM via a local interface. The configuration for this optimization is a bit trickier than optimizing the BPEL -> ESB route.

Step 1: Enable the global JNDI for the OC4J container by adding global-jndi-lookup-enabled="true" to the application-server element in server.xml.
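A sketch of where the attribute goes (your server.xml's application-server element will have other attributes, which should be left in place):

```xml
<!-- server.xml: add global-jndi-lookup-enabled to the existing element -->
<application-server global-jndi-lookup-enabled="true" ...>
  ...
</application-server>
```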

Step 2: Add an endpoint property called InvocationMode to the Oracle ESB service that represents the BPEL process, and specify a value of local. Do this through the Properties tab on the ESB Service Definition page in Oracle ESB Control. Possible values for this property are local and remote. The default value is remote, which implies that by default Oracle ESB calls BPEL processes over RMI. The RMI protocol still allows JTA transactions to be propagated, but it incurs more latency than local. The local invocation mode only applies when ESB and BPEL are running within the same JVM.

Step 3: To work around a known issue and ensure that the routing service can call a BPEL process using InvocationMode=local, you must make that property available by adding the InvocationMode property to the bpel.xml of the BPEL process being called, as shown in the following example.

<BPELSuitcase>
  <BPELProcess id="BPElProcess1" src="BPElProcess1.bpel">
    <partnerLinkBindings>
      <partnerLinkBinding name="client">
        <property name="wsdlLocation">BPElProcess1.wsdl</property>
      </partnerLinkBinding>
    </partnerLinkBindings>
    <configurations>
      <property name="InvocationMode">local</property>
    </configurations>
  </BPELProcess>
</BPELSuitcase>


Step 4: ESB-to-BPEL communication uses the "oc4jinstancename" property defined in "ant-orabpel.properties" to determine the instance name of the current OC4J node. This is required for a successful lookup of the BPEL delivery service. In a clustered environment this property takes on a different meaning (it is used for the group name) and cannot serve as the OC4J instance name, so another property called "local.oc4jinstancename" should be defined to specify the local OC4J instance name, for the lookup to function correctly.
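For example, on a clustered node, ant-orabpel.properties might contain entries like the following (the values are illustrative; use your own group and instance names):

```properties
# In a cluster, oc4jinstancename serves as the group name
oc4jinstancename=soa_cluster_group
# local.oc4jinstancename names this node's OC4J instance for the delivery service lookup
local.oc4jinstancename=oc4j_soa
```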


Step 5: Restart the server (required for step 1 above). Note that setting the global JNDI attribute to true flattens the JNDI tree. This means that any J2EE application can access any JNDI object in the container without providing credentials, which may pose a security risk.