
Thursday, December 12, 2013

OSB polling with JCA in clustered environment

We all know that OSB supports the JCA transport. In this post I will discuss one issue that we encountered when we implemented FTP polling with JCA in OSB in a SOA/OSB clustered environment.

Setup 
  1. Created JCA FTP deployment connection factory with HA parameters
  2. Created FTP adapter in Jdeveloper
  3. Imported required artifacts from Jdev project into OSB
  4. Created OSB proxy with JCA
When we tested the OSB proxy by polling files, we saw the error below.

Error in osb log

<Error> <JCA_FRAMEWORK_AND_ADAPTER> <osb01> <[ACTIVE] ExecuteThread: '145' for queue: 'weblogic.kernel.Default (self-tuning)'>  <097b277aaf0a827d:40ef1fa1:142b5024552:-8000-0000000000079fde> <1386802596992> <BEA-000000> Unable to create clustered resource for inbound.
Unable to create clustered resource for inbound.
Unable to create clustered resource for inbound.
Please make sure the database connection parameters are correct.
[Caused by: Unable to resolve 'jdbc.SOADataSource'. Resolved 'jdbc']>

<Error> <JCATransport> <osb01> <[ACTIVE] ExecuteThread: '145' for queue: 'weblogic.kernel.Default (self-tuning)'> <pravva01> <> <097b277aaf0a827d:40ef1fa1:142b5024552:-8000-0000000000079fde> <1386802596993> <BEA-381959> <Failed to activate JCABindingService for wsdl: servicebus:/WSDL, operation: Get, exception: BINDING.JCA-12600
Generic error.
Generic error.
Cause: {0}.
Please create a Service Request with Oracle Support.
BINDING.JCA-12600
Generic error.
Generic error.
Cause: {0}.
Please create a Service Request with Oracle Support.
        at oracle.tip.adapter.sa.impl.inbound.JCABindingActivationAgent.activateEndpoint(JCABindingActivationAgent.java:340)
        at oracle.tip.adapter.sa.impl.JCABindingServiceImpl.activate(JCABindingServiceImpl.java:113)
        at com.bea.wli.sb.transports.jca.binding.JCATransportInboundOperationBindingServiceImpl.activateService(JCATransportInboundOperationBindingServiceImpl
 

Solution

As shown in the error, 'jdbc/SOADataSource' is one of the parameters configured on the JCA adapter connection factory, and it is a mandatory parameter. This data source is created by the installation and points to the SOA dehydration store schema (SOAINFRA), where the high availability features are installed.

By default this data source is targeted to the SOA cluster, but in our case it is OSB that is using the JCA framework. When OSB accesses the configured FTP JNDI connection factory, which references 'jdbc/SOADataSource', the data source lookup fails because it is not targeted to the OSB cluster.

So we need to target the data source to the OSB cluster in addition to the SOA cluster and bounce the OSB managed servers. This resolves the issue.
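For illustration, this is roughly what the retargeting looks like in the domain's config.xml once it is done from the WebLogic console (Services > Data Sources > SOADataSource > Targets). The cluster names below are assumptions; adjust them to your domain.

<!-- Hypothetical excerpt from DOMAIN_HOME/config/config.xml after retargeting -->
<jdbc-system-resource>
  <name>SOADataSource</name>
  <target>soa_cluster,osb_cluster</target>
  <descriptor-file-name>jdbc/SOADataSource-jdbc.xml</descriptor-file-name>
</jdbc-system-resource>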

Monday, December 9, 2013

OSB Business Service with JCA

Configuring OSB services to work with the JCA transport is easy, but I would like to post one error I faced when I generated an OSB business service from an FTP JCA file.

Eclipse and OSB console gives a way to generate OSB services from JCA file.
If it is from the OSB console, import the JCA file and its dependent artifacts, go to the resource browser, select JCA Bindings from the left-side menu, and click the actions icon for the required JCA file on the right-side page.
If it is from Eclipse, import the JCA file and its dependent artifacts, right-click on it, select Oracle Service Bus, then select Generate Service.

With the above steps you can generate a business service. We use JDeveloper to model the adapter interface and use the generated files to design the OSB service.
JDeveloper creates an abstract WSDL, but OSB requires a concrete WSDL. When we generate an OSB service either from Eclipse or the console, it generates a concrete WSDL by adding binding and service elements to a newly generated WSDL and importing the abstract WSDL for the abstract definitions.

If you choose to add the concrete definitions (binding and service) to the WSDL manually and try to use that WSDL in the service definition, it works fine for a proxy service, but the business service fails with the error below.

An error occured while validating JCA transport endpoint properties, exception: oracle.tip.adapter.sa.impl.fw.ext.org.collaxa.thirdparty.apache.wsif.WSIFException: Please specify a Service. Choices are: {{http://xmlns.oracle.com/pcbpel/adapter/ftp/CRM61OM/Project1/putFTP}putFTP, {http://xmlns.oracle.com/pcbpel/adapter/ftp/CRM61OM/Project1/putFTP}putFTP_ep} oracle.tip.adapter.sa.impl.fw.ext.org.collaxa.thirdparty.apache.wsif.WSIFException:

I could not figure out why the business service fails while the proxy is fine, but it can be resolved by generating the service automatically either from Eclipse or the console.
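For reference, the generated concrete WSDL is roughly shaped like the hedged sketch below: a wrapper that imports the abstract WSDL and adds one binding and one service. The namespace and the service name putFTP come from the error message above; every other name, and the binding details, are placeholders, so treat this only as an illustration of the structure, not as the exact output of the generator.

<definitions name="putFTPWrapper"
             targetNamespace="http://xmlns.oracle.com/pcbpel/adapter/ftp/CRM61OM/Project1/putFTP"
             xmlns="http://schemas.xmlsoap.org/wsdl/"
             xmlns:tns="http://xmlns.oracle.com/pcbpel/adapter/ftp/CRM61OM/Project1/putFTP">
  <!-- the abstract WSDL produced by JDeveloper is imported as-is -->
  <import namespace="http://xmlns.oracle.com/pcbpel/adapter/ftp/CRM61OM/Project1/putFTP"
          location="putFTP.wsdl"/>
  <binding name="putFTP_binding" type="tns:Put_ptt">
    <!-- JCA transport binding details omitted -->
  </binding>
  <service name="putFTP">
    <port name="putFTP_pt" binding="tns:putFTP_binding"/>
  </service>
</definitions>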



Friday, November 15, 2013

OSB FTP/SFTP poller transaction Rollback

In this article we will talk about how the OSB 11g FTP poller works and about common known issues.

When an OSB proxy connects to an FTP server to read a file, or a business service connects to upload a file, OSB is always the FTP client and the FTP server is the server.

How FTP authentication happens?
The FTP transport supports two types of authentication, and it is always one-way: the FTP server authenticates the OSB client but not the other way around.
  • Anonymous: the FTP server does not expect any credentials; OSB connects as the anonymous user
  • External user: OSB has to pass specific user credentials via a service account
How SFTP authentication happens?
SFTP transport authentication happens in two-way mode: the FTP server authenticates the OSB client and vice versa.
SFTP transport supports the following authentication models
  • Username-password authentication: this is the easiest and quickest method. The FTP server authenticates the connection/client with a static username/password combination, and OSB provides the login credentials via a service account. OSB authenticates the FTP server via the known_hosts file. The known_hosts file combines the FTP host name, IP address and public key of the FTP server and resides at DOMAIN_NAME/config/osb/transports/sftp (a sample entry is shown below). We need to create the path if it doesn't exist.
  • Host based authentication: this method is used when all users/services in the OSB domain share the same OSB server public key. The FTP server authenticates OSB using the public key of OSB, and the OSB client authenticates the FTP server using the known_hosts file. OSB passes its public key using a service key provider. Check the OSB documentation on how to create service key providers.
  • Public key authentication: this method is used when each user/service in the OSB domain has its own public/private key pair. The authentication process is the same as the host based mechanism, the only difference being that we configure the service key provider with a user/service specific public key.
In the last two methods, the public keys of the FTP server and OSB need to be exchanged and set up by an admin. The FTP server needs to be set up to accept connections from OSB.
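For illustration, an entry in the known_hosts file has this general shape: the host name and IP address, the key algorithm, and the server's public key on a single line. The values below are made up.

ftp.example.com,192.168.10.25 ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAr8o4k...rest-of-public-key...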

How FTP/SFTP poller mechanism works?
File processing is done asynchronously.
FTP poller thread is pinned to one managed node in cluster and we need to choose that managed node during deployment.
  • FTP transport proxy polls inbound directory at configured intervals
  • If the proxy finds a file, it renames the file with a .stage extension to make sure it will not be picked up during the next polling cycle. This is required because the file may still be under delivery at the next poll cycle, or the poll cycles may be too quick.
  • A JMS entry with the file metadata is made in the wlsb.internal.transport.task.queue.sftp queue to indicate a file is waiting to be read
  • There are domain-wide deployed MDBs which listen on the above JMS queue for new file poll requests
  • An MDB receives the request and makes a new connection to the FTP server to read the file content. This task happens in a transactional context
  • The MDB delivers the file to the proxy request pipeline
  • After successful delivery of the file, the .stage file is removed from the FTP server
  • Now the MDB's job is done and the OSB proxy processes the file
If for some reason the file couldn't be delivered, redelivery is attempted based on the retry count configured on the proxy. After the retry count is exhausted, the file is moved to the error directory configured on the proxy.

Common issues?
  • Permission issues on FTP server
  • Network/Firewall issues between OSB and FTP.
  • Please be aware that OSB doesn't support certain types of FTP servers.
  • Repeatedly trying to deliver same file
  • Permission issues on Error/Archive/Download directories to OSB install user
I would like to discuss one interesting issue and its resolution.
When you see an error like the one below repeatedly in the OSB log, then something interesting has happened.

ExecuteThread: '54' for queue: 'weblogic.kernel.Default (self-tuning)'>
<<anonymous>> <BEA1-4344C4674547A7E28CBA> <0000K8^j7x9FS8m5srt1iX1ITzwG000003> <1384451797938> <BEA-381803> <Unable to get file :<filename.stage on attempt number : 0 :java.io.IOException: No such file

The above error comes when the OSB proxy found a file on the FTP server, renamed it to .stage and created the JMS task, but when the MDB connects to the FTP server the .stage file is not found. It might have been deleted by mistake, either by someone or by some automated cleanup process running on the FTP server.
Because the MDB retrieves the file within a transaction context, the transaction rolls back and the JMS poll task is put back in the JMS queue.
This repeats when the MDB again picks up the JMS task and tries to read the .stage file, which will never be found, so the transaction rolls back again. This becomes an infinite loop and continues forever.

If we carefully analyze the logs, we can see the MDB transaction rollbacks below
[ACTIVE] ExecuteThread: '43' for queue: 'weblogic.kernel.Default (self-tuning)'> <alsb-system-user> <>
<0000K8^j7x9FS8m5srt1iX1ITzwG000003> <1384451801706> <BEA-010213> <Message-Driven EJB: PolledMessageListenerMDBEJB's transaction was rolled back.
The transaction details are: Name=[EJB com.bea.wli.sb.transports.poller.listener.PolledMessageListenerMDB.onMessage(javax.jms.Message)],Xid=BEA1-4350C4674547A7E28CBA(834804898),Status=Rolled back

The only solution I could think of is to delete the JMS entry from wlsb.internal.transport.task.queue.sftp.
 

Monday, November 11, 2013

JCA adapters and message rejection handling

Oracle JCA adapters support error handling capabilities. In this post we will see what message rejection is and how rejected messages can be handled.

Message rejection: any message that errors out before being posted to the SCA infrastructure is called a rejected message. A good example is a file adapter that translates a text format to XML: if there are errors in translation, the message will be rejected by the JCA framework.

We can handle such messages by using a mechanism called rejection handlers. This mechanism is supported via fault policies. These rejection handlers work only with synchronous processes.
JCA framework categorizes the errors into two types
  • Retryable (which can be retried safely)
  • Non-Retryable (cannot be retried)
Messages are rejected if they are non-retryable, for example translation errors.
They will be retried if they can be, for example connection errors.
Retryable errors are retried either indefinitely (the default behaviour, configured globally) or a number of times equal to jca.retry.count in composite.xml, and the message is rejected after the count is exhausted.

All rejected messages are stored in the SOA dehydration store table REJECTED_MESSAGE.

Configuring message rejection handlers
Rejection handlers are defined in fault policies. Message rejection handlers come into the picture only when messages get rejected by the JCA framework.

If we do not configure rejection handlers, the default file-based rejection handler kicks in and all rejected messages are forwarded to <domain_name>/rejmsgs/<managed_server>

Available rejection handlers
The following handlers can be defined in fault policies; a sample policy is sketched after the list
  1. JMS queue (Rejected messages are written to configured JMS queue)
  2. Web service (Configured WS will be called with rejected message)
  3. Custom java (Java class will be executed)
  4. File (Rejected messages are written to specified folder path)
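As an illustration, a fault-policies.xml rejection handler might look roughly like the sketch below, which routes rejected messages from an inbound file adapter service to a directory. The service name "FileIn", the directory and the file name pattern are assumptions; check the fault policy documentation for your exact release.

<faultPolicies xmlns="http://schemas.oracle.com/bpel/faultpolicy">
  <faultPolicy version="2.0.1" id="RejectedMessages">
    <Conditions>
      <!-- "FileIn" is a placeholder for the inbound adapter service name in composite.xml -->
      <faultName xmlns:rjm="http://schemas.oracle.com/sca/rejectedmessages" name="rjm:FileIn">
        <condition>
          <action ref="writeRejectedToFile"/>
        </condition>
      </faultName>
    </Conditions>
    <Actions>
      <Action id="writeRejectedToFile">
        <fileAction>
          <location>/tmp/rejectedMessages</location>
          <fileName>rejected_%ID%_%TIMESTAMP%.xml</fileName>
        </fileAction>
      </Action>
    </Actions>
  </faultPolicy>
</faultPolicies>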
JCA Retries

When errors are retryable, retrying happens based on either the global JCA retry setting, which is indefinite by default, or the JCA properties in composite.xml.
Below JCA properties are used for this purpose in composite.xml
jca.retry.count
jca.retry.interval
jca.retry.backoff
jca.retry.maxInterval

If we do not specify any JCA retry properties in composite.xml, the global JCA retry is used. If we specify them at the composite level, the composite JCA properties take precedence.
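For illustration, the retry properties sit on the adapter binding of the inbound service in composite.xml, roughly as below. The service and JCA file names are placeholders and the values are arbitrary.

<service name="ReadOrderFile" ui:wsdlLocation="ReadOrderFile.wsdl">
<interface.wsdl interface="http://xmlns.oracle.com/...#wsdl.interface(Read_ptt)"/>
<binding.jca config="ReadOrderFile_file.jca">
<property name="jca.retry.count">3</property>
<property name="jca.retry.interval">2</property>
<property name="jca.retry.backoff">2</property>
<property name="jca.retry.maxInterval">60</property>
</binding.jca>
</service>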

We also have the option of changing the global JCA retry count from indefinite to a finite number using the System MBean Browser in the EM console. Follow the steps below to do this

  1. Right click on soa-infra then select Administration then system Mbean browser
  2. Select oracle.as.soainfra.config then adapterconfig then adapter
  3. Change GlobalInboundJCAretryCount to required number
Please remember this will affect all JCA retries in the domain. Keeping it indefinite is also dangerous because JCA will retry indefinitely, resulting in one instance for each retry.
 

Tuesday, November 5, 2013

File naming convention sequencing strategy for FILE/Ftp Adapter

As we know, the File/FTP adapter allows us to configure a sequence as part of the file naming convention when we generate files. For example, if you choose PO-data_%SEQ%.xml as the FileNamingConvention, files are generated as PO-data_1.xml, PO-data_2.xml, and so on.
The JCA file would have a property like

<property name="FileNamingConvention" value="PO-data_%SEQ%.xml"/>

As per Oracle, the sequence is maintained as described below.
The sequence number is maintained in the control directory of the corresponding composite project. For each project that uses the File or FTP adapter, a unique directory is created for book-keeping purposes. Because this control directory must be unique, the adapter uses a digest to ensure that no two control directories are the same.
The control information for a project goes under FMW_HOME/user_projects/domains/soainfra/fileftp/controlFiles/[DIGEST]/outbound, where the value of DIGEST differs from one project to another.
Within this directory, there is a control_ob.properties file where the sequence number is maintained. The sequence number is maintained in binary form and you might need a hexadecimal editor to view its content. There is another zero byte file, SEQ_nnn. This extra file is maintained as a backup.

One of the challenges faced by the adapter run time is to guard all writes to the control files so no two threads inadvertently attempt to update the control files at the same time. It does this guarding with the help of a "Mutex". The mutex is of different types:
  • In-memory
  • DB-based
  • Coherence-based
  • User-defined
There might be scenarios, particularly when the Adapter is under heavy transactional load, where the mutex is a bottleneck. The Adapter, however, enables you to change the configuration so the adapter sequence value is derived from a database sequence or a stored procedure. In such a situation, the mutex is by-passed, and the process results in improved throughput.
The simplest way to achieve improved throughput is by switching your JNDI connection factory location for the outbound JCA file to use the eis/HAFileAdapter:
eis/HAFileAdapter is available by default when we install SOA.
The above connection factory can also be used to support high availability in a SOA cluster in active-active mode; for example, the outbound JCA file would reference it roughly as sketched below.
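A hedged sketch of such an outbound file adapter JCA file with the connection factory switched to eis/HAFileAdapter. The adapter-config name, WSDL name, port type, directory and interaction-spec details are illustrative and may differ from what the wizard generates in your version; the only line that matters for this discussion is the connection-factory location.

<adapter-config name="WriteFile" adapter="File Adapter" wsdlLocation="WriteFile.wsdl" xmlns="http://platform.integration.oracle/blocks/adapter/fw/metadata">
<connection-factory location="eis/HAFileAdapter"/>
<endpoint-interaction portType="Write_ptt" operation="Write">
<interaction-spec className="oracle.tip.adapter.file.outbound.FileInteractionSpec">
<property name="PhysicalDirectory" value="/data/out"/>
<property name="FileNamingConvention" value="PO-data_%SEQ%.xml"/>
</interaction-spec>
</endpoint-interaction>
</adapter-config>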

Check that the control directory is set up with a specific path to keep the housekeeping information.

Check how the HAFile or HAFtp adapter is configured with the SOA data source to maintain the sequence in the SOAINFRA database schema.


With this change, the Adapter run time creates a sequence on the Oracle database(SOA dehydration store). For example, if you do a select * from user_sequences in your soa-infra schema, you see a new sequence being created with name as SEQ___ (where the GUID differs by project).
However, to use your own sequence, you must add a new property to your JCA file called SequenceName. You must create this sequence on your soainfra schema beforehand

<property name="SequenceName" value="Adapter_Seq"/>

Writing same file to multiple FTP locations using FTP Dynamic Partnerlink

When we have a requirement to send the same file to multiple FTP servers, we could probably create multiple FTP adapter references, one for each FTP server. However, this is not the most optimal approach; instead, you can use the concept of "Dynamic Partner Links".

In the latter approach, we create just one FTP adapter reference partner link and have multiple invoke activities invoking the same reference.

First create all required FTP JCA deployment JNDIs (one for each FTP server)  in WLS admin console
For example,
eis/FTP/FTP1
eis/FTP/FTP2
eis/FTP/FTP3

In the composite, create an FTP adapter reference using one of the above JCA connection-factory location JNDIs.
Connect to the FTP adapter reference using an invoke activity.

Using dynamic partner link concept, we can change this connection-factory location dynamically using JCA header properties in both BPEL and Mediator services. To do so, BPEL or Mediator is required to use a reserved JCA header property jca.jndi as shown in the following.

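The idea looks roughly like the hedged sketch below (BPEL 1.1 syntax): two invoke activities on the same partner link, each overriding the connection factory with the jca.jndi property. The activity and variable names are illustrative.

<invoke name="WriteToFTP1" partnerLink="FTPPut" operation="Put" inputVariable="FileMsg">
  <!-- route this invoke to the first FTP server -->
  <bpelx:inputProperty name="jca.jndi" value="eis/FTP/FTP1"/>
</invoke>

<invoke name="WriteToFTP2" partnerLink="FTPPut" operation="Put" inputVariable="FileMsg">
  <!-- route this invoke to the second FTP server -->
  <bpelx:inputProperty name="jca.jndi" value="eis/FTP/FTP2"/>
</invoke>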

If we carefully examine the above code, we can understand the following:
the two invoke activities point to the same reference 'FTPPut' and use the JCA property jca.jndi, which is assigned the required FTP connection-factory location JNDI.
This invokes different FTP servers to write the same file.

Note: the jca.jndi property is not available on the invoke activity's properties tab. We need to add it manually to the .bpel source file.

Saturday, September 21, 2013

UMS Email driver Errors

In this post, I would like to share two UMS errors and their solutions

Error#1

ID 1341107.1: Error status received from UMS.[[
Status detail :
         Status type : DELIVERY_TO_DRIVER:FAILURE,
         Status Content : No matching drivers found for sender address = EMAIL:soaadmin@example.com,
         Addressed to : EMAIL:user1@example.com,
         UMS Driver : null,
         UMS Message Id : 1823d47b0a58948725252c998e082228,
         Gateway message Id : null,
         Status Received at : Thu Jul 25 16:22:16 PDT 2013.
Check status details and fix the underlying reason, which caused error.

Cause: The UMS Driver for the appropriate channel(Email) is configured with a specific list of
SenderAddresses, and the message sent by the application has a Sender Address that does not
match.

Remedy: Make the Sender Address blank. This will ensure that all outbound messages will be sent
regardless of the sender address.

Error#2

Error Message: ORABPEL-31023 - Cannot send email notification to address. The
address is being marked as invalid and no further notification would be sent to this address.
 

Cause: Due to a number of previous failed send attempts to an email address, SOA Suite has
added the address to the Bad Address List, thus preventing subsequent sends to this address.
The previous failed attempts may have been caused by a configuration issue.

Remedy: Remove the email address from the Bad Address List. This can be done by selecting View
Bad Addresses from the Notification Management page in Enterprise Manager. When the Bad Address
List dialog box is displayed, remove the email address from this list.

Thursday, September 19, 2013

java.sql.SQLException: ORA-25215: user_data type and queue type do not match

I would like to discuss possible causes that might throw the error in the subject.

The above error comes up when the AQ adapter is used, looking at it from the middleware point of view.

As the error suggests, the first obvious reason is that the AQ user data type in Oracle Apps is different from what the AQ adapter is expecting in the SOA composite. If you change the AQ data type after deploying the composite, re-run the AQ adapter wizard to sync up the AQ data type definitions.

Even after making sure all of the above is correct, if we still see the error then the following case also causes it:

If we drop and recreate the AQ/data type definitions in Oracle Apps, the SOA log throws errors for some time and then stops. SOA should recover from this error after a while, or a DBA can flush the database cache to recover more quickly.

Hope this helps

Sunday, September 15, 2013

XML-22036: (Error) Cannot convert result tree fragment to NodeSet

In this post we talk about the context of the error in the subject.

The XSLT 1.0 spec has a data type called 'result tree fragment', and this data type is introduced by XSLT variables. A result tree fragment is treated equivalently to a node-set that contains just a single root node. However, the operations permitted on a result tree fragment are a subset of those permitted on a node-set. In particular, it is not permitted to use the /, //, and [] operators on result tree fragments.

For example, an XSLT like the one below is a candidate for the mentioned error

 <xsl:stylesheet version="1.0" ...>
   <xsl:param name="InputVariable.payload"/>
   <xsl:template match="/">
     <top:ResultTreeCollection>
       <top:Application>
         <xsl:value-of select="$InputVariable.payload/ns1:Applications/ns1:ApplicationName"/>
       </top:Application>
...
</xsl:stylesheet>

In the above XSLT, we are trying to access a BPEL variable called "InputVariable" (which has the result tree fragment data type) and applying the '/' operator to it, which is not allowed as per the XSLT 1.0 spec.

There are two solutions possible
  • Using Oracle xslt extension function node-set() which converts result tree fragment to node set
  • Use XSLT 2.0 version
The simplest solution is to use XSLT 2.0, that is, <xsl:stylesheet version="2.0">. Version 2.0 drops the result tree fragment data type and implicitly treats such a variable as a node-set, on which the /, //, and [] operations are allowed safely.

The same error can come up when an xsl:variable is defined to hold a sequence of nodes, for example <xsl:variable name="Applications" select="./ApplicationsList" />.

Friday, September 6, 2013

Weblogic JMS Message Throttling

In this post, we will see how we can set up WebLogic JMS message throttling.
There are three possible setups to implement this feature
  1. Flow control (used for message producer)
  2. JCA property(used for message consumer)
  3. Work managers(used at thread level)
We need not implement all three but which one to use depends upon requirements

Message throttling setup is really required for following reasons
  1. To prevent resource monopolizing
  2. To maintain good server health, ultimately preventing unexpected server downs
  3. Preventing resource starvation
For example, let's take a very common scenario in IT environments. If the message consumer is down for routine maintenance and the producer is up and producing a huge number of messages, then by the time the consumer comes back online there will be a huge number of messages piled up.
If no flow control is enabled then, by default, WLS creates many threads to meet the demand, which in turn start consuming all server resources just to process the pending JMS messages. This ultimately leads to resource starvation for other threads/processes running in the server.


Steps required to setup flow control

This can be achieved by two level setups
  1. Threshold setup on the queue under the JMS module (here we define the min/max thresholds which trigger message throttling)
  2. Connection factory flow control setup (here we define how the message producer should behave in case of a threshold event)

How it works?

When the thresholds defined on the destination are reached (for example, the maximum number of messages on a destination), and since we have enabled "flow control" on the connection factory, throttling kicks in automatically. Producers created using that CF are instructed to control their message production as per the numbers defined on the CF.

Once flow control kicks in, it instructs the message producer how fast or slow it has to produce messages to match the consumer's speed. No instructions are provided to control the consumer's speed.

Flow Maximum: indicates how many messages a producer can generate per second when the system is under a flow control event.

Flow Minimum: the system will not slow down the producer further if it is already generating messages per second at this value.

Flow Interval: the number of seconds over which the producer comes down from Flow Maximum to Flow Minimum when the system is overloaded. It is a cool-down period: instead of a rapid decrease in production, the producer slows down steadily, giving a smooth transition.

Flow Steps: a second level of smooth transition; the number of steps in which the flow interval adjustment is applied.
For example, if the flow interval is 60 seconds and flow steps is 10, then 60 divided by 10 = 6,
so the producer is adjusted in 10 steps of 6 seconds each.
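For illustration, the two pieces of configuration map onto a JMS module descriptor roughly as sketched below. The module, destination and connection factory names and all numbers are assumptions, and in practice these settings are usually made from the WLS console (the destination's thresholds page and the connection factory's Flow Control page) rather than by hand-editing the descriptor.

<!-- Hypothetical excerpt from a JMS module descriptor such as MyModule-jms.xml -->
<queue name="OrderQueue">
  <jndi-name>jms/OrderQueue</jndi-name>
  <thresholds>
    <messages-high>10000</messages-high>
    <messages-low>2000</messages-low>
  </thresholds>
</queue>

<connection-factory name="OrderCF">
  <jndi-name>jms/OrderCF</jndi-name>
  <flow-control-params>
    <flow-maximum>500</flow-maximum>
    <flow-minimum>50</flow-minimum>
    <flow-interval>60</flow-interval>
    <flow-steps>10</flow-steps>
    <flow-control-enabled>true</flow-control-enabled>
  </flow-control-params>
</connection-factory>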

Throttling messages from consumer side
In a clustered environment, set the following property in the consumer's JMS JCA file
<property name="minimumDelayBetweenMessages">10000</property>

The delay is specified in milliseconds.
The above property causes the JMS adapter to post messages to the composite keeping a 10000 ms (10 second) gap between each message consumption.

Work managers
Though I have never tried this option, I believe we can also throttle message processing using work managers. Work managers are WebLogic's way of expressing how client requests should be scheduled and processed.
Work managers can be defined and associated with deployment descriptors at various levels such as the domain (config.xml), application (weblogic-application.xml), EJB module (weblogic-ejb-jar.xml) and web application (weblogic.xml).

As of version 11.1.1.6, we cannot associate work managers with BPEL composites, but they can be assigned to OSB proxy and business services.
I would recommend exploring this option too.

Hope this helps

Tuesday, August 27, 2013

Transaction propagation from OSB to composite

Transaction context is not available by default between OSB and composite calls unless special care is taken.
OSB transaction scope/boundary depends upon below
  1. Transport provider
  2. QOS
Not all transport providers and QOS semantics support transactions in OSB, so we need to be deliberate about the configuration to achieve them.

In this post, let's explore what is required to achieve this. We will take a particular scenario and explain the steps.
The scenario: an OSB proxy consumes messages from a JMS queue and invokes BPEL. In case of any failure, either in OSB or in BPEL, the transaction should be rolled back and the message should be available in the JMS queue.

To make the above scenario statement true, we need to develop the services according to the principles below.

OSB design
  • Use a JMS XA connection factory on the proxy service
  • Enable transactions on the OSB proxy's message handling to make sure messages are rolled back to the JMS queue. This setting essentially makes the proxy a sync service regardless of its original pattern
  • If OSB proxies have error handler stages, use the Raise Error action from the error stage
  • Make sure all publish/service callout actions in the proxy flow have 'exactly-once' QOS, otherwise they will be executed outside of the transaction context
  • Do not use "Reply with success or failure" from error handlers. This action will commit the transaction
  • Create the OSB business service with the SOA-DIRECT transport, which calls the composite. Only the SOA-DIRECT transport supports transaction propagation

Composite design
  • Create the composite with a direct binding service exposure. Composite direct binding is compliant with SOA-DIRECT and invokes the composite endpoint via RMI using the t3/t3s protocol
  • If BPEL has a catch/catch-all, then throw a rollback fault to the caller. If BPEL doesn't have a catch/catch-all, the system level fault handler will roll the transaction back to the caller
  • If the composite has fault policies, the transaction might not be rolled back to OSB in case of BPEL faults, so do not use fault policies when planning for transaction propagation. Fault policies execute in their own thread and transaction, suspending the existing transaction
  • Set the component property bpel.config.transaction to "required" (see the sketch after this list)
  • If we want to perform some tasks (DB, JMS) in BPEL outside of the transaction scope, use non-XA connections
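For illustration, the bpel.config.transaction property goes on the BPEL component in composite.xml, roughly as below. The component and file names are placeholders.

<component name="OrderProcessor">
<implementation.bpel src="OrderProcessor.bpel"/>
<!-- join the caller's transaction instead of starting a new one -->
<property name="bpel.config.transaction">required</property>
</component>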

Tuesday, June 25, 2013

Oracle JMS Sync/Async Request-Reply Pattern

 We can use the Oracle SOA 11g JMS Adapter to implement the following interaction patterns
  • synchronous request reply interaction pattern
  • asynchronous request reply interaction pattern 

Synchronous request reply interaction pattern

This pattern allows the Oracle JMS Adapter to send a request to the request JMS queue and wait for a response from the reply JMS queue before further execution continues.
The adapter sets the JMSReplyTo header to the reply destination. This value is then used by the message consumer to send the message to the reply destination, which is then dequeued by the Oracle JMS Adapter so that processing can continue.
Behind the scenes, the Oracle JMS Adapter uses an interaction pattern called JmsRequestReplyInteractionSpec.
In this case the JMS adapter framework automatically sets the JMSReplyTo header on the request message to the reply queue (for example jms/ExceptionRespQueue) and waits for the message on that queue. It is the message consumer's responsibility to read the JMSReplyTo header and send the message to the destination mentioned in the header.

JMS adapter wizard showing Request/Reply operation modelling


Above wizard generates below JCA file

<adapter-config name="SyncRequest-Reply" adapter="JMS Adapter" wsdlLocation="SyncRequest_Reply.wsdl" xmlns="http://platform.integration.oracle/blocks/adapter/fw/metadata">
<connection-factory location="eis/wls/Queue" UIJmsProvider="WLSJMS" UiOperationMode="Synchronous" UIConnectionName="Dev2"/>
<endpoint-interaction portType="Request_Reply_ptt" operation="Request_Reply" UITransmissionPrimitive="Request-response">

<interaction-spec className="oracle.tip.adapter.jms.outbound.JmsRequestReplyInteractionSpec">
<property name="TimeToLive" value="0"/>
<property name="PayloadType" value="TextMessage"/>
<property name="DeliveryMode" value="Persistent"/>
<property name="ReplyDestinationName" value="jms/b2b/B2B_IN_QUEUE"/>
<property name="RequestDestinationName" value="jms/b2b/B2B_OUT_QUEUE"/>
</interaction-spec>
</endpoint-interaction>
</adapter-config>

Oracle recommendations for this pattern

  • Oracle suggests that when using the Oracle JMS Adapter in a synchronous pattern you ensure you use a non-XA connection factory and set the connection factory's IsTransacted property to true in weblogic-ra.xml (a sketch follows this list).
  • The connection factory must be weblogic.jms.ConnectionFactory or any other non-XA connection factory.
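For illustration, the relevant connection pool entry in the JMS adapter's weblogic-ra.xml (or, more commonly, in a deployment plan or via the console on the JmsAdapter deployment) might look roughly like this. The JNDI name is a placeholder, and only the two properties being discussed are shown.

<connection-instance>
  <jndi-name>eis/wls/MyNonXAQueue</jndi-name>
  <connection-properties>
    <properties>
      <property>
        <name>ConnectionFactoryLocation</name>
        <value>weblogic.jms.ConnectionFactory</value>
      </property>
      <property>
        <name>IsTransacted</name>
        <value>true</value>
      </property>
    </properties>
  </connection-properties>
</connection-instance>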
If we get into the BPEL design, we will have an invoke activity pointing to the JMS adapter reference and invoking the Request_Reply operation; the invoke activity is blocked from further execution until it receives the response from the response queue.

 Asynchronous request reply interaction pattern


We can use the Adapter configuration wizard to model a process that allows Oracle JMS Adapter to be used in an asynchronous request reply interaction pattern.

This pattern allows an Oracle JMS Adapter to send a message to a JMS destination.
When sending the message, it sets the JMSReplyTo header on the request message to the reply destination's JNDI name.
The consumer reads the header on the message and replies to that destination.
When a message is received on the reply queue, the Oracle JMS Adapter is able to route the message to the correct composite or component instance. The correlation is done based on the JMSMessageID of the request message, which becomes the JMSCorrelationID of the reply message, and the conversation ID of the underlying component.

JMS adapter wizard showing Async Request/Reply operation modelling






The above wizard generates the JCA file below. Note that two specs are generated: an activation spec for consuming the reply and an interaction spec for producing the request.

<adapter-config name="Async-Request-Reply" adapter="JMS Adapter" wsdlLocation="Async_Request_Reply.wsdl" xmlns="http://platform.integration.oracle/blocks/adapter/fw/metadata">

<connection-factory location="eis/wls/Queue" UIJmsProvider="WLSJMS" UiOperationMode="Asynchronous" UIConnectionName="Dev2"/>

<endpoint-activation portType="Reply_ptt" operation="Reply" UITransmissionPrimitive="Request-response">
<activation-spec className="oracle.tip.adapter.jms.inbound.JmsConsumeActivationSpec">
<property name="PayloadType" value="TextMessage"/>
<property name="UseMessageListener" value="false"/>
<property name="DestinationName" value="jms/b2b/B2B_OUT_QUEUE"/>
</activation-spec>
</endpoint-activation>

<endpoint-interaction portType="Request_ptt" operation="Request" UITransmissionPrimitive="Request-response">
<interaction-spec className="oracle.tip.adapter.jms.outbound.JmsProduceInteractionSpec">
<property name="TimeToLive" value="0"/>
<property name="PayloadType" value="TextMessage"/>
<property name="DeliveryMode" value="Persistent"/>
<property name="DestinationName" value="jms/b2b/B2B_IN_QUEUE"/>
</interaction-spec>
</endpoint-interaction>
</adapter-config>

If we get into the BPEL design, we will have an invoke activity pointing to the JMS adapter reference and invoking the Request operation, and we need to use a receive activity pointing to the same JMS adapter reference but waiting on the Reply operation.

Accessing Remote WLS Queues and Topics

In this post, let's throw some light on how Oracle SOA can access JMS destinations deployed on a remote WebLogic domain.

Oracle JMS Adapter can be used to access remote WLS JMS destinations. Remote destinations refer to queues/topics that are available in a WLS JMS server, which is part of a remote Oracle WebLogic Server domain.
To enable Oracle JMS Adapter to read/write from/to a remote queue that is present in a remote WLS JMS server, we must configure the following:


1.You must have a unique domain name and JMS server name in both the servers.
2.You must enable global trust between the two servers.

Refer to the following link for information about how to enable global trust between servers:

http://download.oracle.com/docs/cd/E13222_01/wls/docs100/ConsoleHelp/taskhelp/security/EnableGlobalTrustBetweenDomains.html
This configuration is appropriate when you connect to queues or topics present in WLS9.2 server.

There are two options supported that enable you to access remote destinations via the JMS adapter:

Direct Access

Direct access is defined via specification of the FactoryProperties property in the weblogic-ra.xml file, with access parameters indicating the remote domain.

FactoryProperties

java.naming.factory.initial=weblogic.jndi.WLInitialContextFactory;
java.naming.provider.url=t3://<remote-host>:<remote-port>;
java.naming.security.principal=<remote-user>;
java.naming.security.credentials=<remote-password>

The disadvantage of this approach is that the security credentials are stored in plain text.

Foreign JMS server

Configure the foreign server to access the remote domain.
Please refer to Oracle documentation for this
http://docs.oracle.com/cd/E23943_01/apirefs.1111/e13952/taskhelp/jms_modules/foreign_servers/ConfigureForeignServers.html


Notes:

  1. For inbound use cases, both options are supported. For outbound use cases, only direct access is supported; configuring a foreign server is not supported.
  2. The JMS Adapter enables you to interact with WebLogic Server JMS destinations in a domain that is remote to the WebLogic Server domain where SOA is installed. 
There are two more options to move JMS messages between remote destinations. These options have nothing to do with the JCA JMS adapter; they are WLS administrative objects.

  1. SAF(Store and Forward Agents)
  2. Bridges
SAF: SAF lets a local producer reliably produce messages to a remote destination irrespective of the destination's availability.
In the SAF scenario, when the producer produces a message and the remote destination is not reachable, the messages are stored locally and forwarded to the remote destination later, when it becomes available.

Bridge: this is similar to SAF, but Oracle's recommendation is to use it when connecting different messaging products (implemented by different vendors like Oracle and MS) or different implementations/versions of WLS messaging (like WLS 9.2 and WLS 10.3.x).

Wednesday, June 12, 2013

Oracle Database Adapter high availability/Scalability


Oracle Database Adapter can be configured for high availability in two models with in cluster environment
  1. Active-Active
  2. Active-Passive 
Active-Active setup.
In an active-active setup, distributed polling techniques can be used for inbound database adapters to ensure that the same data is not retrieved more than once.

One of the best practices for multiple Oracle database Adapter process instances deployed to multiple cluster nodes is to use the adapter configuration wizard to set the Distributed Polling check box in the Adapter

With an Oracle DB, this automatically causes the generated SELECT to be appended with FOR UPDATE SKIP LOCKED. Concurrent poller threads from each node trying to select and lock the available rows follow this behaviour:
if the current row is locked, the next unlocked row is locked and fetched instead. If many threads all execute the same polling query at the same time, they should all relatively quickly obtain a disjoint subset of unprocessed rows.

With a non-Oracle database, the SELECT FOR UPDATE safely ensures that the same row cannot be processed multiple times; however, you should consider additional measures like using a partition field or the singleton property as a best practice.

Active-Passive :
We use the singleton property in composite.xml for an active-passive setup. This allows a multi-threaded inbound Oracle Database Adapter instance to follow a fan-out pattern and invoke multiple composite instances across a cluster. The Oracle DB Adapter supports the high availability feature when there is a database failure, restart or crash: the DB adapter picks up again without any message loss.

This is essentially multi-threading on a single node with fan-out to multiple composite instances running on multiple nodes

Singleton (active/passive) is applicable only to inbound endpoint adapters.
Set this property in composite.xml within the <binding.jca> element of the service and set it to true, as shown in the example.
This property can be set for any JCA adapter.

Example:
<service name="databasepoll" ui:wsdlLocation="databasepoll.wsdl">
<interface.wsdl interface="http://xmlns.oracle.com/...#wsdl.interface(Subscr_ptt)"/>
<binding.jca config="databasepoll_db.jca">
<property name="singleton">true</property>
</binding.jca>
</service>

Note:This property is not applicable to outbound adapters

How singleton works: if multiple activations of the same adapter endpoint for the same composite service are identified in an Oracle WebLogic cluster, only one activation is allowed to start reading or publishing messages. The JCA binding component instances randomly choose one of the activations to carry the primary activation responsibility, and all others remain standby.

If the primary activation at some point becomes unresponsive, unavailable or crashes, the
remaining JCA binding component members of the Oracle WLS cluster immediately detect the deactivation and reassign the primary activation responsibility to one of the available endpoints.

For scalability, consider setting the properties below in the adapter's _db.jca file:
set MaxTransactionSize, and increase concurrency by setting the NumberOfThreads property, for example as sketched here.
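A hedged sketch of the inbound activation spec in the _db.jca file with these two properties set. The descriptor and query names and the values are placeholders.

<activation-spec className="oracle.tip.adapter.db.DBActivationSpec">
<property name="DescriptorName" value="dbPoll.OrderStatus"/>
<property name="QueryName" value="dbPollSelect"/>
<property name="PollingInterval" value="10"/>
<property name="MaxRaiseSize" value="1"/>
<property name="MaxTransactionSize" value="10"/>
<property name="NumberOfThreads" value="4"/>
</activation-spec>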

Saturday, June 8, 2013

Exceeded maximum number of subscribers for queue

In SOA 11.1.1.6, when someone sees an error like "java.sql.SQLException: ORA-24067: exceeded maximum number of subscribers for queue APPS.SAMPLE_QUEUE" in the SOA log, then I suggest following the steps below.
This generally occurs in a clustered environment; a standalone installation should work fine.

Cause: An attempt was made to add new subscribers to the specified queue, but the number of subscribers for this queue has exceeded the maximum number (1024) of subscribers allowed per queue.

Why would someone create this many subscribers?
The answer is obviously that nobody wants to do it intentionally, but it might happen without the developer even knowing. Let's say you have PL/SQL code like the one below to create subscribers, and you deploy the code multiple times: each deployment creates one subscriber. Please check for this kind of code section.

dbms_aqadm.add_subscriber(queue_name => 'SAMPLE_QUEUE'
                         ,subscriber => TempSubscriber);

Solution: If this issue occurs, you have to delete the AQ table and the AQ in your schema and recreate them.

I have also seen another issue when PL/SQL code creates a subscriber as above and a BPEL 11g service is also deployed with a durable subscriber set in the JMS JCA file.

The error I see in the above case is

BINDING.JCA-12134
ERRJMS_ERR_CR_TOPIC_CONS.
ERRJMS_ERR_CR_TOPIC_CONS.
Unable to create Topic consumer due to JMSException.
Caused by: oracle.jms.AQjmsException: JMS-230: Illegal operation on durable subscription with active TopicSubscriber

Tuesday, June 4, 2013

Oracle Service Bus Inbound Custom Authentication

In addition to supporting standard authentication mechanisms using security policies and HTTP basic authentication, OSB also supports a custom authentication mechanism for inbound requests.

Authentication can be implemented at two levels
  • Transport level
  • Message level
Transport level:

The transport level authentication mechanism resorts to sending authentication information in protocol headers, and authentication providers validate the header
  • Custom token in an HTTP header
Message level:
The message level authentication mechanism uses either the actual business XML payload or a SOAP header to store the authentication data. Oracle Service Bus accepts and attempts to authenticate a username and password passed in a SOAP header/XML payload
  •  For SOAP protocol based proxy services
                  1.Custom token in a SOAP header
                  2.Username/Password in a SOAP header
  • For non-SOAP protocol based proxy services
                 1.Custom token in the payload of any XML-based proxy services
                 2.Username/Password in the payload of any XML-based proxy services

Let's talk about what is custom authentication token

Custom Authentication Token:

An authentication token is some kind of data, represented as XML or a string, that identifies an entity, such as an X509 client certificate. Typically, authentication tokens are designed to be used within specific security protocols.
A custom authentication token is an identity assertion token in a user-defined location in the request. An identity assertion token is allowed in an HTTP header, in a SOAP header (for SOAP-based services), or in the payload of some non-SOAP proxy services. The Oracle Service Bus domain must include an Identity Assertion provider that supports the token type and maps the client's credential to an Oracle Service Bus user. Oracle Service Bus uses the resulting username to establish a security context for the client.


Oracle Service Bus uses the authenticated user to establish a security context for the caller. The security context established by authenticating a custom token or username and password can be used as the basis for outbound credential mapping and access control.
In this post I am going to talk about message level authentication(Username/Password in a SOAP header) 

In this mechanism the client passes username/password information in a SOAP header. Remember that this information can be passed in any identifiable XML elements and the elements need not be named UserName and PassWord.

Assume that we have designed a service to accept the request below, and let's see how we design the proxy service to accept and process the authentication header.

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
xmlns:ns1="http://www.oracle.com">
<soapenv:Header>
<ns1:AuthenticationHeader>
<ns1:UserName>user123</ns1:UserName>
<ns1:PassWord>password123</ns1:PassWord>
</ns1:AuthenticationHeader>
</soapenv:Header>
<soapenv:Body>
</soapenv:Body>
</soapenv:Envelope>

Proxy service configuration to accept the authentication header
  1. Create a WSDL based proxy service
  2. Go to  Security tab
  3. Go to Custom Authentication section and select 'Custom User Name and Password' as Authentication Type
  4. In User Name XPath, enter on a single line:  declare namespace ns1="http://www.oracle.com";./ns1:AuthenticationHeader/ns1:UserName/text()
  5. In User Password XPath, enter on a single line:  declare namespace ns1="http://www.oracle.com";./ns1:AuthenticationHeader/ns1:PassWord/text()
  6. Leave Context Properties  empty

user123 has to be configured in the appropriate authentication provider of WLS.
Test the proxy service by passing the above sample request; OSB will authenticate the user and establish the security context.
Also try testing by passing a wrong username/password in the SOAP header; you will then see authentication failure errors.

HttpOutboundMessageContext.RetrieveHttpResponseWork.run: java.lang.ArrayIndexOutOfBoundsException

When anyone sees the error below in the OSB logs, they need to think about applying a patch.

[WliSbTransports:381304]Exception in HttpOutboundMessageContext.RetrieveHttpResponseWork.run: java.lang.ArrayIndexOutOfBoundsException
at weblogic.utils.io.ChunkedInputStream.read

This error seems to happen when SSL is configured, and there is no clarity/explanation from Oracle on why it is thrown.
One possible explanation is that, after encryption with the SSL cipher suite, the payload becomes too large.


Please check metalink with bug# 14747231 and apply the patch. This patch resolved the issue

Friday, May 31, 2013

Duplicate JMS message consumption in cluster


We come across a common situation where we have duplicate JMS message consumption from BPEL when there is a cluster.


For example, if we have a cluster of 3 nodes and we deploy a BPEL process to consume a message, there will be three BPEL instances processing the same JMS message. This is because of the nature of how distributed topics combined with a clustered environment work.

Obviously nobody wants to process the same message multiple times.
To make sure only one node processes the message, we need to set the property below in the JMS .jca file.

<property name="DurableSubscriber" value="UniqueSubscriberName"/>

This property is used to identify a durable subscription. When working with durable subscriptions, ensure that a ClientID is also specified in addition to the DurableSubscriber property. The ClientID is specified as part of the FactoryProperties property when defining a JMS adapter managed connection factory instance.
WebLogic JMS expects the combination of durable subscription + ClientID to be unique across the durable subscriptions; just the client ID or the durable subscriber property alone does not suffice.

To set ClientID on deployment JNDI, follow below steps

1. Log in to the WLS console.
2. Navigate to Deployments and select "JmsAdapter" from the right hand side.
3. On the configuration tab, expand oracle.tip.adapter.jms.IJmsConnectionFactory and select your connection factory (e.g. "eis/wls/Topic").
4. In "FactoryProperties", add the ID (e.g. "ClientID=SOAClientSubscription1;").
5. Click save, follow the onscreen instructions, and remember to update the JmsAdapter deployment in order for the changes to take effect.

Tuesday, May 14, 2013

[OSB Kernel:398139] The binding type of service is based on wsdl, the service should have ws-policy configuration

If anybody has encountered the error message below when importing OSB projects into sbconsole, there is no need to panic; we have a solution to correct it.

Error
[OSB Kernel:398139] The binding type of service is based on wsdl, the service should have ws-policy configuration

Cause
This happens when we have an existing OSB service with a different security policy setup than the one being imported.

I would like to discuss one of the possible causes here.

Let's say the existing service in the server looks like this (policies are referenced from the WSDL)

and the one being imported has the security policy configuration below (policies are referenced from the OWSM store)

This will conflict

Solutions


  1. A simple solution would be to remove the existing service and import the new one
  2. Import the new OSB code with the WSDL chosen (if the WSDL already references the policy)
  3. Change the existing service in OSB to refer to policies from OWSM

Hope this would help

Wednesday, May 8, 2013

BPEL 11g Default JMS Properties

When I was working on a project I observed that BPEL 11g puts some useful JMS message properties on messages produced from BPEL to a JMS queue/topic.

The following are the JMS message properties that the BPEL engine copies automatically

tracking_ecid
tracking_compositeInstanceId
tracking_parentComponentInstanceId
tracking_conversationId

We can use the above properties in many ways, for example to solve certain correlation problems between the BPEL instance that generated a message and the applications that consume it.

For example, say we build a reporting application which reads messages from a JMS destination and reports them onto dashboards. If we want to display the composite instance id that generated the message, we can simply use this property.




Monday, February 18, 2013

Changing Jdev SVN repository credentials when username/password changed

JDeveloper caches SVN repository credentials when a versioning application (SVN) is configured.
There is no easy way to update the username/password when the credentials are changed.
You might think you can create a new connection with the changed credentials, but JDeveloper doesn't allow you to create one more connection with the same repository URL.

The only way I know of is clearing the JDeveloper cache.

Follow below steps to clear Jdev cache.

  • Stop Jdeveloper
  • Locate your cached svn credential in the C:\Users\<username>\AppData\Roaming\JDeveloper\system11.1.1.6.38.61.92\o.jdeveloper.subversion\repository.xml
  • Rename the repository.xml
  • Recreate your connection to the repository in Jdeveloper
  • The repository.xml is recreated with correct credentials
  • Start the jdeveloper
Another option is to go directly to the above path, open repository.xml, put the changed credential details there and start JDeveloper; but the problem with this option is that the password is stored in plain text, which is not a good practice.
If you wanted to verify certain SVN parameters then please go to

Tools -> Preferences -> Versioning -> General -> Edit "server"

Saturday, February 2, 2013

schemaLocation vs xsi:schemaLocation attributes usage

In this post let's talk about the usage and significance of the schemaLocation and xsi:schemaLocation attributes used with XML Schema.
Remember that the two attributes have the same local name but come from different namespaces, so they are used for different purposes.
There are three circumstances for using these attributes.

Scenario 1: xsi:schemaLocation in an XML instance document

This attribute is a hint to the XML processor from the document author regarding where to find the schema documents. These schema documents are used to check the validity of the document content, on a namespace by namespace basis. For example, we can indicate the location of the Order schema to a processor of the Order XML document.

In the example below the processor may contact the given location to download the schema and use it (the Order root element is used here just to show the attribute in context):
<Order xmlns="http://www.oracle.com/Order"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.oracle.com/Order
                           http://www.oracle.com/Order.xsd">


The schemaLocation attribute value consists of one or more pairs of URI references, separated by white space. The first part of each pair is a namespace name, and the second part of the pair is a URI describing where to find an appropriate schema document for that namespace. The presence of these hints does not mean that the processor has to obtain or use the cited schema documents; the processor is free to use other schemas obtained by any suitable means, like loading from classpath jars, or to use no schema at all.

There is an interesting error I have seen when the URI in this attribute is not reachable.

Error

[name of xml file being processed] is invalid; nested exception is oracle.xml.parser.schema.XSDException: Network is unreachable
        at oracle.xml.parser.v2.XMLError.flushErrorHandler(XMLError.java:425)
        at oracle.xml.parser.v2.XMLError.flushErrors1(XMLError.java:287)
        at oracle.xml.parser.v2.NonValidatingParser.parseDocument(NonValidatingParser.java:376)
        at oracle.xml.parser.v2.XMLParser.parse(XMLParser.java:226)
        at oracle.xml.jaxp.JXDocumentBuilder.parse(JXDocumentBuilder.java:155)
        at org.springframework.beans.factory.xml.DefaultDocumentLoader.loadDocument(DefaultDocumentLoader.java:75)
        at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.doLoadBeanDefinitions(XmlBeanDefinitionReader.java:396)
        ... 71 more
Caused by: oracle.xml.parser.schema.XSDException: Network is unreachable
        at oracle.xml.parser.schema.XSDBuilder.build(XSDBuilder.java:652)
        at oracle.xml.parser.schema.XSDValidator.processSchemaLocation(XSDValidator.java:1003)
        at oracle.xml.parser.schema.XSDValidator.startElement(XSDValidator.java:604)
        at oracle.xml.parser.v2.NonValidatingParser.parseElement(NonValidatingParser.java:1524)
        at oracle.xml.parser.v2.NonValidatingParser.parseRootElement(NonValidatingParser.java:409)
        at oracle.xml.parser.v2.NonValidatingParser.parseDocument(NonValidatingParser.java:355)
        ... 75 more

The solution for this issue is to remove xsi:schemaLocation from the XML instance before the processor starts processing it.

Scenario 2: schemaLocation on include in an XSD


In a schema, the include element has a required schemaLocation attribute, and it contains a URI reference which must resolve to an accessible schema document. The effect is to compose a final effective schema by merging the declarations and definitions of the including and the included schemas.
Unlike xsi:schemaLocation this is not a hint; it is a directive to the processor, and failure to connect to the given URI causes an error.

Scenario 3: schemaLocation on import in an XSD

The import element in a schema also uses optional namespace and schemaLocation attributes.

The difference between include and import is that include is used to load XML element/type definitions from the same target namespace as the including document, while
import is used to load XML elements/types from a different namespace than the importing document, for example:
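A minimal sketch of both forms inside a schema document; the file names and the Customer namespace are illustrative.

<!-- include: pulls in definitions that share this schema's target namespace -->
<xsd:include schemaLocation="OrderLines.xsd"/>

<!-- import: pulls in definitions from a different namespace -->
<xsd:import namespace="http://www.oracle.com/Customer" schemaLocation="Customer.xsd"/>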


Friday, February 1, 2013

BPEL With SFTP Error in establishing a session with SSH Server


The BPEL FTP adapter can be configured with the SSH protocol in two ways
  • username/password authentication 
  • public key based authentication
When you see the errors below in the log, there is a reason for it.

oracle.tip.adapter.ftp.SshImpl.SSHSessionImpl.setUpPasswordSocketConnection
The SSH API threw an exception.
"BINDING.JCA-11447"
Error in establishing a session with SSH Server..

Why this error?

There are two reasons I am aware of 
  1. When the daemon that accepts connections on the FTP server is not running. If you identify this as the issue, start the daemon, then shut down and start the polling composite; otherwise files will not be polled even after the daemon is started. I believe the registered JCA endpoint gets deactivated after a few attempts.
  2. When we change FTP adapter deployment JNDI properties such as host name, port, authentication type etc., we update the adapter deployment expecting everything will be all right.
For all polling integrations, the SOA engine registers and activates a polling listener during server start up, process deployment, process activate/deactivate and shutdown/start up.

After updating the adapter deployment, we have an adapter instance with the modified values, but the agent listener is still activated with the old values.

Solution

It is very simple: deactivate and reactivate the process, which re-registers the agent listener with the modified values.
We will see the error below when we run the process without reactivating it.

SOA Error Log  
setting up session>
<Jan 30, 2013 2:19:12 AM PST> <Error> <oracle.soa.adapter> <BEA-000000> <FTP Adapter ProcessName
BINDING.JCA-11445
The SSH API threw an exception.
The SSH API threw an exception.
The SSH API threw an exception.
Maverick has not been setup properly. Please correct the setup.

        at oracle.tip.adapter.ftp.SshImpl.SSHSessionImpl.setUpPasswordSocketConnection(SSHSessionImpl.java:206)
        at oracle.tip.adapter.ftp.SshImpl.SSHSessionImpl.<init>(SSHSessionImpl.java:128)
        at oracle.tip.adapter.ftp.SshImpl.SshImplFactory.getSshImpl(SshImplFactory.java:26)
        at oracle.tip.adapter.ftp.SFTPManagedConnection.setupSftpConnection(SFTPManagedConnection.java:154)
        at oracle.tip.adapter.ftp.SFTPManagedConnection.<init>(SFTPManagedConnection.java:66)
        at oracle.tip.adapter.ftp.FTPManagedConnectionFactory.createManagedConnection(FTPManagedConnectionFactory.java:180)
        at weblogic.connector.security.layer.AdapterLayer.createManagedConnection(AdapterLayer.java:803)
        at weblogic.connector.outbound.ConnectionFactory.createResource(ConnectionFactory.java:91)
        at weblogic.common.resourcepool.ResourcePoolImpl.makeResources(ResourcePoolImpl.java:1310)
        at weblogic.common.resourcepool.ResourcePoolImpl.reserveResourceInternal(ResourcePoolImpl.java:419)
        at weblogic.common.resourcepool.ResourcePoolImpl.reserveResource(ResourcePoolImpl.java:343)
        at weblogic.common.resourcepool.ResourcePoolImpl.reserveResource(ResourcePoolImpl.java:323)
        at weblogic.connector.outbound.ConnectionPool.reserveResource(ConnectionPool.java:620)
        at weblogic.common.resourcepool.ResourcePoolImpl.reserveResource(ResourcePoolImpl.java:317)
        at weblogic.connector.outbound.ConnectionManagerImpl.getConnectionInfo(ConnectionManagerImpl.java:380)
        at weblogic.connector.outbound.ConnectionManagerImpl.getConnection(ConnectionManagerImpl.java:320)
        at weblogic.connector.outbound.ConnectionManagerImpl.allocateConnection(ConnectionManagerImpl.java:129)
        at oracle.tip.adapter.ftp.FTPConnectionFactory.getConnection(FTPConnectionFactory.java:102)
        at oracle.tip.adapter.ftp.SFTPAgent.preCall(SFTPAgent.java:1192)
        at oracle.tip.adapter.ftp.SFTPAgent.validateInputDir(SFTPAgent.java:758)
        at oracle.tip.adapter.ftp.inbound.FTPSource.validateInputDir(FTPSource.java:1189)
        at oracle.tip.adapter.ftp.inbound.FTPSource.revalidatePollingError(FTPSource.java:1357)
        at oracle.tip.adapter.file.inbound.PollWork.onAlert(PollWork.java:475)
        at oracle.tip.adapter.file.inbound.PollWork.run(PollWork.java:357)
        at oracle.integration.platform.blocks.executor.WorkManagerExecutor$1.run(WorkManagerExecutor.java:120)
        at weblogic.work.j2ee.J2EEWorkManager$WorkWithListener.run(J2EEWorkManager.java:184)
        at weblogic.work.DaemonWorkThread.run(DaemonWorkThread.java:30)
Caused by: com.maverick.ssh.SshException: Read timed out
        at com.maverick.ssh.SshConnector.A(Unknown Source)
        at com.maverick.ssh.SshConnector.connect(Unknown Source)
        at oracle.tip.adapter.ftp.SshImpl.SSHSessionImpl.setUpPasswordSocketConnection(SSHSessionImpl.java:194)
        ... 26 more
>
<Jan 30, 2013 2:19:12 AM PST> <Warning> <Connector> <BEA-190032> << eis/Ftp/FTPAdapter > ResourceAllocationException thrown by resource adapter on call to ManagedConnectionFactory.createManagedConnection(): "BINDING.JCA-11447
Error in establishing a session with SSH Server..
Error in establishing a session with SSH Server..
Unable to establish a session with the server.
Please ensure hostname and port specified to login to the server are correct.

Saturday, January 26, 2013

XAER_NOTA start() failed The XID is not valid

When you see an error something like 'XAER_NOTA: start() failed, the XID is not valid', do not worry, because it is related to the XA/JTA transaction timeout.

This issue occurs because the default XA transaction timeout on the XA resource, in this case the XA data source, is insufficient, causing a timeout. By default, the XA transaction timeout is 60 seconds. You can set "Set XA Transaction Timeout" to true for the XA data source and specify the "XA Transaction Timeout" value in seconds to increase this timeout.

Solution Steps
  • Log in to the WLS Administration Console
  • Click on Services -> Data Sources and then click on the data source you want to modify
  • Click on the Transaction tab
  • Check the box next to "Set XA Transaction Timeout"
  • Make sure that "XA Transaction Timeout" has a value of 0
  • Save the configuration
  • Bounce the SOA server
When "XA Transaction Timeout" is set to zero, the XA resource session timeout is set to the global transaction timeout, which is 600 seconds here.
Please refer to Oracle note ID 1352715.1
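For reference, the same two settings appear in the data source descriptor under config/jdbc; a sketch of the relevant fragment (element names follow the standard WebLogic jdbc-data-source schema, but verify against your own descriptor):

<jdbc-data-source xmlns="http://xmlns.oracle.com/weblogic/jdbc-data-source">
    <name>SOADataSource</name>
    <!-- driver, connection pool and other parameters omitted -->
    <jdbc-xa-params>
        <!-- corresponds to the "Set XA Transaction Timeout" check box -->
        <xa-set-transaction-timeout>true</xa-set-transaction-timeout>
        <!-- 0 means: fall back to the global JTA transaction timeout -->
        <xa-transaction-timeout>0</xa-transaction-timeout>
    </jdbc-xa-params>
</jdbc-data-source>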


The error log below may be seen when this issue happens.


SOA error log:

[soa_server1] [ERROR] [] [oracle.soa.bpel.engine.dispatch] [tid: orabpel.invoke.pool-4.thread-9] [userId: ] [ecid: 097b277aaf0a827d:-d54fd89:13c60a5bfc1:-8000-00000000001d808c,1:18696] [APP: soa-infra] database communication failure[[
java.sql.SQLException: Unexpected exception while enlisting XAConnection java.sql.SQLException: XA error: XAResource.XAER_NOTA start() failed on resource 'SOADataSource_<domain name>':

XAER_NOTA : The XID is not valid
oracle.jdbc.xa.OracleXAException
        at oracle.jdbc.xa.OracleXAResource.checkError(OracleXAResource.java:1616)
        at oracle.jdbc.xa.client.OracleXAResource.start(OracleXAResource.java:336)
        at weblogic.jdbc.jta.DataSource.start(DataSource.java:790)
        at weblogic.transaction.internal.XAServerResourceInfo.start(XAServerResourceInfo.java:1247)
        at weblogic.transaction.internal.XAServerResourceInfo.xaStart(XAServerResourceInfo.java:1180)
        at weblogic.transaction.internal.XAServerResourceInfo.enlist(XAServerResourceInfo.java:300)
        at weblogic.transaction.internal.ServerTransactionImpl.enlistResource(ServerTransactionImpl.java:561)
        at weblogic.transaction.internal.ServerTransactionImpl.enlistResource(ServerTransactionImpl.java:488)
        at weblogic.jdbc.jta.DataSource.enlist(DataSource.java:1673)
        at weblogic.jdbc.jta.DataSource.refreshXAConnAndEnlist(DataSource.java:1577)
        at weblogic.jdbc.wrapper.JTAConnection.getXAConn(JTAConnection.java:215)
        at weblogic.jdbc.wrapper.JTAConnection.checkConnection(JTAConnection.java:84)
        at weblogic.jdbc.wrapper.JTAConnection.checkConnection(JTAConnection.java:74)
        at weblogic.jdbc.wrapper.Connection.preInvocationHandler(Connection.java:100)
        at weblogic.jdbc.wrapper.Connection.prepareStatement(Connection.java:545)
        at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.prepareStatement(DatabaseAccessor.java:1474)
        at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.prepareStatement(DatabaseAccessor.java:1423)
        at org.eclipse.persistence.internal.databaseaccess.DatabaseCall.prepareStatement(DatabaseCall.java:697)
        at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.basicExecuteCall(DatabaseAccessor.java:585)
        at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.executeCall(DatabaseAccessor.java:535)
        at org.eclipse.persistence.internal.sessions.AbstractSession.basicExecuteCall(AbstractSession.java:1717)
        at org.eclipse.persistence.sessions.server.ClientSession.executeCall(ClientSession.java:253)
        at org.eclipse.persistence.internal.queries.DatasourceCallQueryMechanism.executeCall(DatasourceCallQueryMechanism.java:207)
        at org.eclipse.persistence.internal.queries.DatasourceCallQueryMechanism.executeCall(DatasourceCallQueryMechanism.java:193)
        at org.eclipse.persistence.internal.queries.DatasourceCallQueryMechanism.insertObject(DatasourceCallQueryMechanism.java:342)
        at org.eclipse.persistence.internal.queries.StatementQueryMechanism.insertObject(StatementQueryMechanism.java:162)
        at org.eclipse.persistence.internal.queries.StatementQueryMechanism.insertObject(StatementQueryMechanism.java:177)
        at org.eclipse.persistence.internal.queries.DatabaseQueryMechanism.insertObjectForWrite(DatabaseQueryMechanism.java:472)
        at org.eclipse.persistence.queries.InsertObjectQuery.executeCommit(InsertObjectQuery.java:80)
        at org.eclipse.persistence.queries.InsertObjectQuery.executeCommitWithChangeSet(InsertObjectQuery.java:90)
        at org.eclipse.persistence.internal.queries.DatabaseQueryMechanism.executeWriteWithChangeSet(DatabaseQueryMechanism.java:287)
        at org.eclipse.persistence.queries.WriteObjectQuery.executeDatabaseQuery(WriteObjectQuery.java:58)
        at org.eclipse.persistence.queries.DatabaseQuery.execute(DatabaseQuery.java:844)
        at org.eclipse.persistence.queries.DatabaseQuery.executeInUnitOfWork(DatabaseQuery.java:743)
        at org.eclipse.persistence.queries.ObjectLevelModifyQuery.executeInUnitOfWorkObjectLevelModifyQuery(ObjectLevelModifyQuery.java:108)
        at org.eclipse.persistence.queries.ObjectLevelModifyQuery.executeInUnitOfWork(ObjectLevelModifyQuery.java:85)
        at org.eclipse.persistence.internal.sessions.UnitOfWorkImpl.internalExecuteQuery(UnitOfWorkImpl.java:2871)
        at org.eclipse.persistence.internal.sessions.AbstractSession.executeQuery(AbstractSession.java:1516)
        at org.eclipse.persistence.internal.sessions.AbstractSession.executeQuery(AbstractSession.java:1498)