First a picture of the WebLogic Cluster configuration.
On the left side I have a plain Apache HTTP Server version 2.2 with the WebLogic plugin; this plugin forwards the HTTP requests to one of the WLS server instances. Then I have two machines on which WLS runs. On the first machine I run the admin server together with one of the cluster server instances, and the other cluster server instance runs on the second machine. Every server instance has its own JMS server, and the WLS cluster has one JMS module (this is where you define the distributed queues).
The JMS messages of the JMS servers, the HTTP session data and the WLS security realm configuration are all stored in the MySQL Cluster database. This in-memory database (a bit like Coherence, but with tables instead of Java objects) gives me highly available WLS data. Of course you can use Oracle RAC or a single database for this data, but I prefer MySQL Cluster because I just need a few tables and no PL/SQL, plus a MySQL Cluster is up and running in a few hours. You can also use file persistence, but then you need scripts or a cluster file system to handle a failover.
This cluster configuration has the following failover features.
- Apache forwards the HTTP request to one of the servers. Session data is stored in the database, so each server can handle the request.
- WLS server instance failover. Each server instance has a dynamic (floating) IP address; when a server instance fails, the node manager adds that IP address to the other machine and starts the WLS server instance there, so you always have two WLS server instances.
- WLS service failover. When, for example, a JMS server fails, that JMS server is started in the other WLS server instance, so you always have two JMS servers.
- MySQL Cluster node failure. WebLogic uses a multi data source for all this configuration, so when one MySQL Cluster node fails, the other data source is used to connect to the surviving MySQL Cluster node.
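The multi data source of the last bullet is just a WebLogic JDBC module that lists the two member data sources with a Failover algorithm. A minimal sketch of such a module, assuming two generic data sources called mysqlDS1 and mysqlDS2 (all names and the JNDI name here are illustrative, not from my actual configuration):

```xml
<!-- Sketch of a multi data source JDBC module; mysqlDS1/mysqlDS2 each
     point at one MySQL Cluster SQL node. Names are examples. -->
<jdbc-data-source xmlns="http://www.bea.com/ns/weblogic/jdbc-data-source">
  <name>mysqlMultiDS</name>
  <jdbc-data-source-params>
    <jndi-name>jdbc/mysqlMultiDS</jndi-name>
    <algorithm-type>Failover</algorithm-type>
    <data-source-list>mysqlDS1,mysqlDS2</data-source-list>
  </jdbc-data-source-params>
</jdbc-data-source>
```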
Here are some pictures showing how to configure high availability in WebLogic. First we need to configure the cluster for failover. In my case I select all the servers as candidate (migratable) servers and use Consensus as the migration basis.
Every WLS server instance needs its own migratable target. On this migratable target we add the JMS servers.
Enable automatic migration of recovery services on these migratable targets.
After this we need to assign a dynamic IP address to every WebLogic server instance: just select a server and set its IP address. Next we configure the WLS node manager on every machine and set the netmask and interface parameters in nodemanager.properties.
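For reference, the relevant nodemanager.properties entries might look like this (the interface name and netmask are example values for a typical Linux box, adjust them to your network):

```properties
# nodemanager.properties - example values for server migration
Interface=eth0
NetMask=255.255.255.0
# announce the migrated IP address on the network (Linux, uses arping)
UseMACBroadcast=true
```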
Now the wlsifconfig script can add the IP address to the network interface and the WLS server instance can be started.
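Under the hood the node manager calls the wlsifconfig script from WL_HOME/common/bin; roughly like this (the interface name and IP address below are just examples):

```shell
# add the floating IP to the network interface (example values)
./wlsifconfig.sh -addif eth0 192.168.1.50
# remove the floating IP again, e.g. after migrating back
./wlsifconfig.sh -removeif eth0 192.168.1.50
```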
The server configuration is ready; we can move on to the JMS part.
Step 1: configure the JDBC persistent stores and target them to the migratable targets.
Step 2: create the JMS servers on a migratable target; every JMS server gets its own JDBC persistent store.
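In config.xml terms, a JDBC store plus its JMS server on a migratable target comes down to something like this sketch ("(migratable)" is how WebLogic marks a migratable target; all names, the data source and the table prefix are made up for illustration):

```xml
<!-- Sketch only: store name, data source, prefix and server names
     are illustrative, not taken from the actual configuration. -->
<jdbc-store>
  <name>JmsJdbcStore1</name>
  <data-source>mysqlMultiDS</data-source>
  <prefix-name>jms1</prefix-name>
  <target>server1 (migratable)</target>
</jdbc-store>
<jms-server>
  <name>JmsServer1</name>
  <target>server1 (migratable)</target>
  <persistent-store>JmsJdbcStore1</persistent-store>
</jms-server>
```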
Create a JMS system module.
Create a connection factory and a distributed queue, both targeted to the cluster.
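The JMS module descriptor with the connection factory and a uniform distributed queue could be sketched like this (the resource names and JNDI names are examples):

```xml
<!-- Sketch of a JMS module descriptor; names and JNDI names are examples. -->
<weblogic-jms xmlns="http://www.bea.com/ns/weblogic/weblogic-jms">
  <connection-factory name="clusterCF">
    <jndi-name>jms/clusterCF</jndi-name>
  </connection-factory>
  <uniform-distributed-queue name="distributedQueue">
    <jndi-name>jms/distributedQueue</jndi-name>
  </uniform-distributed-queue>
</weblogic-jms>
```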
That's all for the JMS part.
The last part is to configure Apache and the enterprise applications for High Availability.
Install Apache 2.2 and download the WebLogic module for Apache here.
Edit httpd.conf and add the following lines:
LoadModule weblogic_module modules/mod_wl_22.so
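Besides the LoadModule line, the plugin needs to know the cluster members. A minimal sketch, assuming the two instances run on hosts wls1 and wls2 with listen port 7003 (host names and ports are examples, not my actual values):

```apache
# forward all requests to the WebLogic cluster (example hosts/ports)
<IfModule mod_weblogic.c>
  WebLogicCluster wls1:7003,wls2:7003
  DynamicServerList ON
</IfModule>
<Location />
  SetHandler weblogic-handler
</Location>
```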
Step 2 is to configure every enterprise application (an ADF app or a web service) with a WebLogic deployment descriptor, so that it stores its session data in the database.
<?xml version="1.0" encoding="ISO-8859-1"?>
<weblogic-application xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.bea.com/ns/weblogic/weblogic-application.xsd" xmlns="http://www.bea.com/ns/weblogic/weblogic-application">
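Completed with a JDBC session descriptor, such a weblogic-application.xml could look like this sketch (the connection pool name is an assumption; use the data source that points at your session database):

```xml
<?xml version="1.0" encoding="ISO-8859-1"?>
<weblogic-application xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.bea.com/ns/weblogic/weblogic-application.xsd"
    xmlns="http://www.bea.com/ns/weblogic/weblogic-application">
  <!-- store HTTP session state in the database; pool name is illustrative -->
  <session-descriptor>
    <persistent-store-type>jdbc</persistent-store-type>
    <persistent-store-pool>mysqlMultiDS</persistent-store-pool>
  </session-descriptor>
</weblogic-application>
```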