I have the following setup:
1. Liferay cluster with 2 machines on AWS
2. Unicast cluster replication with JGroups over TCP
I have the following parameters in the portal-ext.properties
#Setup hibernate
net.sf.ehcache.configurationResourceName=/myehcache/hibernate-clustered.xml
#Setup distributed ehcache
ehcache.multi.vm.config.location=/myehcache/liferay-multi-vm-clustered.xml
#
# Clustering settings
#
cluster.link.enabled=true
ehcache.cluster.link.replication.enabled=true
cluster.link.channel.properties.control=tcp.xml
cluster.link.channel.properties.transport.0=tcp.xml
lucene.replicate.write=true
#In order to make use of jgroups
ehcache.bootstrap.cache.loader.factory=com.liferay.portal.cache.ehcache.JGroupsBootstrapCacheLoaderFactory
ehcache.cache.event.listener.factory=net.sf.ehcache.distribution.jgroups.JGroupsCacheReplicatorFactory
ehcache.cache.manager.peer.provider.factory=net.sf.ehcache.distribution.jgroups.JGroupsCacheManagerPeerProviderFactory
net.sf.ehcache.configurationResourceName.peerProviderProperties=file=/myehcache/tcp.xml
ehcache.multi.vm.config.location.peerProviderProperties=file=/myehcache/tcp.xml
cluster.executor.debug.enabled=true
ehcache.statistics.enabled=true
I am not able to get cluster cache replication working. Can anybody point me in the right direction? I can post more details if needed. I also tried modifying hibernate-clustered.xml and liferay-multi-vm-clustered.xml, but nothing works.
After spending days reading countless blog posts, forum topics, and of course SO questions, I wanted to summarize here how we finally managed to configure cache replication in a Liferay 6.2 cluster, using unicast TCP to suit Amazon EC2.
Before configuring Liferay for cache replication, you must understand that Liferay relies on JGroups channels. Basically, JGroups lets an instance discover and communicate with remote instances. By default (at least in Liferay), it uses multicast UDP to achieve this. See the JGroups website for more.
To enable unicast TCP, you must first get JGroups' TCP configuration file from the jgroups.jar in the Liferay webapp (something like $LIFERAY_HOME/tomcat-7.0.42/webapps/ROOT/WEB-INF/lib/jgroups.jar). Extract this file to a place available on the Liferay webapp's classpath, say $LIFERAY_HOME/tomcat-7.0.42/webapps/ROOT/WEB-INF/classes/custom_jgroups/tcp.xml. Take note of this path.
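For example, with the unzip tool (paths assume the Tomcat bundle used above):
# Extract tcp.xml from the bundled jgroups.jar into the webapp classpath
cd $LIFERAY_HOME/tomcat-7.0.42/webapps/ROOT/WEB-INF
mkdir -p classes/custom_jgroups
unzip lib/jgroups.jar tcp.xml -d classes/custom_jgroups/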
For this configuration to work in a Liferay cluster, you just need to add a singleton_name="liferay" attribute to the TCP tag:
<config xmlns="urn:org:jgroups"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="urn:org:jgroups http://www.jgroups.org/schema/JGroups-3.1.xsd">
<TCP singleton_name="liferay"
bind_port="7800"
loopback="false"
...
You may have noticed that this configuration file does not specify a bind address on which to listen, and that the initial hosts of the cluster must be set through a system property (see the TCPPING excerpt below).
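For reference, the discovery section of the stock tcp.xml is where that system property is consumed; in JGroups 3.x it looks roughly like this (exact attributes vary by version):
<TCPPING timeout="3000"
         initial_hosts="${jgroups.tcpping.initial_hosts:localhost[7800],localhost[7801]}"
         port_range="1"
         num_initial_members="10"/>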
In fact, you need to modify $LIFERAY_HOME/tomcat-7.0.42/bin/setenv.sh to add the following JVM system properties:
-Djava.net.preferIPv4Stack=true
-Djgroups.bind_addr=192.168.0.1
-Djgroups.tcpping.initial_hosts=192.168.0.1[7800],80.200.230.2[7800]
The bind address defines which network interface to listen on (the JGroups port is set to 7800 in the TCP configuration file). The initial hosts property must contain every single instance of the cluster (for more on this, see TCPPING and MERGE2 in the JGroups docs), along with their listening ports. Remote instances may be referred to by their host names, local addresses or public addresses.
(Tip: if you are setting up a Liferay cluster on Amazon EC2, chances are the local IP address and host name of your instances are different after each reboot. To work around this, you can replace the local address in setenv.sh with the result of the hostname command, written as `hostname` -- notice the backticks.)
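Putting it together, a minimal sketch of the relevant part of setenv.sh, assuming the hostname trick above and keeping 80.200.230.2 as a placeholder for the other node's address:
# Force IPv4, bind JGroups to this machine, and list all cluster members
CATALINA_OPTS="$CATALINA_OPTS -Djava.net.preferIPv4Stack=true"
CATALINA_OPTS="$CATALINA_OPTS -Djgroups.bind_addr=`hostname`"
CATALINA_OPTS="$CATALINA_OPTS -Djgroups.tcpping.initial_hosts=`hostname`[7800],80.200.230.2[7800]"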
(Tip: if using security groups on EC2, you should also make sure to open port 7800 to all the instances in the same security group)
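If you manage security groups with the AWS CLI, a rule like the following opens port 7800 between members of the same group (sg-12345678 is a placeholder for your group's id):
# Allow TCP 7800 from other instances in the same security group
aws ec2 authorize-security-group-ingress \
    --group-id sg-12345678 \
    --protocol tcp \
    --port 7800 \
    --source-group sg-12345678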
JGroups replication is enabled on Liferay by adding the following properties to your portal-ext.properties:
# Tells Liferay to enable Cluster Link. This sets up JGroups control and transport channels (necessary for indexes and cache replication)
cluster.link.enabled=true
# This external address is used to determine which network interface must be used. This typically points to the database shared between the instances.
cluster.link.autodetect.address=shareddatabase.eu-west-1.rds.amazonaws.com:5432
Configuring JGroups for unicast TCP is just a matter of pointing to the right file:
# Configures JGroups control channel for unicast TCP
cluster.link.channel.properties.control=/custom_jgroups/tcp.xml
# Configures JGroups transport channel for unicast TCP
cluster.link.channel.properties.transport.0=/custom_jgroups/tcp.xml
In the same file, Lucene index replication requires this single property:
# Enable Lucene indexes replication through Cluster Link
lucene.replicate.write=true
EhCache cache replication is more subtle. You must configure JGroups for both the Hibernate cache and Liferay's internal caches. To understand this configuration, you must know that since Liferay 6.2, the default EhCache configuration files are already "clustered" (so do not set these properties):
# Default hibernate cache configuration file
net.sf.ehcache.configurationResourceName=/ehcache/hibernate-clustered.xml
# Default internal cache configuration file
ehcache.multi.vm.config.location=/ehcache/liferay-multi-vm-clustered.xml
These configuration files both rely on EhCache factories that must be set to enable JGroups:
# Enable EhCache caches replication through JGroups
ehcache.bootstrap.cache.loader.factory=com.liferay.portal.cache.ehcache.JGroupsBootstrapCacheLoaderFactory
ehcache.cache.event.listener.factory=net.sf.ehcache.distribution.jgroups.JGroupsCacheReplicatorFactory
ehcache.cache.manager.peer.provider.factory=net.sf.ehcache.distribution.jgroups.JGroupsCacheManagerPeerProviderFactory
JGroups' cache manager peer provider factory expects a file parameter containing the JGroups configuration. Specify the unicast TCP configuration file:
# Configure hibernate cache replication for unicast TCP
net.sf.ehcache.configurationResourceName.peerProviderProperties=file=/custom_jgroups/tcp.xml
# Configure internal caches replication for unicast TCP
ehcache.multi.vm.config.location.peerProviderProperties=file=/custom_jgroups/tcp.xml
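To see what these factories amount to, here is a hedged sketch of the effective EhCache configuration once Liferay applies the properties above (the cache name and replication flags are illustrative, not taken from Liferay's actual files):
<!-- Peer provider: builds the JGroups channel from the TCP configuration -->
<cacheManagerPeerProviderFactory
    class="net.sf.ehcache.distribution.jgroups.JGroupsCacheManagerPeerProviderFactory"
    properties="file=/custom_jgroups/tcp.xml"/>
<!-- Example replicated cache: put/update/remove events are sent to peers -->
<cache name="example.cache" maxElementsInMemory="10000" eternal="false" timeToIdleSeconds="600">
    <cacheEventListenerFactory
        class="net.sf.ehcache.distribution.jgroups.JGroupsCacheReplicatorFactory"
        properties="replicateAsynchronously=true,replicatePuts=true,replicateUpdates=true,replicateUpdatesViaCopy=false,replicateRemovals=true"/>
    <bootstrapCacheLoaderFactory
        class="com.liferay.portal.cache.ehcache.JGroupsBootstrapCacheLoaderFactory"/>
</cache>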
(Tip: when in doubt, you should refer to the properties definitions and default values: https://docs.liferay.com/portal/6.2/propertiesdoc/portal.properties.html)
In addition, you can enable debugging traces with:
cluster.executor.debug.enabled=true
You can even tell Liferay to display on every page the name of the node that processed the request:
web.server.display.node=true
Finally, JGroups channels expose a diagnostic service available through the probe tool.
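For instance, you can run the probe client shipped in jgroups.jar (org.jgroups.tests.Probe). Note that the diagnostics service uses multicast by default, so this may need extra network configuration on EC2:
java -cp $LIFERAY_HOME/tomcat-7.0.42/webapps/ROOT/WEB-INF/lib/jgroups.jar \
     org.jgroups.tests.Probe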
Please bear in mind this only covers indexes and cache replication. When setting up a Liferay cluster, you should also consider setting up: