Do you know of any step-by-step tutorial on installing a Liferay cluster on GlassFish?
Liferay being a Spring/Hibernate application meant to be server agnostic, most of your clustering configuration should be in the clustering sections of your portal(-ext).properties file: the Hibernate, Ehcache, and JGroups configuration. The only app-server-specific configuration should be session failover, as with any deployed web app.
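To give a rough idea of the kind of properties involved, here is a minimal sketch (exact property names and the bundled clustered Ehcache files vary between Liferay versions, so check the portal.properties that ships with yours):

cluster.link.enabled=true
net.sf.ehcache.configurationResourceName=/ehcache/hibernate-clustered.xml
ehcache.multi.vm.config.location=/ehcache/liferay-multi-vm-clustered.xml

In the versions I have seen, the JGroups transport settings live inside those clustered Ehcache configuration files rather than in the properties file itself.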
Google found me this writeup called how-to-install-and-configure-a-liferay-cluster
Enjoy!
I am working on the same problem, or a very similar one -- deploying the Liferay WAR file to a GlassFish cluster with two nodes. I don't have it configured completely correctly yet, but I do have it deployed successfully. Maybe this will help you too, and we can compare notes.
Here's what I had to do.
First, the groundwork. GlassFish is a bit weird to me in the way it deploys the WAR. As I understand it, WAR files are exploded somewhere by the node agent, but you don't get access to poke at the files once they are deployed. This means that as you tweak your configuration files (portal-ext.properties), you are going to need to re-deploy every time -- and Liferay is pretty big at ~73 MB. This will periodically cause PermGen out-of-space exceptions and force you to reboot your cluster, so you'd be wise to set the JVM option that increases the PermGen size in GlassFish. There is a good explanation of the problem here:
http://www.freshblurbs.com/explaining-java-lang-outofmemoryerror-permgen-space
That JVM option won't solve the problem, but it will increase the time between cluster reboots (the GlassFish admin console didn't work for rebooting, by the way; we had to do it from the command line).
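For what it's worth, the command to bump PermGen from the command line looks roughly like this (the target name is whatever your cluster is called, 512m is just a guess, and the colon has to be escaped for asadmin):

asadmin create-jvm-options --target your-cluster-name "-XX\:MaxPermSize=512m"

The instances still need a restart before the new setting takes effect.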
The next question was: where do the dependency JAR files go? We're operating in a shared cluster running other services, so putting them in the domains/domain1/lib folder won't work. We stuck the dependency JAR files inside the Liferay WAR file, in WEB-INF/lib, and it seems to be happy with that.
Next: where does the portal-ext.properties override file go? The answer is again inside the Liferay WAR file, in WEB-INF/classes. This is also a contributing reason why we need to re-deploy every time we modify a property, as discussed above.
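In case it saves you some fiddling, rebuilding the WAR with those overrides before each deploy looks roughly like this (file names are just examples; jar uf updates entries in the archive in place):

mkdir -p WEB-INF/classes WEB-INF/lib
cp portal-ext.properties WEB-INF/classes/
cp some-dependency.jar WEB-INF/lib/
jar uf liferay-portal.war WEB-INF/classes/portal-ext.properties WEB-INF/lib/some-dependency.jar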
Next: context. By default, Liferay tries to deploy to the root context "/". We're in a shared environment, so we deployed the WAR to the context /lr1. In portal-ext.properties, we had to set the property
portal.ctx=/lr1
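We also passed the matching context root at deploy time; something along these lines (the target name and WAR file name are placeholders for ours):

asadmin deploy --contextroot lr1 --target your-cluster-name liferay-portal.war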
Next: it doesn't make much sense to use the embedded HSQL database in a clustered environment, so we set up a JNDI name for our database connection pool using GlassFish. There are instructions on how to do this in the Liferay documentation guides. In the portal-ext.properties file, we were then able to put
jdbc.default.jndi.name=jdbc/LiferayPool
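If you set the pool up from the command line rather than the admin console, the asadmin calls look roughly like this (Oracle shown because that's our database; the names, URL, and credentials are placeholders, and colons inside the property values have to be escaped):

asadmin create-jdbc-connection-pool --datasourceclassname oracle.jdbc.pool.OracleDataSource --restype javax.sql.DataSource --property "user=liferay:password=secret:url=jdbc\:oracle\:thin\:@dbhost\:1521\:ORCL" LiferayPool
asadmin create-jdbc-resource --connectionpoolid LiferayPool --target your-cluster-name jdbc/LiferayPool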
We also don't want to store the Lucene indexes on each node's local filesystem. We overrode these properties in the portal-ext.properties file to fix that:
lucene.store.type=jdbc
lucene.store.jdbc.auto.clean.up=true
lucene.store.jdbc.dialect.oracle=org.apache.lucene.store.jdbc.dialect.OracleDialect
Similar logic applies to the Jackrabbit repository; I currently have the following property set up (I don't know if this is correct, but the document library is working):
jcr.jackrabbit.repository.root=WEB-INF/classes/
I had to put Jackrabbit's repository.xml file in WEB-INF/classes too. That XML file tells Jackrabbit which database connection parameters to use (see Apache's Jackrabbit configuration page for more details). Again, I'm not sure putting it in WEB-INF/classes was the right idea, but it probably has to go somewhere in the WAR file, or sit on a shared filesystem, for all nodes in your cluster to share the same data.
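To give an idea of what goes in there, the database part of repository.xml ends up looking something like this (a sketch only; the persistence manager class and its parameters depend on the Jackrabbit version bundled with your Liferay, and the connection details here are placeholders -- check the Jackrabbit configuration docs):

<PersistenceManager class="org.apache.jackrabbit.core.persistence.pool.OraclePersistenceManager">
  <param name="url" value="jdbc:oracle:thin:@dbhost:1521:ORCL"/>
  <param name="user" value="liferay"/>
  <param name="password" value="secret"/>
  <param name="schemaObjectPrefix" value="JCR_"/>
</PersistenceManager>

That element sits inside the Workspace (and Versioning) sections of repository.xml; the file as a whole has more to it than this.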
I have not messed with Ehcache yet, but I did put in the Hibernate property:
hibernate.dialect=org.hibernate.dialect.Oracle10gDialect
for our Oracle database. I believe it uses the default JDBC property above to reference our JNDI database connection.
The concept of "Liferay Home Directory" variable being "one folder above the server home" is something I'm still wrestling with, and it is causing me to have errors every time an HTTP request is sent relating to /opt/ee/license.
The user that liferay is running as does not have permission to modify /opt, and in any case that's a bad idea in a clustered environment. I'm not sure where the setting is, because when I look all I see is
liferay.home=${resource.repositories.root}
and
resource.repositories.root=${default.liferay.home}
I don't know where default.liferay.home is defined yet; still working on that.
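One thing I plan to try is simply overriding it in portal-ext.properties and pointing it at a directory the server user can actually write to (the path is just an example; in a cluster it presumably needs to be consistent, or shared, across the nodes):

liferay.home=/some/writable/path/liferay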
Deploying Liferay to a clustered environment is unfortunately not that well documented yet, but I hope sharing this helps you in some small way.
Good luck!