Distributed/balanced installation?

Nico Sabbi

Wednesday 25 June 2008 8:52:35 am

Hi,
I set up eZ Publish 4.0.0 in a distributed environment: eZ Publish is installed on four
web servers (Linux RH3) in an NFS directory mounted from a Solaris server.
The DBMS is MySQL, running on a separate Linux server.

Web requests are intercepted by a web balancer that dispatches them
to one of the four servers.

So far so good, until I realized that access to the hundreds of cache files
(on NFS) was extremely slow. I worked around the problem by symlinking
the cache directory in var/<SITE> to a local directory on each of the four servers,
but now the balanced site doesn't behave correctly: an update to the
content in the DB isn't immediately reflected by all four web servers.

Additionally, the admin interface doesn't work correctly in my balanced
configuration: often, shortly after signing in, I'm suddenly logged out.

Is a configuration like this even supported?
What kind of load balancing can I achieve with eZ Publish?
Thanks,
Nico

Gaetano Giunta

Wednesday 25 June 2008 2:13:36 pm

You basically have a few choices when setting up an installation with multiple front-ends:

A - share the var dir between all the servers via a shared-storage filesystem.
This usually means a "good" NFS server, e.g. an EMC or NetApp NAS box, or a SAN box plus OCFS2 or GFS. A vanilla Linux server exporting NFS is not going to cut it, as eZ Publish generates a massive amount of I/O on files in the var directory. Most of the time some NFS tuning is needed, and you have to make sure you are using NFS with locking enabled.
If you go this route, I recommend keeping the var/log and var/siteaccess/log directories separate on every front-end server (using symlinks).
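The per-server log split described above can be sketched like this. The paths are stand-ins for demonstration, not real eZ Publish install paths; adjust them to your own layout:

```shell
#!/bin/sh
# Sketch: give each front-end its own fast local log directory while the
# rest of var/ stays on shared storage. EZROOT and LOCAL_LOG are
# placeholder paths standing in for the real install.
EZROOT=/tmp/ezpublish_demo
LOCAL_LOG=/var/tmp/ezpublish_local_log_demo

mkdir -p "$EZROOT/var" "$LOCAL_LOG"

# Replace the (shared) log dir with a symlink to local disk.
rm -rf "$EZROOT/var/log"
ln -sfn "$LOCAL_LOG" "$EZROOT/var/log"

ls -ld "$EZROOT/var/log"
```

Repeat the same pattern for var/siteaccess/log on each front-end, so log writes never hit NFS.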

B - set up the eZ Publish "cluster" mode.
This means setting up a separate database (or a few new tables in the existing one) and moving all the content of the var directory into the db. Make sure the database has enough spare capacity to handle the new load (depending on the chosen DBMS you can usually set up replication/clustering within the database to spread the load across more boxes).
The script that converts a standard installation to a clustered configuration is easy to run and pretty safe to test.
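From memory of the eZ Publish 4.x documentation, cluster mode boils down to an INI override along these lines, e.g. in settings/override/file.ini.append.php. The handler/backend names and connection settings below should be verified against your version's default file.ini before use:

```ini
[ClusteringSettings]
# Store var/ content (images, binary files, caches) in the database
# instead of on the filesystem. Handler and backend names are quoted
# from memory of the 4.x docs -- double-check them for your version.
FileHandler=eZDBFileHandler
DBBackend=eZDBFileHandlerMysqlBackend
# Placeholder connection settings for the (possibly separate) cluster db:
DBHost=dbhost.example.com
DBName=ezcluster
DBUser=ezuser
DBPassword=secret
```

The same override must be deployed identically on all front-ends so they read and write cache through the database rather than the local filesystem.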

C - set up your own "replication" of the var/storage and var/siteaccess/storage directories, usually with rsync.
This way binary content is created on one server and propagated to the others. It gets tricky as soon as there are many files in the var dir, because rsync takes a while to propagate changes. If the load balancer is not sticky, a user might see content on page A on server 1, then request page A on server 2 and miss content that has not yet been replicated. With a sticky load balancer every user keeps going to the same physical server, so this risk is minimized.
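A minimal version of this rsync replication could be a crontab on the server where content is created, pushing var/storage to each of the other front-ends. Hostnames and paths here are placeholders:

```shell
# Crontab sketch: push binary content from the "master" front-end to the
# replicas every minute. web2/web3/web4 and /opt/ezpublish are
# placeholders. --delete keeps replicas exact, so make sure the source
# server is the only one where editors upload content.
* * * * * rsync -az --delete /opt/ezpublish/var/storage/ web2:/opt/ezpublish/var/storage/
* * * * * rsync -az --delete /opt/ezpublish/var/storage/ web3:/opt/ezpublish/var/storage/
* * * * * rsync -az --delete /opt/ezpublish/var/storage/ web4:/opt/ezpublish/var/storage/
```

The replication window between rsync runs is exactly the period during which a non-sticky balancer can show users stale content.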

In every case, it is better to run the eZ Publish cronjobs on a single server, possibly a dedicated one, so that the RAM and CPU they consume are not taken away from the front-ends. If you are using one front-end for the admin siteaccess only, that is the ideal candidate.
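On that dedicated box, the cronjobs are driven by the runcronjobs.php script shipped with eZ Publish; a crontab sketch (the siteaccess name and install path are placeholders):

```shell
# Run the eZ Publish cronjobs every 15 minutes on ONE server only.
# "ezwebin_site" and /opt/ezpublish are placeholder values.
*/15 * * * * cd /opt/ezpublish && php runcronjobs.php -s ezwebin_site >> var/log/cron.log 2>&1
```

Disabling this crontab entry on the other three servers is what prevents duplicated cronjob load.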

About the login problem: eZ Publish stores session data in the database, so you should not have any trouble even when the load balancer is not sticky. Can you check the load balancer's logs/traces to see if there is any misconfiguration?
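To verify that sessions are indeed landing in the shared database, you can inspect the ezsession table (the table name is from the standard eZ Publish 4.x schema; verify against your install):

```sql
-- Count live sessions per user. If logins vanish while rows persist
-- here, the problem is more likely in the balancer than in eZ Publish.
SELECT user_id, COUNT(*) AS sessions
FROM ezsession
GROUP BY user_id;
```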

Principal Consultant International Business
Member of the Community Project Board