
UCSD CMS T2 Xcache Installation

Xroot Manager Install

To Be added....

Xroot Manager Configuration

To Be added....

XCache Node Install


Follow the Open Science Grid instructions for installing the Xrootd software, preferably on CentOS 7 or a similar system.

http://opensciencegrid.org/docs/data/install-xrootd/

XCache Node Configuration

Hardware Configuration

This document assumes that the hardware you have chosen for an Xrootd cache has sufficient disk resources to serve as a cache host. In our case it is a machine with 12 2TB or 12 3TB data disks plus one OS disk, which provides 24TB to 36TB of raw storage space for the cache on the server.

You will need to make sure that all storage devices have been properly configured, formatted (e.g. with XFS) and mounted.

The mount points will be used in the Xrootd configuration below.
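
As a sketch, formatting and mounting a single data disk might look like the following; the device and mount point names here are assumptions, so match them to your own volume layout (as in the fstab below):

# Assumed device and mount point names; adapt to your own volumes.
mkfs.xfs /dev/mapper/vg2-data1
mkdir -p /data1
mount /dev/mapper/vg2-data1 /data1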


Here is an example fstab file from one of our Xcache hosts.

/dev/mapper/vg2-data1  /data1         xfs   defaults    0 0
/dev/mapper/vg11-data10 /data10       xfs   defaults    0 0
/dev/mapper/vg12-data11 /data11       xfs   defaults    0 0
/dev/mapper/vg13-data12 /data12       xfs   defaults    0 0
/dev/mapper/vg3-data2  /data2         xfs   defaults    0 0
/dev/mapper/vg4-data3  /data3         xfs   defaults    0 0
/dev/mapper/vg5-data4  /data4         xfs   defaults    0 0
/dev/mapper/vg6-data5  /data5         xfs   defaults    0 0
/dev/mapper/vg7-data6  /data6         xfs   defaults    0 0
/dev/mapper/vg8-data7  /data7         xfs   defaults    0 0
/dev/mapper/vg9-data8  /data8         xfs   defaults    0 0
/dev/mapper/vg10-data9 /data9         xfs   defaults    0 0

You will also need to create any required directories on these disks for xrootd to use, and set their ownership to the xrootd user, e.g.

mkdir /data3/xcache
chown xrootd:xrootd /data3/xcache
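
If your mount points follow the /data1 through /data12 pattern shown in the fstab above, a small loop is a quick way to create all of the cache directories (a sketch, assuming that naming):

# Assumes mount points /data1 through /data12 as in the example fstab above.
for d in /data{1..12}; do
    mkdir -p "$d/xcache"
    chown xrootd:xrootd "$d/xcache"
done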
 

Software Configuration

Firewall

The firewall needs to allow access to a few ports from all of the local networks. In our case we permit our local IPv4 networks, in addition to the IPv4 network at Caltech, since we run a unified cache.

The port is configurable on the Xrootd redirector; in our case we have chosen 1094. We configure firewalld via a Puppet Forge module, but you can configure firewalld however you prefer.
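
For reference, a direct firewall-cmd sketch that opens the chosen port (1094 above) to a given source network might look like this; the network below is a placeholder, and the rule would be repeated for each network you allow:

# Placeholder source network; repeat the rule for each network you need to allow.
firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="192.0.2.0/24" port port="1094" protocol="tcp" accept'
firewall-cmd --reload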

Host Certificate

Each of your Xcache hosts will require a host certificate. There are a few ways to obtain these certificates, but the currently documented method is via InCommon. Your institution may have other instructions, but this is what the Open Science Grid supplies:

http://opensciencegrid.org/docs/security/host-certs/
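
Once you have the certificate, it typically needs to be readable by the xrootd user. The layout below is an assumption (a common xrootd convention), not something mandated here; check the OSG page above for the authoritative paths and permissions:

# Assumed paths and permissions; verify against the OSG host certificate documentation.
mkdir -p /etc/grid-security/xrd
cp /etc/grid-security/hostcert.pem /etc/grid-security/xrd/xrdcert.pem
cp /etc/grid-security/hostkey.pem  /etc/grid-security/xrd/xrdkey.pem
chown -R xrootd:xrootd /etc/grid-security/xrd
chmod 644 /etc/grid-security/xrd/xrdcert.pem
chmod 400 /etc/grid-security/xrd/xrdkey.pem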

Xcache Configuration

The two Xcache services, xrootd and cmsd, each have a configuration file. In our case we use two configuration files, one per service, though this may not be strictly necessary. Either way, the software needs to be told which configuration file to load on startup.


On CentOS 7 this can be done by defining an override.conf file for the systemd unit.

/etc/systemd/system/xrootd@.service.d/override.conf

[Service] 
ExecStart= 
ExecStart=/usr/bin/xrootd -l /var/log/xrootd/xrootd.log -c /etc/xrootd/%i.cfg -k fifo -s /var/run/xrootd/xrootd-%i.pid -n %i 

/etc/systemd/system/cmsd@.service.d/override.conf

[Service]
ExecStart= 
ExecStart=/usr/bin/cmsd -l /var/log/xrootd/cmsd.log -c /etc/xrootd/%i-cmsd.cfg -k fifo -s /var/run/xrootd/cmsd-%i.pid -n %i 
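
These drop-in files can be created by hand as shown above, or, as a convenience, with systemctl edit, which opens an editor on the override.conf for the corresponding template unit:

systemctl edit xrootd@.service
systemctl edit cmsd@.service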


After you create these files, systemd will likely tell you to run something like:

systemctl daemon-reload
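
To confirm that the drop-ins were picked up, systemctl cat prints the unit definition together with any override files (xcache is the instance name used later in this document):

systemctl cat xrootd@xcache cmsd@xcache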

The configuration files themselves contain details such as the location of the data disks, the xrootd manager host and port, etc. With the instance name xcache used here, the overrides above load /etc/xrootd/xcache.cfg for xrootd and /etc/xrootd/xcache-cmsd.cfg for cmsd. The cmsd configuration looks like this:

/etc/xrootd/xcache-cmsd.cfg

all.role    server
# XXMT all.manager xrootd.t2.ucsd.edu:2051
all.manager xrootd.t2.ucsd.edu:2041

all.export /store stage r/o

oss.localroot /xcache-root

# Following probably not needed ... try or ask Andy.
oss.space data /data1/xcache
oss.space data /data2/xcache
oss.space data /data3/xcache
oss.space data /data4/xcache
oss.space data /data5/xcache
oss.space data /data6/xcache
oss.space data /data7/xcache
oss.space data /data8/xcache
oss.space data /data9/xcache
oss.space data /data10/xcache
oss.space data /data11/xcache
oss.space data /data12/xcache

all.sitename UCSD-XCACHE

all.adminpath /var/spool/xrootd
all.pidpath   /var/run/xrootd

xrd.allow host *
sec.protocol  host
sec.protbind  * none

xrootd.trace emsg login stall
xrd.trace    conn
ofs.trace    delay
cms.trace    defer files redirect stage

Proxy Refresh

The system needs a periodically refreshed proxy derived from a certificate that is trusted, in this case by CMS. This certificate is used to generate the proxy via a cron job.

The recommended way to do this in OSG is documented here.

http://opensciencegrid.org/docs/data/stashcache/install-cache/#rhel7_1
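
As a hedged sketch only (the VO, certificate paths, proxy location, and schedule below are all assumptions; the OSG page above is authoritative), a cron-driven renewal could look like this:

# /etc/cron.d/xcache-proxy -- hypothetical example; adjust paths, VO and schedule to your site.
0 */6 * * * root voms-proxy-init -voms cms -cert /etc/grid-security/xrd/xrdcert.pem -key /etc/grid-security/xrd/xrdkey.pem -out /tmp/x509up_xcache -valid 96:00 && chown xrootd:xrootd /tmp/x509up_xcache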

Core Files

Xrootd is a bit notorious for creating core files. There are a few ways to deal with this, but the easiest, if you do not care to keep them, is to reduce the core file size limit to 0, or to make the most recent core file overwrite the previous one.

In /etc/sysctl.conf or similar, add:

# Core Pattern 
fs.suid_dumpable = 0 
kernel.core_pattern=/cores/core 
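
To apply these settings without a reboot, and to create the directory referenced by the core pattern above:

# /cores is the directory used by kernel.core_pattern above.
mkdir -p /cores
sysctl -p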

In /etc/security/limits.conf or similar, add:

* soft core 0 
* hard core 0 

It is also recommended to create the following limits.d file for xrootd:

/etc/security/limits.d/50-xrootd.conf

xrootd     soft    nproc     20000 
xrootd     hard    nproc     21000 
xrootd     soft    nofile    99000 
xrootd     hard    nofile    100000 

Starting the Services and Configuring Start on Boot

To start the required services and configure them to start on boot, run the following:

systemctl enable xrootd@xcache
systemctl enable cmsd@xcache
systemctl start xrootd@xcache cmsd@xcache

To ensure the services are running

systemctl status xrootd@xcache cmsd@xcache
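
If a service fails to start, the journal is the first place to look; the xrootd and cmsd log files configured in the ExecStart overrides above (under /var/log/xrootd/) are the next:

journalctl -u xrootd@xcache -u cmsd@xcache --since "10 minutes ago"
ls -l /var/log/xrootd/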

-- TerrenceMartin - 2018/08/14

 