TWiki > UCSDTier2 Web > UAFNodeConfig (revision 15)

Configuration of a new UAF Node


Create a Logical Volume from all spare disk space

First partition the disks accordingly and reboot so the machine picks up the new partition tables.
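As a sketch, the partitioning for the two dedicated disks might look like the following (the device names match the pvcreate step below, but the layout itself is an assumption; verify with fdisk -l before writing anything):

```shell
# sdc and sdd get a single full-disk partition; sda4 and sdb4 are
# assumed to already exist in the spare space of the system disks.
parted -s /dev/sdc mklabel msdos
parted -s /dev/sdc mkpart primary 0% 100%
parted -s /dev/sdd mklabel msdos
parted -s /dev/sdd mkpart primary 0% 100%
# Reboot, or ask the kernel to reread the partition tables:
partprobe /dev/sdc /dev/sdd
```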

Create the Physical Volumes

pvcreate /dev/sda4 /dev/sdb4 /dev/sdc1 /dev/sdd1

Create the volume groups

vgcreate uaf_vg /dev/sda4 /dev/sdb4 /dev/sdc1 /dev/sdd1

Create the Logical Volume

lvcreate -L 3550G -i 4 -I 256 uaf_vg -n uaf_lv

Create the file system

mkfs.ext3 /dev/uaf_vg/uaf_lv
tune2fs -m0 /dev/uaf_vg/uaf_lv

Add the file system to fstab

/dev/uaf_vg/uaf_lv      /data                   ext3    defaults        1 3

Mount the file system

mkdir /data
mount /data

Files to copy from an existing UAF Node

scp* /root/.ssh
scp* /etc/
scp /etc/
scp /etc/profile.d/
scp /etc/init.d/
scp /etc/

Autofs configuration

chkconfig autofs on
/etc/init.d/autofs restart

Condor Configuration

Copy an existing uaf configuration to use as the basis for the new node

Note: This step must be done on the central manager

cd /condor/release/etc
cp uaf-X.local uaf-NEW.local

Edit the uaf-NEW.local file and make the appropriate changes.
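If the only per-node differences are hostnames, the edit can be sketched with sed (uaf-X and uaf-NEW are the placeholder names from the copy above; review the file by hand afterwards for any other per-node settings):

```shell
# Rename every reference to the old node; uaf-X and uaf-NEW are placeholders.
sed -i 's/uaf-X/uaf-NEW/g' uaf-NEW.local
grep uaf-NEW uaf-NEW.local    # confirm the substitution took effect
```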

Create the following links

mkdir /etc/condor
ln -s /condor/release/etc/condor_config /etc/condor/
ln -s /condor/release/bin/* /usr/bin/
ln -s /condor/release/sbin/* /usr/sbin/

Create the following directory tree

mkdir -p /state/data/condor_local

Start Condor

chkconfig condor on
/etc/rc.d/init.d/condor start

Install the OSG Client

cd /data
export http_proxy=
mkdir vdt
mkdir pacman
cd pacman
tar zxvf pacman-3.29.tar.gz
cd pacman-3.29
cd /data/vdt
export VDTSETUP_CONDOR_LOCATION=/condor/release
export VDTSETUP_CONDOR_CONFIG=/etc/condor/condor_config

pacman -get

Say yes to the questions

Install the Gridftp server

cd  /data/pacman/pacman-3.29
cd /data/vdt
pacman -get

Say yes to the questions

Enable the GridFTP server

source /data/vdt/
vdt-control --on gsiftp

Configure authentication by copying over /etc/grid-security/grid-mapfile, and add a host key using certify
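The grid-mapfile copy can be sketched as (uaf-X is a placeholder for an already-configured node):

```shell
# uaf-X is a placeholder; copy the mapfile from a working UAF node.
scp uaf-X:/etc/grid-security/grid-mapfile /etc/grid-security/grid-mapfile
```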

Set up the NTP server

scp /etc/
/etc/init.d/ntpd restart
chkconfig ntpd on

Create the data tmp area

mkdir /data/tmp
chmod 1777 /data/tmp

Add the TMPFS Area

Add the following to rc.local. Create swapfile1 first if it does not exist, 20 GB in size.

mkdir -p /data/tmpfs/
swapon /data/swap/swapfile1
mount -t tmpfs -o size=10G,mode=1777 tmpfs /data/tmpfs/

for i in c d; do echo "32" > /sys/block/sd${i}/queue/iosched/quantum; done
for i in c d; do echo 256 > /sys/block/sd${i}/queue/read_ahead_kb; done
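Creating the 20 GB swapfile, if it does not already exist, might look like this one-time sketch (the path comes from the swapon line above; bs/count are one way to arrive at 20 GB):

```shell
# One-time setup: a 20 GB file-backed swap area for the swapon line above.
mkdir -p /data/swap
dd if=/dev/zero of=/data/swap/swapfile1 bs=1M count=20480
mkswap /data/swap/swapfile1
```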

Adding the Compat packages

yum -y install compat* 

Add the i386 glibc

 yum -y install glibc-devel*

Install Java

Grab the JDK and install it
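A sketch of the install, assuming an RPM-packaged JDK has already been downloaded (the filename pattern is hypothetical):

```shell
# jdk-*.rpm is a hypothetical filename; use whatever JDK RPM was downloaded.
rpm -ivh jdk-*.rpm
java -version    # confirm the install
```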

Add packages for CMS

yum -y install glibc coreutils bash tcsh zsh perl tcl tk readline openssl ncurses 
yum -y install e2fsprogs krb5-libs freetype fontconfig compat-libstdc++-33 
yum -y install libidn libX11 libXmu libSM libICE libXcursor libXrender libXpm mesa-libGLU

Adding iptables

scp /etc/sysconfig/iptables 
/etc/init.d/iptables restart

Install Hadoop

rpm -ivh
yum install hadoop hadoop-fuse

Add the following to the /etc/rc.local file

modprobe fuse
/usr/bin/hdfs -o,port=9000,rdbuffer=131072,allow_other /hadoop/

Install i386 and x64 versions of libz

yum -y install zlib-devel

KRB Config

Copy from another working UAF
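Assuming the file in question is the standard /etc/krb5.conf, the copy can be sketched as (uaf-X is a placeholder for a working node):

```shell
# uaf-X is a placeholder; /etc/krb5.conf is assumed to be the Kerberos
# config this page means.
scp uaf-X:/etc/krb5.conf /etc/krb5.conf
```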



Grab the bwm-ng RPM from DAG and install it

Install some X applications

yum -y install xemacs xclock xterm
yum -y install  ImageMagick ggv xpdf gpdf  
 wget ; rpm -ivh acroread-5.0.10-1.2.el4.rf.i386.rpm


Version 1.0 is preferred as it uses highlight, at least on RHEL4

Limits.conf and Ulimit

Add the following to /etc/security/limits.conf

*       soft    rss 33554432
*       hard    rss 41943040
*       soft    nofile 2048
*       hard    nofile 2048

Add the following to /etc/pam.d/login

session    required     pam_limits.so

Add Gfortran

yum -y install gcc4-gfortran

Set up the web server

  • Edit /etc/httpd/conf/httpd.conf
  • Comment out UserDir disable
  • Uncomment UserDir public_html
  • Uncomment the following block:
<Directory /home/*/public_html>
    AllowOverride FileInfo AuthConfig Limit
    Options MultiViews Indexes SymLinksIfOwnerMatch IncludesNoExec
    <Limit GET POST OPTIONS>
        Order allow,deny
        Allow from all
    </Limit>
    <LimitExcept GET POST OPTIONS>
        Order deny,allow
        Deny from all
    </LimitExcept>
</Directory>

  • Add the Indexes option to the <Directory /> block
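The page does not show it, but after editing httpd.conf the server presumably needs to be enabled and restarted, following the same pattern as the other services above:

```shell
# Enable httpd at boot and apply the httpd.conf changes.
chkconfig httpd on
/etc/init.d/httpd restart
```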

Setup of XRootd HDFS

Make sure hadoop fs -ls works, and add the following (or confirm it) to /etc/sysconfig/hadoop


Install the osg hadoop repo

 rpm -Uvh

Currently you need to install xrootd from unstable

yum --enablerepo=hadoop-unstable install xrootd

-- TerrenceMartin - 31 Jul 2008
