General Support

We have two listservs for support, one for announcements from the Tier-2 admins to the users, and one for complaints and questions from the users to the admins. Every user of the Tier-2 should subscribe to the announcement listserv.

Announcements from admins to users: cmst2 at physics dot ucsd dot edu
(The archive for this list is the cmst2 archive.)

Questions and complaints from users to admins: t2support at physics dot ucsd dot edu

Login Platforms

The Tier-2 center supports multiple computers for interactive login. They are called uaf-X.t2.ucsd.edu, with X running from 1 to 9. That said, uaf-1 is effectively decommissioned and uaf-2 is the glidein manager node, so don't use them. uaf-3 has a special configuration, so avoid it unless you know what you are doing.

To get login access, send email with your ssh key and hypernews account name to t2support. To get write access to dCache into your own /store/user area, send email with your hypernews account name and the output from "voms-proxy-info" to t2support.
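
If you do not already have a valid proxy from which to capture that output, a minimal sketch (assuming your grid certificate is installed in ~/.globus and you are registered in the CMS VO):

      voms-proxy-init -valid 120:00 --voms cms:/cms/uscms/Role=cmsuser   # create a proxy (same command used later for grid jobs)
      voms-proxy-info                                                    # prints the proxy details to include in your email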

We support 1 TB of space in /store/user for every CMS member from UCSB, UCR, or UCSD.

Dedicated Groups on the uaf

To share directories on the uaf between multiple people in a group, we define groups and use ACLs. If you need this functionality, do the following:
  • Request a group from t2support
  • Once you have a group, you need the following commands to make a directory and define it as group writeable.
mkdir bla                           # create the shared directory
getfacl bla                         # inspect the current ACLs
setfacl -R -m g:cms1:rwx bla        # grant group cms1 read/write/execute recursively
setfacl -R -d -m g:cms1:rwx bla     # make that the default ACL for newly created files
getfacl bla                         # verify the new ACLs

This sets the default for all files in this directory, and does so recursively. Only the person who owns a file or directory can run these commands on it.
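
To check group membership (cms1 is just the example group name used above):

      id -Gn              # list the groups your account belongs to
      getent group cms1   # list the members of the example group cms1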

Send email to t2support if you have problems.

Software Deployment

codefs.t2.ucsd.edu is used to centrally deploy the CMS software and tools that provide most of the CMSSW and grid environment needed for user-level physics analysis and data operations.

CMSSW is deployed via the Tier-2 software distribution across the whole USCMS Tier-2 (and some Tier-3 sites). In general only standard CMSSW releases will be deployed. Analyses and tests based on pre-releases are not supported unless a specific request is made or the software can be deployed under the standard procedure.

To make a desktop machine similar to a Tier-2 interactive analysis machine (for example uaf-1.t2.ucsd.edu), mount codefs.t2.ucsd.edu:/code/osgcode onto the local directory /code/osgcode.
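
A minimal sketch of such a mount, assuming the export is reachable via NFS from your desktop and that you have root privileges there (the read-only option is an assumption):

      mkdir -p /code/osgcode
      mount -t nfs codefs.t2.ucsd.edu:/code/osgcode /code/osgcode
      # or, for a persistent mount, an /etc/fstab entry such as:
      # codefs.t2.ucsd.edu:/code/osgcode  /code/osgcode  nfs  ro,defaults  0 0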

CMSSW Environment

Access to CMSSW repository
      # bash
      export CMS_PATH=/code/osgcode/cmssoft/cms
      export SCRAM_ARCH=slc5_ia32_gcc434
      source ${CMS_PATH}/cmsset_default.sh

      # tcsh/csh
      setenv CMS_PATH /code/osgcode/cmssoft/cms
      setenv SCRAM_ARCH slc5_ia32_gcc434
      source ${CMS_PATH}/cmsset_default.csh

Create CMSSW project area and set up environment

cd your-work-directory
cmsrel CMSSW_3_8_4
cd CMSSW_3_8_4/src
cmsenv

If you don't like waiting for your code to compile, try compiling in parallel on our 8-way machines:

 scramv1 b -j 8 

Grid Environment and Tools

Grid Environment

The Grid environment is automatically in the path for all jobs. No additional steps are needed.

Note: If you ever put any Grid customizations in your own .bashrc (or similar), you may want to clean them out.
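
A quick way to spot leftovers (the pattern below is just a guess at typical grid-related names; adjust it to whatever you actually added):

      grep -n -i -E 'glite|vdt|globus|voms|osg' ~/.bashrc ~/.bash_profile 2>/dev/null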

(HT)Condor

Condor is in the path on uaf-4 through uaf-9, so users can use it without any special setup.

Again, please make sure you don't have any old setup in your .bashrc (or similar).

As a reminder, you can submit vanilla jobs to use the glidein system which is in place. Condor-G jobs are of course still supported, but are not recommended.
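
A minimal sketch of a vanilla-universe submit file (the executable name myjob.sh, the output file names, and the absence of any site-specific requirements are assumptions for illustration only):

      # myjob.sub -- illustrative vanilla-universe submit description
      universe                = vanilla
      executable              = myjob.sh
      output                  = myjob.out
      error                   = myjob.err
      log                     = myjob.log
      should_transfer_files   = YES
      when_to_transfer_output = ON_EXIT
      queue 1

Submit it with "condor_submit myjob.sub" and monitor it with "condor_q".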

CRAB

To run the CRAB client, after setting up your CMSSW environment, you need only source the gLite UI and the CRAB setup file:

GLITE_VERSION="gLite-3.2.11-1"
source /code/osgcode/ucsdt2/${GLITE_VERSION}/etc/profile.d/grid-env.sh  
export LCG_GFAL_INFOSYS=lcg-bdii.cern.ch:2170
export GLOBUS_TCP_PORT_RANGE=20000,25000
source /code/osgcode/ucsdt2/Crab/etc/crab.[c]sh 
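
After sourcing the setup files above and writing a crab.cfg, a typical CRAB2 command sequence looks like the following sketch (run it from the directory containing your crab.cfg; this is illustrative, not a complete recipe):

      crab -create        # create a task from crab.cfg
      crab -submit        # submit the jobs to the grid
      crab -status        # check on the jobs
      crab -getoutput     # retrieve the output once jobs have finished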

Old instructions

The following instructions are old and should be discarded; we leave them here temporarily, just as a reminder of the past.

Make sure you have .globus and .glite directories in your home directory. In .glite, a file named vomses needs to be present. You can get one from /code/osgcode/ucsdt2/etc.

  • Setup Glite

Before initializing the gLite environment, please make sure no other grid environment is active; in particular, check that no VDT environment has been sourced (the VDT environment is set up with "source /setup.(c)sh").

To set up the gLite environment when using a CRAB client >= 2.7.2 together with a CRABServer (on both SLC4 and SLC5):

       source /code/osgcode/ucsdt2/gLite31/etc/profile.d/grid_env.[c]sh
       export LCG_GFAL_INFOSYS=lcg-bdii.cern.ch:2170
       export GLOBUS_TCP_PORT_RANGE=20000,25000

To set up the gLite environment when using a CRAB client >= 2.7.2 WITHOUT a CRABServer:

  a) On SLC4 and SLC5 (gLite 3.1):
       source /code/osgcode/ucsdt2/gLite31/etc/profile.d/grid_env.[c]sh
  b) On SLC5 (gLite 3.2):
       source /code/osgcode/ucsdt2/gLite32/etc/profile.d/grid_env.[c]sh

  Then set:
       export LCG_GFAL_INFOSYS=lcg-bdii.cern.ch:2170
       export GLOBUS_TCP_PORT_RANGE=20000,25000

The gLite environment should allow you to get a proxy with the proper role in order to run your grid jobs:

       voms-proxy-init -valid 120:00 --voms cms:/cms/uscms/Role=cmsuser 

  • Setup Condor and VDT

On the uaf machines, the Condor environment is already in the PATH. Combining the gLite and Condor environments, you can send grid jobs (e.g. CRAB jobs) via condor_g.

If VDT is chosen instead of gLite to provide the grid environment for your analysis on the uaf machines:

       source /date/tmp/vdt/setup.(c)sh  

Never mix the VDT and gLite environments.

  • Setup CRAB

There are primarily two submission methods for sending CRAB jobs, condor_g and glitecoll, and the choice determines how CRAB is set up and used.

     1. set up the CMSSW environment as described above
     2. set up the gLite or Condor environment as described above
     3. source /code/osgcode/ucsdt2/Crab/etc/crab.(c)sh

To check which CRAB version is actually being used, run "ls -l /code/osgcode/ucsdt2/Crab/etc/crab.(c)sh".

To publish datasets to DBS (the example here uses the local DBS deployed at UCSD), the following configuration needs to be added to the [USER] section of crab.cfg:

      publish_data = 1        
      publish_data_name = "Njet_test1"       
      dbs_url_for_publication = http://ming.ucsd.edu:8080/DBS/servlet/DBSServlet 

To run an analysis against a local DBS (other than the global CMS DBS), the following needs to be added to the [USER] section of crab.cfg:

      dbs_url = http://ming.ucsd.edu:8080/DBS/servlet/DBSServlet 

Job Monitoring

Locally submitted jobs to the T2 Condor batch system can be monitored using:

Jobs submitted to the Grid via Crabserver can be found at:

Note: In some cases you need to have your grid certificate loaded into the browser.
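
Independent of the web monitoring pages referenced above, locally submitted jobs can also be inspected from the command line on the uaf (the username and job id below are placeholders):

      condor_q <username>                # list your jobs in the local Condor queue
      condor_q -analyze <cluster.proc>   # explain why a given job is idle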

Moving data to/from UCSD

Data Requests via PhEDEx

We encourage anybody to make data replication requests via the PhEDEx pages. If you make a request, James Letts and fkw receive an email. One of them will approve the request as long as there is disk space at the Tier-2 available. When they approve it, you receive an email back acknowledging the approved request.

To keep track of all the data at the UCSD Tier-2, we have developed a simple accounting system. For this to work, you need to pick an account you want to charge your request to. This is done by adding the following to the comment field when making the PhEDEx request:

|| acc = ucsb || 

The above would charge the request to the UCSB account. An account is an arbitrary string. It might be easiest if you simply pick one of the accounts that already exist in the accounting system.

Absolute paths in the HDFS (Hadoop Distributed File System) at UCSD

The interactive login nodes at UCSD allow you to do an ls on the directories in hdfs for both official and user data:

# for official data:
ls /hadoop/cms/phedex/store

# for private user data:
ls /hadoop/cms/store/user
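
To get a rough idea of how much of your 1 TB /store/user quota you are using (the username is a placeholder; this walks the whole tree over the FUSE mount, so it can be slow):

      du -sh /hadoop/cms/store/user/<username>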

Moving Data by Users

The srm endpoint for data transfer is srm://bsrm-1.t2.ucsd.edu:8443/srm/v2/server?SFN=<PATH to the file>

Here are a few examples:

copy a local file to SE via srmcp

   srmcp -2 file://localhost/<path to the src file> srm://bsrm-1.t2.ucsd.edu:8443/srm/v2/server?SFN=<PATH to the destination file>

copy a file from the SE to a local file via lcg-cp

   lcg-cp -b -D srmv2  srm://bsrm-1.t2.ucsd.edu:8443/srm/v2/server?SFN=<PATH to the src file> <path to destination file>

Example copies from your local directory into our srm, using lcg-cp or srmcp, would thus look like:

 lcg-cp -v -b -D srmv2 file:/home/users/tmartin/testfile.zero srm://bsrm-1.t2.ucsd.edu:8443/srm/v2/server?SFN=/hadoop/cms/store/user/tmartin/testfile-2.zero

 srmcp -2 -debug=true -delegate=false file:////home/users/tmartin/smallfile.zero srm://bsrm-1.t2.ucsd.edu:8443/srm/v2/server?SFN=/hadoop/cms/store/user/tmartin/testfile.root

To do an ls via the srm, either of the following works:

 lcg-ls -l -b -D srmv2 srm://bsrm-1.t2.ucsd.edu:8443/srm/v2/server?SFN=/hadoop/cms/store/user/tmartin/

 srmls -2 -delegate=false srm://bsrm-1.t2.ucsd.edu:8443/srm/v2/server?SFN=/hadoop/cms/store/user/tmartin
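
Deleting a file you own via the srm can be done along the same lines; the exact flags below are assumptions patterned on the copy and listing examples above, not a verified recipe:

 lcg-del -l -b -D srmv2 srm://bsrm-1.t2.ucsd.edu:8443/srm/v2/server?SFN=/hadoop/cms/store/user/tmartin/testfile-2.zero

 srmrm -2 srm://bsrm-1.t2.ucsd.edu:8443/srm/v2/server?SFN=/hadoop/cms/store/user/tmartin/testfile.root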

-- HaifengPi - 02 Sep 2008

-- SanjayPadhi - 2009/03/08

-- FkW - 2009/09/07

-- JamesLetts - 2013/05/02
