PhysicsAndMCproduction (2013/05/03, JamesLetts)
%TOC%

---++ %MAKETEXT{"General Support"}%

We have two listservs for support: one for announcements from the Tier-2 admins to the users, and one for questions and complaints from the users to the admins. Every user of the Tier-2 should subscribe to the announcement listserv.

*Announcements from admins to users: cmst2 at physics dot ucsd dot edu*<br />
*The archive for this list is here: [[https://physics-mail.ucsd.edu/mailman/private/cmst2/][cmst2 archive]]*

*Questions and complaints from users to admins: t2support at physics dot ucsd dot edu*

---++ %MAKETEXT{"Login Platforms"}%

The Tier-2 center supports multiple computers for interactive login, named uaf-X.t2.ucsd.edu with X running from 1 to 9. That said, uaf-1 is effectively decommissioned and uaf-2 is the glidein manager node, so don't use them. uaf-3 has a special configuration, so avoid it unless you know what you are doing.

To get login access, send email with your ssh key and hypernews account name to t2support. To get write access to dCache in your own /store/user area, send email with your hypernews account name and the output of "voms-proxy-info" to t2support. We provide 1TB of space in /store/user for every person from UCSB, UCR, or UCSD who is in CMS.

---+++ %MAKETEXT{"dedicated groups on uaf"}%

To share directories on the uaf between multiple people in a group, we define groups and use ACLs. If you need this functionality, do the following:

   * Request a group from t2support.
   * Once you have a group, use the following commands to create a directory and make it group writable:
<pre>mkdir bla
getfacl bla
setfacl -R -m g:cms1:rwx bla
setfacl -R -d -m g:cms1:rwx bla
getfacl bla
</pre>

This sets the default for all files in this directory, and does so recursively. Note that only the person who owns a file or directory can run these commands on it. Send email to t2support if you have problems.
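The steps above can be wrapped in a small shell helper that prints the exact commands for a given directory and group. This is only a convenience sketch; "bla" and "cms1" are the example names from above, so substitute your own directory and the group t2support assigns you:

```shell
# Dry-run helper: print the ACL setup commands for a shared group directory.
# Pipe the output to "sh" to actually execute them.
make_group_dir_cmds() {
    dir="$1"
    group="$2"
    echo "mkdir -p $dir"
    echo "setfacl -R -m g:$group:rwx $dir"     # group rwx on existing contents
    echo "setfacl -R -d -m g:$group:rwx $dir"  # default ACL: new files inherit rwx
    echo "getfacl $dir"                        # verify the result
}

make_group_dir_cmds bla cms1
```

Printing instead of executing lets you inspect the commands first, which is useful since a wrong recursive `setfacl` is tedious to undo.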
---++ %MAKETEXT{"Software Deployment"}%

codefs.t2.ucsd.edu is used to centrally deploy the CMS software and tools that provide most of the CMSSW and grid environment needed for user-level physics analysis and data operations. CMSSW is deployed via the Tier-2 software distribution across all USCMS Tier-2 (and some Tier-3) sites. In general, only standard releases of CMSSW will be deployed. Analysis and tests based on pre-releases will not be supported unless a specific request is made or the deployment is available under the standard procedure. To make a desktop machine similar to a Tier-2 interactive analysis machine, mount codefs.t2.ucsd.edu:/code/osgcode on the local directory /code/osgcode.

---++ CMSSW Environment

Access to the CMSSW repository (bash):
<pre>
export CMS_PATH=/code/osgcode/cmssoft/cms
export SCRAM_ARCH=slc5_ia32_gcc434
source ${CMS_PATH}/cmsset_default.sh
</pre>
or (tcsh):
<pre>
setenv CMS_PATH /code/osgcode/cmssoft/cms
setenv SCRAM_ARCH slc5_ia32_gcc434
source ${CMS_PATH}/cmsset_default.csh
</pre>

Create a CMSSW project area and set up its environment:
<verbatim>
cd your-work-directory
cmsrel CMSSW_3_8_4
cd CMSSW_3_8_4/src
cmsenv
</verbatim>

If you don't like waiting for your code to compile, try compiling in parallel on our 8-way machines:
<pre>
scramv1 b -j 8
</pre>

---++ Grid Environment and Tools

---+++ Grid Environment

The Grid environment is automatically in the path for all jobs; no additional steps are needed. Note: if you ever put any Grid customizations in your own .bashrc (or similar), you may want to clean them out.

---+++ (HT)Condor

Condor is in the path on uaf-4 through uaf-9, so users can use it without any special setup. Again, please make sure you don't have any old setup in your .bashrc (or similar). As a reminder, you can submit vanilla jobs to use the glidein system which is in place.
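A minimal vanilla-universe submit description file might look like the following sketch. The executable name "myjob.sh" and the output file names are placeholders for your own job:

```shell
# Write a minimal vanilla-universe submit description file (sketch).
cat > myjob.sub <<'EOF'
universe   = vanilla
executable = myjob.sh
arguments  = $(Process)
output     = myjob.$(Cluster).$(Process).out
error      = myjob.$(Cluster).$(Process).err
log        = myjob.log
queue 1
EOF

# Submit it with:  condor_submit myjob.sub
# Inspect it with: condor_q
cat myjob.sub
```

The `$(Cluster)` and `$(Process)` macros keep output files from different jobs from clobbering each other.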
Condor-G jobs are of course still supported, but are not recommended.

---+++ CRAB

To run the CRAB client, after setting up your CMSSW environment, you need only source the gLite UI and the CRAB setup file:
<verbatim>
GLITE_VERSION="gLite-3.2.11-1"
source /code/osgcode/ucsdt2/${GLITE_VERSION}/etc/profile.d/grid-env.sh
export LCG_GFAL_INFOSYS=lcg-bdii.cern.ch:2170
export GLOBUS_TCP_PORT_RANGE=20000,25000
source /code/osgcode/ucsdt2/Crab/etc/crab.[c]sh
</verbatim>

---+++ Old instructions

%MAROON%The following instructions are old and should be discarded. They are left here temporarily, just as a reminder of the past.%ENDCOLOR%

Make sure you have .globus and .glite directories in your home directory. The .glite directory needs to contain a file called vomses; you can get one from /code/osgcode/ucsdt2/etc.

   * *Setup Glite*

Before initiating the glite environment, please make sure no other grid environment exists, especially by checking that no VDT environment is sourced (the VDT environment is set up with "source <VDT Location>/setup.(c)sh").

To set up the glite environment using a Crab client >= 2.7.2 together with a Crabserver (works on both SLC4 and SLC5):
<pre>
source /code/osgcode/ucsdt2/gLite31/etc/profile.d/grid_env.[c]sh
export LCG_GFAL_INFOSYS=lcg-bdii.cern.ch:2170
export GLOBUS_TCP_PORT_RANGE=20000,25000
</pre>

To set up the glite environment using a Crab client >= 2.7.2 WITHOUT a Crabserver:
<pre>
# a) On SLC4 and SLC5 (glite 3.1)
source /code/osgcode/ucsdt2/gLite31/etc/profile.d/grid_env.[c]sh

# b) On SLC5 (glite 3.2)
source /code/osgcode/ucsdt2/gLite32/etc/profile.d/grid_env.[c]sh

export LCG_GFAL_INFOSYS=lcg-bdii.cern.ch:2170
export GLOBUS_TCP_PORT_RANGE=20000,25000
</pre>

The glite environment should allow you to get a proxy with the proper role in order to run your grid jobs:
<pre>
voms-proxy-init -valid 120:00 --voms cms:/cms/uscms/Role=cmsuser
</pre>

   * *Setup Condor and VDT*

On the uaf machines, the condor environment is already in the PATH. Combining the glite and condor environments, you can send grid jobs (e.g. crab jobs) via condor_g. If you choose VDT instead of glite to bring the grid environment to your analysis on the uaf machines:
<pre>
source /date/tmp/vdt/setup.(c)sh
</pre>
Never mix the VDT and glite environments.

   * *Setup CRAB*

There are primarily two submission methods for crab jobs, condor_g and glitecoll, which determine how crab is set up and used:
<pre>
1. Set up the CMSSW environment as described above.
2. Set up the glite or condor environment as described above.
3. source /code/osgcode/ucsdt2/Crab/etc/crab.(c)sh
</pre>

To check which crab version is actually in use:
<pre>
ls -l /code/osgcode/ucsdt2/Crab/etc/crab.(c)sh
</pre>

To publish datasets to DBS (this example uses the local DBS deployed at UCSD), the following configuration needs to be added to the [USER] section of crab.cfg:
<pre>
publish_data = 1
publish_data_name = "Njet_test1"
dbs_url_for_publication = http://ming.ucsd.edu:8080/DBS/servlet/DBSServlet
</pre>

To run analysis against a local DBS (other than the global CMS DBS), add the following to the [USER] section of crab.cfg:
<pre>
dbs_url = http://ming.ucsd.edu:8080/DBS/servlet/DBSServlet
</pre>

---++ Job Monitoring

Locally submitted jobs to the T2 Condor batch system can be monitored using:
   * [[http://glidein-mon.t2.ucsd.edu/ucsd/overview.html][Overview of jobs at UCSD Tier-2]]
   * [[https://glidein-mon.t2.ucsd.edu/jobmon/ucsd/][Detailed job monitoring for UCSD Tier-2]]

Jobs submitted to the Grid via Crabserver can be found at:
   * [[http://glidein-mon.t2.ucsd.edu:8080/dashboard/][http://glidein-mon.t2.ucsd.edu:8080/dashboard/]]

Note: in some cases you need to have your grid certificate loaded into the browser.

---++ Moving data to/from UCSD

---+++ Data Request via [[https://cmsweb.cern.ch/phedex/prod/Info::Main][PhEDEx]]

We encourage anybody to make data replication requests via the [[https://cmsweb.cern.ch/phedex/prod/Info::Main][PhEDEx]] pages. If you make a request, James Letts and fkw receive an email. One of them will approve the request as long as there is disk space available at the Tier-2. When they approve it, you receive an email back acknowledging the approved request.

To keep track of all the data at the UCSD Tier-2, we have developed a simple [[https://uaf-2.t2.ucsd.edu/datarequests/datareq.php?show=act&account=all&sort=&dir=0][accounting system]]. For this to work, you need to pick an account you want to charge your request to.
This is done by adding the following to the comment field when making the !PhEDEx request:
<pre>
|| acc = ucsb ||
</pre>
The above would charge the request to the UCSB account. An account is an arbitrary string. It might be easiest if you simply pick one of the accounts that already exist in the [[https://uaf-2.t2.ucsd.edu/datarequests/datareq.php?show=act&account=all&sort=&dir=0][accounting system]].

---+++ Absolute paths in the HDFS (Hadoop distributed file system) at UCSD

The interactive login nodes at UCSD allow you to do an ls on the directories in hdfs for both official and private user data:
<pre>
# for official data:
ls /hadoop/cms/phedex/store

# for private user data:
ls /hadoop/cms/store/user
</pre>

---+++ Moving Data by Users

The srm endpoint for data transfer is srm://bsrm-1.t2.ucsd.edu:8443/srm/v2/server?SFN=<PATH to the file>

Here are a few examples.

Copy a local file to the SE via srmcp:
<pre>
srmcp -2 file://localhost/<path to the src file> srm://bsrm-1.t2.ucsd.edu:8443/srm/v2/server?SFN=<PATH to the destination file>
</pre>

Copy a file from the SE to a local file via lcg-cp:
<pre>
lcg-cp -b -D srmv2 srm://bsrm-1.t2.ucsd.edu:8443/srm/v2/server?SFN=<PATH to the src file> <path to destination file>
</pre>

An example copy from your local directory into our srm would thus look like:
<pre>
lcg-cp -v -b -D srmv2 file:/home/users/tmartin/testfile.zero srm://bsrm-1.t2.ucsd.edu:8443/srm/v2/server?SFN=/hadoop/cms/store/user/tmartin/testfile-2.zero

srmcp -2 -debug=true -delegate=false file:////home/users/tmartin/smallfile.zero srm://bsrm-1.t2.ucsd.edu:8443/srm/v2/server?SFN=/hadoop/cms/store/user/tmartin/testfile.root
</pre>

To just do an ls via the srm:
<pre>
lcg-ls -l -b -D srmv2 srm://bsrm-1.t2.ucsd.edu:8443/srm/v2/server?SFN=/hadoop/cms/store/user/tmartin/

or

srmls -2 -delegate=false srm://bsrm-1.t2.ucsd.edu:8443/srm/v2/server?SFN=/hadoop/cms/store/user/tmartin
</pre>

-- Main.HaifengPi - 02 Sep 2008
-- Main.SanjayPadhi - 2009/03/08
-- Main.FkW - 2009/09/07
-- Main.JamesLetts - 2013/05/02
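Since every transfer command above repeats the same endpoint prefix, it can help to build the SRM URL from the /hadoop path with a small shell helper. This is a convenience sketch; the user path shown is the same illustrative example used above:

```shell
# Build the full SRM URL for a file under /hadoop at UCSD.
SRM_PREFIX="srm://bsrm-1.t2.ucsd.edu:8443/srm/v2/server?SFN="

srm_url() {
    echo "${SRM_PREFIX}$1"
}

# Example (path is illustrative):
srm_url /hadoop/cms/store/user/tmartin/testfile.root
# → srm://bsrm-1.t2.ucsd.edu:8443/srm/v2/server?SFN=/hadoop/cms/store/user/tmartin/testfile.root
```

You could then write, for example, `lcg-ls -l -b -D srmv2 "$(srm_url /hadoop/cms/store/user/tmartin/)"` and avoid typos in the long endpoint string.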