Difference: PhysicsAndMCproduction (1 vs. 33)

Revision 33 - 2013/05/03 - Main.JamesLetts

Line: 1 to 1
 

General Support

Line: 64 to 64
As a reminder, you can submit vanilla jobs to use the glidein system which is in place. Condor-G jobs are of course still supported, but are not recommended.
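
If you have not written a Condor submit description file before, a minimal vanilla-universe sketch looks like the following (the executable and file names are placeholders, not site requirements):

 universe   = vanilla
 executable = myjob.sh
 output     = myjob.$(Cluster).$(Process).out
 error      = myjob.$(Cluster).$(Process).err
 log        = myjob.$(Cluster).log
 queue 1

Save it as, e.g., myjob.sub, submit it with "condor_submit myjob.sub", and watch it with "condor_q".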

CRAB

Changed:
<
<
To run the CRAB client, after setting up your CMSSW environment, you need only source the crab set up file:
>
>
To run the CRAB client, after setting up your CMSSW environment, you need only source the gLite UI and the CRAB setup files:
 
Added:
>
>
GLITE_VERSION="gLite-3.2.11-1"
source /code/osgcode/ucsdt2/${GLITE_VERSION}/etc/profile.d/grid-env.sh
export LCG_GFAL_INFOSYS=lcg-bdii.cern.ch:2170
export GLOBUS_TCP_PORT_RANGE=20000,25000
 source /code/osgcode/ucsdt2/Crab/etc/crab.[c]sh
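
With both files sourced, a typical session then gets a VOMS proxy and submits; a sketch, reusing the proxy command from the old instructions below:

 voms-proxy-init -valid 120:00 --voms cms:/cms/uscms/Role=cmsuser
 crab -create
 crab -submit 1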

Revision 32 - 2013/05/02 - Main.JamesLetts

Line: 1 to 1
 

General Support

Line: 64 to 64
As a reminder, you can submit vanilla jobs to use the glidein system which is in place. Condor-G jobs are of course still supported, but are not recommended.

CRAB

Changed:
<
<
TBW
>
>
To run the CRAB client, after setting up your CMSSW environment, you need only source the crab set up file:
source /code/osgcode/ucsdt2/Crab/etc/crab.[c]sh 
 

Old instructions

The following instructions are old and should be discarded; we leave them here temporarily, just as a reminder of the past.

Line: 107 to 111
 To run analysis against the local DBS (rather than the global CMS DBS), the following configuration needs to be added to the [USER] section of crab.cfg:
      dbs_url = http://ming.ucsd.edu:8080/DBS/servlet/DBSServlet 
Deleted:
<
<
  • CRAB Server

A crab server is deployed at UCSD, which you can use for your crab submission. Following is the configuration in the [CRAB] section of crab.cfg file to specify the crab server and scheduler.

      scheduler = glite     
 use_server = 1 

A late-binding based CRAB server can be used as defined here. Essentially:

      scheduler =  glidein    
 use_server = 1 
 

Job Monitoring

Locally submitted jobs to the T2 Condor batch system can be monitored using:
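
In addition, the standard HTCondor command-line tools work on the uaf machines; for example (username and job id are placeholders):

 condor_q <username>
 condor_q -better-analyze <jobid>
 condor_history <username>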

Line: 159 to 155
 To just do ls via the srm would look like:
 lcg-ls -l -b -D srmv2 srm://bsrm-1.t2.ucsd.edu:8443/srm/v2/server?SFN=/hadoop/cms/store/user/tmartin/

or

 srmls -2 -delegate=false srm://bsrm-1.t2.ucsd.edu:8443/srm/v2/server?SFN=/hadoop/cms/store/user/tmartin
Deleted:
<
<

Validation of Local CRAB Client Installation

Updated instructions on this page using the CMS Workbook.

Set up of environment:

export CMS_PATH=/code/osgcode/cmssoft/cms        
export SCRAM_ARCH=slc5_ia32_gcc434
source ${CMS_PATH}/cmsset_default.sh       
cmsrel CMSSW_3_8_4
cd CMSSW_3_8_4/src
cmsenv
scram b
source /code/osgcode/ucsdt2/gLite31/etc/profile.d/grid_env.sh  
export LCG_GFAL_INFOSYS=lcg-bdii.cern.ch:2170
export GLOBUS_TCP_PORT_RANGE=20000,25000
voms-proxy-init -valid 120:00 --voms cms:/cms/uscms/Role=cmsuser 
source /code/osgcode/ucsdt2/Crab/CRAB_2_7_5/crab.sh 
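
Before creating jobs it can help to verify the proxy and the client; a quick check:

 voms-proxy-info -all
 which crab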

crab.cfg:

[CMSSW]
total_number_of_events=1
number_of_jobs=1
pset=tutorial.py
datasetpath=/Wmunu/Summer09-MC_31X_V3_7TeV-v1/GEN-SIM-RECO
output_file=out_test1.root

[USER]
return_data=0
email=jletts@ucsd.edu

copy_data = 1
storage_element = T2_US_UCSD

publish_data = 0
publish_data_name = jletts_Data
dbs_url_for_publication = https://cmsdbsprod.cern.ch:8443/cms_dbs_prod_local_09_writer/servlet/DBSServlet

[GRID]
SE_white_list=ucsd

[CRAB]
scheduler=glidein
jobtype=cmssw
server_name=ucsd

tutorial.py:

import FWCore.ParameterSet.Config as cms
process = cms.Process('Tutorial')
process.source = cms.Source("PoolSource", fileNames = cms.untracked.vstring())
process.maxEvents = cms.untracked.PSet( input       = cms.untracked.int32(10) )
process.options   = cms.untracked.PSet( wantSummary = cms.untracked.bool(True) )
process.output = cms.OutputModule("PoolOutputModule",
    outputCommands = cms.untracked.vstring("drop *", "keep recoTracks_*_*_*"),
    fileName = cms.untracked.string('out_test1.root'),
)
process.out_step = cms.EndPath(process.output)

Submit CRAB job:

crab -create
crab -validateCfg
crab -submit 1
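
After submission, status and output are typically retrieved through the same client; a sketch of the usual follow-up:

 crab -status
 crab -getoutput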
 -- HaifengPi - 02 Sep 2008

-- SanjayPadhi - 2009/03/08

-- FkW - 2009/09/07

Changed:
<
<
-- JamesLetts - 2010/11/05
>
>
-- JamesLetts - 2013/05/02

Revision 31 - 2013/05/02 - Main.IgorSfiligoi

Line: 1 to 1
 

General Support

Line: 11 to 11
 

Login Platforms

Changed:
<
<
The Tier-2 center supports multiple computers for interactive login. Those are called uaf-X.t2.ucsd.edu with X running from 1 to 6. The numbers 3,4,5,6 are modern 8ways with loads of memory. The 1,2 are older machines. I'd stay away from 1 if I were you.
>
>
The Tier-2 center supports multiple computers for interactive login. Those are called uaf-X.t2.ucsd.edu with X running from 1 to 9. That said, uaf-1 is effectively decommissioned and uaf-2 is the glidein manager node, so don't use them. uaf-3 has a special config, so avoid it unless you know what you are doing.
  To get login access, send email with your ssh key and hypernews account name to t2support. To get write access to dCache into your own /store/user area, send email with your hypernews account name and the output from "voms-proxy-info" to t2support.
Line: 50 to 50
 
 scramv1 b -j 8 

Grid Environment and Tools

Added:
>
>

Grid Environment

The Grid environment is automatically in the path for all jobs. No additional steps are needed.

Note: If you ever put any Grid customizations in your own .bashrc (or similar), you may want to clean them out.

(HT)Condor

Condor is in the path on uafs 4-9, so users can use it without any special setup.

Again, please make sure you don't have any old setup in your .bashrc (or similar).

As a reminder, you can submit vanilla jobs to use the glidein system which is in place. Condor-G jobs are of course still supported, but are not recommended.

CRAB

TBW

Old instructions

The following instructions are old and should be discarded; we leave them here temporarily, just as a reminder of the past.

  Make sure you have .globus and .glite directories in your home directory. In .glite, a file named vomses needs to be present. You can get one from /code/osgcode/ucsdt2/etc.
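
A minimal way to put the file in place, assuming the copy in /code/osgcode/ucsdt2/etc is named vomses and is current:

 mkdir -p ~/.globus ~/.glite
 cp /code/osgcode/ucsdt2/etc/vomses ~/.glite/vomses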
Line: 92 to 110
 
  • CRAB Server

A crab server is deployed at UCSD, which you can use for your crab submission. Following is the configuration in the [CRAB] section of crab.cfg file to specify the crab server and scheduler.

Changed:
<
<
      scheduler = glite     
 use_server = 1
>
>
      scheduler = glite     
 use_server = 1 
  A late-binding based CRAB server can be used as defined here. Essentially:
Changed:
<
<
      scheduler =  glidein    
 use_server = 1
>
>
      scheduler =  glidein    
 use_server = 1 
 

Job Monitoring

Revision 30 - 2013/05/02 - Main.FkW

Line: 1 to 1
 

General Support

Line: 143 to 143
 To just do ls via the srm would look like:
 lcg-ls -l -b -D srmv2 srm://bsrm-1.t2.ucsd.edu:8443/srm/v2/server?SFN=/hadoop/cms/store/user/tmartin/

or

 srmls -2 -delegate=false srm://bsrm-1.t2.ucsd.edu:8443/srm/v2/server?SFN=/hadoop/cms/store/user/tmartin
Deleted:
<
<

MC Production and Local Scope DBS

  • Production User Portal

For running user-level production, the URL of the portal is https://yuan.ucsd.edu/production_request. The x509 certificate needs to be stored in the web browser to enable access. Normally you import a PKCS#12 file of the certificate into the browser. If you can't find a PKCS#12 file from when you first received the x509 certificate, you can use the following command to create one (named MyCert.p12) from your x509 certificate and key:

     openssl pkcs12 -export -in usercert.pem -inkey userkey.pem -out MyCert.p12 -name "my x509 cert" 
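
To sanity-check the resulting file before importing it into the browser, openssl can print a summary (you will be prompted for the export password):

 openssl pkcs12 -info -in MyCert.p12 -noout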

The portal allows you to run small- and mid-scale production based on the ProdAgent and GlideinWMS systems with full detector simulation and reconstruction. The MC production will run on USCMS Tier-2 and a few Tier-3 sites. The output will be stored in the UCSD storage system and published in the local DBS deployed at UCSD.

The data discovery of the local DBS is http://ming.ucsd.edu/data_discovery. The instance to be published is "dbs_2009". To access the datasets published at this local DBS by crab, the crab configuration needs to refer to the local DBS interface, http://ming.ucsd.edu:8080/DBS1/servlet/DBSServlet.

  • Local DBS

The local DBS is implemented to support data publication via CRAB or ProdAgent. For CRAB, publication to or reading from the local DBS can be set up by adding the following to the [USER] section of crab.cfg:

For old version of DBS (DBS_1_0_8)

        dbs_url_for_publication = http://ming.ucsd.edu:8080/DBS/servlet/DBSServlet       
        dbs_url = http://ming.ucsd.edu:8080/DBS/servlet/DBSServlet 

For new version of DBS (DBS_2_0_4)

        dbs_url_for_publication = http://ming.ucsd.edu:8080/DBS1/servlet/DBSServlet       
        dbs_url = http://ming.ucsd.edu:8080/DBS1/servlet/DBSServlet 

Data discovery for above two DBS instances is

http://ming.ucsd.edu/data_discovery_old

To look at the datasets recorded in DBS, you need to choose the "instance" in the selection menu: "dbs_2008" corresponds to the old DBS, "dbs_2009" to the new DBS.

For newest version of DBS (DBS_2_0_8) or later

        dbs_url_for_publication = http://ming.ucsd.edu:8080/DBS2/servlet/DBSServlet       
        dbs_url = http://ming.ucsd.edu:8080/DBS2/servlet/DBSServlet 

The data discovery of the newest DBS is

http://ming.ucsd.edu/data_discovery 

Currently please use DBS_2_0_8 for analysis and dataset publication. Any user dataset in the old DBS can be migrated to the latest instance. Please contact us if you need migration.

      
 

Validation of Local CRAB Client Installation

Updated instructions on this page using the CMS Workbook.

Revision 29 - 2010/11/05 - Main.JamesLetts

Line: 1 to 1
 

General Support

Line: 36 to 36
To make a desktop machine similar to a Tier-2 interactive analysis machine (for example uaf-1.t2.ucsd.edu), codefs.t2.ucsd.edu:/code/osgcode needs to be mounted at the local directory /code/osgcode.
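
A sketch of the corresponding mount command, assuming a standard NFS export and that the local mount point /code/osgcode exists:

 mount -t nfs codefs.t2.ucsd.edu:/code/osgcode /code/osgcode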

CMSSW Environment

Changed:
<
<
Access to CMSSW repository
      export CMS_PATH=/code/osgcode/cmssoft/cms        
      export SCRAM_ARCH=slc4_ia32_gcc345        
      source ${CMS_PATH}/cmsset_default.sh       
       or         
      setenv CMS_PATH /code/osgcode/cmssoft/cms        
      setenv SCRAM_ARCH slc4_ia32_gcc345        
      source ${CMS_PATH}/cmsset_default.csh 
>
>
Access to CMSSW repository
      export CMS_PATH=/code/osgcode/cmssoft/cms        
      export SCRAM_ARCH=slc5_ia32_gcc434        
      source ${CMS_PATH}/cmsset_default.sh       
       or         
      setenv CMS_PATH /code/osgcode/cmssoft/cms        
      setenv SCRAM_ARCH slc5_ia32_gcc434        
      source ${CMS_PATH}/cmsset_default.csh 
 
Changed:
<
<
Create CMSSW project area and setp environment
       cd your-work-directory        
       scramv1 project CMSSW CMSSW_1_6_12        
       eval `scramv1 runtime -(c)sh` 
>
>
Create CMSSW project area and set up environment
cd your-work-directory
cmsrel CMSSW_3_8_4
cd CMSSW_3_8_4/src
cmsenv
  If you don't like waiting for your code to compile, try out compiling in parallel on our 8ways:
 scramv1 b -j 8 
Line: 106 to 111
 Note: In some cases one needs to have the grid certificate loaded into the browser.

Moving data to/from UCSD

Changed:
<
<

Data Request via PhEDex?

>
>

Data Request via PhEDex

 
Changed:
<
<
We encourage anybody to make data replication requests via the PhEDEx? pages. If you make a request, James Letts and fkw receive an email. One of them will approve the request as long as there is disk space at the Tier-2 available. When they approve it, you receive an email back acknowledging the approved request.
>
>
We encourage anybody to make data replication requests via the PhEDEx pages. If you make a request, James Letts and fkw receive an email. One of them will approve the request as long as there is disk space at the Tier-2 available. When they approve it, you receive an email back acknowledging the approved request.
 
Changed:
<
<
To keep track of all the data at the UCSD Tier-2, we have developed a simple accounting system. For this to work, you need to pick an account you want to charge your request to. This is done by adding the following to the comment field when making the PhEDEx? request:
>
>
To keep track of all the data at the UCSD Tier-2, we have developed a simple accounting system. For this to work, you need to pick an account you want to charge your request to. This is done by adding the following to the comment field when making the PhEDEx request:
 
|| acc = ucsb || 

The above would charge the request to the UCSB account. An account is an arbitrary string. It might be easiest if you simply pick one of the accounts that already exist in the accounting system.

Line: 175 to 180
 Currently please use DBS_2_0_8 for analysis and dataset publication. Any user dataset in the old DBS can be migrated to the latest instance. Please contact us if you need migration.
      
Added:
>
>

Validation of Local CRAB Client Installation

Updated instructions on this page using the CMS Workbook.

Set up of environment:

export CMS_PATH=/code/osgcode/cmssoft/cms        
export SCRAM_ARCH=slc5_ia32_gcc434
source ${CMS_PATH}/cmsset_default.sh       
cmsrel CMSSW_3_8_4
cd CMSSW_3_8_4/src
cmsenv
scram b
source /code/osgcode/ucsdt2/gLite31/etc/profile.d/grid_env.sh  
export LCG_GFAL_INFOSYS=lcg-bdii.cern.ch:2170
export GLOBUS_TCP_PORT_RANGE=20000,25000
voms-proxy-init -valid 120:00 --voms cms:/cms/uscms/Role=cmsuser 
source /code/osgcode/ucsdt2/Crab/CRAB_2_7_5/crab.sh 

crab.cfg:

[CMSSW]
total_number_of_events=1
number_of_jobs=1
pset=tutorial.py
datasetpath=/Wmunu/Summer09-MC_31X_V3_7TeV-v1/GEN-SIM-RECO
output_file=out_test1.root

[USER]
return_data=0
email=jletts@ucsd.edu

copy_data = 1
storage_element = T2_US_UCSD

publish_data = 0
publish_data_name = jletts_Data
dbs_url_for_publication = https://cmsdbsprod.cern.ch:8443/cms_dbs_prod_local_09_writer/servlet/DBSServlet

[GRID]
SE_white_list=ucsd

[CRAB]
scheduler=glidein
jobtype=cmssw
server_name=ucsd

tutorial.py:

import FWCore.ParameterSet.Config as cms
process = cms.Process('Tutorial')
process.source = cms.Source("PoolSource", fileNames = cms.untracked.vstring())
process.maxEvents = cms.untracked.PSet( input       = cms.untracked.int32(10) )
process.options   = cms.untracked.PSet( wantSummary = cms.untracked.bool(True) )
process.output = cms.OutputModule("PoolOutputModule",
    outputCommands = cms.untracked.vstring("drop *", "keep recoTracks_*_*_*"),
    fileName = cms.untracked.string('out_test1.root'),
)
process.out_step = cms.EndPath(process.output)

Submit CRAB job:

crab -create
crab -validateCfg
crab -submit 1
 -- HaifengPi - 02 Sep 2008

-- SanjayPadhi - 2009/03/08

-- FkW - 2009/09/07

Added:
>
>
-- JamesLetts - 2010/11/05

Revision 28 - 2010/05/10 - Main.SanjayPadhi

Line: 1 to 1
 

General Support

Line: 52 to 52
  Before initiating the glite environment, please make sure no other grid environment exists, especially by checking no VDT environment is sourced (the VDT environment is set up with "source /setup.(c)sh").
Changed:
<
<
To setup the glite environment, using Crab client 2.6.6 associated with a Crabserver both on SLC4 and SLC5 mode.
>
>
To set up the glite environment when using a CRAB client >= 2.7.2 associated with a CRAB server, in both SLC4 and SLC5 mode:
 
Changed:
<
<
 source /code/osgcode/ucsdt2/gLite31/etc/profile.d/grid_env.[c]sh  
        export LCG_GFAL_INFOSYS=uscmsbd2.fnal.gov:2170         
        export GLOBUS_TCP_PORT_RANGE=20000,25000 
>
>
 source /code/osgcode/ucsdt2/gLite31/etc/profile.d/grid_env.[c]sh  
        export LCG_GFAL_INFOSYS=lcg-bdii.cern.ch:2170
        export GLOBUS_TCP_PORT_RANGE=20000,25000 
 
Changed:
<
<
To setup the glite environment, using Crab client 2.6.6 WITHOUT Crabserver Or with Crab client >= 2.7.x series
  a) On SLC4 and SLC5 (glite 3.1)  source /code/osgcode/ucsdt2/gLite31/etc/profile.d/grid_env.[c]sh  
  b) On SLC5 (glite 3.2)  source /code/osgcode/ucsdt2/gLite32/etc/profile.d/grid_env.[c]sh 
       export LCG_GFAL_INFOSYS=uscmsbd2.fnal.gov:2170         
       export GLOBUS_TCP_PORT_RANGE=20000,25000 
>
>
To set up the glite environment when using a CRAB client >= 2.7.2 WITHOUT a CRAB server:
  a) On SLC4 and SLC5 (glite 3.1)  source /code/osgcode/ucsdt2/gLite31/etc/profile.d/grid_env.[c]sh  
  b) On SLC5 (glite 3.2)  source /code/osgcode/ucsdt2/gLite32/etc/profile.d/grid_env.[c]sh 
       export LCG_GFAL_INFOSYS=lcg-bdii.cern.ch:2170  
       export GLOBUS_TCP_PORT_RANGE=20000,25000 
  The glite environment should allow you to get the proxy and proper role in order to run your grid jobs
       voms-proxy-init -valid 120:00 --voms cms:/cms/uscms/Role=cmsuser 
Line: 87 to 87
 
  • CRAB Server

A crab server is deployed at UCSD, which you can use for your crab submission. Following is the configuration in the [CRAB] section of crab.cfg file to specify the crab server and scheduler.

Changed:
<
<
      server_name = ucsd 
      scheduler = glite         
>
>
      scheduler = glite     
 use_server = 1
  Late binding based Crabserver can be used as defined here Essentially:
Changed:
<
<
      server_name = ucsd 
      scheduler =  glidein    
>
>
      scheduler =  glidein    
 use_server = 1
 

Job Monitoring

Line: 160 to 162
  Data discovery for above two DBS instances is
Changed:
<
<
http://ming.ucsd.edu/data_discovery_old
>
>
http://ming.ucsd.edu/data_discovery_old
  To look at the datasets recorded in DBS, you need to choose the "instance" in the selection menu: "dbs_2008" corresponds to the old DBS, "dbs_2009" to the new DBS.

For newest version of DBS (DBS_2_0_8) or later

Changed:
<
<
        dbs_url_for_publication = http://ming.ucsd.edu:8080/DBS2/servlet/DBSServlet       
        dbs_url = http://ming.ucsd.edu:8080/DBS2/servlet/DBSServlet 
>
>
        dbs_url_for_publication = http://ming.ucsd.edu:8080/DBS2/servlet/DBSServlet       
        dbs_url = http://ming.ucsd.edu:8080/DBS2/servlet/DBSServlet 
  The data discovery of the newest DBS is
http://ming.ucsd.edu/data_discovery 

Revision 27 - 2010/03/15 - Main.HaifengPi

Line: 1 to 1
 

General Support

Line: 153 to 153
 The local DBS is implemented to support data publication via Crab or ProdAgent? . For Crab, the publication or access the local DBS can be set up by adding following to the [USER] section in the crab.cfg

For old version of DBS (DBS_1_0_8)

Changed:
<
<
        dbs_url_for_publication = http://ming.ucsd.edu:8080/DBS/servlet/DBSServlet       
        dbs_url = http://ming.ucsd.edu:8080/DBS/servlet/DBSServlet 
 
>
>
        dbs_url_for_publication = http://ming.ucsd.edu:8080/DBS/servlet/DBSServlet       
        dbs_url = http://ming.ucsd.edu:8080/DBS/servlet/DBSServlet 
  For new version of DBS (DBS_2_0_4)
        dbs_url_for_publication = http://ming.ucsd.edu:8080/DBS1/servlet/DBSServlet       
        dbs_url = http://ming.ucsd.edu:8080/DBS1/servlet/DBSServlet 
Changed:
<
<
The data discovery of the local DBS is
http://ming.ucsd.edu/data_discovery 
>
>
Data discovery for above two DBS instances is

http://ming.ucsd.edu/data_discovery_old
  To look at the datasets recorded in DBS, you need to choose the "instance" in the selection menu: "dbs_2008" corresponds to the old DBS, "dbs_2009" to the new DBS.
Changed:
<
<
      
  
>
>
For newest version of DBS (DBS_2_0_8) or later
        dbs_url_for_publication = http://ming.ucsd.edu:8080/DBS2/servlet/DBSServlet       
        dbs_url = http://ming.ucsd.edu:8080/DBS2/servlet/DBSServlet 

The data discovery of the newest DBS is

http://ming.ucsd.edu/data_discovery 

Currently please use DBS_2_0_8 for analysis and dataset publication. Any user dataset in the old DBS can be migrated to the latest instance. Please contact us if you need migration.

      
  -- HaifengPi - 02 Sep 2008

Revision 26 - 2010/02/08 - Main.SanjayPadhi

Line: 1 to 1
 

General Support

Line: 52 to 52
  Before initiating the glite environment, please make sure no other grid environment exists, especially by checking no VDT environment is sourced (the VDT environment is set up with "source /setup.(c)sh").
Changed:
<
<
To setup the glite environment,
>
>
To setup the glite environment, using Crab client 2.6.6 associated with a Crabserver both on SLC4 and SLC5 mode.
 
Changed:
<
<
 a) On SLC4 (glite 3.1)  source /code/osgcode/ucsdt2/gLite31/etc/profile.d/grid_env.[c]sh  
 b) On SLC5 (glite 3.2)  source /code/osgcode/ucsdt2/gLite32/etc/profile.d/grid_env.[c]sh 
       export LCG_GFAL_INFOSYS=uscmsbd2.fnal.gov:2170         
       export GLOBUS_TCP_PORT_RANGE=20000,25000 
>
>
 source /code/osgcode/ucsdt2/gLite31/etc/profile.d/grid_env.[c]sh  
        export LCG_GFAL_INFOSYS=uscmsbd2.fnal.gov:2170         
        export GLOBUS_TCP_PORT_RANGE=20000,25000 

To setup the glite environment, using Crab client 2.6.6 WITHOUT Crabserver Or with Crab client >= 2.7.x series

  a) On SLC4 and SLC5 (glite 3.1)  source /code/osgcode/ucsdt2/gLite31/etc/profile.d/grid_env.[c]sh  
  b) On SLC5 (glite 3.2)  source /code/osgcode/ucsdt2/gLite32/etc/profile.d/grid_env.[c]sh 
       export LCG_GFAL_INFOSYS=uscmsbd2.fnal.gov:2170         
       export GLOBUS_TCP_PORT_RANGE=20000,25000 
  The glite environment should allow you to get the proxy and proper role in order to run your grid jobs
       voms-proxy-init -valid 120:00 --voms cms:/cms/uscms/Role=cmsuser 

Revision 25 - 2010/02/08 - Main.SanjayPadhi

Line: 1 to 1
 

General Support

Line: 54 to 54
  To setup the glite environment,
Changed:
<
<
 a) On SLC4 (glite 3.1)
 source /code/osgcode/ucsdt2/gLite31/etc/profile.d/grid_env.[c]sh

 b) On SLC5 (glite 3.2)
 source /code/osgcode/ucsdt2/gLite32/etc/profile.d/grid_env.[c]sh
>
>
 a) On SLC4 (glite 3.1)  source /code/osgcode/ucsdt2/gLite31/etc/profile.d/grid_env.[c]sh  
 b) On SLC5 (glite 3.2)  source /code/osgcode/ucsdt2/gLite32/etc/profile.d/grid_env.[c]sh 
       export LCG_GFAL_INFOSYS=uscmsbd2.fnal.gov:2170         
       export GLOBUS_TCP_PORT_RANGE=20000,25000 
  The glite environment should allow you to get the proxy and proper role in order to run your grid jobs
       voms-proxy-init -valid 120:00 --voms cms:/cms/uscms/Role=cmsuser 
Line: 91 to 84
 
  • CRAB Server

A crab server is deployed at UCSD, which you can use for your crab submission. Following is the configuration in the [CRAB] section of crab.cfg file to specify the crab server and scheduler.

Changed:
<
<
      server_name = ucsd 
      scheduler = glitecoll         
>
>
      server_name = ucsd 
      scheduler = glite         
  Late binding based Crabserver can be used as defined here Essentially:
      server_name = ucsd 
      scheduler =  glidein    
Line: 135 to 128
 
   lcg-cp -b -D srmv2  srm://bsrm-1.t2.ucsd.edu:8443/srm/v2/server?SFN=<PATH to the src file> <path to destination file>

An example copy from your local directory into our srm would thus look like:

Changed:
<
<
lcg-cp -v -b -D srmv2 file:/home/users/tmartin/testfile.zero  srm://bsrm-1.t2.ucsd.edu:8443/srm/v2/server?SFN=/hadoop/cms/store/user/tmartin/testfile-2.zero

srmcp -2 -debug=true -delegate=false file:////home/users/tmartin/smallfile.zero srm://bsrm-1.t2.ucsd.edu:8443/srm/v2/server?SFN=/hadoop/cms/store/user/tmartin/testfile.root
>
>
lcg-cp -v -b -D srmv2 file:/home/users/tmartin/testfile.zero srm://bsrm-1.t2.ucsd.edu:8443/srm/v2/server?SFN=/hadoop/cms/store/user/tmartin/testfile-2.zero

srmcp -2 -debug=true -delegate=false file:////home/users/tmartin/smallfile.zero srm://bsrm-1.t2.ucsd.edu:8443/srm/v2/server?SFN=/hadoop/cms/store/user/tmartin/testfile.root
  To just do ls via the srm would look like:
Changed:
<
<
 lcg-ls -l -b -D srmv2 srm://bsrm-1.t2.ucsd.edu:8443/srm/v2/server?SFN=/hadoop/cms/store/user/tmartin/

or 
 srmls -2 -delegate=false  srm://bsrm-1.t2.ucsd.edu:8443/srm/v2/server?SFN=/hadoop/cms/store/user/tmartin
>
>
 lcg-ls -l -b -D srmv2 srm://bsrm-1.t2.ucsd.edu:8443/srm/v2/server?SFN=/hadoop/cms/store/user/tmartin/

or

 srmls -2 -delegate=false srm://bsrm-1.t2.ucsd.edu:8443/srm/v2/server?SFN=/hadoop/cms/store/user/tmartin
 

MC Production and Local Scope DBS

Revision 24 - 2010/02/08 - Main.FkW

Line: 1 to 1
 

General Support

Line: 53 to 53
 Before initiating the glite environment, please make sure no other grid environment exists, especially by checking no VDT environment is sourced (the VDT environment is set up with "source /setup.(c)sh").

To setup the glite environment,

Changed:
<
<
       source /code/osgcode/ucsdt2/gLite/etc/profile.d/grid_env.(c)sh 
>
>
 a) On SLC4 (glite 3.1)
 source /code/osgcode/ucsdt2/gLite31/etc/profile.d/grid_env.[c]sh

 b) On SLC5 (glite 3.2)
 source /code/osgcode/ucsdt2/gLite32/etc/profile.d/grid_env.[c]sh
  The glite environment should allow you to get the proxy and proper role in order to run your grid jobs
       voms-proxy-init -valid 120:00 --voms cms:/cms/uscms/Role=cmsuser 
Deleted:
<
<
As an aside, in case you need to install gLite client at your home institution, i.e. anywhere outside the login environment of the Tier-2 at UCSD, you can do so following the instructions at FkwInstallgLite .
 
  • Setup Condor and VDT

Revision 23 - 2009/12/09 - Main.FkW

Line: 1 to 1
 

General Support

Line: 128 to 128
 copy a file at SE to local via lcg-cp
   lcg-cp -b -D srmv2  srm://bsrm-1.t2.ucsd.edu:8443/srm/v2/server?SFN=<PATH to the src file> <path to destination file>
Added:
>
>
An example copy from your local directory into our srm would thus look like:
lcg-cp -v -b -D srmv2 file:/home/users/tmartin/testfile.zero  srm://bsrm-1.t2.ucsd.edu:8443/srm/v2/server?SFN=/hadoop/cms/store/user/tmartin/testfile-2.zero

srmcp -2 -debug=true -delegate=false file:////home/users/tmartin/smallfile.zero srm://bsrm-1.t2.ucsd.edu:8443/srm/v2/server?SFN=/hadoop/cms/store/user/tmartin/testfile.root

To just do ls via the srm would look like:

 lcg-ls -l -b -D srmv2 srm://bsrm-1.t2.ucsd.edu:8443/srm/v2/server?SFN=/hadoop/cms/store/user/tmartin/

or 
 srmls -2 -delegate=false  srm://bsrm-1.t2.ucsd.edu:8443/srm/v2/server?SFN=/hadoop/cms/store/user/tmartin
 

MC Production and Local Scope DBS

  • Production User Portal

Revision 22 - 2009/12/04 - Main.HaifengPi

Line: 1 to 1
 

General Support

Line: 101 to 101
  Note: In some cases one needs to have the grid certificate loaded into the browser.
Changed:
<
<

Moving data to UCSD

>
>

Moving data to/from UCSD

Data Request via PhEDex?

  We encourage anybody to make data replication requests via the PhEDEx? pages. If you make a request, James Letts and fkw receive an email. One of them will approve the request as long as there is disk space at the Tier-2 available. When they approve it, you receive an email back acknowledging the approved request.

To keep track of all the data at the UCSD Tier-2, we have developed a simple accounting system. For this to work, you need to pick an account you want to charge your request to. This is done by adding the following to the comment field when making the PhEDEx? request:

Line: 114 to 116
 The interactive login nodes at UCSD allow you to do an ls on the directories in hdfs for both the official as well as user data:
#for official data: ls /hadoop/cms/phedex/store 
#for private user data: ls /hadoop/cms/store/user 
Changed:
<
<
The srm endpoint for data transfer is srm://bsrm-1.t2.ucsd.edu:8443/srm/v2/server?SFN=<PATH to the destination file>
>
>

Moving Data by Users

The srm endpoint for data transfer is srm://bsrm-1.t2.ucsd.edu:8443/srm/v2/server?SFN=<PATH to the file>

Here are a few examples:

copy a local file to SE via srmcp

   srmcp -2 file://localhost/<path to the src file> srm://bsrm-1.t2.ucsd.edu:8443/srm/v2/server?SFN=<PATH to the destination file>
 
Added:
>
>
copy a file at SE to local via lcg-cp
   lcg-cp -b -D srmv2  srm://bsrm-1.t2.ucsd.edu:8443/srm/v2/server?SFN=<PATH to the src file> <path to destination file>
 

MC Production and Local Scope DBS

Revision 21 - 2009/12/04 - Main.FkW

Line: 1 to 1
 

General Support

Line: 116 to 116
  The srm endpoint for data transfer is srm://bsrm-1.t2.ucsd.edu:8443/srm/v2/server?SFN=<PATH to the destination file>
Deleted:
<
<

General Support

We have two listservs for support, one for announcements from the Tier-2 admins to the users, and one for complaints and questions from the users to the admins. Every user of the Tier-2 should subscribe to the announcement listserv.

Announcements from admins to users: cmst2 at physics dot ucsd dot edu*
*The archive for this list is here: cmst2 archive

Questions and complaints from users to admins: t2support at physics dot ucsd dot edu

Login Platforms

The Tier-2 center supports multiple computers for interactive login. Those are called uaf-X.t2.ucsd.edu with X running from 1 to 6. The numbers 3,4,5,6 are modern 8ways with loads of memory. The 1,2 are older machines. I'd stay away from 1 if I were you.

To get login access, send email with your ssh key and hypernews account name to t2support. To get write access to dCache into your own /store/user area, send email with your hypernews account name and the output from "voms-proxy-info" to t2support.

We support 1TB of space in /store/user for every person from UCSB, UCR, UCSD who is in CMS.

dedicated groups on uaf

To share directories on the uaf between multiple people in a group, we define groups and use ACLs. If you need this functionality, do the following:
  • Request a group from t2support
  • Once you have a group, you need the following commands to make a directory and define it as group writeable.
mkdir bla
getfacl bla
setfacl -R -m g:cms1:rwx bla
setfacl -R -d -m g:cms1:rwx bla
getfacl bla

This sets the default for all files in this directory, and does so recursively. Only the person who owns the file or directory can execute the command on it.

Send email to t2support if you have problems.

Software Deployment

codefs.t2.ucsd.edu is used to centrally deploy the CMS software and tools that provide most of necessary CMSSW and grid environment for the user level physics analysis and data operation.

The CMSSW is deployed via Tier-2 software distribution across the whole USCMS tier-2 (and some tier-3 sites). In general only standard release of CMSSW will be deployed. The analysis and test based on pre-release will not be supported unless the specific request is made or the deployment of the software is available under the standard procedure.

To make a desktop machine similar to the tier-2 interactive analysis machine, for example uaf-1.t2.ucsd.edu, the codefs.t2.ucsd.edu:/code/osgcode needs to be mounted to the local directory /code/osgcode

CMSSW Environment

Access to CMSSW repository
      export CMS_PATH=/code/osgcode/cmssoft/cms        
      export SCRAM_ARCH=slc4_ia32_gcc345        
      source ${CMS_PATH}/cmsset_default.sh       
       or         
      setenv CMS_PATH /code/osgcode/cmssoft/cms        
      setenv SCRAM_ARCH slc4_ia32_gcc345        
      source ${CMS_PATH}/cmsset_default.csh 

Create CMSSW project area and set up environment

       cd your-work-directory        
       scramv1 project CMSSW CMSSW_1_6_12        
       eval `scramv1 runtime -(c)sh` 

If you don't like waiting for your code to compile, try out compiling in parallel on our 8ways:

 scramv1 b -j 8 

Grid Environment and Tools

Make sure you have .globus and .glite directories in the home directory. In the .glite, there is a file, vomses, needs to be there. You can get one from /code/osgcode/ucsdt2/etc.

  • Setup Glite

Before initiating the glite environment, please make sure no other grid environment exists, especially by checking no VDT environment is sourced (the VDT environment is set up with "source /setup.(c)sh").

To setup the glite environment,

       source /code/osgcode/ucsdt2/gLite/etc/profile.d/grid_env.(c)sh 

The glite environment should allow you to get the proxy and proper role in order to run your grid jobs

       voms-proxy-init -valid 120:00 --voms cms:/cms/uscms/Role=cmsuser 

As an aside, in case you need to install gLite client at your home institution, i.e. anywhere outside the login environment of the Tier-2 at UCSD, you can do so following the instructions at FkwInstallgLite .

  • Setup Condor and VDT

On the uaf machines, the condor environment is already in the PATH. Combining the glite and condor environments, you can send grid jobs (e.g. crab jobs) via condor_g.

If VDT is chosen to bring the grid environment to your analysis instead of glite in the uaf machines,

       source /date/tmp/vdt/setup.(c)sh  

Never mix VDT with glite environment.

  • Setup CRAB

There are primarily two submission methods to send crab jobs, condor_g and glitecoll, which determines how crab is set up and used.

     1. setup CMSSW environment as described above      
     2. setup glite or condor environment as described above      
     3. source /code/osgcode/ucsdt2/Crab/etc/crab.(c)sh 

To check which crab version is actually used by "ls -l /code/osgcode/ucsdt2/Crab/etc/crab.(c)sh"

To publish the datasets to the DBS (here is an example of local DBS deployed at UCSD), in the [USER] section of crab.cfg, following configuration needs to be added

      publish_data = 1        
      publish_data_name = "Njet_test1"       
      dbs_url_for_publication = http://ming.ucsd.edu:8080/DBS/servlet/DBSServlet 

To run analysis to the local DBS (other than the global CMS DBS), in the [USER] section of crab.cfg, following configuration needs to be added

      dbs_url = http://ming.ucsd.edu:8080/DBS/servlet/DBSServlet 

  • CRAB Server

A crab server is deployed at UCSD, which you can use for your crab submission. Following is the configuration in the [CRAB] section of crab.cfg file to specify the crab server and scheduler.

      server_name = ucsd 
      scheduler = glitecoll         

Late binding based Crabserver can be used as defined here Essentially:

      server_name = ucsd 
      scheduler =  glidein    

Job Monitoring

Locally submitted jobs to the T2 Condor batch system can be monitored using:

Jobs submitted to the Grid via Crabserver can be found at:

Note: In some cases one needs to have the grid certificate loaded into the browser.

Moving data to UCSD

We encourage anybody to make data replication requests via the PhEDEx? pages. If you make a request, James Letts and fkw receive an email. One of them will approve the request as long as there is disk space at the Tier-2 available. When they approve it, you receive an email back acknowledging the approved request.

To keep track of all the data at the UCSD Tier-2, we have developed a simple accounting system. For this to work, you need to pick an account you want to charge your request to. This is done by adding the following to the comment field when making the PhEDEx? request:

|| acc = ucsb || 

The above would charge the request to the UCSB account. An account is an arbitrary string. It might be easiest if you simply pick one of the accounts that already exist in the accounting system.

Absolute path in the HDFS (hadoop-distributed file system) system at UCSD

To move personal data to the storage via SRM, the endpoint of UCSD HDFS is srm://bsrm-1.t2.ucsd.edu:8443/srm/v2/server?SFN=<path to the destination file>

The interactive login nodes at UCSD allow you to do an ls on the directories in hdfs for both the official as well as user data:

#for official data: ls /hadoop/cms/phedex/store 
#for private user data: ls /hadoop/cms/store/user  
 

MC Production and Local Scope DBS

Revision 20 - 2009/12/03 - Main.HaifengPi

Line: 1 to 1
 

General Support

Line: 58 to 58
 The glite environment should allow you to get the proxy and proper role in order to run your grid jobs
       voms-proxy-init -valid 120:00 --voms cms:/cms/uscms/Role=cmsuser 
Changed:
<
<
As an aside, in case you need to install gLite client at your home institution, i.e. anywhere outside the login environment of the Tier-2 at UCSD, you can do so following the instructions at FkwInstallgLite .
>
>
As an aside, in case you need to install gLite client at your home institution, i.e. anywhere outside the login environment of the Tier-2 at UCSD, you can do so following the instructions at FkwInstallgLite .
 
  • Setup Condor and VDT
Line: 110 to 109
  The above would charge the request to the UCSB account. An account is an arbitrary string. It might be easiest if you simply pick one of the accounts that already exist in the accounting system.
Changed:
<
<

Absolute path in the dcache system at UCSD

>
>

Absolute path in the HDFS (hadoop-distributed file system) system at UCSD

 
Changed:
<
<
The interactive login nodes at UCSD allow you to do an ls on the directories in dCache for both the official as well as user data:
#for official data: ls /pnfs/t2.ucsd.edu/data3/cms/phedex/store #for private user data: ls /pnfs/t2.ucsd.edu/data4/cms/store/user 
>
>
The interactive login nodes at UCSD allow you to do an ls on the directories in hdfs for both the official as well as user data:
#for official data: ls /hadoop/cms/phedex/store 
#for private user data: ls /hadoop/cms/store/user 
 
Changed:
<
<
To get the host and port for srm and dcap, please check out http://dcache.ucsd.edu
This page has a wealth of monitoring information in addition to listing the host and port for srm, dccp, etc.
>
>
The srm endpoint for data transfer is srm://bsrm-1.t2.ucsd.edu:8443/srm/v2/server?SFN=<PATH to the destination file>

General Support

We have two listservs for support, one for announcements from the Tier-2 admins to the users, and one for complaints and questions from the users to the admins. Every user of the Tier-2 should subscribe to the announcement listserv.

Announcements from admins to users: cmst2 at physics dot ucsd dot edu*
*The archive for this list is here: cmst2 archive

Questions and complaints from users to admins: t2support at physics dot ucsd dot edu

Login Platforms

The Tier-2 center supports multiple computers for interactive login. Those are called uaf-X.t2.ucsd.edu with X running from 1 to 6. The numbers 3,4,5,6 are modern 8ways with loads of memory. The 1,2 are older machines. I'd stay away from 1 if I were you.

To get login access, send email with your ssh key and hypernews account name to t2support. To get write access to dCache into your own /store/user area, send email with your hypernews account name and the output from "voms-proxy-info" to t2support.

We support 1TB of space in /store/user for every person from UCSB, UCR, UCSD who is in CMS.

dedicated groups on uaf

To share directories on the uaf between multiple people in a group, we define groups and use ACLs. If you need this functionality, do the following:
  • Request a group from t2support
  • Once you have a group, you need the following commands to make a directory and define it as group writeable.
mkdir bla
getfacl bla
setfacl -R -m g:cms1:rwx bla
setfacl -R -d -m g:cms1:rwx bla
getfacl bla

This sets the default for all files in this directory, and does so recursively. Only the person who owns the file or directory can execute the command on it.

Send email to t2support if you have problems.

Software Deployment

codefs.t2.ucsd.edu is used to centrally deploy the CMS software and tools that provide most of necessary CMSSW and grid environment for the user level physics analysis and data operation.

The CMSSW is deployed via Tier-2 software distribution across the whole USCMS tier-2 (and some tier-3 sites). In general only standard release of CMSSW will be deployed. The analysis and test based on pre-release will not be supported unless the specific request is made or the deployment of the software is available under the standard procedure.

To make a desktop machine similar to the tier-2 interactive analysis machine, for example uaf-1.t2.ucsd.edu, the codefs.t2.ucsd.edu:/code/osgcode needs to be mounted to the local directory /code/osgcode

CMSSW Environment

Access to CMSSW repository
      export CMS_PATH=/code/osgcode/cmssoft/cms        
      export SCRAM_ARCH=slc4_ia32_gcc345        
      source ${CMS_PATH}/cmsset_default.sh       
       or         
      setenv CMS_PATH /code/osgcode/cmssoft/cms        
      setenv SCRAM_ARCH slc4_ia32_gcc345        
      source ${CMS_PATH}/cmsset_default.csh 

Create CMSSW project area and set up environment

       cd your-work-directory        
       scramv1 project CMSSW CMSSW_1_6_12        
       eval `scramv1 runtime -(c)sh` 

If you don't like waiting for your code to compile, try out compiling in parallel on our 8ways:

 scramv1 b -j 8 

Grid Environment and Tools

Make sure you have .globus and .glite directories in the home directory. In the .glite, there is a file, vomses, needs to be there. You can get one from /code/osgcode/ucsdt2/etc.

  • Setup Glite

Before initiating the glite environment, please make sure no other grid environment exists, especially by checking no VDT environment is sourced (the VDT environment is set up with "source /setup.(c)sh").

To setup the glite environment,

       source /code/osgcode/ucsdt2/gLite/etc/profile.d/grid_env.(c)sh 

The glite environment should allow you to get the proxy and proper role in order to run your grid jobs

       voms-proxy-init -valid 120:00 --voms cms:/cms/uscms/Role=cmsuser 

As an aside, in case you need to install gLite client at your home institution, i.e. anywhere outside the login environment of the Tier-2 at UCSD, you can do so following the instructions at FkwInstallgLite .

  • Setup Condor and VDT

On the uaf machines, the condor environment is already in the PATH. Combining the glite and condor environments, you can send grid jobs (e.g. crab jobs) via condor_g.

If VDT is chosen to bring the grid environment to your analysis instead of glite in the uaf machines,

       source /date/tmp/vdt/setup.(c)sh  

Never mix VDT with glite environment.

  • Setup CRAB

There are primarily two submission methods to send crab jobs, condor_g and glitecoll, which determines how crab is set up and used.

     1. setup CMSSW environment as described above      
     2. setup glite or condor environment as described above      
     3. source /code/osgcode/ucsdt2/Crab/etc/crab.(c)sh 

To check which crab version is actually used by "ls -l /code/osgcode/ucsdt2/Crab/etc/crab.(c)sh"

To publish the datasets to the DBS (here is an example of local DBS deployed at UCSD), in the [USER] section of crab.cfg, following configuration needs to be added

      publish_data = 1        
      publish_data_name = "Njet_test1"       
      dbs_url_for_publication = http://ming.ucsd.edu:8080/DBS/servlet/DBSServlet 

To run analysis to the local DBS (other than the global CMS DBS), in the [USER] section of crab.cfg, following configuration needs to be added

      dbs_url = http://ming.ucsd.edu:8080/DBS/servlet/DBSServlet 

  • CRAB Server

A crab server is deployed at UCSD, which you can use for your crab submission. Following is the configuration in the [CRAB] section of crab.cfg file to specify the crab server and scheduler.

      server_name = ucsd 
      scheduler = glitecoll         

Late binding based Crabserver can be used as defined here Essentially:

      server_name = ucsd 
      scheduler =  glidein    

Job Monitoring

Locally submitted jobs to the T2 Condor batch system can be monitored using:

Jobs submitted to the Grid via Crabserver can be found at:

Note: In some cases one need to have the grid certificate loaded into the browser

Moving data to UCSD

We encourage anybody to make data replication requests via the PhEDEx? pages. If you make a request, James Letts and fkw receive an email. One of them will approve the request as long as there is disk space at the Tier-2 available. When they approve it, you receive an email back acknowledging the approved request.

To keep track of all the data at the UCSD Tier-2, we have developed a simple accounting system. For this to work, you need to pick an account you want to charge your request to. This is done by adding the following to the comment field when making the PhEDEx? request:

|| acc = ucsb || 

The above would charge the request to the UCSB account. An account is an arbitrary string. It might be easiest if you simply pick one of the accounts that already exist in the accounting system.

Absolute path in the HDFS (hadoop-distributed file system) system at UCSD

To move personal data to the storage via SRM, the endpoint of UCSD HDFS is srm://bsrm-1.t2.ucsd.edu:8443/srm/v2/server?SFN=<path to the destination file>

The interactive login nodes at UCSD allow you to do an ls on the directories in hdfs for both the official as well as user data:

#for official data: ls /hadoop/cms/phedex/store 
#for private user data: ls /hadoop/cms/store/user  
 

MC Production and Local Scope DBS

Revision 19 - 2009/09/07 - Main.FkW

Line: 1 to 1
 

General Support

Line: 58 to 58
 The glite environment should allow you to get the proxy and proper role in order to run your grid jobs
       voms-proxy-init -valid 120:00 --voms cms:/cms/uscms/Role=cmsuser 
Added:
>
>
As an aside, in case you need to install gLite client at your home institution, i.e. anywhere outside the login environment of the Tier-2 at UCSD, you can do so following the instructions at FkwInstallgLite .
 
  • Setup Condor and VDT

On the uaf machines, the condor environment is already in the PATH. Combining the glite and condor environments, you can send grid jobs (e.g. crab jobs) via condor_g.

Line: 145 to 148
 -- HaifengPi - 02 Sep 2008

-- SanjayPadhi - 2009/03/08

Added:
>
>
-- FkW - 2009/09/07

Revision 18 - 2009/08/25 - Main.SanjayPadhi

Line: 1 to 1
 

General Support

We have two listservs for support, one for announcements from the Tier-2 admins to the users, and one for complaints and questions from the users to the admins. Every user of the Tier-2 should subscribe to the announcement listserv.

Changed:
<
<
*Announcements from admins to users: cmst2 at physics dot ucsd dot edu*
The archive for this list is here: cmst2 archive
>
>
Announcements from admins to users: cmst2 at physics dot ucsd dot edu*
*The archive for this list is here: cmst2 archive
  Questions and complaints from users to admins: t2support at physics dot ucsd dot edu
Line: 92 to 91
 

Job Monitoring

Locally submitted jobs to the T2 Condor batch system can be monitored using:

Changed:
<
<
>
>
 
Changed:
<
<
Jobs submitted to the Grid via glideinWMS based Crabserver can be found at:
>
>
Jobs submitted to the Grid via Crabserver can be found at:
 
Changed:
<
<
Note: One need to have the grid certificate loaded into the browser
>
>
Note: In some cases one need to have the grid certificate loaded into the browser
 

Moving data to UCSD

We encourage anybody to make data replication requests via the PhEDEx? pages. If you make a request, James Letts and fkw receive an email. One of them will approve the request as long as there is disk space at the Tier-2 available. When they approve it, you receive an email back acknowledging the approved request.

Revision 17 - 2009/08/22 - Main.FkW

Line: 1 to 1
 

General Support

We have two listservs for support, one for announcements from the Tier-2 admins to the users, and one for complaints and questions from the users to the admins. Every user of the Tier-2 should subscribe to the announcement listserv.

Changed:
<
<
Announcements from admins to users: cmst2 at physics dot ucsd dot edu
>
>
*Announcements from admins to users: cmst2 at physics dot ucsd dot edu*
The archive for this list is here: cmst2 archive
  Questions and complaints from users to admins: t2support at physics dot ucsd dot edu

Revision 16 - 2009/08/11 - Main.SanjayPadhi

Line: 1 to 1
 

General Support

Line: 94 to 94
 

Jobs submitted to the Grid via glideinWMS based Crabserver can be found at:

Changed:
<
<
>
>
  Note: One need to have the grid certificate loaded into the browser

Revision 15 - 2009/04/29 - Main.FkW

Line: 1 to 1
 

General Support

We have two listservs for support, one for announcements from the Tier-2 admins to the users, and one for complaints and questions from the users to the admins. Every user of the Tier-2 should subscribe to the announcement listserv.

Changed:
<
<
Announcements from admins to users: t2user at physics dot ucsd dot edu
>
>
Announcements from admins to users: cmst2 at physics dot ucsd dot edu
  Questions and complaints from users to admins: t2support at physics dot ucsd dot edu

Revision 14 - 2009/03/16 - Main.HaifengPi

Line: 1 to 1
 

General Support

Revision 13 - 2009/03/09 - Main.HaifengPi

Line: 1 to 1
 

General Support

Line: 123 to 123
  The portal allows you to run small- and mid-scale production based on the ProdAgent and GlideinWMS systems with full detector simulation and reconstruction. The MC production will run on USCMS Tier-2 and a few Tier-3 sites. The output will be stored in the UCSD storage system and published in the local DBS deployed at UCSD.
Changed:
<
<
The data discovery of the local DBS is http://ming.ucsd.edu/data_discovery. To access the datasets published at this local DBS by crab, the crab configuration needs to refer to the local DBS interface, http://ming.ucsd.edu:8080/DBS/servlet/DBSServlet.
>
>
The data discovery of the local DBS is http://ming.ucsd.edu/data_discovery. The instance to be published is "dbs_2009". To access the datasets published at this local DBS by crab, the crab configuration needs to refer to the local DBS interface, http://ming.ucsd.edu:8080/DBS1/servlet/DBSServlet.
 
  • Local DBS

Revision 12 - 2009/03/08 - Main.SanjayPadhi

Line: 1 to 1
 

General Support

Line: 88 to 88
 Late binding based Crabserver can be used as defined here Essentially:
      server_name = ucsd 
      scheduler =  glidein    
Changed:
<
<

JobMon? at UCSD

>
>

Job Monitoring

  Locally submitted jobs to the T2 Condor batch system can be monitored using:

Revision 11 - 2009/03/08 - Main.SanjayPadhi

Line: 1 to 1
 

General Support

Line: 82 to 82
 
  • CRAB Server
Changed:
<
<
A crab server is deployed at UCSD, which you can use for your crab submission. Following is the configuration in the [CRAB] section of crab.cfg file to specify the crab server and scheduler. Currently another scheduler "glidein" is under testing and will be available soon.
>
>
A crab server is deployed at UCSD, which you can use for your crab submission. Following is the configuration in the [CRAB] section of crab.cfg file to specify the crab server and scheduler.
 
      server_name = ucsd 
      scheduler = glitecoll       
Added:
>
>
Late binding based Crabserver can be used as defined here Essentially:
      server_name = ucsd 
      scheduler =  glidein    

JobMon? at UCSD

Locally submitted jobs to the T2 Condor batch system can be monitored using:

Jobs submitted to the Grid via glideinWMS based Crabserver can be found at:

Note: One need to have the grid certificate loaded into the browser

 

Moving data to UCSD

We encourage anybody to make data replication requests via the PhEDEx? pages. If you make a request, James Letts and fkw receive an email. One of them will approve the request as long as there is disk space at the Tier-2 available. When they approve it, you receive an email back acknowledging the approved request.
Line: 129 to 142
 
      
  

-- HaifengPi - 02 Sep 2008

Added:
>
>
-- SanjayPadhi - 2009/03/08

Revision 10 - 2009/03/03 - Main.HaifengPi

Line: 1 to 1
 

General Support

Line: 21 to 21
 To share directories on the uaf between multiple people in a group, we define groups and use ACLs. If you need this functionality, do the following:
  • Request a group from t2support
  • Once you have a group, you need the following commands to make a directory and define it as group writeable.
Changed:
<
<
mkdir bla
getfacl bla
setfacl -R -m g:cms1:rwx bla
setfacl -R -d -m g:cms1:rwx
getfacl bla
>
>
mkdir bla
getfacl bla
setfacl -R -m g:cms1:rwx bla
setfacl -R -d -m g:cms1:rwx bla
getfacl bla
 
Changed:
<
<
This sets the default for all files in this directory, and does so recursively. Only the person who wns the file or directory can execute the command on it.
>
>
This sets the default for all files in this directory, and does so recursively. Only the person who wns the file or directory can execute the command on it.
  Send email to t2support if you have problems.
Line: 87 to 80
 To run analysis to the local DBS (other than the global CMS DBS), in the [USER] section of crab.cfg, following configuration needs to be added
      dbs_url = http://ming.ucsd.edu:8080/DBS/servlet/DBSServlet 
Added:
>
>
  • CRAB Server

A crab server is deployed at UCSD, which you can use for your crab submission. Following is the configuration in the [CRAB] section of crab.cfg file to specify the crab server and scheduler. Currently another scheduler "glidein" is under testing and will be available soon.

      server_name = ucsd 
      scheduler = glitecoll       
 

Moving data to UCSD

We encourage anybody to make data replication requests via the PhEDEx? pages. If you make a request, James Letts and fkw receive an email. One of them will approve the request as long as there is disk space at the Tier-2 available. When they approve it, you receive an email back acknowledging the approved request.

Revision 9 - 2009/01/30 - Main.FkW

Line: 1 to 1
 

General Support

Line: 17 to 17
  We support 1TB of space in /store/user for every person from UCSB, UCR, UCSD who is in CMS.
Added:
>
>

dedicated groups on uaf

To share directories on the uaf between multiple people in a group, we define groups and use ACLs. If you need this functionality, do the following:
  • Request a group from t2support
  • Once you have a group, you need the following commands to make a directory and define it as group writeable.
mkdir bla
getfacl bla
setfacl -R -m g:cms1:rwx bla
setfacl -R -d -m g:cms1:rwx bla
getfacl bla

This sets the default for all files in this directory, and does so recursively. Only the person who owns the file or directory can execute the command on it.

Send email to t2support if you have problems.

 

Software Deployment

codefs.t2.ucsd.edu is used to centrally deploy the CMS software and tools that provide most of necessary CMSSW and grid environment for the user level physics analysis and data operation.

Revision 8 - 2008/12/19 - Main.HaifengPi

Line: 1 to 1
 

General Support

Line: 32 to 32
 
       cd your-work-directory        
       scramv1 project CMSSW CMSSW_1_6_12        
       eval `scramv1 runtime -(c)sh` 

If you don't like waiting for your code to compile, try out compiling in parallel on our 8ways:

Changed:
<
<
 scramv1 b -j 8
>
>
 scramv1 b -j 8 
 

Grid Environment and Tools

Line: 73 to 71
 
      dbs_url = http://ming.ucsd.edu:8080/DBS/servlet/DBSServlet 

Moving data to UCSD

Changed:
<
<
We encourage anybody to make data replication requests via the PhEDEx? pages. If you make a request, James Letts and fkw receive an email. One of them will approve the request as long as there is disk space at the Tier-2 available. When they approve it, you receive an email back acknowledging the approved request.

To keep track of all the data at the UCSD Tier-2, we have developed a simple accounting system. For this to work, you need to pick an account you want to charge your request to. This is done by adding the following to the comment field when making the PhEDEx? request:

|| acc = ucsb ||
>
>
We encourage anybody to make data replication requests via the PhEDEx? pages. If you make a request, James Letts and fkw receive an email. One of them will approve the request as long as there is disk space at the Tier-2 available. When they approve it, you receive an email back acknowledging the approved request.
 
Changed:
<
<
The above would charge the request to the UCSB account. An account is an arbitrary string. It might be easiest if you simply pick one of the accounts that already exist in the accounting system.
>
>
To keep track of all the data at the UCSD Tier-2, we have developed a simple accounting system. For this to work, you need to pick an account you want to charge your request to. This is done by adding the following to the comment field when making the PhEDEx? request:
|| acc = ucsb || 

The above would charge the request to the UCSB account. An account is an arbitrary string. It might be easiest if you simply pick one of the accounts that already exist in the accounting system.

 

Absolute path in the dcache system at UCSD

The interactive login nodes at UCSD allow you to do an ls on the directories in dCache for both the official as well as user data:

Changed:
<
<
#for official data:
ls /pnfs/t2.ucsd.edu/data3/cms/phedex/store
#for private user data:
ls /pnfs/t2.ucsd.edu/data4/cms/store/user
>
>
#for official data: ls /pnfs/t2.ucsd.edu/data3/cms/phedex/store
#for private user data: ls /pnfs/t2.ucsd.edu/data4/cms/store/user
 
Changed:
<
<
To get the host and port for srm and dcap, please check out http://dcache.ucsd.edu
This page has a wealth of monitoring information in addition to listing the host and port for srm, dccp, etc.
>
>
To get the host and port for srm and dcap, please check out http://dcache.ucsd.edu
This page has a wealth of monitoring information in addition to listing the host and port for srm, dccp, etc.
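Once you have looked up the dcap door there, an interactive copy out of dCache looks roughly like the sketch below (the host and port are placeholders; take the real ones from http://dcache.ucsd.edu):

 # hypothetical dcap host:port; look up the real door at http://dcache.ucsd.edu
 dccp dcap://dcap-door.t2.ucsd.edu:22125/pnfs/t2.ucsd.edu/data4/cms/store/user/yourname/myfile.root /tmp/myfile.root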
 

MC Production and Local Scope DBS

Line: 114 to 100
 
  • Local DBS

The local DBS is implemented to support data publication via Crab or ProdAgent. For Crab, publication to or reading from the local DBS can be set up by adding the following to the [USER] section of crab.cfg:

Changed:
<
<
        dbs_url_for_publication = http://ming.ucsd.edu:8080/DBS/servlet/DBSServlet       
        dbs_url = http://ming.ucsd.edu:8080/DBS/servlet/DBSServlet 
>
>
For the old version of DBS (DBS_1_0_8):
        dbs_url_for_publication = http://ming.ucsd.edu:8080/DBS/servlet/DBSServlet       
        dbs_url = http://ming.ucsd.edu:8080/DBS/servlet/DBSServlet 
 

For the new version of DBS (DBS_2_0_4):

        dbs_url_for_publication = http://ming.ucsd.edu:8080/DBS1/servlet/DBSServlet       
        dbs_url = http://ming.ucsd.edu:8080/DBS1/servlet/DBSServlet 
  The data discovery of the local DBS is
      http://ming.ucsd.edu/data_discovery 
Added:
>
>
To look at the datasets bookkept by DBS, choose "instance" in the selection menu: "dbs_2008" corresponds to the old DBS and "dbs_2009" to the new one.
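Putting the pieces together, a minimal sketch of the [USER] settings for publishing to and reading back from the new local DBS would look like this (the dataset name is a placeholder):

       publish_data = 1
       publish_data_name = "my_test_dataset"
       dbs_url_for_publication = http://ming.ucsd.edu:8080/DBS1/servlet/DBSServlet
       dbs_url = http://ming.ucsd.edu:8080/DBS1/servlet/DBSServlet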
      
  
 -- HaifengPi - 02 Sep 2008

Revision 72008/11/04 - Main.FkW

Line: 1 to 1
 

General Support

Line: 72 to 72
To run an analysis against the local DBS (rather than the global CMS DBS), the following configuration needs to be added in the [USER] section of crab.cfg:
      dbs_url = http://ming.ucsd.edu:8080/DBS/servlet/DBSServlet 
Changed:
<
<

MC Production and Data Management

>
>

Moving data to UCSD

We encourage anybody to make data replication requests via the PhEDEx? pages. If you make a request, James Letts and fkw receive an email. One of them will approve the request as long as there is disk space at the Tier-2 available. When they approve it, you receive an email back acknowledging the approved request.

To keep track of all the data at the UCSD Tier-2, we have developed a simple accounting system. For this to work, you need to pick an account you want to charge your request to. This is done by adding the following to the comment field when making the PhEDEx? request:

|| acc = ucsb ||

The above would charge the request to the UCSB account. An account is an arbitrary string. It might be easiest if you simply pick one of the accounts that already exist in the accounting system.

Absolute path in the dcache system at UCSD

The interactive login nodes at UCSD allow you to do an ls on the directories in dCache for both the official as well as user data:

#for official data:
ls /pnfs/t2.ucsd.edu/data3/cms/phedex/store
#for private user data:
ls /pnfs/t2.ucsd.edu/data4/cms/store/user

To get the host and port for srm and dcap, please check out http://dcache.ucsd.edu
This page has a wealth of monitoring information in addition to listing the host and port for srm, dccp, etc.

MC Production and Local Scope DBS

 
  • Production User Portal

Revision 62008/09/07 - Main.FkW

Line: 1 to 1
 
Added:
>
>

General Support

We have two listservs for support, one for announcements from the Tier-2 admins to the users, and one for complaints and questions from the users to the admins. Every user of the Tier-2 should subscribe to the announcement listserv.

Announcements from admins to users: t2user at physics dot ucsd dot edu

Questions and complaints from users to admins: t2support at physics dot ucsd dot edu

Login Platforms

The Tier-2 center supports multiple computers for interactive login. Those are called uaf-X.t2.ucsd.edu with X running from 1 to 6. The numbers 3, 4, 5, and 6 are modern 8-ways with loads of memory; 1 and 2 are older machines. I'd stay away from 1 if I were you.

To get login access, send email with your ssh key and hypernews account name to t2support. To get write access to dCache into your own /store/user area, send email with your hypernews account name and the output from "voms-proxy-info" to t2support.
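For reference, a minimal sketch of preparing that email (assuming you don't already have an ssh key pair):

 # create a key pair if needed; attach the public half to the email
 ssh-keygen -t rsa
 cat ~/.ssh/id_rsa.pub
 # include this output when requesting write access to /store/user
 voms-proxy-info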

We support 1TB of space in /store/user for every person from UCSB, UCR, UCSD who is in CMS.

 

Software Deployment

codefs.t2.ucsd.edu is used to centrally deploy the CMS software and tools that provide most of necessary CMSSW and grid environment for the user level physics analysis and data operation.

Line: 15 to 31
Create a CMSSW project area and set up the environment
       cd your-work-directory        
       scramv1 project CMSSW CMSSW_1_6_12        
       eval `scramv1 runtime -(c)sh` 
Added:
>
>
If you don't like waiting for your code to compile, try out compiling in parallel on our 8ways:
 scramv1 b -j 8
 

Grid Environment and Tools

Make sure you have .globus and .glite directories in your home directory. In .glite there is a file, vomses, that needs to be there. You can get one from /code/osgcode/ucsdt2/etc.

Revision 52008/09/03 - Main.HaifengPi

Line: 1 to 1
 

Software Deployment

Line: 7 to 7
The CMSSW is deployed via the Tier-2 software distribution across the whole USCMS Tier-2 (and some Tier-3 sites). In general, only standard releases of CMSSW will be deployed. Analyses and tests based on pre-releases will not be supported unless a specific request is made or the deployment of the software is available under the standard procedure.
Changed:
<
<
To make a desktop machine similar to the tier-2 interactive analysis machine, for example uaf-1.t2.ucsd.edu, the codefs.t2.ucsd.edu:/code/osgcode needs to be mounted to the local directory /code/osgcode.
>
>
To make a desktop machine similar to the tier-2 interactive analysis machine, for example uaf-1.t2.ucsd.edu, the codefs.t2.ucsd.edu:/code/osgcode needs to be mounted to the local directory /code/osgcode
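A minimal sketch of that mount (assuming the export is reachable from your machine; exact mount options are site-dependent):

 # as root on the desktop machine
 mkdir -p /code/osgcode
 mount -t nfs codefs.t2.ucsd.edu:/code/osgcode /code/osgcode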
 

CMSSW Environment

Changed:
<
<
Access to CMSSW repository
       export CMS_PATH=/code/osgcode/cmssoft/cms        export SCRAM_ARCH=slc4_ia32_gcc345        source ${CMS_PATH}/cmsset_default.sh        or         setenv CMS_PATH /code/osgcode/cmssoft/cms        setenv SCRAM_ARCH slc4_ia32_gcc345        source ${CMS_PATH}/cmsset_default.csh 
>
>
Access to CMSSW repository
      export CMS_PATH=/code/osgcode/cmssoft/cms        
      export SCRAM_ARCH=slc4_ia32_gcc345        
      source ${CMS_PATH}/cmsset_default.sh       
       or         
      setenv CMS_PATH /code/osgcode/cmssoft/cms        
      setenv SCRAM_ARCH slc4_ia32_gcc345        
      source ${CMS_PATH}/cmsset_default.csh 
Create a CMSSW project area and set up the environment
Changed:
<
<
       cd your-work-directory        scramv1 project CMSSW CMSSW_1_6_12        eval `scramv1 runtime -(c)sh` 
>
>
       cd your-work-directory        
       scramv1 project CMSSW CMSSW_1_6_12        
       eval `scramv1 runtime -(c)sh` 
 

Grid Environment and Tools

Changed:
<
<
Make sure you have .globus and .glite directories in the home directory. In the .glite, there is a file, vomses, needs to be there. You can get one from /code/osgcode/ucsdt2/etc
>
>
Make sure you have .globus and .glite directories in the home directory. In the .glite, there is a file, vomses, needs to be there. You can get one from /code/osgcode/ucsdt2/etc.
 
  • Setup Glite
Line: 41 to 41
 
  • Setup CRAB

There are primarily two submission methods to send crab jobs, condor_g and glitecoll, which determines how crab is set up and used.

Changed:
<
<
     1. setup CMSSW environment as described above      2. setup glite or condor environment as described above      3. source /code/osgcode/ucsdt2/Crab/etc/crab.(c)sh 
>
>
     1. setup CMSSW environment as described above      
     2. setup glite or condor environment as described above      
     3. source /code/osgcode/ucsdt2/Crab/etc/crab.(c)sh 
To check which crab version is actually used, run "ls -l /code/osgcode/ucsdt2/Crab/etc/crab.(c)sh"

To publish the datasets to the DBS (here is an example of the local DBS deployed at UCSD), the following configuration needs to be added in the [USER] section of crab.cfg:

Changed:
<
<
      publish_data = 1        publish_data_name = "Njet_test1"       dbs_url_for_publication = http://ming.ucsd.edu:8080/DBS/servlet/DBSServlet 
>
>
      publish_data = 1        
      publish_data_name = "Njet_test1"       
      dbs_url_for_publication = http://ming.ucsd.edu:8080/DBS/servlet/DBSServlet 
  To run analysis to the local DBS (other than the global CMS DBS), in the [USER] section of crab.cfg, following configuration needs to be added
      dbs_url = http://ming.ucsd.edu:8080/DBS/servlet/DBSServlet 
Line: 66 to 66
 
  • Local DBS

The local DBS is implemented to support data publication via Crab or ProdAgent? . For Crab, the publication or access the local DBS can be set up by adding following to the [USER] section in the crab.cfg

Changed:
<
<
        dbs_url_for_publication = http://ming.ucsd.edu:8080/DBS/servlet/DBSServlet       dbs_url = http://ming.ucsd.edu:8080/DBS/servlet/DBSServlet 
>
>
        dbs_url_for_publication = http://ming.ucsd.edu:8080/DBS/servlet/DBSServlet       
        dbs_url = http://ming.ucsd.edu:8080/DBS/servlet/DBSServlet 
  The data discovery of the local DBS is
      http://ming.ucsd.edu/data_discovery 

Revision 42008/09/03 - Main.TerrenceMartin

Line: 1 to 1
 

Software Deployment

Revision 32008/09/03 - Main.HaifengPi

Line: 1 to 1
 
Deleted:
<
<
 

Software Deployment

codefs.t2.ucsd.edu is used to centrally deploy the CMS software and tools that provide most of necessary CMSSW and grid environment for the user level physics analysis and data operation.

Line: 8 to 7
  The CMSSW is deployed via Tier-2 software distribution across the whole USCMS tier-2 (and some tier-3 sites). In general only standard release of CMSSW will be deployed. The analysis and test based on pre-release will not be supported unless the specific request is made or the deployment of the software is available under the standard procedure.
Changed:
<
<
To make a desktop machine similar to the tier-2 interactive analysis machines, for example uaf-1.t2.ucsd.edu, the codefs.t2.ucsd.edu:/code/osgcode needs to be mounted to the local directory /code/osgcode.
>
>
To make a desktop machine similar to the tier-2 interactive analysis machine, for example uaf-1.t2.ucsd.edu, the codefs.t2.ucsd.edu:/code/osgcode needs to be mounted to the local directory /code/osgcode.
 

CMSSW Environment

Changed:
<
<
Access to CMSSW repository
       export CMS_PATH=/code/osgcode/cmssoft/cms
       export SCRAM_ARCH=slc4_ia32_gcc345
       source ${CMS_PATH}/cmsset_default.sh
       or 
       setenv CMS_PATH /code/osgcode/cmssoft/cms
       setenv SCRAM_ARCH slc4_ia32_gcc345
       source ${CMS_PATH}/cmsset_default.csh
>
>
Access to CMSSW repository
       export CMS_PATH=/code/osgcode/cmssoft/cms        export SCRAM_ARCH=slc4_ia32_gcc345        source ${CMS_PATH}/cmsset_default.sh        or         setenv CMS_PATH /code/osgcode/cmssoft/cms        setenv SCRAM_ARCH slc4_ia32_gcc345        source ${CMS_PATH}/cmsset_default.csh 
Create a CMSSW project area and set up the environment
Changed:
<
<
       cd your-work-directory
       scramv1 project CMSSW CMSSW_1_6_12
       eval `scramv1 runtime -(c)sh`
>
>
       cd your-work-directory        scramv1 project CMSSW CMSSW_1_6_12        eval `scramv1 runtime -(c)sh` 
 

Grid Environment and Tools

Make sure you have .globus and .glite directories in the home directory. In the .glite, there is a file, vomses, needs to be there. You can get one from /code/osgcode/ucsdt2/etc

Changed:
<
<
  • Setup Glite
>
>
  • Setup Glite
  Before initiating the glite environment, please make sure no other grid environment exists, especially by checking no VDT environment is sourced (the VDT environment is set up with "source /setup.(c)sh").

To setup the glite environment,

Changed:
<
<
       source /code/osgcode/ucsdt2/gLite/etc/profile.d/grid_env.(c)sh
>
>
       source /code/osgcode/ucsdt2/gLite/etc/profile.d/grid_env.(c)sh 
  The glite environment should allow you to get the proxy and proper role in order to run your grid jobs
Changed:
<
<
       voms-proxy-init -valid 120:00 --voms cms:/cms/uscms/Role=cmsuser
>
>
       voms-proxy-init -valid 120:00 --voms cms:/cms/uscms/Role=cmsuser 
 
Changed:
<
<
  • Setup Condor and VDT
>
>
  • Setup Condor and VDT
On the uaf machines, the condor environment is already in the PATH. Combining the glite and condor environments, you can send grid jobs (e.g. crab jobs) via condor_g.

If VDT is chosen to bring the grid environment to your analysis instead of glite in the uaf machines,

Changed:
<
<
       source /date/tmp/vdt/setup.(c)sh 
>
>
       source /date/tmp/vdt/setup.(c)sh  
Never mix the VDT and glite environments.
Changed:
<
<
  • Setup CRAB
>
>
  • Setup CRAB
  There are primarily two submission methods to send crab jobs, condor_g and glitecoll, which determines how crab is set up and used.
Changed:
<
<
     1. setup CMSSW environment as described above
     2. setup glite or condor environment as described above
     3. source /code/osgcode/ucsdt2/Crab/etc/crab.(c)sh
>
>
     1. setup CMSSW environment as described above      2. setup glite or condor environment as described above      3. source /code/osgcode/ucsdt2/Crab/etc/crab.(c)sh 
  To check which crab version is actually used by "ls -l /code/osgcode/ucsdt2/Crab/etc/crab.(c)sh"

To publish the datasets to the DBS (here is an example of local DBS deployed at UCSD), in the [USER] section of crab.cfg, following configuration needs to be added

Changed:
<
<
      publish_data = 1 
      publish_data_name = "Njet_test1"
      dbs_url_for_publication = http://ming.ucsd.edu:8080/DBS/servlet/DBSServlet
>
>
      publish_data = 1        publish_data_name = "Njet_test1"       dbs_url_for_publication = http://ming.ucsd.edu:8080/DBS/servlet/DBSServlet 
  To run analysis to the local DBS (other than the global CMS DBS), in the [USER] section of crab.cfg, following configuration needs to be added
Changed:
<
<
      dbs_url = http://ming.ucsd.edu:8080/DBS/servlet/DBSServlet
>
>
      dbs_url = http://ming.ucsd.edu:8080/DBS/servlet/DBSServlet 
 

MC Production and Data Management

Changed:
<
<
  • Production User Portal
>
>
  • Production User Portal
For running user-level production, the URL to the portal is https://yuan.ucsd.edu/production_request. The x509 certificate needs to be stored in the web browser to enable access. Normally you import the PKCS#12 file of the certificate into the browser. If you can't find the PKCS#12 file from when you first received the x509 certificate, you can use the following command to create one (named MyCert.p12) from your x509 certificate and key:
Changed:
<
<
     openssl pkcs12 -export -in usercert.pem -inkey userkey.pem -out MyCert.p12 -name "my x509 cert"
>
>
     openssl pkcs12 -export -in usercert.pem -inkey userkey.pem -out MyCert.p12 -name "my x509 cert" 
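To sanity-check the resulting file before importing it into the browser (a minimal sketch), you can list its contents:

      # prompts for the password and prints the certificate and key summary
      openssl pkcs12 -info -in MyCert.p12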
The portal allows you to run small- and mid-scale production based on the ProdAgent and GlideinWMS systems with full detector simulation and reconstruction. The MC production will be run on USCMS Tier-2 and a few Tier-3 sites. The output will be stored in the UCSD storage system and published in the local DBS deployed at UCSD.

The data discovery of the local DBS is http://ming.ucsd.edu/data_discovery. To access the datasets published at this local DBS by crab, the crab configuration needs to refer to the local DBS interface, http://ming.ucsd.edu:8080/DBS/servlet/DBSServlet.

Changed:
<
<
  • Local DBS
>
>
  • Local DBS
  The local DBS is implemented to support data publication via Crab or ProdAgent? . For Crab, the publication or access the local DBS can be set up by adding following to the [USER] section in the crab.cfg
Changed:
<
<
 
      dbs_url_for_publication = http://ming.ucsd.edu:8080/DBS/servlet/DBSServlet
      dbs_url = http://ming.ucsd.edu:8080/DBS/servlet/DBSServlet
>
>
        dbs_url_for_publication = http://ming.ucsd.edu:8080/DBS/servlet/DBSServlet       dbs_url = http://ming.ucsd.edu:8080/DBS/servlet/DBSServlet 
  The data discovery of the local DBS is
Changed:
<
<
      http://ming.ucsd.edu/data_discovery
>
>
      http://ming.ucsd.edu/data_discovery 
  -- HaifengPi - 02 Sep 2008

Revision 22008/09/02 - Main.HaifengPi

Line: 1 to 1
Changed:
<
<

Software Deployment

>
>

Software Deployment

  codefs.t2.ucsd.edu is used to centrally deploy the CMS software and tools that provide most of necessary CMSSW and grid environment for the user level physics analysis and data operation.
Line: 30 to 33
  Make sure you have .globus and .glite directories in the home directory. In the .glite, there is a file, vomses, needs to be there. You can get one from /code/osgcode/ucsdt2/etc
Changed:
<
<
  • Setup Glite
>
>
  • Setup Glite
  Before initiating the glite environment, please make sure no other grid environment exists, especially by checking no VDT environment is sourced (the VDT environment is set up with "source /setup.(c)sh").
Line: 44 to 47
  voms-proxy-init -valid 120:00 --voms cms:/cms/uscms/Role=cmsuser
Changed:
<
<
  • Setup Condor
>
>
  • Setup Condor and VDT

In uaf machines, the condor environment is already in the PATH of uaf machines. Combining glite and condor environment, you can send grid jobs (e.g. crab jobs) via condor_g.

If VDT is chosen to bring the grid environment to your analysis instead of glite in the uaf machines,

       source /date/tmp/vdt/setup.(c)sh 

Never mix VDT with glite environment.

 
Changed:
<
<
  • Setup CRAB
>
>
  • Setup CRAB
 
Changed:
<
<
There are primarily two submission methods to send crab jobs, condor_g and glitecoll, which determins how crab environment is set up.
>
>
There are primarily two submission methods to send crab jobs, condor_g and glitecoll, which determines how crab is set up and used.
 
     1. setup CMSSW environment as described above
     2. setup glite or condor environment as described above

Line: 57 to 69
  To check which crab version is actually used by "ls -l /code/osgcode/ucsdt2/Crab/etc/crab.(c)sh"
Changed:
<
<
To publish the datasets to local DBS,
>
>
To publish the datasets to the DBS (here is an example of local DBS deployed at UCSD), in the [USER] section of crab.cfg, following configuration needs to be added
      publish_data = 1 
      publish_data_name = "Njet_test1"
      dbs_url_for_publication = http://ming.ucsd.edu:8080/DBS/servlet/DBSServlet

To run analysis to the local DBS (other than the global CMS DBS), in the [USER] section of crab.cfg, following configuration needs to be added

      dbs_url = http://ming.ucsd.edu:8080/DBS/servlet/DBSServlet
 

MC Production and Data Management

Changed:
<
<
  • Production User Portal
>
>
  • Production User Portal
 
Changed:
<
<
  • Local DBS
>
>
For running user-level production, the URL to the portal is https://yuan.ucsd.edu/production_request. The x509 certificate needs to be stored in the web browser to enable the access. Normally you import PKCS#12 file of the certificate to the browser. If you can't find the PKCS#12 file when you first received the x509 certificate, you can use following command to create one (named MyCert? .p12) based on your x509 certficate the key

     openssl pkcs12 -export -in usercert.pem -inkey userkey.pem -out MyCert.p12 -name "my x509 cert"

The portal allows to run small- and mid-scale production based on ProdAgent? and GlideinWMS system with full detector simulation and reconstruction. The MC production will be run on USCMS Tier-2 and a few Tier-3 sites. The output will be stored at UCSD storage system and published at local DBS deployed at UCSD.

The data discovery of the local DBS is http://ming.ucsd.edu/data_discovery. To access the datasets published at this local DBS by crab, the crab configuration needs to refer to the local DBS interface, http://ming.ucsd.edu:8080/DBS/servlet/DBSServlet.

  • Local DBS

The local DBS is implemented to support data publication via Crab or ProdAgent? . For Crab, the publication or access the local DBS can be set up by adding following to the [USER] section in the crab.cfg

 
      dbs_url_for_publication = http://ming.ucsd.edu:8080/DBS/servlet/DBSServlet
      dbs_url = http://ming.ucsd.edu:8080/DBS/servlet/DBSServlet

The data discovery of the local DBS is

      http://ming.ucsd.edu/data_discovery
  -- HaifengPi - 02 Sep 2008

Revision 12008/09/02 - Main.HaifengPi

Line: 1 to 1
Added:
>
>

Software Deployment

codefs.t2.ucsd.edu is used to centrally deploy the CMS software and tools that provide most of necessary CMSSW and grid environment for the user level physics analysis and data operation.

The CMSSW is deployed via Tier-2 software distribution across the whole USCMS tier-2 (and some tier-3 sites). In general only standard release of CMSSW will be deployed. The analysis and test based on pre-release will not be supported unless the specific request is made or the deployment of the software is available under the standard procedure.

To make a desktop machine similar to the tier-2 interactive analysis machines, for example uaf-1.t2.ucsd.edu, the codefs.t2.ucsd.edu:/code/osgcode needs to be mounted to the local directory /code/osgcode.

CMSSW Environment

Access to CMSSW repository
       export CMS_PATH=/code/osgcode/cmssoft/cms
       export SCRAM_ARCH=slc4_ia32_gcc345
       source ${CMS_PATH}/cmsset_default.sh
       or 
       setenv CMS_PATH /code/osgcode/cmssoft/cms
       setenv SCRAM_ARCH slc4_ia32_gcc345
       source ${CMS_PATH}/cmsset_default.csh

Create a CMSSW project area and set up the environment

       cd your-work-directory
       scramv1 project CMSSW CMSSW_1_6_12
       eval `scramv1 runtime -(c)sh`

Grid Environment and Tools

Make sure you have .globus and .glite directories in the home directory. In the .glite, there is a file, vomses, needs to be there. You can get one from /code/osgcode/ucsdt2/etc

  • Setup Glite

Before initiating the glite environment, please make sure no other grid environment exists, especially by checking no VDT environment is sourced (the VDT environment is set up with "source /setup.(c)sh").

To setup the glite environment,

       source /code/osgcode/ucsdt2/gLite/etc/profile.d/grid_env.(c)sh

The glite environment should allow you to get the proxy and proper role in order to run your grid jobs

       voms-proxy-init -valid 120:00 --voms cms:/cms/uscms/Role=cmsuser

  • Setup Condor

  • Setup CRAB

There are primarily two submission methods to send crab jobs, condor_g and glitecoll, which determins how crab environment is set up.

     1. setup CMSSW environment as described above
     2. setup glite or condor environment as described above
     3. source /code/osgcode/ucsdt2/Crab/etc/crab.(c)sh

To check which crab version is actually used by "ls -l /code/osgcode/ucsdt2/Crab/etc/crab.(c)sh"

To publish the datasets to local DBS,

MC Production and Data Management

  • Production User Portal

  • Local DBS

-- HaifengPi - 02 Sep 2008

 