Benchmark Tests Using the CMS PhEDEx LoadTest Infrastructure


The LHC Open Network Environment

The objective of LHCONE is to provide a collection of access locations that are effectively entry points into a network that is private to the LHC T1/2/3 sites. LHCONE is not intended to replace the LHCOPN but rather to complement it. Up until now, T1-T2, T2-T2, and T3 data movements have been using the shared General Purpose Network infrastructure. LHCONE is a robust and scalable solution for a global system serving LHC’s T1, T2 and T3 sites’ needs and fits the new less-hierarchical computing models.


CMS would like to measure the effect (if any) of moving sites to the LHCONE network compared to current operations. To this end, we need measurable benchmarks of data transfer performance. Using the LoadTest infrastructure of PhEDEx, several benchmarks are easy to calculate from information stored in TMDB and available through the PhEDEx web interface or data service:

  • Maximum transfer rate over a time interval
  • Latency of transferring a complete "block" of files (seconds for n files)
  • Transfer success percentage ("transfer quality") over a time interval at a particular injection rate
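As a hedged illustration, the first and third benchmarks above could be computed from the PhEDEx data service's transfer-history data. The endpoint URL, query parameters, and JSON field names (`done_bytes`, `done_files`, `fail_files`) below are assumptions modelled on the data service conventions and should be verified against the data service documentation before use:

```python
# Sketch: compute benchmark numbers from PhEDEx transfer-history bins.
# The URL, query parameters and JSON field names are ASSUMPTIONS based on
# PhEDEx data service conventions -- verify them before relying on this.
import json
import urllib.request

BASE = "https://cmsweb.cern.ch/phedex/datasvc/json/debug/transferhistory"

def fetch_history(src, dst, starttime, endtime, binwidth=3600):
    """Fetch hourly transfer bins for one link (hypothetical field layout)."""
    url = ("%s?from=%s&to=%s&starttime=%d&endtime=%d&binwidth=%d"
           % (BASE, src, dst, starttime, endtime, binwidth))
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)
    # Assumed layout: phedex -> link[] -> transfer[] (one dict per time bin)
    return data["phedex"]["link"][0]["transfer"]

def benchmarks(bins, binwidth=3600):
    """Maximum per-bin rate (MB/s) and overall transfer quality (%)."""
    max_rate = max(b["done_bytes"] for b in bins) / binwidth / 1e6
    done = sum(b["done_files"] for b in bins)
    fail = sum(b["fail_files"] for b in bins)
    quality = 100.0 * done / (done + fail) if done + fail else 0.0
    return max_rate, quality
```

The same per-bin data also underpins the latency measurement described later: with the injection start time recorded, the bin in which the block completes bounds the total transfer time.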

To test the performance of a data transfer link between two SRM endpoints, it is best to perform the test under a consistent network load. For example, at a Tier-2 site there are Production data transfers to and from many possible Tier-1, Tier-2 and Tier-3 sites, Debug LoadTest data transfers, and production and user analysis job traffic both to and from the site. Production and other Debug PhEDEx data transfers are easy to deal with, since they can be suspended during the testing period. Job traffic, however, is impossible to control without shutting down the site, and is not measurable with the PhEDEx infrastructure or, usually, with any publicly available interface. Nevertheless, if the tests are run over a sufficiently long period (several hours), bursts in user or production traffic should still allow a good measurement of the maximum transfer rate.

List of CMS Tier-2 Sites in First Stage of LHCONE

A list of 13 sites which will be tested in the first phase is given in the table.

| *Site Name* | *Subscriptions* | *Injections* | *Activity (1h)* | *Transfer Queue* | *NREN/OPN (Gbps)* |
| T2_BE_IIHE | Debug Prod | Debug | Debug Prod | Debug Prod | 1.0 |
| T2_DE_DESY | Debug Prod | Debug | Debug Prod | Debug Prod | 4.0 |
| T2_DE_RWTH | Debug Prod | Debug | Debug Prod | Debug Prod | 10.0 |
| T2_ES_IFCA | Debug Prod | Debug | Debug Prod | Debug Prod | 2.0 |
| T2_FR_GRIF_LLR | Debug Prod | Debug | Debug Prod | Debug Prod | 5.0/10.0 |
| T2_IN_TIFR | | | | | 1.0 |
| T2_IT_Legnaro | Debug Prod | Debug | Debug Prod | Debug Prod | 2.0 |
| T2_IT_Pisa | Debug Prod | Debug | Debug Prod | Debug Prod | 2.0 |
| T2_RU_RRC_KI | Debug Prod | Debug | Debug Prod | Debug Prod | 1.0 |
| T2_UK_London_IC | Debug Prod | Debug | Debug Prod | Debug Prod | 10.0 |
| T2_US_MIT | Debug Prod | Debug | Debug Prod | Debug Prod | 10.0 |
| T2_US_Purdue | Debug Prod | Debug | Debug Prod | Debug Prod | 20.0 |
| T2_US_Wisconsin | Debug Prod | Debug | Debug Prod | Debug Prod | 10.0 |

Status of Data Transfer Links

  • The current LoadTest activity between these sites is given here.
  • Current DDT Commissioning status in the Production instance of PhEDEx between these sites can be found here. Note that Tier-2 links to and from T2_IN_TIFR and T2_FR_GRIF_IRFU are in general not commissioned.
    • Links to and from T2_FR_GRIF_IRFU cannot be commissioned at this time due to network bandwidth limitations. See the savannah ticket.
    • Commissioning activity for T2_IN_TIFR can be found here.
  • Current Debug link status between these sites can be found here. Links will show red if, for example, the agents are down.

Site Issues

  • T2_US_Wisconsin is dealing with storage element instabilities. 2011/05/25
  • T2_IN_TIFR is upgrading DPM. 2011/05/25
  • T2_US_Purdue has a huge (>10k files) transfer queue in the Production instance. 2011/05/25
  • T2_DE_RWTH: all transfers from the site fail with an AsyncWait error. 2011/05/26
  • T2_US_MIT: proxy expired; all transfers to the site fail. 2011/05/26

Detailed Procedures

All data transfer links should transfer a few files per day in the LoadTest. Check whether the link in question is actually working by looking at the Debug instance activity, filling in the appropriate "To" and "From" sites here. There is no point in testing a broken link, or a link that has more transfer errors than successes.

If there are no transfer attempts, then one will have to investigate why before benchmark testing. Common causes include sites or links which are down (check here), suspended subscriptions (check here), or stopped file injections (check here).

Suspend other transfers

In general, we would like a stable transfer environment to run the test. This means that other Debug transfers to and from the sites being tested are suspended, and any significant production transfers are not taking place. Using the PhEDEx web interface via the links above:

  • STOP all LoadTest file injections from the two sites, except the one between them in the direction you wish to test (with the exception of critical links such as to the Tier-1 sites).
  • SUSPEND all LoadTest subscriptions to the two sites, except the one between them in the direction you wish to test (with the exception of critical links such as from the Tier-1 sites).
  • CHECK if there are significant unfinished Production or Debug transfers to the two sites. A subscription is incomplete if the value under the "% Bytes" column is red and is not "100.0%". It's not really possible to suspend transfers once the files are in the transfer queue, so if there are significant Production (or Debug) transfers, consider doing this test later.

Inject LoadTest files

Next we inject files for the benchmark test, and record the testing information:

  • After things have quieted down and there is little residual PhEDEx traffic in or out of either site, inject a block of LoadTest files at the source site using the link above. For a transfer link fully over a 10Gbps network connection end-to-end, 1000 files is sufficient for a few hours of transfers. For sites connected to 1Gbps links or less, a few hundred files should be sufficient. Note that the DDT commissioning metric, which all links passed at some point, was to transfer a total of 421GiB in less than 24 hours, or approximately 168 files of 2.5GiB size.
  • Record in the results twiki the number of files injected over the link and the start time (UNIX time since epoch is most useful).
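As a rough sizing check for the injection, the expected drain time is simple arithmetic. The 2.5 GiB file size matches the DDT commissioning figure above; the 50% link efficiency is an assumed illustrative value, not a measured one:

```python
# Estimate how long a LoadTest injection keeps a link busy.
# FILE_GIB matches the DDT commissioning file size quoted in the text;
# the default efficiency is an ASSUMPTION for illustration only.
FILE_GIB = 2.5

def hours_to_drain(n_files, rate_gbps, efficiency=0.5):
    """Hours to transfer n_files at a fraction of the nominal link speed."""
    total_bits = n_files * FILE_GIB * 8 * 2**30
    seconds = total_bits / (rate_gbps * 1e9 * efficiency)
    return seconds / 3600.0
```

For example, 1000 files over a 10 Gbps path at 50% efficiency drain in roughly 1.2 hours; at lower achieved rates this stretches to the "few hours" quoted above.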

The time to complete the transfer of the files will give a latency number, and the maximum sustained transfer rate over one hour will give the maximum transfer rate benchmark. There is a Python script under development to extract these values from the PhEDEx data service. Record the results in the results twiki page.
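A minimal sketch of the latency extraction, assuming hourly history bins with `timebin` (UNIX start of bin) and `done_files` fields; these field names are assumptions about the data-service output, not confirmed ones:

```python
# Sketch: block-completion latency from hourly transfer-history bins.
# Field names "timebin" and "done_files" are ASSUMPTIONS -- check them
# against the actual PhEDEx data service output.
def block_latency(bins, n_injected, start_time, binwidth=3600):
    """Seconds from injection until the cumulative file count reaches
    n_injected, or None if the block has not completed yet."""
    done = 0
    for b in sorted(bins, key=lambda b: b["timebin"]):
        done += b["done_files"]
        if done >= n_injected:
            # Completion lies somewhere inside this bin; use its end
            # as an upper bound on the latency.
            return b["timebin"] + binwidth - start_time
    return None
```

This is why recording the injection start time as UNIX seconds (as asked above) is useful: it can be subtracted directly from the bin timestamps.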

Cleaning up

After the benchmark test, return the sites to the status quo ante.

  • Un-suspend any Production or Debug transfers at both sites.
  • Start all injections from the sites.
  • On the tested link, set the injection rate to 0.5MB/s for FTS-based tests to be done later.


A summary table of the test results is given here.

Daniele's log is here.


Are OSG Tier-2 sites still using srm copy clients instead of FTS?

  • 7 out of 10 sites are using only FTS for file transfers in the Debug instance of PhEDEx: T2_BR_UERJ, T2_US_Caltech, T2_US_MIT, T2_US_Nebraska, T2_US_UCSD, T2_US_Vanderbilt, T2_US_Wisconsin.
  • 3 sites are not using the FTS backend of the PhEDEx download agent for some transfers:
    • T2_BR_SPRACE and T2_US_Purdue: for transfers from T2_US sites only
    • T2_US_Florida: Site is running 7 download agents. Uses FTS backend for transfers from most Tier-1 sites, but uses SRM backend with srm-copy client for OSG sites, Vienna, and the Tier-1 in Taiwan. They use SRM backend with lcg-cp for all other source sites including the German Tier-1. Somewhat complicated by personnel issues.

-- JamesLetts - 2011/06/01

Topic revision: r17 - 2011/06/02 - 16:17:11 - JamesLetts