Test Results
Sites marked with an asterisk were connected to the LHCONE network when the test took place.
Our script calculates the maximum rate by taking a moving hourly average over twelve 5-minute bins. For reference, 10 Gbps corresponds to approximately 1192 MiB/s and 1 Gbps to approximately 119 MiB/s. The binning may make the maximum transfer rate appear higher than the line speed in some cases, due to rounding.
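A minimal sketch of the calculation described above, assuming per-bin transfer volumes in MiB. The function names and bin layout are illustrative only, not taken from the actual script:

```python
# Sketch: maximum 1-hour moving-average rate from 5-minute transfer-volume
# bins (twelve consecutive bins span one hour), plus a unit conversion.

def max_hourly_rate(bin_mib, window=12):
    """Maximum moving-average rate in MiB/s over `window` 5-minute bins.

    `bin_mib` is assumed to hold the MiB transferred in each bin.
    """
    if len(bin_mib) < window:
        return 0.0
    best = 0.0
    for i in range(len(bin_mib) - window + 1):
        hourly_volume = sum(bin_mib[i:i + window])  # MiB moved in one hour
        best = max(best, hourly_volume / 3600.0)    # divide by seconds/hour
    return best

def mib_s_to_gbps(rate):
    """Convert MiB/s to Gbps (1 MiB = 2**20 bytes, 1 Gb = 10**9 bits)."""
    return rate * 2**20 * 8 / 1e9

# Example: twelve bins of 300 MiB each -> 3600 MiB in one hour -> 1.0 MiB/s
```

Note that the moving window is why a single hot hour can dominate the reported maximum even when the overall transfer was slower.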
| Source Site | Destination Site | Files Injected | Start Time (Unix epoch) | Max. Rate in 1h (MiB/s) | Latency (h) | Quality (%) | Comment |
| T2_US_Wisconsin | T2_US_MIT | 1000 | 1305328040 | 859.1 | 4.9 | 97.5 | |
| T2_US_Wisconsin | T2_US_MIT | 2000 | 1305344878 | 1455.1 | 7.2 | 96.8 | |
| T2_US_Wisconsin | T2_US_MIT | 2000 | 1305396650 | 1379.4 | 12.8 | 88.8 | Did not suspend other debug transfers. |
| T2_US_Wisconsin | T2_US_MIT | 2000 | 1305440850 | 1455.1 | 6.5 | 96.3 | Did not suspend other debug transfers. |
| T2_US_Wisconsin | T2_US_MIT | 2000 | 1305614014 | 1050.4 | 217.4 | 62.3 | Did not suspend other debug transfers. |
| T2_UK_London_IC | T2_US_Purdue | 1000 | 1305655527 | 526.9 | 287.9 | 94.9 | Did not suspend significant production transfers. Purdue site went down. |
| T2_FR_GRIF_LLR | T2_DE_RWTH | 1000 | 1305921458 | 654.4 | 3.0 | 96.7 | |
| T2_DE_RWTH | T2_FR_GRIF_LLR | 1000 | 1305937471 | 640.8 | 6.6 | 100.0 | |
| T2_DE_RWTH | T2_US_MIT | 1000 | 1306351647 | 601.6 | 90.5 | 88.3 | After 989 files, all transfers from RWTH fail with [GENERAL_FAILURE] AsyncWait and it appears the proxy at MIT has expired. |
| T2_IT_Pisa | T2_IT_Legnaro | 500 | 1306447372 | 303.3 | | 95.5 | Halted after 80% completed. |
| T2_FR_GRIF_LLR | T2_UK_London_IC | 1000 | 1306758701 | 319.9 | 7.0 | 84.0 | |
| T2_ES_IFCA | T2_IT_Pisa | 1000 | 1306762972 | 116.5 | 19.5 | 97.6 | |
| T2_US_MIT | T2_IT_Legnaro | 1000 | 1306840432 | 136.7 | 9.4 | 97.9 | |
| T2_IT_Pisa | T2_US_Wisconsin | 1000 | 1306843798 | 161.2 | 28.8 | 60.4 | |
| T2_IT_Legnaro | T2_US_MIT | 1000 | 1306876968 | 179.4 | 5.0 | 100.0 | |
| T2_US_MIT | T2_DE_RWTH | 1000 | 1306898705 | 438.7 | 3.4 | 92.4 | |
| T2_UK_London_IC | T2_FR_GRIF_LLR | 1000 | 1306919225 | 93.9 | 18.6 | 99.0 | |
| T2_IT_Pisa | T2_ES_IFCA | 1000 | 1306923212 | 80.6 | 20.1 | 98.6 | |
| T2_DE_RWTH | T2_US_MIT | 1000 | 1306915439 | 263.3 | 5.6 | 98.7 | |
| T2_US_MIT | T2_DE_DESY | 1000 | 1306958130 | 89.5 | 10.5 | 99.9 | |
| T2_DE_DESY | T2_US_MIT | 1000 | 1306996242 | 277.8 | 6.7 | 100.0 | |
| T2_US_Wisconsin | T2_DE_RWTH | 1000 | 1306999306 | 472.9 | 6.5 | 93.9 | |
| T2_US_Purdue | T2_IT_Legnaro | 1000 | 1306998314 | 78.2 | 21.7 | 76.4 | |
| T2_RU_RRC_KI | T2_IT_Pisa | 500 | 1306996437 | 50.5 | 17.9 | 96.7 | |
| T2_DE_RWTH | T2_US_Wisconsin | 1000 | 1307032216 | 205.0 | 4.3 | 94.5 | |
| T2_US_Purdue | T2_US_MIT | 2000 | 1307057388 | 330.7 | 6.5 | 98.2 | |
| T2_US_MIT | T2_IT_Pisa | 1000 | 1307115239 | | | | No files transferred in first 12h. |
| T2_DE_RWTH | T2_UK_London_IC | 1000 | 1307114126 | 248.6 | 6.8 | 83.8 | |
| T2_ES_IFCA | T2_DE_DESY | 1000 | 1307113378 | 85.7 | | 100.0 | 83% complete. |
| T2_US_Wisconsin | T2_IT_Legnaro | 1000 | 1307125203 | 138.8 | | 98.7 | 97% complete. |
Observations:
There are three fundamental limitations on PhEDEx transfer rates, any of which may determine the maximum hourly transfer rate:
- FTS channel configuration (number of simultaneous file transfers, streams per transfer, queue length)
- PhEDEx download agent configuration (number of simultaneous file transfers)
- Network bandwidth (1-20 Gbps)
When the results above are ordered by destination site, the limiting factor often becomes evident.
For example, MIT can pull data at wire speed from Wisconsin but not from Europe, where network bandwidth is likely the limiting factor.
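The per-destination comparison suggested above can be sketched as follows. The rows are a hand-copied subset of the table, and `best_rate_by_destination` is a hypothetical helper, not part of any existing tooling:

```python
# Sketch: group test results by destination site and report the best
# observed hourly rate per destination, to make the limiting factor
# (FTS config, agent config, or network) easier to spot.
from collections import defaultdict

def best_rate_by_destination(rows):
    """Return {destination: best max-hourly rate in MiB/s} over all rows."""
    best = defaultdict(float)
    for src, dst, rate in rows:
        best[dst] = max(best[dst], rate)
    return dict(best)

# Illustrative subset of the table above: (source, destination, MiB/s)
rows = [
    ("T2_US_Wisconsin", "T2_US_MIT", 1455.1),
    ("T2_DE_RWTH", "T2_US_MIT", 601.6),
    ("T2_DE_DESY", "T2_US_MIT", 277.8),
]
```

On this subset, MIT's best rate comes from Wisconsin, consistent with the observation that transatlantic links, not the MIT site itself, cap the European rates.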
Action Item List:
- Developed a PhEDEx data service-based tool to get the metric numbers, especially the transfer latency. The script can be found here.
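For illustration, a hedged sketch of what such a data-service query might look like. The `transferhistory` endpoint name, its parameters, and the JSON layout below are assumptions based on general PhEDEx data service conventions, not a copy of the referenced script:

```python
# Hypothetical sketch: pull per-link transfer history from the PhEDEx data
# service and extract rates. Endpoint, parameters, and payload layout are
# assumptions; check the datasvc documentation before relying on them.
import json
import urllib.request

DATASVC = "https://cmsweb.cern.ch/phedex/datasvc/json/prod/transferhistory"

def parse_rates(payload):
    """Extract (from, to, rate in MiB/s) tuples from a datasvc-style payload.

    Assumes the payload nests links under payload["phedex"]["link"] and that
    each transfer entry carries a 'rate' field in bytes/s.
    """
    out = []
    for link in payload.get("phedex", {}).get("link", []):
        for t in link.get("transfer", []):
            out.append((link["from"], link["to"], float(t["rate"]) / 2**20))
    return out

def fetch(from_site, to_site, starttime, endtime):
    """Query one link's history in 5-minute bins (network call, untested)."""
    url = (f"{DATASVC}?from={from_site}&to={to_site}"
           f"&starttime={starttime}&endtime={endtime}&binwidth=300")
    with urllib.request.urlopen(url) as resp:
        return parse_rates(json.load(resp))
```

The parsing step is kept separate from the network call so it can be exercised on a saved payload.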
-- JamesLetts - 2011/05/13

Topic revision: r35 - 2011/06/04 - 03:05:19 - JamesLetts