Difference: StoragePerformanceTest (1 vs. 4)

Revision 4 - 2011/02/08 - Main.HaifengPi

Line: 1 to 1
 
META TOPICPARENT name="WebHome"

Scalability Test on BeStMan and BeStMan2

Line: 6 to 6
  The work of 2010 was documented in Measurement of BeStMan Scalability
Changed:
<
<
In the last 3 months (from Nov. 2010 to Jan. 2011), the testing work continued for the new release of BeStMan. The scalability tool glideTester, which proved efficient and easy to operate, was used primarily. The lcg clients ran at UCSD. The newly deployed BeStMan instance at FNAL achieved performance similar to that measured in 2010. The following shows the preliminary results for the latest BeStMan release (2.0.5):
>
>
In the last 3 months (from Nov. 2010 to Jan. 2011), the testing work continued for the new release of BeStMan. The scalability tool glideTester, which proved efficient and easy to operate, was used primarily. The lcg clients ran at UCSD. The newly deployed BeStMan instance at FNAL achieved performance similar to that measured in 2010. The file system is the local drive of the BeStMan server. The following shows the preliminary results, obtained in Jan. 2011, for the latest BeStMan release (2.0.5):
 
| Number of lcg clients | Server Processing Rate (Hz) |
| 100 | 78.3 |
Line: 16 to 16
 
| 900 | 73.2 |
| 1100 | 70.1 |
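Under a closed-loop model (each client issues its next request as soon as the previous one completes), the table above implies a per-request latency of roughly clients / rate. A minimal back-of-the-envelope sketch (the model is an assumption, not part of the measurement):

```python
# Estimate the implied per-request latency from the measured aggregate rate,
# assuming a closed-loop model: latency ~= concurrent_clients / rate_hz.
measurements = [
    (100, 78.3),
    (900, 73.2),
    (1100, 70.1),
]

def implied_latency_s(clients, rate_hz):
    """Seconds per request if `clients` requests are always in flight."""
    return clients / rate_hz

for clients, rate in measurements:
    print(f"{clients:5d} clients @ {rate:.1f} Hz "
          f"-> ~{implied_latency_s(clients, rate):.1f} s/request")
```

For example, at 1100 clients the server still completes ~70 requests/s, so each client waits roughly 16 s per request: the aggregate rate degrades slowly while the per-client latency grows roughly linearly with concurrency.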
Changed:
<
<
With a single BeStMan server mounted on a standard HDFS system, we expect the server processing rate for a small metadata operation (e.g. listing all the items in directories with ~10s of files or subdirectories) to be ~50 Hz. For large requests involving thousands of files or subdirectories, the time to finish a request is significantly longer and the number of requests the server can handle simultaneously is significantly smaller.
>
>
With a single BeStMan server mounted on a standard HDFS system, we measured the server processing rate for a small metadata operation (e.g. listing all the items in directories with ~10s of files or subdirectories) to be ~40-50 Hz. For large requests involving thousands of files or subdirectories, the time to finish a request is significantly longer and the number of requests the server can handle simultaneously is significantly smaller.
  From the scalability point of view, BeStMan is unlikely to be a bottleneck of the storage system of a typical LHC Tier-2 center. Further work will focus on optimizing the configuration and better understanding the dependency between the server processing rate and the concurrency of clients.
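The dependency between processing rate and client concurrency can be probed with a simple sweep. The sketch below is hypothetical: it uses a Python thread pool and a stub request in place of glideTester and real lcg clients, but it shows the shape of such a driver:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def stub_request():
    """Stand-in for one lcg client request (e.g. an srm-ls against BeStMan)."""
    time.sleep(0.01)  # pretend the server needs ~10 ms per request

def measure_rate(n_clients, requests_per_client=5):
    """Run n_clients concurrent workers; return completed requests per second."""
    start = time.time()
    with ThreadPoolExecutor(max_workers=n_clients) as pool:
        futures = [pool.submit(stub_request)
                   for _ in range(n_clients * requests_per_client)]
        for f in futures:
            f.result()  # propagate any client-side failure
    elapsed = time.time() - start
    return n_clients * requests_per_client / elapsed

for n in (10, 50, 100):
    print(f"{n:4d} concurrent clients -> {measure_rate(n):.1f} requests/s")
```

Replacing the stub with an actual client invocation and sweeping the concurrency, as in the table above, yields the rate-vs-concurrency curve.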

Scalability of HDFS and System I/O architecture

Revision 3 - 2011/02/08 - Main.HaifengPi

Line: 1 to 1
 
META TOPICPARENT name="WebHome"

Scalability Test on BeStMan and BeStMan2

Line: 6 to 6
  The work of 2010 was documented in Measurement of BeStMan Scalability
Changed:
<
<
In the last 3 months, the testing work continued for the new release of BeStMan. The scalability tool glideTester, which proved efficient and easy to operate, was used primarily. The lcg clients ran at UCSD. The newly deployed BeStMan instance at FNAL achieved performance similar to that measured in 2010.
>
>
In the last 3 months (from Nov. 2010 to Jan. 2011), the testing work continued for the new release of BeStMan. The scalability tool glideTester, which proved efficient and easy to operate, was used primarily. The lcg clients ran at UCSD. The newly deployed BeStMan instance at FNAL achieved performance similar to that measured in 2010. The following shows the preliminary results for the latest BeStMan release (2.0.5):

| Number of lcg clients | Server Processing Rate (Hz) |
| 100 | 78.3 |
| 300 | 77.5 |
| 500 | 79.2 |
| 700 | 73.9 |
| 900 | 73.2 |
| 1100 | 70.1 |
  With a single BeStMan server mounted on a standard HDFS system, we expect the server processing rate for a small metadata operation (e.g. listing all the items in directories with ~10s of files or subdirectories) to be ~50 Hz. For large requests involving thousands of files or subdirectories, the time to finish a request is significantly longer and the number of requests the server can handle simultaneously is significantly smaller.

Revision 2 - 2011/02/07 - Main.HaifengPi

Line: 1 to 1
 
META TOPICPARENT name="WebHome"
Added:
>
>

Scalability Test on BeStMan and BeStMan2

Since spring 2010 the study of SRM (BeStMan and BeStMan2) has been one of the major efforts in testing the performance of the storage system. We measured and studied the scalability of BeStMan with the latest software release, which is based on the Java Jetty container. Compared to the previous release based on the Globus container, the new BeStMan shows very high scalability. Various efforts have been made to improve the configuration and deployment procedure. BeStMan has been demonstrated to integrate seamlessly with a Hadoop-based storage element.

The work of 2010 was documented in Measurement of BeStMan Scalability

In the last 3 months, the testing work continued for the new release of BeStMan. The scalability tool glideTester, which proved efficient and easy to operate, was used primarily. The lcg clients ran at UCSD. The newly deployed BeStMan instance at FNAL achieved performance similar to that measured in 2010.

With a single BeStMan server mounted on a standard HDFS system, we expect the server processing rate for a small metadata operation (e.g. listing all the items in directories with ~10s of files or subdirectories) to be ~50 Hz. For large requests involving thousands of files or subdirectories, the time to finish a request is significantly longer and the number of requests the server can handle simultaneously is significantly smaller.

From the scalability point of view, BeStMan is unlikely to be a bottleneck of the storage system of a typical LHC Tier-2 center. Further work will focus on optimizing the configuration and better understanding the dependency between the server processing rate and the concurrency of clients.

Scalability of HDFS and System I/O architecture

This work is ongoing. We expect to reach some milestones by mid-2011.

1. HDFS Namenode I/O scalability

2. Architecture of DataNode

3. Impact of FUSE

4. Performance w.r.t. block size

5. Latency of HDFS
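Items 3 and 5 largely amount to timing metadata operations through the filesystem interface. A minimal sketch of such a timing loop, run here against a local temporary directory as a stand-in for a FUSE-mounted HDFS path (the actual measurement would point `path` at the mount):

```python
import os
import tempfile
import time

def time_stat_ops(path, n_files=200):
    """Create n_files empty files under path, then time os.stat over them.

    Returns stat operations per second; on a FUSE-mounted HDFS path this
    would exercise the NameNode metadata round trip instead of local disk.
    """
    names = []
    for i in range(n_files):
        name = os.path.join(path, f"f{i}")
        open(name, "w").close()
        names.append(name)
    start = time.perf_counter()
    for name in names:
        os.stat(name)
    elapsed = time.perf_counter() - start
    return n_files / elapsed

with tempfile.TemporaryDirectory() as d:
    print(f"~{time_stat_ops(d):.0f} stat ops/s on {d}")
```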

 -- HaifengPi - 2010/02/18

Revision 1 - 2010/02/18 - Main.HaifengPi

Line: 1 to 1
Added:
>
>
META TOPICPARENT name="WebHome"
-- HaifengPi - 2010/02/18
 