Difference: WorkerNodeReinstall (1 vs. 3)

Revision 3 - 2012/07/25 - Main.BruceThayre

Line: 1 to 1
 
META TOPICPARENT name="WebHome"
Overview:
Line: 11 to 11
 
  1. Run patches

Replace any faulty hardware (usually a bad disk)

Changed:
<
<
1u & 2u: ( each [] is a physical hard drive in the node, assuming you are facing the node from the front )

The numbers correspond to stickers that may or may not be on the front of the hard drives

1 & 2u:
>
>
( each [] is a physical hard drive in the node, assuming you are facing the node from the front )

The numbers correspond to stickers that may or may not be on the front of the hard drives

1u:
 
Changed:
<
<
[ 0 /dev/sda/ ] [ 1 /dev/sdb ] [ 2 /dev/sdc ] [ 3 /dev/sdd ]

[ 4 /dev/sde/ ] [ 5 /dev/sdf ] [ 6 /dev/sdg ] [ 4 /dev/sdh ]
>
>
[ 0 /dev/sda ] [ 1 /dev/sdb ] [ 2 /dev/sdc ] [ 3 /dev/sdd ]

2u:

[ 4 /dev/sde ] [ 5 /dev/sdf ] [ 6 /dev/sdg ] [ 7 /dev/sdh ]

[ 0 /dev/sda ] [ 1 /dev/sdb ] [ 2 /dev/sdc ] [ 3 /dev/sdd ]

  3u:
Changed:
<
<

[ /dev/sde ] [ /dev/sdd ] [ /dev/sdc ] [ /dev/sdb ]

[ /dev/sdi ] [ /dev/sdh ] [ /dev/sdg ] [ /dev/sdf ]

[ /dev/sdm ] [ /dev/sdl ] [ /dev/sdk ] [ /dev/sdj ]
>
>

[ /dev/sdm ] [ /dev/sdj ] [ /dev/sdg ] [ /dev/sdd ]

[ /dev/sdl ] [ /dev/sdi ] [ /dev/sdf ] [ /dev/sdc ]

[ /dev/sdk ] [ /dev/sdh ] [ /dev/sde ] [ /dev/sdb ]
 
Changed:
<
<
GOTCHA: For the 3u systems an SSD is used for the system install and swap. They also hook the drives up to the SATA controller channels incrementally from right to left. This is opposite of our 1 and 2u nodes.
>
>
GOTCHA: For the 3u systems an SSD is used for the system install and swap; the platter drives accessible at the front start at sdb.
  For these systems you can follow the general instructions. Replace sda if it has failed/is failing; SSD-installed systems need to be pulled from the rack and opened up to replace the SSD, whereas sda is available via the front panel on the platter-drive-only nodes.

Revision 2 - 2012/07/19 - Main.BruceThayre

Line: 1 to 1
 
META TOPICPARENT name="WebHome"
Overview:
Line: 6 to 6
 
  1. Shutdown node and replace any faulty hardware
  2. Reboot with rocks 5 boot disk
  3. Don't enter any options, and allow the disk to boot in its default mode
Changed:
<
<
  1. The boot disk will send a DHCP request with the client's MAC address
  2. The head node will look up the MAC address, and if the MAC is found, any present configuration is used, otherwise the current default config is used
>
>
  1. If the worker node's NIC MAC address is found in the head node database, the existing configuration for that node is used for the reinstallation.
 
  1. Node installs
  2. Run patches

Replace any faulty hardware (usually a bad disk)

Line: 29 to 28
  As of now (7/19/2012) there are two types of worker node reinstalls:
Changed:
<
<
  1. Worker nodes with non-raid partition scheme (new)
  2. Workernodes with soft-raid partition scheme (old)
>
>
  1. Worker nodes with non-raid partition scheme (new)(good)
  2. Worker nodes with soft-raid partition scheme (old)(bad)
  1. Non-raid partition scheme. An example of the newest partition scheme used (1u & 2u node) :
Line: 64 to 63
  This is an example of output for cabinet-4-4-12 with the raid setup; that's bad.
Changed:
<
<
  1. Delete the node partition table using the "rocks remove host partition $CABINET-#-#-#" command
>
>
  1. Delete the node partition table using the "rocks remove host partition $CABINET-#-#-#" command
 
Changed:
<
<
  1. Verify that the partition table is gone using "rocks list host partition $CABINET-#-#-#" command
>
>
  1. Verify that the partition table is gone using "rocks list host partition $CABINET-#-#-#" command
 

Reboot with rocks boot disk

Rocks 5 Net install or kernel disk is fine.

Line: 76 to 75
 
  • Node installs

Run patches

Added:
>
>
There are shell scripts located in an NFS folder that we use to patch each worker node before it can be securely used in our cluster. These scripts should be run ( at the very least update-packages.sh ) ASAP after installing a worker node.

The scripts can be found in /share/apps/setup/osg-rpm/. There is also a README file indicating the order to run the scripts in. As of 7/19/2012 the order to run is:

# For nodes with 4 1TB - 2TB disks
/share/apps/setup/osg-rpm/update-packages.sh;/share/apps/setup/osg-rpm/install-packages.sh;/share/apps/setup/osg-rpm/setup-compute.sh


# For nodes with 1 SSD disk configured ie. non storage nodes
/share/apps/setup/osg-rpm/update-packages.sh;/share/apps/setup/osg-rpm/install-packages.sh;/share/apps/setup/osg-rpm/part-format.sh 1;/share/apps/setup/osg-rpm/setup-compute.sh

 -- BruceThayre - 2012/07/19

Revision 1 - 2012/07/19 - Main.BruceThayre

Line: 1 to 1
Added:
>
>
META TOPICPARENT name="WebHome"
Overview:

When a head node is fully configured, reinstalling a worker node is a simple process with a few gotchas depending on the type of node being reinstalled. Without changing the node's configuration, the process is straightforward:

  1. Shutdown node and replace any faulty hardware
  2. Reboot with rocks 5 boot disk
  3. Don't enter any options, and allow the disk to boot in its default mode
  4. The boot disk will send a DHCP request with the client's MAC address
  5. The head node will look up the MAC address, and if the MAC is found, any present configuration is used, otherwise the current default config is used
  6. Node installs
  7. Run patches
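Steps 4-5 can be sanity-checked from the head node before rebooting: Rocks keeps each node's NIC MAC in its database, and rocks list host interface prints it. A minimal dry-run sketch (the node name cabinet-4-4-0 is just the example used elsewhere on this page; nothing is executed here, the command is only echoed for review):

```shell
#!/bin/sh
# Print the head-node command that shows whether Rocks already knows a
# worker node's NIC MAC address (step 5 above). Dry run: echoed, not run.
show_mac_lookup() {
    node="$1"
    echo "rocks list host interface ${node}"
}

show_mac_lookup cabinet-4-4-0
```

If the MAC is listed, the existing configuration is reused on reinstall; if not, the node picks up the current default config.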

Replace any faulty hardware (usually a bad disk)

1u & 2u: ( each [] is a physical hard drive in the node, assuming you are facing the node from the front )

The numbers correspond to stickers that may or may not be on the front of the hard drives

1 & 2u:

[ 0 /dev/sda ] [ 1 /dev/sdb ] [ 2 /dev/sdc ] [ 3 /dev/sdd ]

[ 4 /dev/sde ] [ 5 /dev/sdf ] [ 6 /dev/sdg ] [ 7 /dev/sdh ]

3u:


[ /dev/sde ] [ /dev/sdd ] [ /dev/sdc ] [ /dev/sdb ]

[ /dev/sdi ] [ /dev/sdh ] [ /dev/sdg ] [ /dev/sdf ]

[ /dev/sdm ] [ /dev/sdl ] [ /dev/sdk ] [ /dev/sdj ]

GOTCHA: For the 3u systems an SSD is used for the system install and swap. They also hook the drives up to the SATA controller channels incrementally from right to left. This is opposite of our 1 and 2u nodes.

For these systems you can follow the general instructions. Replace sda if it has failed/is failing; SSD-installed systems need to be pulled from the rack and opened up to replace the SSD, whereas sda is available via the front panel on the platter-drive-only nodes.
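For the 1u & 2u layout above, the sticker number maps directly onto the device letter (sticker 0 is sda, sticker 7 is sdh). A small POSIX-sh helper, assuming the controller has not renumbered the drives:

```shell
#!/bin/sh
# Map a front-panel sticker number (0-7) on a 1u/2u node to its device,
# following the layout diagram above: sticker 0 = /dev/sda ... 7 = /dev/sdh.
sticker_to_dev() {
    n="$1"
    # Pick the (n+1)-th letter of a..h.
    echo "/dev/sd$(echo abcdefgh | cut -c $((n + 1)))"
}

sticker_to_dev 0   # /dev/sda
sticker_to_dev 7   # /dev/sdh
```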

Alter node config

If the existing configuration is good, skip this section.

As of now (7/19/2012) there are two types of worker node reinstalls:

  1. Worker nodes with non-raid partition scheme (new)
  2. Worker nodes with soft-raid partition scheme (old)

1. Non-raid partition scheme. An example of the newest partition scheme used (1u & 2u node) :

~# ssh cabinet-4-4-0 'fdisk -l'
Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sda1 * 1 1275 10241406 83 Linux
/dev/sda2 1276 2550 10241437+ 82 Linux swap / Solaris
/dev/sda3 2551 121601 956277157+ 83 Linux

Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdb1 * 1 1275 10241406 82 Linux swap / Solaris
/dev/sdb2 1276 121601 966518595 83 Linux

Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdc1 1 1275 10241406 82 Linux swap / Solaris
/dev/sdc2 1276 121601 966518595 83 Linux

Disk /dev/sdd: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdd1 1 1275 10241406 82 Linux swap / Solaris
/dev/sdd2 1276 121601 966518595 83 Linux

In this configuration, every disk except the system disk will have two partitions: one swap and one ext3 (data). The system disk will have 3: one ext3 (install), one swap, and another ext3 (data). Unless something has been configured incorrectly, /dev/sda will always be your system disk.

An example of the newest partition scheme (2u+ node, and middle disks omitted to save space):

~# ssh cabinet-0-0-4 'fdisk -l'

Disk /dev/sda: 80.0 GB, 80026361856 bytes
255 heads, 63 sectors/track, 9729 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sda1 * 1 3824 30716248+ 83 Linux
/dev/sda2 3825 9729 47431912+ 82 Linux swap / Solaris


Disk /dev/sdb: 2000.3 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdb1 * 1 243201 1953512001 83 Linux

Disk /dev/sdc: 2000.3 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdc1 * 1 243201 1953512001 83 Linux

... Disk /dev/sdm: 2000.3 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdm1 * 1 243201 1953512001 83 Linux

Note that in the example above /dev/sda is 80GB; this is our aforementioned SSD.

2. Soft-raid partition scheme. An example of the old partition scheme (only 1u):

~# ssh cabinet-4-4-12 'fdisk -l'
Disk /dev/md0 doesn't contain a valid partition table
Disk /dev/md1 doesn't contain a valid partition table

Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sda1 * 1 1275 10241406 83 Linux
/dev/sda2 1276 4462 25599577+ fd Linux raid autodetect
/dev/sda3 4463 4717 2048287+ 82 Linux swap / Solaris
/dev/sda4 4718 121601 938870730 5 Extended
/dev/sda5 4718 121601 938870698+ fd Linux raid autodetect

Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdb1 * 1 3187 25599546 fd Linux raid autodetect
/dev/sdb2 3188 3442 2048287+ 82 Linux swap / Solaris
/dev/sdb3 3443 121601 949112167+ fd Linux raid autodetect

Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdc1 * 1 3187 25599546 fd Linux raid autodetect
/dev/sdc2 3188 3442 2048287+ 82 Linux swap / Solaris
/dev/sdc3 3443 121601 949112167+ fd Linux raid autodetect

Disk /dev/sdd: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdd1 * 1 3187 25599546 fd Linux raid autodetect
/dev/sdd2 3188 3442 2048287+ 82 Linux swap / Solaris
/dev/sdd3 3443 121601 949112167+ fd Linux raid autodetect

Disk /dev/md0: 104.8 GB, 104854716416 bytes
2 heads, 4 sectors/track, 25599296 cylinders
Units = cylinders of 8 * 512 = 4096 bytes


Disk /dev/md1: 3877.0 GB, 3877075681280 bytes
2 heads, 4 sectors/track, 946551680 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

In this configuration every disk except the install disk will have two soft-raid partitions (dcache and data), and one swap partition. The system disk will have the two soft-raid partitions, a swap partition, and one ext3 partition ( install ). Unless something has been configured incorrectly, /dev/sda will always be your system disk.
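The two schemes can be told apart mechanically: only the old soft-raid layout has "Linux raid autodetect" partitions in its fdisk output. A sketch of that check, fed a canned sample line from the listings above rather than a live ssh $NODE 'fdisk -l':

```shell
#!/bin/sh
# Classify a node's partition scheme from `fdisk -l` output: the old
# soft-raid layout is the only one with "Linux raid autodetect" entries.
detect_scheme() {
    if grep -q 'Linux raid autodetect'; then
        echo "old soft-raid scheme"
    else
        echo "non-raid scheme"
    fi
}

# Canned sample standing in for: ssh cabinet-4-4-12 'fdisk -l'
sample='/dev/sda2   1276   4462   25599577+  fd  Linux raid autodetect'
echo "$sample" | detect_scheme   # old soft-raid scheme
```

A node reporting the old scheme needs its partition config cleared on the head node, as described below.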

If the node is using the old partition scheme ( raid, and space saved for dcache, our old DFS ), the partition config will need to be changed on the headnode. Root privileges on the headnode are needed for this.

  1. Log in to the headnode
  2. Look at node partition table using the "rocks list host partition $CABINET-#-#-#" command
root@t2gw02 ~# rocks list host partition cabinet-4-4-12
DEVICE MOUNTPOINT START SIZE ID TYPE FLAGS FORMATFLAGS
sda1 / 32.3kB 10.5GB -- ext3 boot -----------
sda2 raid.sda2 10.5GB 26.2GB -- ext3 raid -----------
sda3 swap 36.7GB 2097MB -- linux-swap --------- -----------
sda4 ---------- 38.8GB 961GB -- ---------- --------- -----------
sda5 raid.sda5 38.8GB 961GB -- ---------- raid -----------
sdb1 raid.sdb1 32.3kB 26.2GB -- ---------- boot raid -----------
sdb2 swap 26.2GB 2097MB -- linux-swap --------- -----------
sdb3 raid.sdb3 28.3GB 972GB -- ---------- raid -----------
sdc1 raid.sdc1 32.3kB 26.2GB -- ---------- boot raid -----------
sdc2 swap 26.2GB 2097MB -- linux-swap --------- -----------
sdc3 raid.sdc3 28.3GB 972GB -- ---------- raid -----------
sdd1 raid.sdd1 32.3kB 26.2GB -- ---------- boot raid -----------
sdd2 swap 26.2GB 2097MB -- linux-swap --------- -----------
sdd3 raid.sdd3 28.3GB 972GB -- ---------- raid -----------

This is an example of output for cabinet-4-4-12 with the raid setup; that's bad.

  1. Delete the node partition table using the "rocks remove host partition $CABINET-#-#-#" command

  1. Verify that the partition table is gone using "rocks list host partition $CABINET-#-#-#" command
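The three head-node steps above, sketched as a dry run: the commands are printed for review rather than executed, since rocks remove host partition is destructive. The node name is a parameter; cabinet-4-4-12 is the example from this page.

```shell
#!/bin/sh
# Print the head-node command sequence for clearing a node's stored
# partition table. Run the real commands as root on the head node only
# after double-checking the node name.
node_partition_reset() {
    node="$1"
    echo "rocks list host partition ${node}"    # 1. inspect current table
    echo "rocks remove host partition ${node}"  # 2. delete it
    echo "rocks list host partition ${node}"    # 3. verify it is gone
}

node_partition_reset cabinet-4-4-12
```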

Reboot with rocks boot disk

Rocks 5 Net install or kernel disk is fine.

Don't enter any options, and allow the disk to boot in its default mode

  • The boot disk will send a DHCP request with the client's MAC address
  • The head node will look up the MAC address, and if the MAC is found, any present configuration is used, otherwise the current default config is used
  • Node installs

Run patches

-- BruceThayre - 2012/07/19

 