That is not how you restart the mount point on uaf.
The restart is to unmount and then remount hadoop. The command used is in /etc/rc.local.
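If you want to check which command rc.local actually runs, something like the following should pull out the relevant line (this assumes the mount command there mentions hadoop, as the one quoted further down does):

grep -i hadoop /etc/rc.local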
What you are trying to do there is start hadoop as a data node, and it fails because the UAF machine is not configured to accept that.
On nodes where you can mount hadoop with a script (i.e. the worker nodes) the mount is done by /etc/init.d/mount-hdfs.
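On those worker nodes a restart would just be the usual init-script calls, assuming the script supports the standard stop/start actions (I have not checked which actions it actually implements):

/etc/init.d/mount-hdfs stop
/etc/init.d/mount-hdfs start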
I got it back by first trying a lazy unmount, since umount and umount -f did not work...
umount -l /hadoop
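As an aside, if umount refuses because the mount is busy, fuser can show which processes are holding it open before you fall back on the lazy unmount (same /hadoop mount point as above):

fuser -vm /hadoop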
Then I remounted the FS with the command from /etc/rc.local:
/usr/bin/hdfs -o server=proxy-1.t2.ucsd.edu,port=9000,rdbuffer=131072,allow_other /hadoop/
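A quick way to confirm the FUSE mount came back is just the usual checks against the mount point:

mount | grep hadoop
df -h /hadoop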
I can see the mount again now.
Terrence
--
FkW - 2012/04/13