GlideinWMSCrabSSC6

Revision 5 - 2012/08/29 - Main.JamesLetts

META TOPICPARENT name="GlideinWMSCrab"

PROCEDURES FOR GLIDEINWMS CRAB SERVER DURING THE CMS SECURITY CHALLENGE SSC6

Banning Users

  During the CMS Security Challenge, glideinWMS CRAB SERVER operators may be asked to ban a particular DN and provide certain information about the "attack". In particular, given a particular user DN, admins may be asked to take action to:
 

Detailed Procedures

The procedure for banning a user starts with mapping the certificate DN to a local userid on the UCSD CRAB Servers. This can be done by looking at the list of mappings. The command condor_q can also give you the same information, but only if the user still has jobs pending or running.

The local UNIX userid typically has the form uscmsxxx. However, if a priority user's DN is compromised then it will have the form cmspaxxx.
 
condor_q -format '%s ' Owner -format '%s\n' x509userproxysubject | sort | uniq -c
 condor_hold uscmsxxx
As root, block the local userid in the /etc/passwd file on all submitter nodes by appending something to the userid like uscmsxxxBLOCKED. This will help in cleanup later. Effectively this will block any further submissions by denying the ability of the compromised DN to use gridftp on the server.
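As a concrete illustration of the passwd edit, here is a minimal sketch against a stand-in copy of the file (uscms1234 is a placeholder userid; on a real submitter node this would be run as root against /etc/passwd itself, or equivalently with usermod -l uscms1234BLOCKED uscms1234):

```shell
# Sketch only: uscms1234 and /tmp/passwd.copy are placeholders.
# Appending BLOCKED to the login field breaks the DN-to-userid mapping,
# blocking further gridftp submissions while leaving a marker for cleanup.
printf 'uscms1234:x:5000:5000::/home/uscms1234:/bin/bash\n' > /tmp/passwd.copy
sed -i 's/^uscms1234:/uscms1234BLOCKED:/' /tmp/passwd.copy
cat /tmp/passwd.copy
```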
 

Collecting Information

 

Detailed Procedures

Information on which sites jobs ran at is in the condor logs on the submission nodes in the file /opt/glidecondor/condor_local/log/EventLog. The log contains event information in terms of condor cluster IDs, GLIDEIN_Site, time stamps etc.

We now have a tool to parse this log, courtesy of I. Sfiligoi. To use the tool (currently only installed on submit-4):

 
source /opt/condor_igor_3214/condor.sh
condor_userlog -rotated -fullname -attr Owner,JOB_GLIDEIN_Site /opt/glidecondor/condor_local/log/EventLog
Job      Host              Start Time  Evict Time  Wall Time Good Time CPU Usage
31805.0  uscms3649,IFCA    8/28 20:43  8/28 20:44   0+00:01   0+00:01   0+00:00
31142.51 uscms4150,Louvain 8/28 20:29  8/28 23:01   0+02:32   0+00:00   0+00:00
31142.12 uscms4150,Louvain 8/28 20:59  8/28 23:01   0+02:02   0+00:00   0+00:00
31142.46 uscms4150,Louvain 8/28 20:58  8/28 23:02   0+02:03   0+00:00   0+00:00
31142.27 uscms4150,Louvain 8/28 21:01  8/28 23:02   0+02:01   0+00:00   0+00:00
31142.13 uscms4150,Louvain 8/28 21:02  8/28 23:02   0+02:00   0+00:00   0+00:00
...

To query a particular user:

condor_userlog -rotated -const 'Owner=="uscms2330"' -fullname -attr Owner,JOB_GLIDEIN_Site /opt/glidecondor/condor_local/log/EventLog
Job      Host            Start Time  Evict Time  Wall Time Good Time CPU Usage
31869.4  uscms2330,IFCA   8/29 00:20  8/29 00:21   0+00:01   0+00:01   0+00:00
31869.8  uscms2330,IFCA   8/29 00:20  8/29 00:21   0+00:01   0+00:01   0+00:00
31869.7  uscms2330,IFCA   8/29 00:20  8/29 00:21   0+00:01   0+00:01   0+00:00
31869.5  uscms2330,IFCA   8/29 00:20  8/29 00:21   0+00:01   0+00:01   0+00:00
31869.6  uscms2330,IFCA   8/29 00:20  8/29 00:21   0+00:01   0+00:01   0+00:00
...
 
Pilot startd names are also available in the EventLog. From this information it should be possible to determine which other jobs ran on the same pilots that may have been compromised, if any.
The IP addresses from which jobs were submitted are more difficult to determine. In principle, this information is in two logs in $PRODAGENT_WORKDIR/CommandManager:
  • ComponentLog says that there was e.g. a request to submit a new task.
  • FrontendLog says that IP n connected at time t.
However, there is no guaranteed relationship. FrontendLog is written by $CRABSERVER_ROOT/src/python/CommandManager/server_side/server2.c. S. Belforte looked briefly at whether it would be easy to add the task name to the IP connection message (the user's DN does not seem to be there, but the task name would do), but it looks too complicated to understand in a short time, and we should not make extensive changes to CRAB2 at this point.
 

Other actions based on information collected:

  • Notify sites where jobs ran. (Note that individual jobs could have run on more than one site! The new EventLog gives you this information, since it tracks condor events, not clusters.)
  • Report the results to CMS Security Contacts (Ian and Mine, and the cms-comp-security mailing list).
 

Compromised Pilot Certificate

 
The compromise of a pilot certificate is much more complicated than the case of a compromised user certificate, since there are only O(10) pilot certificates, cycled round-robin to run glideinWMS pilots. User jobs then connect to startd's run by these pilots for execution. If a pilot certificate is compromised, then potentially every site and every user of glideinWMS for CMS analysis since the time of the compromise can be affected. The time and effort to determine which proxies, if any, were not compromised might be prohibitive. In that case, it may be more efficient to shut down the entire system, clean up, and restart with un-compromised proxies. However, for the purposes of SSC6, we will not halt glidein CRAB operations or kill pilots; simply make sure that the information needed to carry out such an operation is obtainable and that communication lines are working.
 
How do you know a pilot proxy was compromised? While this is a good question, for the purposes of SSC6 we will simply be told.
 

Initial Actions

If a glideinWMS pilot DN is compromised, admins will have to:

  • Remove the particular pilot proxy from the rotation in the glideinWMS frontend and replace it with another of the 50 we have available. (N.B. As of Wednesday August 29, 2012 the additional proxies are not yet registered with the CMS VO.)
  • Ask Factory Ops to kill any running pilots with the banned proxy and remove any queued pilots.
  • Ban the compromised pilot DN on the condor collector.
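A hedged sketch of what the collector-side ban might look like, as a condor_config.local entry on the collector node. The exact knob and identity form depend on the condor version and the GSI mapping in use, so treat this as an assumption to be verified, not the established procedure; the DN is the pilot DN quoted later on this page:

```
# Sketch only: deny the compromised pilot DN the right to advertise startds.
# Exact syntax depends on the condor version and the GSI map file in use.
DENY_ADVERTISE_STARTD = $(DENY_ADVERTISE_STARTD) \
  /DC=org/DC=doegrids/OU=Services/CN=uscmspilot05/glidein-1.t2.ucsd.edu
```

After changing the configuration, a condor_reconfig on the collector would make it take effect.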
 

Detailed Procedures

 

Remove the compromised proxy from the list and replace it with another that is not being used already in this frontend or in any other running frontend on the machine.

Other certificates can be found in ~/.globus (but they are not yet registered with the CMS VO).
  Reconfigure the frontend:
./frontend_startup reconfig ../instance_v5_4.cfg/frontend.xml
To remove all running and queued pilots with a particular DN, it is necessary to contact the Factory Operators (osg-gfactory-support@physics.ucsd.edu).
 

Collecting Information

  • find out which sites pilot jobs ran on using this proxy (above) and notify them
 

Detailed Procedures

Given the large number of pilots running at any given time O(10000) and the small number of proxies O(10), every site and every user who ran a job in the glideinWMS analysis system since the time of the compromise of a pilot certificate may have been affected. To make this point, look at every site where pilots are currently running using one certificate:
 
letts@submit-4 ~$ condor_status -const '(GLIDEIN_X509_GRIDMAP_DNS=?="/DC=org/DC=doegrids/OU=Services/CN=glidein-collector.t2.ucsd.edu,/DC=org/DC=doegrids/OU=Services/CN=glidein-frontend.t2.ucsd.edu,/DC=org/DC=doegrids/OU=Services/CN=uscmspilot05/glidein-1.t2.ucsd.edu")' -l | grep ^GLIDEIN_CMSSite | sort | uniq -c
     20 GLIDEIN_CMSSite = "T1_CH_CERN"
 
    1. GLIDEIN_CMSSite = "T3_US_TTU"
    2. GLIDEIN_CMSSite = "T3_US_UMD"
This is 33 out of 39 sites running glideins at this time. Over the course of a few hours, this would quickly encompass all sites. Therefore, it is likely that every site running a glidein since the time of the compromise has been affected.
 

Other Actions

  • Notify the sites where jobs ran with pilots with a compromised credential (effectively all sites).
  • Report the results to CMS Security Contacts (Ian and Mine, and the cms-comp-security mailing list)
 

General Observation

Note that if a compromise is thought to spread from pilot to user DN and vice-versa, the entire system could be considered compromised in short order, given that user tasks have of order O(1000) jobs and there are only 10 pilot proxies. The probability that any task of 1000 jobs that have already run or started avoided using a pilot with a particular pilot proxy is vanishingly small (1.7 x 10^-46). Therefore, in case of this kind of attack, there may be nothing to do other than holding all user jobs, removing all running and queued pilots, banning the compromised pilot certificates as above, and starting over. (What about compromised user proxies? When would it be safe to let user jobs run again?)
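The quoted number can be checked with a one-liner: with 10 proxies cycled evenly, each job avoids one particular proxy with probability 9/10, so a task of 1000 jobs avoids it with probability (9/10)^1000:

```shell
# Back-of-the-envelope check of the 1.7e-46 figure.
awk 'BEGIN { printf "%.2g\n", (9/10)^1000 }'
```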
 
-- JamesLetts - 2012/08/29
 