• Security group knowledge
• Attack group knowledge
• Access
• Vulnerabilities
• Damage potential
• Detection
• Recovery

Detailed definitions of the terms can be found in the Primer (pg 2), but most of the dimensions are relatively easy to decipher just from their titles. The two ‘group knowledge’ dimensions are a little less clear than the others. The ‘Security group knowledge’ dimension looks at how easy it is for changes to be made to the ICS without the knowledge of those responsible for the security of the ICS (the ‘Security Group’), the supposition being that such surreptitious changes could be used to gain control of the system. The ‘Attack group knowledge’ dimension looks at how easy it would be for an attacker to gain information about details of the system that would allow for a deliberate, knowledgeable attack on the system.

ICS Security Metrics

The security ‘dimensions’ help to define the framework for ICS security measures, but they do not provide any direct measures that will allow a facility to track the ‘performance’ of their security systems. The Primer notes that a number of different metrics have been developed by people in the industry and provides the following list of references (pg 24) as examples:
• E. Chew, A. Clay, J. Hash, N. Bartol, and A. Brown, Guide for Developing Performance Metrics for Information Security, NIST Special Publication 800-80, May 2006.
• R. Ross, S. Katzke, A. Johnson, M. Swanson, and G. Rogers, “System Questionnaire with NIST SP 800-53: Recommended Security Controls for Federal Information Systems, References and Associated Security Control Mappings,” Technical Report, NIST, Gaithersburg, Maryland, March 2006.
• M. Swanson, N. Bartol, J. Sabato, J. Hash, and L. Graffo, Security Metrics Guide for Information Technology Systems, NIST Special Publication 800-55, National Institute of Standards and Technology (NIST), Gaithersburg, Maryland, July 2003.
• M. McQueen, W. Boyer, S. McBride, M. Farrar, and Z. Tudor, “Measurable Control System Security through Ideal Driven Technical Metrics,” S4: SCADA Security Scientific Symposium, January 23, 2008.

The Primer authors note that, to be effective, there should be at least one metric for each of the ICS security dimensions listed in the Primer. They recommend (pg 5) that the following 10 metrics (and associated dimensions) should be used to track changes in the security posture of an ICS:
• Rogue Change Days (Security Group Knowledge);
• Security Evaluation Deficiency Count (Security Group Knowledge);
• Data Transmission Exposure (Attack Group Knowledge);
• Reachability Count (Access);
• Attack Path Depth (Access);
• Known Vulnerability Days (Vulnerabilities);
• Password Crack Time (Vulnerabilities);
• Worst Case Loss (Damage Potential);
• Detection Mechanism Deficiency Count (Detection); and
• Restoration Time (Recovery).

According to the authors, each of the metrics listed above is an answer to one basic security question: “What can be objectively measured on the system that is a reasonable representation of how nearly the system approaches the ideal of its associated control systems cyber security dimension?” The Primer provides a detailed technical discussion of each metric, including a range of possible values and an ‘ideal’ value. While the ‘ideal’ value may not be achievable, the measurable values do provide a tool for tracking progress or assessing the efficacy of security measures (a simple illustration of such tracking is sketched at the end of this post).

Case Studies

The Primer does provide two case studies showing how the security dimensions and metrics were used in actual practice. The first case study will be of primary interest to the chemical security community since it took place at a chemical facility with a DCS-based ICS. Unfortunately, the details associated with the case studies are minimal and there is little discussion of how the metrics were applied. There is more detail for the second case study, which looks at the more politically sensitive case of an electric power distribution SCADA system.

Standards for Metrics

What is missing from the discussion of the metrics is the definition of an ‘acceptable’ value for each metric, one signifying that the security measures are ‘adequate’. There is a brief mention of ‘target’ values in the two case studies. The authors describe the suggested target value as “the value that could be obtained by changing the system configuration to improve cyber security while retaining required functionality” (pg 20), but they provide no information on how the value was set and they use different values in the two case studies. While not stated in the Primer, the reason for the lack of a definition of acceptable values is at least partially based on the fact that such a decision is, at its most basic, a management evaluation of the risk and the cost of security. Where legal standards are set for acceptable values (and that has not yet been done for ICS), achieving those standards becomes a matter of compliance, not security.

Recommendation

The CSSP should be commended for developing this document. It is a valuable addition to the cyber security debate. It is, however, probably not an adequate amount of information to develop and implement a metric-based ICS security monitoring program. It assumes a relatively high level of knowledge about industrial control systems and computer networks. If CSSP provides some training sessions (on-line and on-site) to support this framework, it will be much more accessible. Having said that, facilities that use an industrial control system need to get a copy of this document and review it in detail. High-risk chemical facilities in particular should pay special attention to these metrics in developing their cyber security programs.
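For readers who want to see how the metric-per-dimension tracking described above might be organized in practice, the following is a minimal sketch in Python. It is not taken from the Primer: the metric names and dimensions are the ones listed above, but the data structure, the sample readings, the target values, and the ‘lower is better’ direction assumptions are all hypothetical placeholders that a facility would replace with its own measurements and management-approved targets.

```python
# Minimal sketch (not from the Primer): one way a facility might track the ten
# recommended metrics against locally chosen target values. The metric names
# and dimensions come from the Primer; the sample readings, targets, and the
# direction assumptions (whether lower or higher is better) are hypothetical.

from dataclasses import dataclass

@dataclass
class Metric:
    name: str                     # metric name as listed in the Primer
    dimension: str                # associated security dimension
    measured: float               # latest measured value (hypothetical units)
    target: float                 # locally chosen target value (hypothetical)
    lower_is_better: bool = True  # direction assumption, also hypothetical

    def meets_target(self) -> bool:
        if self.lower_is_better:
            return self.measured <= self.target
        return self.measured >= self.target

# Hypothetical readings -- real values would come from the facility's own measurements.
metrics = [
    Metric("Rogue Change Days", "Security Group Knowledge", 3, 0),
    Metric("Security Evaluation Deficiency Count", "Security Group Knowledge", 5, 2),
    Metric("Data Transmission Exposure", "Attack Group Knowledge", 0.4, 0.1),
    Metric("Reachability Count", "Access", 12, 4),
    Metric("Attack Path Depth", "Access", 2, 4, lower_is_better=False),
    Metric("Known Vulnerability Days", "Vulnerabilities", 45, 10),
    Metric("Password Crack Time", "Vulnerabilities", 30, 365, lower_is_better=False),
    Metric("Worst Case Loss", "Damage Potential", 2_000_000, 500_000),
    Metric("Detection Mechanism Deficiency Count", "Detection", 3, 0),
    Metric("Restoration Time", "Recovery", 8, 4),
]

for m in metrics:
    status = "meets target" if m.meets_target() else "needs attention"
    print(f"{m.dimension:<25} {m.name:<40} {status}")
```

The point of the sketch is simply the structure: one metric per dimension, a measured value, and a target that management has agreed represents an acceptable balance of risk and cost; the actual values, units, and targets are exactly the decisions the Primer leaves to the individual facility.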