As I promised earlier I would like to address the second half of Dale Peterson’s comment posted to yesterday’s blog about pay for patch. In the second half of his comment Dale said:
Side thought - we are going to need to stop treating ICS as a single category. Not all ICS is used in critical infrastructure. We shouldn't act as if every HMI vuln has a big consequence. The C3-Ilex is actually used in the electric sector and a few others so that does affect what almost all would label critical infrastructure. I'll have a blog [DigitalBond.com] on a different (not patching) aspect of this up later today.
This is not the first time or venue that Dale has made this point, and from time to time we all need to be reminded that we shouldn’t take all reported vulnerabilities with equal seriousness. Nor should we consider every control system to be equally vulnerable.
ICS Vulnerability Reports
Dale has mentioned on a number of occasions that he is concerned that ICS-CERT appears to pay as much attention to vulnerabilities in obscure human-machine interface (HMI) applications as it does to PLC vulnerabilities in widely deployed systems. It is certainly true that the vast majority of alerts and advisories concern systems that are not widely used in critical infrastructure in the United States. What isn’t as clear is how much of the limited resources of ICS-CERT are taken up with these lesser-used systems. That would be an interesting study for the DHS IG or the Congressional Research Service.
Having said that, I think that ICS-CERT is still providing a valuable service in the reports that it issues on these systems. Those reports would be even more useful if ICS-CERT maintained an easily searchable database listing these vulnerabilities and vendor responses in a way that would allow a small company to investigate the security track record of these vendors.
Besides, do we really want a government agency like this picking and choosing which researchers’ reports it is going to address? Unless some outside agency sets very strict guidelines on how those decisions are made (and who else in DHS has the technical background to set those guidelines?), such a system of picking and choosing will quickly devolve into a political process that serves no one well.
Dale, in a blog post today, also points to large-scale problems that ICS-CERT is under-resourced to handle. He gives the example of the Project Shine report of 500,000 internet-facing control systems that were recently reported to ICS-CERT. He notes that identifying and notifying the owners of those IP addresses will take a great deal of time and effort. He also questions whether it is worth ICS-CERT’s time to complete that herculean task, since it is unlikely that more than a small fraction will be associated with critical infrastructure facilities. I will note that I am expecting a public announcement in the next week or so from a private group that it will accept responsibility for identifying and contacting those IP owners, relieving ICS-CERT of that burden.
No, the real problem here is the ‘limited resources’ side of the equation, not that ICS-CERT takes on the task of communicating vulnerabilities in little-used systems. Doubling the budget and manpower of this office would have a minimal impact on the size of the federal budget, and it would provide a substantial improvement in the capabilities of the organization.
The other side of Dale’s comment concerns where these vulnerabilities are found in the field. Two control systems that differ only in the number of PLCs connected will share the same vulnerability, but they will carry different levels of risk. And that risk is not determined by size alone; a whole host of other factors, including socio-economic and political considerations, go into determining the risk of an attack on a system.
This is something that has generally been missing from the discussion of ICS security. Systems that control the stability of the electrical grid are at greater risk of attack than those that control the operation of a small hydroelectric plant. The cybersecurity community needs to start talking about how these risk levels should affect decisions about how to secure control systems. It just doesn’t make sense that a manufacturer of kiddy-widgets should need to worry as much about protecting its control system as a similarly sized chemical manufacturer handling toxic inhalation hazard chemicals.
Not only is the level of risk different, but the mode of attack is probably going to be different as well. That chemical manufacturer will probably have to be concerned about the probability of a terrorist attack while the widget manufacturer will be more concerned about the potential of attack by a disgruntled employee. These two different types of risk will require two different types of security planning and execution.
Finally, even though Joe Weiss has been pushing this idea for years, there really hasn’t been much discussion about the resilience of control systems to unintended system upsets. Most of the cyber incidents over the last decade have had nothing to do with deliberate attacks on the systems; they seem instead to have been the result of human error or inappropriate machine responses to accidental stimuli.
Insecure by Design
One final area that I would like to address, one that Dale inexplicably did not include in his comment on this blog, is the whole idea of ‘insecure by design’. I don’t know if Dale coined the term, but he certainly uses it often enough that many folks cringe at the sight of it. The vast majority of PLCs (as an example of insecure devices) in use today were designed with no thought given to security. In most cases, if attackers gain administrative access to a control system network, they can pretty much re-program the PLC to do whatever they want. Worse yet, it has been demonstrated in the wild that this can be done without the system owner being able to detect either the access or the re-programming.
Even if every PLC manufacturer started today to turn out only PLCs with secure communications for programming purposes, it would be 10 to 15 years before all of the existing insecure PLCs were removed from processes because of life-cycle issues. But we really don’t need to concern ourselves with replacing all of the insecure PLCs, again because not all of them are equally at risk of attack.
Instead of beating each other up over whether it is more important to cure insecure-by-design devices or to install secure communications links after the fact, we need to be talking about defining a methodology for identifying relative risk of attack. With that tool, owners could decide which PLCs need to be replaced with expensive high-security PLCs, which should be protected by less expensive add-on secure communications devices, and which can simply be left alone to serve out their current useful life.
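As a rough illustration only, such a triage methodology might score each PLC on a few rating factors and map the score to one of the three options above. The factors, weights, and thresholds in the sketch below are my own assumptions for the sake of the example, not an established standard or anyone’s published methodology.

```python
# Hypothetical PLC risk-triage sketch. The factors, weights, and
# thresholds are illustrative assumptions, not an accepted methodology.

def plc_risk_score(consequence, exposure, attractiveness):
    """Combine 1-5 ratings for consequence of compromise, network
    exposure, and attractiveness to an attacker into a 0-1 score."""
    for rating in (consequence, exposure, attractiveness):
        if not 1 <= rating <= 5:
            raise ValueError("ratings must be between 1 and 5")
    # Consequence weighted most heavily; the weights are an assumption.
    weighted = 0.5 * consequence + 0.3 * exposure + 0.2 * attractiveness
    return weighted / 5.0

def remediation_for(score):
    """Map a risk score to one of the three options discussed above."""
    if score >= 0.7:
        return "replace with high-security PLC"
    if score >= 0.4:
        return "protect with add-on secure communications device"
    return "leave in place for remaining useful life"

# A PLC controlling a toxic-inhalation-hazard process scores high on
# every factor; a widget-line PLC on an isolated network scores low.
tih_plc = remediation_for(plc_risk_score(5, 4, 5))
widget_plc = remediation_for(plc_risk_score(2, 1, 1))
```

Under these assumed numbers the TIH process PLC lands in the replace bucket while the widget-line PLC is left to serve out its useful life; the point is not the particular weights, but that any agreed scoring scheme lets an owner rank remediation spending instead of treating every PLC alike.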
Priorities Must Be Established
In a perfect world it would not be possible to attack any control system, because the design, manufacture and implementation of every control system would take security into account at every step of the process. Unfortunately, this hasn’t been done for the control systems currently in use, or even for those now being designed. It’s too late to worry overmuch about that; it is what it is.
What current system owners need are tools to determine the risk of attack on their systems, whether from within or without. Owners of systems at the highest risk level, particularly those in critical infrastructure installations, then need to determine which parts of those systems are most at risk and harden those first and hardest. Then they need a plan to continue hardening their systems as time and resources permit.