Tuesday, November 6, 2012

Reader Comment – Not All ICS Are Equal


As I promised earlier, I would like to address the second half of Dale Peterson’s comment posted to yesterday’s blog about pay for patch. In that second half Dale said:

Side thought - we are going to need to stop treating ICS as a single category. Not all ICS is used in critical infrastructure. We shouldn't act as if every HMI vuln has a big consequence. The C3-Ilex is actually used in the electric sector and a few others so that does affect what almost all would label critical infrastructure. I'll have a blog [DigitalBond.com] on a different (not patching) aspect of this up later today.

This is not the first time or venue that Dale has made this point, and from time to time we all need to be reminded that we shouldn’t take all reported vulnerabilities with equal seriousness. Nor should we consider every control system to be equally vulnerable.

ICS Vulnerability Reports


Dale has mentioned on a number of occasions that he is concerned that ICS-CERT appears to pay as much attention to vulnerabilities in obscure human-machine interface (HMI) applications as it does to PLC vulnerabilities in widely deployed systems. It is certainly true that the vast majority of alerts and advisories concern systems that are not widely used in critical infrastructure in the United States. What isn’t as clear is how much of ICS-CERT’s limited resources are taken up with these lesser-used systems. That would be an interesting study for the DHS IG or the Congressional Research Service.

Having said that, I think that ICS-CERT is still providing a valuable service in the reports that it issues on these systems. These reports would be even more useful if ICS-CERT maintained an easily searchable database listing these vulnerabilities and vendor responses in a way that would allow a small company to investigate the security track record of these vendors.

Besides, do we really want a government agency like this picking and choosing which researcher reports it is going to address? Unless some outside agency sets very strict guidelines on how those decisions are made (and who else in DHS has the technical background to set those guidelines?), such a system of picking and choosing will quickly devolve into a political process that serves no one well.

Dale, in a blog post today, also points to large-scale problems that ICS-CERT is under-resourced to handle. He gives the example of the Project Shine report of 500,000 internet-facing control systems that was recently provided to ICS-CERT. He notes that identifying and notifying the owners of those IP addresses will take a great deal of time and effort. He also questions whether it is worth ICS-CERT’s time to complete that herculean task, since it is unlikely that more than a small fraction will be associated with critical infrastructure facilities. I will note that I am expecting to see a public announcement in the next week or so from a private group that it will accept responsibility for identifying and contacting those IP owners, relieving ICS-CERT of that burden.

No, the real problem here is the ‘limited resources’ side of the equation, not that ICS-CERT takes on the vulnerability communications task for little-used systems. I think that doubling the budget and manpower of this office would have a minimal impact on the size of the federal budget, and it would provide a substantial improvement in the capabilities of the organization.

ICS Vulnerabilities


The other side of Dale’s comment concerns where these vulnerabilities are found in the field. Two facilities with control systems that differ only in the number of PLCs connected will share the same system vulnerability, but will have different levels of risk associated with them. And that risk is not determined by size alone; a whole host of other factors, including socio-economic and political considerations, go into determining the risk of an attack on the system.
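
To make that point concrete, here is a minimal sketch of how a relative risk score might weigh consequence and context rather than system size alone. This is my own illustration, not ICS-CERT’s method or any published standard; the factor names, weights and scores are all hypothetical.

```python
# Hypothetical relative-risk score for a control system. Factor names and
# weights are illustrative only, not a published or endorsed methodology.

FACTOR_WEIGHTS = {
    "consequence": 0.4,      # harm if the process is upset (safety, environment, economy)
    "exposure": 0.25,        # network reachability (remote access, internet-facing links)
    "attractiveness": 0.2,   # socio-economic/political value of the target to an attacker
    "detectability": 0.15,   # how likely an intrusion is to go unnoticed
}

def relative_risk(scores):
    """Combine 0-10 factor scores into a single 0-10 relative risk value."""
    return sum(FACTOR_WEIGHTS[name] * scores[name] for name in FACTOR_WEIGHTS)

# Two plants running the same vulnerable PLCs, differing only in context:
toxic_chemical_plant = {"consequence": 9, "exposure": 6, "attractiveness": 8, "detectability": 7}
widget_factory       = {"consequence": 2, "exposure": 6, "attractiveness": 2, "detectability": 7}

print(relative_risk(toxic_chemical_plant))  # ~7.8 - same vulnerability, high priority
print(relative_risk(widget_factory))        # ~3.8 - same vulnerability, lower priority
```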

This kind of risk-based thinking has generally been missing from the discussion of ICS security. Systems that control the stability of the electrical grid are more at risk of attack than those that control the operation of a small hydroelectric plant. The cybersecurity community needs to start talking about how these risk levels should affect decisions about how to secure control systems. It just doesn’t make sense that a manufacturer of kiddy-widgets would need to worry as much about protecting its control system as a similarly sized chemical manufacturer handling toxic inhalation hazard chemicals.

Not only is the level of risk different, but the mode of attack is probably going to be different as well. That chemical manufacturer will probably have to be concerned about the probability of a terrorist attack, while the widget manufacturer will be more concerned about the potential for an attack by a disgruntled employee. These two different types of risk will require two different types of security planning and execution.

Finally, even though Joe Weiss has been pushing this idea for years, there really hasn’t been much discussion about the resilience of control systems to unintended system upsets. Most of the cyber incidents over the last decade have had nothing to do with deliberate attacks on the systems; they seem to have been the result of human error or inappropriate machine responses to accidental stimuli.

Insecure by Design


One final area that I would like to address, one that Dale inexplicably did not include in his comment on this blog, is the whole idea of ‘insecure by design’. I don’t know if Dale coined the term, but he certainly uses it often enough that many folks cringe at the sight of it. The vast majority of PLCs (as an example of insecure devices) in use today were designed with no thought given to security. In most cases, if attackers gain administrative access to a control system network, they can pretty much re-program the PLC to do whatever they want. Worse yet, it has been demonstrated in the wild that this can be done without the system owner being able to detect either the access or the re-programming.

Even if every PLC manufacturer started today to turn out only PLCs with secure communications for programming purposes, it would be 10 to 15 years before all of the existing insecure PLCs were pulled out of processes because of life-cycle issues. But we really don’t need to concern ourselves with replacing all of the insecure PLCs, again because not all of them are equally at risk of attack.

Instead of beating each other up over whether it is more important to cure insecure-by-design devices or to add secure communications links after installation, we need to be talking about defining a methodology for identifying relative risk of attack. With that tool, owners could decide which PLCs need to be replaced with expensive high-security PLCs, which should be protected by less expensive add-on secure communications devices, and which can simply be left alone to last out their current useful life.
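
One way such a triage tool might look in practice is sketched below. This is only an illustration of the three options discussed above; the score thresholds and the example inventory are invented for this sketch, not drawn from any standard.

```python
# Illustrative triage of existing PLCs by relative risk. The three remediation
# tiers mirror the options discussed above; the cutoff values are hypothetical.

def triage_plc(risk_score):
    """Map a 0-10 relative risk score to one of three remediation options."""
    if risk_score >= 7.0:
        return "replace with a high-security PLC"    # highest risk: worth the capital cost
    if risk_score >= 4.0:
        return "add bolt-on secure comms device"     # moderate risk: cheaper bump-in-the-wire fix
    return "leave in place until end of life"        # low risk: spend the money elsewhere

inventory = {"feed pump PLC": 8.2, "boiler PLC": 5.5, "packaging line PLC": 2.1}
for name, score in sorted(inventory.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {triage_plc(score)}")
```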

Priorities Must Be Established


In a perfect world it would not be possible to attack any control system, because the design, manufacture and implementation of every control system would take security into account at every step of the process. Unfortunately, this hasn’t been done for the control systems currently in use, or even for those being designed for use today. It’s too late to worry overmuch about that; it is what it is.

What current system owners need are tools to determine the risk of attack on their systems, whether from within or without. The owners of systems at the highest risk level, particularly those in critical infrastructure installations, then need to determine which parts of their systems are most at risk and harden those first and hardest. Then they need a plan to continue hardening their systems as time and resources permit.
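
A simple way that kind of phased hardening plan might be worked out is sketched below; the components, risk scores and per-period budget are hypothetical, and the greedy ranking is just one possible approach.

```python
# Hypothetical phased hardening plan: rank components by relative risk, then
# fit the highest-risk work into each budget period. Numbers are illustrative.

components = [
    # (name, relative risk 0-10, estimated hardening cost in dollars)
    ("safety instrumented system gateway", 9.1, 40_000),
    ("historian/DMZ firewall rules",       7.4, 10_000),
    ("engineering workstation",            6.8, 15_000),
    ("remote vendor access link",          6.2, 20_000),
    ("local HMI panel",                    3.0, 25_000),
]

def plan_hardening(components, budget_per_period):
    """Greedy plan: highest-risk items first, deferring what a period can't afford."""
    plan, period, remaining = [], 1, budget_per_period
    for name, risk, cost in sorted(components, key=lambda c: c[1], reverse=True):
        if cost > remaining:            # push this item into the next budget period
            period += 1
            remaining = budget_per_period
        plan.append((period, name, risk, cost))
        remaining -= cost
    return plan

for period, name, risk, cost in plan_hardening(components, budget_per_period=50_000):
    print(f"period {period}: {name} (risk {risk}, ${cost:,})")
```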
 
Once we get well started on this, we can then worry as a society about what kind of security reasonably needs to be put into place at the lower-risk facilities.

2 comments:

Anonymous said...

The question of resources necessary to address the scale of the ICS security conundrum is coming more to the fore of late. I take this as a sign that the overall conversation is moving along sane lines.

For a lot of interesting reasons we have collectively looked to the US federal government to be a large part in solving the problem to this point. A lot of good folks have done and continue to do good work inside the government, and we are now beginning to address what work they can do and perhaps what work is outside of their reasonably expectable scope.

Project Shine is the poster child at the moment. The question it poses nets out to: "is it possible for the federal government to address each individual insecure facility and device?" The obvious answer - short of limitless tax dollars, and even then due to the known restraints federal employees need to work within - is "of course not, don't be silly." The nascent Publicly Accessible Control Systems (PACS) working group spinning up in the private sector may help us find some other means for addressing that issue, or at the very least narrow the options.

The "which systems are critical?" question you and Dale are pointing out may be another. Do we want someone within the government determining what is important and what is not? "Critical" will remain a relative term depending on who you are (akin to the old saw: "a recession is your neighbor losing their job, a depression is you losing yours"). Even defining the issue inside the accepted 18 CIKR categories is not straightforward (assuming that electric utilities, for example, operate better when their datacenters have functioning HVAC systems).

Rather than taking either extreme - that government has either no role or the only role - we need to divide and conquer. The government will always have some really amazingly big hammers, but sometimes you also need lots of little screwdrivers.

Christine Wanta said...

Various industries have had to face a core gap in security, namely risk analysis, when they begin to incorporate security principles, especially in the parts of mechanical controls that became IT controllers, and are required to have security because of their business model and national impact. The real issue is not just the lack of tools, but the lack of data to make tangible analysis. You can design the tool, but designing the numeric valuation is what makes it useful. Before you can deliver a good tool you must have a solid basis for the value of an item, and this requires substantial statistical data of sufficient quantity and quality... In IT we still only have superficial data streams, often unsubstantiated, so any CERT delivers based on what is available, and it is the professional's job to apply this to their own world... which takes a depth of knowledge about security, your environment, and experience that is, unfortunately, rare in many security teams. We live in interesting times ;)

And no, the term "insecure by design" has been around for decades in IT and for much longer in industry... it's a term related to choosing the best fit even when there are more secure processes to get the job done; military applications, construction, etc., are where I came across it. The frustrating use is when it's chosen based on price point, but in general it occurs from lack of foresight and gets applied by people in hindsight.

 