Wednesday, September 18, 2013

Reader Comment – 09-17-13 – Disclosure – An Opposing View

Last night Jake Brodsky, a longtime reader, commenter, and well-respected user of industrial control systems in a water treatment environment, posted a very impassioned response to my post about the ICS-CERT policy of not acknowledging researchers responsible for uncoordinated disclosures. I recommend that all readers take a look at Jake’s response; it is a very good explanation of the problems that many users (not just utilities) have in responding to vulnerabilities in their industrial control systems.

Security Implementation Issues

Jake makes a very real point that many (some might say most) owners of control systems do not have the resources to respond in a timely manner even to the properly coordinated vulnerability disclosures that we would all prefer to see. As larger and larger numbers of control devices are deployed in distribution and manufacturing systems, it takes an extraordinary amount of time to test, plan, and deploy the usable patches and upgrades that vendors produce. And Jake doesn’t even directly address those instances where patches and upgrades cannot be deployed because of their inadvertent effects on other parts of the control system deployed in the field.

Jake and I both agree that uncoordinated disclosures make an already difficult problem even more intractable. Where a coordinated disclosure typically provides Jake and his compatriots in the field with at least a potential solution to their problem, an uncoordinated disclosure provides even more detail about the vulnerability (coordinated disclosures do not typically include exploit code), yet it could be months, even years, before the vendor develops a mitigation strategy that Jake then has to figure out how to deploy.

Personal Responsibility

Jake takes a very personal interest in the safety and efficacy of the control system for which he is responsible. He also realizes that protecting his system is best done by improving the overall level of security in the ICS community, and he is very active in that effort. Anyone who discounts Jake’s opinion does so at his own risk, and I support Jake 100% in his efforts to keep control systems safe and secure.

Jake makes an important point in his comment:

“This issue of disclosure and giving credit is about you and me, and the infrastructure we depend on. This is about personal responsibility. If you want to give credit to someone who doesn't comprehend the ramifications of his discovery, that is your right. But I see this as giving immediate gratification to one person in lieu of community security. If we refuse to give them the public accolades they seek, researchers will be less tempted to use black or grey hat disclosure policies.”

Jake is absolutely correct that these disclosures are potentially a problem for everyone on a very major scale. The sooner our society comes to understand the complexity of the systems that support it, and how vulnerable those systems are to attack, the better off we will be. Security researchers in particular need to understand the consequences of their disclosures and take personal responsibility for their actions.

Having said that, I disagree with Jake’s last statement. I doubt that Blake or any of his ilk read my blog or have even heard of it, and most don’t care that ICS-CERT even exists. I don’t give Blake credit for his benefit; I do it for the folks like Jake so that they know exactly what they are up against. Jake and some of his compatriots may be able to look at Blake’s exploit code and figure out a workaround defense (okay, not many of his compatriots). But more importantly, they should be aware immediately that their devices may now be susceptible to attack by any script kiddie who can gain access to those devices.

The sites that publish the actual listings of the vulnerability and exploit code (and I do provide links to those sources in my blog posts for the reasons described above) are the sites that provide the public accolades these folks seek. While they are part of the problem, I don’t think we should fault them too much. I would still prefer to see these disclosures on those sites rather than have them sold on the black market to people who clearly intend to use those exploits as weapons. At least we learn of the vulnerabilities when they get posted to these sites.

Legal Responsibility

This brings up an interesting idea. I think that Jake and I would both agree that a researcher responsible for an uncoordinated disclosure that is subsequently used in a successful attack on a control system is at least partially responsible for that attack if his disclosure included exploit code (I’ll explain that caveat in a bit). Unfortunately, I don’t think that any current laws would allow for prosecution of that researcher if that exploit code were used in an attack. If Congress wants to write meaningful cybersecurity legislation, this might be something it should consider including.

I include the ‘exploit code’ caveat in this suggestion to avoid a freedom-of-speech issue that is very near and dear to me as a blogger. I frequently discuss and describe various vulnerabilities in areas dealing with cybersecurity and chemical security. I acknowledge that some of those discussions might provide an attacker with the genesis of a plan for an attack. I am very careful, however, not to include a level of detail comparable to ‘exploit code’.

Providing ‘exploit code’ levels of detail is the moral and legal equivalent of yelling fire in a crowded theater. Describing the vulnerability is more like pointing out that the theater contains flammable materials and the exits don’t work.

Moving Forward

Clearly, Jake is not going to agree with everything I have written above, and I respect that. We disagree, but we are on the same side. This discussion needs to be held on a wider scale, as it has important implications for the future of control system security.

Sooner or later one of these uncoordinated disclosures is going to be used in an actual attack on a real-world control system. We need to figure out how we are going to deal with this now, while we can discuss it rationally. If we wait until people are killed in an actual attack, the politicians will take the discussion out of our hands with knee-jerk reactions that will only make our lives more difficult and won’t solve any of the security issues involved.


4 comments:

Anonymous said...

Patrick,

The longer you are involved in the disclosure discussion, the more you will realize it is a waste of time. The person who finds the vuln will do what he or she wants with it, using their own ethics, judgment, and personal interest as a guide.

Your "Legal Responsibility" point came up in the S4x13 Great Debate (brought up by a vendor basically saying I should go to jail after I handed him the mic, good fun). The irony is the vendor who put in the back doors, had no SDL, etc. would not be civilly or criminally liable.

Dale Peterson
Digital Bond, Inc.

Jake Brodsky said...

We are about to find out what happens when an "immovable object meets an irresistible force".

Yes, there are those who will disclose however and wherever they feel appropriate. We can't stop them any more than we can stop murderous bastards from building bombs using hardware store parts.

However, we CAN set standards for how we expect people to behave. We can promulgate expected disclosure policies from reputable researchers. We don't have to give them a podium and recognition for acting in irresponsible ways.

We can also set standards for vendor behavior and user responsibility. A utility that fails to patch or mitigate problems in a timely manner following notification of a problem should be held liable for the outages that ensue. A vendor who inserts undocumented back doors into equipment or refuses to patch a device they continue to sell should also be held liable for damages if that secret back door or unpatched flaw is ever abused by another.

I'm not looking to give anyone a free ride here. I'm simply attempting to promulgate a reasonable code of ethics so that honest security researchers know where they stand. Right now, everyone is making up their own myopic ethics. We're not doing anyone any favors by simply throwing our hands up in the air and saying this is a morass that can't be fixed.

There will be people who violate these standards. And no, we can't stop them any more than we can stop some lunatic from shooting up a school or workplace. But we can prosecute them and anyone who assists them.

This discussion is far from being a waste of time. It is the ethical foundation of responsibility to the community. Declaring this useless says to me that Mr. Peterson has given up caring about ethical practices for industry. These ethics cut all ways, Dale. Nobody gets a free ride.

I refuse to give up. I think we should document and expect better coordination. The notion that anyone can apply whatever ethic they feel is right to a complex situation like this is foolish and defeatist.

As Crain and Sistrunk have demonstrated, we can do better. We should aim to improve, not to accommodate every small minded idiot.

(as before, these opinions are only my own)

Jake Brodsky

Anonymous said...

Jake - You almost sucked me into analogies and responses.

My argument that it is useless to talk about "responsible disclosure" is backed by two decades of evidence. Perhaps you and others can have success where legions have failed, but the data indicates you are working on a Sisyphean task.

Dale Peterson
@digitalbond

Adam Crain said...

The First Amendment generally protects any speech that is true. You can actually yell FIRE in a crowded theater if there is actually a fire.

Prosecuting people for vulnerability disclosures is a very touchy area IMO that quickly gets into constitutional issues.

I agree with Jake. We need to focus our efforts on a code of ethics that includes researchers, vendors, utilities, and government agencies.
