I had four very interesting responses from three readers, Dale Peterson (here and here), Jake Brodsky (here) and Adam Crain (here), to my previous post about the debate on recognizing researchers who disclose vulnerabilities without coordinating their disclosure with vendors. All four comments are certainly worth reading; particularly Jake's, since he has specific recommendations on how to proceed.
Code of Ethics
Jake unabashedly supports his self-interest (and, as he points out, all of our self-interest, since we could all be affected by a successful attack on critical infrastructure) by calling for standards on how researchers disclose the vulnerabilities they find.
“However, we CAN set standards for
how we expect people to behave. We can promulgate expected disclosure policies
from reputable researchers. We don't have to give them a podium and recognition
for acting in irresponsible ways.”
But we already have a de facto code of ethics set forth by
ICS-CERT. Tell them about the vulnerability; they will coordinate with the
vendor. If the vendor doesn't respond within 45 days, ICS-CERT will publish the vulnerability anyway. The problem with a 'code of ethics' is that it is only as effective as the sanctioning body that enforces it. See, for example, the lawyers' code of ethics as enforced by the American Bar Association; well, maybe that's
not a good example.
We also have to remember that there is a certain anarchistic
streak in the background of a large proportion of the hacker community. For
this portion of the community, cooperation with ICS-CERT is something to be avoided, and even expecting their cooperation with vendors is a pretty long stretch.
The Legal Approach
Dale makes the point that researchers are going to do what
they want with the vulnerabilities they discover, and Jake acknowledges that
point:
“There will be people who violate
these standards. And no, we can't stop them any more than we can stop some
lunatic from shooting up a school or work-place. But we can prosecute them and
anyone who assists them.”
To prosecute someone we need something more than a code of conduct; we need a body of law that addresses the issue. So let's look at how such a law might work. Let's start with the simplest form such a law could take: it is illegal to publicly disclose a software vulnerability. Forget that; even the most conservative court is going to rule that that is overly broad and vague and a violation of the First Amendment protections of free speech.
Okay, let's limit it to control system vulnerabilities; surely that provides a societal-protection reason for limiting freedom of speech; you know, the old falsely shouting 'fire' in a movie theater exemption. I don't know, though; this could include a discussion in a hot rod magazine about how to tweak an automotive control chip to get better performance. Or a discussion in a medical journal about a new type of interference in the operation of an insulin pump.
Okay, we’ll limit it to control systems at critical
infrastructure facilities and we’ll come up with some sort of definition of all
of the technical terms that the courts can easily interpret and apply in an
appropriately limited manner. And we’ll train investigators and prosecutors and
judges and juries so that everyone understands all of the technical jargon and
the ins and outs of cybersecurity so that people can be appropriately prosecuted
for offenses against these neat new laws.
And this will stop the uncoordinated disclosures of
vulnerabilities. Yep, just like the drug laws have stopped the sale of illegal
drugs; and just like the laws against cyber-theft have protected credit cards. Oh,
and remember that US laws only apply in the US, not to researchers in other countries or, more importantly, to researchers working for other governments.
And meanwhile, the legitimate and ethical security researchers
withdraw from the business because the legal restrictions that they have to
work with make it too hard to make a living. Without those researchers and the
services that their firms provide, how are we going to deal with the
vulnerabilities that are discovered and reported via the underground electronic
counter-culture that will still thrive? How will we develop the tools to deal
with the vulnerabilities that are discovered by criminal organizations? How
will we develop the methods of protecting control systems from attacks by foreign
powers and terrorist organizations? Are we going to rely on the government and
academia?
Embrace All Researchers
No, we need to remember that the problem isn’t recalcitrant
and uncooperative researchers; the problem is that the vulnerabilities exist in
these control systems. Control system software, firmware and devices are just
so complex that it is not reasonably possible to develop a system that is free
of vulnerabilities.
We need a vibrant and diverse research community to find the
vulnerabilities and figure out ways to mitigate their effects. We cannot rely
on the vendor community to find these flaws; it runs contrary to the way these
organizations operate. Their mandate is to produce reasonably functional products
at the lowest possible cost. Even if we were to mandate a vulnerability
detection organization within each vendor firm, that organization would never
receive the support it needs because it would be a cost center within the
company, not a profit center.
We need to find a way to encourage independent researchers
to continue to look for vulnerabilities in critical systems. And we need to
find a way to get those researchers to share the information in a manner that
allows vendors to correct deficiencies in their products and allows owners to
implement improvements to their systems in a timely manner.
Researchers like Adam and Chris (and a whole lot of others
as well) have demonstrated their commitment to finding vulnerabilities and
working with both the vendor community and ICS-CERT to get the vulnerabilities
recognized and mitigated. Their voluntary efforts need to be recognized and
their business models need to be supported.
But we cannot ignore the contributions of researchers like
Luigi who now sells his vulnerabilities in the grey marketplace or researchers
like Blake who freely publish their discoveries. The vulnerabilities that they
discover are no less valuable to the control system community than those
reported by Adam and Chris. And yes, vulnerabilities are valuable, both for what they tell us about the systems in which they are found and for the insights they provide into control system architecture in general.
Ignoring these researchers and their contributions will not
stop them from probing our systems for weaknesses. It will not slow their
method of sharing vulnerabilities. In fact, for many of these individuals, threatening them or ignoring them simply ensures that they will go that much further to gain the recognition that is their due.
Dealing with the Devil You Know
Just because these unrepentant researchers are unlikely to
play by any rules we set up does not mean that they can or should be ignored.
Ignoring them or persecuting them will only drive them deeper underground and
perhaps even into the arms of the criminal or terrorist organizations or unfriendly
states that would find their discoveries useful.
No one wants an unresolved vulnerability published for the
world to see; it raises the risk of the exploitation of that vulnerability way
too high. But public exposure also allows the vendor and owners to immediately begin working on the means to counter or mitigate the vulnerability, or at least make it more difficult to exploit.
An exploitable vulnerability that is kept from the control
system community while it is distributed or sold through the underground
economy is much more dangerous because no one is working to resolve the issue.
Waiting until such a vulnerability is used in a destructive attack on a critical infrastructure control system before starting work on fixing the problem is waiting much too late.
What we need to do is to find a way to encourage these
electronic loners to become part of the solution to the problem that they pose.
We should encourage them to not only find these vulnerabilities but to come up
with interim solutions that system owners can use to protect their systems
while the vendor is trying to fix the vulnerability. If we can convince them
that the system owners are innocent bystanders and deserve their help against
the inadequate response from vendors, then we can turn these outlaw researchers into folk heroes of a sort in the control system security community instead of semi-criminal outsiders.
Discuss the Issue
We need to continue this discussion and widen the audience
that is participating. We need to include more of the system owners,
particularly the ones without Jake’s system expertise. We need to include more researchers
who wear white, grey and black hats. We need to include system vendors and
vendors of fixes to those systems. We need to include the regulatory community
that is becoming more involved in cybersecurity issues. And we need to include the general public
because they are the ones most likely to be affected without having any measure of control over the situation.