This is part of a continuing series on the Philosophy of Cybersecurity Legislation. With all of the calls for improving cybersecurity and the increasing sense that legislation is necessary, this series will try to define the necessary parameters for effective cybersecurity legislation. The earlier posts in the series were:
Civil Liability for Vulnerabilities
Again, I will start this discussion from an unusual point: civil liability for vulnerabilities. There are many reasons for the existence of exploitable vulnerabilities in modern software and firmware, but arguably one of the main ones is that the financial and time pressure to get new products to market makes secure coding practices and vulnerability testing too expensive for most vendors. One certain way to shift that cost/benefit analysis would be to make vendors liable for damages related to exploits of cyber vulnerabilities in their products.
To do this, our cybersecurity legislation would establish the presumption under law that a cyber-attack that exploited an undisclosed vulnerability in a product was aided and abetted by the producer of that product through its failure to prevent, identify and/or remediate the exploited vulnerability. In cases where a vulnerability was disclosed, and the only remediation made available was a suggestion of actions that the owner of the product could take to prevent exploitation, the victim could still sue for civil liability if they could show that they had made a good-faith effort to follow the suggested actions.
Vulnerability Research
There are all sorts of reasons why there would be some not-so-minor objections to this change in product liability law, but I will let that stew for a little bit while we look at the problem from a different perspective: the current system of vulnerability research. With the exception of a relatively small percentage of the largest manufacturers of cyber devices and software, companies do not generally conduct in-house searches for vulnerabilities. Government (mainly defense/intelligence) agencies and cybersecurity firms fund or perform a large amount of vulnerability research, but a significant amount is done by independent researchers.
All of these non-governmental, outside research efforts are hindered by a common problem: access to the cyber devices and software necessary to carry out their research. Many of the products that would constitute the critical cyber components (3C) of a critical operation (CO) at a private sector critical infrastructure (PSCI) facility are not generally available to researchers or are too expensive to obtain.
Centers of Excellence
There is a finite (if probably very large) number of exploitable vulnerabilities in current cyber-enabled products. If those vulnerabilities were discovered, reported early and then remediated, there would be a smaller window of available vulnerabilities for cyber-attackers to exploit. For 3C components, the federal government has a legitimate and specific interest in reducing the points of vulnerability, and one way of doing that would be to specifically encourage vulnerability research on those components.
To that end, our cybersecurity legislation would direct the DHS Science and Technology Directorate (S&T) to work with established colleges and universities (and the applicable SSAs) to establish vulnerability research centers of excellence (VR-CoE). Ideally, there would be a VR-CoE for each of the 16 critical infrastructure sectors. At a minimum, however, VR-CoEs should be established for the following:
• Medical devices,
• Energy production and distribution systems,
• Industrial control systems, and
• Transportation control systems.
These CoEs would consist of laboratories where 3C devices and software could be tested for vulnerabilities by educators and students. Established independent security researchers could apply for federally funded fellowships to allow them access to 3C devices for testing purposes. The military could even have members of various cyber operations units periodically rotate through CoEs to increase their hands-on hacking skills.
For each vulnerability discovered by CoE researchers on a 3C piece of equipment or software/firmware, the researchers would be encouraged to develop proof-of-concept (PoC) code and potential indicators of compromise (PIC).
Equipment Donations
Acquiring the equipment for these CoEs to investigate could get more than a little expensive. This is where we can supply a carrot to offset the ‘stick’ of vulnerability liability. A company that donated a piece of equipment to one or more of these CoEs for vulnerability testing could use that donation as a prima facie defense in a cybersecurity vulnerability lawsuit. For equipment/software that fell outside the established VR-CoE coverage, companies could hire independent researchers or cybersecurity research firms to conduct the testing.
I suspect that it would not take long for the CoEs to be overwhelmed by equipment/software donations. CoEs should therefore be allowed to rent out donated equipment to established cybersecurity research firms for their in-house research efforts. That loaner process would include non-disclosure agreements preventing the research firms from publicly reporting on their research until the CoE had completed the established vulnerability disclosure process (see below).
Vulnerability Disclosure
The CoEs would be required to coordinate disclosure of discovered vulnerabilities with the manufacturer or vendor that contributed the equipment and/or software/firmware. The manufacturer or vendor would be given 90 days to develop mitigation measures that corrected the vulnerability. The discoverer of the vulnerability would then be given 10 days to confirm the efficacy of the fix. The CoE would then disclose the vulnerability through the NCCIC.
For equipment that had been identified as a 3C for critical operations at one or more private sector critical infrastructure (PSCI) facilities, the CoE would inform the affected sector specific agencies (SSAs) responsible for those PSCI of the discovery of the vulnerability as soon as it was reported to the manufacturer/vendor. The report, protected as Protected Critical Infrastructure Information (PCII), would include information on the devices affected, the facilities potentially affected, and the PIC for the vulnerability. The SSAs would pass along the PIC to the affected PSCI to be included in the cybersecurity monitoring program I described in Part 2.
If an SSA determined that a vulnerability was potentially critical to the safe/secure operation of one or more PSCI, it would notify the NCCIC. The NCCIC would then issue the subsequent vulnerability notice in a two-step process. The initial notification would be made to all SSAs and PSCI and would be protected as PCII. Then, 60 days later, a public notification would be made by the NCCIC, removing the PCII protection.
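Taken together, the disclosure process described above runs on a fixed clock: 90 days for the vendor fix, 10 days for the discoverer to confirm it, and a further 60 days between the protected notification and the public one. As a rough illustration only (the function and field names below are my own invention, not anything in the proposed legislation), the key dates could be sketched like this:

```python
from datetime import date, timedelta

def disclosure_timeline(reported: date) -> dict:
    """Hypothetical sketch of the disclosure clock: the date the
    vulnerability is reported to the vendor drives every later date."""
    fix_due = reported + timedelta(days=90)        # vendor develops mitigation
    confirm_due = fix_due + timedelta(days=10)     # discoverer confirms the fix
    protected_notice = confirm_due                 # PCII-protected notice to SSAs/PSCI
    public_notice = protected_notice + timedelta(days=60)  # public NCCIC notice
    return {
        "fix_due": fix_due,
        "confirm_due": confirm_due,
        "protected_notice": protected_notice,
        "public_notice": public_notice,
    }

# Example: a vulnerability reported to the vendor on 2019-01-01
# would not become public until 160 days later.
timeline = disclosure_timeline(date(2019, 1, 1))
```

The point of the sketch is simply that, in the worst case, a vulnerability in a 3C component could remain publicly undisclosed for about five and a half months after it was first reported.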
Part 5 of this series will look at some of the current definitions in the USC that will need to be changed by legislation to ensure the efficacy of the cybersecurity bill I have been discussing here.