
Wednesday, August 20, 2025

CISA Published 30-day Notice for New Coordinated Disclosure ICR

Today, CISA published a 30-day information collection request (ICR) notice in the Federal Register (90 FR 40638-40639) for a new ICR on the Vulnerability Reporting Submission Form. CISA intends to collect this information as part of its ongoing coordinated disclosure process. The earlier 60-day ICR notice was published on October 30, 2024.

CISA continues to provide the following burden estimate:

[Burden estimate table]

CISA is soliciting public comments on this ICR notice. Normally, a person wishing to comment could follow the instructions provided in the Federal Register listing for this notice, but with the current problems being experienced by OMB’s RegInfo website, alternative methods are required. According to OMB’s instructions, comments should be emailed to MBX.OMB.OIRA.ICRComments@omb.eop.gov. The subject line should read: “ICR Comment - 1670-NEW - Vulnerability Reporting Submission Form”. Comments should be submitted by September 19, 2025.

Friday, January 29, 2021

HR 118 Introduced – Vulnerability Disclosure Reporting

Earlier this month, Rep Jackson-Lee (D,TX) introduced HR 118, the Cyber Vulnerability Disclosure Reporting Act. The bill would require DHS to prepare “a report that contains a description of the policies and procedures developed for coordinating cyber vulnerability disclosures” {§2(a)}. This is the same language that Ms Jackson-Lee introduced as HR 43 in the 116th Congress. No action was taken on HR 43.

The Report

The unclassified report would be submitted to Congress within 240 days of the date of enactment. The requirement for establishing the policies and procedures is found in 6 USC 659(m). That subsection provides that:

“The Secretary, in coordination with industry and other stakeholders, may develop and adhere to Department policies and procedures for coordinating vulnerability disclosures.”

The bill would require an annex to the report that would contain information on {§2(a)}:

• Instances in which such policies and procedures were used to disclose cyber vulnerabilities in the prior year; and

• The degree to which such information was acted upon by industry and other stakeholders.

Moving Forward

Jackson-Lee is (as of yesterday) a member of the House Homeland Security Committee, to which this bill was assigned for consideration. She should have enough influence in the Committee to ensure that the bill is considered, if she is willing to exert that influence. There is nothing in this bill that would cause any organized opposition. The bill would very likely receive strong bipartisan support (as an earlier version, HR 3202, did in the 115th Congress) both in Committee and on the floor of the House.

Commentary

It is odd that this bill is being introduced again this year when no action was taken on it in the previous session. Jackson-Lee did not use her significant influence in Committee last year to have the bill considered.

On the other hand, with the current concern about cybersecurity, there is a good chance that this bill will move forward early in this session, either as a standalone measure or included in some larger cybersecurity legislation.

One last item: the bill probably should have been updated to require CISA, not DHS, to prepare the report.

Saturday, September 8, 2018

Bills Introduced – 09-07-18


Yesterday, with just the House in session (the Senate took an early break for the long Rosh Hashanah weekend), there were 27 bills introduced. Of these, one may be of specific interest to readers of this blog:

HR 6735 To direct the Secretary of Homeland Security to establish a vulnerability disclosure policy for Department of Homeland Security internet websites, and for other purposes. Rep. McCarthy, Kevin [R-CA-23]


Okay, I’m really going to follow this one to see how one sets up a vulnerability disclosure policy for a website…. Something seems to have been lost in translation.

Thursday, March 29, 2018

HR 5239 Introduced – DOE Cyber Sense


Earlier this month Rep Latta (R,OH) introduced HR 5239, the Cyber Sense Act of 2018. The bill would require DOE to establish “a voluntary Cyber Sense program to identify and promote cyber-secure products intended for use in the bulk-power system” {§2(a)}. Similar provisions were included in §1106 of HR 8 in the 114th Congress (passed in the House, stalled in the Senate).

Cyber Sense Program


As I mentioned above, this bill has very similar requirements to establish and maintain a testing program “to identify products and technologies intended for use in the bulk-power system that are cyber-secure, including products relating to industrial control systems” {§2(b)(1)}. There are, however, three significant differences between this bill and the earlier §1106 provisions.

First, this bill removes the requirement for DOE to “promulgate regulations regarding vulnerability reporting processes for products tested and identified under the Cyber Sense program” that was found in §1106(b)(3). Both bills contain provisions requiring DOE to “establish and maintain cybersecurity vulnerability reporting processes and a related database” {(b)(2) in the respective sections}.

Second, this bill adds a requirement for DOE to “provide reasonable notice to the public, and solicit comments from the public, prior to establishing or revising the Cyber Sense testing process” {§2(b)(6)}.

Finally, in the disclosure protection paragraph, there are two changes. The first is structural; since the language in §1106(c) referred to the disclosure reporting regulations that are not included in HR 5239, the disclosure protection language now refers to the broader vulnerability reporting processes and database in §2(b)(2). Second, the language specifically prohibiting disclosure under “section 552(b)(3) of title 5, United States Code, and any State, tribal, or local law requiring disclosure of information or records” {§1106(c)} has been removed in the new bill.

Moving Forward


Both Latta and his co-sponsor, Rep McNerney (D,CA), are senior members of the Energy and Commerce Committee to which this bill was assigned for consideration. Thus, it would seem likely that they have the influence necessary to have the bill considered in Committee. There are no provisions in the bill that would draw the specific ire of the regulated community, so I suspect that there would be bipartisan support for this bill both in the Committee and before the full House.

Commentary


The vulnerability reporting requirements of the earlier bill were going to be problematic, because the regulated community has very little to do with the disclosure and reporting of vulnerabilities. Establishing effective regulations for vulnerability reporting would have to be targeted at either the independent researcher community (which is increasingly international in scope) or the manufacturers of the affected devices (which is very much international).

The earlier attempt to bring vulnerability reporting for Cyber Sense devices under the disclosure rules of the Critical Energy/Electric Infrastructure Information (CEII) program was doomed to failure. The CEII program only prohibits information disclosure by Federal, State, and local government agencies (including, of course, FERC and NERC); it would not preclude independent researchers or vendors from releasing vulnerability information.

The interesting thing about the CEII provisions of both of these bills is that there might end up being unintended effects on ICS-CERT. A large portion of the devices that would likely be tested under the Cyber Sense program would also be used in control systems in other industries not directly affected by the CEII program. If ICS-CERT were notified by the operator of the Cyber Sense program (NERC?) of vulnerabilities reported under §2(b)(2), it would not be able to share that information under its normal alert/advisory publication program. Similarly, if ICS-CERT were to share information with the Cyber Sense program, it could be argued that it could not subsequently share that information with the wider industrial control system community.

Instead of labeling these vulnerabilities as CEII, the program should be required to notify registered members of the bulk power industry and then, after a set period of time (two months?), the Cyber Sense operator would be required to notify ICS-CERT for general vulnerability publication to the remaining affected industries. Similarly, ICS-CERT would be required to check vulnerability disclosures to see if they involve Cyber Sense listed devices. If so, it would be required to first notify the Cyber Sense operator and withhold general industry notification for the same set period of time.
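A minimal sketch of that two-way notification rule, with a hypothetical 60-day embargo standing in for the “two months?”; the function names, device names, and periods here are my assumptions, not anything in the bill:

```python
from datetime import date, timedelta

# Hypothetical embargo period; the post above suggests "two months?".
EMBARGO = timedelta(days=60)

def cyber_sense_report(received: date, registered_members: list[str]) -> dict:
    """A report under §2(b)(2): registered bulk-power members are notified
    at once; ICS-CERT gets it for general publication after the embargo."""
    return {
        "notify_now": registered_members,
        "notify_ics_cert_on": received + EMBARGO,
    }

def ics_cert_disclosure(received: date, device: str,
                        cyber_sense_devices: set[str]) -> dict:
    """A disclosure arriving at ICS-CERT: Cyber Sense listed devices go to
    the program operator first, with general industry notification withheld
    for the same period; everything else publishes normally."""
    if device in cyber_sense_devices:
        return {"notify_now": ["Cyber Sense operator"],
                "publish_on": received + EMBARGO}
    return {"notify_now": ["general advisory"], "publish_on": received}

# Example: a listed device gets the delayed general publication.
plan = ics_cert_disclosure(date(2018, 3, 29), "ACME-RTU-100",
                           {"ACME-RTU-100", "ACME-PLC-200"})
print(plan)
```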

Tuesday, January 9, 2018

House Passes HR 3202 – DHS Vulnerability Reporting

This afternoon the House passed HR 3202, the Cyber Vulnerability Disclosure Reporting Act, by a voice vote. There were only 12 minutes of debate and no amendments were authorized from the floor. The bill would require an unclassified report to Congress on the procedures that DHS has developed with regard to vulnerability disclosures.

While it is currently unclear whether or not the Senate will take up the bill, it would most likely be considered under the Senate’s unanimous consent process, which would involve even less debate and no provision for amendments.


NOTE: This bill gives the lie to the current picture of the House as a strictly partisan body. The bill was introduced by Rep. Jackson-Lee (D,TX) with no Republican co-sponsors. The bill moved relatively quickly through the Homeland Security Committee and then to the floor of the House. This could only happen if the Democrat, Ms Jackson-Lee, had the explicit support of her Republican Committee Chair.

Tuesday, July 25, 2017

Private vs Public Vendor Vulnerability Disclosures

Yesterday I had an interesting Twitversation with Michael Toecker (@mtoecker) about vulnerability disclosures for distributed control systems (DCS), a type of industrial control system frequently used in power generation facilities (and a number of chemical manufacturing facilities). Apparently, one major DCS vendor (Emerson) does not publicly report its DCS vulnerabilities (via ICS-CERT, for example), but relies upon private disclosure to system owners.

The conversation started when Michael tweeted that “Ovation has ~51% of the generation DCS market”. I had never heard of Ovation (not terribly unusual), so I looked it up on the ICS-CERT vulnerabilities page and could not find any vulnerability listings. I asked Michael about that and he replied: “They have their own cert for Ovation Users.” The conversation went on from there and is well worth reading apart from this post.

Which brings up the whole issue of vendors fixing, and then communicating about, security vulnerabilities in their software (which is different from the coordinated disclosure debate). I cannot remember discussing this particular issue in any detail before, so now seems like a good time.

Mitigation Decisions


Software vendors have been dealing with fixing program bugs forever. They have developed techniques for identifying problems (including outside ‘help’ from researchers), fixing them and then getting the fixes into the hands of consumers. Some are better at it than others.

For industrial control system owners, fixing software problems (lumping in firmware and even some hardware issues here) is a bit more complicated than with a standard desktop software issue. The system needs to be taken off-line for some amount of time, which requires a shutdown of production. The ‘update’ may cause unknown interactions with other control system components that interfere with production. And finally, the owner may not have anyone on staff trained to deal with the above issues. So, the decision to apply a software fix is a cost-benefit analysis that frequently results in an ‘if it ain’t broke, don’t fix it’ response.

For security related issues, the cost-benefit analysis is even more difficult. The cost side remains the same, but the benefit side is much harder to pin down since it deals with risk analysis. The cost of a potential failure has to be discounted by how likely the failure event is to happen. Where no failure history exists (no attacks here), that probability is really difficult to determine.

That is especially true if there are no in-house cybersecurity experts to help make the decision. This is where the system owner has to rely on the information provided by the vendor (and/or system integrator) describing the vulnerability that is being fixed by the most recent update (patch, new version, etc.). A detailed description of what could go wrong, what an attacker would need to successfully exploit the vulnerability, and other potential mitigation measures that could reduce the risk will greatly assist the asset owner/operator in making a realistic risk analysis.
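To make that arithmetic concrete, here is a minimal sketch of the calculation described above; every number in it is hypothetical:

```python
def expected_loss(failure_cost: float, exploit_probability: float) -> float:
    """The cost of a potential failure, discounted by how likely it is."""
    return failure_cost * exploit_probability

# Hypothetical numbers for a single patch decision.
patch_cost = 50_000.0        # downtime, testing, contractor labor
failure_cost = 2_000_000.0   # lost production plus cleanup after an exploit

# With no attack history, the probability estimate drives the whole decision.
for p in (0.001, 0.01, 0.05):
    loss = expected_loss(failure_cost, p)
    verdict = "patch" if loss > patch_cost else "defer"
    print(f"P(exploit)={p:.3f}: expected loss ${loss:,.0f} -> {verdict}")
```

Note that the verdict flips from ‘defer’ to ‘patch’ entirely on the probability guess, which is exactly why the missing failure history makes the decision so hard.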

Vulnerability Reports


In a near-perfect world (there would be no vulnerabilities in a ‘perfect’ world), a software engineer from the vendor would call up the control system engineer at the user site and have a detailed discussion of the discovered vulnerability, the fix applied in the latest update, the potential interactions with other systems in use, and the probability that an attacker could/would use that vulnerability against that particular user. That is not going to happen, for a whole host of obvious and not so obvious reasons.

In a less perfect world, the conversation would be replaced by a detailed written report from the vendor describing the vulnerability in great detail, how it would affect operations, and its interactions with all probable other devices and software with which the product could be expected to interact. It would also include a discussion of the threat environment in which the product exists, with a report on the history of known/suspected exploits and the potential for exploits in a variety of customer environments.

Again, not going to happen. Too much time and expertise would be required to develop such reports, which would also end up disclosing too much proprietary information. And, probably more importantly, they would never actually be read by the owner/operator.

In the real world, what happens is that a brief report (one to two pages) is prepared describing the vulnerability, who it might affect, and the potential consequences of a successful exploit. To make the preparation and subsequent analysis of the report easier, a set of standard descriptions is developed and used in a standardized report format. Not as much information is provided, but what is provided is more accessible and more likely to be used.
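As a rough illustration, such a standardized report boils down to a handful of fixed fields. The sketch below is my own invention, not any particular CERT’s format; all names and values are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class VulnerabilityAdvisory:
    """A one-to-two-page advisory reduced to standard fields."""
    advisory_id: str
    vendor: str
    product: str
    affected_versions: list[str]
    description: str             # what could go wrong
    attacker_requirements: str   # what an attacker needs for a successful exploit
    potential_consequences: str
    mitigations: list[str] = field(default_factory=list)

advisory = VulnerabilityAdvisory(
    advisory_id="EX-2017-001",
    vendor="Example Controls",
    product="Example DCS 3.x",
    affected_versions=["3.1", "3.2"],
    description="Hard-coded credentials in the engineering workstation service.",
    attacker_requirements="Network access to the control system LAN.",
    potential_consequences="Remote modification of control logic.",
    mitigations=["Apply patch 3.2.1", "Block the service port at the firewall"],
)
```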

Vulnerability Communication


Now, not all vendors have the staff necessary for the development, publication and dissemination of these reports. Instead, they will rely on various computer emergency response teams (CERTs) to provide the communications. A vendor engineer will communicate with a CERT engineer to provide the necessary information and the CERT will write the vulnerability report. Frequently, but certainly not always, the individual who discovered the vulnerability will be involved in providing information to the CERT.

The decision then has to be made as to how the vulnerability report will get into the hands of the owner/operator. Where the vendor/CERT has contact information for all the owner/operators of the affected equipment, the report can be communicated to them directly. Where the vendor/CERT does not have that contact information, the only way to get the information to the owner/operator is via public communication of the report.

Public disclosure has a couple of problems associated with it. First, it is a public admission by the vendor that a mistake was made in the development of the product; something that the sales department does not generally want to tell potential customers. Second, it substantially increases the number of people who know about the vulnerability, thereby increasing the risk of potential attempts at exploiting it.

Typically, the latter problem is dealt with by the vendor/CERT first distributing the vulnerability reports privately to those customers with whom they are in contact (generally larger customers), allowing some reasonable time to lapse so those customers can remediate their systems, and then making a public disclosure to the remainder of the customer base.

Oh, and that first problem? Sales is told to suck it up. After all, the other vendors in the market place (especially the big ones) are publicly disclosing their vulnerabilities, so it is not really an issue.
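Pulling those two paragraphs together, here is a minimal sketch of that staged disclosure decision; the 30-day remediation window and the customer names are hypothetical:

```python
from datetime import date, timedelta

# Hypothetical remediation window before the public advisory goes out.
PRIVATE_WINDOW = timedelta(days=30)

def disclosure_plan(fix_released: date,
                    known_customers: list[str]) -> list[tuple[date, str]]:
    """Reachable customers get the report privately first; everyone else
    gets it via the public advisory once the window has lapsed. With no
    customer contact information, public disclosure is the only channel."""
    if not known_customers:
        return [(fix_released, "publish public advisory")]
    plan = [(fix_released, f"private report to {name}") for name in known_customers]
    plan.append((fix_released + PRIVATE_WINDOW, "publish public advisory"))
    return plan

for when, action in disclosure_plan(date(2017, 7, 25), ["Utility A", "Utility B"]):
    print(when, action)
```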

Public Disclosure Benefits


So, are there benefits to public disclosure that might suggest that it is a good alternative even when customer contact information is available? Actually, there are a number. First, and personally most important, non-customers get a chance to look at the disclosure reports and provide their two cents’ worth in the risk evaluation process. Gadflies, like yours truly, get a chance to provide an outside quality control check on the vulnerability disclosure process to ensure that owner/operators have as much information as practical about the vulnerabilities.

Second, outsiders to the communication process have some access to the vulnerability information. This includes folks like investors, corporate management and yes, regulatory agencies. These are the folks that have a vested interest in ensuring that the proximate decision makers at the owner/operator are making reasonable decisions in their cost-benefit and risk analysis calculations. If they do not know about the existence of the vulnerabilities, they have no way of asking questions about the implementation of those processes with respect to those vulnerabilities.

And, last but not least, researchers in the field get a chance to see what types of vulnerabilities other researchers are finding (and ethically disclosing) and how vendors are dealing with those vulnerabilities. This provides some incentive for ethical (coordinated, or whatever the current term is) disclosure, and it provides for a robust research community with a source of fresh ideas about what types of vulnerabilities they should be searching for.


Needless to say, I am a fan of public disclosure.

Sunday, September 22, 2013

Reader Comments – 9-19-13 – Disclosure Debate Continues

I had four very interesting responses from three readers, Dale Peterson (here and here), Jake Brodsky (here) and Adam Crain (here), to my previous post about the debate on recognizing researchers who disclose vulnerabilities without coordinating their disclosure with vendors. All four comments are certainly worth reading; particularly Jake’s since he has specific recommendations on how to proceed.

Code of Ethics

Jake unabashedly supports his self-interest (and, as he points out, all of our self-interest, as we could all be affected by a successful attack on critical infrastructure) by calling for standards on how researchers handle their vulnerability disclosures.

“However, we CAN set standards for how we expect people to behave. We can promulgate expected disclosure policies from reputable researchers. We don't have to give them a podium and recognition for acting in irresponsible ways.”

But we already have a de facto code of ethics set forth by ICS-CERT. Tell them about the vulnerability; they will coordinate with the vendor. If the vendor doesn’t respond within 45 days, ICS-CERT will publish the vulnerability anyway. The problem with a ‘code of ethics’ is that it is only as effective as the sanctioning body that enforces it. See, for example, the lawyers’ code of ethics as enforced by the American Bar Association; well, maybe that’s not a good example.

We also have to remember that there is a certain anarchistic streak in the background of a large proportion of the hacker community. For this portion of the community, cooperation with ICS-CERT is something to be avoided, and even expecting their cooperation with vendors is a pretty long stretch.

The Legal Approach

Dale makes the point that researchers are going to do what they want with the vulnerability that they discover and Jake acknowledges that point:

“There will be people who violate these standards. And no, we can't stop them any more than we can stop some lunatic from shooting up a school or work-place. But we can prosecute them and anyone who assists them.”

To prosecute someone we need something more than a code of conduct; we need a body of law that addresses the issue. So let’s look at how such a law might work. Let’s start with the simplest form such a law could take: it is illegal to publicly disclose a software vulnerability. Forget that; even the most conservative court is going to rule that that is overly broad and vague and a violation of the First Amendment protection of free speech.

Okay, let’s limit it to control system vulnerabilities; surely that provides a societal protection reason for limiting freedom of speech; you know, the old falsely shouting ‘fire’ in a movie theater exemption. I don’t know, though; this could include a discussion in a hot rod magazine about how to tweak an automotive control chip to get better performance. Or a discussion in a medical journal about a new type of interference in the operation of an insulin pump.

Okay, we’ll limit it to control systems at critical infrastructure facilities and we’ll come up with some sort of definition of all of the technical terms that the courts can easily interpret and apply in an appropriately limited manner. And we’ll train investigators and prosecutors and judges and juries so that everyone understands all of the technical jargon and the ins and outs of cybersecurity so that people can be appropriately prosecuted for offenses against these neat new laws.

And this will stop the uncoordinated disclosures of vulnerabilities. Yep, just like the drug laws have stopped the sale of illegal drugs; and just like the laws against cyber-theft have protected credit cards. Oh, and remember that US laws only apply in the US, not to researchers in other countries or more importantly to researchers working for other governments.

And meanwhile, the legitimate and ethical security researchers withdraw from the business because the legal restrictions that they have to work with make it too hard to make a living. Without those researchers and the services that their firms provide, how are we going to deal with the vulnerabilities that are discovered and reported via the underground electronic counter-culture that will still thrive? How will we develop the tools to deal with the vulnerabilities that are discovered by criminal organizations? How will we develop the methods of protecting control systems from attacks by foreign powers and terrorist organizations? Are we going to rely on the government and academia?

Embrace all Researchers

No, we need to remember that the problem isn’t recalcitrant and uncooperative researchers; the problem is that the vulnerabilities exist in these control systems. Control systems software, firmware and devices are just so complex that it is not reasonably possible to develop a system that is free of vulnerabilities.

We need a vibrant and diverse research community to find the vulnerabilities and figure out ways to mitigate their effects. We cannot rely on the vendor community to find these flaws; it runs contrary to the way these organizations operate. Their mandate is to produce reasonably functional products at the lowest possible cost. Even if we were to mandate a vulnerability detection organization within each vendor firm, that organization would never receive the support it needs because it would be a cost center within the company, not a profit center.

We need to find a way to encourage independent researchers to continue to look for vulnerabilities in critical systems. And we need to find a way to get those researchers to share the information in manner that allows vendors to correct deficiencies in their products and allows owners to implement improvements to their systems in a timely manner.

Researchers like Adam and Chris (and a whole lot of others as well) have demonstrated their commitment to finding vulnerabilities and working with both the vendor community and ICS-CERT to get the vulnerabilities recognized and mitigated. Their voluntary efforts need to be recognized and their business models need to be supported.

But we cannot ignore the contributions of researchers like Luigi, who now sells his vulnerabilities in the grey marketplace, or researchers like Blake, who freely publish their discoveries. The vulnerabilities that they discover are no less valuable to the control system community than those reported by Adam and Chris. And yes, vulnerabilities are valuable, both for what they tell us about the systems in which they are found and for the insights they provide into control system architecture in general.

Ignoring these researchers and their contributions will not stop them from probing our systems for weaknesses. It will not slow their sharing of vulnerabilities. In fact, for many of these individuals, threatening them or ignoring them simply ensures that they will go that much further to gain the recognition that is their due.

Dealing with the Devil You Know

Just because these unrepentant researchers are unlikely to play by any rules we set up does not mean that they can or should be ignored. Ignoring them or persecuting them will only drive them deeper underground and perhaps even into the arms of the criminal or terrorist organizations or unfriendly states that would find their discoveries useful.

No one wants an unresolved vulnerability published for the world to see; it raises the risk of the exploitation of that vulnerability way too high. But public exposure also allows the vendor and owners to immediately begin working on the means to counter or mitigate the vulnerability, or at least make it more difficult to exploit.

An exploitable vulnerability that is kept from the control system community while it is distributed or sold through the underground economy is much more dangerous because no one is working to resolve the issue. Waiting for such vulnerabilities to be used in a destructive attack on a critical infrastructure control system to start work on fixing the problem is much too late.

What we need to do is to find a way to encourage these electronic loners to become part of the solution to the problem that they pose. We should encourage them not only to find these vulnerabilities but to come up with interim solutions that system owners can use to protect their systems while the vendor is trying to fix the vulnerability. If we can convince them that the system owners are innocent bystanders who deserve their help against the inadequate response from vendors, then we can turn these outlaw researchers into folk heroes in the control system security community instead of semi-criminal outsiders.

Discuss the Issue


We need to continue this discussion and widen the audience that is participating. We need to include more of the system owners, particularly the ones without Jake’s system expertise. We need to include more researchers who wear white, grey, and black hats. We need to include system vendors and vendors of fixes to those systems. We need to include the regulatory community that is becoming more involved in cybersecurity issues. And we need to include the general public, because they are the ones most likely to be affected without having any measure of control over the situation.
 