The vulnerability would have allowed the execution of unauthenticated SQL statements, enabling a moderately skilled remote attacker to execute arbitrary code. There is no known publicly available exploit for this vulnerability.
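The advisory does not detail the flaw, but vulnerabilities in this class typically arise when an application builds SQL statements by string concatenation from unauthenticated input. A minimal sketch of the vulnerable pattern and its parameterized fix (the table and tag names here are hypothetical, and sqlite3 stands in for whatever database engine the affected product actually uses):

```python
import sqlite3  # stand-in engine; the advisory does not name the real database

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tags (name TEXT, value TEXT)")
conn.execute("INSERT INTO tags VALUES ('PumpSpeed', '1750')")

def read_tag_unsafe(tag):
    # Vulnerable pattern: attacker-supplied input is concatenated directly
    # into the SQL text, so crafted input can alter the statement itself.
    return conn.execute(
        "SELECT value FROM tags WHERE name = '" + tag + "'").fetchall()

def read_tag_safe(tag):
    # Parameterized query: the driver treats the input strictly as data.
    return conn.execute(
        "SELECT value FROM tags WHERE name = ?", (tag,)).fetchall()

# A crafted input that subverts the unsafe path but not the safe one:
print(read_tag_unsafe("x' OR '1'='1"))  # returns every row in the table
print(read_tag_safe("x' OR '1'='1"))    # returns no rows
```

The same injected string that dumps the whole table through the unsafe path is handled as an ordinary (non-matching) tag name by the parameterized version.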
This vulnerability highlights an ongoing debate in the cyber security community: how to handle the announcement of newly discovered vulnerabilities. One side of the debate would have researchers who discover vulnerabilities report them only to the vendor, so that there is no public notification until a mitigation measure (a patch, for instance) is available. The other side feels that the user community should receive the initial notification so that users know they are vulnerable to attack.
Actually, the debate is more complicated than that. A third view holds that vendors either ignore researchers’ reports of vulnerabilities or fix the problem without giving the researchers adequate credit for the discovery. Additionally, there is the whole question of how independent researchers are compensated for the work that goes into discovering and documenting these vulnerabilities.
This is an important debate on a number of levels, but I would like to look at it from two points of view that do not seem to me to get enough attention: the user point of view and the point of view of the regulators and legislators.
User Point of View
Most control system users are never aware of any vulnerability in their control systems unless they are notified of the problem by the vendor. Frequently, patches are installed without any realization of the underlying reason for the patch. Because of the problems associated with shutting down processes and with installing and testing patches, patch installation is delayed until scheduled shutdowns or simply ignored, since there is no apparent problem with the system. This may leave systems vulnerable to attack for prolonged periods of time.
The question then arises: how could these facilities respond to early notification of a vulnerability? If no patch were yet available, what would the facility do? The most that could really be done would be to pay closer attention to the system. If the other mitigation measures typically recommended by ICS-CERT were not already in place (isolation from the internet, firewalls, etc.), then a facility might see the reason for implementing them if it knew about a particular vulnerability.
This is the main reason for the public discussion of these vulnerabilities: as the user community comes to realize that control systems are vulnerable to attack, users may be more likely to take the relatively simple minimum defensive measures recommended by ICS-CERT. Those defensive measures would be more likely to be adopted if vulnerabilities were publicly disclosed before patches were available. Advanced defensive measures are certainly unlikely to be used if the user is not aware of potential vulnerabilities.
Regulatory Point of View
As we continue to see discussion of possible cyber security legislation, it is important that the people writing that legislation, and the potential regulators, are aware of this debate. One thing that has been missing from the discussion of cyber security in general, and ICS security specifically, is the legal liability of software producers for vulnerabilities in their products.
One of the things we have seen in the researcher debate is the question of who is responsible for the ‘deplorable state’ of software quality. Some people say the problem is vendors and developers taking shortcuts in the development process. Others say the user community is responsible because of its demand for lower costs and its lack of demand for security measures. One thing is certain: there is currently no financial incentive for an ICS developer to ensure that there are no security bugs in the systems they sell.
Any comprehensive cyber security legislation needs to address the issue of software vulnerabilities. I am not going to suggest a ‘zero defects’ requirement, but there does need to be some standard for how vendors deal with reports of vulnerabilities. As a starting point for discussing how that might look, I would suggest the following:
• ICS-CERT should be tasked with verifying reported vulnerabilities in control systems;
• Vendors should be required to share vulnerabilities reported to them with ICS-CERT;
• Researchers should be encouraged to report vulnerabilities to ICS-CERT instead of vendors;
• Vendors should be required to notify customers of all vulnerabilities verified by ICS-CERT within 30 days, along with suggested interim security measures;
• ICS-CERT should be given the authority to set time limits for patch development; and
• Vendor compensation of researchers should be established for each verified vulnerability.