Yesterday the DHS ICS-CERT published their 2016
Annual Vulnerability Coordination
Report. This is the second
such report from ICS-CERT and many of the same problems exist in this year’s
report. Additionally, ICS-CERT further complicates matters by changing the way
they are 'counting tickets' in the report, so that the numbers are not directly
comparable to those of previous years. Oh, yes, they are also reporting both FY
2016 and CY 2016 data, just to throw another monkey wrench into the
year-to-year comparison problem.
Changes in Data Reporting
The change in the way that ICS-CERT counts tickets fundamentally alters the
data reported. ICS-CERT explains the change:
The method used to collect and
report vulnerability data changed in 2016 from that used in prior years. In 2016,
ICS-CERT began reporting metrics data on vulnerability tickets closed within
the FY or CY accounting periods. This prevents reported metrics changing based
on work accomplished throughout the life of an open ticket. In previous year’s
reporting methods, actions taken prior to ticket closure could result in additional
follow-on work being required, which in turn could change the reported metrics.
It is therefore important to note that some information reported in published
alerts and advisories in 2016 may not be included in the FY or CY data cited
herein, since the associated vulnerability ticket may still be open. Data for
tickets will be included in the reporting period in which the ticket is closed.
While the change in data accounting is a legitimate attempt
to make the numbers more meaningful in the long run, it does make the 2016 data
impossible to compare directly with the same data from earlier years. The
report makes that clear at multiple points in the discussion, but ICS-CERT
continues to provide graphics comparing the numbers all the way back to 2010. I
predict that a number of agencies and organizations will not make the
distinction clear when they report on ICS-CERT vulnerability data.
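To make the accounting change concrete, here is a minimal sketch of the new
counting rule. The ticket records and the function below are mine, not
ICS-CERT's; the only facts taken from the report are the rule itself (a ticket
counts in the period in which it closes) and the standard federal FY/CY
boundaries:

```python
from datetime import date

# Hypothetical tickets: (opened, closed, validated vulnerability count).
# A closed date of None means the ticket is still open.
tickets = [
    (date(2015, 11, 3), date(2016, 2, 14), 3),   # closes in FY 2016 and CY 2016
    (date(2016, 8, 22), date(2016, 9, 30), 1),   # closes in FY 2016 and CY 2016
    (date(2016, 10, 5), date(2016, 12, 1), 2),   # closes in CY 2016 only (FY 2017)
    (date(2016, 6, 17), None, 5),                # still open: counted in neither
]

# Federal fiscal year 2016 ran October 1, 2015 through September 30, 2016.
FY2016 = (date(2015, 10, 1), date(2016, 9, 30))
CY2016 = (date(2016, 1, 1), date(2016, 12, 31))

def closed_in(period, tickets):
    """Return only the tickets whose closure date falls within the period."""
    start, end = period
    return [t for t in tickets if t[1] is not None and start <= t[1] <= end]

print("FY 2016 tickets closed:", len(closed_in(FY2016, tickets)))  # 2
print("CY 2016 tickets closed:", len(closed_in(CY2016, tickets)))  # 3
```

Note that under this rule a ticket opened in one period and closed in another
is counted only at closure, so the FY and CY totals will never reconcile
exactly, and an advisory published in 2016 can be absent from both data sets.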
Two Reports Make a Difference
Figure 3 in this year's report shows how changes in
vulnerability detection may reshape how future reports look. The final two
columns in the table report 2016 data including a "2 ticket
anomaly". The report explains these two tickets this way:
“The increase is primarily
associated with two (2) tickets closed in 2016 that contain 1,418 and 460
vulnerabilities. Because these 1,878 validated vulnerabilities were associated
with a small subset of affected products, there is some concern that these
outliers could bias the metrics associated with vulnerability type and Common
Vulnerability Scoring System (CVSS) scores. As a result, these are included in
the total number of vulnerabilities reported to ICS-CERT; however, this data is
not included in other metrics treated throughout this document.”
The two advisories are not named. They were a medical
system advisory for a product from CareFusion and another from Philips
Medical. Lest one think that this is just a medical device issue, there
have been two similar large-vulnerability-number advisories already published
this year for control system products from Rockwell
(62) and Schneider
(365). In each case the vulnerabilities come from third-party software or
libraries included in the control system product.
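The report's decision to set these two tickets aside is easy to justify with a
back-of-the-envelope calculation. In the sketch below, only the 1,418 and 460
figures come from the report; the per-ticket counts for the 'typical' tickets
are invented for illustration:

```python
from statistics import mean

# Only the 1,418 and 460 figures come from the report; the 'typical'
# per-ticket vulnerability counts below are invented for illustration.
typical = [2, 1, 4, 3, 1, 2, 5, 1, 3, 2]
outliers = [1418, 460]

print(mean(typical))             # 2.4 vulnerabilities per ticket
print(mean(typical + outliers))  # ~158.5; two tickets swamp everything else
```

Any per-vulnerability metric (CWE distribution, CVSS averages) computed over
the combined set would mostly describe those two products, which is exactly the
bias ICS-CERT was trying to avoid.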
ICS-CERT reports that these types of large-scale vulnerability
detections have been made possible “by using automated scanning tools”. It
would seem that the use of such tools will become more widespread and will
almost certainly become a prime tool for researchers working for organizations
with criminal or nefarious intent. This is of particular concern since many of
these identified vulnerabilities are very old (in cyber years) and many have
well-known exploits available.
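ICS-CERT does not describe the tools involved, but conceptually such a scan is
little more than matching the third-party components bundled in a product
against a database of known-vulnerable versions. The sketch below is a generic
illustration with made-up component names and placeholder vulnerability
identifiers, not a description of any actual tool:

```python
# All component names, versions, and vulnerability IDs here are placeholders.
known_vulnerable = {
    ("examplelib", "1.2.0"): ["EXAMPLE-VULN-0001", "EXAMPLE-VULN-0002"],
    ("otherlib", "0.9"): ["EXAMPLE-VULN-0003"],
}

# A hypothetical bill of materials for a control system product.
product_components = [("examplelib", "1.2.0"), ("otherlib", "2.1")]

for name, version in product_components:
    for vuln_id in known_vulnerable.get((name, version), []):
        print(f"{name} {version}: {vuln_id}")  # flags examplelib 1.2.0 twice
```

The point is that once such a database exists, finding hundreds of inherited
vulnerabilities in a product is cheap, for defenders and attackers alike.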
CWE-CVSS Analysis
This year’s report takes a completely different tack in
looking at the variety of vulnerabilities reported. Last year much use was made
of pie charts and word descriptions of the vulnerabilities. This year ICS-CERT
goes to a more formal use of Common Weakness Enumeration (CWE) numbers and
replaces multiple pages of pie charts with a single histogram showing the most
frequent CWE numbers reported.
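For those unfamiliar with the format, the underlying data reduction is simply a
frequency count over CWE identifiers, something like the following sketch (the
CWE assignments here are hypothetical, though all three are common ICS weakness
categories):

```python
from collections import Counter

# Hypothetical CWE assignments for a batch of reported vulnerabilities.
cwes = ["CWE-121", "CWE-20", "CWE-121", "CWE-787", "CWE-20", "CWE-121"]

# The report's histogram is just this tally, sorted by frequency.
for cwe, count in Counter(cwes).most_common():
    print(f"{cwe}: {count}")
```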
Instead of giving us greater detail on the types of vulnerabilities, ICS-CERT
has provided more detail on the severity of those vulnerabilities via a
compilation of the Common Vulnerability Scoring System (CVSS) data on them.
This includes a table of impact score results and a histogram for access vector
analysis.
What is interesting in the impact scoring table is that the
NIST CVSS impact categories are not equal-size portions of the 10.0 scale used.
The 'critical' category, for instance, is only one unit wide, the 'low'
category is four units wide, and the 'medium' and 'high' categories are three
and two units wide respectively. Thus, if we were to see a uniformly random
distribution of CVSS impact scores, we would expect a disproportionate share in
the two wider, lower categories. Instead, what we see in the report's Table 4
is a concentration in the two narrower categories at the dangerous end of the
spectrum. This is further reflected in the table's reported 'CVSS Statistics':
an average value of 7.8 and a median value of 7.5. Unfortunately, ICS-CERT
again fails to provide a key statistic, the standard deviation, which would add
more depth to the statistics reported.
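A short sketch makes both points: the unequal band widths, and what a standard
deviation would add. The band boundaries are the standard CVSS v3 severity
ratings; the individual scores below are invented, chosen only so that their
mean (7.8) and median (7.5) match the report's figures:

```python
from statistics import mean, median, stdev

# Standard CVSS v3 qualitative severity bands: unequal widths on the 0-10 scale.
bands = {"Low": (0.1, 3.9), "Medium": (4.0, 6.9),
         "High": (7.0, 8.9), "Critical": (9.0, 10.0)}

# Share of the scale each band covers; a uniformly random score would
# land in each band in roughly this proportion.
for name, (low, high) in bands.items():
    print(f"{name}: {(high - low) / 10:.0%} of the scale")

# Invented scores, chosen so the mean (7.8) and median (7.5) match the
# report's Table 4; the standard deviation is the statistic ICS-CERT omits.
scores = [5.0, 6.2, 6.8, 7.4, 7.5, 7.5, 9.0, 9.0, 9.6, 10.0]
print(round(mean(scores), 2), median(scores), round(stdev(scores), 2))
# 7.8 7.5 1.59
```

With a mean of 7.8, a standard deviation would tell us whether the scores are
tightly bunched in the 'high' band or spread across the whole scale; that is
information the current report simply does not give us.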
Rating the Report
ICS-CERT has a mixed history with the wide variety of annual
reports that it produces. I have very little use for many of them because of
their lack of internally consistent definitions and their misleading data. This
report is, fortunately, better than average for ICS-CERT.
The information is useful for its summary of the types of vulnerability reports
that ICS-CERT produced over the year in question and the limited details
available from those reports.
Unfortunately, ICS-CERT is still guilty of publishing
four-color glossy corporate reports that do more to confuse than to clarify.
The unexplained combining of fiscal year and calendar year reporting provides
no new insights into the process. Changing the definition of what constitutes a
reportable ticket makes analysis of year-to-year trends questionable, but those
trends will inevitably be misreported in the press and abused by politicians
and activists.
I did like the start made at looking at the numerical data behind the CVSS
scores. More of that should be included in next year's report. Unfortunately,
reporting that data is going to have to include some sort of statement about
its consistency. While the CVSS system is an
attempt to provide repeatable information about vulnerabilities, it is not a
physical measure. This means that there is some bias in that data, depending on
who scores the vulnerability. If ICS-CERT is providing the individual scores
used in the system, that bias is at least somewhat consistent. If vendors are
the source of the scoring, the bias will be much less structured. The data
source needs to be disclosed.
In any case, I do recommend that those with an interest in
the security of industrial control systems read this report. Just be
careful about how you use the inconsistent data.