I typically don’t try to promote specific security systems, as I am not a ‘qualified expert’ in much of anything that would allow me to make an authoritative evaluation of any particular product. Every once in a while, though, I run across (thanks in this case to a SCADAHacker tweet) an evaluation by a person or organization that should be qualified to do such an analysis, and I think it’s worthwhile to look at those evaluations. I recently ran across a TSA report on a video analytics system used to secure an airport perimeter that falls into this category.
The report was prepared as part of TSA’s Airport Perimeter Security project, which provides technical evaluations of perimeter security systems currently employed at facilities around the country. The project should give security managers an important independent evaluation of integrated security products to supplement the claims made by manufacturers and system integrators. This is apparently the first of 15 (perhaps 21; the wording of the report is rather vague) such reports that TSA is currently preparing.
The actual evaluation was done by the National Safe Skies Alliance, a non-profit organization formed to “support testing of aviation security technologies and processes”.
One would expect that an in-depth review of a security system would involve the disclosure of some sensitive information that might be useful to someone trying to compromise that system. This report is no exception. TSA has dealt with that by redacting (blacking out) certain information in the report. While the redactions protect the security of the installation being evaluated, they also somewhat compromise the usefulness of the evaluation.
For example, the report redacts a site diagram (page 3, 15 Adobe) showing the areas covered by the video system; an understandable exclusion. Partially understandable, but certainly less helpful to security managers, is the redaction of the detection rates in the test results for the four individual intrusion techniques tested (with all details of the techniques themselves also redacted). What makes this somewhat confusing is that in the summary discussion of system accuracy the report notes that over 900 intrusion scenarios were performed (four intrusion techniques performed at a variety of locations within the detection range of seven devices) and that “every alarm instance was accurately reported through the primary management software” (page 13, 25 Adobe).
So what is redacted is the rate of failure to detect; darn, that could be valuable information for security managers. What is less clear is how releasing it would compromise system security unless the detection rate is extremely poor. If the system had a high detection rate (say 80%, for the sake of discussion), that would warn attackers to stay away, since their attack would have an 80% chance of being detected at the perimeter. On the other hand, if the detection rate were low (say less than 20%), that might make an attacker more willing to risk the attack.
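The deterrence arithmetic behind this can be made concrete. The sketch below is purely illustrative (the 80% figure is the hypothetical value from the discussion above, not data from the report): with a per-attempt detection probability p, the chance that an attacker gets through n perimeter crossings undetected is (1 − p)^n.

```python
def evasion_probability(detection_rate: float, attempts: int = 1) -> float:
    """Probability that every one of `attempts` perimeter crossings
    goes undetected, given an independent per-attempt detection rate."""
    return (1.0 - detection_rate) ** attempts

# With the hypothetical 80% detection rate, a single crossing has only
# a 20% chance of going unnoticed; two independent crossings drop that
# to 4%. This is why publishing a *high* rate arguably aids deterrence.
single = evasion_probability(0.80)
double = evasion_probability(0.80, attempts=2)
```

The independence assumption is, of course, a simplification; a real attacker probing the same sensor gap repeatedly would not face independent trials.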
While one can understand why much of the redacted information is not available, the information that is simply missing from the report is much more bothersome. One of the general complaints about automated surveillance systems is their relatively high rates of nuisance alarms (alarms triggered by natural environmental movement) and false alarms (detections with no known cause). Those rates are missing from this report.
In the ‘Scope’ section of the report the author notes that the evaluation period was not long enough to establish nuisance or false alarm rates or to determine their causes. I find this hard to believe when there was time enough to evaluate 900 intrusion attempts by two field testers. At the very least, the report should have included the number of nuisance and false alarms observed during the test period. That count may not be statistically sufficient to establish a true rate, but it would provide valuable data in any case.
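To illustrate the statistics point: even a raw alarm count over a known number of observation intervals would let a reader put a confidence bound on the alarm rate, which is exactly the "valuable data" a short test can still provide. A minimal sketch using the standard Wilson score interval (the 12-alarms-in-240-intervals figures are hypothetical, not from the report):

```python
import math

def wilson_interval(events: int, intervals: int, z: float = 1.96):
    """95% Wilson score confidence interval for a binomial proportion,
    e.g. the fraction of observation intervals containing an alarm."""
    if intervals == 0:
        return (0.0, 1.0)
    p = events / intervals
    denom = 1 + z**2 / intervals
    center = (p + z**2 / (2 * intervals)) / denom
    half = (z / denom) * math.sqrt(
        p * (1 - p) / intervals + z**2 / (4 * intervals**2)
    )
    return (max(0.0, center - half), min(1.0, center + half))

# Hypothetical: 12 nuisance/false alarms across 240 hourly intervals.
low, high = wilson_interval(12, 240)
```

Even with only a handful of events, the width of the interval tells a security manager how much (or how little) a short test window actually constrains the true alarm rate.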
What concerns me more is the report’s statement that it could not distinguish between nuisance alarms and false alarms (an important distinction) because the causes of alarms “had not been recorded by BUF (airport security) personnel”, so there was no way to verify alarm type. This would seem to indicate that security personnel were not really paying attention to the alarms on their system, or at the very least were not investigating alarms thoroughly enough to determine whether an intrusion was actually taking place. That is not a fault of the report, but rather of security management at the facility.
Interestingly, in the discussion of the results portion of the report there is a large redacted box in the section dealing with “Nuisance and False Alarm Reporting” (page 12, 24 Adobe). It would be really nice to know what was discussed there.
Overall Report Evaluation
I’m glad to see that TSA is having this type of system evaluation done. Unfortunately, the usefulness of the information presented is compromised by the redaction of the evaluation data. In most cases I can understand, and even agree with, the reasoning for redacting this data in the public version of the report. For the project to be worthwhile, however, TSA is going to have to find a way to make the un-redacted information available to airport security managers and to security managers at other critical infrastructure sites. Otherwise these reports will just sit on a shelf collecting dust.