I typically don’t try to promote specific security systems as I am not a ‘qualified expert’ in much of anything that would allow me to make an authoritative evaluation of any particular product. Every once in a while, though, I run across (thanks in this case to a SCADAHacker tweet) an evaluation of a system by a person or organization that should be qualified to do such an analysis, and I think it’s worthwhile to look at such evaluations. I recently ran across a TSA report on a video analytics system used to secure an airport perimeter that falls into this category.
The Report
The report was prepared as part of TSA’s Airport Perimeter Security project, which provides technical evaluations of perimeter security systems currently being employed at facilities around the country. This project should provide security managers with an important independent evaluation of integrated security products to supplement claims made by manufacturers and system integrators. This is apparently the first of 15 (perhaps 21; the wording of the report is sort of vague) such reports that TSA is currently preparing.
The actual evaluation was done by the National Safe Skies Alliance, a non-profit organization formed to “support testing of aviation security technologies and processes”.
Redacted Information
One would expect that an in-depth review of a security system would involve the disclosure of some sensitive information that might be useful to someone trying to compromise that system. This report is no exception. TSA has dealt with that by redacting (blacking out) certain information in the report. While this protects the security of the installation being evaluated, it does somewhat compromise the usefulness of the evaluation.
For example, the report redacted a site diagram (page 3, 15 Adobe) showing the areas covered by the video system; an understandable exclusion. Partially understandable, but certainly less helpful to security managers, was the redaction of the intrusion detection rates in the test results for the four individual intrusion techniques tested (with any details of the intrusion techniques also redacted). What makes this somewhat confusing is that in the summary discussion of system accuracy the report notes that over 900 intrusion scenarios were performed (four intrusion techniques performed at a variety of locations within the detection range of seven devices) and that “every alarm instance was accurately reported through the primary management software” (page 13, 25 Adobe).
So what is redacted is the rate of failure to detect; darn, that could be valuable information for security managers. What is less clear is how this would compromise system security unless the detection rate is extremely poor. If the system had a high detection rate (say 80% for the sake of discussion) that would warn attackers to stay away, since their attack would have an 80% chance of being detected at the perimeter. On the other hand, if the detection rate were low (say less than 20%) that might make the attacker more willing to risk the attack.
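To make that reasoning a little more concrete, here is a minimal sketch (my own illustration, not anything taken from the report; the detection rates and zone counts are purely hypothetical) of how a per-zone detection probability translates into an attacker's overall odds of crossing the perimeter undetected:

    # Hypothetical illustration of the deterrence argument above; the detection
    # probabilities and zone counts are made-up numbers, not values from the report.

    def evasion_probability(p_detect_per_zone, zones_crossed):
        """Chance an intruder crosses every zone without triggering an alarm,
        assuming each zone detects independently with the same probability."""
        return (1.0 - p_detect_per_zone) ** zones_crossed

    for p in (0.80, 0.20):          # "high" vs. "low" per-zone detection rates
        for zones in (1, 2, 3):     # number of detection zones the intruder must cross
            print(f"Pd = {p:.0%} per zone, {zones} zone(s) crossed: "
                  f"{evasion_probability(p, zones):.1%} chance of getting through undetected")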
Missing Information
While one can understand why much of the redacted information is not available, the information that is simply missing from the report is much more bothersome. One of the general complaints about automated surveillance systems is their relatively high rates of nuisance alarms (natural environmental movements that set off the detectors) and false alarms (detections with no known cause). Those rates are missing from this report.
In the ‘Scope’ section of the report, the author notes that the evaluation period was not long enough to establish nuisance or false alarm rates or to determine their causes. I find this hard to believe when there was time enough to evaluate 900 intrusion attempts by two field testers. At the very least, the report should have included the number of nuisance or false alarms observed during the test period. This may not be statistically sufficient to establish a true rate, but it would provide valuable data in any case.
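That said, the statistical-sufficiency point is real: a short test window only pins down a long-run alarm rate within a fairly wide range. The sketch below (my own illustration; the alarm count and observation window are invented numbers, not data from the report) shows how wide a confidence interval on an alarm rate can be when it comes from a short evaluation period:

    # Hypothetical illustration: how uncertain a nuisance/false alarm rate is when
    # it is estimated from a short observation window.  The observed count and the
    # window length below are invented, not data from the TSA report.
    from scipy.stats import chi2

    def poisson_rate_ci(events, exposure_days, conf=0.95):
        """Exact (Garwood) confidence interval for a Poisson rate, per day."""
        alpha = 1.0 - conf
        lower = 0.0 if events == 0 else 0.5 * chi2.ppf(alpha / 2, 2 * events) / exposure_days
        upper = 0.5 * chi2.ppf(1 - alpha / 2, 2 * (events + 1)) / exposure_days
        return lower, upper

    observed_alarms = 12      # hypothetical alarms seen during the test period
    test_window_days = 14     # hypothetical length of the evaluation period
    lo, hi = poisson_rate_ci(observed_alarms, test_window_days)
    print(f"Point estimate: {observed_alarms / test_window_days:.2f} alarms/day")
    print(f"95% CI: {lo:.2f} to {hi:.2f} alarms/day")

Even so, reporting the raw count would have told security managers far more than reporting nothing.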
What concerns me more is that the report states it could not distinguish between nuisance alarms and false alarms (an important distinction) because the causes of alarms “had not been recorded by BUF [airport security] personnel”, so there was no way to verify alarm type. This would seem to indicate that security personnel were not really paying attention to the alarms on their system, or at the very least were not investigating alarms sufficiently to determine whether an intrusion was actually taking place. This is not a fault of the report, but rather of the security management at the facility.
Interestingly, in the results discussion portion of the report there is a large redacted box in the section dealing with “Nuisance and False Alarm Reporting” (page 12, 24 Adobe). It would be really nice to know what was discussed there.
Overall Report Evaluation
I’m glad to see that TSA is having this type of system evaluation done. Unfortunately, the usefulness of the information presented is compromised by the redaction of evaluated data. In most cases I can understand, and even agree with, the reasoning for the redaction in the public presentation of this data. For this to be worthwhile, however, TSA is going to have to find a way to make the un-redacted information available to airport security managers and to security managers at other critical infrastructure sites. Otherwise this report will just sit on a shelf collecting dust.
2 comments:
Patrick, my name is John Romanowich and I am the President and CEO of SightLogix. It was our equipment that was validated by this TSA report at Buffalo Airport. I am a reader of your blog and serve as the Chairman of the Security Industry Association's CFATS workgroup.
Fortunately, the reports (in their full content) are posted on TSA's Secure Webboard. Every airport is required by regulation to establish an Airport Security Coordinator (ASC) position. The ASC has webboard access provided to them by the TSA. It's also the location where they go to retrieve Security Directives. As such, I am told over 400 domestic airports have access to this full report.
Unfortunately, I do not have access to the full report, nor does anyone else without access to the TSA webboard. I can, however, provide some additional information that is not confidential. There is more to the story than what was mentioned about Buffalo Airport. Our equipment went through a 9-month, three-season test by the TSA before being funded at Buffalo Airport. And they are now testing another one of our products through three seasons. This funding and testing is based upon the favorable results achieved in their test environments.
SightLogix was founded to make outdoor video analytic cameras with a high probability of detection and low nuisance alerts. I am confident the testing yielded a probability of detection well above 80% because we run our own tests on our systems during qualification and will not leave the site without 100% coverage and margin to spare. In fact, since we have achieved an extremely high level of accuracy, our focus turned to cost reduction in the last year and we are now able to offer a unit at twice the performance of those originally tested, and at mainstream pricing, with a recently introduced newer design.
As you mentioned, the key is a high probability of detection with low nuisance alerts. Since our cameras are all geo-registered to the scene, we know the location of the target with good accuracy, allowing the filtering of small animals very accurately based upon size. All said, should more of the report become publicly available with respect to their measured detection percentage and rate of nuisance alerts, I will post it on your blog…
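[For readers unfamiliar with the geo-registration point Mr. Romanowich makes: once a camera's height, tilt, and focal length relative to a flat ground plane are known, a detection's pixel footprint can be converted to an approximate physical size, and targets below a size threshold can be discarded. The sketch below is only a rough, hypothetical illustration of that general idea under simple pinhole-camera, flat-ground assumptions; it is not SightLogix's actual algorithm, and all of the numbers in it are made up.]

    # Rough sketch of size filtering with a geo-registered camera: a simplified
    # flat-ground, pinhole-camera illustration of the general idea, not
    # SightLogix's actual algorithm.  All numbers below are hypothetical.
    import math

    def estimate_target_height_m(box_top_px, box_bottom_px, image_cy_px,
                                 focal_px, cam_height_m, cam_tilt_deg):
        """Approximate a target's physical height from its image bounding box,
        assuming flat ground, a camera mounted cam_height_m above the ground and
        tilted cam_tilt_deg below horizontal, and that the bottom of the box is
        where the target touches the ground."""
        # Angle below horizontal of the ray through the target's footpoint.
        foot_angle = math.radians(cam_tilt_deg) + math.atan2(box_bottom_px - image_cy_px, focal_px)
        if foot_angle <= 0:
            return None  # footpoint at or above the horizon; range is undefined
        slant_range_m = cam_height_m / math.sin(foot_angle)   # camera-to-footpoint distance
        box_height_px = box_bottom_px - box_top_px
        return box_height_px * slant_range_m / focal_px       # small-angle approximation

    # Hypothetical detection: keep person-sized targets, drop small animals.
    MIN_TARGET_HEIGHT_M = 1.0
    height_m = estimate_target_height_m(box_top_px=215, box_bottom_px=260,
                                        image_cy_px=240, focal_px=800,
                                        cam_height_m=6.0, cam_tilt_deg=10.0)
    if height_m is not None and height_m >= MIN_TARGET_HEIGHT_M:
        print(f"Alarm: target is roughly {height_m:.1f} m tall")
    else:
        print("Filtered: target below the size threshold (or above the horizon)")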
It is a very good article on physical security. Employees feel more secure in the presence of security devices. These devices also raise the productivity of employees as they feel that they are being viewed.