Friday, October 29, 2010

2011 Chemical Sector Security Summit

Today DHS updated their Chemical Sector Security Summit page to shift the focus to the 2011 Summit. The dates for the Summit were announced: July 6th and 7th, 2011, and it will again be held in Baltimore, MD. While the agenda has yet to be announced, the revised web page does explain that the Summit will be preceded by a ‘Demonstration’ on July 5th and followed by two ‘Workshops’ (Explosives and Control System Security) on July 8th. No word yet on whether there will be separate registrations for the three events.

The revised web page still contains links to the presentations from the 2010 Summit. These provide a good example of the type of information that can be expected. If you can afford the time and travel (the Summit itself is free), this is probably a very good meeting for anyone on the security management team of a high-risk chemical facility.

BTW: This announcement comes almost a month earlier than the initial announcement of the 2010 Summit last year. This is getting to be a really big deal. Maybe we can convince DHS to provide videos of some of the key presentations for this upcoming Summit.

DHS ICS-CERT Control System Internet Accessibility Alert

Today DHS ICS-CERT posted a new alert on their Control Systems Security Program page. They report that:
“The ICS-CERT has recently received several reports from multiple independent security researchers who have employed the SHODAN search engine to discover Internet facing SCADA systems using potentially insecure mechanisms for authentication and authorization. The identified systems span several critical infrastructure sectors and vary in their deployment footprints. ICS-CERT is working with asset owners/operators, Information Sharing and Analysis Centers (ISACS), vendors, and integrators to notify users of those systems about their specific issues; however, due to an increase in reporting of these types of incidents, ICS-CERT is producing a more general alert regarding these issues.”
The information returned by the SHODAN search engine could make it easier for an attacker to gain access to the identified systems. Two earlier ICS-CERT publications describe how this information could be used to gain access to, or control of, these systems.

Mitigation

ICS-CERT recommends:

• Placing all control systems assets behind firewalls, separated from the business network,
• Deploying secure remote access methods such as Virtual Private Networks (VPNs) for remote access,
• Removing, disabling, or renaming any default system accounts (where possible),
• Implementing account lockout policies to reduce the risk from brute-force attempts,
• Implementing policies requiring the use of strong passwords, and
• Monitoring the creation of administrator level accounts by third-party vendors.
I recall seeing this briefly discussed on the SCADASEC List earlier this week, but I cannot find a copy of that discussion.
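Two of the recommendations above, disabling default accounts and requiring strong passwords, lend themselves to a quick self-audit. Here is a minimal, hypothetical sketch (my own illustration, not from the ICS-CERT alert) of what such a check might look like; the account names, default-account list, and 12-character rule are invented for illustration.

```python
# Minimal, hypothetical sketch of a default-account and password audit.
# The default-account list and the 12-character rule are illustrative only.

COMMON_DEFAULT_ACCOUNTS = {"admin", "administrator", "root", "guest", "operator"}

def audit_accounts(accounts):
    """accounts: list of (username, password) pairs exported from a device."""
    findings = []
    for user, password in accounts:
        if user.lower() in COMMON_DEFAULT_ACCOUNTS:
            findings.append(f"{user}: default account name still present")
        if len(password) < 12 or password.lower() == user.lower():
            findings.append(f"{user}: password fails a basic strength check")
    return findings

if __name__ == "__main__":
    sample = [("admin", "admin"), ("jsmith", "correct-horse-battery-staple")]
    for finding in audit_accounts(sample):
        print(finding)
```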

Reader Comments 10-28-10 ICS Security

Yesterday’s posting on who is responsible for ICS security has attracted more response than any other post that I have written over the last three years. There have been two comments posted on this site from an anonymous reader, a couple of comments over on the SCADASEC list, a blog post over on ControlGlobal.com, and at least one Tweet. The comments have been generally positive, but they do point out some issues that were not adequately addressed in the original blog.

Not Likely to be Attacked

My anonymous reader and a commenter over on SCADASEC had nearly identical responses to my comment that the local widget manufacturer was unlikely to be attacked by terrorists. They noted that the local widget manufacturer was more likely to be “the victim of a successful attack than a major installation” because it would be less likely to be adequately protected due to resource issues.

Vulnerability is not typically the key factor in determining who is likely to be attacked by terrorists. One of the key factors considered in terror target selection is the consequence of the attack. While this assessment will be made using differing values depending on the terror group involved, we can make the general statement that the more spectacular the result, the more likely the facility is to be targeted. Once potential targets are selected, the question of vulnerability is raised; the more vulnerable of the spectacular targets are the most likely to be attacked.

Having said that, I must agree that everything is a potential terrorist target. There are just too many terrorist groups out there to be able to judge in advance all target selection criteria. An al Qaeda group is more likely to attack an Israeli-owned widget manufacturer than an Iranian-owned chemical plant. Still, the likelihood of the average widget manufacturer being targeted is much lower than that of the large chemical plant; there just isn’t enough potential political gain from attacking the admittedly weaker target.

But the point I was making in my blog was that a complete reworking of our SCADA systems to a more secure, non-Windows platform would be a very expensive undertaking. Spending that money would not be a reasonable response at the widget manufacturer; the expense would be much greater than the reduction in risk would justify. That doesn’t mean that the widget manufacturer shouldn’t take security measures, just that those measures don’t need to be as elaborate. This is the whole basis for the risk-based security procedures embodied in CFATS.
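To make the risk-based argument concrete, here is a toy sketch (my own illustration, emphatically not the CFATS methodology) of a crude risk score; all of the numbers are invented.

```python
# Toy illustration only -- not the CFATS risk methodology.
# A crude risk = threat x vulnerability x consequence score, each factor rated
# 1 (low) to 5 (high), shows why the same security spend is not justified everywhere.

def risk_score(threat, vulnerability, consequence):
    return threat * vulnerability * consequence

facilities = {
    "widget manufacturer (small bleach tank)": risk_score(1, 4, 1),
    "bulk chlorine producer (railcar shipments)": risk_score(4, 3, 5),
}

for name, score in sorted(facilities.items(), key=lambda item: item[1], reverse=True):
    print(f"{name}: {score}")
```

The widget maker's higher vulnerability does not come close to offsetting the difference in consequence, which is why the more elaborate (and expensive) measures belong at the chemical plant.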

To use an extreme example for illustrative purposes, let’s look at political kidnappings. In Mexico, it is becoming more common for various drug cartels to use kidnappings to convince local governments to look the other way when it comes to anti-cartel law enforcement. Now everyone in Mexico is a potential target for these kidnappers, but mayors are much more likely targets. It is reasonable to provide extensive personal security measures to mayors, but the same measures would not reasonably be provided to street vendors.

In a similar vein, Walt Boyes over at ControlGlobal.com notes: “Couple of things he says, though, should be taken issue with. One is that not every industrial facility with a control system needs to worry about an attack. He specifically mentions food processing.”

Obviously I wasn’t as clear as I had hoped when I asked: “How about food processing companies or drug manufacturers?” I was juxtaposing auto manufacturers and food processors, two types of manufacturers that have not been much mentioned in public terror target discussions. Given the very public food recall issues in recent years, I certainly think we must consider food and drug manufacturers to be potential terror targets; probably not as high a risk as a petrochemical facility, but more at risk than an auto parts supplier. There is a continuum of risk to consider.

Other Attacks

Walt also pointed out that I ignored two other types of ‘assaults’ on industrial control systems: insider attacks and accidental cyber incidents. Both of these should certainly be of concern to process control system developers and system users. The latter is not an attack and the former is not a terrorist attack; as such, they require different types of preventive measures than terror attacks do.

Walt and Joe Weiss have long been proponents of paying as much attention to accidental control system upsets as to potential terrorist attacks. Such problems are much more likely to occur at any given facility than a terror attack, and in the aggregate are far more common than any terror attack. Walt and Joe argue that the vulnerabilities demonstrated in these incidents are illustrative of the potential risks of a terrorist attack. Besides, their relatively common occurrence demands its own preventive response. I certainly agree with both of these points.

At the risk of being accused of using a buzzword, I think that control systems need to be made much more resilient. Making sure that these systems are less likely to accidentally cause severe, life-threatening incidents should be a priority for vendors. Many of the resilience-enhancing measures would also help protect against terror attacks, but they should not have to rely on counter-terror justifications to be considered. Remember, the widget manufacturer is probably more likely to be affected by these problems than the large chemical manufacturer because of its lack of trained programming staff.

While many of the counter-terror security measures would also provide some measure of protection against an insider attack by a disgruntled employee (or a terrorist-associated employee, which may be the same thing in some instances), the most effective protective measures will have little to do with counter-terrorism. They include proactive employer-employee relations, active problem resolution mechanisms, access to counseling and mental health services, and supervisor training to recognize potential problem employees.

9/11 References

Matt Franz had an interesting Twitter® comment about the blog post, noting that “I could use one less 9/11 reference”. I must admit that, on reflection, three separate references to 9/11 in a single blog post might be a bit much. As a writer, the phrase is such good shorthand for a wide variety of ideas that it has almost become a cliché.

One thing that we must never lose sight of, though, as we ponder counter-terror security measures is that 9/11 is a bright-line boundary in the history of counter-terrorism in the United States. Before that date most Americans, including politicians and industrial planners, considered terrorism to be an overseas threat, not something to be seriously worried about here in the ‘good ole USofA'.

Before that date domestic terror attacks had been the purview of lone wackos like McVeigh or Kaczynski; random events that most people wouldn’t get any more concerned about than being involved in a car wreck. The foreign attacks were considered to be more of a joke than a real threat (A truck bomb to bring down a skyscraper? Get real. Contaminated salad bars in Oregon? Come on.). After that date, the terror threats that the rest of the world had been dealing with for decades were finally real to most Americans.

One final point about 9/11: security professionals and politicians think very seriously about the American public’s response to the 9/11 attack. Americans were upset at the lack of protection they had been provided against such an attack, almost as upset as they were about the attack itself. Everyone knows that there will be serious political blood-letting if there is another major attack on US soil. And the security community knows that such an attack is almost inevitable.

Francis Turner reminds us in his posted comment of this quote from Admiral Mike McConnell, the former head of the NSA and former DNI; “The USA will do nothing to stop cyber attacks until a large attack against the country is successful – and at that point the government will step in and do the wrong thing”. Violent knee-jerk reactions are usually painful.

More Discussion Needed

While it is certainly good for this writer’s ego to see this blog post being discussed so widely, it is even more important that more people be brought into the discussion. Providing serious protection to the highest risk control systems is going to be very expensive and even the lower risk systems are going to need costly upgrades to provide the minimum level of terror-attack preventive measures.

What level of protection is needed is a very important discussion to have; important at the corporate level, the community level, and the national level. Make no mistake about it, the average American is going to have to pay for the protection. The payments will be made in higher prices, higher taxes, or, more likely, both. This is why this should be an important political discussion, not just a technical conversation.

As an industry and as a community, we need to push the discussion outwards into the political arena. If we don’t, we will be forced to shoulder the blame after the attack. We will be held accountable by the American public and their politicians.

Thursday, October 28, 2010

Who is Responsible for ICS Security?

There is an interesting post over at ThreatPost.com that looks at how some industrial control system software vendors don’t seem to be taking ICS security seriously. This is really just part of the larger discussion that is taking place in the control system security community. I have been following the discussion through a number of security company blogs and the SCADASEC List. It’s been interesting seeing the chicken-or-egg discussion about who is responsible for the current sorry state of security of industrial control systems, the people who design the software systems or the people who buy the systems.

The Issues

One side maintains that the vendors are responsible for the design decisions that make these software systems inherently vulnerable to cyber attacks. The use of the Windows® operating system exposes them to a host of known attacks, yet the system design makes it difficult to apply the patches that keep those programs secure. To compound that problem, there are obvious security holes in the control programs themselves that were either overlooked or actively implemented with no thought to the security implications.

The advocates for the developers argue in turn that the users get what they pay for. The ever present push for lower priced systems requires that the vendors and their developers take the identified shortcuts. Even when security upgrades are offered, users are not interested in paying the added cost for those security add-ons.

To make matters worse, control systems are major capital purchases that companies expect to use for a long time. The cost of control systems is not just the money paid for the software and hardware, but must include the cost of integrating that software with the facility controllers and sensors. Making any changes to the software or hardware undoes much of that work, increasing upgrade costs and decreasing revenues while the facility is off-line getting everything working again.

Finally, until just recently, there really was no problem. No one was attacking control systems, for a host of reasons. Control systems were isolated from outside access, and they were too complicated and site-unique for outsiders to attack them effectively anyway. Besides, what incentive was there for anyone to attack these systems? Yes, there have been isolated problems, but they have been insider attacks that security systems would not typically prevent anyway.

A Broader View

Actually, this is not a discussion that is unique to the control system community. Since 9/11 the issue of security in many industries has seen the same back-and-forth finger-pointing exercise. Private industry has struggled to get its arms around the problem of preventing terrorist attacks on its facilities. Since there has been no actual attack on an industrial facility in this country, or even a serious plot against one, companies are having a hard time justifying the kinds of expenditures that are necessary to prevent serious attacks on their facilities.

The Stuxnet discussion has not changed that perception. Most commentators have been expressing their awe at the amount of time, expertise, and money that must have gone into the development and deployment of Stuxnet. The comments that it must have been a nation-state attack on a politically motivated target just reinforce the perception that most domestic industries simply don’t have to worry about such a sophisticated attack against their facilities. After all, there is nothing that they have done that would attract the ire of the US or Israel or any other computer-savvy country.

Of course, no one seriously thought on September 10th, 2001 that anyone would consider flying an airliner into an iconic skyscraper in New York City. I mean, how could that be a target (I know, some incompetent tried to blow it up with a truck bomb, but really…)? But remember, afterwards everyone saw it coming.

To avoid being second-guessed after some future attack, what do we need to do to protect ourselves from an, as of yet, unknown probability of attack on an industrial control system? Remember, the current IT industry security standards would not have protected a facility from the Stuxnet worm. A number of people are suggesting that there needs to be a wholesale reworking of ICS platforms to one that is not as susceptible to attack. Others suggest that a comprehensive defense-in-depth standard would protect against most attacks and reduce the potential impact of those that do get through.

The first approach is going to be very expensive since the proposed systems don’t yet exist. They will have to be designed from the ground up and recouping the development costs is going to significantly raise the initial cost of these systems. Not to mention the lost production time involved in tearing out the old systems, installing, and tweaking the new systems.

What Facilities Really Need ICS Security?

Not every facility with an industrial control system is at risk of terrorist attack. The plant down the street that makes widgets and has a small tank of bleach that it uses to clean the widgets is not likely to be attacked. The big-time chemical facility that is producing chlorine and bleach and shipping it in railcar lots is certainly a potential target. Clearly the widget factory does not need the same level of ICS security as the big-time chemical facility. Where do we draw the line between these two facilities?

For chemical facilities the line is easier to draw because of the CFATS regulations. By definition any facility covered under CFATS is considered by DHS to be at high-risk for a terrorist attack. Presumably, all other things being equal, those facilities that are not covered are not at high-risk of a terrorist attack. Even in CFATS, DHS has risk-ranked each covered facility into four tiers. Again, presumably the highest-risk covered facilities need the best security protections.

For other industries the line can be harder to draw. Energy production, because it is all linked into one of three(?) power grids and taking out one producer could have a serious impact on all other producers, certainly needs high-level ICS security. But how do you rank the security threat of automotive manufacturers or their suppliers? How about food processing companies or drug manufacturers?

The other question is: do we just need to worry about terrorist attacks? Are there others out there that might want to attack a facility for non-political reasons? We keep hearing rumors of attacks on facilities in Eastern European countries where the attack is executed just to extort the owners into paying cyber-protection money. Do we need to protect against the same thing happening here?

Public Discussion Needed

This discussion needs to be taken out of the realm of just the computer security specialists. They can provide the information on what is needed to secure these systems, but the value judgment of what needs to be protected is at heart a political discussion. Endless fighting about who is responsible for the current sorry state of affairs is a pointless exercise. We need to decide which systems need to be protected to which standards and then work towards that goal. And we need to get started pretty quickly; there is no telling when the next 9/11 will start the public looking for people to blame.

Wednesday, October 27, 2010

AF Cyberspace Doctrine and ICS Security

Thanks to the folks over at HSDL.org I was directed to a copy of a relatively new US Air Force doctrine manual, Cyberspace Operations, AFDD 3-12. As with most doctrine-type manuals, it is full of flowery language describing generalities about warfare. There is not much there for cyber security in general or chemical facility security specifically. However, there are a few items of interest to our communities.

AF Support for Domestic Operations

There has been much made of the recent DHS-DOD memorandum of understanding about cybersecurity operations in the US. There are elements within this country that greatly fear any US military operations within the United States as a major threat to civil liberty. This doctrine manual very briefly addresses domestic operations (pgs 36-7).

The Air Force does little to defuse the concerns noted above when it says:

“Attack and exploitation operations in an HD scenario may involve complex legal and policy issues; however, these issues do not prevent the application of attack and exploitation operations for HD, but temper it.”
Taken out of context this sounds disturbingly like a willingness to use ‘cyber forces’ against US civilians. Terms like ‘attack and exploitation operations’ raise red flags and provide a shot of adrenaline to anti-government radicals. It is typical that a military writer would not recognize the emotionally laden connotation of those words. It certainly would have been less loaded to say ‘Cyberspace operations’ instead of ‘Attack and exploitation operations’.

Reading further on the same page, we see a more reasonable clarification of the types of operations the Air Force would conduct in defense of the homeland and under whose control it would operate.

“Properly implemented cyberspace operations support defense of the homeland. When a domestic incident occurs, the escalation processes inherent in civil support procedures are implemented. A non-DOD civilian agency is in charge of civil support incidents, and military assistance is provided through a relationship similar to direct support, as articulated in civil support agreements and the Standing civil support EXORD [Executive Order]. In all cases, the Air Force is prepared to support homeland operations through intelligence and information sharing within the appropriate legal framework.”
They then go on to describe efforts that the Air Force undertook to re-establish ‘cyberspace infrastructure’ after Hurricane Katrina:

“Based on their expertise for establishing the cyberspace domain, Air Force combat communications groups deployed throughout the Gulf region to reconstitute the cyberspace domain and allow military and US government organizations to communicate and be connected for situational awareness and C2[command and control].”
Ten Things Every Airman Must Know

One of the things that this document makes very clear is that everyone in the Air Force has an important role to play in cyberspace operations, regardless of in which command they may serve. The Air Force has always been a very technology oriented organization and that has never been more true than today. It is important for everyone in the Air Force to understand that they have an individual responsibility for defending their local piece of cyberspace.

While this is apparent throughout the document it is specifically addressed in Appendix A, Ten Things Every Airman Must Know (pg 39). This list could easily be adapted by any company to apply to all of their employees. With that said, here is the Air Force list.

1. The United States is vulnerable to cyberspace attacks by relentless adversaries attempting to infiltrate our networks at work and at home – millions of times a day, 24/7.

2. Our enemies plant malicious code, worms, botnets, and hooks in common websites, software, and hardware such as thumbdrives, printers, etc.

3. Once implanted, this code begins to distort, destroy, and manipulate information, or “phone” it home. Certain code allows our adversaries to obtain higher levels of credentials to access highly sensitive information.

4. The enemy attacks your computers at work and at home knowing you communicate with the Air Force network by email, or transfer information from one system to another.

5. As cyber wingmen, you have a critical role in defending your networks, your information, your security, your teammates, and your country.

6. You significantly decrease our enemies’ access to our networks, critical USAF information, and even your personal identity by taking simple action.

7. Do not open attachments or click on links unless the email is digitally signed, or you can directly verify the source—even if it appears to be from someone you know.

8. Do not connect any hardware or download any software applications, music, or information onto our networks without approval

9. Encrypt sensitive but unclassified and/or critical information. Ask your computer systems administrator (CSA) for more information

10. Install the free Department of Defense anti-virus software on your home computer. Your CSA can provide you with your free copy.
I think that the last item is very interesting. Knowing that it is almost inevitable that personnel are going to bring their personal electronic devices to work, it seems that the Air Force is trying to proactively prevent those devices from being an attack route to their electronic systems by including them under the same defensive umbrella. This is certainly an idea worthy of emulation in the private sector.
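Item 9 is also one that private-sector readers can act on today without a DoD-approved tool. As a hedged illustration only (the file name is hypothetical, gpg must be installed, and key/passphrase management, the hard part, is not shown), a simple Python wrapper around GnuPG's symmetric encryption might look like this:

```python
# Hedged illustration of item 9 for a private-sector reader -- not a DoD-approved tool.
# Wraps GnuPG's symmetric (passphrase-based) encryption; gpg will prompt for the
# passphrase interactively. The file name is hypothetical.
import subprocess

def encrypt_file(path):
    # Produces path + ".gpg" alongside the original file.
    subprocess.run(["gpg", "--symmetric", "--cipher-algo", "AES256", path], check=True)

if __name__ == "__main__":
    encrypt_file("sensitive_report.txt")
```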

All in all this is an interesting document if you can read through the Air Force speak. I have been away from the military (Army in my case) for about 20 years now and the language still brings back many memories. The vocabulary of buzzwords has changed, but the same stilted phrasing is still in use.

SCADA Security and USB

Eric Byres has an interesting blog posting over at the Practical SCADA Security blog talking about responses to the Stuxnet worm. He observed that many companies were reacting to Stuxnet, even companies without Siemens control systems. One of the responses that he had noticed was the increased restriction of the use of USB sticks (jump drives) on control system computers; a reasonable precaution when considering the probable use of portable drives in the Stuxnet infection process.

Eric discusses a couple of different options for controlling the use of jump drives including physical locks for USB ports and software controllers that will limit what specific devices can be accessed through USB ports. It’s an interesting discussion, well worth reading.

USB Ports

There are a couple of points that need to be made about the use of USB ports on computer systems in general and specifically in control system computers. First off, USB ports (and the supporting software systems) have made it much easier to attach a wide variety of devices to computers. The plug-and-play capabilities have made it much easier to replace keyboards, mice, and printers without having to worry about device drivers or re-booting systems.

In fact it is becoming much more difficult to find devices that can be plugged in to places other than a USB port. Blindly blocking such ports with permanent blocks (I have heard of people using super glue or epoxy resin to physically block these ports) could cause some significant problems down the road.

Physically managing USB ports has been complicated by the ready availability of devices that allow users to expand the number of USB ports. I use one on my laptop because the two available USB ports are too close together for me to plug in my mouse and a jump drive at the same time. I plug in this adapter and turn one USB port into four.

Any control of the use of USB ports needs to include a training component, explaining to operators, technicians, and control systems engineers the reasons for the control. Failure to do this will inevitably lead to people bypassing the controls so that they can use their own USB-based devices on the system.
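On Windows systems, one common software control is to disable the USB mass storage driver so that keyboards and mice keep working but jump drives do not; that avoids the glued-port problem described above. The sketch below only reads that setting (the standard USBSTOR registry value), and a real deployment would make any change through Group Policy or endpoint-management software rather than an ad hoc script.

```python
# Windows-only sketch: the USB mass storage driver (USBSTOR) is disabled when its
# 'Start' value is 4 (3 = load on demand). This script only reads the value;
# changing it should go through Group Policy or endpoint-management tools.
import winreg

KEY_PATH = r"SYSTEM\CurrentControlSet\Services\USBSTOR"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
    start_value, _ = winreg.QueryValueEx(key, "Start")

if start_value == 4:
    print("USB mass storage driver is disabled")
else:
    print(f"USB mass storage driver is enabled (Start={start_value})")
```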

Jump Drives

The wide availability of inexpensive file storage capacity in a plug-and-play device has made major changes in the way that we use computers. I know that it has made my job as a freelance writer much easier; I can carry my files with me in my pocket and use a wide variety of computers to do my work, transfer completed work to my customers and get a wide variety of data necessary to do my job.

There are a number of legitimate reasons to use jump drives on control system computers. Eric points out the use to collect system diagnostic data when network connections are not working. I have used a jump drive to collect screen shots from the control system work station to use in operator training documents.

As companies increase their control system security awareness there actually may be an increased need for the use of jump drives. A control system engineer I know does his system programming on his desktop computer, which is also used for the standard office work that no one can avoid. Needless to say, this computer is linked to both the control system and the office networks.

This is exactly the type of setup that the Stuxnet worm was designed to exploit. The USB device would not need to be plugged into one of the control system work stations to infect the control system. Insertion of the infected device into any of the office network computers would have led to an infection of this dual-use computer.

The obvious solution to this problem would be to give the control system engineer (and the technician who backs him up on system diagnostics) a separate computer for use with the control system that is not linked to the office network. A cheaper solution would be to continue to use the dual-use computer but remove the physical link to the control system, using a jump drive to transfer the appropriate program files (though note that this was yet another method of Stuxnet propagation).

Considered Solutions

Anyone who has done any serious problem-solving work knows that knee-jerk, reflexive responses to problems frequently cause more problems than they solve. This is certainly true for anything involving control system problems, and control system security problems are no exception.

A well thought out response will be the only method that will have even a remote chance of protecting control systems from an attack as sophisticated as Stuxnet. The response will have an increased chance of being successful if it comes out of a team consideration of the problem; a team that includes operators, process engineers, IT, vendors, and security specialists. All of these specialties will be needed to put together a workable control system security program. And don’t forget the training program to support the security system.

Tuesday, October 26, 2010

Security Issues – Chemical Reactions

An article from last week on KSL.com describes a chemical incident that is a classic violation of industrial chemical safety 101: don’t mix incompatible chemicals. A closer reading of the article shows that the lesson should also apply to chemical security 101.

The Incident

A chemical delivery was taking place at an industrial facility. The driver was given the job of offloading the chemical into a vessel (the article calls it a vat, I’m not sure what that is) that was thought to be empty. It apparently was mainly empty, but it contained chemical residues. The chemical being off-loaded was sodium hypochlorite solution. The residues were hydrochloric acid (almost certainly a solution). Second year chemistry students should be able to explain what happened next; a chemical reaction producing chlorine gas.
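For those second-year chemistry students, the classic reaction (assuming the residue was indeed a hydrochloric acid solution, as the article reports) is:

```latex
\mathrm{NaOCl} + 2\,\mathrm{HCl} \rightarrow \mathrm{Cl_2}\uparrow + \mathrm{NaCl} + \mathrm{H_2O}
```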

The article quotes a facility spokesperson as saying: “There were fumes that overcame two of our employees. They started coughing and having burning sensations in their chests.” The small amounts of the chemicals involved probably saved the lives of the two people standing nearby. They were both taken to nearby hospitals and released later the same day.

Chemical Safety

Unintentional mixing of chemicals is something that should be avoided as a matter of policy. Most industrial chemicals are not going to react in a hazardous manner when being mixed with residues like this. Even so, quality issues and other consequences of contamination are more than reason enough to avoid such mixing.

There are some chemicals, however, which are very reactive and can cause all sorts of nasty consequences when mixed with other chemicals that could endanger employees, the facility, and neighbors. A key component of a chemical safety program is to identify those chemicals used on site that could react dangerously with other chemicals. Then the chemicals that they violently react with must be identified. Finally, the facility must establish procedures and protocols that keep the reactive chemicals physically separated.
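As a hedged illustration of that identify-and-separate step, even a simple incompatibility table that gets checked before any transfer into a shared vessel would have flagged this delivery. The pairs below are abbreviated and illustrative only; a real program would use a full reactivity matrix built from the suppliers' safety data.

```python
# Illustrative sketch only: an abbreviated incompatibility table checked before a
# transfer into a shared vessel. A real program would use a full reactivity matrix.

INCOMPATIBLE_PAIRS = {
    frozenset({"sodium hypochlorite", "hydrochloric acid"}),  # generates chlorine gas
    frozenset({"sodium hypochlorite", "ammonia"}),            # generates chloramines
}

def check_transfer(incoming, residue):
    if frozenset({incoming.lower(), residue.lower()}) in INCOMPATIBLE_PAIRS:
        return f"STOP: do not unload {incoming} into a vessel containing {residue}"
    return "No listed incompatibility; still verify the vessel is clean before unloading"

print(check_transfer("sodium hypochlorite", "hydrochloric acid"))
```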

There are some chemicals, like sodium hypochlorite, that react violently with such a wide variety of chemicals that it is almost easier to list the chemicals with which it doesn’t dangerously react. If chemicals like these are going to be off-loaded from a bulk truck or container into vessels or containers on-site, those containers should probably be dedicated to that service and not used for any other chemical.

I am not a big fan of allowing off-site personnel to unload bulk deliveries of chemicals, hazardous or otherwise. Such personnel are unlikely to be aware of the telltale signs at a facility that they are attempting to unload something in the wrong place. Additionally, they are generally unaware of the other chemicals on-site that might present a specific hazard to the unloading operations.

The only exception to this is when a supplier driver is unloading into a tank on site that the supplier owns and only their drivers have the keys to open the unloading lines. Even then a prudent facility will have some way of verifying that the load is the correct chemical to be unloaded into that tank.

Chemical Security

The article notes that the “spill temporarily forced an evacuation at” the facility. From a chemical safety perspective this is a very reasonable precaution. Chlorine gas has a very small short term exposure limit and is very unpleasant to inhale at concentrations well below the lethal threshold.

Security personnel will readily recognize the security issue presented with that statement. An emergency evacuation is going to cause some level of confusion at even the best trained facility. This confusion could be a potential tool used to facilitate a successful penetration of the facility.

Security forces should not have secondary emergency response responsibilities. They should be trained to go to a heightened state of alert during other emergencies at the facility. Personnel monitoring security cameras and other security sensors should be physically isolated from chemical storage and process areas of the facility. This will make it easier for them to maintain surveillance during chemical emergencies.

Safety and Security

Once again we can see that there are both safety and security components of many actions in high-risk operations. A compromise of either compromises both. High-risk facilities need to get in the practice of examining both the security and safety consequences of anything that they do at the facility.

Security Policy

There is an interesting article over on SecProdOnLine.com about the CFATS process based upon a presentation made at the recent ASIS meeting in Dallas. The author makes a number of observations about the CFATS process, but one of the more important points is described this way: “Saad [Michael Saad, CPP, senior director of consulting services at Huffmaster Crisis Response LLC] explained that planned security measures must be written into company policy and procedures.”

Establishing an effective site security plan under CFATS is going to require the development of a number of new security procedures and policies. It is also going to require an in depth review of all current policies to ensure that they support the security plan.

Personnel Policies

CFATS regulations require a facility to have a background check program in place for facility personnel. This program needs to be integrated into corporate personnel policies. Hiring policies need to reflect the requirement to be able to pass a background check. Job postings, both internal and external, will need to note this requirement.

Companies that have both CFATS covered facilities and facilities that are not designated high-risk facilities will have to make the decision as to whether or not the background check requirements will apply to personnel not assigned to CFATS facilities. That decision will have to include consideration of the fact that many corporate jobs (IT, order processing, etc) will support security procedures at covered facilities; those personnel should also be covered by the background check procedures.

Disciplinary policies will need to be reviewed to determine how security violations will be treated under those policies. Procedures will need to provide for immediate dismissal for some types of violations. These need to be clearly delineated and carefully considered. Immediate dismissal policies certainly need to be reviewed by legal personnel.

Contractor Policies

CFATS facilities need to review all contracts for on-site services to ensure that they support the site security plan. Requirements for background checks need to be clearly spelled out. This needs to include provisions requiring contractors to present background check information to DHS inspectors as required. Contracts should also spell out the security processes and procedures that need to be complied with by contractor personnel while on site.

Facilities need to look at requiring contractors with a daily or long-term presence on-site to have specific security policies in place that complement facility security procedures. If this requirement is put into place, facilities will also need to include provisions for periodic audits of those procedures to ensure that they are not just paper drills.

Sales Policies

Facilities that ship theft/diversion chemicals of interest have to have policies in place to vet customers. This needs to be carried well beyond security procedures into the entire sales process. Sales personnel need training on these requirements, and the requirements need to be spelled out in detail in sales procedures. Sales contracts need to include language covering these requirements, and sales literature should mention them. Depending on the involvement of sales personnel in the vetting process, those personnel may need to be covered under the CFATS background check requirements even if they never set foot on facility grounds.

Order processing procedures also need to reflect these vetting processes. All personnel in the order processing chain need to be trained on these security requirements. Any personnel with independent decision making authority in the vetting process need to be covered by the personnel surety process.

Whenever possible, the processing of orders for theft/diversion COI needs to include provisions for verification of vetting. Specific personnel should be required to sign off on the vetting at designated places in the process. This should include customer approval, new-site shipment approval, and approval of unusual order patterns. Those approvals should be clearly documented so that downstream personnel can continue their order processing in a timely manner.
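A hedged sketch of that kind of verification-of-vetting hold might look like the following; the field names and required sign-offs are invented for illustration, and a real implementation would live inside the order-processing system itself rather than a standalone script.

```python
# Hypothetical sketch of a shipment hold for theft/diversion COI orders.
# Field names and required sign-offs are invented for illustration.

REQUIRED_APPROVALS = {"customer_vetting", "new_site_shipment", "order_pattern_review"}

def release_for_shipment(order):
    if not order.get("is_coi", False):
        return True, "Not a COI order; release to shipping"
    missing = REQUIRED_APPROVALS - set(order.get("approvals", []))
    if missing:
        return False, "Hold shipment; missing sign-offs: " + ", ".join(sorted(missing))
    return True, "All vetting sign-offs documented; release to shipping"

order = {"is_coi": True, "approvals": ["customer_vetting", "new_site_shipment"]}
released, message = release_for_shipment(order)
print(message)
```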

Review All Policies

These are just the most obvious policies that must be considered when site security plans are established. Every company policy needs to be reviewed to ensure that provisions of that policy not only do not contradict security policies, but that they actively support security. Security, like safety and quality, must be included in every part of the corporate culture.

Sunday, October 24, 2010

Emergency Response Planning – Defining Requirements

I have been looking at emergency response planning for gas pipelines over the last couple of weeks. This is not an effort to pick on this industry or to show off my expertise in this industry (practically non-existent). Gas pipeline emergency response plans (ERPs) are a convenient tool for looking at emergency planning requirements for a variety of regulated industries because of the recent complaints about the apparent lack of emergency planning at the San Bruno, CA incident and because there are existing requirements for emergency planning in 49 CFR 192.615.

Establishing FEMA ERC Authority

Actually, §192.615(a) provides a fairly decent description of the requirements for written procedures for an operator’s response to “minimize the hazard resulting from a gas pipeline emergency”. The problem is that these procedures should be designed to support an emergency response plan, but they are not an ERP in and of themselves. Part of the reason is that they only reference operator actions, while an effective ERP will include a number of other agencies.

The first thing that must be established in the regulations is who is responsible for preparing, coordinating and exercising the emergency response plan. Clearly it cannot be the pipeline operator because of the lack of jurisdiction over the public emergency response agencies (police, fire, EMT, hospitals, etc) that will be required to execute the bulk of the ERP. Because gas pipelines are a Federally regulated industry, I have argued that FEMA would be the appropriate agency and a County Emergency Response Coordinator would be the appropriate official to be responsible for preparing, coordinating and exercising the ERP.

FEMA would be required to establish a list of standard ERP scenarios for potential gas pipeline incidents. In an earlier blog I proposed six potential scenarios for this industry, based on §192.615(a)(3).

Initial Information Requirements

The next thing that the regulations need to establish is the requirement for the regulated pipeline operator to provide specific information to each FEMA ERC affected by a pipeline they operate. The term ‘affected by’ would have to be specifically defined to include such things as large pipelines that are outside of a county boundary, but that could reasonably be expected to affect that county in the event of an emergency.

The information required to be provided would include the locations of all transmission pipelines and significant distribution pipelines (above a specified size), the identification of an emergency response planning official for the operator, and emergency contact information for the operator’s control rooms.

Planning Participation Requirements

The FEMA ERC would be required to conduct emergency planning coordinating meetings with the pipeline operators and local emergency response officials. Those meetings would use the standard ERP scenarios as the basis for establishing the local response plans. The pipeline operator’s ERP official would be expected to provide technical information about the pipeline and potential leak consequences to the planning officials.

Operator’s ERP

The pipeline operator would be required to prepare an individual ERP to support the County ERPs for each of the applicable planning scenarios. FEMA would provide a standard format to facilitate the preparation of these ERPs. The pipeline operator would be given a deadline for submitting their supporting ERPs to the FEMA ERC, who would review and approve them. Once approved, copies of the operator’s ERPs would be provided to the local response agencies, the FEMA State ERC, and FEMA Headquarters.

Emergency Response Exercises

The FEMA ERC would be required to coordinate, plan, and execute annual tabletop exercises for each of the gas pipeline ERPs. FEMA would be expected to produce standard formats and supporting information for such exercises that could be adapted by the County ERCs. Operator participation in these exercises would be required by regulation.

Counties that include a high consequence area as identified in the regulations would be required to conduct a full-scale emergency response drill for one of the gas pipeline ERPs every two years. Again, FEMA would be responsible for establishing a standard drill design for each covered planning scenario. The FEMA County ERC would use these standard drill packages to coordinate, plan, and execute these drills.

After-action reviews of these exercises would be required, and the FEMA County ERC would be responsible for ensuring that appropriate changes were made to the County ERP and each of the supporting ERPs (including the operator’s ERP). The same reviews and updates would be required anytime one of the ERPs was actually employed to respond to a real-life incident.

Regulatory Clarity

Clearly specifying who has responsibility for emergency response planning will go a long way toward ensuring that there are adequate emergency response plans for clearly identifiable emergency situations. Adequate ERPs, properly reviewed, exercised, and updated, will provide the best chance of minimizing the potential danger to life and property from both accidents and deliberate attacks on these necessary and potentially dangerous pipelines.

Friday, October 22, 2010

Reader Comment 10-21-10 Security Reporting

An anonymous reader yesterday objected to my characterization in yesterday's blog that the Netbiter vulnerability was “first reported on SecList.org”, pointing out that the actual report was made on Bugtraq, which is hosted on SecurityFocus.com. Anonymous then goes on to say that:

“That you guys don't know what Bugtraq is basically proves that the SCADA industry and users are a decade or more behind in terms of security and shows us why we're in such a mess.”
My Apologies

I apologize to the folks at Bugtraq and SecurityFocus.com for not giving them appropriate recognition for their work on this matter. It was completely unintentional. The link provided by the DHS ICS-CERT Alert goes to the Bugtraq archives on SecList.org and I did not track it back any further than that.

And no, until yesterday I had never heard of Bugtraq or SecurityFocus.com. Nor do I suspect that many in the SCADA user community had heard of them either. That is one of the reasons that I am addressing these issues in my blog. Many of us in the SCADA user community are just now becoming aware of the extent of the cyber security issues facing our control systems. We (collectively) had assumed that our systems were immune to the well known IT security issues because of their isolation and complexity. Big mistake, I know, and it is a mistake that I am trying to help correct in this blog.

I can’t comment about the SCADA security community’s knowledge of Bugtraq because I just operate on the very fringes of that community, sucking in as much knowledge as I can. I do have to say that the professionals in the SCADA security community that I have had contact with, or have been following on the Internet, seem to be very committed, intelligent, and involved people, very passionate and knowledgeable about their work. I would suggest that ascribing my lack of knowledge to that community is unfair.

I do appreciate the comment by Anonymous for pointing out just one more area where my knowledge is lacking, the more I learn the less I know.

Perspective on Preparedness Report

Readers of this blog will probably remember a series of blog posts I did in August and September in support of the National Dialogue on Preparedness. In these blog posts I urged readers to participate in this dialogue though I did warn you that “I don’t expect DHS to put much effort into using the results of this citizen participation”. Well, the Preparedness Task Force published their report Wednesday and it certainly looks like I was correct.

Ideas Discussed

You might remember that in my final report on the Dialogue I reported that:

“There have been a total of 266 Ideas submitted to the Dialogue. A total of 869 registered users have posted 420 comments on those Ideas and voted a total of 3,297 times.”
I also reported on the seven ideas of those 266 that I thought would be of particular interest to the chemical security community:

“● FEMA CBRNE Preparedness Division (5 votes, 0 comments, rank 98) - Hot
“● Funding Post-Emergency Response Research (6 vote, 0 comments, rank 87) - Hot
“● Update "pre-fire" tours to include "pre-hazmat" considerations (9 votes, 0 comments, rank 65) - Hot
“● HAZMAT Rail Shipment Notifications (-4 vote, 0 comments, rank 252)
“● TSA Chlorine Dispersion Modeling Study (0 votes, 1 positive comment, rank 278)
“● Bring in the Military (-13 votes, 4 positive comments, rank 265)
“● Counter-Terrorism Emergency Response Plan - CFATS (11 votes, 0 comments, rank 45)”
While these ideas were of special interest to the chemical security community, they were pretty representative of the vast majority of the ideas that were submitted by the Dialogue participants: fairly detailed suggestions about how the Federal, State, and local governments could improve the preparedness of this country to respond to a wide variety of disasters and catastrophes, both natural and man-made.

I didn’t agree with most of the proposals, but they were honest efforts by a wide variety of citizens to do their part to help guide the government to better ways to prepare for and respond to potential catastrophes. It’s too bad that our efforts were apparently a complete waste of time.

Ideas Ignored

The report covers the results of the National Dialogue on Preparedness in a two-page appendix. Almost the entire first page of that appendix is devoted to a description of how the Task Force reached out to “preparedness-minded citizens, non-governmental organizations, and private sector partners” (pg 75). It turns out that the online forum was just a part of that Dialogue.

The report identified three “core themes that emerged from the National Dialogue [that] align with Task Force deliberations and recommendations”:

“Integrating Non-Governmental Stakeholders”,
“Integrating Preparedness into Educational Curricula”, and
“Establishing Financial Incentives for Preparedness”.
If you read the detailed explanations of these themes you could be forgiven for concluding that these were consensus themes shared by a large number of the Idea presenters and commenters. Unfortunately, that is nowhere near the case. These themes were supported in a number of the ideas, but they could hardly be considered ‘core themes’.

The vast majority of the ideas dealt with specific situations and how to resolve them. Reading the report, it is clear that the Task Force was never concerned with this kind of specificity (and, to be fair, they never directly solicited it), but that is what the bulk of the Ideas provided. It would have been nice if they had at least acknowledged the types of responses they had received, perhaps by listing, say, the top ten ideas as measured by the votes they garnered.

But no, the report instead ignores the attempts at presenting solutions to specific problems and pretends that the bulk of the participants supported the key themes that just happened to support the conclusions reached by the Task Force. This is the type of response to public participation in political dialogue that ends up convincing people that political participation is a waste of time.

Ideas Encouraged

Having said all of that, I will continue to recommend that readers of this blog participate in public forums like this. It does provide individuals a very public venue to share ideas that they have. I will remind people that the likelihood of political officials paying any attention to the ideas being discussed is quite low, but I will still recommend that they participate.

After all, the sharing of ideas is always a good thing. Particularly in a venue that brings a wide variety of people to the forum, putting one’s ideas out in public provides for the possibility of cross-fertilization of ideas and the raising of unique hybrids; hybrids that will probably end up being stronger and more vital than their parents.

So, don’t expect the politicians to listen; but just perhaps some fellow citizens will.

Thursday, October 21, 2010

More SSP Edit Updates

Maybe I should just be more patient in reporting updates on the CFATS web site. No, I just can’t do it; my Mama taught me to share. This afternoon DHS-ISCD updated two pages on the CFATS web site with information about the CSAT SSP Edit Process User Guide that was re-posted yesterday afternoon.

The CSAT Site Security Plan page has been updated with a link to the new User Guide; no other changes were made on that page. The User Guide has now been upgraded to a mention, with an explanation, in the ‘Latest News’ section of the CFATS Knowledge Center page. That explanation is a very well written summary of the document that deserves to be printed in full here (I just wish that I had written it).
“A new SSP-Edit function is now available that will allow a facility to retrieve and make administrative or technical edits to its most recently submitted Site Security Plan. Administrative edits can be made at any time and there is no limit to the number of times administrative edits can be made. Technical edits can only be made 90 days or more after an SSP has been submitted. Editing a previously submitted SSP will not affect any applicable deadlines under the CFATS regulations unless DHS notifies the facility to the contrary. See the CSAT SSP Edit Process Users Guide posted under “Documentation” below for details.”
Still no FAQ listed for this, but ISCD does not pre-publish FAQs (though some of them may be written ahead of time); they wait until some user contacts the help desk with a question about the editing process. So we’ll just have to wait until CFATS facilities start to look at their information.

DHS ICS-CERT Issues Two New SCADA Alerts

This morning DHS ICS-CERT issued two new alerts for vulnerabilities related to control systems produced by two different companies. The first vulnerability has been reported in Intellicom’s Netbiter® WebSCADA product while the second vulnerability has been reported in the Moxa Device Manager. No patch is yet available for either vulnerability, though Intellicom recommends that their users “change the default password when installing the product” (always great advice).

The Netbiter vulnerability was first reported on SecList.org back on October 1st. ICS-CERT reports that they are working with Intellicom “to address these vulnerabilities”.

The Moxa vulnerability, with exploit, was published yesterday in great detail on ReverseMode.com. This may explain why ICS-CERT has taken the unusual step of posting their alert before they have been able to contact someone from Moxa.

CFATS Knowledge Center Update 10-21-10

This morning DHS-ISCD updated their CFATS Knowledge Center web page with links to the CSAT SSP Edit Process User Guide that I reported on last night. It shows up both in the ‘Documentation’ section (page 3) on the bottom of the main page and again in the ‘Related Documentation’ section of the Site Security Plan sub-page. Still no mention in the ‘Latest News’ section and no related questions in the Frequently Asked Questions (FAQ) list.

Reader Comment 10-20-10 Safety as Security

Andrew Ginter, who writes the Control System Security Blog, posted an interesting comment on yesterday’s blog on how Stuxnet could be used against high-risk chemical facilities. The comment addresses a number of good detailed points, all worth reading, but one of the most interesting is summed up in one very important sentence:
“Safety systems need to start being evaluated from an adversary's perspective: is there a set of components, which if destroyed or caused to malfunction simultaneously, can cause a catastrophe?”
As facilities take detailed looks at their security plans and procedures it will quickly become apparent that many of the security actions that facilities take are closely related to safety procedures that are already in place. The reason for this is fairly obvious: preventing the release of hazardous materials into the environment is the ultimate concern of both programs. Safety programs are designed to prevent accidental releases and security programs are designed to prevent deliberate releases.

From a control system perspective, Andrew makes a very important point: “The worst consequence that a worm, or even an insider with access to the control system, should be able to produce is denial-of-service: triggering a safety shutdown.” But this will only be true if it is not possible for the same cyber attack to affect both the facility’s safety systems and its control systems.

Upper management needs to have this clearly explained to them. At one facility where I worked, the computer safety systems and the standard control system were installed on the same computer, using the same control system software. Engineering had requested a separate control system for the safety systems, but corporate management deleted that system from the budget, noting that the current control system had more than enough capacity to handle both.

We designed some manual safety protections into our processes, including manually locking out valves for chemicals that could present reactive hazards to the process. But it is hard to keep such operator-centric safety processes operational; it requires aggressive auditing. Unfortunately, we could not come up with manual processes to back up all of the cyber safety controls.

As many writers have pointed out, safety is an attitude as much as it is a program. The same is true for security. All of the safety and security programs mean little if there is not a facility-wide attitude that these programs are important and should not be shortcut under any circumstances. When the twin attitudes of safety and security are present in a facility team, the interactions between the two programs will reinforce both.

Wednesday, October 20, 2010

SSP Edit Tool Reappears

The folks at DHS-ISCD once again updated their CSAT web page by adding a link to the CSAT SSP Edit Process User Guide. Readers might remember that this link (same link apparently) appeared on this page earlier, only to disappear a short time later. In the earlier incarnation its arrival and departure were not announced. Once again, there is no announcement on the CFATS Knowledge Center about this document, nor is it listed on that page. No change was made to the SSP web page either.

At first glance the ‘new version’ of this manual is the same as the old version. The version number is the same, but the publication dates are different (September 2010 vs October 2010). I don’t see any obvious differences, so my earlier blog post probably still holds true for this document.

Stuxnet and Chemical Facilities

There has been some discussion about how effective Stuxnet could be against a facility without insider knowledge of how the control system for a process is set up. It is easy to see how an attacker with a detailed knowledge of what programmable logic controller (PLC) operates which valve, pump or other device could cause serious disruptions or damage at that facility with a tool like Stuxnet. But, if an attacker doesn’t have that insider knowledge, how much damage could they actually do?

First we need to look at what kinds of motivations could be behind a Stuxnet attack. In an earlier blog I looked at a criminal extortion type of attack. In this blog I would like to limit the attacks considered to terrorist attacks. Even here there would be two types of attacks to consider: an attack to shut a plant down, or an attack intended to have a larger, off-site terror effect.

Plant Shutdown

We can imagine a number of players that could have a reason to want to shut down a chemical facility. The obvious ones are competitors (an unlikely suspect; the chemical industry has not gotten that cutthroat), political adversaries (perhaps in retaliation for a perceived attack on one of their process industries), or eco-terrorists (with a particular concern about the products of that plant).

Given access to a Stuxnet worm and the ability to program it to collect/report PLC label data, an attacker with minimal knowledge of the process could make the plant effectively inoperable for extended periods of time. All it would take is the knowledge that most programmers use tags for devices that are readily identifiable; valves are labeled valves, pumps are labeled pumps.

With this knowledge, all it would take to shut the plant down would be to re-program some of the PLCs to ignore change-of-state commands. For example, once a valve opened, change the programming to ignore a ‘close’ command; or, for a pump, once it started, program it to ignore ‘stop’ commands. While some of these changes could have serious consequences, the vast majority would be more of a serious nuisance than a catastrophic hazard. The probability favors management identifying the problem and shutting the plant down before a catastrophic failure results.

Again, in most cases the shutdown would last a relatively limited amount of time. The mess would have to be cleaned up and some equipment potentially replaced, but the actual length of the shutdown would be determined by how long it took to clear the Stuxnet-caused programming problem and make the system proof against a repeat attack. Even so, this could be sufficient for most of the probable attackers of this sort.

Off-Site Terror Attack

Deliberately conducting an attack with a high probability of an off-site effect would take a bit more process knowledge; not necessarily insider knowledge, but at least a knowledge of the chemistry involved. Again, the ability to collect tag information from the system would be key to executing this kind of attack.

In a system designed with due respect for process safety, there will be a number of operating restrictions programmed into the control system. For example, there would be prohibitions against opening the valves for two incompatible chemicals at the same time, to prevent uncontrolled reactions. Valve tags frequently contain chemical names, again to make it easier for the programmer to keep track of which valves control which chemicals. Thus it could be a simple matter to determine which safety programming could be reversed to cause a potentially catastrophic process reaction.

An alternative tactic would be to change the temperature controls in a critical reaction. This could be done by manipulating the parameters of a temperature controller. Where the process control system applies automated cooling when a set temperature is reached, changing that response to the application of heat could easily cause a runaway reaction with catastrophic consequences.

This is not as high-probability an attack as placing an explosive device next to a flammable liquid storage tank, but it is less likely to result in the attacker being captured or killed in the process.

Insider Assisted Attack

Conducting a Stuxnet attack with a high probability of success, especially if success is measured by the level of off-site consequences, will require some level of insider knowledge of the process. This does not mean that the person programming the control system (a process engineer, or perhaps even a Siemens employee or contractor) has to be working with or for the attacker. Programming notes or process safety files would provide enough insider information to allow for more effective attacks using Stuxnet.

This type of information may be available to a number of employees, contractors, or even visitors to the facility, depending on the level of physical and document security at the facility. Process safety information, for example, is required by regulation to be accessible to all employees.

Stuxnet is a Threat

Thus we can see that Stuxnet or Stuxnet clones are a definite potential threat to chemical facility control systems. As far as I can tell, there is not currently any ironclad defense against Stuxnet-type attacks. The use of a variety of cyber security techniques may increase the chance of detecting this type of attack before it can cause serious process damage.

Limiting the use of USB memory devices on the control system is an obvious first step, as is limiting the connections between the ICS computers and the Internet. Furthermore, protecting all other computers on the corporate networks with updated anti-virus signatures and all security patches will help protect the ICS from this type of attack.
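One low-tech check that can back up a USB policy on Windows-based control system workstations is verifying that the USB mass-storage driver is actually disabled. The following Python sketch is my own illustration (not an ICS-CERT or vendor tool); it assumes a Windows host and simply reads the registry value that controls the USBSTOR service, where a Start value of 4 means the driver is disabled.

# Sketch: audit whether USB mass storage is disabled on a Windows HMI or
# engineering workstation. Illustrative only; run with proper authorization.
import winreg

USBSTOR_KEY = r"SYSTEM\CurrentControlSet\Services\USBSTOR"

def usb_storage_disabled() -> bool:
    """Return True if the USB mass-storage driver is disabled (Start == 4)."""
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, USBSTOR_KEY) as key:
        start_value, _ = winreg.QueryValueEx(key, "Start")
    return start_value == 4

if __name__ == "__main__":
    if usb_storage_disabled():
        print("USB mass storage driver is disabled - policy compliant.")
    else:
        print("WARNING: USB mass storage is enabled on this workstation.")

Checks like this do not stop a determined attacker, but they do make it easier to audit whether a written policy is actually reflected in the machine configuration.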

Tuesday, October 19, 2010

Small Unit Assaults

I ran into an interesting document on PublicIntelligence.net that was allegedly prepared by the TRIPwire people at DHS. I have to say ‘allegedly’ since I have no way to verify that it is a real document, but I believe it to be legitimate. It describes small-unit tactics that have been applied in various attacks in South Central Asia. It also provides some suggestions for making it difficult to conduct such attacks against public facilities in the US.

Nothing in this document provides intelligence about a planned attack like this in the US, nor does it even claim that one is reasonably likely. The closest it comes is the statement (pg 3) that “the same TTPs [Tactics, Techniques and Procedures] proven successful overseas such as in Mumbai or in Afghanistan could be adopted for use in the United States”. The authors certainly don’t claim, or even discuss, that such tactics could be used against chemical facilities. So I’ll make that case: such tactics could very successfully be employed against chemical facilities.

Easy Targets

Chemical production facilities are typically fairly large installations with long fence lines and a low population density. The real targets within most installations would be the large chemical storage tanks. Most of these tanks are not hardened (pressure tanks being the main exception) so relatively small explosive charges placed at the base of the tanks can cause catastrophic release of the chemicals within.

The resulting pools of flammable and/or combustible chemicals could be easily ignited, causing large on-site fires that could damage additional storage tanks. In fact, starting such fires in tank farms containing pressurized tanks of flammable or toxic gases would be the easiest way to successfully attack those tanks.

Multiple two- or three-man teams would be the ideal way to place large numbers of explosive devices on tanks in multiple tank farms. With the devices set to go off nearly simultaneously, even if the teams were neutralized fairly quickly, the limited bomb response teams in most communities would not have enough time to search for and neutralize a significant number of the devices before they started going off.

Facility Penetration

Without a really large and mobile guard force, it would be practically impossible to respond to multiple penetrations of the fence line in a timely manner. Distract the response teams by setting off a small vehicle-borne improvised explosive device (similar in size and complexity to the ones currently being used in some Mexican border towns) on the other side of the facility, and the entry teams are unlikely to be intercepted until they announce their presence by shooting at personnel in the more densely populated process sections of the facility.

Employing such devices at entrance gates would be even more effective, drawing any on-site guards to protect access through the gate. It would also slow any off-site response forces moving through the gates, as they would have to maneuver around the wreckage.

To draw the response forces away from searching for the planted explosive devices, the assault teams would move to the production areas, shooting at anyone they encountered. If they could reach production control rooms, office structures or security control rooms, any response forces would remain focused on the teams until the first explosions started in the tank farms.

The larger the facility, the more effective this type of attack would be. This would be especially true for large petrochemical refineries. While larger facilities would have better security, it is impossible to defend the long fence lines. Additionally, the owner/operators of these types of facilities would be the least likely to have armed on-site guard forces. The potential problems with weapons discharges in such facilities are routinely used to justify not using armed guards or on-site response forces.

Counter Measures

Large chemical processing facilities are going to have to take a number of well-defined actions to reduce the likelihood of these attacks being conducted against them. First and foremost, they are going to have to use aggressive perimeter patrolling techniques to detect and deter the reconnaissance activities that must precede such attacks.

Second, they are going to have to employ defenses in depth to hamper the movement of attackers within the facility. Multiple internal fence lines and separate fencing of tank farms will be required. All fence lines must be backed up by intrusion detection and video surveillance to allow the location of intruders to be identified at all stages of the attack.

Third, such facilities are going to have to review their tankage disposition. Flammable and combustible liquid storage tanks should not be contained within the same diked area as pressurized tanks containing flammable gases or non-flammable toxic gases. This will help limit the ability of attackers to successfully breach those tanks without taking a great deal of time to properly emplace advanced explosive devices on them.

Finally, large petrochemical facilities are going to have to take a serious look at the issue of arming internal security guards and security response forces. Very professional forces will be required: forces with extensive training in fire discipline and a thorough familiarity with the inherently dangerous areas of the facility where firearms absolutely may not be used. Security companies are also going to have to look at alternative weapon systems that do not employ burning propellants.

Security managers at large chemical production facilities, especially petrochemical facilities, are going to have to take a serious look at this type of attack. Preventing these attacks is going to require aggressively looking for reconnaissance and planning activities as well as close cooperation with Federal, State and local law enforcement agencies. Preventing these attacks will be much more effective than trying to stop them once they have started.

Emergency Response Planning – Process

The whole point of emergency response planning is to think about what can go wrong in advance and establish the initial response parameters. This allows everyone to start responding in a coordinated manner while the incident commander gets established, collects information on the actual incident, and formulates a specific plan for dealing with the situation on the ground.

No one expects the plan to provide enough information to bring the incident to a successful resolution; there are just too many variables and no two incidents are exactly the same. The military has a very old saying: no plan survives contact with the enemy. The plan establishes who starts out doing what and gets everyone moving in the same direction, allowing the commander to get the timely information necessary to actually start a coherent, situation-specific response.

Types of Incidents

The first step in the emergency planning process is to identify the incident to which the plan is supposed to respond. Since we have been using the gas pipeline regulation as the basis for this series of blogs we will continue with that here. In an earlier posting I established the four incidents covered in the current regulations {§192.615(a)(3)} and two additional incidents that I proposed adding to that list. Other regulations will identify other incidents that require similar emergency response planning processes to be initiated. These gas pipeline incidents are:

• Gas detected inside or near a building.
• Fire located near or directly involving a pipeline facility.
• Explosion occurring near or directly involving a pipeline facility.
• Natural disaster.
• Pipeline overpressure.
• Loss of pipeline pressure.
Each of these incidents is going to require a different initial response and will even start out with different people in charge of the response, but all of these situations could involve a number of different agencies, as well as the pipeline operator, in the response effort. That is why the role of the FEMA Emergency Response Coordinator (ERC) is so important; the ERC ensures that all applicable agencies are involved in the planning process.

Identifying Incidents

The next thing that must be done as part of the emergency planning process is to identify the parameters that will call for the initiation of each plan. For example, with the first incident listed above, someone is going to call 911 or the gas company complaining of smelling gas. Thus, when a caller says “I smell gas” or “I think I have a gas leak”, the call receiver will initiate that emergency response plan.

While it may seem obvious what would initiate each of these responses, it will take some careful consideration to establish these initiation parameters. The FEMA ERC will have to ensure that all response agencies agree on which incident indicators will trigger which response.
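To make the idea of initiation parameters concrete, here is a minimal Python sketch of how a dispatch center might codify caller phrases against the plans agreed on during the planning process. The phrases and plan names are hypothetical placeholders, not anything drawn from an actual regulation or ERP.

# Minimal sketch: map caller keywords to the emergency response plan (ERP) to initiate.
# Phrases and plan names are hypothetical; a real trigger list would be agreed on by
# all of the response agencies during the planning process.
TRIGGER_PHRASES = {
    "smell gas": "ERP-1 Gas Detected",
    "gas leak": "ERP-1 Gas Detected",
    "fire near the pipeline": "ERP-2 Fire Near Pipeline Facility",
    "explosion": "ERP-3 Explosion Near Pipeline Facility",
    "no gas pressure": "ERP-6 Loss of Pipeline Pressure",
}

def plan_for_call(caller_statement: str) -> str:
    """Return the ERP that the call receiver should initiate for a caller's statement."""
    text = caller_statement.lower()
    for phrase, plan in TRIGGER_PHRASES.items():
        if phrase in text:
            return plan
    return "No automatic trigger - forward to a supervisor for evaluation"

print(plan_for_call("Hi, I think I smell gas in my basement"))  # ERP-1 Gas Detected

The value of writing the triggers down this explicitly, whether in software or on a laminated card, is that every call receiver starts the same plan for the same kind of call.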

Incident Command

The next most important part of the initial response plan is the designation of who makes decisions and when the initial response escalates to a level requiring a change in authority. For instance, with the gas detected incident, the initial responsibility for determining whether there is, in fact, a gas leak will probably rest with a gas company technician, even though there may be police and/or fire representatives on site. That technician will then determine if it is a simple ‘shut off a valve and open some windows’ situation or if the leak is severe enough to require a building evacuation.

The former will continue to be handled by the technician, while the latter will escalate to a situation requiring the establishment of an on-scene commander from one of the public emergency response agencies. The parameters for incident command escalation need to be clearly spelled out in the emergency plan.

Initial Response

Every incident is going to require a timely initial response of people to the scene to establish exactly what is happening with the incident. Again looking at the gas detected incident, it seems clear that a technician from the pipeline operator needs to be involved in the initial response to determine if there is actually a gas leak, the extent of the leak, and what type of response is needed to stop the leak.

What other emergency personnel need to show up on the initial ‘I smell gas’ call needs to be determined by local circumstances. Local officials may determine that a police patrol car needs to respond to provide legal back-up for the gas company technician. Another locality might decide that, if available, a fire truck should head in the direction of the call on a non-emergency basis to be more immediately available if needed. All of this needs to be spelled out in the ERP for this type of incident so that dispatchers, both pipeline and emergency response, know who to notify to respond to the incident.

One good way to determine what types of initial response need to be included in the plan is to conduct a couple of tabletop exercises to see what might happen in an incident and what resources need to be involved to handle those expected problems. The FEMA ERC would conduct these exercises using scenarios based on actual situations from around the country. FEMA would develop standard generic scenarios, with the local ERC plugging local data (weather, locations, personnel, etc.) into those scenarios. After-action reviews of these exercises will go a long way toward determining what types of resources need to respond to an initial incident.

Response Escalation

Everyone in the emergency response community knows that the simplest incident can quickly escalate into a major multi-agency catastrophe. Again using the ‘I smell gas’ call, we can see that most responses will be successfully handled by the pipeline operator’s technician responding to the scene. Most such incidents will be loose connections or blown-out pilot lights; potentially dangerous, but readily resolved.

Unfortunately, that is not always the case. The gas smell may be coming from a leaking main or building feeder line. Or a small leak may have persisted long enough to fill an entire apartment building with explosive levels of gas. The first responder needs to be trained to identify these larger problems quickly so that the response can be appropriately escalated.

Again, the emergency planning process needs to identify these escalation points and determine the immediate response to the escalating situation. A good rule of thumb for deciding how much escalation must be covered in the emergency response plan: if a trained incident commander is not yet in place and in charge, then the situation should be covered by the ERP. This gives the incident commander time to get into place, identify the critical elements of the situation and formulate a plan for that specific situation.

Public Information

Keeping the public appropriately informed is an important part of the emergency response. In the simplest instance this includes the pipeline operator providing gas users with information about what to do in the event of a suspected gas leak in their house. Or it may be police officers directing traffic around an area with a suspected gas main leak. On the high end, it may be a large-scale evacuation notice.

All potential communications with the public need to be identified in the emergency response plan. That identification includes who is authorized to decide that a communication is appropriate and who will be responsible for making it. It may be a good idea to include major local press outlets in this planning process; that way they will know where to go to get authoritative information in the event of an emergency.

Proper Prior Planning….

A good emergency response plan will handle most emergency situations, preventing escalation into major events that can cause extensive loss of life and significant damage to physical infrastructure. A clear delineation of who is responsible for what will help to eliminate the confusion and counter-productive efforts that can waste so much time and effort.

Monday, October 18, 2010

Intrusion Detection for Computer Systems

Andrew Ginter has an interesting post over on his new blog, Control System Security. He discusses in pretty good detail how network intrusion detection systems work. As we hear more and more details about Stuxnet it is becoming clearer that we must take control system security as seriously as we take the protection of any other critical cyber system. We are also becoming more aware that there is no single action that we can take that will protect these systems against all attacks.

A network intrusion detection system (NIDS) as described by Andrew does much the same as a physical intrusion detection system does for our facility perimeter. As someone approaches or broaches our first line of perimeter defense it alerts the security forces of a potential problem that must be checked out and evaluated. The NIDS does the same thing for the cyber perimeter.

Neither the IDS nor the NIDS stops perimeter penetration; each simply identifies a potential penetration of the perimeter. If there is a barrier outside of the detection system, there will be fewer potential penetrations to investigate or respond to. But even with the best practical barrier system, there will be potential penetrations to be investigated. Hopefully they will be false alarms, but if you don’t back up your perimeter protection with intrusion detection, you will not know about actual penetrations until it is too late.
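For readers who want a feel for what the ‘detection’ part looks like in practice, here is a toy Python sketch of the simplest possible NIDS rule: flag any traffic into the control network segment that does not match a whitelist of expected flows. The addresses, ports and sample traffic are hypothetical, and a real NIDS of the kind Andrew describes does far more than this (signatures, protocol analysis, anomaly detection).

# Toy sketch of whitelist-based detection for a control network segment.
# All addresses, ports and sample flows below are hypothetical examples.
ALLOWED_FLOWS = {
    # (source IP, destination IP, destination port)
    ("192.168.10.5", "192.168.20.10", 502),    # HMI -> PLC, Modbus/TCP
    ("192.168.10.6", "192.168.20.10", 44818),  # engineering station -> PLC, EtherNet/IP
}

def check_flow(src: str, dst: str, dport: int) -> None:
    """Print an alert if an observed flow is not on the whitelist."""
    if (src, dst, dport) not in ALLOWED_FLOWS:
        print(f"ALERT: unexpected flow {src} -> {dst}:{dport} - investigate")

# Hypothetical observed traffic, e.g. fed from a span-port capture.
observed = [
    ("192.168.10.5", "192.168.20.10", 502),  # expected HMI traffic
    ("10.0.0.99", "192.168.20.10", 502),     # business network host talking to a PLC
]
for src, dst, dport in observed:
    check_flow(src, dst, dport)

Even this crude rule illustrates the point made above: the alert does not stop the second flow, it just tells the security staff that something crossed the cyber perimeter that should not have.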

Andrew’s post is well worth reading. He explains the concepts in terms that most non-specialists can easily understand. You won’t be able to design or implement an NIDS after reading this post (unless, of course, you could beforehand), but you will find it easier to understand your local geek (employee or consultant) when they explain why you need to spend your hard-earned profits on another non-productive security process. Unproductive, that is, only as long as you don’t think about the fact that without your industrial control system working as designed, you won’t have any widgets to sell.

PHMSA ANPRM – My Comment Submission

As I noted in Saturday’s blog, PHMSA has published in today’s Federal Register an advance notice of proposed rule making (ANPRM) in support of their attempt to revise the regulations dealing with liquid hazardous materials being transported in on-shore pipelines. I am submitting the following comments on that ANPRM today.

PIH Chemicals

On October 18, 2010 PHMSA published an ANPRM in the Federal Register (75 FR 63774-63780) requesting public comments on potential modifications to the regulations concerning the safety of on-shore hazardous liquid pipelines. I would respectfully submit that the PHMSA regulations should take cognizance of the special hazards associated with poison inhalation hazard (PIH) chemicals as PHMSA considers their revision of these rules.

While, by definition, the leak of any regulated hazardous material potentially puts unprotected populations at risk, the specific hazards associated with PIH chemicals may result from much smaller quantities of these chemicals being released into the environment. Relatively small amounts of these chemicals require significant emergency warning and response activities that are not adequately reflected in the current regulations.

Above Ground Facilities

In the ANPRM PHMSA asks (75 FR 63775) “Should PHMSA promulgate new or additional safety standards for… [a]ny other pipeline facility used in the transportation of hazardous liquid by pipeline?” Because of the reactive nature of PIH chemicals, small leaks in underground portions of the pipeline are not as potentially dangerous to the population as the same sized leaks in above ground sections of the pipeline. The reaction of the PIH chemical with soil elements, including ground water, will reduce the potential inhalation hazards to unprotected civilians.

Anytime a portion of the PIH pipeline is above ground this protective action is absent. Thus any portion of a PIH pipeline above ground should receive special attention in the PHMSA regulations.

HCA and Transportation

For example, later on the same page of the ANPRM, PHMSA asks: “Should PHMSA amend the existing criteria for identifying high consequence areas [HCA], to expand the miles of pipeline included in an HCA?” The recent (July 2009) Tanner Industries leak near Swansea, SC demonstrates that releases of PIH chemicals near transportation routes pose a special hazard. If there had been a higher volume of traffic on US 321 that day there would have been a much higher number of fatalities.

Any time a PIH pipeline traverses an area near a major thoroughfare, the PHMSA regulations should provide treatment similar to the HCA provisions, even if the roadway is in an otherwise rural area. Once again, any above ground portions of the PIH pipeline in these areas will have an even larger potential effect.

Leak Detection

Most pipeline leak detection methodologies rely on measured changes in flow rates, pipeline pressures or other measurements internal to the pipeline. For the hazards posed by many hazardous materials this is probably adequate. Once again, the special nature of PIH chemicals does not appear to be adequately addressed using these techniques. PHMSA asks in the ANPRM (75 FR 63777):

“If PHMSA adopts new leak detection requirements, should there be different performance standards for sensitive areas? For example, should PHMSA require operators to install more sensitive leak detection equipment, such as externally-based systems, in those areas?”
Once again, I would like to suggest that any place where a PIH pipeline is above ground, externally based leak detection sensors are the only technology that would provide adequate warning of the relatively small leaks of PIH materials that could affect unprotected civilians.
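To illustrate why internally based methods struggle with small PIH releases, here is a rough Python sketch of a volume-balance leak check. All of the numbers are invented for illustration; the point is simply that a leak smaller than the combined metering uncertainty never trips the alarm, and with PIH materials even that small a release can injure people near an above ground segment.

# Rough sketch of an internal (volume-balance) leak check on a pipeline segment.
# The flow values and the uncertainty figure are invented for illustration only.
METER_UNCERTAINTY = 0.01  # assume +/-1% combined metering uncertainty

def leak_alarm(flow_in: float, flow_out: float) -> bool:
    """Alarm only when the in/out imbalance exceeds the metering uncertainty."""
    imbalance = abs(flow_in - flow_out) / flow_in
    return imbalance > METER_UNCERTAINTY

# A 0.5% loss of product stays below the 1% noise floor and never alarms,
# even though for a PIH chemical it could matter a great deal off-site.
print(leak_alarm(flow_in=1000.0, flow_out=995.0))  # False: leak goes undetected
print(leak_alarm(flow_in=1000.0, flow_out=975.0))  # True: 2.5% imbalance alarms

Externally based sensors (point gas detectors, for example) do not depend on the size of the imbalance relative to metering noise, which is why they are better suited to the above ground PIH situation described here.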

HAZCOM and ERP

PHMSA does not address issues related to hazard communication or emergency response planning in this ANPRM. I would like to suggest that PIH pipelines present such a serious potential hazard that the PHMSA regulations should specifically address both of these areas. While security professionals are concerned about publishing locations of these pipelines, anyone living within the zone of a potentially toxic leak from one of the many PIH pipelines needs to know:

● What pipelines are in the area,
● What actions to take in the event of a leak, and
● How they will be warned of a significant leak.
PHMSA should consider adding a hazard communication requirement to these regulations for PIH pipelines. Operators should be required to notify residents of the existence of an above ground PIH pipeline in the area potentially affected by a leak resulting in concentrations higher than the established short-term exposure limit for the specific PIH chemical contained in the pipeline. The exposure area would be determined using accepted dispersion modeling procedures, based on the maximum amount of PIH material that could be released before automated shut-off valves could effectively isolate the release and stop the flow of additional material into the environment.
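As a rough illustration of what ‘accepted dispersion modeling’ computes, the Python sketch below uses a bare-bones Gaussian plume estimate to find how far downwind a hypothetical ground-level release stays above an exposure limit. Every number in it is a placeholder, and many PIH chemicals behave as dense gases that require more sophisticated models; this is only meant to show the shape of the calculation, not to size a real notification zone.

# Very simplified Gaussian plume sketch for estimating how far downwind a ground-level
# release stays above a short-term exposure limit. Release rate, wind speed and the
# limit are invented placeholders; real hazard assessments should use accepted tools
# (including dense-gas models where appropriate), not this illustration.
import math

Q = 0.5          # release rate, kg/s (hypothetical)
U = 3.0          # wind speed, m/s (hypothetical)
LIMIT = 0.0005   # short-term exposure limit, kg/m^3 (hypothetical)

def sigmas(x: float) -> tuple[float, float]:
    """Briggs rural dispersion coefficients for neutral (class D) stability."""
    sigma_y = 0.08 * x / math.sqrt(1 + 0.0001 * x)
    sigma_z = 0.06 * x / math.sqrt(1 + 0.0015 * x)
    return sigma_y, sigma_z

def centerline_concentration(x: float) -> float:
    """Ground-level centerline concentration (kg/m^3) for a ground-level release."""
    sigma_y, sigma_z = sigmas(x)
    return Q / (math.pi * U * sigma_y * sigma_z)

# Walk downwind until the concentration drops below the exposure limit.
x = 100.0
while centerline_concentration(x) > LIMIT:
    x += 100.0
print(f"Concentration falls below the limit roughly {x:.0f} m downwind")

The resulting distance, combined with the maximum release quantity before valve isolation, is the kind of number that would define who gets the direct mail notification discussed below.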

To avoid the potential security problems of posting pipeline locations on the internet, an older technology, direct mail notification, could be used. This would give the people with a very real need to know the information they need to protect themselves. Pamphlets used in the notification process should include information on what actions to take in the event of a leak, a detailed description of shelter-in-place procedures, and evacuation procedures (including the location of shelters).

The regulations should also include a specific emergency planning requirement for each above ground section of PIH pipeline. The plan would include the hazcom notification requirements discussed above, the leak identification and notification procedures, and a requirement to provide designated medical providers serving the potential above ground leak zone with medical treatment information for the PIH. Copies of the plans should be submitted to the PHMSA Administrator for approval and copies provided to the local FEMA office for coordination with local emergency response agencies.

PHMSA is to be commended for its willingness to address the issues associated with on-shore hazardous liquid pipelines. Thank you in advance for your consideration of my comments.

Sunday, October 17, 2010

Another Detailed View of Stuxnet

Thanks to Bob Radvanovsky at the SCADASEC List for pointing me at a new source of detailed information about Stuxnet. He recently posted a link to a blog at ESET.com that is covering Stuxnet in some detail, in fact they recently revised a major paper on the malware.

In many ways this paper is similar to the Symantec paper I recently discussed; it is a very technical discussion of how Stuxnet operates on MS Windows® systems. It takes a slightly different tack on the discussion, so it complements that paper. More importantly, it includes information on the recently patched escalation of privilege (EOP) vulnerability that previous papers had to dance around. It also has a brief and deliberately vague discussion of the second EOP vulnerability that has yet to be patched. If for no other reason, this paper is worth downloading by the technically minded just for the discussion of the EOP vulnerability.

I was hoping that I would be able to give the SCADA users in the audience a reasonable explanation of the keyboard EOP vulnerability that was patched this week by Microsoft (MS10-073). I read the portion of the paper dealing with that vulnerability and quickly got bogged down in the details. Fortunately, Randy Abrams had a really good explanation in his post on the ESET.com blog:

“A flaw in the software that translates what you type on the keyboard to letting the computer know what that was allowed Stuxnet to have more control over the infected computer than it was supposed to be allowed.”
In fact, he does even better in discussing the two latest Microsoft patches for vulnerabilities used by Stuxnet:

“For a normal user, all of the pictures of computer code don’t really matter. What is critical for you to understand is that if you do not apply the recent Microsoft security patches, anyone can hijack your computer using the print spooler attack and the privilege escalation attacks.”
That’s good advice, even if you have to shut down a process to get the patch installed. Of course, there is still one other EOP vulnerability used by Stuxnet that is not yet patched, so many will want to wait for that patch before shutting stuff down, so that all of the holes can be patched at one time. Of course, there will be other ‘zero day’ vulnerabilities coming down the road too, with their associated patches.

That’s one of the problems with using standard IT-type security measures on SCADA systems; they can be a lot more time-consuming, expensive and painful to implement.

Saturday, October 16, 2010

PHMSA ANPRM – On-Shore Hazardous Liquid Pipelines

In Monday’s Federal Register (published on-line on Saturday) the Pipeline and Hazardous Materials Safety Administration (PHMSA) is publishing an advance notice of proposed rule making (ANPRM) dealing with the safety of on-shore hazardous liquid pipelines. PHMSA is considering updating or changing their regulations in six areas. Those areas include (75 FR 63774-5):

• “Scope of the pipeline safety regulations and existing regulatory exceptions,
• “The criteria for designation as a High Consequence Area (HCA),
• “Leak detection and Emergency Flow Restricting Devices (EFRD),
• “Valve spacing,
• “Repair criteria in non-HCA areas, and,
• “Stress corrosion Cracking (SCC).”
PHMSA is seeking public comment on the need, scope and extent of the changes in these six specific areas. Public comments may be posted to the Federal eRulemaking Portal (Docket Number PHMSA-2010-0229). Comments need to be posted by January 18, 2011.

Friday, October 15, 2010

Fake Stuxnet Cleaner

As if Stuxnet were not enough of a problem by itself, now Symantec is reporting that they have found a Trojan ‘in the wild’ that is claimed to clean the Stuxnet infection from a computer. They report that this ‘free software program’ is being offered under the bogus name of “Microsoft Stuxnet Cleaner”. Symantec has tested the program and it wipes out the content of the C Drive and changes file extensions for .exe, .mp3, .jpg, .bmp and .gif so that the files cannot be opened using standard software. Symantec has labeled this as “Trojan.Fadeluxnet”.

First, that this is apparently being offered as a ‘free software tool’ should be a major warning. Second, a legitimate free Microsoft anything would only be available from a Microsoft web site. Finally, a “Microsoft Stuxnet Cleaner” would most likely be included as part of a standard Windows update package, not offered as a stand-alone product.
 