CyberCompare

Penetration tests at companies with key operational technology (OT and ICS)

What penetration tests are and how they protect your organization

Why is the topic relevant? 

Regular security audits by “white hat” hackers or penetration testers are now part of daily business for many CIOs, and they are a standard offering from many IT service providers. However, with the rising threat level at industrial companies (from ransomware, for example), protecting “Industry 4.0” operational technology and networked devices is also coming into focus.

This applies equally to plant manufacturers and producers of “smart devices” whose products are networked at their customers’ sites, as well as to in-house production and building technology at, for example, automotive suppliers, chemical plants, utilities, or pharmaceutical companies. Testing software code is as much a part of this as testing hardware and network configurations for vulnerabilities. 

What can or can’t a penetration test do, and how does it fit into the security strategy?  

The technical focus of penetration tests with no social engineering aspects is on attempts to penetrate the target network via open or unhardened interfaces.  

They are not suitable for (for example) reviewing general security management or, more specifically, recovery capabilities in the wake of an attack.  

Penetration tests serve as independent validation of a variety of security measures. 

They can, in particular, be used to detect critical security vulnerabilities. 

For example, they are also required within the IEC 62443 framework for component manufacturers, system integrators, and operators of industrial plants. 

What is different about penetration tests for OT versus tests for pure IT environments? 

Many methods and procedures are the same as for organizations with purely office IT (or as when operations and automation technology are simply left out of scope).  

For example, specialized operating systems such as Kali or Parrot Linux are used, or applications such as Wireshark for recording data traffic.  
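
To make this more tangible, here is a minimal sketch of how traffic on a mirrored port could be recorded and summarized with pyshark, a Python wrapper around tshark; the interface name and the Modbus/TCP port filter are assumptions and would need to be adapted to the environment.

    # Minimal sketch: passively record Modbus/TCP traffic with pyshark
    # (assumes Wireshark/tshark and the pyshark package are installed;
    # the interface name and the port filter are placeholders).
    import pyshark

    capture = pyshark.LiveCapture(
        interface="eth0",            # adapt to the SPAN/mirror port in use
        bpf_filter="tcp port 502",   # Modbus/TCP commonly uses TCP port 502
    )

    # Print a one-line summary for a limited number of packets
    for packet in capture.sniff_continuously(packet_count=100):
        print(packet.ip.src, "->", packet.ip.dst, packet.highest_layer)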

However, there are also significant differences: 

  • The failure or malfunction of cyber-physical systems can have significant consequences, including personal injury. This means that “just trying it out” is usually not a good strategy. In fact, a good deal of time usually needs to be spent on understanding the underlying processes and procedures.
  • Real-time systems, the inclusion of operational technology across several levels, fieldbus protocols such as Profibus, BACnet, Modbus/TCP, DNP3, or HART, and proprietary controllers enlarge the playing field and increase complexity enormously. To put it somewhat polemically: penetration tests of pure office IT architectures are like Nine Men’s Morris; tests of combined architectures with OT and networked devices at customer sites are like chess. 
  • Network and vulnerability scanners such as Nmap or Nessus (used, among other things, to search for default passwords) should be applied very carefully: even at level 3 (typically Windows machines or Linux servers), malfunctions cannot be ruled out. Some PLCs automatically lose their warranty, “crash” into a denial-of-service state, or trigger firmware updates when hit with a standard port scan. Redundant systems are often not fully identical, which means, for example, that a scan of the standby server runs without any problems while the same scan crashes the active server. Even simple PowerShell scripts or additionally written log files are system changes that can lead to unforeseen disruptions. Nevertheless, scans must be carried out; how and when is the core piece of know-how here (a sketch of a deliberately throttled scan follows after this list). 
  • There are more stakeholders to coordinate: production managers, for example, must be informed and involved, and process engineers and plant operators are an integral part of the extended team. 
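
To illustrate the point about cautious scanning, the following sketch uses the python-nmap wrapper to run a deliberately throttled, connect-only scan against a single, explicitly approved test host during a maintenance window. The host address, ports, and rate limits are placeholder assumptions, not a recommendation for any specific plant.

    # Minimal sketch: a deliberately throttled scan of one pre-approved OT test host
    # (assumes the python-nmap package and the nmap binary are installed;
    # host address, ports, and rate limits below are placeholders).
    import nmap

    scanner = nmap.PortScanner()
    result = scanner.scan(
        hosts="192.168.10.20",    # one approved host, never a whole subnet
        ports="102,502,44818",    # example ports: S7comm, Modbus/TCP, EtherNet/IP
        arguments="-sT -Pn --max-rate 5 --scan-delay 200ms",  # TCP connect only, very slow
    )

    for host, data in result["scan"].items():
        for port, info in data.get("tcp", {}).items():
            print(f"{host}:{port} {info['state']} ({info.get('name', '')})")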

How to find and use penetration tests in practice

As a customer, what should I consider in order to “get the most out of the penetration test”? 

It always depends on the individual case, but a few recommendations generally hold true: 

  • Naturally, to preserve an independent check, you should not hire the IT service provider that is already responsible for your security. This should be self-evident, but it is not. An interesting approach here: two service providers can be engaged in parallel or one after the other, and the results compared.
  • Targets should be defined honestly. What do we really want to achieve? Don’t we already know what we should do first? If systems are breachable via Shodan and an open Telnet port, for example, and the network is flatter than a pancake, I can save myself the penetration test (a minimal Shodan query sketch follows after this list). 
  • White box testing is generally much more efficient than black box testing. If the penetration testers initially need a lot of time to search for information, everything simply becomes that much more expensive. Of course, real hackers may also have a harder time obtaining information on network structure and organization, and, in the best case, may simply shy away from the effort required. But if we are going to spend money, it should not be to simulate the best-case scenario and then congratulate ourselves that we already have “superior” protection. 
  • To this end, network plans should be updated in advance, and, ideally, diagrams of the communication paths between IT and OT should be prepared as well. 
  • Overviews of user rights should be available, particularly for remote attacks on OT systems at levels 1 to 3. 
  • The relevant subject-matter experts must allow sufficient time for briefings. 
  • The COVID-19 pandemic has shown that most penetration tests and security assessments can be performed remotely and with sufficient depth (exceptions prove the rule).  
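
As a hedged illustration of the Shodan point above, the following sketch queries the Shodan API for externally visible Telnet and Modbus services before commissioning a full penetration test; the API key, the organization filter, and the search terms are placeholders.

    # Minimal sketch: check Shodan for externally visible Telnet/Modbus services
    # (assumes the "shodan" package and a valid API key; the organization name
    # is a placeholder).
    import shodan

    API_KEY = "YOUR_API_KEY"  # placeholder
    api = shodan.Shodan(API_KEY)

    for query in ('org:"Example Corp" port:23', 'org:"Example Corp" port:502'):
        results = api.search(query)
        print(f"{query}: {results['total']} exposed hosts")
        for match in results["matches"][:5]:
            print("  ", match["ip_str"], match.get("port"), match.get("product", ""))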

What kind of vulnerabilities are typically found, and which of these should be prioritized? 

As always, the following applies: the next euro spent after the penetration test should go where it mitigates risk the most. 

It is best if the penetration test partner makes a proposal for this as a basis for discussion (otherwise, it is necessary to take the laundry list of vulnerabilities and discuss prioritization with the provider). 

The vulnerabilities identified naturally span the entire NIST framework. 

In most cases, however, companies that subject their production environment to testing have already done their homework on key aspects beforehand (e.g., asset and software inventories for level 2 and above, at a minimum, or the implementation of password policies). 

  • Startup plan: Often there is no clear plan for restarting after individual system outages (beyond an initial reaction plan). How long can I allow a system to be down? That can vary quite a bit. Common findings naturally include outdated operating systems (Windows NT, Windows 7, etc.) for which it is not clear what would happen in the event of an outage (key points: the hope principle and single points of failure). 
  • 2FA: Often there is no form of multi-factor authentication for higher-level control units. In fact, almost all penetration testers agree on this point: comprehensive two-factor authentication makes gaining access so difficult that only extremely motivated attackers will make the effort.  
  • Network architecture: Do network segmentation and “demilitarized zones” (DMZs) actually work? Clearly not, if network sniffing in the enterprise IT network picks up industrial-protocol traffic, or if there is write access via terminals in the DMZ (a minimal sniffing sketch follows after this list). Using the same physical network switches for IT and OT is also a major vulnerability. In addition, cloud connectivity for ICS systems is often necessary today, but these connections are usually not sufficiently secured against attacks. 

  • Realistic (prioritized) patch management is often lacking in OT environments. As a general rule, priority should be given to IPCs, PLCs, and SCADA systems as well as servers at Purdue levels 2 and 3. Sensors, actuators, and controllers, and most of the industrial protocols between them, are usually insecure by design, and software patches involve a lot of system-testing effort. Dale Peterson, for example, offers pragmatic prioritization methods with ICS-Patch.  
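
To make the segmentation point above more concrete, here is a minimal sketch that passively listens on the enterprise IT side and flags Modbus/TCP traffic, including write function codes; the interface name is an assumption, and only one of the protocols named above is covered.

    # Minimal sketch: passively flag Modbus/TCP traffic seen on the enterprise IT
    # network, which would indicate leaky IT/OT segmentation (assumes scapy and
    # libpcap; the interface name is a placeholder; only Modbus/TCP on port 502
    # is covered here).
    from scapy.all import sniff, IP, TCP, Raw

    # Modbus function codes that modify state (coil/register writes)
    WRITE_CODES = {5, 6, 15, 16}

    def check_packet(pkt):
        if pkt.haslayer(IP) and pkt.haslayer(TCP) and pkt.haslayer(Raw) and pkt[TCP].dport == 502:
            payload = bytes(pkt[Raw].load)
            if len(payload) >= 8:          # MBAP header (7 bytes) plus function code
                function_code = payload[7]
                kind = "WRITE" if function_code in WRITE_CODES else "read/other"
                print(f"Modbus/TCP on IT network: {pkt[IP].src} -> {pkt[IP].dst}, "
                      f"function code {function_code} ({kind})")

    # Listen passively on the (assumed) IT-side interface
    sniff(iface="eth0", filter="tcp port 502", prn=check_packet, store=False)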

Recently, an interesting interview was published on this subject with SANS instructor and long-time industry professional Justin Searle. 

In it, he offers a few more tips for better OT protection: for example, using two separate identity management systems for IT and OT, documenting all possible connections (including existing side channels) between IT and OT, and ensuring that there is enough skilled personnel to implement the identified measures in a timely manner. 

How do things look with regard to automated solutions? 

Providers of automated penetration tests advertise that manual efforts that typically take many days or weeks can be pared down to just a few hours. The prices are attractive, especially if one wants to conduct vulnerability tests more often than just once every leap year.  

And even if conventional consultancies rightly point out that human creativity cannot be replaced, it is also clear that automated processes based on up-to-date databases can run checks that are more thorough than a single human or a small team could carry out. This is especially true at global organizations. We all know the history of chess: At first, the human was superior to the machine; then a human aided by a computer won out; and today, even grandmasters don’t stand a chance against AI algorithms.  

Manual penetration testing teams (and real attackers) also rely extensively on automated tools to find vulnerabilities. 

So perhaps the question is: How high is the level of test automation, and how much of the testing effort can in-house employees handle? In industrial environments, at least, it currently appears that a fully automated penetration test without a team offers significantly less insight into potential attacks. But, naturally, this may change over time.  

What providers are recommended? 

A Google search for “penetration test” delivers more than one thousand hits, depending on the day. 

And, logically, all providers advertise with customer references. 

Who should one choose? As always, the general answer is: it depends. 

Here are a few questions that, in addition to a structured RFP process, may help with the selection process: 

  • Are resumes provided for the team members who actually perform the tests? 
  • What industrial controllers, firmware, protocols, and real-time operating systems does the vendor’s specific team really have experience with? 
  • Are proprietary tools used, and, if so, for what purpose (usually a good sign)? 

Are OT and IoT security relevant issues for your company? As an independent entity with a portfolio of proven security providers, CyberCompare can provide you with comparative offers at no charge and with no obligation. Reach out to us or use our diagnostic to learn more about your cyber risk profile.

Please remember: this article is based on our knowledge at the time it was written, but we learn more every day. Do you think important points are missing, or do you see the topic from a different perspective? We would be happy to discuss current developments in greater detail with you and your company’s other experts, and we welcome your feedback and thoughts.

And one more thing: the fact that an article mentions (or does not mention) a provider does not represent a recommendation from CyberCompare. Recommendations always depend on the customer’s individual situation.