Examples of this longstanding problem are abundant. Late last year, it was discovered that malware known as BlackEnergy had been targeting industrial control systems — specifically, human-machine interface (HMI) products — of companies in the utilities sector, installing backdoors as far back as 2011. Despite an advisory from the Department of Homeland Security’s Industrial Control Systems Cyber Emergency Response Team (ICS-CERT), patches took months to produce, and some only became available recently.
This is concerning. But more concerning is the fact that it will take many more months, or even years, before most companies apply these patches to their systems, if they ever do. This reflects a deeper issue: industrial systems were never designed for the constant updates and patches that securing systems in the modern world requires.
Perhaps the best example of how far these systems lag behind modern security requirements is the fact that none of the widely used industrial control protocols even support authentication, let alone encryption. This all ultimately means that once attackers are in a network, they can remain in systems for years, as illustrated by the BlackEnergy campaign. It would be simple for attackers to remotely execute commands to massively disrupt critical organizations such as energy providers once already in the network.
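Modbus, one of the most widely deployed industrial control protocols, illustrates the point: a complete, valid command frame contains no credential, session token, or signature of any kind. The sketch below builds a Modbus/TCP "Write Single Coil" request (function code 0x05, the command used to switch an output on or off) from the public protocol layout; the function name and parameters here are illustrative, not from any particular library.

```python
import struct

def modbus_write_single_coil(transaction_id, unit_id, coil_addr, on):
    """Build a raw Modbus/TCP Write Single Coil request frame."""
    # PDU: function code 0x05, coil address, value (0xFF00 = ON, 0x0000 = OFF)
    value = 0xFF00 if on else 0x0000
    pdu = struct.pack(">BHH", 0x05, coil_addr, value)
    # MBAP header: transaction id, protocol id (always 0),
    # remaining length (unit id + PDU), unit id
    mbap = struct.pack(">HHHB", transaction_id, 0, len(pdu) + 1, unit_id)
    return mbap + pdu

frame = modbus_write_single_coil(transaction_id=1, unit_id=1,
                                 coil_addr=0x0010, on=True)
# The entire request is 12 bytes -- header plus command.
# There is no field anywhere for authenticating who sent it.
print(frame.hex())
```

Anyone with network reach to a Modbus device can send such a frame and, absent compensating controls like network segmentation, the device will simply execute it.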
Healthcare organizations are also a prime target. With the industry’s wide and comparatively open infrastructure, a constant stream of new connected devices gives attackers footholds into otherwise secure networks. Even when integrated into industry-standard healthcare security suites, these devices remain both a source of attacks and a pivot point from which attackers compromise broader healthcare systems.
The IT staff at hospitals have no insight into what software is actually running on medical equipment. Understandably, the equipment manufacturers do not want unintended changes made to the configurations of these devices. But the result is opaque devices that customers cannot verify, even as manufacturers label them “fully secure.” In reality, the only reason these devices are still considered secure is that no one has tried to compromise them. Yet.
Airplanes are a great example of this last point. Until recently, the average person would not even consider that “hacking” an airplane was possible. Yet, when Chris Roberts ended up in the news for making a plane fly sideways (or so the FBI seems to claim), security researchers began to examine all the ways someone might actually be able to interface with aircraft systems.
Airplanes increasingly have satellite or cellular communications links to the ground, and there is a rapidly growing trend of airlines offering some form of in-flight Wi-Fi, whether for access to the Internet or general in-flight entertainment systems. While it remains to be seen whether any of those communications paths could actually result in a successful attack on critical flight systems, they are all possible attack vectors that did not exist even a few years ago.
Moreover, almost all of the avionics systems connected to these communications paths run a combination of off-the-shelf and proprietary software. As with industrial and medical systems, patches are rarely made available and, when they are, it can take months or years until they are applied. It is only a matter of time until we start finding malware at 30,000 feet.
So what can we do to avoid and overcome these problems as devices, gateways and software solutions permeate older infrastructures and become avenues for attack?
More than two years before coming under FBI questioning about possibly hacking into the in-flight entertainment system of a commercial plane while it was in midair, a security researcher told peers he accessed the computer controls of other highly sensitive aviation and aeronautics systems, including the International Space Station.
The 2012 talk – titled By Land, By Sea, By Air – has already touched off howls of protest from some researchers who say even the passive accessing of restricted parts of a plane while it’s in flight is grossly reckless. Critics also argue the behavior would likely be a violation of the Computer Fraud and Abuse Act, which makes it a felony to gain unauthorized access to protected computer systems.
The more information that is revealed about Chris Roberts, the more I wonder whether this is part of an elaborate smear campaign to paint him as a rogue, untrustworthy, dangerous security researcher.