
#15: Cybersafety


Guests

Stuart Madnick, founding director, Cybersecurity at MIT Sloan

Shaharyar Khan, cybersecurity researcher, Sloan School of Management


Transcript

Emily Dahl: Imagine you’re a system operator for a network of power plants. You ensure that electricity is being delivered reliably to hundreds of thousands of homes and businesses. One day, as your team is looking at the power being used across the system, you suddenly can’t access the data you were viewing. At first, you don’t know what’s happened. Then you realize that your system has been the victim of a cyberattack that could lead to blackouts or other disruptions to electricity delivery. How did this happen? How could it have been prevented? And what can you do now? These are the questions Stuart Madnick and Shaharyar Khan, our two guests on today’s show, answer for the companies and organizations they work with, to identify cybersecurity risks and prevent attacks. From MIT, this is the Energy Initiative. I’m Emily Dahl, and I’m here today to talk about cybersafety in the energy sector with Professor Stuart Madnick, founding director of Cybersecurity at the MIT Sloan School of Management, and Shaharyar Khan, a cybersecurity researcher at the Sloan School. Welcome to the podcast. Let’s start with the situation I was just describing. How common are these kinds of cyberattacks on our energy infrastructure, and what’s the biggest concern here?

Stuart Madnick: First, the situation you described is very similar to an event that happened fairly recently, in May of 2019, that affected a significant portion of the West Coast of the United States. Events like this do occur. So far, they’ve been largely sporadic and isolated. Our concern in our research is that these things are highly scalable, and therefore we worry about the extent to which they are growing over time.

ED: Is it the big power plants that we need to worry most about? What about smaller scale facilities? Do we need to be concerned about those as well?

Shaharyar Khan: Essentially, all power plants have similar architectures when it comes to how they operate. They consist of some kind of programmable logic controllers, controllers which handle the lower-level operations. On top of that, they are controlled by higher-level supervisory control systems. The architecture is fairly identical. The communication protocols are fairly identical. The equipment is fairly identical between larger and smaller plants. It’s just the scale that changes. Here we also have to consider something else, which Professor Madnick often talks about, which is motivation. While bigger power plants are more visible, a bigger prize to get… smaller plants could be a strategic target as well. It depends on the motivation. In addition to that, in our research we found a number of vulnerabilities associated with variable frequency drives. These are the small devices which are used to change or alter the speed of motors. For one of them, certain variable frequency drives can be run in reverse by changing just one parameter within them from a one to a minus one. If something like that were to happen, imagine the impact it would have on your processes if a particular motor starts running in reverse. The other thing is that these issues are not limited to large-scale plants or small-scale industrial facilities. Recently, I was reading a report by Ben-Gurion University where researchers found that smart sprinkler systems could also be hacked. They claim that 1,350 sprinklers, if turned on and off remotely at the wrong time or in a synchronized manner, could essentially empty out an urban water tower, and that if 24,000 of them were synchronized together, an entire flood water reservoir could be emptied out overnight.
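To make the single-parameter point concrete, here is a minimal sketch in Python of the kind of baseline check a supervisory layer could run to catch a flipped direction setting. The parameter names and values are invented for illustration, not taken from any real drive’s interface.

```python
# Hypothetical VFD parameter block; "direction" = 1 (forward) or -1 (reverse).
EXPECTED_BASELINE = {
    "direction": 1,        # flipping this to -1 runs the motor in reverse
    "max_speed_rpm": 1800,
    "accel_ramp_s": 10,
}

def check_vfd_parameters(observed: dict) -> list[str]:
    """Return an alert for each safety-critical parameter that drifted."""
    alerts = []
    for name, expected in EXPECTED_BASELINE.items():
        actual = observed.get(name)
        if actual != expected:
            alerts.append(f"VFD parameter '{name}' changed: {expected} -> {actual}")
    return alerts

# An attacker flips only the direction parameter; everything else looks normal.
tampered = {"direction": -1, "max_speed_rpm": 1800, "accel_ramp_s": 10}
for alert in check_vfd_parameters(tampered):
    print("ALERT:", alert)
```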

ED: When you’re talking about motivation, do you see there being an increase in motivated cyberattacks? Is there an increasing impetus for these attacks to occur?

SM: I think there are several issues here. One of the things we think is very important, of course, is awareness. This is a key issue. You asked earlier, what is it we can do? I think it’s very important both for the operators of such facilities and for the general public to realize that these great capabilities we have today of controlling enormous facilities can sometimes be turned against us. I think this is the most important message to get across: be aware of how these things can be abused. I’m of the personal opinion that these are all training exercises, that a lot of these attacks are just to demonstrate the capability. Some of the more worried people in our industry feel that the amount of destruction that could be done, if someone really was eager to do so, could be enormous.

ED: It certainly sounds like it. It’s good that there are people who are thinking about these kinds of things. I’m curious, what is it that first got you into thinking about cybersecurity, specifically in the energy sector?

SM: Two things. It turns out, I had almost forgotten about it, that I’ve been working on cybersecurity issues for a long time. I actually co-authored a book in 1979 called Computer Security. In fact, many of my students don’t realize we had computers back in 1979. It’s always been of interest to me. But in the past decade, computers have started appearing basically everywhere. My wife recently bought an electric toothbrush, not realizing it has a Wi-Fi-connected computer in it that can report her dental habits to her iPhone and, on request, can send those reports to her dentist. With this widespread computerization, an Internet of things, if you will, the amount of dependency we have, the amount of risk we’re taking on, has increased enormously, and that’s motivated us to really pay attention to it.

ED: Shaharyar, how did you start working on this?

SK: My experience has been fairly recent. Before coming to MIT, I was working as a lead project engineer at a nuclear facility. My role was to deploy reactor inspection tools for reactor service and maintenance. Before that, I was designing components for nuclear power plants. During my time as an operator, I never really thought that cybersecurity was an issue, and that’s the scary part. It was only after I came to MIT, had a chance meeting with Professor Madnick, and he shared a paper that his team had recently written on cyber physical attacks, that I realized that cybersecurity impacts not only data and identity, but essentially could be used to cause physical damage. Primary among those attacks was Stuxnet, which fascinated me, and I went on and looked into it. There was another paper written about how to kill a centrifuge. Then I started thinking about my time back in the plant and the whole thing started coming together. I realized that this is something that is really very important and very critical. As I started looking deeper and deeper into it, I realized that the architectures are essentially identical. Whether it was the reactor inspection machine I had worked on, which also consisted of computerized control and some kind of feedback system with a human operator interfacing with it, or the small 20-megawatt gas power plant we later studied, or a larger power plant, they’re all essentially identical. That’s when it struck me that this is something that needs to be studied in more detail, and we really need to do something to secure these systems.

SM: This is one reason why it’s so valuable to have people like Shaharyar on our team. We’ve been studying cybersecurity, and most people have been looking at it from what I call the information technology perspective: how to prevent credit card theft and such. A lot of those same principles, with changes and adaptations, can be applied, and critically need to be applied, to these cyber physical systems, an area that, in many ways, has been understudied. As he indicated, the phenomenon is that many people in the industry hadn’t really thought about it. Having someone like Shaharyar is so valuable because we have to be able to translate what we’re learning here at MIT and explain it in ways that will have impact on these industries.

ED: You’ve actually developed a methodology that walks businesses through the process of identifying and preventing risks. I’d love to hear about that.

SM: Certainly. We call our method cybersafety. It actually builds on prior research at MIT called STAMP, which was focused on preventing accidents in industrial settings. We’ve adapted it to apply to cyber physical systems and cybersecurity threats to those systems. There are two key components to this methodology. The first one is identifying, what are the crown jewels? That is, what are the most critical functions and activities that the organization must perform, and what are the most serious losses that could occur? The second step is then identifying the control structures to make sure those losses do not occur.

SK: With the control structure, we basically mean that after we identify the critical processes, we look at who or what is controlling each process, and we don’t just stop there. We identify who or what is controlling that controller. We build it all the way up to government regulators and other higher-level controllers. This essentially helps us look at not just the process we’re concerned with but also the interdependency of that process with other systems. Overall, the method provides a very systematic approach to identifying vulnerabilities, or points of weakness, within your system.
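As a rough illustration of that idea, here is a minimal sketch, with invented controller names rather than anything from the actual cybersafety tooling, of walking a process’s chain of controllers all the way up to the regulator:

```python
# Each critical process has a controller, each controller has a controller
# above it, up to government regulators. Names here are illustrative only.
controls = {
    "turbine":              "plc",                   # physical process <- PLC
    "plc":                  "scada_server",          # PLC <- supervisory control
    "scada_server":         "plant_operations",      # SCADA <- human operators
    "plant_operations":     "corporate_management",
    "corporate_management": "government_regulator",  # top-level controller
}

def control_chain(process: str) -> list[str]:
    """Walk upward from a process to every controller responsible for it."""
    chain = [process]
    while chain[-1] in controls:
        chain.append(controls[chain[-1]])
    return chain

print(" -> ".join(control_chain("turbine")))
# turbine -> plc -> scada_server -> plant_operations -> ...
```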

SM: This is so important because this systematic approach is what’s lacking in most organizations. Time and time again I’ve heard of events, whether they be accidents or, in our case, cyber events, and the quote is, “This should not have happened.” The problem is that no one completely thought through what mechanisms should have been, or were, in place that were intended to prevent that from happening.

SK: I would just like to add to that, to differentiate between a systematic and a systems approach. The method provides a rigorous, systematic approach to identifying these vulnerabilities. On top of that, we are also taking a systems view. What that means is that we’re not concerned about individual components, we’re concerned about the entire system. There is a certain element of emergence, behaviors or properties which you can only recognize if you look at the overall system. To give you an example, if you’re going on the highway and you see a traffic jam, or a gridlock, that is an example of an emergent behavior of the highway. If you look at an individual car, you would not be able to predict that it’s in a traffic jam. However, if you look at the weather conditions, the size of the road, the time of day, all of those other conditions, you would be able to see this emergent effect.

ED: I’ve seen you describe this as a holistic approach, which it certainly sounds like. It sounds like a real discovery process for the businesses and organizations that you’re working with. How do they generally react to discovering these kinds of things about their vulnerabilities?

SM: One of the big issues I often like to stress: we’re in a wonderful age of discovery and invention. We’re developing all kinds of new technologies. Unfortunately, we tend to be so immersed in the value of these technologies that we don’t think about a) the ways in which they could be misused, or b) how dependent we are on these technologies and how disruptive things can be if they’re no longer available. That requires us to change our thinking to understand the bigger picture, the holistic picture, of the world we’re living in and how interdependent it has become. As humans, we tend to focus on the first order effects, but the second and third order effects can often be much more serious. Part of what goes on in this control structure that Shaharyar talked about is looking at what is controlling this, and what is controlling that, to see how these things are so interdependent.

SK: Just to add on the second and third order effects the professor is referring to, which are a result of the interdependency or the complexity of the system: we did one study on a power plant, and we found that an attack on the automatic voltage regulator of the generator would not only disrupt the direct output of electricity to the facility, but would have indirect effects, such as causing the chilled water systems to switch to an alternate source, electricity, instead of being driven by steam from the exhaust of the turbine. That would have additional impacts on the amount of electricity that is imported from the grid. Sometimes those things can compound and cross a threshold, which can result in larger scale damages that are essentially not thought about if you’re just looking at one piece in isolation.
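As a toy illustration of how indirect effects can compound past a threshold, the following sketch uses entirely invented numbers, not figures from the plant study:

```python
# An attack trips onsite generation (direct effect); the chilled-water system
# and other auxiliaries switch to electric power (indirect effects), and the
# combined loads push grid imports past an assumed limit.
grid_import_limit_mw = 15.0   # assumed contractual/protection threshold

loads_mw = {
    "lost_onsite_generation": 10.0,  # direct effect of the attack
    "chillers_on_electric":    4.0,  # indirect: steam-driven chillers switch over
    "other_standby_systems":   2.5,  # indirect: auxiliaries on alternate source
}

total_import = sum(loads_mw.values())
print(f"Required grid import: {total_import} MW (limit {grid_import_limit_mw} MW)")
if total_import > grid_import_limit_mw:
    print("Threshold crossed: compounded indirect effects exceed the import limit.")
```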

ED: If you were giving advice to any business right now, what would be the most important steps that they could take to protect their energy infrastructure?

SM: The first and most important step, not the only one, is awareness. By training, I’m an engineer. Obviously, many of the people in cyber physical systems are engineers. To some extent, their training does them harm, because they’ve all been trained to think about independent failures. You know you’ve got 10 generators. You know generators are mechanical and they may fail. One generator failing, that can occur. But the chance of two generators failing at the same time? Very unlikely. The chance of three of them failing at the same time? Very, very unlikely. But a cyberattack that knocks out generator number one can knock out generators two through eight at the same instant. A lot of the things we are trained to think and understand as engineers in many ways put us at a disadvantage. We have to understand the whole new world of cyberattacks. That’s step number one. Not the final step, the first step. Because if you don’t realize the threats you’re under, you’re unlikely to do anything about them. The cybersafety method then addresses what you can do to identify these risks and minimize them.
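The arithmetic behind that intuition is easy to make concrete. This sketch assumes a made-up 1% probability of any one generator failing independently, and a single shared control system as the common point of compromise:

```python
p_fail = 0.01  # assumed independent failure probability per generator

# Independent mechanical failures: the probability that k specific generators
# all fail together multiplies, so it vanishes quickly.
for k in (1, 2, 3, 8):
    print(f"{k} independent failure(s): {p_fail**k:.10f}")

# A cyberattack is a common-cause event: one exploit compromises the shared
# control system, so eight generators fail with the *same* probability as one.
p_exploit = 0.01  # assumed probability the shared controller is compromised
print(f"8 generators via one cyberattack: {p_exploit:.10f}")
```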

ED: As far as regulators and policymakers go, how should they be responding to cyber threats on energy systems, whether it’s domestic or international?

SM: That’s a big question. I’ll break it down into a couple of parts. First, in general, regulators tend to lag behind reality, because regulations typically take years. The first problem is that cyberattacks are evolving at an enormous rate compared to traditional threats to energy, so there’s a problem with regulators keeping up with events. The more important issue that we worry about is that organizations often treat regulation as the end result rather than what I call the minimum bound. In other words, regulation may tell you the things you must do to be at least somewhat attentive to security, but it almost always does not cover all the things you need to be concerned about.

SK: I often think about it from the perspective that, when something stops working and you’re in an operational environment, cybersecurity is the last thing on your mind, not the first. The first thing on your mind is, maybe there’s a sensor that failed, maybe we need to check the computer, maybe there’s something wrong in the feedback. The operator actually takes steps, either forcing flags to push through with what he thinks the computer should be doing at that time, or forcing permissives to make it take a particular action. But in any case, he does not necessarily consider that a cyberattack could be causing those kinds of things. In my opinion, these things can actually have very adverse effects. We take a very myopic view, and I think one of the problems is that the people who are making the policy do not necessarily understand the intricacies of the actual operational environment. They’re so far removed that at that moment in time, whatever the policy is, it sometimes does not apply or is not implemented properly, which ends up causing people to work around it.

ED: Stuart, you have an opinion piece in The Wall Street Journal where you’re discussing blockchain security. A lot of people see blockchain as a key component of future energy systems. What are the cybersecurity concerns there?

SM: First thing, blockchain, I think, is proposed to cure almost everything. I’m not sure it’s going to cure cancer, but for almost anything you could imagine, you see blockchain being mentioned. Which is why we thought it was interesting to talk about blockchain. As I commented earlier, blockchain, like many new technologies, has lots of advantages, which is why it has all of this attention. What has not been focused on, of course, are some of the risks associated with it. What we found most fascinating is that some of the things that make blockchain so appealing are actually some of the things that make it so dangerous. I’ll just give you three quick examples.

One of the things you hear about regarding blockchain is that it is distributed. That means it doesn’t run on just one computer but runs on hundreds or thousands of computers. If any one computer fails, the blockchain keeps running. That also means there is no off switch. If you’re running a stock exchange and something goes haywire on the stock exchange, what do you hear in the news? The stock exchange has been shut down until tomorrow, until the problem has been resolved. You can’t do that with a blockchain. In fact, this actually happened. Someone found a flaw in the blockchain software and exploited that flaw to start stealing money. The other people realized what was going on but they couldn’t stop it. What they did was form a team of “good guys” who used that same flaw to steal the rest of the money and redistribute it afterward. That’s an example where the feature of being nonstop actually was a disadvantage.

Another example I often use is transparency. That is, you can see the software that runs the blockchain. First, that’s critical because that’s how it’s able to be transferred to hundreds or thousands of servers. The logic behind it is that, because the software is readily available, people can look through it to find possible flaws and fix them. Unfortunately, the bad guy may look through the software faster than you do, find flaws that you have not yet discovered, and exploit them. Whereas in a traditional system, the software is kept as secret as possible, making it much harder for the bad guy to find flaws in it.

The third and last example I’ll give is anonymity. That is, on a blockchain you’re typically identified by your key, a pass code if you will. Think of it as a password, but it’s enormously long and something you would never ever guess. You’re not identified by your own name. Think of it like a Swiss numbered bank account. That means that nobody knows exactly who owns each of the assets on the blockchain. That has many valuable aspects. It’s also why it’s very popular for blackmail and for criminal activities as well. The more intriguing thing is that if you had a safe deposit box at a bank and you lost the key, or you died and nobody knows where the key is, you could go to the bank and they would arrange for a locksmith to come in and pick the lock or take a crowbar and open the safe deposit box. The trouble is, if the key of the blockchain is lost, there is no way to break into it. In fact, that happened. The executive of a cryptocurrency exchange died. As the headlines report, $137 million are locked and nobody has the key.

As I mentioned, blockchain can be used in many applications. We, in fact, in our own research group, are looking at blockchain’s role in energy systems. The reason is that you want to have a way to distribute updates, in this case updates to what we call a whitelist and a blacklist: who the energy system should allow to access it, and who it should not allow to access it. We’re using a blockchain as a way to distribute that across all the systems simultaneously. We are fans of using blockchain, we just need to be cautious about the ways it can be misused.
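As a conceptual sketch only, and not the research group’s actual implementation, the following shows the core of that idea: whitelist and blacklist updates appended to a hash-chained log, so every site can verify it received the same, untampered sequence.

```python
import hashlib
import json

def append_update(chain: list[dict], update: dict) -> None:
    """Append an access-control update, chaining it to the previous entry's hash."""
    prev_hash = chain[-1]["hash"] if chain else "genesis"
    body = json.dumps({"update": update, "prev": prev_hash}, sort_keys=True)
    chain.append({"update": update, "prev": prev_hash,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify(chain: list[dict]) -> bool:
    """Recompute every hash; tampering with any earlier entry breaks the chain."""
    prev_hash = "genesis"
    for entry in chain:
        body = json.dumps({"update": entry["update"], "prev": prev_hash},
                          sort_keys=True)
        if (entry["prev"] != prev_hash or
                entry["hash"] != hashlib.sha256(body.encode()).hexdigest()):
            return False
        prev_hash = entry["hash"]
    return True

ledger: list[dict] = []
append_update(ledger, {"action": "whitelist", "id": "operator-console-7"})
append_update(ledger, {"action": "blacklist", "id": "vendor-laptop-3"})
print(verify(ledger))                             # True
ledger[0]["update"]["action"] = "whitelist-all"   # simulated tampering
print(verify(ledger))                             # False
```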

ED: What about our personal lives? Because nearly everything we interact with today is digitally connected, how can we manage our own cybersafety?

SM: First thing, as I said, in many ways, awareness is so fundamental to all of this. As I often say to people, think about: what are the things you depend on? What are the backup mechanisms you have in mind? As a simple example, in my class I ask my students, how many of you have your photographs posted on Instagram or Facebook? In how many cases is that the only copy of the photograph you have? If that photograph disappeared, would that be of any concern to you? Usually there’s a bit of shock, like, could that ever happen? It’s a matter of awareness, number one. When we think about it in terms of energy, we as a society have become so dependent on energy that when things break down, we often have no fallbacks. Both as individuals and as societies, we need to think about our dependencies and our fallback plans, and what the fallback plans for the fallback plans are.

SK: One of the things that we learn from our cybersafety method is to look at redesigning and rethinking these control structures or control systems. The only sure way to not be vulnerable is to keep a system offline, not to put it on the Internet. But that’s not always possible. So one of the things that we also do within our cybersafety method is to bring in analog controllers, which are not hackable at all: controllers which have no communication capability and which are not changeable remotely. They’re more difficult to configure, but basically, you go back to the way we used to do things in the old days.

ED: Before we wrap up, is there anything else that you’d like to add?

SM: Let me end on a somewhat more positive note. I often say that my talks can come across as gloom and doom. But the point is, none of these risks would exist if we did not have all these fantastic advances. The goal is very simple. We want to take as much advantage of these great advances as we can, but do it in a way that does not increase our risks. We want to maximize the benefits. I think in many ways the techniques we’re developing allow us to do that.

ED: That’s great. Thanks both so much for being here.

SM: Thank you.

SK: Thank you.

ED: You can read more about Stuart and Shaharyar’s research in Energy Futures magazine, which you can find in this episode’s show notes at energy.mit.edu/podcast. Share your questions, comments, and show ideas with us on Twitter @mitenergy, and subscribe and review us wherever you get your podcasts. From the MIT Energy Initiative, I’m Emily Dahl. Thanks for listening.

