#13: Artificial intelligence



Josh Joseph, chief intelligence architect, MIT Quest for Intelligence

Nicholas Roy, director, The Bridge, MIT Quest for Intelligence



Francis O’Sullivan: From MIT, this is the Energy Initiative. I’m Francis O’Sullivan. Welcome to the podcast. Today we’re joined by two guests from MIT’s The Quest: Professor Nick Roy, director of The Bridge, and Dr. Joshua Joseph, chief intelligence architect. Nick, Josh, it’s an absolute pleasure to have you joining us here on the podcast today. For me, personally, this is going to be quite a different experience. As regular listeners will know, we tend to focus on specific technologies, a specific set of energy expertise. Though I know both of you guys have a lot of expertise in the energy space, I think the work that you guys are doing with The Quest, and with The Bridge in particular, and the transition of that fundamental work to industry, is broader. I think we’re going to have a very exciting and slightly different conversation. I’d like to kick things off just by asking both of you to reflect a little bit on The Quest, MIT’s effort to really understand intelligence. What does that really mean from your own personal perspectives? What does it mean, perhaps, in a more applied sense, going forward?

Nicholas Roy: The Quest really comes out of the fact that MIT has been a founding member of the AI enterprise from the very beginning. There’s a huge amount of AI research on campus, but over time it’s become a little bit diffuse. There are some really big questions to be answered out there that we could make real progress on if we can bring many faculty together, give them the resources to really dig in, and have them spend a fair amount of time really working together. I think energy and the impact of AI on energy and how AI can be used to understand energy consumption, that’s a good example of a really big question that we could bring people together on and really start making progress on. The Quest is really designed to do that: to bring people together, really focus their efforts, and also develop technologies that can be used to democratize AI, really bring AI to bear on non-AI problems and make it a lot easier for researchers at MIT and other places to work on those. So there are these two pieces. One is the basic research component—that’s called The Core—and the other is The Bridge, which is really the application of AI. Josh is our chief intelligence architect, trying to work out how to build those tools.

Josh Joseph: For The Bridge, one of the things—I guess I’ll speak for me personally for a second, and I think that’ll tie more into The Bridge—is that, as an AI researcher, very often, my feeling is, with a lot of the research, it feels pretty disconnected from real problems. I think one of the things that we think a lot about in The Bridge is, what are the sorts of AI methods and research and tools that can make some sort of concrete impact on the real world and a real problem? Energy is probably a really awesome example of something where we have these tools and it’s great that there’s research in this, but which tools really matter to energy, for example.

FO: A secular theme in energy today is digitization. If you were to ask somebody from the sector, they’d feel like, oh, well, suddenly we now have access to much higher resolution data, cheaper sensors, and so on. We see many sectors embracing the deployment of this kind of capability. But that’s almost always where it ends, or where it has been ending.

NR: Yeah, that data is almost a curse and a blessing, right? We have all this data, but then the question is what you do with it. What we see a lot of the time is that energy researchers have all this data and they end up having to team with somebody from AI who knows how to set up the models and run the algorithms and produce the answer. Then the energy researcher can interpret that and figure out what to do next. But that’s not a very efficient pipeline. If we can take that AI researcher out of the loop and make it a lot easier for a non-AI energy person to know what to do with the data and build the kinds of models that Josh… and I’ll let you talk about specifically what people might use.

JJ: To add a little bit onto that story, too, I think one of the things we’re really seeing is that it can be really hard for non-computer science researchers to even get computer science PIs very interested. Because there’s a pretty big gap between the AI methods and tools that would make a difference for a lot of these applications and where the state-of-the-art computer science research is. That can make it really hard to get a CS researcher to care a lot about the application, because often you don’t even need the state-of-the-art stuff. You just need someone who can ingest data, fit a straightforward model, and produce some sort of answers, some sort of insights, that you can then make a decision based on. But it’s hard to publish in computer science conferences based on that.
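The kind of straightforward workflow Josh describes, ingest data, fit a simple model, get an answer you can act on, often needs nothing fancier than a least-squares fit. A minimal sketch in Python; the sensor readings and the energy-versus-temperature framing here are hypothetical:

```python
# Minimal "ingest data, fit a straightforward model, get an answer" pipeline.
# Hypothetical example: relate a building's hourly energy use to outdoor
# temperature with ordinary least squares -- no state-of-the-art methods needed.

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var
    b = my - a * mx
    return a, b

# Hypothetical sensor readings: outdoor temp (C) and kWh consumed that hour.
temps = [10, 15, 20, 25, 30]
usage = [5.0, 6.5, 8.0, 9.5, 11.0]

slope, intercept = fit_line(temps, usage)
print(round(slope, 2), round(intercept, 2))  # kWh per degree, baseline kWh
predicted = slope * 22 + intercept           # forecast usage at 22 C
```

The insight a decision-maker needs (here, roughly how much consumption rises per degree) comes out of a model this simple, which is exactly why the application often holds little interest for a CS researcher chasing the state of the art.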

FO: It’s a really interesting analogue for me in the field of economics, actually. If you look at academic economists, in the pure economist sense, they’re often not interested at all in some questions that, in our world of a more applied economic context, are fundamental, hugely important, hugely valuable. I can see exactly the same challenge in the application of AI and machine learning at the frontier to these problems.

JJ: I can give you an example like that, even from our world. Yesterday, I was talking to a few neuroscientists who don’t have any real AI background, but are brilliant neuroscience researchers. They’re doing some sort of labeling by hand, like by a grad student. They’re labeling all this data to then publish research on. It’s like, well, have you ever just reached out to a CS PI? They’re like, “No, how would we… what, do we just send an email to one? What’s going to happen?”

NR: There are PIs who actually run the equivalent of office hours because there’s such a demand for that kind of expertise and they’re just like, “Yeah, I’ll be available from this time to this time to help with anybody who wants.” But that, again, is not very efficient. Imagine that those graduate students could just log on to The Bridge platform, when it exists, and then they see the example pipelines of how to do this and this is where you plug in your data. It’s like where we were with the web 20 years ago. You basically had to be a computer scientist to get a webpage. But now, you log on to WordPress or Tumblr and you don’t have to understand HTML and how the web works to get your own web presence. That’s where we want to get to.

FO: From my perspective, having spoken to a lot of folks in the energy sector, conventional and renewable energy, in the electricity sector, and increasingly even in mobility, one thing that stands out is that there is this awareness of the potentially powerful impact these kinds of techniques could have on their business. But they struggle with approaching this space, the discipline, and thinking about how to bring it in-house in a way that fits and can be culturally integrated. I think in the work that you guys are doing, particularly with The Bridge, this is obviously a fundamental element of being successful. Could you guys—and I’m interested in both of your perspectives—reflect a little bit on what you’ve seen, or your experiences in helping the users ultimately get comfortable and embrace the technologies?

NR: There are a bunch of things in your question. One is, what we are trying to do with The Bridge has resonated a lot when we’ve talked to external partners. There are a lot of organizations out there that have essentially the same problem. Maybe they don’t have AI PIs, but they have data analysts, a small number of them, and they’re the ones that are the bottleneck for all the other units of the company trying to do this. There’s a real need out there. Then the second part of your question is, how do people even know what they need and ask the right questions? I think that’s just part of MIT’s mission as an educational organization. We don’t necessarily need to train people on all the details of PyTorch and TensorFlow or what have you, but understanding what these tools can do and what they’re for, et cetera, I think is really important to what we’re trying to do.

JJ: I think one way that we at The Bridge are very concretely trying to attack this is a two-pronged approach. One is, what are the standard workflows, as we’ve started to call them, that you would put together, that you would build, that you would run as an AI researcher on a well-framed AI problem? Maybe you’re talking about image segmentation. Maybe you’re talking about forecasting from tabular data. There are a few pretty standard workflows you’d run through. I think there’s still this gap, like you were saying, between maybe domain experts, or even more business-oriented people, and being able to use that. The second part that’s really important here with The Bridge is almost a… consulting is the wrong word, but a hands-on approach: AI researchers from The Bridge who will show up and hand-hold and guide people through how to use these.

NR: But if we do our job well, we won’t even need that anymore. Long-term vision, the pie-in-the-sky dream, is to create a generation of AI-native students. The undergrads leave here already knowing a fair amount about computation. The next thing is, we want them to just inhale AI and become what people are calling AI-native, so that they understand from day one what these things are, what they can be used for. They may not know all the details, but they’re very comfortable with them.

FO: In your experience to date, if we look across the economy, aside from the computer science and tech space itself—though I’m interested in your reflections there, too—what sectors of the economy have most embraced the potential that AI and machine learning brings to their business? Are there those out there, sectors that you’ve seen from your own experience, that really feel like they’re not quite at the party yet?

NR: The first one is easiest, which is that you can look to see which organizations have invested the most in AI. It’s Internet ad companies—Google and Facebook. Apple’s not strictly speaking an ad company in the same way, but the Silicon Valley companies have really embraced it. One of the things that makes it easy for them to embrace is the fact that the risks to their business are not really embedded in the AI. I mean, obviously we’ve seen Facebook and Amazon have some spectacular failures—I guess Amazon’s failure wasn’t so spectacular, but it was interesting—in how they deployed AI. The sectors of the economy that are more dependent on correctness and certification are having the hardest time deploying it.

JJ: So healthcare, things like that.

NR: Healthcare. Healthcare, again, as long as the doctor’s in the loop, I think it’s actually okay. And it is being used. But I was thinking more… construction would be one where we haven’t seen as much AI. There are a couple of reasons for that. One, I think, is that the construction problem is not well-aligned with what AI can do right now. The other is that the cost of getting it wrong is really high in construction.

FO: That’s really interesting. Let me give you an example of something I’m familiar with. Take a major U.S. utility or wires company: they’re really interested in having a much, much higher-resolution view of the nature, the behavior, of their customers. Because forever and a day, an electricity customer was viewed effectively as having no elasticity at all with respect to demand. That’s the way the business has evolved, and today that’s changing. The potential to add elasticity is changing. But we remain in a situation where many of these companies have millions and millions of customers, and all of their usage pattern data, and have never even attempted to explore it. One of the key reasons is that the regulator, which shapes a lot of the energy business, says there are questions around privacy, data access concerns, et cetera. That, for me, feels like a big hurdle. I’m curious: in your work, have you come across that? How are sectors where there’s real potential trying to overcome what is, I think, a very valid and important concern that we have to manage?

JJ: I can answer this a little more concretely from two different sectors that I’ve seen a bit more hands-on, even from a little before I joined The Bridge. One is healthcare. There’s all of this really interesting… you have models making predictions, and we understand pretty well how we certify machines to do stuff when there are human technicians in the loop. But what happens as you start automating that technician’s job? Do you just view the computer that’s been automated as a human and run it through the same process? Do you say to the regulators, we’ve done our job, we’ve certified the machine the same way we’d certify a human, so we’re good? There are a bunch of really interesting questions we could get into on that. I think another really interesting industry where I’ve seen some of this is finance, specifically trading. A lot of it is around alternative data. Here you have hedge funds that are using everything from satellites, which people talk a lot about, to things like news, obviously, but also stuff like credit card data, which is very well understood and known as a thing. You have all of this interesting regulation around, well, there’s consumer data in there, but should these hedge funds have access to it? How should they use it? How do you use it safely? Those, I think, are a bunch of really interesting questions.

NR: These are things that we’re thinking about, too, in The Bridge. We don’t pretend to have the answers, but it’s pretty clear that we need to understand the issues. As we roll out these tools and we provide data to students and faculty and researchers on campus and the world over, we need to make sure we’re doing that in a responsible fashion. One of the things that’s, again, a curse and a blessing in the U.S. is that we don’t have a GDPR. That obviously gives the U.S. a lot more flexibility in some situations, but also doesn’t force us to actually think carefully about these issues. Europe is both further ahead and struggling with a lot of these issues. I think if we can watch what happens there, then we can learn some interesting lessons without having to go through some of the same pain.

FO: GDPR, just for our audience who may not know that particular abbreviation, is General Data Protection…

NR: … Regulation, yeah.

JJ: One of the other things that’s been really interesting with The Bridge is that we also think a lot about ethics and how that intersects the tools that we’re building, things like that. I think we’ve had some really great conversations, too, with the Berkman Klein Center at Harvard, and the Media Lab here at MIT has a lot of joint programs that think a lot about AI and ethics, and data privacy and usage is a really interesting component of that that we try to think really hard about. But it’s still a very immature thing.

NR: Josh and I have been likening it to how human subject experimental protocols evolved from the ’50s to now. If you look at the literature and you look at the Congressional bill that authorizes the use of human subjects, there’s a clear articulation of a defining principle, which is informed consent. So long as you have informed consent, and you do that properly, then everything else falls automatically from that. What’s the defining principle of AI research? I don’t think we know yet. My hypothesis, or our hypothesis, is that it’s transparency. As long as you understand what’s happening, what data is being used to support the decision, and how the model works, then everything else probably flows from that. But that’s a hypothesis. It remains to be seen.

FO: I think that’s really interesting, because in the energy space, take this utility data, for example, I think a tremendous proportion of the population would be pretty happy with that data being used, if there were a service being delivered that was of value to them. But you always have a subset that’s not comfortable with that. I think this question about informed consent then becomes a hurdle that’s going to be very hard to clear. That leads to the fact that, of course, today in cryptography there’s a lot of progress on things like zero-knowledge proofs, which really enable a lot of interesting new things to happen. I’m curious about your own reflections on innovative ideas for how to use some of those tools to unlock more of the potential that these data sets have.

NR: If the question is, what is the potential for innovation? The potential is huge. There’s clear need for all kinds of progress, technical and policy progress in lots of areas. Somebody said to me the other day that a good principle is also the principle behind Google Maps, which is that you can see your own individual data, and you can see the aggregate of everybody else’s data, but you can’t see the individual of everybody else’s data. That’s an interesting principle. I forgot the rest of the question, sorry. I’m going to hand it off to Josh.
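The access principle Nick describes, your own data plus only the aggregate of everyone else’s, can be sketched in a few lines of Python. The customer names and readings here are hypothetical:

```python
# Sketch of the "own data + aggregate of others" access principle.
# Hypothetical per-customer hourly usage readings (kWh).
usage = {
    "alice": [1.2, 0.9, 1.1],
    "bob":   [2.0, 2.2, 1.8],
    "carol": [0.5, 0.7, 0.6],
}

def view_for(customer):
    """What a single customer is allowed to see: their own series,
    plus only the averaged aggregate of everyone else's."""
    others = [series for name, series in usage.items() if name != customer]
    aggregate = [round(sum(col) / len(col), 2) for col in zip(*others)]
    return {"own": usage[customer], "others_aggregate": aggregate}

v = view_for("alice")
# alice sees her own readings...
assert v["own"] == [1.2, 0.9, 1.1]
# ...and only the average of bob and carol, never their individual rows.
print(v["others_aggregate"])  # [1.25, 1.45, 1.2]
```

A real system would enforce this boundary in the data store rather than in application code, and would likely add noise to the aggregate so small groups can’t be reverse-engineered, but the shape of the principle is this simple.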

JJ: One of the other interesting things we’ve seen, specifically from a group here on campus, is Sandy Pentland’s data trust initiative. What they think a lot about is, how do you let different people or different companies in a supply chain share data together, in a way where there might be proprietary insights in their specific data, but you share it behind a wall—and I’m likely butchering the very intense and complicated details of the stuff that they build. But I think those are the sorts of things that people think a lot about: how do you share data, and how do you do it respectfully? Then, with a lot of the blockchain stuff, there are obviously a bunch of really interesting solutions in there, like Enigma, or what Numerai is doing, as alternative ways of sharing data while protecting privacy.

NR: These are the kinds of things we want to do in The Bridge. Or not do in The Bridge but have The Bridge support so that as Sandy and his group work this out, we can take advantage of their research.

FO: That’s very interesting. With The Bridge and its explicit charge to deliver the benefits to the real world, you have to take into consideration the realities of the real world. How are you guys integrating the legal issues? Is that part of the process today? Are there lawyers involved in some of these kinds of discussions? The tools that you’re building, do you guys wrap them in that kind of legal oversight? Or is that not where we need to go at this stage?

NR: My view is that it’s certainly premature, so as of this day in December we don’t have lawyers involved. I think, again, looking to the example of how human experimental subject research is conducted, that also doesn’t involve lawyers. It involves oversight, but the regulatory framework is set up, and as long as you’re acting responsibly inside that regulatory framework, all is well. I could imagine the same thing being true for us. We just need to figure out what that framework is.

FO: That makes a lot of sense. Speaking of frameworks, the other issue that strikes me about all of this work is how ubiquitous it can be to all aspects of an operation or an enterprise. On the energy side, operational efficiency, and using data to do that, has been a thing for a while. But there’s also the longer-term planning strategy and the potential to use data there. There are challenges with ubiquitous data and with sparse data, and how you bring all of that together. Where are you guys in your thinking about distilling the more fundamental work into these different functions via The Bridge? Is that something that you’ve thought a lot about at this stage?

NR: I would actually say that, as you get into some of those more complicated questions, the distinction between The Core and The Bridge probably starts to get a little blurry. What’s wrong with sparse data? It doesn’t necessarily explain the entire phenomenon, but a good approach to filling in or compensating for scarcity of data is to have a good physical model. Where AI is right now, there isn’t actually a great working theory that connects machine learning with principled physical models. I mean, you can do it. I don’t want to say that we don’t know how to do it. On a case-by-case basis, for individual domains, we’ve shown how to do it, but a general working theory feels like something that The Core is really trying to work out, and then we can bring that in. Those kinds of questions are going to be a long way out, but very much core to the enterprise of The Quest overall. The other thing that you almost touched on, but didn’t, is operational efficiency. One thing we know for sure is that AI is extremely energy-intensive. Part and parcel of what The Core and The Bridge want to do is develop more energy-efficient AI algorithms that allow us to understand what’s going on without having to have giant data centers consuming huge amounts of energy, wherever it comes from. That’s a real problem for AI. It’s operationally a problem because it’s expensive to build these data centers and the energy’s expensive. And it limits the places that you can put AI.

JJ: It feels like some of that is maybe just a function of the fact that we’ve seen all of this growth in deep learning methods thanks to GPUs, but GPUs were never really built to do a lot of the deep learning stuff.

NR: Right, that’s exactly right.

JJ: I think what we’ve seen some with The Core, and hopefully will make its way into The Bridge, is as we get more and more specialized hardware that is specifically geared towards these methods, you’ll get a lot of that energy savings, rather than trying to hack them into a GPU.

NR: We view the energy enterprise, whether it’s generating energy or consuming energy, as really important to what we develop under The Bridge, because if we develop these tools badly, then there are long-term environmental implications, and we really want to get out ahead of that.

FO: That’s really interesting, because there are the environmental implications, but there’s also the cost implication, and the fact that the tool and its utility may not be embraced. I suppose that leads me ultimately to this question about the spectrum of applications and the spectrum of sophistication that we’re going to see. Throwing the kitchen sink at a problem with respect to all of the tools that you, Josh, have in your chief architect’s tool box…

JJ: It’s a big tool box.

FO: It’s a big tool box, exactly, an ever-expanding tool box. That’s going to be really exciting, but in many instances, I’m sure, it’s huge overkill. There’s this in-between space we’re losing out on, where some of these techniques would be really helpful, very beneficial, but will not be embraced if you only have that big effort. Where are we? Where are you guys in terms of exactly that distillation? When are we going to get economy AI available so that we can see that roll out? That, for me, is particularly important in the energy space, because there are a lot of places where we’d like more data, we’d like more intelligence, but the value of that kind of optimization at an individual node is very small. To do that, we really need these lower-cost solutions available.

JJ: I wonder if some of this makes that a little bit challenging to answer at first. AI can mean so many different things, and there’s so much in that bucket. There’s often that joke in AI research that once we understand something and implement it, we no longer think of it as AI. Like routing on MapQuest back in the day. That was AI search. I remember the first AI class I took from you, Nick. We learned search. We learned search algorithms.

NR: Nobody thinks of that as AI anymore, that’s exactly right.

JJ: Right. I wonder if it’s that same, little by little, these AI tools get integrated and then they no longer are AI tools because Siri’s responding to me, so that’s not AI. Well, Siri maybe still needs some help. But those sorts of things just happen step by step.

NR: Alexa might be AI. Siri’s still working on it.

JJ: Right. There was this really interesting paper that measured the “intelligence”—I’m using quotes that no one can see—of the different AI assistants. Did you see this, Nick? I forget the results, but I should go look at it. Alexa versus Siri versus Google Home. Anyway.

NR: To Josh’s point, a lot of times people ask, when are we going to get AI? There isn’t “an AI” to get. It’s like physics. When are we going to get physics? There’s a science or a study of AI and there’s individual pieces. I think, to your original point about economy-sized AI that’s in bulk and super cheap, that’s a really interesting question. That’s not something that we’ve thought about, but we actually should. Could you work out like a dollars per… I don’t know what the unit of inference would be.

JJ: Graduate student time? [Laughter]

NR: Could you work out the cost of different kinds of algorithms and then decide which ones provide the best value proposition for whoever is asking the question? I don’t know the state of the art in that particular area of the economics of AI. People are almost certainly looking at it, and I think that would be something we would want to bring to The Bridge.
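Nick’s “dollars per unit of inference” idea can be made concrete with a back-of-envelope calculation: energy per inference times electricity price. A sketch in Python, with every number hypothetical:

```python
# Hypothetical back-of-envelope for "dollars per unit of inference".
PRICE_PER_KWH = 0.12  # assumed electricity price, $/kWh

def cost_per_inference(joules_per_inference, price_per_kwh=PRICE_PER_KWH):
    """Convert the energy one inference consumes into a dollar cost."""
    kwh = joules_per_inference / 3.6e6  # 1 kWh = 3.6 million joules
    return kwh * price_per_kwh

small = cost_per_inference(0.1)     # e.g. a lightweight on-device model
large = cost_per_inference(1000.0)  # e.g. a large server-side model

# The value question: is the bigger model's answer worth roughly
# 10,000x the energy cost for whoever is asking?
assert abs(large / small - 10000.0) < 1e-6
```

Per-inference costs come out tiny either way; the comparison only starts to matter at the scale of millions of queries, which is exactly the “economy-sized AI” framing in the conversation.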

JJ: Another way I think about that, too, is you ask the question, how do we get this impact of AI? How do we get this impact of software on energy? That means so many different things. There are so many different ways that programming or any sort of automation will play out, but we have to talk about it at a much more granular level to get any real meaningful answer.

FO: To give you guys a specific example, going back to the electricity system, it has always been this one-way flow. Obviously today on the physical technology front, we’ve made a lot of progress with technologies like solar photovoltaics and storage, which are very modular, which allow these technologies and the service they provide to be deployed in a much more spatially disaggregated sense. Which has tremendous potential in theory for altering and improving the efficiency and the environmental footprint of electricity delivery. But one of the huge problems that exists in embracing that fully is, the electricity system is exactly that. It is a system, a singular system ultimately, that requires very careful balancing. Once you start introducing this proliferation of agents into the system, it becomes very difficult to actually manage that.

NR: That totally makes sense. Essentially, you’re going from a highly centralized system right now to a much more distributed system.

FO: Right.

NR: There’s no question that’s harder to manage and analyze in lots of different ways. However, there are real advantages in the sense that you’re much more robust. You no longer have single-point failures. AI’s kind of in the same place right now, too. A lot of people are putting everything into the centralized cloud. You don’t think of the cloud as centralized, and there is distribution and redundancy, but to most people it looks like one thing, whether it’s AWS [Amazon Web Services] or GCP [Google Cloud Platform] or whatever. There’s a real question to be asked about how much you should put in the cloud versus how much you should put at the edge. What are the cost/benefit tradeoffs there? That’s something that we’ve been talking to one of our partner organizations, IBM, about. They asked us the question: what should be in the cloud and what should be at the edge? And I don’t think we know.

FO: Josh and Nick, this, for me, has just been fascinating, absolutely fantastic. As I said at the outset, I was expecting this to be very different, and I’ve learned a lot. My takeaway from our conversation, reflecting on the energy space today, which is quite multifaceted, is that even looking at just bits of it, at the problems we see and the tools you guys are developing and delivering, it’s very clear that there’s going to be a long and happy relationship between energy and machine learning, artificial intelligence, and whatever it is after it’s deployed into the future.

NR: Whatever it is after it’s deployed, that’s…

FO: Whatever it’s going to be called after it’s deployed.

NR: It’s going to be fun either way.

FO: Yeah, you know what it’s going to be? It’s just going to be the future energy system. That’s just what it’s going to be.

JJ: Right, and we’re not going to even notice it then. It’s going to be like MapQuest.

FO: Exactly, that’s right. All right, guys, thanks so much.

JJ: Thank you, Frank.

FO: Show notes and links to this and other episodes are available at energy.mit.edu/podcast. Tweet us @mitenergy with your questions, comments, and show ideas, and please do subscribe and review us where you get your podcasts. From MIT, and from the MIT Energy Initiative, I’m Francis O’Sullivan, and thank you for listening.
