- Welcome to another episode of "Supply Chain Frontiers," the MIT CTL podcast where we explore the trends, technologies, and innovations shaping the future of supply chain management. Today's episode is hosted by Dr. Matthias Winkenbach, director of research at the MIT Center for Transportation and Logistics, and we're diving into the use of AI in supply chain research and applications. Matthias will interview three MIT CTL researchers: Willem Guter, a research engineer at the MIT Intelligent Logistics Systems Lab and the MIT Computational and Visual Education Lab; Dr. Elenna Dugundji, a research scientist who leads the MIT Deep Knowledge for Supply Chain and Logistics Lab; and Dr. Bryan Reimer, founder and co-director of MIT AgeLab's Advanced Vehicle Technology Consortium. They will cover AI and machine learning research in the context of warehouse automation and robotics, demand forecasting, machine learning and AI in sourcing and the procurement function, and the use of AI in advanced vehicle technology.
- Thanks for joining us, Willem. Let's start with something simple. What's your current research focus? What are you working on?
- So in general, I would say my research focuses on the intersection of machine learning and traditional optimization methods to solve business questions. That's questions like: how do we route a truck, what should our fleet look like, and how do we optimize a warehouse using these methods?
- So how does AI come into the picture here? How are you currently using machine learning and AI methods for these types of questions?
- AI can fit in here in lots of different ways. One of the ways I'm most excited about is being able to optimize, almost in real time, things we couldn't before. Let me give an example of that. We're looking at telling AMRs, so that's robots in your warehouse, when to charge in real time. Before, you had to optimize a policy and they would follow that policy, which was the perfect average, you could say: the best most of the time. Now, with machine learning, you can optimize minute by minute and get the best decision every time.
- Okay. So in a way you're not just trying to make good decisions for the entire ensemble of these robots, but you're basically trying to make each of them make smarter decisions individually.
- Exactly.
- Interesting. So why weren't we able to do this before? Why are we using machine learning or AI for this, and why did more traditional methods fail at it?
- Well, there's a combination of reasons. One of those is speed. You might have heard about how much compute it takes to train an AI model, but what you don't usually hear is that after it's trained, it actually runs extremely quickly. Traditional methods don't have this big upfront load, but there is a larger cost each time you have to run them, which makes it difficult in this real-time scenario. The other piece is information. We have more real-time information than ever, and AI lets us use all of it in a way that is useful, whereas earlier you'd really have to fine-tune the information by hand, finding what's useful and what's not and constantly sorting through it.
- And for the models that you are working with right now, how do they identify in the first place what data is out there and what could be useful? I mean, how do they distinguish between what's actually of value versus what might just be noise?
- So that's a great question, and it's one that a lot of people would love to answer.
The models I'm working with right now tend to be what are called deep learning models, where the mechanics behind how they learn aren't fully understood. We can dive into the math behind what's called stochastic gradient descent, but the important piece to take away is that they do sort of learn to ignore this noise, and they can learn to tune back into it over time if you continue training and there's valuable information there.
- I see. You briefly mentioned the use case that you're currently working on with those autonomous mobile robots, or AMRs for short. Can you provide a little bit more detail on that? Who are you working with there, and what's the specific industry application of what you're currently working on?
- So we're working with our partner Mecalux. They build and deploy these AMRs to their clients, and they're looking at both shrinking the required fleet size, so can we do more with fewer AMRs, and optimizing existing fleets, so can I grow my warehouse or get higher throughput in my warehouse with my existing AMR fleet?
- And what do you think is the key metric that they're actually looking at to assess whether all the investment into building a new AI-based approach pays off? Or, differently put, how do we know whether it's worth it?
- Yeah, that's an interesting question. I think there are a few metrics, and it does sort of depend on what kind of warehouse you're running and what you're looking at. Whether you're picking alongside humans, for example, is gonna change your metrics very much. But I think the main thing they're looking at here is warehouse throughput: how much demand can I fulfill from an existing warehouse, and how much faster can I do that using these new techniques?
- So you're trying to make those robots smarter, basically. You already mentioned decisions around charging and the like. What other decisions are you trying to either replace or augment, or maybe both, with AI-based methods?
- So one of the fun things about AI-based methods, and I mentioned this a little bit earlier when I talked about speed, is that there is just a whole range of decisions we could improve here. Some examples of what we're looking at right now are things like where to park; how to drive, so how to get between point A and point B as efficiently as possible; and when to drive, so when to wait, when to go, when to send another AMR even if it's a little further away, to reduce traffic or for whatever other reason. In the future I think we could dive even further, whether that's looking at the actual picking process or maybe optimizing how goods are placed to be put on a truck and shipped out of the warehouse. There's really a whole range of possibilities here.
- And it sounds like what you're working on with Mecalux in this particular project is relatively close to real-world application. It doesn't sound like this is gonna become reality five or six years from now, but potentially much sooner than that. Thinking a little bit further ahead, what kind of breakthroughs do you think AI might make possible, or what kind of breakthroughs are you hoping it'll make possible at some point?
- To me it all comes back to the idea that every decision is thought out. I've been talking about that for the warehouse, but you can try to imagine what it would look like across the rest of your business operations if every single action were taken not according to a precomputed policy, but to a very close AI approximation of what is optimal in that moment.
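To make Willem's AMR charging example a bit more concrete, here is a minimal sketch of what a learned, per-robot charging policy could look like once training is done. Everything here is invented for illustration (the feature names, the network size, and the weights, which would normally come out of an offline training run); the point is only that inference per robot is a handful of arithmetic operations, which is what makes minute-by-minute decisions practical.

```python
# Toy sketch of a trained charging policy for one AMR. The weights are random
# stand-ins for what an expensive offline training phase would produce; the
# cheap part, shown here, is scoring each robot's current state in real time.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 8)), np.zeros(8)   # pretend pre-trained weights
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def charge_probability(battery_level, queue_length, distance_to_charger):
    """Score one robot's state; returns an estimated P(send to charger now)."""
    x = np.array([battery_level, queue_length, distance_to_charger])
    h = np.maximum(0.0, x @ W1 + b1)              # small hidden layer, ReLU
    logit = (h @ W2 + b2).item()
    return 1.0 / (1.0 + np.exp(-logit))           # sigmoid

# One fast inference call per robot, repeated every minute if desired.
for robot_id, state in enumerate([(0.35, 2, 40.0), (0.80, 7, 12.0)]):
    p = charge_probability(*state)
    print(f"AMR {robot_id}: send to charger now? {p > 0.5} (p={p:.2f})")
```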
We could see the whole supply chain sped up to, I think, a surprising degree.
- And do you see the potential for more and more of these currently somewhat isolated systems to become integrated? For instance, right now you're focusing on how to make warehouse robots smarter, and I know that a few other folks are looking into how to use AI for things like inventory management. Do you think we'll ever get to a point where those models become one big nexus of models that work together?
- Absolutely. I've already seen in my research that some models I developed for one piece of warehouse optimization, say when to drive, are surprisingly applicable to other parts, say parking. I think that sort of cross-model collaboration is something we're gonna see expand more and more, and, like I mentioned earlier, the information these models can integrate is going to expand as well. So you're gonna see not only information about your own supply chain: your models are gonna be integrating information about your suppliers and your customers and using all of that to make the best decision for today.
- And the work you're currently focused on is obviously targeted at a relatively narrowly defined use case. How transferable are the knowledge and the technology that you and the team are developing to other contexts that might even fall outside the four walls of a warehouse? Basically, what do we learn from these types of projects that can be generalized to other types of problems?
- Yeah, some of it absolutely is specific to within the four walls. When you're thinking about things like parking, that is probably a warehouse problem. But at the same time you can think: well, I'm parking my trucks outside my warehouse, and at a certain level of abstraction I'm parking my goods in different places. So I think a lot of these overarching models really can transfer well to different problems.
- So far we've talked a lot about the positive outlook on AI and machine learning and the great potential it holds, but at least currently there must be a lot of limitations as well that keep you from doing what you would probably love to do. For a non-technical audience, how would you describe the most critical limitations of the technology today, and what are you doing to overcome them?
- To me there are two main limitations when you're working with this AI technology. The first one is the easiest to overcome, and that's data availability. You typically need huge amounts of data, months if not years of data, to train these systems. However, there are some cutting-edge techniques to get around that, like transfer learning, where you train on one set of data and then only need a little bit of new data to transfer the model, or simulation, where maybe you only have a very small amount of real data but you can generate whole months or years of data virtually. What I foresee as really the bigger issue is the hallucination problem. If you spend enough time talking to ChatGPT, I'm sure at some point or another it has hallucinated to you: it's made up a fact that's not true, or said something you never told it that doesn't check out in the real world. This is a problem that can find its way into other AI systems too. So when they're making what they think is the perfect decision for the moment, sometimes it's really not, and it's very hard to verify that they will always make sensible decisions.
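As a loose illustration of the transfer-learning and simulation workarounds Willem mentions, the sketch below pretrains a classifier on plentiful simulated data and then continues training on a small real sample. The datasets, decision rule, and model choice are invented for the example; this is not the lab's actual pipeline.

```python
# Rough sketch: pretrain on cheap simulated data, then fine-tune on a small
# real dataset whose decision boundary is slightly different. All data here is
# synthetic and the task (a binary "charge now?" label) is made up.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(1)

# 1) Large simulated dataset, e.g. produced by a warehouse simulator.
X_sim = rng.normal(size=(50_000, 3))
y_sim = (X_sim[:, 0] + 0.5 * X_sim[:, 1] < 0.0).astype(int)

# 2) Small, expensive real-world dataset with a shifted boundary.
X_real = rng.normal(size=(200, 3))
y_real = (X_real[:, 0] + 0.8 * X_real[:, 1] < 0.2).astype(int)

clf = SGDClassifier(loss="log_loss", random_state=0)
clf.partial_fit(X_sim, y_sim, classes=[0, 1])    # pretraining pass on simulation
for _ in range(20):                              # fine-tune on the small real set
    clf.partial_fit(X_real, y_real)

print("accuracy on the real data:", clf.score(X_real, y_real))
```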
So I think the biggest roadblock to deploying these in the real world is this verification, checking that they're doing things that make sense.
- That's probably also a major limitation that at least partially explains why we already see many more autonomous systems deployed within, let's say, the four walls of a warehouse or a factory than outside of them. The public discussion has been very much about autonomous driving, delivery robots, and delivery drones, but very little of that has become a reality yet, while the types of robots you're currently working on exist and do their work on a daily basis already, right?
- Yeah, absolutely. Inside a warehouse it's relatively easy to say, hey, if you get into a situation where things look wrong or you're trying to drive too fast, just stop. If you try to do that on the road, people tend to get a little angry.
- Yeah, understandably. This has been super interesting. Is there anything else you think the audience should know about the great work you guys are doing before we close?
- I think you should just follow it and see where we're going. You know, there's exciting stuff. Like I said, we're looking at all these sorts of decisions that right now you're not even thinking about as something you can explore.
- Alright, that's a positive note to end this interview on. Thank you, Willem. This has been very insightful.
- This episode of "Supply Chain Frontiers" is brought to you by the MIT Intelligent Logistics Systems Lab, an MIT CTL lab and a research initiative designed to revolutionize logistics operations through cutting-edge research at the intersection of operations research, artificial intelligence, and machine learning technologies. The lab is made possible by foundational support from and collaboration with Mecalux, a global leader in intralogistics. Learn more at intelligent.mit.edu.
- Thanks for joining us, Elenna. It's a pleasure to have you. Let's start with a simple question. Can you describe your current research focus? What are you and your team working on these days?
- Oh, thank you, Matthias. I'm delighted to be here. My work with the Deep Knowledge Lab for Supply Chain and Logistics focuses primarily on deep learning applications in global trade, and that work has two main pillars. One is the actual movement of goods: through airports, for example special air cargo, and through maritime terminals, so terminal logistics, yard logistics, et cetera, as well as ocean congestion arriving at the terminals. That's one pillar. The other pillar is related to sourcing, strategic sourcing and the procurement function, and spend analytics.
- So how are you currently using AI and machine learning in that particular space?
- With AI and machine learning, we started this trajectory in the procurement function, in spend analytics. That was an interesting challenge presented to us: the company had a very large volume of uncategorized spend every quarter because of new vendors onboarding. In a company with maybe 50,000 suppliers, you can imagine that there is routinely a large number of SKUs, stock keeping units, each quarter which need to be categorized. And if you are not categorizing them, you don't have a good overview of your spend. This work had previously been done at the company by a team of people going through all these line items and manually categorizing them.
Then came the thought: well, there should be a better way. They had outsourced it to a consultancy that built a rule-based model, but that rule-based model quickly broke after two or three quarters. So they were looking for something that could naturally learn. In this case, we applied a machine learning model. We tested several different functions that could do classification and continuously learn, so that every time a new description appears you can properly put it in the right category. This was a huge win, because the people who were previously doing this very tedious work were freed up to do more important strategic work for the company, and it also made their lives a lot easier. That was work we started before generative AI really exploded in the world; we did it with classical machine learning, with natural language processing and tokenization of the text. We subsequently explored with the company what you could do using generative AI to read text, and we did a couple of cases. You might have heard of something called RAG, Retrieval-Augmented Generation. RAG is a way to use a large language model to convert your natural language into specific queries on your own data. You are using a trained model that understands human language, but you are doing the query on your own data. This allows you to not expose your queries to the outside world: whether you're using OpenAI, for example, or any of a number of other large language models, you can run this behind your own company firewall in a secure way. Over the past couple of years we developed a number of different examples; in one case we use RAG for text, in another case we use RAG for SQL queries. An example of a SQL query: say you're preparing for a negotiation with a vendor and you want to know how much you spend with them. Instead of having to click through dashboards, you can simply ask the question and get the answer right back. Now, dashboards are already very helpful, but if you have a complicated query, sometimes the dashboard is not giving you exactly what you want, and that can be very frustrating, particularly if you're under high time pressure to negotiate. It's a huge win for people whose main task is negotiating not to get bogged down clicking through dashboards.
- I'm just curious, for the not so technically versed in the audience, if you could put it in simple terms. You mentioned earlier that existing methods, like the rule-based methods for the SKU classification task, quickly broke. So I think you've already partly answered my next question, which is: why are you using machine learning and AI in the first place, and what makes these types of methods more robust or more capable for these tasks? Simply speaking, why don't they break so quickly?
- This is an excellent question. It has to do with the fact that machine learning is inherently stochastic. In the previous example, what had been done at the company was a deterministic rule-based system. It said: if this, then that; if this, then that. There was no leeway for something slightly different. So if you followed the if-this-then-that rules and ended up somewhere that wasn't exactly what you wanted, the system could either just abort and say it doesn't know, or it could give you something spurious. Having a stochastic model gives you more flexibility in how the splits are made, and it also allows the model to learn as new data arrives every quarter. That's a very important part of the machine learning itself.
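A compressed sketch of the classical approach Elenna describes for spend categorization: tokenize the free-text line-item descriptions, vectorize them, and train a classifier that can keep learning as new vendors and SKUs arrive each quarter. The line items, category names, and model choice below are invented for illustration and are not the company's actual data or system.

```python
# Minimal sketch (invented line items and categories): TF-IDF tokenization of
# spend descriptions plus a classifier that supports incremental updates, so
# each quarter's newly confirmed labels can be folded in without retraining
# from scratch.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import SGDClassifier

descriptions = [
    "pallet wrap stretch film 20um",
    "forklift annual maintenance contract",
    "laptop docking station usb-c",
    "ocean freight shanghai to rotterdam 40ft",
]
categories = ["packaging", "mhe services", "it hardware", "logistics"]

vectorizer = TfidfVectorizer(ngram_range=(1, 2))
X = vectorizer.fit_transform(descriptions)

clf = SGDClassifier(loss="log_loss", random_state=0)
clf.partial_fit(X, categories, classes=sorted(set(categories)))

# Next quarter: a never-before-seen description from a new vendor.
new_item = vectorizer.transform(["stretch film rolls for pallets"])
print(clf.predict(new_item))        # the model's guess for the new line item

# Once a category manager confirms the label, fold it back into the model.
clf.partial_fit(new_item, ["packaging"])
```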
Now for the generative case, it's also very interesting that generative processing of data gives you that same kind of flexibility in how you ask. In the past we did have chatbots, but I don't know how many people out there have had the frustrating experience where you ask a question and the chatbot doesn't answer, so you try to ask again, and again, and in a different way, and another different way, and the chatbot is just not understanding what you're trying to say, and finally you get frustrated and say, oh, this chatbot is stupid, or you hang up if you're on a call. But again, the stochasticity in the generative approach allows you to custom-tailor questions. You don't need the model to have seen the exact question before. The model understands: if it has seen something like this, there's probably a 90% or 95% probability that this is what you mean, even if it hasn't seen it before. So the same principle from classical machine learning applies in this kind of chatbot, which can flexibly answer questions it has not seen before because of the stochasticity. It's extremely powerful.
- And that's already a good segue to what's next, right? You've talked about what you're currently working on with a bunch of research partners, but if you look a few years down the line, what do you think, or what do you hope, are the breakthroughs that AI will make possible in the, let's say, medium-term future?
- Oh, it's developing so fast, it's a difficult question to answer. You asked what breakthroughs AI makes possible. In the generative case, it makes it possible to ask questions on the fly and get answers on the fly. We talked about an example of asking a question and getting an answer: your question is translated into a SQL query, and that SQL query acts on a database. This is something we did a couple of years ago already, but you can extend it in a number of interesting ways. One is that instead of asking a question of a SQL database, you can ask a question of a graph database. And this is something I find very exciting, because it allows you to connect very disparate systems in the procurement function. For example, take a resilience case where a supplier is not available: that supplier might have been hit by a flood, a hurricane, or a fire, or there's some geopolitical risk, or a bankruptcy, or any number of reasons. Whether you have 50,000 suppliers or only a handful of them, if that supplier is not available, it's a problem. And if you have a large number of SKUs, you might want to understand how many different finished goods a single supplier affects. Now, typically the way this data is structured, you have a finished good, and on your bill of materials you see that this finished good is made of this component and that component, which in turn are made of this raw material and that raw material, and you can trace it from the finished good down through different tiers of suppliers. If you're not creating a graph, it's very difficult to understand the full impact of one supplier going out and how it affects maybe two or three different finished goods. By making this a graph, you can see that in an instant, and you can also query it using the same methods I was just describing.
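A toy version of the bill-of-materials graph Elenna describes, using networkx with invented part and supplier names: once the relationships are stored as a graph, "which finished goods does this supplier touch?" becomes a single reachability query rather than a manual trawl through spreadsheets. A production system would more likely use a graph database queried through a language model, as she outlines; this only shows the underlying idea.

```python
# Toy bill-of-materials graph (all names invented). Edges point from a supplier
# to the item it provides, and from each component up to the item it goes into,
# so the finished goods exposed to a supplier are simply the reachable nodes.
import networkx as nx

g = nx.DiGraph()
g.add_edges_from([
    ("Supplier_A", "resin_x"),
    ("Supplier_B", "resin_x"),          # second source for the same raw material
    ("Supplier_C", "chip_y"),
    ("resin_x", "housing_1"),
    ("chip_y", "controller_1"),
    ("housing_1", "finished_good_1"),
    ("controller_1", "finished_good_1"),
    ("controller_1", "finished_good_2"),
])
finished_goods = {"finished_good_1", "finished_good_2"}

def exposed_finished_goods(supplier):
    """All finished goods reachable from this supplier in the BOM graph."""
    return nx.descendants(g, supplier) & finished_goods

print(exposed_finished_goods("Supplier_C"))
# -> {'finished_good_1', 'finished_good_2'}
```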
You can also connect not only your bill of materials to your supplier database, but also to your spend database, so you can immediately see: this supplier goes out, these are the finished goods that are affected, and this is how much total spend is affected. If you are managing a large, complex supply chain, that is a huge win, because right now, looking only at the bill of materials, it's a lot of work to go and manually check everything against your supplier database and your spend database. When you can connect all of this together in a large graph and query the graph, it's extremely powerful.
- That sounds fascinating. I imagine a system like this would have been very useful, for instance during COVID, when things suddenly went missing and nobody really knew why. The network effects you were just describing were probably the reason, because there were no existing methods able to capture these dependencies, I suppose?
- That's correct, but what you're saying now actually touches on another part of my research in global trade, which has to do with the movements of air cargo and ocean freight and how that picture connects. It's a little bit of a segue, but we are also using machine learning in that area. In particular, once a purchase order is issued in, say, Asia, that product is shipped on an ocean liner, it arrives at a terminal, there are terminal logistics that need to be processed, and from there it maybe goes onward by road or by rail. At every step of this process you also need equipment: if it's going onward by road, you need chassis available, and if it's going onward by rail, you need intermodal equipment and rail cars available. This is related to another stream of work we're doing: understanding the complete supply chain, starting from the beneficial cargo owner to the ocean carrier to the terminal, to the equipment providers, motor carriers, and rail carriers, and also warehousing space. During COVID, what largely happened was that people were wondering why a particular good they wanted was not on the shelf. It's a whole chain of breakdowns: if you have ships that can't get to shore, if you have chassis not where they need to be, if you have rail cars not where they need to be, if you have a rail yard being used as a warehouse because the warehousing space in the region is not available, the whole system breaks down. That was a very unique situation, but we can use machine learning here in a number of different ways. We talked a little bit about generative models and a bit about classification models. There are also other kinds of machine learning models that use clustering, and models that make predictions for forecasting. In our work on import global trade, we are using clustering to understand ship signals. We use something called DBSCAN to identify from AIS pings where congestion is happening. You can look at that in real time, you can look at it historically, and you can also build a predictive model to understand: if something is congested now, historically how long did it take to break that congestion, or is this something we've not seen before, like the Baltimore bridge collapse, where suddenly there's congestion?
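A bare-bones sketch of the clustering step Elenna describes: running DBSCAN over AIS position reports to flag where ships are bunching up outside a port. The coordinates and parameter values below are made up for illustration; real AIS feeds are far larger and noisier.

```python
# Bare-bones sketch (made-up coordinates): cluster AIS pings with DBSCAN to
# spot a knot of ships waiting outside a port. With the haversine metric,
# inputs are (lat, lon) in radians and eps is a distance in radians.
import numpy as np
from sklearn.cluster import DBSCAN

pings_deg = np.array([
    [40.46, -73.80], [40.47, -73.81], [40.46, -73.82], [40.48, -73.80],  # anchorage
    [39.90, -73.00], [38.50, -74.50], [37.00, -75.20],                   # vessels in transit
])

earth_radius_km = 6371.0
eps_km = 10.0   # pings within roughly 10 km of each other count as one cluster

labels = DBSCAN(
    eps=eps_km / earth_radius_km,     # convert km to radians for haversine
    min_samples=3,
    metric="haversine",
).fit_predict(np.radians(pings_deg))

print(labels)   # e.g. [0 0 0 0 -1 -1 -1]: one congestion cluster, three lone ships
```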
So this is an interesting interplay: clustering to detect something, both in the present and in the past, and then feeding that into a prediction model to make a forecast of how long this is going to take. In fact, we teach this in our supply chain master's program. You can make a single time series prediction, but you can also make a graph prediction, and that's very important in the maritime sector, because if you are congested in New York and ocean carriers are sailing a particular rotation where the next port they hit is Norfolk, Virginia, then Norfolk, Virginia is likely also going to be delayed. So understanding this graph of how the ports work together for commodities is also a very nice application of the different kinds of machine learning models we use.
- Very briefly before we close: if you look at your current work, what are maybe the one or two biggest limitations, if you wish, of these types of methods that keep you from doing what you would love to do right now?
- There's nothing that actually keeps us from doing what we love to do. It's important to have good tracking data in order to find where this congestion is happening, but that is what is advancing so rapidly now, and it's exciting to see this data becoming available to feed these models. So I see a bright future ahead for the work of tracking cargo in global trade, whether by air or by ocean. As for the other pillar, sourcing and procurement, there you have very heterogeneous systems of data that need to be put together. So there is also a world to win in the procurement function with the different kinds of data we suddenly have visibility into, in terms of being able to read and classify heterogeneous text mixed with tables. For example, a contract contains text and tables; you want to extract both and organize them well, so that you can query the text for terms and the tables for, say, a price rate. Very rapid advancements are being made there as well. It's really exciting to see that the things you would wish for, like, oh, I'd love to be able to read this legal contract and separate the text from the tables, are possible: you can put that into a database, query it, and connect it to other databases. So it's all about organizing your data so that you can run these models on it, and right now I just see advancements happening very quickly. But there is one very important part, particularly in the procurement function: the category managers need to be on board. In the example we tested at the company, upgrading their existing rule-based system, this was very seamless for them, because they had something that wasn't working and now suddenly it is working. They can just ask a question and get the answer instead of having to click through dashboards and try to piece things together. In all of these cases, it makes the working lives of people more efficient and saves them tedious work that is not their main function. Our category managers should be procuring, they should be negotiating; they shouldn't get dragged down in clicking through dashboards. But the tools we provide should be embedded in their work process. That's essential. If you are just providing an extra tool that is not embedded in their work process, it's useless.
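The port-rotation effect Elenna mentioned a moment ago (congestion in New York showing up next in Norfolk) can be pictured with a very small toy model. The ports, rotation order, and carry-over factor below are invented; a real graph model would learn these propagation weights from historical sailing data rather than assume them.

```python
# Toy illustration (invented rotation and numbers): a delay at one port call
# propagates, damped, to later calls on the same vessel rotation.
rotation = ["New York", "Norfolk", "Savannah", "Miami"]
carryover = 0.6                       # share of delay a vessel cannot make up between calls
delay_hours = {port: 0.0 for port in rotation}
delay_hours["New York"] = 30.0        # congestion adds 30 hours at the first call

for upstream, downstream in zip(rotation, rotation[1:]):
    delay_hours[downstream] += carryover * delay_hours[upstream]

for port, d in delay_hours.items():
    print(f"{port:<10} expected knock-on delay: {d:5.1f} h")
```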
That's part of our success also: when we're developing something, we meet with the people whose lives we'd like to make easier, right from day one, to understand exactly what their needs are, what their workflow is, and how this can facilitate it.
- That's a strong message to end this on. Thank you so much for joining us, Elenna. I look forward to seeing more of your research. Thank you so much for joining us, Bryan, it's a pleasure to have you. So let's start this off easily. Can you explain to me, but more importantly to our audience, what your current research focus is?
- One of my big research focuses is really centered around a book that I just published, "How to Make AI Useful," which is very much focused on the human's role in complex sociotechnical systems like AI. We often think about AI in the context of its capabilities to do things for us, automation, but the real value of AI, to me and in the work that I'm doing, is its ability to amplify human expertise: improving the decision systems around us so that we can operate better when imperfect information presents itself in the real world. Automation and machine intelligence are really good at black-and-white decisions, but when disruption happens, human expertise merged with that advanced decision making is really the area of focus.
- Okay. Can you give us an example of how you're actually using AI in that particular way in your work, in your research at MIT and elsewhere?
- I am increasingly using more and more GenAI as part of my research, especially in writing. A lot of folks, especially in academia, are saying it's got to be all human-written, but GenAI tools are a tool that doesn't sleep. At two o'clock in the morning, instead of calling a colleague to help revise and edit something, or, more importantly, often just to take a hundred words out of a thousand-word essay, GenAI tools are incredibly useful at providing suggestions on how to merge and change text and how to summarize information. But what's really key is that all AI is capable of doing today is providing me decision support, providing me insight and direction that I can use in honing either the messages I'm writing or how I'm thinking about framing things in front of an executive audience. We're also using AI increasingly to augment data flows through our systems, both with more traditional machine learning approaches and, increasingly, to synthesize information that once required an enormous amount of human labor to go through.
- That's a very interesting take. I was especially intrigued by your first point, that you're increasingly using GenAI as a writing assistant, or as someone who can provide you opinions that you, as the human decision maker, can then weigh. Don't you see a risk there, though, that the incentive is high to give more and more of the actual decision-making authority to the GenAI? Don't you see a risk that people take it too lightly and let a GenAI tool, for instance, write an entire paper or a book without the necessary checks and balances?
- I think that is a huge fear, and ultimately a reality. Automation complacency, humans' tendency to get lazy with the advent of automation, is a real thing. We hear increasingly in the business world about "work slop": in essence, I can generate 10 pages of slop that satisfy a deliverable. We desperately need to be thinking about creating, within organizations, zero tolerance for work slop.
Unless we set the social standard here high, where AI, and especially GenAI today, is used as a copilot, not as an autopilot, a good portion of the populace is going to get lazy with these systems. I think we're unfortunately going to see that occur, and in that world what we will see is the best and brightest learning to use AI as an accelerator of their capabilities. You're going to see a number of people using this to accelerate, to move faster, to hone, to do impressive things that were not possible in the time they had available. But I think there is a good segment of the working population that's going to get awful lazy, awful quick, and that needs to not be tolerated in the workplace or in the educational setting. I want to know how I can use these tools to be better. You know, I had a project the other day where I knew there was a document out there and I knew roughly what it said: some federal guidance on automated vehicles from about eight years ago. I went to Google to try to find that document and spent about an hour failing. So I said, wait a second, why the heck didn't I go over to ChatGPT and try? Within two queries and less than three minutes I had the document I needed at hand, because the way I was framing the question was slightly different from the way the document was written, and ChatGPT is capable of making that leap. So the ability to harness these tools in the middle of the night, at a moment's notice, reshapes what work looks like. It allows people to become more creative, and it allows the folks we would rely on on a daily basis to help us with other creative things.
- Obviously a lot of our research here at the center, but also a lot of our audience, is faced more or less daily with very numbers-driven decision problems, very analytical work. Now let's say I had a large supply chain dataset and actually knew nothing about it: I know what the dataset contains, but I don't know whether there's anything interesting in there or not. Would you still use a tool like ChatGPT or others to extract the interesting insights from such a dataset, the way you would, for instance, ask a research assistant today: hey, can you go through this dataset, analyze it backwards and forwards, and tell me what's interesting or unexpected about it? Are we already at a point where we can trust an AI model to do that, or is that still one level of complexity and analytical capability too far, at least today?
- That's a great question, and I would absolutely want to see what the GenAI model tells me, but all it is telling me is an opinion that I have no ground truth for. So if I had the resources, I would have my research assistant interrogate the data right, left, up, and down, the same way I would have a year or two ago. At the same time, I would be asking ChatGPT to provide me an interpretation. Then, given there are two humans in the loop and one machine that can't defend itself quite as well yet, I would take those two pieces of information, much like in the social sciences world, where we used to do something called inter-coder reliability all the time: I have two different independent codings of this data from which I want to try to extract a true meaning.
I would take those two interpretations and infuse my expert judgment on what I think is right and what I think is wrong, because the machine is not perfect and the human is not perfect. We are both imperfect systems looking to provide answers based upon all the information we've learned in the past. In a lot of cases today, the machine may have biases of its own, like any statistical model, or may be making assumptions about the data that we as readers don't fully appreciate either. So I think it is very much about using these opinions and fusing my own judgment on top of them. Now, given my trust in machine intelligence is not fully evolved yet, I might take my synthesis from the machine and go back to my analyst and say, hmm, do you see something similar in the data? And the hope would be: yeah, that's an interesting point the machine generated, that's something I didn't pick up in the patterning of the data. That's exactly what we're hoping the combination of machine intelligence and human intelligence can bring us.
- You've done some amazing work over the last couple of years in various fields, including autonomous driving, so your bar is relatively high when it comes to expectations of future AI capabilities. But compared to what's possible today, what are the breakthroughs you're hoping AI will make possible in the next couple of years?
- I think AI and automation are gonna solve immense things over the next century, but I don't think automation is going to have the big impact we would all love in the next few years and all of a sudden transformatively shift the workforce. That's gonna take a little more time. And it's not that I fear unemployment all over the place: I think we are gonna reallocate human expertise throughout the system, and I think we're gonna create many jobs as we re-shift the workforce around. If I look forward over the next few years, what I think AI enables is something I was working on 25 years ago in the early stages of my doctoral work: decision support at scale. It provides an opinion that I may decide is worth taking, or that I may dismiss because of my expertise, but at least I have some decision support information coming along in real time that I wouldn't have had two, three, four, five years ago, because of AI's capability to summarize data. Is that information perfect? Probably not. Is it good enough to act on? Hopefully more often than not. I think it will enhance our decisions in the real world while leaving most of the important decisions, for the foreseeable future, in the hands of human decision makers.
- That makes sense. And if you look at the tools that are available to us right now, the methods that are currently being used, what do you see as currently the biggest hurdles, limitations, or maybe even risks that come with these methods?
- Yeah, that's a good question. GenAI based upon large language models in particular has been dominant in the headlines for a few years now. I think the future over the next several years is really small language models and domain-specific AI: tailoring applications of AI to the specific needs of a smaller problem, because in that context we can be much more confident in the answers it's giving us. So, you know, there were a lot of critiques when OpenAI released its latest model that it is not performing as well as the last one.
And I think we're gonna continue to see that every time a new model is released, partly because we are now using an ensemble approach of dividing the different queries up and sending 'em to different places based upon the context, trying to become more efficient with our resources and energy, and using smaller models to solve the problems they are much more strategically trained for. The whole area of domain-specific AI is exploding. This, you know, reduces the compute needs and allows you to tailor the models specifically to the types of questions you're looking to solve today.
- So we should start building a supply chain foundation model, if I hear you right?
- A supply chain foundation model, and maybe one that looks at the movement of energy differently than the movement of bananas, because even "the supply chain" may be just too big. That's where a lot of organizations are focusing, and that's where all of this background investment in large-scale computing may be overshooting what our near- and mid-term needs are. You see the big tech companies being asked by Wall Street: what is the return on investment in continuing to build, build, build? I think that's a good question, because quite frankly we love investing in technology for technology's sake, but is it providing the decision tools that are needed for the problems we have today? I wonder if we're overshooting on the promise of AI as opposed to grounding ourselves in: what is this going to do to help me, and how am I gonna drive revenue and operational efficiencies using these tools? I think we're gonna come back to that really quickly.
- Thank you, Bryan. This has been a fascinating conversation, and I'm sure we could go on forever, but for now I'll look forward all the more to reading your book.
- That wraps up this episode of "Supply Chain Frontiers." A big thank you to Matthias Winkenbach, Elenna Dugundji, Willem Guter, and Bryan Reimer for sharing their expertise and insights into the current state of AI in supply chain research and applications. "Supply Chain Frontiers" is recorded on the MIT campus in Cambridge, Massachusetts. Our sound editors are Dave Lashinsky and Danielle Simpson at David Benjamin Sound, and our audio engineer today is Kurt Schneider of MIT Audio Visual Services. Our producer is myself, Mackenzie Berry. Be sure to check out previous episodes of "Supply Chain Frontiers" at ctl.mit.edu/podcast or search for us on your preferred podcast platform. I'm Mackenzie Berry, thanks for listening, and we'll catch you next time on "Supply Chain Frontiers."