New technologies and approaches are paving the way to more efficient use of AI systems.
By Ken Cottrill, editorial director, MIT Center for Transportation & Logistics, published in Supply Chain Management Review May/June 2025
The advent of AI as a widely available business tool has given rise to numerous applications that are proliferating at a dizzying pace. As we strive to stay current with the latest applications, it’s essential not to overlook the ongoing efforts to enhance existing ones.
The MIT Center for Transportation & Logistics’ 2025 Crossroads conference, Technology Advances & the Impact on SCM, held on March 17, 2025, provided a glimpse of these efforts and of how they enhance the effectiveness of AI and machine learning.
Inspired models
AI systems known as large language models (LLMs) use vast datasets and machine learning to process and manipulate human language. LLM applications, such as powering chatbots and answering questions, have grown impressively in recent years. However, the models are harder to deploy at the edge, in machines such as robots and self-driving vehicles. As LLMs have grown in size, so have their memory and computational demands, making it challenging to run them in mobile applications that lack the requisite capacity and cloud connectivity.
A new type of deep learning architecture called liquid neural networks (LNNs) could overcome this limitation. LNNs bring the information and language processing power of AI to the physical world in which robots and autonomous vehicles operate, explained Daniela Rus, director of the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), one of MIT’s iconic research labs. A worm inspired the development of this new type of neural network. C. elegans is a tiny roundworm with a brain that accomplishes a great deal with minimal neural resources. Like human gray matter, the worm’s brain is composed of cells called neurons connected by synapses. However, the creature’s brain performs all the tasks it needs to survive using a mere 302 neurons and 8,000 connections.
By comparison, the human brain has around 100 billion neurons and 100 trillion connections. Research on the worm’s streamlined neural network architecture inspired Rus and her team to develop LNNs. These economy-size AI models are easier and more cost-effective to build than LLMs. Being simpler, their decision-making is easier to understand, a crucial advantage in applications where machines interact with humans.
Importantly, LNNs learn on the job and are extremely nimble and adaptable, especially when applied in dynamic, unpredictable environments. Hence, they can run on the relatively small computers found in robots and other mobile machines deployed widely in supply chains.
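To make the idea concrete, here is a minimal numerical sketch of the liquid time-constant dynamics that underlie LNNs, in which each neuron's effective time constant varies with its input rather than staying fixed. The weights, time constants, and input signal below are arbitrary placeholders chosen for illustration, not values from Rus's models:

```python
import numpy as np

def ltc_step(x, I, W_in, tau, A, dt=0.01):
    """One Euler step of a liquid time-constant (LTC) neuron layer.

    The effective decay rate (1/tau + f) depends on the current input
    and state, which is what lets the network adapt its dynamics on
    the fly instead of behaving like a fixed-time-constant RNN.
    """
    f = np.tanh(W_in @ I + x)             # nonlinearity over input and state
    dxdt = -(1.0 / tau + f) * x + f * A   # input-dependent ("liquid") time constant
    return x + dt * dxdt

# Tiny demo: 4 hidden neurons driven by a 2-dimensional input signal.
rng = np.random.default_rng(0)
x = np.zeros(4)                  # hidden state
W_in = rng.normal(size=(4, 2))   # input weights (illustrative values)
tau = np.ones(4)                 # base time constants
A = np.ones(4)                   # equilibrium/bias term
for t in np.linspace(0.0, 1.0, 100):
    I = np.array([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
    x = ltc_step(x, I, W_in, tau, A)
```

Because the state is only a handful of numbers updated by a cheap ODE step, dynamics like these can run on the small onboard computers that robots and drones carry.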
They also deliver key performance advantages, said Rus. For example, in tests conducted by her team, LNNs outperformed established neural networks in enabling drones to detect and locate objects placed in different settings. LNNs also perform well in situations involving image recognition. An example is enabling self-driving vehicles to recognize groups of pedestrians, which can be challenging for algorithms owing to the amorphous shape of such groups. According to Rus, LNNs are adept at staying focused on the road ahead and reacting to unexpected road hazards.
Her team has installed the technology in an autonomous ground vehicle operating in the container port of Singapore. The vehicle navigates busy lanes between stacks of containers smoothly and parks in spaces within a five-centimeter margin of accuracy.
More applications of LNNs are in development. Rus is a co-founder of Liquid AI, an MIT spin-off. The enterprise has launched AI products based on these pioneering models for the financial services, biotech, and consumer electronics industries, which are claimed to deliver improved performance with a significantly smaller memory footprint.
Defter robots
The emergence of AI-powered humanoid robots is indicative of the rapid pace at which the design of these machines is advancing. Online videos of robots doing backflips and other acrobatics give the impression that the machines are becoming as agile and dexterous as humans.
However, as Pulkit Agrawal, associate professor at CSAIL, pointed out at the Crossroads conference, this is not the case. The robots that perform acrobatically do so in closed, controlled environments; expose them to the outside world and they falter. For example, a robot can be designed to fetch a sponge to clean up a spill, but it probably lacks the dexterity to wipe up the mess.
The challenge, said Agrawal, is developing general-purpose robots that can match human adroitness in most everyday environments.
A significant obstacle to achieving this goal is the method used to train today’s robots. LLMs can learn by downloading vast volumes of data from the web, but this is not an option for robots, Agrawal said. The conventional method is imitation learning, where a human demonstrates the actions required to complete specific tasks or teleoperates a robot. Staging such demonstrations to generate the data needed for teaching the robot is costly and labor-intensive. Additionally, because a relatively small amount of task-specific data is used, robots trained to carry out a particular task may struggle if the environment or task changes.
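A minimal sketch of imitation learning in its simplest form, behavioral cloning, shows both the method and its fragility: the policy is fit only to the states a demonstrator happened to visit. The synthetic "demonstrations" and the linear policy below are illustrative assumptions, not CSAIL's actual training pipeline:

```python
import numpy as np

# Behavioral cloning: fit a policy to (observation, action) pairs
# recorded from a human demonstrator. Here the demonstrations are
# synthetic, generated from a hidden "expert" mapping plus noise.
rng = np.random.default_rng(1)
obs = rng.normal(size=(500, 6))          # 500 demonstrated states, 6 features
true_policy = rng.normal(size=(6, 2))    # hidden expert mapping to 2-D actions
actions = obs @ true_policy + 0.01 * rng.normal(size=(500, 2))

# Supervised least-squares fit: the robot's policy imitates the demos.
W, *_ = np.linalg.lstsq(obs, actions, rcond=None)

# The cloned policy only covers states like those in the demos --
# which is why small, task-specific datasets generalize poorly when
# the task or environment changes.
new_obs = rng.normal(size=(1, 6))
predicted_action = new_obs @ W
```

The costly part in practice is not the fit but collecting `obs` and `actions`: every row requires a human to demonstrate or teleoperate the task.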
CSAIL is developing more efficient robot training programs by utilizing large volumes of data from various sources, including computer simulations and camera images. Researchers can unify the multiple streams and use machine learning to process the data.
Enabling robots to replicate the precise movements of human hands is particularly challenging, especially when handling and manipulating complex objects. The new training approach offers promising solutions to this problem. For example, researchers are pairing a silicone gel with cameras to sense and record the depressions a human hand makes when manipulating the gel, and then mapping these intricate movements to robots.
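As a rough illustration of the gel-and-camera idea, the sketch below compares a reference image of the undeformed gel with an image taken during contact, recovering a map of where the hand pressed in. The synthetic images, threshold, and circular "press" are invented for this example; real tactile sensors recover far richer geometry than a binary mask:

```python
import numpy as np

def contact_map(reference, deformed, threshold=0.1):
    """Boolean map of pixels where the gel image changed (i.e., deformed)."""
    diff = np.abs(deformed.astype(float) - reference.astype(float))
    return diff > threshold

# Synthetic example: a flat gel image with a circular press in the middle.
ref = np.full((64, 64), 0.5)          # undeformed gel, uniform brightness
img = ref.copy()
yy, xx = np.mgrid[:64, :64]
press = (yy - 32) ** 2 + (xx - 32) ** 2 < 10 ** 2
img[press] += 0.3                     # deformation changes pixel intensity

mask = contact_map(ref, img)          # True only inside the pressed region
```

A sequence of such contact maps over time is one of the data streams that, alongside simulations and ordinary camera footage, can be fused to teach a robot fine manipulation.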
In the future, companies may develop simulations for training robots to perform various tasks, or these programs could be included with the machines supplied by vendors.
Ideas machine
If, over the next two years, an employee’s use of generative predictive AI does not account for one-third to one-half of his or her professional day, that person is on an exit path or working for an enterprise that will become a zombie organization, maintained Crossroads speaker Michael Schrage, visiting scholar at the MIT Initiative on the Digital Economy.
However, to get the most out of this revolutionary technology, people must apply critical thinking. That means utilizing AI to challenge assumptions, scrutinize evidence, and stress-test conclusions, to make more precise, sound, and context-specific decisions.
Schrage emphasized that AI is an options engine rather than an answer machine: a powerful tool for assessing tradeoffs and a new-generation sounding board for ideas. AI is also extremely fast. He described a session in which executives interrogated an AI model and obtained insights in four minutes that would have taken four weeks using traditional research, writing, and editing methods.
Yet too many people accept AI’s outputs without applying rigorous critical thinking, he argued. This must change if AI’s full potential is to be realized.
To avoid this mistake, users must ask the right questions. The better the prompt, the better the information on which to base decisions. Prompts sharpened with well-chosen adjectives and adverbs are more likely to elicit precise, useful responses. And users should not be afraid to demand more meaningful answers from a model if its output seems too vague or not incisive enough, advised Schrage.
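That push-back can be mechanical as well as manual. Below is a rough sketch of such a refinement loop; `ask_model` is a hypothetical stand-in for any LLM API call, and the vagueness markers and follow-up wording are illustrative inventions, not Schrage's:

```python
# Markers that suggest a non-committal, low-value answer (illustrative).
VAGUE_MARKERS = ("it depends", "various factors", "in general")

def ask_model(prompt: str) -> str:
    """Hypothetical LLM call; replace with a real API client."""
    return "In general, outcomes depend on various factors."

def demand_specifics(prompt: str, max_rounds: int = 3) -> str:
    """Re-prompt until the answer stops hedging, up to max_rounds times."""
    answer = ask_model(prompt)
    for _ in range(max_rounds):
        if not any(m in answer.lower() for m in VAGUE_MARKERS):
            break
        # Sharpen the prompt with concrete constraints and qualifiers.
        prompt += (" Be specific: cite quantities, name the tradeoffs, "
                   "and give one concrete supply chain example.")
        answer = ask_model(prompt)
    return answer

result = demand_specifics("How should we balance automation and augmentation?")
```

The loop encodes the habit Schrage recommends: treat a vague first answer as the start of the interrogation, not the end of it.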
Some organizations conduct “promptathons” (similar to hackathons, where programmers and other tech professionals brainstorm to generate ideas). These exercises are relatively inexpensive to stage and can be highly effective in competitive analysis and in developing cross-functional ideas. Promptathons can focus on specific challenges, such as the tradeoff between automation and augmentation. Organizations can also create repositories of prompts to facilitate their use of AI.
Other approaches involve employing LLMs to analyze recorded conversations, such as Zoom transcripts, to extract insights and options, and utilizing AI models to stimulate debate and discussion.
Schrage urged organizations not to take AI at its word, but to use its outputs to foster ideation and challenge assumptions about how supply chains are designed and managed. We are only at the beginning of this revolution, he said. By the end of 2026, the potential of LLMs is expected to have increased by a factor of five compared to their current capacity.