June 01, 2017
News

New research focuses on developing systems for automated vehicles to perceive and identify objects in their environment and understand social interactions in traffic.

The MIT AgeLab, part of the MIT Center for Transportation & Logistics, will build and analyze new deep-learning-based perception and motion-planning methods for automated vehicles as part of the launch of the Toyota Collaborative Safety Research Center (CSRC) Next research initiative. These projects are the next phase of an ongoing, five-year relationship with Toyota.

The first phase of projects with Toyota CSRC was led by Bryan Reimer, a research scientist at the MIT AgeLab who manages a multidisciplinary team of researchers and students focused on understanding how drivers respond to the increasing complexity of the modern operating environment. He and his team studied the demands of modern in-vehicle voice interfaces and found that “voice” interfaces draw drivers’ eyes away from the road to a greater degree than expected, and that the demands of these interfaces need to be considered when optimizing the timing of such systems. Reimer’s work eventually contributed to the redesign of the instrumentation of the current Toyota Corolla and the forthcoming 2018 Toyota Camry. (Read more in the 2017 Toyota CSRC report.)

Reimer and his team are building and developing hardware and software prototypes that can be integrated into cars to detect the state of the driver and the external environment. Their prototypes are designed to work both with cars that have minimal levels of autonomy and with cars that are fully autonomous.

Lex Fridman, a computer scientist on Reimer’s team, leads a group of seven computer engineers working on computer vision, deep learning, and planning algorithms for semiautonomous vehicles. Deep learning is applied both to understanding the world around the car and to understanding human behavior inside it.

Fridman says, “The vehicle must first gain awareness of all entities in the driving scene, including pedestrians, cyclists, cars, traffic signals, and road markings. We use a learning-based approach for this perception task and also for the subsequent task of planning a safe trajectory around those entities.”
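As a concrete illustration of that perception step, the short sketch below runs an off-the-shelf detector over a single camera frame and keeps only the driving-relevant entities Fridman names. It uses torchvision’s COCO-pretrained Faster R-CNN purely as a stand-in; the team’s actual models, class lists, and confidence thresholds are not described here, and COCO’s “bicycle” class serves as a rough proxy for cyclists (COCO has no road-marking class).

```python
# Illustrative only: an off-the-shelf COCO detector standing in for the
# team's perception models, which are not described in the article.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# COCO category IDs for the driving-scene entities named in the quote.
# "bicycle" is a rough proxy for cyclists.
DRIVING_CLASSES = {1: "pedestrian", 2: "cyclist", 3: "car", 10: "traffic light"}

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
model.eval()

def detect_scene_entities(image_path, score_threshold=0.6):
    """Return (label, score, box) for driving-relevant detections in one frame."""
    image = to_tensor(Image.open(image_path).convert("RGB"))
    with torch.no_grad():
        output = model([image])[0]
    detections = []
    for label, score, box in zip(output["labels"], output["scores"], output["boxes"]):
        if score >= score_threshold and int(label) in DRIVING_CLASSES:
            detections.append((DRIVING_CLASSES[int(label)], float(score), box.tolist()))
    return detections
```

In a full pipeline, per-frame detections like these would feed the subsequent planning task Fridman describes: computing a safe trajectory around the detected entities.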

Fridman and his team, now firmly into the next phase of the project with Toyota CSRC, set up a stationary camera at a busy intersection on the MIT campus to automatically detect the micro-movements of pedestrians as they decide whether to cross the street. Using deep learning and computer vision methods, Fridman’s system automatically converts the raw video footage into millisecond-level estimates of each pedestrian’s body position. The program has analyzed the movements of the heads, arms, feet, and full bodies of more than 100,000 pedestrians.
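The conversion from raw footage to body-position estimates can be sketched in the same spirit. The snippet below uses torchvision’s COCO-pretrained Keypoint R-CNN as a stand-in pose model, attaching a millisecond timestamp (derived from the frame index and an assumed frame rate, both illustrative) to the 17 estimated body keypoints of each detected pedestrian.

```python
# A hedged sketch of the pose-estimation step: raw video frames in,
# timestamped per-pedestrian body keypoints out. Keypoint R-CNN stands in
# for whatever pose model the team actually uses.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor

model = torchvision.models.detection.keypointrcnn_resnet50_fpn(pretrained=True)
model.eval()

def pedestrian_poses(frame, frame_index, fps=30.0, score_threshold=0.7):
    """Estimate body keypoints for each pedestrian in one video frame.

    `frame` is a PIL image or NumPy array. Returns a list of dicts: a
    millisecond timestamp plus 17 COCO keypoints (nose, eyes, ears,
    shoulders, elbows, wrists, hips, knees, ankles) as (x, y) pairs.
    """
    with torch.no_grad():
        output = model([to_tensor(frame)])[0]
    timestamp_ms = 1000.0 * frame_index / fps
    poses = []
    for score, keypoints in zip(output["scores"], output["keypoints"]):
        if score >= score_threshold:
            poses.append({"t_ms": timestamp_ms,
                          "keypoints": keypoints[:, :2].tolist()})
    return poses
```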

Fridman’s research also focuses on the world inside the car. He says, “Just as interesting and complex is the integration of data inside the car to improve our understanding of automated systems and enhance their capability to support the driver. This includes everything about the driver’s face, head position, emotion, drowsiness, attentiveness, and body language.” With Toyota and other partners, the team is exploring the use of cameras positioned to capture the driver, extracting all of those driver-state factors from the raw video and turning them into usable data that promises to support future automotive-industry needs.
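One of those driver-state factors, drowsiness, is often approximated in the computer vision literature with the eye-aspect-ratio (EAR) heuristic, and a minimal sketch of that idea follows. This is not the team’s method: the EAR rule, the thresholds, and the assumption that an upstream facial-landmark detector supplies six landmarks per eye from the driver-facing camera are all illustrative.

```python
# A minimal sketch of one driver-state signal: drowsiness approximated via
# the eye-aspect-ratio (EAR) heuristic. Thresholds and the upstream landmark
# detector are assumptions, not the team's actual approach.
from math import dist

def eye_aspect_ratio(eye):
    """EAR from six (x, y) eye landmarks ordered around the eye contour.
    The ratio falls toward zero as the eye closes."""
    vertical = dist(eye[1], eye[5]) + dist(eye[2], eye[4])
    horizontal = dist(eye[0], eye[3])
    return vertical / (2.0 * horizontal)

def drowsiness_flags(eye_landmarks_per_frame, ear_threshold=0.2,
                     min_closed_frames=15):
    """Flag frames where the eyes have stayed closed long enough to suggest
    drowsiness (a PERCLOS-style rule over consecutive low-EAR frames).

    `eye_landmarks_per_frame` is an iterable of (left_eye, right_eye) pairs,
    each a list of six (x, y) points from some facial-landmark detector
    (hypothetical upstream component, not shown here).
    """
    closed_run, flags = 0, []
    for left, right in eye_landmarks_per_frame:
        ear = (eye_aspect_ratio(left) + eye_aspect_ratio(right)) / 2.0
        closed_run = closed_run + 1 if ear < ear_threshold else 0
        flags.append(closed_run >= min_closed_frames)
    return flags
```

The appeal of a rule like this is that it turns a continuous video signal into a simple, inspectable per-frame flag; a production system would more likely learn such driver-state signals directly from data, as the article suggests.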

“What’s innovative about Lex’s work is that it uses state-of-the-art methods in computer science and artificial intelligence to study the complexities of human intent grounded in large-scale real-world data,” says Reimer.

Chuck Gulash, director of Toyota CSRC, says, “Our research leverages the AgeLab’s expertise in computer vision, state detection, naturalistic data collection and deep learning to focus on the challenges and opportunities of autonomous vehicle technologies.” When asked how the research collaboration would affect the future of automotive technology, Gulash continued, “It will contribute to better computer-based perception of a vehicle’s environment as well as social interactions with other road users. What is unique about the AgeLab’s work is that it brings together advanced computer science with a human centered perspective on driver behavior. As with all CSRC projects, output from the AgeLab’s effort will be openly shared with industry, academia and government to contribute to future safe mobility.”

Joe Coughlin, director of the MIT AgeLab, says, “AgeLab is using all of these technologies to do two things: understand human behavior in the driving context, and to design future systems that result in greater safety and expansion of mobility options for all ages.”