ResWORK Seminar by Distinguished Professors Dinesh Manocha & Ming Lin

Date of Event: 22 Jan 2026
Talk 1: Robot Navigation in the Wild

Speaker: Professor Dinesh Manocha
Distinguished University Professor, Paul Chrisman Iribe Professor of Computer Science and Electrical and Computer Engineering, Department of Computer Science, University of Maryland at College Park

The first part of the seminar, presented by Professor Dinesh Manocha, highlighted ongoing research in robotics aimed at enabling systems to operate effectively in complex, unstructured environments such as homes, dense traffic, outdoor terrains, and public spaces. The presentation showcased robust planning and navigation technologies that leverage advances in computer vision, sensor technologies, machine learning, and motion planning algorithms. New methods were introduced that integrate multi-modal observations from RGB cameras, 3D LiDAR, and robot odometry for scene perception, combined with deep reinforcement learning for reliable planning. This approach enables robots to compute dynamically feasible and spatially aware velocities while navigating among mobile obstacles and over uneven terrains. These technologies have been successfully integrated into wheeled robots, home robots, and legged platforms, demonstrating strong performance in crowded indoor scenes, domestic environments, and dense outdoor terrains. The talk concluded with a discussion of the benefits of these innovations for social navigation and their potential impact on future human-robot interactions.


Talk 2: Learning from User Feedback for Constructing Multimodal AI Agents for Education

Speaker: Professor Ming C. Lin
Barry Mersky & Capital One E-Nnovate Endowed Professor
Distinguished University Professor, Department of Computer Science, University of Maryland at College Park

The second part of the seminar explored the design and development of multimodal AI agents that integrate visual understanding of images and videos with text-based reasoning and human feedback from survey data. The session highlighted how these agents can respond to diverse external stimuli—such as public health campaign messaging—in ways that closely mirror human interaction. Discussions emphasized the potential applications of multimodal AI in decision-making, automation, and human-AI collaboration. Particular attention was given to the use of these agents in AR/VR environments, where they can enhance education and interactive training through immersive, human-like engagement.

The seminar brought together researchers, educators, and practitioners with shared interests in multimodal AI and human-AI interaction, fostering dialogue on future directions and practical applications in educational and training contexts.