Title: Exploration of 3D Robot Vision Across Multiple Image Modalities
Abstract: 3D robot vision across multiple image modalities has emerged as a powerful tool for enhancing perception and scene understanding in diverse applications. By integrating data from visible images, thermal imaging, LiDAR, and ground-penetrating radar (GPR), multi-modal 3D vision enables robust and comprehensive environmental mapping, object detection, and depth estimation. Each modality provides complementary information: visible images offer rich textures, thermal imaging enhances detection in low-light conditions, and LiDAR captures precise geometric structures. These capabilities drive advancements in robotics, autonomous driving, mobile computing, and plant science, enabling autonomous navigation, structural assessment, and environmental monitoring in complex and dynamic environments. This work explores the challenges and opportunities in fusing multi-modal data for 3D vision, addressing sensor alignment, data fusion strategies, and application-specific optimizations to improve real-world performance across domains.
Bio: Dr. Lu is an Associate Professor at SUNY Binghamton. Before joining Binghamton, he was an Assistant Professor at the University of Georgia and Rochester Institute of Technology (RIT), a Research Scientist in autonomous driving at Ford, and a Research Engineer at Disney ESPN Advanced Technology Group. He has served as Principal Investigator (PI) for projects funded by NSF, USDA, DoD, the Georgia Department of Agriculture, Ford, GM, the Georgia Peanut Foundation, Qualcomm, Tencent, Mackinac, and others. His contributions have been recognized with several prestigious awards, including the NSF CAREER Award, USDA New Investigator Award, Aharon Katzir Young Investigator Award from the International Neural Network Society (INNS), Ford URP Award, Tencent Rhino-Bird Young Faculty Award, Frank A. Pehrson Award, and Erasmus Mundus Scholarship. He serves as the Chair of the IEEE Atlanta Signal Processing Society Chapter and the Co-Chair of the IEEE Robotics & Automation Society (RAS) Technical Committee on Agricultural Robotics and Automation. His research focuses on robotic perception, computer vision, and deep/machine learning.
Zoom Link: