Professor Song's research focuses on robot spatial intelligence, which requires perceiving spatial information from multimodal sensory data and making decisions based upon it. Spatial intelligence is a fundamental ability that allows robots to perceive their environment and make motion plans to physically interact with it. His research interests include algorithms for cross-modality perception and learning, robust navigation, scene representation and understanding, and tightly coupled perception and planning. All of these are built on spatial and motion uncertainty analyses, drawn from either explicit geometric/stochastic model-based approaches or data-driven machine learning (ML)-based approaches.