We explore cutting-edge technologies at the intersection of control theory, computer vision, and artificial intelligence to enable robust autonomous systems.
Our research in robust control focuses on developing advanced control strategies for nonlinear systems subject to uncertainties and disturbances. We use fuzzy systems, linear matrix inequality (LMI)-based control design, and neural adaptive control techniques to guarantee stability and performance.
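As a concrete illustration of the machinery behind LMI-based design (not any specific method of ours), the sketch below computes a Lyapunov certificate for a linear system: solving A&#x2E;T P + P A = &minus;Q and checking P &gt; 0 is the equality version of the stability LMI A&#x2E;T P + P A &lt; 0. The function name and the example matrix are illustrative assumptions.

```python
import numpy as np

def lyapunov_certificate(A, Q=None):
    """Solve A^T P + P A = -Q for P via Kronecker vectorization.

    A positive-definite solution P certifies that x' = A x is
    asymptotically stable -- the equality counterpart of the
    stability LMI A^T P + P A < 0 used in LMI-based design.
    """
    n = A.shape[0]
    if Q is None:
        Q = np.eye(n)
    I = np.eye(n)
    # vec(A^T P + P A) = (I (x) A^T + A^T (x) I) vec(P), column-major vec
    K = np.kron(I, A.T) + np.kron(A.T, I)
    P = np.linalg.solve(K, -Q.flatten(order="F")).reshape((n, n), order="F")
    P = 0.5 * (P + P.T)  # symmetrize against round-off
    stable = bool(np.all(np.linalg.eigvalsh(P) > 0))
    return P, stable

# Example: a stable second-order system (eigenvalues -1 and -2).
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
P, stable = lyapunov_certificate(A)
print(stable)  # True: P > 0 certifies asymptotic stability
```

In practice, inequality LMIs with additional design constraints are handled by semidefinite-programming solvers rather than this direct linear solve; the example only shows the underlying certificate idea.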
We investigate reinforcement learning algorithms for robotic systems, focusing on sample-efficient learning, safe exploration, and real-world deployment. Our work spans theoretical foundations through practical implementations on physical robots.
Our computer vision research leverages deep learning to solve challenging perception problems for robotics. We develop algorithms for depth estimation, image restoration, and 3D pose estimation that are robust to real-world conditions.
We apply our theoretical research to real-world robotic systems. Our lab develops autonomous capabilities for a range of robotic platforms, including legged robots, mobile manipulators, and unmanned aerial vehicles.
Our research philosophy centers on bridging the gap between theoretical advances and practical implementation. We believe that robust AI systems must be validated on real hardware and in real-world conditions. This "Robust Physical AI" approach ensures that our research contributions are both scientifically rigorous and practically relevant.
Mathematically sound foundations for all our methods
Real hardware implementation and testing
Active engagement with academic and industry partners
Discover our latest research findings and contributions to the field.
View Publications