Robotics and Intelligent Learning Lab

Recent Projects


The Robotics and Intelligent Learning Lab focuses on AI-driven neuromechanical simulation and modeling, robotics, control, and computational biomechanics. We strive to integrate biomechanics, neuroscience, and machine learning to tackle challenges in biomedical engineering and robotics.


Privileged learning-based, zero-shot transfer for bipedal robot control

This work proposes a privileged learning-based control method for bipedal robots. A teacher policy is first trained with access to privileged simulator information; a student policy then learns to imitate the teacher’s actions from onboard observations alone, accomplishing assigned tasks in uncertain environments without reference motions or external sensors. This approach significantly reduces dependence on predefined trajectories, minimizes sensor usage, and eliminates the need for external perception or terrain models.
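
As a rough illustration of the teacher-student distillation step (the network sizes, observation dimensions, and training details below are placeholders, not the actual architecture), the frozen teacher maps privileged simulator state to actions, and the student is regressed onto those actions from onboard observations alone:

import torch
import torch.nn as nn

# Placeholder networks: the teacher was trained with privileged state
# (e.g., ground-truth terrain and contact information); the student only
# receives onboard proprioception and must imitate the teacher's actions.
teacher = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 12))
student = nn.Sequential(nn.Linear(32, 128), nn.ReLU(), nn.Linear(128, 12))
optimizer = torch.optim.Adam(student.parameters(), lr=3e-4)

def distillation_step(privileged_obs, onboard_obs):
    """One supervised imitation update of the student policy."""
    with torch.no_grad():
        target_action = teacher(privileged_obs)  # teacher stays frozen
    loss = nn.functional.mse_loss(student(onboard_obs), target_action)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example batch: 256 samples of privileged (64-d) and onboard (32-d) observations.
loss = distillation_step(torch.randn(256, 64), torch.randn(256, 32))

At deployment only the student runs, which is what allows zero-shot transfer without external sensing.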


Personalized human-robot interaction simulation for wearable robot assistance



Rapidly achieving an optimal assistance profile tailored to each wearer’s body characteristics without extensive human-involved testing remains a significant challenge. This project leverages a personalized human-exoskeleton interaction simulation to rapidly generate wearer-specific exoskeleton prototypes with optimal assistance profiles, without any human-in-the-loop testing or parameter tuning.
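
One way such an optimization can be set up, sketched here with a hypothetical simulated_metabolic_cost stand-in for the personalized simulation: parameterize the assistance torque over the gait cycle with a few profile parameters (peak magnitude, peak timing, pulse width) and search those parameters against the simulated wearer rather than the real one.

import numpy as np
from scipy.optimize import minimize

def assistance_torque(phase, peak, peak_time, width):
    """Gaussian assistance torque pulse over gait phase in [0, 1]."""
    return peak * np.exp(-0.5 * ((phase - peak_time) / width) ** 2)

def simulated_metabolic_cost(params):
    """Hypothetical stand-in: in the real pipeline this value would come
    from the closed-loop, subject-specific human-exoskeleton simulation."""
    peak, peak_time, width = params
    phase = np.linspace(0.0, 1.0, 200)
    torque = assistance_torque(phase, peak, peak_time, width)
    # Toy surrogate objective so the sketch runs end to end.
    assistance_benefit = torque.mean()
    effort_penalty = 0.02 * peak ** 2
    return 4.0 - assistance_benefit + effort_penalty

result = minimize(simulated_metabolic_cost, x0=[20.0, 0.55, 0.10],
                  method="Nelder-Mead",
                  bounds=[(0.0, 60.0), (0.3, 0.8), (0.05, 0.3)])
print(result.x)  # wearer-specific (peak torque, peak timing, pulse width)

Because every objective evaluation is a simulation run rather than a human trial, the search over profile parameters can be repeated per wearer at essentially no experimental cost.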



AI computing-based, human-robot neuromechanical simulation


Paper

This research focuses on a specific subset of wearable robots: lower-limb robotic exoskeletons. We model human muscle response, musculoskeletal dynamics, and human-robot interaction using a Hill-type muscle model, Lagrangian equations, and a linear bushing interaction model, respectively. The closed-loop human-robot interaction simulation reduces the need for extensive real human testing and provides guidance for wearable robot design and control. This method also demonstrates remarkable scalability across a wide variety of assistive devices and can cater to both able-bodied and mobility-impaired individuals.
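
The two interaction-model ingredients are compact enough to sketch. The following is an illustrative simplification (the curve shapes and coefficients are generic textbook-style values, not the paper’s fitted parameters): a Hill-type muscle force combining activation with force-length and force-velocity scaling plus a passive elastic term, and a linear bushing relating the human-robot interface force to relative displacement and velocity.

import numpy as np

def hill_muscle_force(activation, l_norm, v_norm, f_max):
    """Simplified Hill-type muscle: force = f_max * (a * f_l * f_v + f_p).
    l_norm = fiber length / optimal length; v_norm = shortening velocity /
    max shortening velocity (positive = shortening)."""
    f_l = np.exp(-(((l_norm - 1.0) / 0.45) ** 2))      # active force-length
    if v_norm >= 0.0:                                   # concentric branch
        f_v = max(0.0, (1.0 - v_norm) / (1.0 + 3.0 * v_norm))
    else:                                               # eccentric branch
        f_v = min(1.5, 1.0 - 0.5 * v_norm)
    f_p = 2.0 * max(0.0, l_norm - 1.0) ** 2            # passive elastic term
    return f_max * (activation * f_l * f_v + f_p)

def bushing_force(rel_disp, rel_vel, stiffness, damping):
    """Linear bushing coupling a human segment to the exoskeleton cuff:
    restoring force proportional to relative displacement and velocity."""
    return -stiffness * rel_disp - damping * rel_vel

# Example: a muscle at optimal fiber length, moderately shortening.
print(hill_muscle_force(activation=0.6, l_norm=1.0, v_norm=0.2, f_max=3000.0))
# Example: thigh-cuff interaction force for a 5 mm slip at 0.1 m/s.
print(bushing_force(rel_disp=0.005, rel_vel=0.1, stiffness=2.0e4, damping=100.0))

The joint-level musculoskeletal dynamics then close the loop: muscle and bushing forces enter the Lagrangian equations of motion, and the simulated wearer responds to the robot’s assistance at every time step.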


Simulation to reality: learning multi-gait wearable robot control for mobility assistance



Paper

This is my postdoc work at NCSU and NJIT, conducted in the Biomechatronics and Intelligent Robotics Lab at NCSU. A controller learned completely in simulation can work in reality! Previous simulation-based methods typically performed poorly when deployed in the real world: simulations struggle to replicate human responses to robot assistance, and the variability of human kinematics further complicates the translation. This long-standing challenge is known as the simulation-to-reality (sim2real) gap. In this study, we present a novel dynamics-aware, data-driven approach that leverages reinforcement learning in conjunction with a musculoskeletal model to learn an exoskeleton controller entirely in simulation. The method is designed to learn the human-robot interaction during training and to transfer the trained controllers directly from simulation to reality.
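
Zero-shot transfer of this kind typically relies on randomizing the simulated dynamics during training; the sketch below shows that general pattern with illustrative parameter ranges (assumptions for exposition, not the study’s actual randomization scheme). Each training episode samples perturbed human and robot dynamics so the learned controller cannot overfit a single simulated wearer.

import numpy as np

# Illustrative randomization ranges (assumptions, not the paper's values):
# perturbing musculoskeletal and hardware parameters every episode forces
# the policy to succeed across plausible wearers and device variation.
RANDOMIZATION = {
    "body_mass_scale":  (0.8, 1.2),    # wearer mass variability
    "muscle_strength":  (0.7, 1.1),    # max isometric force scaling
    "gait_period_s":    (0.9, 1.4),    # cadence/kinematic variability
    "motor_delay_s":    (0.00, 0.03),  # actuation latency
    "cuff_stiffness":   (0.5, 2.0),    # human-robot interface compliance
}

def sample_episode_dynamics(rng):
    """Draw one set of randomized dynamics parameters for an episode."""
    return {k: rng.uniform(lo, hi) for k, (lo, hi) in RANDOMIZATION.items()}

rng = np.random.default_rng(0)
for episode in range(3):
    params = sample_episode_dynamics(rng)
    # env.reset(dynamics=params)  # hypothetical simulator hook
    print(params)

A policy that performs well across this whole distribution of simulated wearers is far more likely to tolerate the one real wearer it meets at deployment, which is the intuition behind closing the sim2real gap without hardware fine-tuning.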