The RoboCup@Home league aims to develop service and assistive robot technology with high relevance for future personal domestic applications. It is the largest international annual competition for autonomous service robots and is part of the RoboCup initiative. A set of benchmark tests is used to evaluate the robots’ abilities and performance in a realistic, non-standardized home environment. The focus lies on, but is not limited to, the following domains: Human-Robot Interaction and Cooperation, Navigation and Mapping in dynamic environments, Computer Vision and Object Recognition under natural light conditions, Object Manipulation, Adaptive Behaviors, Behavior Integration, Ambient Intelligence, Standardization, and System Integration. The league is co-located with the RoboCup Symposium.
The Robotics and Artificial Intelligence Lab (RAIL) of Tongji University was founded in 1992. Team TJArk of RAIL was founded in 2004 and competed in RoboCup world championships from 2006 to 2019, winning seven consecutive championships in the China RoboCup SPL and taking third place in the RoboCup 2018 SPL.
In December 2018, we founded an energetic new team, TJArk@Home, to compete in the RoboCup@Home League. The main goal of TJArk@Home is to explore how well robots can serve people in daily life at an acceptable cost. Guided by this vision, we conduct extensive research on our robot platform, covering the areas described below.
In April 2019, TJArk@Home participated in the China RoboCup@Home League for the first time and earned first place in the SSPL (Social Standard Platform League). During the competition, we demonstrated various abilities, including autonomous navigation, following an operator to a given position, speech recognition and response, and human detection with feature summarization.
Our robot requires an environmental model (a map) to support tasks such as autonomous navigation. However, prior maps are often unavailable in dynamic home scenarios, making SLAM essential. A visual SLAM module processes stereo camera data, assisted by limited laser data and odometry. Our VSLAM algorithm is based on RTAB-Map, modified for our robot’s sensors: odometry and laser data provide the initial pose estimate, while stereo data drives appearance-based loop closure to optimize both local and global poses.
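As an illustration of this pipeline, the following minimal 2D pose-graph sketch (hypothetical code, not our actual RTAB-Map integration) shows how odometry edges, dead-reckoned into an initial guess, are corrected by a single loop-closure constraint; the edge values and function names are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

def wrap(a):
    """Wrap an angle into [-pi, pi)."""
    return (a + np.pi) % (2 * np.pi) - np.pi

def relative(xi, xj):
    """Pose of xj expressed in xi's frame: (x, y, theta)."""
    dx, dy = xj[0] - xi[0], xj[1] - xi[1]
    c, s = np.cos(xi[2]), np.sin(xi[2])
    return np.array([c * dx + s * dy, -s * dx + c * dy, wrap(xj[2] - xi[2])])

def residuals(flat, edges):
    poses = flat.reshape(-1, 3)
    res = [poses[0]]                      # anchor pose 0 at the origin
    for i, j, meas in edges:
        err = relative(poses[i], poses[j]) - meas
        err[2] = wrap(err[2])
        res.append(err)
    return np.concatenate(res)

# Four drifting odometry edges tracing a square, plus one loop-closure
# edge stating that pose 4 should coincide with pose 0.
edges = [(i, i + 1, np.array([1.0, 0.0, np.pi / 2 + 0.05])) for i in range(4)]
edges.append((4, 0, np.array([0.0, 0.0, 0.0])))

# Dead-reckon the odometry edges into the initial guess (as on the robot,
# where odometry provides the initial pose estimate).
init = np.zeros((5, 3))
for i, j, meas in edges[:4]:
    c, s = np.cos(init[i, 2]), np.sin(init[i, 2])
    init[j] = [init[i, 0] + c * meas[0] - s * meas[1],
               init[i, 1] + s * meas[0] + c * meas[1],
               init[i, 2] + meas[2]]

sol = least_squares(residuals, init.ravel(), args=(edges,))
print(sol.x.reshape(-1, 3).round(3))      # drift-corrected trajectory
```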
The VSLAM system generates a 3D point cloud, which we project into a 2D grid map for navigation. With the map and robot parameters, the DWA (Dynamic Window Approach) algorithm plans a safe path. Vision, laser, and sonar data are fused for real-time obstacle avoidance in dynamic environments. Motion commands are then sent through robot APIs to enable fully autonomous navigation.
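The sketch below illustrates the core of DWA under assumed limits and cost weights (the velocity and acceleration constants, the 0.3 m clearance threshold, and the score weights are placeholders, not our tuned parameters): candidate velocity pairs inside the dynamic window are rolled out, colliding rollouts are discarded, and the best-scoring command is returned.

```python
import numpy as np

V_MAX, W_MAX = 0.5, 1.0        # linear / angular velocity limits (m/s, rad/s)
A_V, A_W = 0.5, 1.5            # acceleration limits (m/s^2, rad/s^2)
DT, HORIZON = 0.1, 2.0         # control period and rollout horizon (s)

def rollout(x, y, th, v, w):
    """Forward-simulate a constant (v, w) command; return the 2D path."""
    pts = []
    for _ in range(int(HORIZON / DT)):
        th += w * DT
        x += v * np.cos(th) * DT
        y += v * np.sin(th) * DT
        pts.append((x, y))
    return np.array(pts)

def dwa(state, v0, w0, goal, obstacles):
    """Score every (v, w) in the dynamic window; return the best command."""
    best, best_cmd = -np.inf, (0.0, 0.0)
    for v in np.linspace(max(0.0, v0 - A_V * DT), min(V_MAX, v0 + A_V * DT), 5):
        for w in np.linspace(max(-W_MAX, w0 - A_W * DT), min(W_MAX, w0 + A_W * DT), 11):
            path = rollout(*state, v, w)
            clearance = min(np.hypot(path[:, 0] - ox, path[:, 1] - oy).min()
                            for ox, oy in obstacles)
            if clearance < 0.3:        # rollout passes too close to an obstacle
                continue
            goal_cost = -np.hypot(path[-1, 0] - goal[0], path[-1, 1] - goal[1])
            score = goal_cost + 0.2 * clearance + 0.1 * v   # assumed weights
            if score > best:
                best, best_cmd = score, (v, w)
    return best_cmd

# e.g., current pose (0, 0, 0) and current command (0.3 m/s, 0.0 rad/s)
print(dwa((0.0, 0.0, 0.0), 0.3, 0.0, goal=(3.0, 1.0), obstacles=[(1.5, 0.2)]))
```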
Various vision algorithms support the perception of objects and humans. Our robot can detect and recognize different people, remember their names and features, and track them during interactive tasks. For object-level reasoning, we designed a YOLO-based detection framework for real-time indoor object recognition across 50 categories. Training data is collected from large public datasets, and adjacent-frame matching is used to improve robustness.
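A minimal sketch of the adjacent-frame matching idea, assuming detections arrive as (class, box, score) triples (the exact interface in our framework differs): a detection is kept only if a same-class box with sufficient IoU appeared in the previous frame, which suppresses one-frame false positives.

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def confirm(detections, previous, iou_thresh=0.3):
    """Keep detections whose class and location match the previous frame."""
    confirmed = []
    for cls, box, score in detections:
        if any(cls == p_cls and iou(box, p_box) >= iou_thresh
               for p_cls, p_box, _ in previous):
            confirmed.append((cls, box, score))
    return confirmed

prev = [("cup", (100, 80, 140, 130), 0.9)]
curr = [("cup", (104, 82, 144, 133), 0.8), ("tv", (0, 0, 60, 40), 0.6)]
print(confirm(curr, prev))   # only the temporally consistent cup survives
```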
As the robot’s built-in camera has a limited field of view, we widen its visual perception by stitching images captured from multiple viewpoints and then applying fusion algorithms to reduce lighting and geometric inconsistencies. This yields more complete visual information for high-level processing.
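For illustration, OpenCV’s high-level stitcher can produce such a panorama from frames captured at several head orientations. This is only a sketch of the idea (the file names are hypothetical); our pipeline adds its own photometric and geometric fusion steps on top.

```python
import cv2

# hypothetical file names; on the robot these are frames from several head poses
frames = [cv2.imread(p) for p in ("view_left.jpg", "view_center.jpg", "view_right.jpg")]

stitcher = cv2.Stitcher_create()       # feature matching, warping, seam blending
status, panorama = stitcher.stitch(frames)
if status == 0:                        # 0 corresponds to Stitcher::OK
    cv2.imwrite("panorama.jpg", panorama)
else:
    print(f"stitching failed with status code {status}")
```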
For human pose estimation, we employ OpenPose to achieve robust, real-time multi-person body-part detection using a non-parametric representation.
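Downstream task logic then operates on the returned keypoints. A hedged example, assuming OpenPose’s BODY_25 output layout (per person, 25 keypoints of x, y, confidence; index 0 is the nose, 4 and 7 the wrists): detecting a waving person by checking for a confidently detected wrist above the nose.

```python
import numpy as np

def waving_people(keypoints, conf_thresh=0.3):
    """Return indices of people raising a wrist above their nose."""
    waving = []
    for idx, person in enumerate(keypoints):      # person: (25, 3) array
        nose = person[0]
        for wrist in (person[4], person[7]):      # right wrist, left wrist
            # image y grows downward, so "above" means a smaller y value
            if (wrist[2] > conf_thresh and nose[2] > conf_thresh
                    and wrist[1] < nose[1]):
                waving.append(idx)
                break
    return waving

# toy values: one person with the right wrist raised above the nose
people = np.zeros((1, 25, 3))
people[0, 0] = [320, 200, 0.9]   # nose
people[0, 4] = [360, 150, 0.8]   # right wrist
print(waving_people(people))     # -> [0]
```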
We developed a Python-based framework that integrates ROS components. Team members can easily plug custom modules (e.g., motion, detection, navigation) into the framework, and during competitions the robot composes these modules to execute complex tasks. The framework is still under development and has not yet been open-sourced.
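The sketch below shows the plug-in pattern the framework follows; all class and function names are hypothetical, not the framework’s actual API. Modules register themselves by subclassing a common base, and a task runs as a sequence of module calls.

```python
class Module:
    """Base class: every skill module registers itself by subclassing."""
    registry = {}

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        Module.registry[cls.__name__.lower()] = cls

    def run(self, **kwargs):
        raise NotImplementedError

class Navigation(Module):
    def run(self, goal):
        # in the real module this would send the goal to the ROS
        # navigation stack (e.g., via a move_base action client)
        print(f"navigating to {goal}")

class Detection(Module):
    def run(self, target):
        print(f"looking for a {target}")

def execute(plan):
    """Run a task as a sequence of (module name, kwargs) steps."""
    for name, kwargs in plan:
        Module.registry[name]().run(**kwargs)

# e.g., a fragment of a GPSR-style task
execute([("navigation", {"goal": (1.0, 2.0)}),
         ("detection", {"target": "cup"})])
```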
Our research primarily focuses on the navigation domain, with particular emphasis on Vision-and-Language Navigation (VLN). To date, our VLN team has published 15 papers in top-tier journals and conferences, released several open-source implementations, and been granted or published more than 10 patents. Our approaches have achieved state-of-the-art performance on benchmark datasets, and related work was a finalist for the Best Cognitive Robotics Paper Award at IROS 2024. Relevant papers and code are listed in the Publications section. More recently, we have been extending our research toward vision-based manipulation and robotic grasping, exploring the integration of visual perception and action.
This qualification video demonstrates our robot’s performance in competition and real-world testing scenarios. It includes a live demonstration of the GPSR task at the 2025 RoboCup OPL (China), followed by an experimental showcase of mobile manipulation with a robotic arm.
Video chapters:
0:00 – 2:40 | GPSR task demonstration at 2025 RoboCup OPL (China)
2:40 – 4:49 | Mobile manipulation and arm grasping test in experimental environment
Full video link: https://www.youtube.com/watch?v=Mq5DDaTxdFg
Advisors:
| Name | Position |
|---|---|
| Chen Qijun | Professor, Department of Control Science and Engineering, Tongji University |
| Liu Chengju | Professor, Department of Control Science and Engineering, Tongji University |
| Yao Chenpeng | Assistant Professor, Department of Control Science and Engineering, Tongji University |
| Yan Qingqing | Postdoctoral Researcher, Department of Control Science and Engineering, Tongji University |
| Wang Liuyi | Postdoctoral Researcher, Department of Control Science and Engineering, Tongji University |
Team members:
| Name | Position |
|---|---|
| Yang Xu | Leader, Graduate student, Department of Control Science and Engineering, Tongji University |
| Zhu Qijia | Graduate student, Department of Control Science and Engineering, Tongji University |
| Qian Liang | Graduate student, Department of Control Science and Engineering, Tongji University |
| Chen Yuzhe | Graduate student, Department of Control Science and Engineering, Tongji University |
| Sheng Kai | Senior student, Artificial Intelligence, Tongji University |
| Zhu Siya | Senior student, Mathematics, Tongji University |
| Wang Xiangyi | Junior student, Artificial Intelligence, Tongji University |
| Qi Chengkai | Junior student, Artificial Intelligence, Tongji University |
| Li Yingxuan | Sophomore student, Robotics, Tongji University |
Previous participation and rankings in RoboCup and local tournaments (China Region).
Learn more about our lab and the competitions: