Pronunciation: Lihe → lee-huh; Li → lee. You can just call me Lee.
Hi there, thanks for visiting my website! I am an M.Sc. student (Sep. 2023 - present) at the School of
Artificial Intelligence at Nanjing
University, where I am fortunate to be advised by Prof. Yang Yu and affiliated with the LAMDA Group led by Prof.
Zhi-Hua Zhou. Specifically, I am a member of the LAMDA-RL Group, which focuses on reinforcement learning research.
Prior to that, I obtained my bachelor's degree from the same school in June 2023.
Unity makes strength. Currently, my research interest is Reinforcement Learning (RL), especially Multi-agent
Reinforcement Learning (MARL) that enables agents to coordinate efficiently, robustly, and safely with other agents🤖 and even humans👨👩👧👦.
Please feel free to drop me an email for any form of communication or collaboration!
Email:  lilh [at] lamda [dot] nju [dot] edu [dot] cn
LLM-Assisted Semantically Diverse Teammate Generation for Efficient Multi-agent Coordination
Lihe Li,
Lei Yuan,
Pengsen Liu,
Tao Jiang,
Yang Yu
The 42nd International Conference on Machine Learning (ICML), 2025
pdf / bibtex
@inproceedings{semdiv,
title = {LLM-Assisted Semantically Diverse Teammate Generation for Efficient Multi-agent Coordination},
author = {Lihe Li and Lei Yuan and Pengsen Liu and Tao Jiang and Yang Yu},
booktitle = {Proceedings of the Forty-second International Conference on Machine Learning},
year = {2025}
}
Instead of discovering novel teammates only at the policy level,
we utilize LLMs to propose novel coordination behaviors described in natural language,
and then transform them into teammate policies. This enhances teammate diversity and interpretability,
ultimately yielding agents with language comprehension ability and stronger collaboration skills.
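To give a feel for this pipeline, here is a minimal Python sketch of the loop described above. The stub LLM, the prompt, and the hash-based policy grounding are all hypothetical illustrations under my own naming, not the paper's actual implementation.

import random
from dataclasses import dataclass

@dataclass
class Teammate:
    description: str  # coordination behavior in natural language
    policy: dict      # state -> action lookup standing in for a trained policy

def propose_behavior(llm, seen):
    # Ask the LLM for a behavior semantically different from those already seen.
    prompt = ("In one sentence, propose a teammate coordination behavior that "
              f"differs semantically from all of: {seen}")
    return llm(prompt)

def behavior_to_policy(description, states):
    # Stand-in for grounding a language description into an executable policy.
    rng = random.Random(hash(description) % (2 ** 32))
    return {s: rng.choice(["cooperate", "explore", "defend"]) for s in states}

def stub_llm(prompt):
    # Placeholder for a real LLM call.
    return f"behavior-{random.randrange(10 ** 6)}"

states = list(range(5))
pool = []
for _ in range(3):  # iteratively grow a semantically diverse teammate pool
    desc = propose_behavior(stub_llm, [t.description for t in pool])
    pool.append(Teammate(desc, behavior_to_policy(desc, states)))
    # ...the ego agent would then train against the updated pool (omitted)
print([t.description for t in pool])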
Continual Multi-Objective Reinforcement Learning via Reward Model Rehearsal
Lihe Li, Ruotong Chen, Ziqian Zhang, Zhichao Wu, Yi-Chen Li, Cong Guan, Yang Yu, Lei Yuan
The 33rd International Joint Conference on Artificial Intelligence (IJCAI), 2024
@inproceedings{core3,
title = {Continual Multi-Objective Reinforcement Learning via Reward Model Rehearsal},
author = {Lihe Li and Ruotong Chen and Ziqian Zhang and Zhichao Wu and Yi-Chen Li and Cong Guan and Yang Yu and Lei Yuan},
booktitle = {Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence},
pages = {4434--4442},
year = {2024}
}
We study the problem of multi-objective reinforcement learning (MORL) with continually evolving
learning objectives, and propose CORe3 to enable the MORL agent to rapidly learn new objectives
while avoiding catastrophic forgetting of old objectives that lack reward signals.
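As a rough illustration of the rehearsal idea, the sketch below keeps a learned reward model for an old objective and uses it to relabel fresh transitions once the environment stops emitting that objective's rewards. The least-squares reward model and all names are assumptions for illustration, not CORe3's code.

import numpy as np

class RewardModel:
    # Tiny least-squares stand-in for a learned per-objective reward model.
    def fit(self, feats, rewards):
        self.w, *_ = np.linalg.lstsq(feats, rewards, rcond=None)
        return self
    def predict(self, feats):
        return feats @ self.w

rng = np.random.default_rng(0)
feats = rng.normal(size=(256, 4))  # (state, action) features from old-objective data
old_rewards = feats @ np.array([1.0, -2.0, 0.5, 0.0]) + 0.01 * rng.normal(size=256)
old_model = RewardModel().fit(feats, old_rewards)

# Later, while learning a new objective whose environment no longer emits the
# old reward, relabel fresh transitions with the stored model so the old
# objective can be rehearsed alongside the new one, mitigating forgetting.
new_feats = rng.normal(size=(32, 4))
rehearsed = old_model.predict(new_feats)  # synthetic old-objective rewards
multi_objective_reward = np.stack([rehearsed, np.zeros(32)], axis=1)  # [old, new]
print(multi_objective_reward.shape)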
Learning to Coordinate with Anyone
Lei Yuan, Lihe Li, Ziqian Zhang, Feng Chen, Tianyi Zhang, Cong Guan, Yang Yu, Zhi-Hua Zhou
The 5th International Conference on Distributed Artificial Intelligence (DAI), 2023
@inproceedings{macop,
title = {Learning to Coordinate with Anyone},
author = {Lei Yuan and Lihe Li and Ziqian Zhang and Feng Chen and Tianyi Zhang and Cong Guan and Yang Yu and Zhi-Hua Zhou},
booktitle = {Proceedings of the Fifth International Conference on Distributed Artificial Intelligence},
year = {2023}
}
We propose Multi-agent Compatible Policy Learning (MACOP), which adopts an agent-centered
teammate generation process that gradually and efficiently generates diverse teammates covering the
teammate policy space, and uses continual learning to train the ego agents to coordinate with them,
acquiring strong coordination ability.
Multi-agent Continual Coordination via Progressive Task Contextualization
Lei Yuan, Lihe Li, Ziqian Zhang, Fuxiang Zhang, Cong Guan, Yang Yu
IEEE Transactions on Neural Networks and Learning Systems (TNNLS), 36(4): 6326-6340, 2025
@article{macpro,
title = {Multi-agent Continual Coordination via Progressive Task Contextualization},
author = {Lei Yuan and Lihe Li and Ziqian Zhang and Fuxiang Zhang and Cong Guan and Yang Yu},
journal = {IEEE Transactions on Neural Networks and Learning Systems},
volume = {36},
number = {4},
pages = {6326--6340},
year = {2025}
}
We formulate the continual coordination framework and propose MACPro to enable agents to
continually coordinate with each other as the training task and the multi-agent
system itself change over time.
A Survey of Progress on Cooperative Multi-agent Reinforcement Learning in Open Environment
Lei Yuan, Ziqian Zhang, Lihe Li, Cong Guan, Yang Yu
Science China Information Sciences (SCIS), 2023
@article{survey,
title = {A Survey of Progress on Cooperative Multi-agent Reinforcement Learning in Open Environment},
author = {Lei Yuan and Ziqian Zhang and Lihe Li and Cong Guan and Yang Yu},
journal = {Science China Information Sciences (SCIS)},
year = {2023}
}
We review multi-agent cooperation from closed-environment to open-environment settings, and provide
prospects for the future development and research directions of cooperative MARL in open environments.
Learning to Reuse Policies in State Evolvable Environments
Ziqian Zhang, Bohan Yang, Lihe Li, Yuqi Bian, Ruiqi Xue, Feng Chen, Yi-Chen Li, Lei Yuan, Yang Yu
The 42nd International Conference on Machine Learning (ICML), 2025
@inproceedings{lapse,
title = {Learning to Reuse Policies in State Evolvable Environments},
author = {Ziqian Zhang and Bohan Yang and Lihe Li and Yuqi Bian and Ruiqi Xue and Feng Chen and Yi-Chen Li and Lei Yuan and Yang Yu},
booktitle = {Proceedings of the Forty-second International Conference on Machine Learning},
year = {2025}
}
We address the performance degradation of RL policies when state features (e.g., sensor data) evolve unpredictably
by proposing Lapse, a method that reuses old policies by combining them with a state reconstruction model for vanished sensors, and leverages past policy experience for offline training of new policies.
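As a toy sketch of this reuse mechanism: fit a reconstructor that predicts the vanished sensor features from the surviving ones, then feed the reconstructed full state to the old policy. The linear reconstructor and every name below are illustrative assumptions, not Lapse's implementation.

import numpy as np

rng = np.random.default_rng(1)
full_states = rng.normal(size=(512, 6))    # logged states with all sensors present
kept, vanished = slice(0, 4), slice(4, 6)  # the last two sensor features vanish

# Fit a reconstructor that predicts vanished features from surviving ones
# (plain least squares here; a learned model in general).
W, *_ = np.linalg.lstsq(full_states[:, kept], full_states[:, vanished], rcond=None)

def old_policy(state6):
    # Pretend pretrained policy that expects the original 6-dim state.
    return int(state6.sum() > 0)

def reused_policy(state4):
    # Reuse the old policy in the evolved env by reconstructing lost sensors.
    reconstructed = np.concatenate([state4, state4 @ W])
    return old_policy(reconstructed)

print(reused_policy(rng.normal(size=4)))   # acts despite the missing sensors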
Efficient Multi-agent Offline Coordination via Diffusion-based Trajectory Stitching
Lei Yuan, Yuqi Bian, Lihe Li, Ziqian Zhang, Cong Guan, Yang Yu
The 13th International Conference on Learning Representations (ICLR), 2025
@inproceedings{madits,
title = {Efficient Multi-agent Offline Coordination via Diffusion-based Trajectory Stitching},
author = {Lei Yuan and Yuqi Bian and Lihe Li and Ziqian Zhang and Cong Guan and Yang Yu},
booktitle = {The Thirteenth International Conference on Learning Representations},
year = {2025}
}
We propose a data augmentation technique for offline cooperative MARL, utilizing diffusion models to improve the quality of the datasets.
Multi-Agent Domain Calibration with a Handful of Offline Data
Tao Jiang, Lei Yuan, Lihe Li, Cong Guan, Zongzhang Zhang, Yang Yu
Advances in Neural Information Processing Systems (NeurIPS), 2024
@inproceedings{madoc,
title = {Multi-Agent Domain Calibration with a Handful of Offline Data},
author = {Tao Jiang and Lei Yuan and Lihe Li and Cong Guan and Zongzhang Zhang and Yang Yu},
booktitle = {Advances in Neural Information Processing Systems 38},
pages = {69607--69636},
year = {2024}
}
We formulate domain calibration as a cooperative MARL problem to improve calibration efficiency and fidelity.
Education
Nanjing University 2023.09 - present
M.Sc. in Computer Science and Technology Advisor: Prof. Yang Yu
Nanjing University 2019.08 - 2023.07
B.E. in Artificial Intelligence Advisor: Prof. Yang Yu
Guangdong Zhaoqing Middle School 2016.09 - 2019.06
I have had the fortune to work with brilliant people during my
research journey, and I am truly grateful for their guidance and help!
My Chinese name, 李立和 (Li Lihe), is pronounced /liː ˈliː hɜː/ in Mandarin or /lei ˈlʌp wɔː/ in Cantonese. 李 is one of the most common surnames in China, 立 means "stand" or "establish", and 和 means "harmony" and "peace".