About me
Liang Chen is a third-year Ph.D. student in the Department of Systems Engineering and Engineering Management at The Chinese University of Hong Kong (CUHK), advised by Prof. Kam-Fai Wong. He received his master's and bachelor's degrees from Peking University and Northwestern Polytechnical University, respectively. Currently, he is a visiting researcher at LMU Munich, working with Prof. Hinrich Schütze.
His research interests lie in natural language processing and machine learning, with a focus on trustworthy large language models (LLMs). To this end, he develops novel algorithms to ensure the reliability of LLMs across training (ICML 2025, ICLR 2025), inference (ACL 2024), and evaluation (EMNLP 2023). Recently, he has worked on large reasoning models (LRMs). Further details can be found in the research portfolio.
News
- [05/2025] Gave a talk at LMU Munich on robust LLMs.
- [05/2025] Our paper on safety alignment is accepted at ICML 2025.
We reveal that some alignment examples are more prone to forgetting, and propose to upweight and reinforce them to improve safety retention.
- [02/2025] Our paper on robust finetuning is accepted at ICLR 2025.
We propose an instruction tuning method that helps LLMs better handle unordered inputs, making them more robust in tasks such as in-context learning (ICL) and retrieval-augmented generation (RAG).
- [08/2024] One collaborative paper on model editing is accepted at EMNLP 2024.
- [05/2024] Our paper on text watermarking is accepted at ACL 2024.
We propose a novel LM decoding method that embeds watermarks by exploiting lexical redundancy, minimizing impact on text quality.
Publications (Full List)
Liang Chen, Xueting Han, Li Shen, Jing Bai, Kam-Fai Wong.
Vulnerability-Aware Alignment: Mitigating Uneven Forgetting in Harmful Fine-Tuning
ICML 2025

Liang Chen, Li Shen, Yang Deng, Xiaoyan Zhao, Bin Liang, Kam-Fai Wong.
PEARL: Towards Permutation-Resilient LLMs
ICLR 2025

Liang Chen, Yatao Bian, Yang Deng, Deng Cai, Shuaiyi Li, Peilin Zhao, Kam-Fai Wong.
WatME: Towards Lossless Watermarking Through Lexical Redundancy
ACL 2024

Liang Chen, Yang Deng, Yatao Bian, Zeyu Qin, Bingzhe Wu, Tat-Seng Chua, Kam-Fai Wong.
Beyond Factuality: A Comprehensive Evaluation of Large Language Models as Knowledge Generators
EMNLP 2023

Liang Chen, Hongru Wang, Yang Deng, Wai Chung Kwan, Zezhong Wang, Kam-Fai Wong.
Towards Robust Personalized Dialogue Generation via Representation Regularization
Findings of ACL 2023

Shuaiyi Li, Yang Deng, Deng Cai, Hongyuan Lu, Liang Chen, Wai Lam.
Consecutive Model Editing with Batch alongside HooK Layers
EMNLP 2024

Zezhong Wang, Fangkai Yang, Lu Wang, Pu Zhao, Hongru Wang, Liang Chen, Qingwei Lin, Kam-Fai Wong.
SELF-GUARD: Empower the LLM to Safeguard Itself
NAACL 2024

Yang Deng, Lizi Liao, Liang Chen, Hongru Wang, Wenqiang Lei, Tat-Seng Chua.
Proactive Dialogue Systems in the Era of Large Language Models: Evaluating from a Prompting Perspective
Findings of EMNLP 2023
Talks
- Towards Trustworthy LLMs: Improving Robustness via Post-Training Optimization
PhD Seminar, LMU Munich – May 2025
Teaching
I have served as a teaching assistant for the following courses:
- Operations Research II (SEEM3440) – Covers advanced optimization techniques, including non-linear, integer, and dynamic programming.
- Engineering Innovation and Entrepreneurship (SEEM3450) – A hands-on course focused on identifying engineering opportunities and developing business plans.
Internships
- Microsoft Research Asia, Systems Research Group
- Tencent AI Lab, Machine Learning Center
Community Service
- Reviewer for ICML, ICLR, NeurIPS, AISTATS, ACL, EMNLP, and NAACL.
Honors & Scholarships
- Postgraduate Studentship, CUHK
- School Scholarship, PKU
- First-Class Scholarship (×3), NWPU
Miscellaneous
Outside of research, I enjoy walking in the park, as well as sports like swimming, hiking, and table tennis.
During my time at NWPU, I was the runner-up in the Freshmen Cup table tennis singles match and won the team championship three times.
“I don't want to achieve immortality through my work; I want to achieve immortality through not dying.”
— Woody Allen