Jixuan Leng - MSML @ Carnegie Mellon University
Jixuan Leng
I am a second-year MSML student at Carnegie Mellon University, advised by Prof. William W. Cohen. I completed my B.S. in Computer Science at the University of Rochester with Prof. Jiebo Luo, and have collaborated with Prof. Haohan Wang (DREAM Lab, UIUC) and Prof. Jiaxin Huang (WashU).
My research focuses on efficient training and inference, and on model alignment for both LLMs and VLMs. I am currently a Student Researcher at Google Research, advised by Dr. Si Si.
News
Recent highlights
- 2026.02 Self-Calibration accepted by ICLR 2026.
- 2025.09 Survey paper on Reliable and Responsible Foundation Models accepted by TMLR.
- 2025.06 CrossWordBench accepted by COLM 2025.
- 2025.06 New preprints on semi-structured LLM auditing and speculative decoding.
- 2025.05 Joining Google Research as a Student Researcher for summer 2025.
- 2025.02 Taming Overconfidence in LLMs accepted by ICLR 2025 (poster).
- 2024.09 S2FT accepted by NeurIPS 2024 (poster).
- 2024.08 Started M.S. in Machine Learning at Carnegie Mellon University.
Education
Academic path
- 2024 - Now M.S. in Machine Learning, Carnegie Mellon University, USA.
- 2020 - 2024 B.S. in Computer Science, University of Rochester, USA.
Research
Experience
- 2025.06 - Now Student Researcher, Google Research (advisor: Dr. Si Si).
- 2025.01 - 2025.05 Independent Study, Carnegie Mellon University (advisor: Prof. William W. Cohen).
- 2024.06 - Now Visiting Researcher, Washington University in St. Louis (advisor: Prof. Jiaxin Huang).
- 2023.12 - 2024.05 Honor Independent Study, VIStA Lab, University of Rochester (advisors: Prof. Jiebo Luo and Dr. Rajat K. Jain).
- 2022.12 - 2024.05 Research Intern, DREAM Lab, UIUC (advisor: Prof. Haohan Wang).
Service
Community
- 2025 - Now Reviewer, ICLR.
- 2024 - Now Reviewer, TMLR.
- 2022 - 2023 Teaching Assistant, CSC 261/461: Database Systems, University of Rochester.
Papers
Selected publications
2026
- Self-Calibration, International Conference on Learning Representations (ICLR) 2026.
2025
- Reliable and Responsible Foundation Models (Survey), Transactions on Machine Learning Research (TMLR) 2025.
- Taming Overconfidence in LLMs: Reward Calibration in RLHF, International Conference on Learning Representations (ICLR) 2025 | [ arXiv Code OpenReview Preview ]
2024
- S²FT: Efficient, Scalable and Generalizable LLM Fine-tuning by Structured Sparsity, Conference on Neural Information Processing Systems (NeurIPS) 2024 | [ arXiv Code OpenReview Preview ]
- Choosing Wisely and Learning Deeply: Selective Cross-Modality Distillation via CLIP for Domain Generalization
Get in touch
Let's collaborate
Feel free to reach out if you would like to chat about research or collaboration.