🥳🥳 Self-Calibration accepted by ICLR 2026.
Jixuan Leng
Research on efficient, reliable, and aligned foundation models.
I am a second-year M.S. in Machine Learning student at Carnegie Mellon University advised by Prof. William W. Cohen. Before CMU, I completed my B.S. in Computer Science at the University of Rochester, where I was advised by Prof. Jiebo Luo.
My research focuses on efficient training and inference, model alignment, and evaluation for large language and vision-language models. I am currently a Student Researcher at Google Research advised by Dr. Si Si, and I also collaborate with Prof. Jiaxin Huang and Prof. Haohan Wang.
- LLMs
- VLMs
- Efficiency
- Alignment
- Reasoning
News
Updates on papers, collaborations, and current roles.
🥳🥳 Survey paper on Reliable and Responsible Foundation Models accepted by TMLR.
🥳🥳 CrossWordBench accepted by COLM 2025.
New preprints on semi-structured LLM auditing and speculative decoding.
🎓🎓 Joined Google Research as a Student Researcher for summer 2025.
🥳🥳 Taming Overconfidence in LLMs accepted by ICLR 2025.
Trajectory
Education
- 2024 - Now: Carnegie Mellon University, M.S. in Machine Learning, USA
- 2020 - 2024: University of Rochester, B.S. in Computer Science, USA
Research Experience
- 2025.06 - Now: Student Researcher, Google Research (advisor: Dr. Si Si).
- 2025.01 - 2025.05: Independent Study, Carnegie Mellon University (advisor: Prof. William W. Cohen).
- 2024.06 - Now: Visiting Researcher, Washington University in St. Louis (advisor: Prof. Jiaxin Huang).
- 2023.12 - 2024.05: Honors Independent Study, VIStA Lab, University of Rochester (advisors: Prof. Jiebo Luo and Dr. Rajat K. Jain).
- 2022.12 - 2024.05: Research Intern, DREAM Lab (advisor: Prof. Haohan Wang).
Service
- 2026 - Now: Reviewer, ICML.
- 2025 - Now: Reviewer, ICLR.
- 2024 - Now: Reviewer, TMLR.
- 2022 - 2023: Teaching Assistant, CSC 261/461: Database Systems, University of Rochester.
Selected publications
- Efficient Test-Time Scaling via Self-Calibration. International Conference on Learning Representations (ICLR).
- Reliable and Responsible Foundation Models. Transactions on Machine Learning Research (TMLR).
- Semi-structured LLM Reasoners Can Be Rigorously Audited. Preprint.
- CrossWordBench: Evaluating the Reasoning Capabilities of LLMs and LVLMs with Controllable Puzzle Generation. Second Conference on Language Modeling (COLM).
- Taming Overconfidence in LLMs: Reward Calibration in RLHF. International Conference on Learning Representations (ICLR).
- S2FT: Efficient, Scalable and Generalizable LLM Fine-tuning by Structured Sparsity. Conference on Neural Information Processing Systems (NeurIPS).
Get in touch
The fastest way to reach me is by email. You can also find publication updates on Google Scholar, code on GitHub, and professional details on LinkedIn or DBLP.