Jixuan Leng - MSML @ Carnegie Mellon University

Jixuan Leng

I am a second-year MSML student at Carnegie Mellon University, advised by Prof. William W. Cohen. I completed my B.S. in Computer Science at the University of Rochester, working with Prof. Jiebo Luo, and have collaborated with Prof. Haohan Wang (UIUC DREAM Lab) and Prof. Jiaxin Huang (WashU).

My research focuses on efficient training and inference, and on model alignment for both LLMs and VLMs. I am currently a Student Researcher at Google Research, advised by Dr. Si Si.

Carnegie Mellon University - Pittsburgh, USA
LLMs · VLMs · Efficiency · Reasoning

News

Recent highlights

Education

Academic path

  • 2024 - Now M.S. in Machine Learning, Carnegie Mellon University, USA.
  • 2020 - 2024 B.S. in Computer Science, University of Rochester, USA.

Research

Experience

Service

Community

  • 2025 - Now Reviewer, ICLR.
  • 2024 - Now Reviewer, TMLR.
  • 2022 - 2023 Teaching Assistant, CSC 261/461: Database Systems, University of Rochester.

Papers

Selected publications


2026

  1. Efficient Test-Time Scaling via Self-Calibration
    Chengsong Huang, Langlin Huang, Jixuan Leng, Jiacheng Liu, Jiaxin Huang
    International Conference on Learning Representations (ICLR) 2026 | [ arXiv Code Preview ]

2025

  1. Reliable and Responsible Foundation Models
    Xinyu Yang, Junlin Han, Rishi Bommasani, Jinqi Luo, Wenjie Qu, Wangchunshu Zhou, Adel Bibi, Xiyao Wang, Jaehong Yoon, Elias Stengel-Eskin, Shengbang Tong, Lingfeng Shen, Rafael Rafailov, Runjia Li, Zhaoyang Wang, Yiyang Zhou, Chenhang Cui, Yu Wang, Wenhao Zheng, Huichi Zhou, Jindong Gu, Zhaorun Chen, Peng Xia, Tony Lee, Thomas P Zollo, Vikash Sehwag, Jixuan Leng, Jiuhai Chen, Yuxin Wen, Huan Zhang, Zhun Deng, Linjun Zhang, Pavel Izmailov, Pang Wei Koh, Yulia Tsvetkov, Andrew Gordon Wilson, Jiaheng Zhang, James Zou, Cihang Xie, Hao Wang, Philip Torr, Julian McAuley, David Alvarez-Melis, Florian Tramèr, Kaidi Xu, Suman Jana, Chris Callison-Burch, Rene Vidal, Filippos Kokkinos, Mohit Bansal, Beidi Chen, Huaxiu Yao
    Transactions on Machine Learning Research (TMLR) 2025 | [ arXiv OpenReview Preview ]
  2. POSS: Position Specialist Generates Better Draft for Speculative Decoding
    Langlin Huang, Chengsong Huang, Jixuan Leng, Di Huang, Jiaxin Huang
    Preprint 2025 | [ arXiv Code Preview ]
  3. Semi-structured LLM Reasoners Can Be Rigorously Audited
    Jixuan Leng, Cassandra A. Cohen, Zhixian Zhang, Chenyan Xiong, William W. Cohen
    Preprint 2025 | [ arXiv Code Preview ]
  4. CrossWordBench: Evaluating the Reasoning Capabilities of LLMs and LVLMs with Controllable Puzzle Generation
    Jixuan Leng, Chengsong Huang, Langlin Huang, Bill Yuchen Lin, William W. Cohen, Haohan Wang, Jiaxin Huang
    Second Conference on Language Modeling (COLM) 2025 | [ arXiv Code Dataset Preview ]
  5. Taming Overconfidence in LLMs: Reward Calibration in RLHF
    Jixuan Leng, Chengsong Huang, Banghua Zhu, Jiaxin Huang
    International Conference on Learning Representations (ICLR) 2025 | [ arXiv Code OpenReview Preview ]

2024

  1. S²FT: Efficient, Scalable and Generalizable LLM Fine-tuning by Structured Sparsity
    Xinyu Yang, Jixuan Leng, Geyang Guo, Jiawei Zhao, Ryumei Nakada, Linjun Zhang, Huaxiu Yao, Beidi Chen
    Conference on Neural Information Processing Systems (NeurIPS) 2024 | [ arXiv Code OpenReview Preview ]
  2. Development of UroSAM: A Machine Learning Model to Automatically Identify Kidney Stone Composition from Endoscopic Video
    Jixuan Leng, Junfei Liu, Galen Cheng, Haohan Wang, Scott Quarrier, Jiebo Luo, Rajat Jain
    Journal of Endourology 2024 | [ arXiv Website ]
  3. Choosing Wisely and Learning Deeply: Selective Cross-Modality Distillation via CLIP for Domain Generalization
    Jixuan Leng, Yijiang Li, Haohan Wang
    Transactions on Machine Learning Research (TMLR) 2024 | [ arXiv Code OpenReview ]

Get in touch

Let's collaborate

Feel free to reach out if you would like to chat about research or collaboration.