Ling Zhang

Senior Researcher at Microsoft Research Asia

About Me

I am currently a Senior Researcher at Microsoft Research Asia. I received my PhD in 2024 from the University of Washington, advised by Prof. Baosen Zhang. My research focuses on learning-based optimization for power system operations under high renewable penetration and uncertainty. I develop methods that guarantee hard constraint satisfaction by embedding optimization structures directly into the learning process. At MSRA, I also explore applying LLMs to large-scale combinatorial optimization (e.g., logistics) and improving their alignment with downstream tasks.

My broader vision lies at the intersection of AI and safety-critical systems such as power grids. I aim to build reliable AI systems for decision-making under hard physical constraints and uncertainty. More recently, I have been investigating the interaction between hyperscale AI data centers and power grids, examining both how data centers can serve as strategic grid assets and how they can scale without compromising grid stability or public welfare.

Selected Publications

Large Language Models & Combinatorial Optimization

  • Holdout-Loss-Based Data Selection for LLM Finetuning via In-Context Learning
    L. Zhang, X. Yang, J. Yu, P. Cheonyoung, M. Lee, L. Song, and J. Bian.
    ICLR, 2026. (To Appear)

    Proposes a principled data selection strategy based on holdout loss to improve LLM alignment and finetuning.

  • HeurAgenix: Leveraging LLMs for Solving Complex Combinatorial Optimization Challenges
    X. Yang, L. Zhang, H. Qian, L. Song, and J. Bian.
    arXiv preprint, 2025.

    Employs LLMs to automatically generate and evolve heuristic programs for large-scale combinatorial optimization.

Learning-based Optimization for Power Systems

  • An Efficient Learning-based Solver for Two-stage DC Optimal Power Flow with Feasibility Guarantees
    L. Zhang, D. Tabas, and B. Zhang.
    arXiv preprint, 2023.

    Embeds gauge mapping within neural solvers to provide feasibility guarantees for two-stage stochastic LP.

  • An Iterative Approach to Improving Solution Quality for AC Optimal Power Flow Problems
    L. Zhang and B. Zhang.
    ACM e-Energy, 2022. 🏆 Best Paper Finalist

    Develops a Lagrangian-based warm-start method to improve solution quality in non-convex AC-OPF.

  • A Convex Neural Network Solver for DCOPF with Generalization Guarantees
    L. Zhang, Y. Chen, and B. Zhang.
    IEEE TCNS, 2021.

    Leverages duality and KKT conditions to guarantee linear constraint satisfaction in neural solvers.