I am a PhD student at Texas A&M University. My advisor is Dr. Tianbao Yang. Before transferring to Texas A&M, I spent three years in the Applied Mathematical and Computational Sciences program at the University of Iowa as a PhD student and received my master's degree in mathematics. I received my bachelor's degree in mathematics from the Pennsylvania State University.
Research Interests
Optimization for machine learning.
Publications
- Single-loop Stochastic Algorithms for Difference of Max-Structured Weakly Convex Functions [Preprint]
Quanqi Hu, Qi Qi, Zhaosong Lu, Tianbao Yang.
To appear in Conference on Neural Information Processing Systems (NeurIPS), 2024.
- Non-Smooth Weakly-Convex Finite-sum Coupled Compositional Optimization [Preprint]
Quanqi Hu, Dixian Zhu, Tianbao Yang.
In Conference on Neural Information Processing Systems (NeurIPS), 2023.
- Blockwise Stochastic Variance-Reduced Methods with Parallel Speedup for Multi-Block Bilevel Optimization [Preprint]
Quanqi Hu, Zi-Hao Qiu, Zhishuai Guo, Lijun Zhang, Tianbao Yang.
In International Conference on Machine Learning (ICML), 2023.
- Not All Semantics are Created Equal: Contrastive Self-supervised Learning with Automatic Temperature Individualization [Preprint]
Zi-Hao Qiu, Quanqi Hu, Zhuoning Yuan, Denny Zhou, Lijun Zhang, Tianbao Yang.
In International Conference on Machine Learning (ICML), 2023.
- Multi-block Min-max Bilevel Optimization with Applications in Multi-task Deep AUC Maximization [Preprint]
Quanqi Hu, Yongjian Zhong, Tianbao Yang.
In Conference on Neural Information Processing Systems (NeurIPS), 2022.
- Large-scale Stochastic Optimization of NDCG Surrogates for Deep Learning with Provable Convergence [Preprint]
Zi-Hao Qiu*, Quanqi Hu*, Yongjian Zhong, Lijun Zhang, Tianbao Yang.
In International Conference on Machine Learning (ICML), 2022.
- A Stochastic Momentum Method for Min-max Bilevel Optimization [PDF]
Quanqi Hu, Bokun Wang, Tianbao Yang.
In NeurIPS Workshop on Optimization for Machine Learning (OPT), 2021.
* denotes equal contribution.
Preprints
- Provable Optimization for Adversarial Fair Self-supervised Contrastive Learning [Preprint]
Qi Qi, Quanqi Hu, Qihang Lin, Tianbao Yang.
- Randomized Stochastic Variance-Reduced Methods for Multi-Task Stochastic Bilevel Optimization [Preprint]
Zhishuai Guo, Quanqi Hu, Lijun Zhang, Tianbao Yang.
Experience
- AI/ML PhD Intern, GE Healthcare, May - Aug. 2024
- Utilized a latent diffusion model to generate synthetic 3D cardiac CT volumes along with their corresponding segmentation masks.
- Enhanced segmentation performance on 3D cardiac CT volumes by incorporating the generated synthetic volumes and masks into the training process.
- Further improved segmentation accuracy through the application of a self-supervised learning pretraining procedure.
- Research Scientist Intern, KLA, May - Aug. 2023
- Improved the defect detection performance on scanning electron microscope (SEM) images using self-supervised pretrained Vision Transformer (ViT).
- Applied Low-Rank Adaptation (LoRA) to further improve the training efficiency.
Teaching Assistant
- MATH:1560 Engineering Math II: Multivariable Calculus, University of Iowa, Fall 2021
- MATH:1260 The Mathematics of Pokemon Go, University of Iowa, Spring 2021
- MATH:2560 Engineering Math IV: Differential Equations, University of Iowa, Fall 2020
- MATH:1440 Mathematics for the Biological Sciences, University of Iowa, Fall 2020
- MATH:1460 Calculus for the Biological Sciences, University of Iowa, Spring 2020
- MATH:1380 Calculus and Matrix Algebra for Business, University of Iowa, Spring 2020
- MATH:3800 Elementary Numerical Analysis, University of Iowa, Fall 2019
- MATH:3600 Intro to Ordinary Differential Equations, University of Iowa, Fall 2019
Selected Awards
- Travel Award, Conference on Neural Information Processing Systems (NeurIPS), 2022
- Travel Award, International Conference on Machine Learning (ICML), 2022
- Steven and Sherry McCrystal Award, Penn State, April 2018
- Women in Math Scholarship, Penn State, April 2018
Services
- Conference reviewer for ICML and NeurIPS.