Yongxin Chen
I am looking for motivated graduate students interested in systems and control, machine learning, robotics, and optimization.
A solid background in mathematics and/or coding is required. Our strategic areas for Fall 2024 include generative AI and robot learning. Applicants majoring in computer science or automation are given high priority. If you are interested in working with me, please feel free to contact me and send me your CV and transcripts. Visiting scholars are also welcome to reach out, but please be aware that the paperwork normally takes two months. There are currently no funded postdoctoral positions in my group; we can only consider applicants with external fellowship funding. Some opportunities include the GT President's Postdoc, the GT AE Postdoc Fellowship, ASEE eFellows, and the NSF Postdoc Fellowship.
Biosketch
Yongxin Chen was born in Ganzhou, Jiangxi, China. He received his BSc in Mechanical Engineering from Shanghai Jiao Tong University, China, in 2011, and his Ph.D. in Mechanical Engineering from the University of Minnesota in 2016, under the supervision of Tryphon Georgiou. He is currently an Associate Professor in the School of Aerospace Engineering at the Georgia Institute of Technology. Before joining Georgia Tech, he held a one-year research fellowship in the Department of Medical Physics at Memorial Sloan Kettering Cancer Center with Allen Tannenbaum from 2016 to 2017 and was an Assistant Professor in the Department of Electrical and Computer Engineering at Iowa State University from 2017 to 2018. He received the George S. Axelby Best Paper Award (IEEE Transactions on Automatic Control) in 2017 for his joint work ‘‘Optimal steering of a linear stochastic system to a final probability distribution, Part I’’ with Tryphon Georgiou and Michele Pavon, and the SIAM Journal on Control and Optimization Best Paper Award in 2023. He received the NSF CAREER Award in 2020, a Simons-Berkeley Research Fellowship in 2021, the A.V. ‘Bal’ Balakrishnan Award in 2021, and the Donald P. Eckman Award in 2022. He delivered plenary talks at the 2023 American Control Conference and the 2024 International Symposium on Mathematical Theory of Networks and Systems.
News
Aug 2024: I am deeply honored to have the opportunity to deliver a plenary talk titled ‘‘Stochastic Diffusions in Control, Inference, and Learning’’ at the 2024 International Symposium on Mathematical Theory of Networks and Systems held in Cambridge, UK.
Sep 2023: Our paper ‘‘Diffusion-Based Adversarial Sample Generation for Improved Stealthiness and Controllability’’ has been accepted to the 2023 Conference on Neural Information Processing Systems. We propose a novel diffusion model-based method to generate adversarial samples that are imperceptible to human eyes. Our algorithm improves over existing methods in terms of stealthiness, controllability, and transferability.
Aug 2023: Our paper ‘‘Generative Skill Chaining: Long-Horizon Skill Planning with Diffusion Models’’ has been accepted to the 2023 Conference on Robot Learning. In this work, we propose a novel diffusion model-based method that learns individual skills from expert data and chains them strategically to realize complex task sequences. Our method is among the first that can accomplish long-horizon tasks using only short-horizon training data.
June 2023: I am deeply honored to have the opportunity to deliver a plenary talk titled ‘‘A Journey through Diffusions in Control, Inference, and Learning’’ at the 2023 American Control Conference held in San Diego. It is a privilege to share my insights and experiences on the topics of stochastic diffusions, control, statistical inference, and machine learning. The slides are available here.
Feb 2023: Our new manuscript ‘‘Improved dimension dependence of a proximal algorithm for sampling’’ is posted on arXiv. It achieves, for the first time, sqrt(d) complexity for sampling from strongly log-concave distributions in the high-accuracy regime. Our algorithm achieves the best known complexity in all classical settings (strongly log-concave, log-concave, log-Sobolev inequality, Poincaré inequality) as well as in nonstandard settings such as semi-smooth and composite potentials.
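For readers unfamiliar with the proximal sampling framework that this result analyzes, here is a minimal sketch of the generic scheme; the step size choice and the oracle implementation below are simplifications for illustration, not the paper's exact statement. To sample from a target \pi(x) \propto e^{-f(x)}, one introduces the augmented density \pi(x, y) \propto \exp(-f(x) - \|x - y\|^2 / (2\eta)) and alternates two Gibbs steps:

    y_{k+1} \mid x_k \sim \mathcal{N}(x_k, \eta I),
    x_{k+1} \sim p(\cdot \mid y_{k+1}) \propto \exp\big(-f(x) - \|x - y_{k+1}\|^2 / (2\eta)\big),

where the second step (the restricted Gaussian oracle) is well conditioned and can be implemented, for instance, by rejection sampling when the step size \eta is small relative to the smoothness of f.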
Feb 2023: Our papers ‘‘DEIS: Fast Sampling of Diffusion Models with Exponential Integrator’’ and ‘‘gDDIM: Generalized Denoising Diffusion Implicit Models’’ have been accepted to ICLR 2023. We present an efficient training-free algorithm to sample from arbitrary diffusion models. Our algorithm enables generating high-quality samples from large-scale diffusion models within 10 steps. Code is available: DEIS, gDDIM.
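To illustrate the core idea at a high level, here is a minimal sketch of a zeroth-order exponential-integrator update for the reverse dynamics under the usual variance-preserving parameterization (essentially a deterministic DDIM-style step): the linear signal-scaling part is handled exactly, and only the learned term is approximated. The noise-prediction network eps_model and the schedule alpha_bar below are hypothetical placeholders, not the released DEIS/gDDIM interface.

    import torch

    def ei_sample(eps_model, alpha_bar, timesteps, shape):
        """Zeroth-order exponential-integrator sampling sketch: the linear
        (signal-scaling) part of the reverse dynamics is treated exactly via the
        alpha ratio; the learned noise prediction is held constant over each step."""
        x = torch.randn(shape)  # start from pure Gaussian noise
        for t_cur, t_next in zip(timesteps[:-1], timesteps[1:]):
            a_cur, a_next = alpha_bar[t_cur].sqrt(), alpha_bar[t_next].sqrt()
            s_cur, s_next = (1 - alpha_bar[t_cur]).sqrt(), (1 - alpha_bar[t_next]).sqrt()
            eps = eps_model(x, t_cur)  # learned noise prediction (proportional to -score)
            # exact update for the linear drift, zeroth-order hold for eps
            x = (a_next / a_cur) * x + (s_next - (a_next / a_cur) * s_cur) * eps
        return x

Roughly speaking, higher-order variants of this idea replace the zeroth-order hold on eps with polynomial extrapolation from previous network evaluations, which is what makes high-quality sampling in around 10 steps possible.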
Apr 2021: Our paper ‘‘Optimal Transport in Systems and Control’’ has been accepted by the Annual Review of Control, Robotics, and Autonomous Systems. It provides an introduction to optimal transport for the control community.