Karan Singh
Karan Singh is a postdoctoral researcher in the Reinforcement Learning group at Microsoft Research. In November 2021, he completed his PhD in Computer Science at Princeton University, advised by Elad Hazan. While at Princeton, Karan was awarded the Porter Ogden Jacobus Fellowship, Princeton University's highest graduate student honor. Before that, he completed his bachelor's degree at the Indian Institute of Technology (IIT) Kanpur, where he received the President's Gold Medal for the best academic performance in the graduating class.
Karan's research addresses statistical and computational challenges in feedback-driven interactive learning, spanning both prediction and control. His results draw from the algorithmic toolkits of optimization and online learning, together with techniques from dynamical systems and control theory.
His PhD dissertation work on Nonstochastic Control proposes an algorithmic (vs. traditionally analytic) foundation for control theory, and outlines provably efficient instance-optimal control algorithms (1, 2, 3, 4, 5) that go beyond both average-case notions of optimal control and worst-case notions in robust control.
His earlier work delineates principled approaches (6, 7, 8) for learning and prediction in dynamical systems that do not "forget" (i.e., exhibit long-term correlations). Recently, he has also been investigating a plausible systems-level theory of machine learning (e.g. 9, but broader), where guarantees on the aggregate could be synthesized from functional (rather than behavioral) characteristics of individual subsystems.
Email | CV | Google Scholar
Peer-reviewed Publications
All publications list authors in alphabetical order, except those indicated with †.
Boosting for Online Convex Optimization
with Elad Hazan
International Conference on Machine Learning (ICML), 2021
proceedings | arXiv
A Regret Minimization Approach to Iterative Learning Control
with Naman Agarwal, Elad Hazan, Anirudha Majumdar
International Conference on Machine Learning (ICML), 2021
proceedings | arXiv
Improper Learning for Nonstochastic Control†
with Max Simchowitz, Elad Hazan
Conference on Learning Theory (COLT), 2020
proceedings | arXiv
No-Regret Prediction in Marginally Stable Systems
with Udaya Ghai, Holden Lee, Cyril Zhang, Yi Zhang
Conference on Learning Theory (COLT), 2020
proceedings | arXiv
The Nonstochastic Control Problem
with Elad Hazan, Sham Kakade
Algorithmic Learning Theory (ALT), 2020
proceedings | arXiv
Logarithmic Regret for Online Control
with Naman Agarwal, Elad Hazan
Neural Information Processing Systems (NeurIPS), 2019 Oral Presentation (<0.5% of submissions)
Also, Best Paper Award at the OptRL workshop at NeurIPS 2019
proceedings | arXiv
Online Control with Adversarial Disturbances
with Naman Agarwal, Brian Bullins, Elad Hazan, Sham Kakade
International Conference on Machine Learning (ICML), 2019
proceedings | arXiv
Provably Efficient Maximum Entropy Exploration
with Elad Hazan, Sham Kakade, Abby Van Soest
International Conference on Machine Learning (ICML), 2019
proceedings | arXiv
Efficient Full-Matrix Adaptive Regularization
with Naman Agarwal, Brian Bullins, Xinyi Chen, Elad Hazan, Cyril Zhang, Yi Zhang
International Conference on Machine Learning (ICML), 2019
proceedings | arXiv
Spectral Filtering for General Linear Dynamical Systems
with Elad Hazan, Holden Lee, Cyril Zhang, Yi Zhang
Neural Information Processing Systems (NeurIPS), 2018 Oral Presentation (<0.5% of submissions)
proceedings | arXiv
Learning Linear Dynamical Systems via Spectral Filtering
with Elad Hazan, Cyril Zhang
Neural Information Processing Systems (NeurIPS), 2017 Spotlight (<5% of submissions)
Also, Spotlight Prize at New York Academy of Sciences' ML Symposium, 2018
proceedings | arXiv
The Price of Differential Privacy for Online Learning
with Naman Agarwal
International Conference on Machine Learning (ICML), 2017
proceedings | arXiv
Efficient Regret Minimization in Non-Convex Games
with Elad Hazan, Cyril Zhang
International Conference on Machine Learning (ICML), 2017
proceedings | arXiv
Preprints and Technical Reports
A Boosting Approach to Reinforcement Learning
with Nataly Brukhim, Elad Hazan
Preliminary version at the ICML Workshop on Reinforcement Learning Theory, 2021
Dynamic Learning System
with Elad Hazan, Cyril Zhang
US Patent 11,138,513 B2, granted Oct 2021
Machine Learning for Mechanical Ventilation Control†
with Daniel Suo, Cyril Zhang, Paula Gradu, Udaya Ghai, Xinyi Chen, Edgar Minasyan, Naman Agarwal, Julienne LaChance, Tom Zajdel, Manuel Schottdorf, Daniel Cohen, Elad Hazan
Machine Learning for Health (ML4H), 2021 Workshop Track
Featured in Princeton Engineering news.
Deluca -- A Differentiable Control Library: Environments, Methods, and Benchmarking†
with Paula Gradu, John Hallman, Daniel Suo, Alex Yu, Naman Agarwal, Udaya Ghai, Cyril Zhang, Anirudha Majumdar, Elad Hazan
NeurIPS Workshop on Differentiable Computer Vision & Physics, 2020 Oral Presentation
Towards Provable Control for Unknown Linear Dynamical Systems
with Sanjeev Arora, Elad Hazan, Holden Lee, Cyril Zhang, Yi Zhang
International Conference on Learning Representations (ICLR), 2018 Workshop Track
Dynamic Task Allocation for Crowdsourcing†
with Irineo Cabreros, Angela Zhou
ICML Workshop on Data Efficient Machine Learning, 2016