Bruce Hajek

Applied Probability Trust (APT) Plenary Lecture

Monday, June 30th at 9 am

Bruce Hajek is the Leonard C. and Mary Lou Hoeft Endowed Chair in Engineering in the Department of Electrical and Computer Engineering at the University of Illinois Urbana-Champaign, where he is a professor in the department and a research professor in the Coordinated Science Laboratory (CSL). His research interests include communication networks, auction theory, stochastic analysis, combinatorial optimization, machine learning, information theory, and bioinformatics. He has received multiple honors, including the ACM SIGMETRICS Achievement Award in 2015 and the IEEE Koji Kobayashi Computers and Communications Award in 2003; he gave the INFORMS Markov Lecture in 2006, and he has appeared on the UIUC List of Teachers Ranked as Excellent for several years.

To find out more about Bruce's research and activities, you may visit his website here.

 

Title 

On Estimation of ROC Curves from Likelihood Ratio Observations

Abstract

The optimal receiver operating characteristic (ROC) curve, giving the maximum probability of detection vs. the probability of false alarm, is a key information-theoretic indicator of the difficulty of a binary hypothesis testing problem (BHT).  The optimal ROC curve for a given BHT, corresponding to the likelihood ratio test, is theoretically determined by the probability distribution of the observed data under each of the two hypotheses.  In some cases, these two distributions may be unknown or computationally intractable, but independent samples of the likelihood ratio can be observed.  This raises the problem of estimating the optimal ROC for a BHT from such samples.  

Four estimators of the ROC curve will be discussed: (1) the empirical estimator, based on separate estimates of the likelihood ratio distribution under each hypothesis; (2) the maximum likelihood estimator; and (3-4) two variants of the maximum likelihood estimator that we call the split and fused estimators. All four are shown to be consistent, and finite sample size bounds are given for the empirical estimator and the variants of the maximum likelihood estimator. An application to causal inference will be discussed.
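For readers who want a concrete picture of the simplest of these, the empirical estimator, here is a minimal sketch under one natural reading of the abstract (this is our illustration, not code from the talk; the function name empirical_roc and the Gaussian toy model are assumptions made for the example). Given i.i.d. likelihood-ratio samples observed under each hypothesis, it estimates the false-alarm probability P0(L > t) and the detection probability P1(L > t) by empirical tail frequencies and sweeps the threshold t over the pooled samples.

```python
# Minimal sketch of an empirical ROC estimate from likelihood-ratio samples.
# Illustration only; not the estimator analyzed in the talk.
import numpy as np

def empirical_roc(lr_h0, lr_h1):
    """Return arrays (p_fa, p_d) tracing an estimated ROC curve.

    lr_h0 : i.i.d. likelihood-ratio samples observed under hypothesis H0
    lr_h1 : i.i.d. likelihood-ratio samples observed under hypothesis H1
    """
    thresholds = np.sort(np.concatenate([lr_h0, lr_h1]))
    # For each threshold t, the test "declare H1 if L > t" has
    # false-alarm probability P0(L > t) and detection probability P1(L > t).
    p_fa = np.array([(lr_h0 > t).mean() for t in thresholds])
    p_d = np.array([(lr_h1 > t).mean() for t in thresholds])
    # Prepend the (1, 1) corner, i.e. a threshold below all observed values.
    return np.concatenate([[1.0], p_fa]), np.concatenate([[1.0], p_d])

# Toy model: data are N(0, 1) under H0 and N(1, 1) under H1, so the
# likelihood ratio of an observation x is exp(x - 1/2).
rng = np.random.default_rng(0)
lr_h0 = np.exp(rng.normal(0.0, 1.0, 2000) - 0.5)
lr_h1 = np.exp(rng.normal(1.0, 1.0, 2000) - 0.5)
p_fa, p_d = empirical_roc(lr_h0, lr_h1)
```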

This talk is based on joint work with Xiaohan Kang.
 

Nike Sun

IMS Medallion Lecture

Monday, June 30th at 5:30 pm

Nike Sun is a Professor of Mathematics at MIT, as of July 2024. She joined the department as Associate Professor with tenure in September 2018. Her research lies at the intersection of probability, statistical physics, and the theory of computing. She completed a B.A. in Mathematics and an M.A. in Statistics at Harvard in 2009, and an MASt in Mathematics at Cambridge in 2010. She received her Ph.D. in Statistics from Stanford University in 2014 under the supervision of Amir Dembo. She subsequently held a Schramm fellowship at Microsoft New England and MIT Mathematics in 2014-2015, and a Simons postdoctoral fellowship at Berkeley in 2016. She was an Assistant Professor in the Berkeley Statistics Department from 2016 to 2018. She received the 2017 Rollo Davidson Prize (shared with Jian Ding) and the 2020 Wolfgang Doeblin Prize.

To find out more about Nike's research and activities, you may visit her website here.

 

Title

Algorithmic Threshold for Perceptron Models

Abstract

We consider the problem of efficiently optimizing random (spherical or Ising) perceptron models with general bounded Lipschitz activation. We focus on a class of algorithms with Lipschitz dependence on the disorder: this includes constant-order methods such as gradient descent, Langevin dynamics, and AMP on dimension-free time-scales. Our main result exactly characterizes the optimal value ALG such algorithms can attain in terms of a one-dimensional stochastic control problem. Qualitatively, ALG is the largest value whose level set contains a certain "dense solution cluster." Quantitatively, this characterization yields both improved algorithms and hardness results for a variety of asymptotic regimes, which are sharp up to absolute constant factors.

This is joint work (in progress) with Brice Huang and Mark Sellke.

Gérard Ben Arous

Tuesday, July 1st at 9 am

A specialist in probability theory and its applications, Gérard Ben Arous arrived at NYU's Courant Institute as a Professor of Mathematics in 2002. He was appointed Director of the Courant Institute and Vice Provost for Science and Engineering Development in September 2011. A native of France, Professor Ben Arous studied Mathematics at École Normale Supérieure and earned his PhD from the University of Paris VII (1981). He has been a Professor at the University of Paris-Sud (Orsay), at École Normale Supérieure, and more recently at the Swiss Federal Institute of Technology in Lausanne, where he held the Chair of Stochastic Modeling. He headed the department of Mathematics at Orsay and the departments of Mathematics and Computer Science at École Normale Supérieure. He also founded a Mathematics research institute in Lausanne, the Bernoulli Center. He is the managing editor (with Amir Dembo, Stanford) of one of the main journals in his field, Probability Theory and Related Fields.

Professor Ben Arous works on probability theory (stochastic analysis, large deviations, random media and random matrices) and its connections with other domains of mathematics (partial differential equations, dynamical systems), physics (statistical mechanics of disordered media), and industrial applications. He is mainly interested in the time evolution of complex systems, and in the universal aspects of their long-time behavior and of their slow relaxation to equilibrium, in particular how complexity and disorder imply aging. He is a Fellow of the Institute of Mathematical Statistics (as of August 2011) and an elected member of the International Statistical Institute. He was a plenary speaker at the European Congress of Mathematics and an invited speaker at the International Congress of Mathematicians, and he received a senior Lady Davis Fellowship (Israel), the Rollo Davidson Prize (University of Cambridge), and the Montyon Prize (French Academy of Sciences).

 

Title

High-dimensional optimization: summary statistics, effective dynamics and dynamical spectral transitions

Abstract

I will survey recent progress in the understanding of the optimization dynamics for important tasks in machine learning and high-dimensional statistics. We will see how these dynamics are in fact ruled by the so-called "effective dynamics" of much lower-dimensional systems.

I will also show how this dynamical dimension reduction is related to the so-called BBP spectral transition of Random Matrix Theory,  appearing dynamically along the algorithm path. I will illustrate these phenomena in multi-spike Tensor PCA, XOR, and classification of Gaussian mixtures.
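As background for the BBP transition mentioned above (this reminder is our addition, not part of the abstract), one standard instance is the rank-one spiked Wigner model, in which an outlier eigenvalue and a macroscopic eigenvector overlap appear only once the signal strength exceeds a threshold:

```latex
% Rank-one spiked Wigner model: M = \theta v v^T + W, with \|v\| = 1 and W a
% Wigner matrix whose spectrum fills [-2, 2] as the dimension n grows.
\[
  \lambda_{\max}(M) \;\xrightarrow[n \to \infty]{}\;
  \begin{cases}
    2, & \theta \le 1,\\[4pt]
    \theta + \tfrac{1}{\theta}, & \theta > 1,
  \end{cases}
  \qquad
  \langle v_{\max}(M),\, v \rangle^{2} \;\xrightarrow[n \to \infty]{}\;
  \begin{cases}
    0, & \theta \le 1,\\[4pt]
    1 - \tfrac{1}{\theta^{2}}, & \theta > 1.
  \end{cases}
\]
```

The point of the talk is that such a transition can appear dynamically, along the trajectory of the optimization algorithm.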

This talk is based on joint works with Reza Gheissari (Northwestern), Jiaoyang Huang (Wharton), Aukosh Jagannath (Waterloo), and on joint works with Cedric Gerbelot (Courant) and Vanessa Piccolo (EPFL).

Peter Glynn

Marcel Neuts Lecture

Wednesday, July 2nd at 9 am

Peter W. Glynn is the Thomas Ford Professor in the Department of Management Science and Engineering (MS&E) at Stanford University, and also holds a courtesy appointment in the Department of Electrical Engineering. He received his Ph.D. in Operations Research from Stanford University in 1982. He then joined the faculty of the University of Wisconsin at Madison, where he held a joint appointment between the Industrial Engineering Department and Mathematics Research Center, and courtesy appointments in Computer Science and Mathematics. In 1987, he returned to Stanford, where he joined the Department of Operations Research. From 1999 to 2005, he served as Deputy Chair of the Department of Management Science and Engineering, and was Director of Stanford's Institute for Computational and Mathematical Engineering from 2006 until 2010. He served as Chair of MS&E from 2011 through 2015. He is a Fellow of INFORMS and a Fellow of the Institute of Mathematical Statistics, and was an IMS Medallion Lecturer in 1995, a Lunteren Lecturer in 2007, the INFORMS Markov Lecturer in 2014, an Infosys-ICTS Turing Lecturer in 2019, and gave a Titan of Simulation talk at the 2019 Winter Simulation Conference. He was co-winner of the Outstanding Publication Awards from the INFORMS Simulation Society in 1993, 2008, and 2016, was a co-winner of the Best (Biennial) Publication Award from the INFORMS Applied Probability Society in 2009, was the co-winner of the John von Neumann Theory Prize from INFORMS in 2010, and gave the INFORMS Philip McCord Morse Lecture in 2020. In 2012, he was elected to the National Academy of Engineering, and in 2021 he received the Lifetime Professional Achievement Award of the INFORMS Simulation Society. He was Founding Editor-in-Chief of Stochastic Systems and served as Editor-in-Chief of Journal of Applied Probability and Advances in Applied Probability from 2016 to 2018. His research interests lie in simulation, computational probability, queueing theory, statistical inference for stochastic processes, and stochastic modeling.

To find out more about Peter's research and activities, you may visit his website here.

 

Title

Computation for Large Markov Chains via Numerical Linear Algebra

Abstract

Marcel Neuts is known, in part, for his early contributions to the use of numerical linear algebra as a vehicle for the analysis of stochastic models, with particular emphasis on exploiting the special structure present in matrix-geometric models. In this talk, I will describe recent theoretical progress and computational developments for Markov chains that build on numerical linear algebra as a foundation. The first part of the talk discusses the fact that when a state space truncation method implies convergence of stationary distributions, it automatically also implies convergence of many other expectations and probabilities, so that convergence of stationary distributions is indeed the central question when truncating a state space. This has connections to stationary distribution interchange questions for sequences of Markov chains and processes. We then turn to the difference between a priori and a posteriori error bounds for state space truncation, and discuss recent work on a posteriori error estimates that build on the existence of known stochastic Lyapunov functions and on excursion representations for stationary distributions. In the third part of the talk, we discuss how one can use simulation to estimate the contributions from excursions outside the truncation set, thereby eliminating the need for error bounds related to the truncation. This class of algorithms, known as COSIMLA, optimally combines simulation and linear algebra.
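As a concrete, much-simplified illustration of the basic truncation-plus-linear-algebra step underlying this program (this sketch is ours, not the COSIMLA algorithm; the function name truncated_stationary, the augmentation choice, and the toy chain are assumptions made for the example), one can restrict the transition matrix to a finite truncation set, return the lost probability mass to a state inside the set, and solve the stationary equations numerically:

```python
# Minimal sketch: approximate the stationary distribution of a Markov chain
# by truncating to a finite state set and solving pi P = pi, sum(pi) = 1.
# Illustration only; not the algorithms discussed in the talk.
import numpy as np

def truncated_stationary(P_trunc):
    """Stationary distribution of a truncated, row-augmented Markov chain.

    P_trunc : (n, n) substochastic matrix of transition probabilities
              restricted to the truncation set; the missing mass in each row
              is returned to the last state (one simple augmentation choice).
    """
    P = P_trunc.copy()
    P[:, -1] += 1.0 - P.sum(axis=1)          # redirect lost mass into the set
    n = P.shape[0]
    # Stationarity pi (P - I) = 0, together with the normalization sum(pi) = 1.
    A = np.vstack([(P - np.eye(n)).T, np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# Toy chain: a random walk on the nonnegative integers that steps up with
# probability 0.4 and down with probability 0.6 (reflecting at 0),
# truncated to the states {0, ..., 49}.
n, p_up, p_down = 50, 0.4, 0.6
P = np.zeros((n, n))
for i in range(n):
    P[i, max(i - 1, 0)] += p_down            # step down, reflecting at 0
    if i + 1 < n:
        P[i, i + 1] += p_up                  # step up; mass to state 50 is truncated
pi = truncated_stationary(P)
```

The a posteriori error bounds and the simulation-based excursion corrections described in the talk address precisely how far such a truncated answer can lie from the stationary distribution of the original, untruncated chain.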

This talk is based on joint work with Alex Infanger and Zeyu Zheng.

René Carmona

IMS Medallion Lecture

Wednesday, July 2nd at 4:30 pm

René Carmona, Ph.D., is the Paul M. Wythes ’55 Professor of Engineering and Finance at Princeton University in the Department of Operations Research and Financial Engineering. He is an associate member of the Department of Mathematics, a member of the Program in Applied and Computational Mathematics, and Director of Graduate Studies of the Bendheim Center for Finance, where he oversees the Master in Finance program. He obtained a Ph.D. in Probability from Marseille University, where he held his first academic job. After time spent at Cornell and a couple of stints at Princeton, he moved to the University of California at Irvine in 1981 and eventually to Princeton University in 1995.

Dr. Carmona has been a Fellow of the Institute of Mathematical Statistics (IMS) since 1984, of the Society for Industrial and Applied Mathematics (SIAM) since 2009, and of the American Mathematical Society (AMS) since 2020. He is the founding chair of the SIAM Activity Group on Financial Mathematics and Engineering, and a founding editor of the Electronic Journal of Probability, the Electronic Communications in Probability, and the SIAM Journal on Financial Mathematics. He is on the editorial boards of several peer-reviewed journals and book series, and he has served on the scientific boards of several research institutes, most recently the NSF Institute for Mathematical and Statistical Innovation (IMSI) in Chicago.

His publications include over one hundred fifty articles and eleven books in probability, statistics, mathematical physics, signal analysis and financial mathematics. He also developed computer programs for teaching and research. He has worked on the commodity and energy markets as well as the credit markets, and he is recognized as a leading researcher and consultant in these areas. Over the last decade his research focused on the development of a probabilistic approach to Mean Field Games and Mean Field Control. His two-volume book on the subject, co-authored with F. Delarue, was the recipient of the J.L. Doob Prize awarded every three years by the American Mathematical Society.

In 2020 he was awarded a competitive ARPA-E grant under the Performance-based Energy Resource Feedback, Optimization and Risk Management (PERFORM) program, and together with colleagues from Princeton University, U.C. Santa Barbara, and Scoville Risk Partners, he leads the research team Operational Risk Financialization of Electricity Under Stochasticity (ORFEUS).

To find out more about René's research and activities, you may visit his website here.

 

Title

Optimal Control of Conditional Processes: Old and New

Abstract

In this talk, we consider the conditional control problem introduced by P.L. Lions in his lectures at the Collège de France in November 2016. As originally stated, the problem does not fit in the usual categories of control problems considered in the literature, so its solution requires new ideas, if not new technology. In his lectures, Lions emphasized some of the major differences with the analysis of classical stochastic optimal control problems and, in so doing, raised the question of the possible differences between the value functions resulting from optimization over the class of Markovian controls as opposed to the general family of open-loop controls. While the equality of these values is accepted as a "folk theorem" in the classical theory of stochastic control, the fact that the objective function here depends strongly on the past history of the controlled trajectories of the system is a strong argument in favor of a gap between the optimization results over these two classes of control processes. The goal of the talk is to elucidate this quandary and provide responses to Lions' original conjecture, both in the case of "soft killing" (R.C. - Laurière - Lions, Illinois Journal of Mathematics) and in the case of hard killing (R.C. - Lacker, arXiv). We shall also present a new form of the Fokker-Planck-Kolmogorov equation for the evolution of the conditional distributions, and discuss the challenges posed by this non-local, non-linear PDE in Wasserstein space.

David Gamarnik

Thursday, July 3rd at 9 am

David Gamarnik is a Nanyang Technological University Professor of Operations Research in the Operations Research and Statistics Group at the Sloan School of Management of the Massachusetts Institute of Technology (MIT). He received a B.A. in Mathematics from New York University in 1993 and a Ph.D. in Operations Research from MIT in 1998. He was then a research staff member at the IBM T.J. Watson Research Center before joining MIT in 2005.

His research interests include discrete probability, optimization and algorithms, quantum computing, statistics and machine learning, stochastic processes, and queueing theory. He is a fellow of the American Mathematical Society, the Institute of Mathematical Statistics, and the Institute for Operations Research and the Management Sciences. He was a recipient of the Erlang Prize and the Best Publication Award from the Applied Probability Society of INFORMS, and was a finalist in the Franz Edelman Prize competition of INFORMS. He has co-authored a textbook on queueing theory, and currently serves as an area editor for the Mathematics of Operations Research journal. In the past, he served as an area editor of the Operations Research journal, and as an associate editor of the Mathematics of Operations Research, the Annals of Applied Probability, Queueing Systems, and the Stochastic Systems journals.

To find out more about David's research and activities, you may visit his website here.

 

Title 

Turing in the Shadows of Nobel and Abel: An Algorithmic Story Behind Two Recent Prizes

Abstract

The 2021 Nobel Prize in physics was awarded to Parisi "for the discovery of the interplay of disorder and fluctuations in physical systems from atomic to planetary scales." The 2024 Abel Prize in mathematics was awarded to Talagrand "for his groundbreaking contributions to probability theory and functional analysis, with outstanding applications in mathematical physics and statistics." What remained largely absent from the popular descriptions of these prizes, however, is the profound impact the work of both individuals has had on the field of algorithms and computation. The ideas first developed by Parisi and his collaborators, relying on remarkably precise physics intuition, and later confirmed by Talagrand and others with no less remarkable mathematical techniques, have revolutionized the way we think algorithmically about optimization problems involving randomness.

In the talk we will highlight these developments and explain how the ideas pioneered by Parisi and Talagrand have led to a remarkably precise characterization of which optimization problems admit fast algorithms and which do not, and, furthermore, why this characterization holds true. The talk will largely follow a recent general-audience article written by the speaker for the AMS Notices.