Schedule

Slides are available by clicking on a talk title.

Monday, May 2nd

09:30 Bernoulli Center opens, pick up your badge

10:00 Opening remarks and introductions

10:30 Lene Favrholdt and Joan Boyar, Online Algorithms with Advice

11:30 Coffee break

12:00 Spyros Angelopoulos, Online Computation with Untrusted and Noisy Advice

13:00 Lunch in FoodLab

14:30 Open problems presentations

15:30 Coffee break

16:00 Open problems collaboration

Tuesday, May 3rd

09:30 Nicole Megow, Online Routing and Network Design with Predictions

10:30 Coffee break

11:00 Open problems collaboration

13:00 Lunch in FoodLab

14:30 Ashok Cutkosky, Online Learning with Hints

15:30 Coffee break

16:00 Sahil Singla, Bandit Algorithms for Multi-Stage Stochastic Optimization

17:00 Open problems collaboration

Wednesday, May 4th

09:30 Yossi Azar, Flow Time Scheduling with Uncertain Processing Time

10:30 Coffee break

11:00 Panel discussion with Sid Banerjee, Daniel Dadush, Anupam Gupta, Kim Larsen, and Seffi Naor

12:00 Open problems collaboration

13:00 Lunch in FoodLab

14:15 Excursion to Chateau de Chillon

19:00 Workshop dinner, Le Vieux-Lausanne restaurant

Thursday, May 5th

09:30 Anupam Gupta, Learning Techniques for Online Algorithms

10:30 Coffee break

11:00 Open problems collaboration

13:00 Lunch in FoodLab

14:30 Seffi Naor, Online Virtual Machine Allocation with Predictions

15:30 Coffee break

16:00 Open problems collaboration

Friday, May 6th

09:30 Debmalya Panigrahi, Online Algorithms with Multiple Advice

10:30 Coffee break

11:00 Antonios Antoniadis, Learning-Augmented Algorithms via Algorithm-Switching

12:00 Sid Banerjee, Constant Regret via Model-Predictive Control

13:00 Lunch in FoodLab

Abstracts

Monday, 10:30, Joan Boyar and Lene Favrholdt, Online Algorithms with Advice
We present the topic of online algorithms with advice, sometimes called advice complexity, along with its relation to online algorithms with predictions. Since the main purpose of advice complexity can be viewed as giving general lower bounds that apply to many types of advice, we present a lower bound technique using reductions. We also present some problems where previous online algorithms with advice have been the basis of online algorithms with untrusted advice. We end with a generalization: advice has also been considered in connection with priority algorithms, which are not online but are used to model greedy algorithms.
Monday, 12:00, Spyros Angelopoulos, Online Computation with Untrusted and Noisy Advice
In the last decade, a large number of online problems have been studied under the advice complexity model, in which the algorithm is enhanced by some error-free, and often overly powerful, information concerning its input. In this presentation I will survey some recent contributions towards more realistic models that consider advice as information elicited by imperfect binary queries. In particular I will discuss the untrusted advice model, in which the focus is on Pareto tradeoffs between the consistency and the robustness of the algorithm, and the noisy advice model, in which the performance is evaluated in terms of the advice error. The presentation will highlight new theoretical results, from the point of view of both upper and lower bounds, as well as findings from the experimental evaluation of the algorithms.
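For readers new to the consistency/robustness tradeoff, here is a minimal sketch on ski rental, using the deterministic scheme of Purohit, Svitkina, and Kumar (NeurIPS 2018) rather than the specific algorithms surveyed in the talk; the numbers and the trust parameter lam are illustrative:

```python
# Ski rental with untrusted advice: buying costs b, renting costs 1 per day,
# and the advice predicts the number of skiing days. A small trust parameter
# lam in (0, 1] means the algorithm trusts the advice more.
import math

def buy_day(b, predicted_days, lam):
    """Day on which the algorithm buys (deterministic scheme of
    Purohit, Svitkina, and Kumar)."""
    if predicted_days >= b:          # advice says we ski a lot: buy early
        return math.ceil(lam * b)
    else:                            # advice says we ski little: buy late
        return math.ceil(b / lam)

def alg_cost(b, actual_days, predicted_days, lam):
    d = buy_day(b, predicted_days, lam)
    return actual_days if actual_days < d else (d - 1) + b

def opt_cost(b, actual_days):
    return min(actual_days, b)

b, lam = 100, 0.5
# Accurate advice: the worst-case ratio approaches 1 + lam (consistency).
ratio_good = max(alg_cost(b, d, d, lam) / opt_cost(b, d) for d in range(1, 400))
# Adversarial advice: the ratio stays below roughly 1 + 1/lam (robustness).
ratio_bad = max(alg_cost(b, d, 400 - d, lam) / opt_cost(b, d) for d in range(1, 400))
print(ratio_good, ratio_bad)
```

Sweeping lam trades one guarantee against the other: lam close to 0 gives near-optimal performance on accurate advice but weak robustness, and lam = 1 recovers the classical 2-competitive break-even algorithm.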
Tuesday, 09:30, Nicole Megow, Online Routing and Network Design with Predictions
We initiate the study of learning-augmented algorithms for online routing problems such as the Online Traveling Salesperson Problem and the Online Dial-a-Ride Problem, where (transportation) requests arrive over time as locations in a metric space and a server has to be moved to serve all requests. We propose a novel prediction model and a general framework for deriving learning-augmented algorithms using known online algorithms as a “black box”. Thereby, we improve upon known worst-case barriers when given quite accurate predictions, at the cost of slightly increased worst-case bounds when the predictions are of arbitrary quality.

As a key contribution, we introduce a novel error measure for input predictions based on the minimum cost edge cover in a suitably defined graph. It generalizes previously defined error measures and, most importantly, it captures errors in (i) the locations of predicted requests, (ii) the release dates of predicted requests, and (iii) requests that were not predicted at all. While metric problems have been studied before, this seems to be the first error measure capturing a timing component. Further, it captures errors in the location better than previously proposed measures. By defining the edge costs for the underlying edge cover problem properly, this error measure can be applied to arbitrary metric problems with or without a timing aspect. In particular, we achieve refined results for previously studied network design problems such as online Steiner tree and facility location.

Joint work with Giulia Bernardini, Alexander Lindermayr, Alberto Marchetti-Spaccamela, Leen Stougie, and Michelle Sweering.
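The edge-cover idea can be illustrated on a toy instance; the line metric, the specific edge cost (location distance plus release-date difference), and the brute-force solver below are my own simplifications, not the construction from the paper:

```python
# Toy edge-cover prediction error: every predicted and every actual request
# must be covered by at least one edge, so an unpredicted request forces an
# extra (costly) edge into the cover.
from itertools import combinations

def edge_cover_error(predicted, actual):
    """Minimum-cost edge cover between predicted and actual requests,
    computed by brute force (fine for tiny instances only)."""
    def cost(p, r):
        (pl, pt), (rl, rt) = p, r
        return abs(pl - rl) + abs(pt - rt)  # location error + timing error
    edges = [(i, j, cost(p, r)) for i, p in enumerate(predicted)
                                for j, r in enumerate(actual)]
    best = float("inf")
    for k in range(1, len(edges) + 1):
        for subset in combinations(edges, k):
            covered_p = {i for i, _, _ in subset}
            covered_a = {j for _, j, _ in subset}
            if len(covered_p) == len(predicted) and len(covered_a) == len(actual):
                best = min(best, sum(c for _, _, c in subset))
    return best

# (location, release date) pairs; the third actual request was not predicted.
predicted = [(0.0, 1.0), (5.0, 2.0)]
actual    = [(0.5, 1.0), (5.0, 3.0), (5.5, 3.0)]
print(edge_cover_error(predicted, actual))
```

Note how all three error sources from the abstract show up: the 0.5 location offset of the first request, the one-unit release-date error of the second, and the extra edge needed to cover the unpredicted third request.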
Tuesday, 14:30, Ashok Cutkosky, Online Learning with Hints
Online linear optimization is a popular framework for designing iterative learning algorithms. Typical online linear optimization algorithms consider purely adversarial cost vectors, but in reality we may be able to partially predict costs ahead of time. In this talk, I will discuss some recent work on how to incorporate hints into online optimization algorithms. We consider a setting in which we have some prediction or “hint” about each cost vector available ahead of time that may or may not be accurate. The goal is to design algorithms that improve regret from the typical sqrt(T) to only log(T) when the hints are accurate, while gracefully decaying to standard bounds when the hints are inaccurate. Accuracy is measured by correlation: a hint vector is accurate if it is well-correlated with the corresponding cost vector. These results improve prior work on optimistic online learning: the regret bound is better, and the condition for being an accurate hint is weaker. Surprisingly, the same result holds (in expectation) even if we are only allowed to look at sqrt(T) hints.
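As a rough illustration of how a hint enters an update rule, here is a standard optimistic online gradient descent sketch on the unit ball (my simplification: this is the classical optimistic update, not the talk's algorithms, and its guarantees are weaker than the log(T) results described above):

```python
# Optimistic OGD on the unit ball for online linear optimization: before
# paying for round t, the iterate takes one extra step along the hint h_t.
import math, random

def project(x):
    """Project onto the Euclidean unit ball."""
    n = math.sqrt(sum(v * v for v in x))
    return [v / n for v in x] if n > 1 else x

def optimistic_ogd(costs, hints, eta):
    d = len(costs[0])
    y = [0.0] * d
    total = 0.0
    for c, h in zip(costs, hints):
        x = project([yi - eta * hi for yi, hi in zip(y, h)])  # peek at hint
        total += sum(xi * ci for xi, ci in zip(x, c))          # pay <x, c>
        y = project([yi - eta * ci for yi, ci in zip(y, c)])   # real step
    return total

random.seed(0)
T = 500
costs = [[random.gauss(0.3, 1.0), random.gauss(-0.2, 1.0)] for _ in range(T)]
good_hints = costs                # perfectly correlated hints
zero_hints = [[0.0, 0.0]] * T     # no information: plain OGD

# Regret against the best fixed unit vector in hindsight.
s = [sum(c[i] for c in costs) for i in range(2)]
opt = -math.sqrt(s[0] ** 2 + s[1] ** 2)
regret_hinted = optimistic_ogd(costs, good_hints, 0.1) - opt
regret_plain = optimistic_ogd(costs, zero_hints, 0.1) - opt
print(regret_hinted, regret_plain)
```

With perfect hints the optimistic term cancels the cost of each round, so the hinted run incurs much less regret than the plain one; with zero hints the update degenerates to ordinary OGD, which is the "graceful decay" behavior in miniature.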
Tuesday, 16:00, Sahil Singla, Bandit Algorithms for Multi-Stage Stochastic Optimization
A standard model in Stochastic/Bayesian Optimization is that we are given probability distributions of n independent random variables and the goal is to design a policy (a multi-stage algorithm) to optimize the expected objective value. Examples include Prophet Inequalities, Pandora’s Box, Stochastic Probing, and Fixed-Price Auctions. In this work we want to relax the assumption that the underlying distributions are known to the algorithm.

Inspired by the field of Multi-Armed Bandits, we study stochastic problems that are played over T days: on each day t our algorithm plays some policy x_t and receives partial feedback on the performance of x_t for the underlying unknown-but-fixed distributions. The goal is to minimize the expected regret, which is the difference between the expected value of the optimal algorithm that knows the underlying distributions and the average value of our algorithm (over T days) that learns the distributions from partial feedback.
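In symbols (notation mine, mirroring the verbal definition above), with OPT the value of the optimal policy that knows the distributions and v(x_t) the value of the policy played on day t:

```latex
\mathrm{Regret}(T) \;=\; \mathbb{E}\left[\mathrm{OPT}\right] \;-\; \frac{1}{T}\sum_{t=1}^{T} \mathbb{E}\left[v(x_t)\right]
```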
Wednesday, 09:30, Yossi Azar, Flow Time Scheduling with Uncertain Processing Time
We consider the problem of online scheduling on a single machine to minimize unweighted and weighted flow time. The existing algorithms for these problems require exact knowledge of the processing time of each job. This assumption is crucial, as even a slight perturbation of the processing times would lead to a polynomial competitive ratio. However, this assumption very rarely holds in real-life scenarios. We present a competitive algorithm (the competitive ratio is a function of the distortion) for unweighted flow time that does not require knowledge of the distortion in advance. For weighted flow time we present competitive algorithms, but in this case we need to know (an upper bound on) the distortion in advance.

This is joint work with Stefano Leonardi and Noam Touitou based on papers appearing in STOC 2021 and SODA 2022.
Thursday, 09:30, Anupam Gupta, Learning Techniques for Online Algorithms
The use of techniques from machine learning has been very fruitful for online algorithm design. In this talk I will discuss two such problems (load balancing and, time permitting, set cover) in the online-with-a-sample model, where these ideas lead to algorithms that use a (small) sample of the input to get better-than-worst-case algorithms.

These are joint works with C.J. Argue, Greg Kehne, Roie Levin, and Chris Seiler.
Thursday, 14:30, Seffi Naor, Online Virtual Machine Allocation with Predictions
The cloud computing industry has grown rapidly over the last decade, and with this growth there is a significant increase in demand for compute resources. Demand is manifested in the form of Virtual Machine (VM) requests, which need to be assigned to physical machines in a way that minimizes resource fragmentation and efficiently utilizes the available machines. This problem can be modeled as a dynamic version of the bin packing problem with the objective of minimizing the total usage time of the bins (physical machines). Earlier works on dynamic bin packing assumed that no knowledge is available to the scheduler, and later works studied models in which the lifetime/duration of each “item” (VM in our context) is available to the scheduler. This extra information was shown to exponentially improve the achievable competitive ratio.

Motivated by advances in Machine Learning that provide good estimates of workload characteristics, we study the effect of having extra information regarding future (total) demand. In the cloud context, since demand is an aggregate over many VM requests, it can be predicted with high accuracy (e.g., using historical data). We show that the competitive factor can be dramatically improved by using this additional information; in some cases, we achieve constant competitiveness, or even a competitive factor that approaches 1. Along the way, we design new offline algorithms with improved approximation ratios for the dynamic bin-packing problem.

Joint work with: Niv Buchbinder, Yaron Fairstein, Konstantina Mellou, and Ishai Menache.
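To make the objective concrete, here is a toy First-Fit simulation of dynamic bin packing under the total-usage-time objective (illustrative only: it uses no predictions, and it charges each bin from its first arrival to its last departure, which matches true usage time when, as here, the bin never empties in between):

```python
# Dynamic bin packing where the cost is total bin (machine) usage time,
# not the number of bins opened.
def total_usage_time(items, capacity=10):
    """items: list of (arrival, departure, size). Returns the summed
    usage time over all bins opened by First-Fit in arrival order."""
    bins = []  # each bin is the list of items ever assigned to it
    for arr, dep, size in sorted(items):
        placed = False
        for b in bins:
            # load contributed by items still present at this arrival
            load = sum(s for a, d, s in b if a <= arr < d)
            if load + size <= capacity:
                b.append((arr, dep, size))
                placed = True
                break
        if not placed:
            bins.append([(arr, dep, size)])
    cost = 0.0
    for b in bins:
        start = min(a for a, _, _ in b)
        end = max(d for _, d, _ in b)
        cost += end - start  # bin is "on" from first arrival to last departure
    return cost

# (arrival, departure, size): the short second item forces a second bin open.
items = [(0, 10, 6), (1, 3, 5), (2, 12, 4)]
print(total_usage_time(items))
```

The example shows why item durations matter so much for this objective: a bin opened for one short-lived item costs little, while any assignment decision that keeps a bin alive longer is charged for the whole extra interval.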
Friday, 09:30, Debmalya Panigrahi, Online Algorithms with Multiple Advice
The bulk of the literature on online algorithms with ML advice focuses on a single predictor, which may or may not be correct. But, in reality, multiple ML models are often used to make future predictions for the same application, and their predictions/advice can differ from one another. In this case, how should an online algorithm choose among these different suggestions? This talk will focus on models, problems, and algorithms to address this question. The talk will include some recent results with Keerti Anand, Rong Ge, and Amit Kumar, and survey older results including an ICML ’19 paper with Sreenivas Gollapudi.
Friday, 11:00, Antonios Antoniadis, Learning-Augmented Algorithms via Algorithm-Switching
Learning-augmented algorithms form an emerging area that attempts to combine the effectiveness of machine learning approaches on practically relevant inputs with the performance guarantees provided by classical worst-case analysis of online algorithms. For many problems, machine learning approaches can be readily applied to generate predictions about the structure of the input. An algorithm can then be designed to incorporate such predictions into its decision process and obtain improved performance as long as these predictions are sufficiently accurate. However, in some cases the predictions may be highly inaccurate, and decision-making systems based on such predictors need to achieve decent performance in that case too.

Several algorithmic results in the area can, at a high level, be interpreted as a meta-algorithm for “combining” a number of different algorithms at runtime. The individual algorithms usually trust the predictions to a different degree, ranging from blind trust to completely ignoring them. How these algorithms are combined generally depends on the particular problem at hand and how well each individual algorithm performed on the input encountered so far. In this talk, we present different variations of this approach from the literature, highlight differences between these variations, and discuss what properties of problems and algorithms were crucial in the analysis of each particular use case.
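A schematic sketch of the switching idea, with my own simplifications: state and switching costs are ignored (real analyses must account for them), the threshold 1 + eps is illustrative, and the two candidate algorithms are abstracted as per-step cost streams.

```python
# Algorithm-switching meta-algorithm: follow the prediction-trusting
# algorithm while it stays competitive with the robust one, and switch
# once its cumulative cost exceeds (1 + eps) times the robust cost.
def switching_costs(trust_costs, robust_costs, eps=0.5):
    """Per-step costs paid by the meta-algorithm, given the per-step costs
    each candidate algorithm would pay on the same input."""
    paid = []
    cum_trust = cum_robust = 0.0
    following_trust = True
    for ct, cr in zip(trust_costs, robust_costs):
        cum_trust += ct
        cum_robust += cr
        if following_trust and cum_trust > (1 + eps) * cum_robust:
            following_trust = False  # predictions look bad: fall back
        paid.append(ct if following_trust else cr)
    return paid

# Good predictions: the trusting algorithm stays cheap, so we never switch.
good = switching_costs([1, 1, 1, 1], [2, 2, 2, 2])
# Bad predictions: the trusting algorithm blows up and we fall back.
bad = switching_costs([1, 10, 10, 10], [2, 2, 2, 2])
print(good, bad)
```

The total cost is then at most about 1 + eps times the better of the two candidates in each regime, which is the consistency/robustness behavior the meta-algorithms in the talk achieve in more refined forms.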
Friday, 12:00, Sid Banerjee, Constant Regret via Model-Predictive Control
I will consider a large class of finite-horizon control problems, where we see a random stream of arrivals, need to select actions in each step, and where the final objective depends only on the aggregate type-action counts; this includes many widely studied control problems, including online resource allocation, dynamic pricing, generalized assignment, online bin packing, and bandits with knapsacks. For such settings, I will introduce a simple policy based on predicting the actions of an offline controller, and provide general conditions under which these algorithms achieve constant regret, i.e., additive loss compared to the hindsight optimal solution that is independent of the horizon and state space. These results stem from a novel (but elementary) coupling argument, which may prove useful for many other questions in online decision-making. Time permitting, I will illustrate this by showing how we can use this technique to incorporate historical data (and achieve constant regret with as little as a single data trace), as well as how to trade off between efficiency and fairness.