The event took place virtually (on gather.town) on March 14-15, 2022, two weeks before the ALT 2022 conference. Our lineup included Shipra Agrawal, Clément Canonne, Rachel Cummings, Costis Daskalakis, Sam Hopkins, Nicole Immorlica, Akshay Krishnamurthy, Jamie Morgenstern, Aaditya Ramdas, and Csaba Szepesvári. See the full schedule below.

Mentorship Workshop

Event Schedule

Surbhi Goel

Welcome Remarks. Surbhi Goel

Venue: Lounge (gather.town)

Nicole Immorlica

How-to Talk 1: Peer-Review. Nicole Immorlica

[Slides] [Notes]

Venue: Lounge (gather.town)

This session will cover the key aspects of peer review, including writing constructive reviews, bidding on papers, writing rebuttals, receiving reviews, and incorporating feedback into your paper.

Break.

Aaditya Ramdas

How-to Talk 2: Reading Papers. Aaditya Ramdas

[Slides]

Venue: Seminar Room (gather.town)

This session will cover topics such as the key questions to keep in mind while reading a paper and how to get the most out of a paper (with a quick glance, with limited time, or with a deep dive).

Mentoring Tables.

Venue: Lounge (gather.town)

This session is free time to socialize with senior members of the community as well as other participants. There will be tables with set topics, each with a senior member available to offer advice and answer your questions!

Shipra Agrawal

Ask Me Anything. Shipra Agrawal

Moderator: Thodoris Lykouris

Venue: Seminar Room (gather.town)

Participants will be encouraged to “ask anything” of our featured senior member of the learning theory community. The questions could be about choosing research directions, conducting interdisciplinary research, experiencing success and failure in research, inclusion and diversity, etc.

Break.

Akshay Krishnamurthy

Favorite Concept/Technique. Akshay Krishnamurthy

[Video]

Venue: Seminar Room (gather.town)

Title: Fundamentals of Tabular Reinforcement Learning

Abstract: Reinforcement learning (RL) involves an agent interacting with an unknown environment in a sequential fashion to accomplish some task. These problems require the agent to address several challenges, including credit assignment (attributing outcomes to decisions) and exploration (collecting information about the environment). In this talk I will introduce the Markov decision process (MDP), the standard model for the design and analysis of RL algorithms, and I will cover some of the basic results in RL theory. In particular, I will discuss the E³ ("Explicit Explore or Exploit") algorithm of Kearns and Singh, the first statistically efficient reinforcement learning method. The talk will be self-contained; no background knowledge of RL is necessary.
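
To make the MDP model concrete, here is a minimal sketch of value iteration, the basic planning primitive for tabular MDPs (this is not the E³ algorithm itself, and the two-state MDP below is a made-up example, not one from the talk):

```python
import numpy as np

# A toy 2-state, 2-action MDP (hypothetical numbers, for illustration only).
# P[s, a, s'] = transition probability, R[s, a] = expected reward.
P = np.array([
    [[0.9, 0.1], [0.2, 0.8]],   # transitions from state 0
    [[0.5, 0.5], [0.1, 0.9]],   # transitions from state 1
])
R = np.array([
    [1.0, 0.0],                 # rewards in state 0
    [0.0, 2.0],                 # rewards in state 1
])
gamma = 0.9                     # discount factor

# Value iteration: repeatedly apply the Bellman optimality operator
#   V(s) <- max_a [ R(s, a) + gamma * sum_{s'} P(s, a, s') V(s') ]
# until the values stop changing.
V = np.zeros(2)
for _ in range(1000):
    Q = R + gamma * P @ V       # Q[s, a]: value of taking a in s, then acting optimally
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new

policy = Q.argmax(axis=1)       # greedy policy with respect to the final Q
```

Since the Bellman operator is a gamma-contraction, the loop converges geometrically to the optimal value function; the interesting part of RL theory (and of E³) is doing this when P and R are unknown and must be learned from interaction.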

Rachel Cummings

Favorite Concept/Technique. Rachel Cummings

[Video]

Venue: Seminar Room (gather.town)

Title: Differential Privacy - State of the Art and Challenges

Abstract: Privacy concerns are becoming a major obstacle to using data in the way that we want. It's often unclear how current regulations should translate into technology, and the changing legal landscape surrounding privacy can cause valuable data to go unused. In this talk, we will explore differential privacy as a tool for providing strong privacy guarantees while still making use of potentially sensitive data. Differential privacy is a parameterized notion of database privacy that gives a mathematically rigorous worst-case bound on the maximum amount of information that can be learned about an individual's data from the output of a computation. In the past decade, the privacy community has developed algorithms that satisfy this privacy guarantee and allow for accurate data analysis in a wide variety of computational settings, including machine learning, optimization, statistics, and economics. This talk will first give an introduction to differential privacy and then survey recent advances and future challenges in the field.
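
As a small taste of the kind of algorithm the abstract describes, here is a sketch of the classical Laplace mechanism for releasing a count with epsilon-differential privacy (the dataset and parameter values are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def laplace_mechanism(true_value, sensitivity, epsilon):
    """Release true_value with Laplace noise of scale sensitivity / epsilon.

    For a query whose output changes by at most `sensitivity` (in L1 norm)
    when one individual's data changes, this release satisfies
    epsilon-differential privacy.
    """
    scale = sensitivity / epsilon
    return true_value + rng.laplace(loc=0.0, scale=scale)

# Example: a counting query has sensitivity 1, since adding or removing
# one person changes the count by at most 1.
data = [0, 1, 1, 0, 1, 1, 1]       # hypothetical binary attribute per individual
true_count = sum(data)
private_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
```

The noise is unbiased, so averaged over many hypothetical releases the output concentrates around the true count; a smaller epsilon means stronger privacy but larger noise.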

Social Hour

Venue: Lounge/Beach (gather.town)

This session will be a social gathering with virtual board games and dancing.

Ellen Vitercik

Welcome Remarks. Ellen Vitercik

Venue: Seminar Room (gather.town)

Sam Hopkins

How-to Talk 1: Reading Papers. Sam Hopkins

[Slides]

Venue: Seminar Room (gather.town)

This session will cover topics such as the key questions to keep in mind while reading a paper and how to get the most out of a paper (with a quick glance, with limited time, or with a deep dive).

Break.

Csaba Szepesvári

How-to Talk 2: Peer-Review. Csaba Szepesvári

[Slides]

Venue: Seminar Room (gather.town)

This session will cover the key aspects of peer review, including writing constructive reviews, bidding on papers, writing rebuttals, receiving reviews, and incorporating feedback into your paper.

Mentoring Tables.

Venue: Lounge (gather.town)

This session is free time to socialize with senior members of the community as well as other participants. There will be tables with set topics, each with a senior member available to offer advice and answer your questions!

Constantinos Daskalakis

Ask Me Anything. Constantinos Daskalakis

Moderator: Vidya Muthukumar

Venue: Lounge (gather.town)

Participants will be encouraged to “ask anything” of our featured senior member of the learning theory community. The questions could be about choosing research directions, conducting interdisciplinary research, experiencing success and failure in research, inclusion and diversity, etc.

Break.

Jamie Morgenstern

Favorite Concept/Technique. Jamie Morgenstern

[Video]

Venue: Seminar Room (gather.town)

Title: The i.i.d. Assumption

Abstract: In this talk, I'll give a brief overview of the "standard" assumption that training and test data are drawn i.i.d. from the same distribution, and why this assumption is often necessary to give the sharpest guarantees on the performance of ML algorithms. Unfortunately, in many applications of interest, data distributions change over time. I will then survey some of the common (and less common) approaches to handling shifting distributions, along with their strengths and weaknesses.

Clément Canonne

Favorite Concept/Technique. Clément Canonne

[Slides] [Video]

Venue: Seminar Room (gather.town)

Title: Concentration Inequalities

Abstract: A concentration inequality is, roughly speaking, a quantitative bound on "the probability that a random variable deviates from its expected behavior." They come in very handy in the analysis of randomized algorithms and data, and they pop up pretty much everywhere. However, it's often unclear which one to use, and whether another one could give a stronger bound (if only we knew it!). In this short tutorial, we will cover some of the usual suspects among concentration inequalities: Markov, Chebyshev, Hoeffding, Chernoff... Rather than going through an exhaustive list, we will focus on how they relate to each other, and give a few "tricks" for knowing which one to use (and where to look for fancier ones, if the ones at hand don't suffice).
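
The "which one is stronger?" question can be seen on a small example. The sketch below (a made-up illustration, not material from the talk) bounds the upper tail of the number of heads in 100 fair coin flips with Markov, Chebyshev, and Hoeffding, and compares them to the exact probability:

```python
import math

# Bound P(X >= a) for X = number of heads in n fair coin flips.
# Each inequality needs progressively stronger assumptions and,
# on this example, gives a progressively sharper bound.
n, a = 100, 70
mean = n / 2            # E[X]
var = n / 4             # Var[X]

markov = mean / a                                  # needs only X >= 0
chebyshev = var / (a - mean) ** 2                  # needs finite variance
hoeffding = math.exp(-2 * (a - mean) ** 2 / n)     # needs bounded summands

# Exact binomial tail probability, for reference.
exact = sum(math.comb(n, k) for k in range(a, n + 1)) / 2 ** n
```

Here Markov gives roughly 0.71, Chebyshev about 0.06, and Hoeffding about 3e-4, against a true tail probability of about 4e-5: the more structure an inequality exploits, the closer it gets to the truth.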

Social Hour

Venue: Lounge/Beach (gather.town)

This session will be a social gathering with virtual board games and dancing.

Team

Organizers

Surbhi Goel

Postdoctoral Researcher, Microsoft Research NYC

Thodoris Lykouris

Assistant Professor, MIT Sloan

Vidya Muthukumar

Assistant Professor, Georgia Tech ECE and ISyE

Ellen Vitercik

Postdoctoral Researcher, UC Berkeley