Watermarking language models

Watermarking language models is a challenging and important problem, one where theory and algorithms play an essential role. Rohith Kuditipudi guides us through some of the latest advances in the area. The provenance problem: Language models today can mass-produce fluent, human-like text. This reality poses unprecedented...

Incentivizing “Desirable” Effort in Strategic Classification

Today’s post, by Diptangshu Sen and Juba Ziani, is about strategic classification: the interesting setting where incentives and rational agents enter the learning process. Read on to learn more! A gentle introduction to strategic classification: Machine learning systems are ubiquitous in many aspects of our lives. In recent years,...

Testing Assumptions of Learning Algorithms

Today’s technical post is by Arsen Vasilyan. It focuses on the very exciting new “testable learning” framework he introduced with Rubinfeld in a 2023 paper. There’s been a flurry of work since then, so this is a good chance to catch up in case you’re behind! 1. The Goal: Learning...

Structure-Agnostic Causal Estimation

We have another new technical blog post, courtesy of Jikai Jin and Vasilis Syrgkanis, about the optimality of double machine learning for causal inference. An introduction to causal inference: Causal inference deals with the fundamental question of “what if”, trying to estimate or predict the counterfactual outcome that one does not directly observe....

One-Inclusion Graphs and the Optimal Sample Complexity of PAC Learning: Part 2

We’re back with the second blog post by Ishaq Aden-Ali, Yeshwanth Cherapanamjeri, Abhishek Shetty, and Nikita Zhivotovskiy, continuing their series on the optimal sample complexity of PAC learning. If you missed the first post, check it out here. In the last blog post, we saw the transductive model of learning, the one-inclusion graph (OIG) algorithm...

The Curious Landscape of Multiclass Learning

Welcome to the second installment of the Learning Theory Alliance blog post series! In case you missed the first installment, on calibration, check it out here. This time, Nataly Brukhim and Chirag Pabbaraju tell us about some exciting breakthrough results on multiclass PAC learning. The gold standard of learning...

Calibration for Decision Making: A Principled Approach to Trustworthy ML

The Learning Theory Alliance is starting a new initiative: invited blog posts highlighting recent technical contributions to the field. The goal of these posts is to share noteworthy results with the community in a more broadly accessible format than traditional research papers (i.e., self-contained and readable by a...

ALT Highlights – A Report on the First ALT Mentoring Workshop

Welcome to ALT Highlights, a series of blog posts spotlighting various happenings at the recent ALT 2021 conference, including plenary talks, tutorials, trends in learning theory, and more! To reach a broad audience, the series will be disseminated as guest posts on different blogs in machine learning and theoretical computer...