One-Inclusion Graphs and the Optimal Sample Complexity of PAC Learning: Part 2
We’re back with the second blog post by Ishaq Aden-Ali, Yeshwanth Cherapanamjeri, Abhishek Shetty, and Nikita Zhivotovskiy, continuing our discussion of the optimal sample complexity of PAC learning. If you missed the first post, check it out here.
In the last blog post, we saw the transductive model of learning, the one-inclusion graph (OIG) algorithm and how this led to optimal in-expectation bounds for learning VC classes. But, we also saw that the OIG algorithm itself does not achieve non-trivial high probability risk bounds. In this blog post, we will see how to modify the one-inclusion graph algorithm to achieve high probability bounds. Towards this, we will see a connection to online learning and a technique known as online-to-batch.
Further, we will connect this back to the notion of exchangeability which, as we mentioned before, will play a key role.
High Probability PAC Bounds
From the previous blog post, recall that the one-inclusion graph algorithm achieved optimal results in expectation over the training data. That is, for a class of VC dimension $d$, a realizable distribution $P$, and a training sample $S$ of $n$ i.i.d. examples, the algorithm's output $\hat{h}_S$ achieved
$$\mathbb{E}_{S \sim P^{n}}\left[\mathrm{err}_{P}(\hat{h}_{S})\right] \le \frac{d}{n+1}.$$
It is natural to ask whether this would imply optimal results in the PAC model. That is, when we require
$$\Pr_{S \sim P^{n}}\left[\mathrm{err}_{P}(\hat{h}_{S}) \le \varepsilon\right] \ge 1 - \delta,$$
where $\varepsilon$ and $\delta$ are the accuracy and confidence parameters. Unfortunately, we saw in the previous blog post that directly implementing the one-inclusion graph algorithm provably does not beat just applying Markov's inequality to the expected risk: from $\mathbb{E}\left[\mathrm{err}_P(\hat{h}_S)\right] \le \frac{d}{n+1}$, Markov's inequality only gives $\mathrm{err}_P(\hat{h}_S) \le \frac{d}{\delta(n+1)}$ with probability at least $1-\delta$, that is, $\varepsilon = O\!\left(\frac{d}{\delta n}\right)$.
However, we will now describe a general framework that enables transferring many of these same guarantees to the PAC setting as well. The framework is a variant of the online-to-batch technique to convert strong predictors in the online setting to the batch setting of PAC learning. A key aspect of this online-to-batch conversion is that rather than building on worst case online guarantees, we will need to rely on the average performance of the online algorithm in order to get meaningful PAC guarantees.
Online-to-batch: Deterministic Error Bound
To describe the online-to-batch framework, we will use the same notation as before, with $\mathcal{X}$ and $\mathcal{Y} = \{0,1\}$ being the input and label spaces, and $\mathcal{F}$ being a class of functions from $\mathcal{X}$ to $\mathcal{Y}$. In the online setting, a learner observes a sequence of samples $(x_1, y_1), \dots, (x_n, y_n)$ with the guarantee that the sequence is realizable with respect to $\mathcal{F}$, i.e. there exists a function $f^\star \in \mathcal{F}$ such that $y_t = f^\star(x_t)$ for all $t$. At each time $t$, the task of the learner is to predict the label of $x_t$ given only the data points up to time $t-1$ (and $x_t$ itself), and the learner is rewarded if it gets it right and penalized if the prediction is incorrect. That is, a learner $\mathcal{A}$, producing predictions $\hat{y}_t = \mathcal{A}\big((x_1, y_1), \dots, (x_{t-1}, y_{t-1}), x_t\big)$, aims to minimize the number of mistakes:
$$\sum_{t=1}^{n} \mathbf{1}\left\{\hat{y}_{t} \neq y_{t}\right\}.$$
Note that a key difference between the online and the PAC setting is that $(x_t, y_t)$ can depend fully on the predictions of the learner in the past and on the previous examples $(x_1, y_1), \dots, (x_{t-1}, y_{t-1})$. This setting corresponds to one in which an adversary, at each time $t$, picks $(x_t, y_t)$ with full knowledge of the entire history of the interaction so far, potentially with the aim of maximizing the number of mistakes made by the learner. A learner that achieves low error in this setting learns the class in a very strong sense.
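To make this interaction concrete, here is a minimal sketch of the protocol (our own illustration; representing a learner as a function of the history and the current point is just a notational choice, not something from the post or the paper):

```python
def run_online_protocol(learner, stream):
    """One pass of the online protocol: at each round the learner predicts the label
    of x_t from the history alone, then the true label y_t is revealed and the
    learner is charged for a mistake."""
    mistakes = 0
    history = []
    for x, y in stream:
        y_hat = learner(history, x)   # prediction may only use past examples and x_t
        mistakes += int(y_hat != y)   # 0/1 loss on this round
        history.append((x, y))        # the labelled example is then revealed
    return mistakes

# A realizable stream for the (hypothetical) target f*(x) = 1{x >= 0.5},
# fed to an intentionally naive learner that always predicts 1.
stream = [(0.1, 0), (0.7, 1), (0.4, 0), (0.9, 1)]
print(run_online_protocol(lambda history, x: 1, stream))  # -> 2 mistakes
```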
Suppose now that we have access to a good prediction strategy for the online setting. Formally, a strategy $\mathcal{A}$ that, for any realizable sequence, satisfies for some deterministic and fixed $M_n$:
$$\sum_{t=1}^{n} \mathbf{1}\left\{\hat{y}_{t} \neq y_{t}\right\} \le M_{n}.$$
Can we now use this algorithm to derive one for the PAC setting? To see how such results transfer to the PAC setting, consider the simpler setting where the examples $(x_1, y_1), \dots, (x_{n+1}, y_{n+1})$ are such that the $x_t$ are drawn i.i.d. and labelled as $y_t = f^\star(x_t)$ for some function $f^\star \in \mathcal{F}$; we write $P$ for the resulting distribution of $(x_t, y_t)$ and $\mathrm{err}_P(f) := \Pr_{(x,y) \sim P}\left[f(x) \neq y\right]$ for the risk. Now, suppose we were to run our algorithm on this randomly generated sequence and obtain a series of predictors $\hat{f}_1, \dots, \hat{f}_n$, where $\hat{f}_t$ denotes the prediction rule the learner uses at time $t+1$, after seeing the first $t$ examples. How would these perform on the true underlying distribution $P$? As a first attempt, consider the average risk of these classifiers, where we use the fact that $\mathcal{A}$ is a strong online predictor:
$$\frac{1}{n}\sum_{t=1}^{n}\mathrm{err}_{P}(\hat{f}_{t}) = \underbrace{\frac{1}{n}\sum_{t=1}^{n}\Big(\mathrm{err}_{P}(\hat{f}_{t}) - \mathbf{1}\{\hat{f}_{t}(x_{t+1}) \neq y_{t+1}\}\Big)}_{\text{Generalization Term}} \;+\; \frac{1}{n}\sum_{t=1}^{n}\mathbf{1}\{\hat{f}_{t}(x_{t+1}) \neq y_{t+1}\} \;\le\; \text{Generalization Term} + \frac{M_{n+1}}{n}.$$
The first term above corresponds to the generalization error. Given the above decomposition, we are left with two issues. The first is that the above only bounds the average risk of the $\hat{f}_t$, whereas we want a single predictor. The second is obtaining a bound on the Generalization Term. As we will see, the first issue is easy to resolve. For the second, it is not immediately clear why the Generalization Term should be small. It certainly is in expectation: since $(x_{t+1}, y_{t+1})$ is independent of $\hat{f}_t$, evaluating $\hat{f}_t$ on $(x_{t+1}, y_{t+1})$ is an unbiased estimate of its risk. But why should it concentrate? In the expression, each term in the sum is the difference between the population risk of $\hat{f}_t$ and its loss evaluated on a single fresh sample drawn from the distribution $P$. Since the sample at each step is a fresh independent draw, this martingale structure can be used to establish the concentration of the Generalization Term.
Lemma: Let $Z_1, \dots, Z_n$ be a sequence of $[0,1]$-valued random variables adapted to a filtration $\mathcal{F}_0 \subseteq \mathcal{F}_1 \subseteq \dots \subseteq \mathcal{F}_n$. Then, we have:
$$\sum_{t=1}^{n} \mathbb{E}\left[Z_{t} \mid \mathcal{F}_{t-1}\right] \le 2\sum_{t=1}^{n} Z_{t} + 4\log(1/\delta)$$
with probability at least $1-\delta$.
As a consequence, applying the lemma to $Z_t = \mathbf{1}\{\hat{f}_t(x_{t+1}) \neq y_{t+1}\}$ with $\mathcal{F}_t$ the $\sigma$-field generated by the first $t+1$ examples (so that $\mathbb{E}[Z_t \mid \mathcal{F}_{t-1}] = \mathrm{err}_P(\hat{f}_t)$), we get:
$$\frac{1}{n}\sum_{t=1}^{n}\mathrm{err}_{P}(\hat{f}_{t}) \le \frac{2 M_{n+1} + 4\log(1/\delta)}{n}$$
with probability at least $1-\delta$. Hence, beyond the average risk of the classifiers being small in expectation, it is also small with high probability. To construct a single predictor from the $\hat{f}_t$, we can simply take the majority vote $\hat{f}_{\mathrm{maj}}(x) := \mathrm{Maj}\big(\hat{f}_1(x), \dots, \hat{f}_n(x)\big)$. To see why $\hat{f}_{\mathrm{maj}}$ has small risk, observe the following:
$$\mathrm{err}_{P}(\hat{f}_{\mathrm{maj}}) \le \frac{2}{n}\sum_{t=1}^{n}\mathrm{err}_{P}(\hat{f}_{t}),$$
as the majority classifier only makes an error on a point when more than half of the $\hat{f}_t$ make an error on that point. This online-to-batch conversion was first discovered by Nick Littlestone a few decades ago [Lit89].¹ Given the above result, we would be done if there existed good prediction strategies for the online setting. Unfortunately, this is not the case.
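Before turning to why good worst-case online strategies need not exist, here is a minimal, self-contained sketch of the conversion just described; a simple midpoint learner for one-dimensional thresholds stands in for a generic online learner, and the class and function names are our own rather than from [Lit89]:

```python
import random
from collections import Counter

class ConsistentThresholdLearner:
    """An online learner for 1-D thresholds f_theta(x) = 1{x >= theta}, used purely
    as a stand-in: it predicts with the midpoint of the set of thresholds consistent
    with the examples seen so far."""

    def __init__(self):
        self.lo, self.hi = 0.0, 1.0      # current version space for theta is (lo, hi]

    def snapshot(self):
        """Freeze the current hypothesis (this plays the role of one of the f_t's)."""
        theta = (self.lo + self.hi) / 2
        return lambda x: int(x >= theta)

    def update(self, x, y):
        if y == 1:
            self.hi = min(self.hi, x)    # the true theta must satisfy theta <= x
        else:
            self.lo = max(self.lo, x)    # the true theta must satisfy theta > x

def online_to_batch(learner, sample):
    """Run the online learner over the i.i.d. sample, collect the intermediate
    hypotheses f_1, ..., f_n, and return their pointwise majority vote."""
    hypotheses = []
    for x, y in sample:
        learner.update(x, y)
        hypotheses.append(learner.snapshot())
    return lambda x: Counter(h(x) for h in hypotheses).most_common(1)[0][0]

# A tiny experiment on a realizable distribution with target f*(x) = 1{x >= 0.42}.
random.seed(1)
f_star = lambda x: int(x >= 0.42)
sample = [(x, f_star(x)) for x in (random.random() for _ in range(500))]
f_maj = online_to_batch(ConsistentThresholdLearner(), sample)
test = [random.random() for _ in range(20000)]
print("estimated risk of the majority vote:",
      sum(f_maj(x) != f_star(x) for x in test) / len(test))
```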
Even for simple hypothesis classes, such as thresholds in one dimension, any online prediction strategy must incur $\Omega(n)$ errors on some realizable sequence of length $n$.² This precludes the possibility of obtaining good PAC guarantees using worst-case guarantees of online learning algorithms. In the next section, we discuss how this limitation can be overcome in the PAC setting by exploiting the exchangeability of the input sequences, which allows us to take advantage of the beyond-worst-case nature of our sequences.
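To see this obstruction concretely, here is a small illustrative sketch (ours, not taken from the references) of the standard binary-search adversary for one-dimensional thresholds: any deterministic prediction strategy plugged into `force_mistakes` errs on every round, even though the revealed labels remain realizable by some threshold.

```python
def force_mistakes(learner, rounds=50):
    """Binary-search adversary for 1-D thresholds f_theta(x) = 1{x >= theta}: it keeps
    an interval (lo, hi) of thresholds consistent with all revealed labels, queries the
    midpoint, and reveals the opposite of whatever the (deterministic) learner predicts.
    The revealed sequence stays realizable, yet the learner errs on every round."""
    lo, hi = 0.0, 1.0
    history, mistakes = [], 0
    for _ in range(rounds):
        x = (lo + hi) / 2
        y_hat = learner(history, x)   # the learner sees the past and the query point
        y = 1 - y_hat                 # the adversary flips the prediction...
        mistakes += 1                 # ...so every round is a mistake
        if y == 1:
            hi = x                    # a consistent theta remains in (lo, x]
        else:
            lo = x                    # a consistent theta remains in (x, hi]
        history.append((x, y))
    return mistakes

def midpoint_learner(history, x):
    """A reasonable deterministic learner: predict with the midpoint of the version
    space implied by the history. It still loses every round against the adversary."""
    lo, hi = 0.0, 1.0
    for xi, yi in history:
        if yi == 1:
            hi = min(hi, xi)
        else:
            lo = max(lo, xi)
    return int(x >= (lo + hi) / 2)

print(force_mistakes(midpoint_learner, rounds=50))  # -> 50 mistakes in 50 rounds
```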
Online-to-batch: PAC Error Bound
Given the worst-case error guarantees of online learning algorithms, it is natural to conclude that the online-to-batch framework cannot yield non-trivial conclusions for the PAC setting. However, a more careful look at the argument from the previous section shows that we only require an online learning algorithm that performs well on sequences generated i.i.d. from a distribution. One might think that one would then need to analyze "average-case" instances drawn from a distribution, and that this is what would lead to a PAC guarantee. But it turns out that, in most cases, the choice of the instances is less important than the order in which they are presented to the learner. For the online-to-batch framework in the previous section, however, the fact that the error bound works for any sequence (in any order) of examples was crucial. Another way of seeing this is to note that we had to bound
$$\frac{1}{n}\sum_{t=1}^{n}\mathbf{1}\left\{\hat{f}_{t}(x_{t+1}) \neq y_{t+1}\right\}$$
with high probability over the choice of $(x_1, y_1), \dots, (x_{n+1}, y_{n+1})$. If one tried to apply standard martingale-type arguments to this quantity, one would be required to bound
$$\mathbb{E}\left[\mathbf{1}\{\hat{f}_{t}(x_{t+1}) \neq y_{t+1}\} \,\middle|\, (x_1, y_1), \dots, (x_t, y_t)\right] = \mathrm{err}_{P}(\hat{f}_{t})$$
in terms of the VC dimension. Note that in this argument we would need to condition on the ordered prefix $(x_1, y_1), \dots, (x_t, y_t)$. This would nullify any simple hope one might have of using the average-case nature of the sequence.
But let us try to recall what type of guarantee we have already seen from transductive learning. One can interpret the OIG algorithm as one that gives a guarantee akin to the following: for every realizable sequence $(x_1, y_1), \dots, (x_{n+1}, y_{n+1})$,
$$\frac{1}{(n+1)!}\sum_{\pi} \mathbf{1}\left\{\hat{f}_{\pi}\big(x_{\pi(n+1)}\big) \neq y_{\pi(n+1)}\right\} \le \frac{d}{n+1},$$
where $\hat{f}_{\pi}$ denotes the output of the algorithm trained on $(x_{\pi(1)}, y_{\pi(1)}), \dots, (x_{\pi(n)}, y_{\pi(n)})$ and the sum is over all permutations $\pi$ of $\{1, \dots, n+1\}$. But this seems to be insufficient for what we need, since, to naively apply the martingale argument, we don't get to average over the choice of the test point again.
But the above way of thinking about the problem is still useful. The first high-level point is that, in order to evaluate the loss and use the average-case nature of the instances, we would ideally like to condition on as little as possible. This preserves more randomness that we can then use to exploit the average-case nature of the problem. So, what should we condition on?
The first observation is a key property that mathematically captures why i.i.d. sequences need not behave like worst-case ones: the notion of exchangeability, i.e. that an i.i.d. sequence has the same distribution when presented in a random order. Recall that a sequence of random variables $Z_1, \dots, Z_n$ is exchangeable if for any permutation $\pi$, $(Z_{\pi(1)}, \dots, Z_{\pi(n)})$ has the same distribution as $(Z_1, \dots, Z_n)$. Since each element of an i.i.d. sequence is drawn independently from the same distribution, i.i.d. sequences are exchangeable. This hints at the fact that, in order to evaluate quantities under an i.i.d. sequence, it might suffice to condition on the values in the sequence but not their order.
We have observed that, for the distribution of the data, the order does not matter. But this alone is not sufficient. Ideally, whatever we decide to condition on, we would still like to be able to evaluate the loss $\mathbf{1}\{\hat{f}_t(x_{t+1}) \neq y_{t+1}\}$. A priori, this could depend on the order of $(x_1, y_1), \dots, (x_t, y_t)$, since the algorithm is allowed to use the data set in whatever order it likes. But wait! We noted earlier that the OIG algorithm does not depend on the order, i.e. it is permutation invariant. Thus, it seems like we should condition on the set of data points $\{(x_1, y_1), \dots, (x_t, y_t)\}$, and importantly not on their order.
To formalize this notion, we introduce some notation that clarifies the conditioning in our argument. Let $\sigma$ be a uniformly random permutation of $\{1, \dots, n+1\}$. Let $\mathcal{A}$ be some prediction algorithm that we will choose later, let $\hat{f}_t := \mathcal{A}\big((x_{\sigma(1)}, y_{\sigma(1)}), \dots, (x_{\sigma(t)}, y_{\sigma(t)})\big)$, and further define
$$S_t := \left\{(x_{\sigma(1)}, y_{\sigma(1)}), \dots, (x_{\sigma(t)}, y_{\sigma(t)})\right\}.$$
The random quantity $S_t$ records the specific elements appearing in the first $t$ positions of $\sigma$, but crucially neither their order nor the $(t+1)$-st element. In order for conditioning on $S_t$ to be useful, $\hat{f}_t$ should ideally depend only on this set of elements and not on their order.
This is exactly what happens when $\mathcal{A}$ is permutation invariant. Hence, conditioning on $S_t$, the value of $\hat{f}_t$ is determined, as it does not depend on the ordering of the first $t$ elements.
We could now apply martingale arguments to the error indicators $Z_t := \mathbf{1}\{\hat{f}_t(x_{\sigma(t+1)}) \neq y_{\sigma(t+1)}\}$. In order to do this, we need to evaluate the conditional expectation of $Z_t$ given what we have chosen to condition on. On the other hand, we also have:
$$\mathbb{E}\left[Z_t \,\middle|\, S_{t+1}\right] = \frac{1}{t+1}\sum_{(x, y) \in S_{t+1}} \mathbf{1}\left\{\mathcal{A}\big(S_{t+1} \setminus \{(x, y)\}\big)(x) \neq y\right\},$$
where the right-hand side is the error of $\mathcal{A}$ on a test point chosen uniformly at random from $S_{t+1}$.
This is due to the fact that, conditioned on the set $S_{t+1}$, the test point for $\hat{f}_t$, namely $(x_{\sigma(t+1)}, y_{\sigma(t+1)})$, is uniformly distributed among the elements of $S_{t+1}$, while $\hat{f}_t$ is trained on the remaining $t$ elements. The key observation is that this exactly corresponds to the error of $\mathcal{A}$ in the transductive model. Thus, we just need to choose a permutation-invariant algorithm that works well in the transductive model. As we saw earlier, this is exactly the one-inclusion graph algorithm, for which we have the bound:
$$\mathbb{E}\left[Z_t \,\middle|\, S_{t+1}\right] \le \frac{d}{t+1}.$$
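To make this conditional expectation concrete, here is a small illustrative sketch (ours): `transductive_error` computes the leave-one-out error of a permutation-invariant learner on a set of points, with a simple threshold rule standing in for the one-inclusion graph algorithm.

```python
import random

def transductive_error(learner, points):
    """Leave-one-out error of a permutation-invariant learner on a labelled set `points`:
    hold out each point in turn, train on the rest, and average the error indicator on
    the held-out point. This is the quantity E[Z_t | S_{t+1}] from the text, since the
    test position is uniform over the set."""
    errors = 0
    for i, (x, y) in enumerate(points):
        rest = points[:i] + points[i + 1:]
        f = learner(rest)               # the learner only sees the unordered rest
        errors += int(f(x) != y)
    return errors / len(points)

def threshold_learner(train):
    """A permutation-invariant learner for 1-D thresholds: its output depends on `train`
    only through the set of examples."""
    ones = [x for x, y in train if y == 1]
    zeros = [x for x, y in train if y == 0]
    theta = (max(zeros, default=0.0) + min(ones, default=1.0)) / 2
    return lambda x: int(x >= theta)

# On t+1 realizable points this simple learner can only err when the held-out point is
# one of the two boundary points, so its leave-one-out error is at most 2/(t+1) -- of the
# same order as the d/(t+1) transductive guarantee (here d = 1), though it is only a
# stand-in for the actual one-inclusion graph algorithm.
random.seed(2)
f_star = lambda x: int(x >= 0.6)
pts = [(x, f_star(x)) for x in (random.random() for _ in range(20))]
print(transductive_error(threshold_learner, pts))
```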
Furthermore, since each $Z_t$ is a function only of the set $S_t$ and the test point $(x_{\sigma(t+1)}, y_{\sigma(t+1)})$, we may apply our concentration inequality again, now using the bound $\mathbb{E}[Z_t \mid S_{t+1}] \le \frac{d}{t+1}$, to obtain the following result for the risks:
$$\frac{1}{n}\sum_{t=1}^{n}\mathrm{err}_{P}(\hat{f}_{t}) \lesssim \frac{1}{n}\sum_{t=1}^{n}\frac{d}{t+1} + \frac{\log(1/\delta)}{n} \lesssim \frac{d\log n + \log(1/\delta)}{n}$$
with probability at least $1-\delta$. But this does not seem to improve over the bound for ERM that we discussed earlier.
One final trick that we need to fix this issue is to consider only the $\hat{f}_t$ for $t \ge n/2$. This still allows us to average over $n/2$ predictors, but now we have
$$\sum_{t = n/2}^{n}\frac{d}{t+1} \le \Big(\frac{n}{2}+1\Big)\cdot\frac{d}{n/2+1} = d,$$
so the $d\log n$ term above is replaced by $O(d)$.
Hence, by simply exploiting the permutation-invariance of the distribution generating the input sequence, we obtain a high-probability bound on the number of mistakes that is substantially better than the worst-case bound incurred on arbitrary realizable input sequences. This mistake bound, along with the discussion in the previous section, now yields PAC learners with optimal high-probability guarantees.
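Putting the pieces together, here is a minimal end-to-end sketch of the kind of predictor this argument suggests; we stress that this is our own illustration, with a simple threshold rule in place of the one-inclusion graph algorithm, and not the construction of [AACSZ23a] verbatim.

```python
import random
from collections import Counter

def threshold_learner(train):
    """Permutation-invariant stand-in for the one-inclusion graph algorithm,
    specialized to 1-D thresholds (VC dimension d = 1)."""
    ones = [x for x, y in train if y == 1]
    zeros = [x for x, y in train if y == 0]
    theta = (max(zeros, default=0.0) + min(ones, default=1.0)) / 2
    return lambda x: int(x >= theta)

def pac_predictor(learner, sample):
    """The shape of the predictor suggested by the argument above: randomly permute the
    sample, train the permutation-invariant learner on every prefix of length t >= n/2,
    and output the pointwise majority vote of the resulting predictors."""
    sample = list(sample)
    random.shuffle(sample)                       # the random permutation sigma
    n = len(sample)
    prefix_predictors = [learner(sample[:t]) for t in range(n // 2, n + 1)]
    return lambda x: Counter(f(x) for f in prefix_predictors).most_common(1)[0][0]

# A tiny experiment on a realizable distribution with target f*(x) = 1{x >= 0.8}.
random.seed(3)
f_star = lambda x: int(x >= 0.8)
sample = [(x, f_star(x)) for x in (random.random() for _ in range(400))]
f_hat = pac_predictor(threshold_learner, sample)
test = [random.random() for _ in range(20000)]
print("estimated risk:", sum(f_hat(x) != f_star(x) for x in test) / len(test))
```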
Theorem 3 [Optimal Sample Complexity of PAC Learning, [AACSZ23a]] Let $\mathcal{X}$ be a domain and $\mathcal{F}$ be a class of functions from $\mathcal{X}$ to $\{0,1\}$ with VC dimension $d$. Let $P$ be a distribution over $\mathcal{X} \times \{0,1\}$ that is realizable with respect to $\mathcal{F}$. Let $S$ be a sample of size $n$ drawn i.i.d. from $P$ and let $\delta \in (0,1)$. There is a prediction strategy producing $\hat{f} = \hat{f}(S)$ such that, with probability at least $1-\delta$ over the choice of the training set $S$, we have
$$\mathrm{err}_{P}(\hat{f}) \le C\cdot\frac{d + \log(1/\delta)}{n}$$
for a universal constant $C > 0$.
Conclusions
In this pair of blog posts, we discussed the problem of binary classification in both the PAC and transductive models of learning. We talked about an elegant learning algorithm in the transductive learning model based on the idea of one-inclusion graphs. We then saw how the classic notion of online-to-batch conversion can be modified in a simple way in order to port strong learning guarantees from the transductive model to the PAC model in an optimal way. Though the focus of this blog post was on binary classification, the ideas we discussed naturally extend to other settings such as multiclass classification and regression, and thus these techniques can be seen as a unifying perspective on optimal PAC bounds in various models of realizable learning. For more details, check out the full paper [AACSZ23a] (and maybe also [AACSZ23b]).
An interesting direction for future study is whether the techniques presented here could lead to proofs of optimal PAC bounds in the agnostic setting, where all known such bounds go through so-called chaining techniques. It would be interesting to develop techniques that unify approaches to obtaining optimal PAC bounds in both the realizable and agnostic cases.
References
- [Lit89] Nick Littlestone. From on-line to batch learning. In Proceedings of the Second Annual Workshop on Computational Learning Theory, pages 269–284. Morgan Kaufmann, 1989.
- [AACSZ23a] Ishaq Aden-Ali, Yeshwanth Cherapanamjeri, Abhishek Shetty, and Nikita Zhivotovskiy. Optimal PAC bounds without uniform convergence. In Foundations of Computer Science (FOCS), 2023.
- [AACSZ23b] Ishaq Aden-Ali, Yeshwanth Cherapanamjeri, Abhishek Shetty, and Nikita Zhivotovskiy. The one-inclusion graph algorithm is not always optimal. In Conference on Learning Theory (COLT), 2023.
1. The argument presented here is slightly simpler. In his original argument, Littlestone used a small holdout set to pick the best hypothesis in the sequence instead of taking a majority vote. ↩︎
2. The notion of complexity, analogous to the VC dimension, that captures this setting is known as the Littlestone dimension. It turns out that even for simple classes such as the set of thresholds, this complexity is unbounded. ↩︎