- Machine Learning Algorithms (Second Edition)
- Giuseppe Bonaccorso
PAC learning
In many cases, machine learning seems to work seamlessly, but is there any way to formally determine the learnability of a concept? In 1984, the computer scientist L. Valiant proposed a mathematical approach to determine whether a problem is learnable by a computer. This technique is called probably approximately correct (PAC) learning. The original formulation (you can read it in A Theory of the Learnable, Valiant L., Communications of the ACM, Vol. 27, No. 11, Nov. 1984) is based on a particular hypothesis; however, without a considerable loss of precision, we can think of a classification problem where an algorithm A has to learn a set of concepts. In particular, a concept is a subset of input patterns X that determines the same output element. Therefore, learning a concept (parametrically) means minimizing the corresponding loss function restricted to a specific class, while learning all possible concepts (belonging to the same universe) means finding the minimum of a global loss function.
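One possible way to make this distinction explicit (the notation below, with a parameter vector θ, a model f, and a per-sample loss ℓ, is only an illustrative sketch and not Valiant's formalism) is to write the two problems as:

\[
\theta_i^{*} = \underset{\theta}{\operatorname{argmin}} \sum_{(x, y) \in C_i} \ell\big(f(x; \theta), y\big)
\qquad \text{and} \qquad
\theta^{*} = \underset{\theta}{\operatorname{argmin}} \sum_{i} \sum_{(x, y) \in C_i} \ell\big(f(x; \theta), y\big)
\]

where the first minimization is restricted to the single concept C_i, while the second one spans the whole universe of concepts.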
However, given a problem, there are many possible (sometimes theoretically infinite) hypotheses, and a probabilistic trade-off is often necessary. For this reason, we accept good approximations with high probability, based on a limited number of input elements and produced in polynomial time. Therefore, an algorithm A can learn the class C of all concepts (making them PAC learnable) if it's able to find a hypothesis H with a procedure O(n^k) so that A, with probability p, can classify all patterns correctly with a maximum allowed error m_e. This must be valid for all statistical distributions on X and for a number of training samples that is greater than or equal to a minimum value depending only on p and m_e.
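Restating this condition in the more common (ε, δ) notation (an equivalent rewording, not the symbols used above: ε plays the role of the maximum allowed error m_e, and 1 − δ that of the probability p), the hypothesis H returned by A must satisfy:

\[
P\big[\mathrm{error}(H) \leq \varepsilon\big] \geq 1 - \delta \quad \text{for every distribution on } X,
\]

provided that the number of training samples is at least polynomial in 1/ε and 1/δ and that the running time of A is also polynomial.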
The constraint on computational complexity is not a secondary matter. In fact, we expect our algorithms to learn efficiently in a reasonable time even when the problem is quite complex. An exponential-time algorithm could lead to a computational explosion when the datasets are too large or the optimization starting point is very far from an acceptable minimum. Moreover, it's important to remember the so-called curse of dimensionality, an effect that often appears in models whose training or prediction time is proportional (not always linearly) to the number of dimensions: as the number of features increases, the performance of a model (which can be reasonable when the input dimensionality is small) degrades dramatically. In addition, in many cases, capturing the full expressivity of a high-dimensional space requires a very large dataset; without enough training data, the approximation becomes problematic (this is called the Hughes phenomenon). For these reasons, looking for polynomial-time algorithms is more than a simple effort, because it can determine the success or failure of a machine learning task. That is why, in the following chapters, we're going to introduce some techniques that can be used to efficiently reduce the dimensionality of a dataset without a problematic loss of information.
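As a quick, self-contained sketch of the Hughes phenomenon (this example is not taken from the book; it assumes NumPy and scikit-learn are available and uses a synthetic dataset), the following snippet keeps the number of training samples fixed while increasing the number of features, so the cross-validated accuracy of a simple linear classifier typically drops as the dimensionality grows:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Fixed (small) training set: the Hughes phenomenon appears when the
# dimensionality grows while the number of samples stays constant.
n_samples = 200

for n_features in (2, 10, 50, 200, 1000):
    # Synthetic binary classification problem with at most 10 informative
    # features; all the remaining ones are pure noise.
    X, y = make_classification(n_samples=n_samples,
                               n_features=n_features,
                               n_informative=min(n_features, 10),
                               n_redundant=0,
                               random_state=1000)

    # Average 5-fold cross-validation accuracy of a simple linear model
    score = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
    print('Features: {:>5} -> CV accuracy: {:.3f}'.format(n_features, score))
```

With dimensionality reduction techniques such as the ones discussed in the following chapters, a large part of the noisy features can usually be discarded before training, mitigating this effect.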