- Machine Learning Algorithms (Second Edition)
- Giuseppe Bonaccorso
Summary
Feature selection is the first (and sometimes the most important) step in a machine learning pipeline. Not all the features in a dataset are useful for our purposes, and some of them are expressed using different notations, so it's often necessary to preprocess the dataset before any further operation.
We saw how we can split the data into training and test sets using a random shuffle and how to manage missing elements. Another very important section covered the techniques used to manage categorical data and labels, which are very common when a feature assumes only a discrete set of values.
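As a refresher, here's a minimal sketch of those steps using scikit-learn (the library used throughout the book). The toy DataFrame, its column names, and the chosen strategies (mean imputation, one-hot encoding) are illustrative assumptions, not the chapter's exact examples:

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import LabelEncoder, OneHotEncoder

# Hypothetical dataset: a numeric feature with a missing value
# and a categorical feature that assumes a discrete set of values
df = pd.DataFrame({
    'age': [25.0, np.nan, 47.0, 31.0],
    'color': ['red', 'green', 'red', 'blue'],
    'label': ['yes', 'no', 'no', 'yes'],
})

# Replace the missing numeric entry with the column mean
age = SimpleImputer(strategy='mean').fit_transform(df[['age']])

# One-hot encode the categorical feature (one binary column per value);
# .toarray() densifies the sparse matrix returned by default
color = OneHotEncoder().fit_transform(df[['color']]).toarray()

# Encode the string labels as integers
y = LabelEncoder().fit_transform(df['label'])

# Assemble the feature matrix and split it with a random shuffle
X = np.hstack([age, color])
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=1000)
```

One-hot encoding is preferred here because a plain integer encoding would impose a spurious ordering on non-ordinal categories.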
Then, we analyzed the problem of dimensionality. Some datasets contain many features that are correlated with each other, so they don't provide any new information but increase the computational complexity and reduce overall performance. PCA is a method for projecting the data onto the subset of components that retain the largest amount of the total variance. This approach, together with its variants, allows you to decorrelate the features and reduce the dimensionality without a drastic loss in terms of accuracy. Dictionary learning is another technique that's used to extract a limited number of building blocks (atoms) from a dataset, together with the information needed to rebuild each sample. This approach is particularly useful when the dataset is made up of different versions of similar elements (such as images, letters, or digits).
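The following sketch shows both ideas on scikit-learn's bundled digits dataset, which I'm assuming here as a stand-in for the chapter's examples; the specific hyperparameters (retaining 95% of the variance, learning 36 atoms) are arbitrary choices for illustration:

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA, MiniBatchDictionaryLearning

# 1,797 samples of 8x8 grayscale digits, flattened to 64 features
X = load_digits().data

# PCA: keep just enough components to retain 95% of the total variance;
# whiten=True also rescales the decorrelated components to unit variance
pca = PCA(n_components=0.95, whiten=True)
X_pca = pca.fit_transform(X)
print(X_pca.shape)  # (1797, k) with k well below 64

# Dictionary learning: extract 36 atoms (building blocks) and the
# sparse codes needed to rebuild each sample as a combination of them
dl = MiniBatchDictionaryLearning(n_components=36, alpha=1.0,
                                 random_state=1000)
codes = dl.fit_transform(X)
print(dl.components_.shape, codes.shape)  # (36, 64) atoms, (1797, 36) codes
```

Passing a float to n_components makes PCA keep just enough components to reach that fraction of explained variance, while MiniBatchDictionaryLearning returns the per-sample codes from fit_transform and exposes the learned atoms through components_.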
In the next chapter, Chapter 4, Regression Algorithms, we're going to discuss linear regression, which is the simplest and most widespread supervised approach to predicting continuous values. We'll also analyze how to overcome some of its limitations and how to solve non-linear problems using the same algorithms.