Machine Learning - Supervised Learning - Overview Tutorial
Given a Data Set and a Long List of Machine Learning Algorithms, How Do You Decide Which One to Use?
There is no master algorithm for all situations. Choosing an algorithm depends on the following questions:
- How much data do you have, and is it continuous or categorical?
- Is the problem related to classification, association, clustering, or regression?
- Is the data labeled, unlabeled, or a mix of both?
- What is the goal?
Based on the above questions, the following algorithms can be used:
Cross-Validation
Cross-validation is a statistical resampling technique that uses different parts of the dataset to train and test a machine learning algorithm across different iterations. The aim of cross-validation is to test the model's ability to predict new data that was not used during training, which helps detect and avoid overfitting.
K-Fold Cross-Validation is the most popular resampling technique; it divides the whole dataset into K subsets (folds) of equal size, and each fold serves once as the test set while the remaining folds form the training set.
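A minimal sketch of K-Fold cross-validation, assuming scikit-learn is available (the iris dataset and logistic regression are placeholder choices):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import KFold, cross_val_score
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)

# KFold splits the dataset into 5 equal-sized folds; each fold is used
# once as the test set while the remaining 4 folds form the training set.
kfold = KFold(n_splits=5, shuffle=True, random_state=42)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=kfold)

# One accuracy score per fold; the mean estimates generalization performance.
print(scores, scores.mean())
```

Averaging the per-fold scores gives a more reliable performance estimate than a single train/test split.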
Suppose your company gives you 10 GB of data, your machine has only 4 GB of RAM, and there is no budget for more resources. What do you do?
You cannot subsample (you would lose data), and you cannot use cloud computing (the company has no money for it).
Instead, you can train the model on the same machine, for example in a Jupyter notebook, using Out-of-Core ML with a library such as Vaex:
1] Stream data (load it in chunks)
2] Extract features
3] Train the model incrementally (this works only for algorithms that expose a partial_fit method, such as SGDRegressor, Naive Bayes, and the Passive Aggressive Classifier)
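The streaming idea above can be sketched with scikit-learn's partial_fit; here the chunks are synthetic stand-ins for data that would really be read from disk (e.g. via pandas' read_csv with chunksize, or Vaex):

```python
import numpy as np
from sklearn.linear_model import SGDRegressor

rng = np.random.default_rng(0)
model = SGDRegressor(random_state=0)

for _ in range(100):                      # 100 chunks streamed one at a time
    X_chunk = rng.normal(size=(200, 3))   # stand-in for a chunk read from disk
    y_chunk = X_chunk @ np.array([2.0, -1.0, 0.5]) + 0.01 * rng.normal(size=200)
    model.partial_fit(X_chunk, y_chunk)   # updates the weights incrementally

# The learned coefficients should approach the true values [2, -1, 0.5],
# even though the full dataset was never held in memory at once.
print(model.coef_)
```

Only one chunk lives in RAM at a time, so the full 10 GB never has to fit in memory.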
Supervised Learning
- Linear regression [Regression] – Simple, Multiple, Polynomial, Lasso, Ridge, and ElasticNet
- Logistic regression [Classification]
- Support Vector Machine / Support Vector Regressor [Classification and Regression]
- Naive Bayes [Classification] – mostly used for text data
- Linear discriminant analysis [Classification]
- Decision Tree [Classification and Regression]
- k-Nearest Neighbors (k-NN) [Classification and Regression]
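As a quick sketch of the usual workflow with any of the algorithms above, here is a Decision Tree classifier trained and scored on a held-out split (scikit-learn assumed; iris is a placeholder dataset):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

# fit() learns the tree from the training split; predict() labels unseen data.
clf = DecisionTreeClassifier(random_state=42).fit(X_train, y_train)
acc = accuracy_score(y_test, clf.predict(X_test))
print(acc)
```

Every estimator in the list follows this same fit/predict interface, so swapping algorithms is usually a one-line change.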
Ensemble Learning
Voting Ensemble / Voting Classifier
- MaxVoting [Classification]
- Averaging [Regression]
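A minimal sketch of max voting, assuming scikit-learn: three different classifiers vote and the majority class wins (for regression, VotingRegressor averages predictions instead):

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
voter = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("nb", GaussianNB()),
        ("dt", DecisionTreeClassifier(random_state=0)),
    ],
    voting="hard",  # "hard" = max voting; "soft" averages predicted probabilities
)
vote_score = cross_val_score(voter, X, y, cv=5).mean()
print(vote_score)
```

Combining diverse models often smooths out the individual errors of each one.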
Bagging
- Bagging Classifier [Classification and Regression]
- Random Forest [Classification and Regression]
- Extra Trees Classifier [Classification]
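A sketch contrasting a plain Bagging Classifier (bootstrap samples of the rows) with a Random Forest, which additionally randomizes the features considered at each split (scikit-learn assumed; breast cancer is a placeholder dataset):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import BaggingClassifier, RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)

# Bagging: each of the 50 trees is trained on a bootstrap sample of the data.
bagging = BaggingClassifier(DecisionTreeClassifier(), n_estimators=50, random_state=0)
# Random Forest: bagging plus a random subset of features at every split.
forest = RandomForestClassifier(n_estimators=50, random_state=0)

bag_score = cross_val_score(bagging, X, y, cv=5).mean()
rf_score = cross_val_score(forest, X, y, cv=5).mean()
print(bag_score, rf_score)
```

The extra feature randomization decorrelates the trees, which usually lowers the variance of the ensemble.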
Boosting
- AdaBoost Classifier [Classification]
- Gradient Boosting Classifier [Classification]
- XGB Classifier [Classification]
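A sketch of boosting, assuming scikit-learn: trees are added sequentially, each one correcting the mistakes of the ensemble so far. XGBClassifier (from the separate xgboost package) follows the same fit/predict API:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import AdaBoostClassifier, GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)

# AdaBoost reweights misclassified samples before fitting the next learner;
# gradient boosting instead fits each new tree to the residual errors.
ada = AdaBoostClassifier(n_estimators=100, random_state=0)
gbm = GradientBoostingClassifier(n_estimators=100, random_state=0)

ada_score = cross_val_score(ada, X, y, cv=5).mean()
gbm_score = cross_val_score(gbm, X, y, cv=5).mean()
print(ada_score, gbm_score)
```

Unlike bagging, the learners here are trained sequentially rather than independently.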
Stacking
- Neural Networks (Multilayer perceptron)
- Similarity learning
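The Stacking ensemble listed above can be sketched as follows, assuming scikit-learn: the base models' out-of-fold predictions become the input features of a final meta-model (the particular estimators are placeholder choices):

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
stack = StackingClassifier(
    estimators=[
        ("knn", KNeighborsClassifier()),
        ("dt", DecisionTreeClassifier(random_state=0)),
    ],
    # The meta-learner is trained on the base models' predictions,
    # learning how much to trust each one.
    final_estimator=LogisticRegression(max_iter=1000),
)
stack_score = cross_val_score(stack, X, y, cv=5).mean()
print(stack_score)
```

Whereas voting combines predictions with a fixed rule, stacking learns the combination from data.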