
09/08/2021

14 Different Types of Learning in Machine Learning

There are 14 types of learning that you must be familiar with as a practitioner; they are:

Learning Problems

1. Supervised Learning
2. Unsupervised Learning
3. Reinforcement Learning

Hybrid Learning Problems

4. Semi-Supervised Learning
5. Self-Supervised Learning
6. Multi-Instance Learning

Statistical Inference

7. Inductive Learning
8. Deductive Inference
9. Transductive Learning

Learning Techniques

10. Multi-Task Learning
11. Active Learning
12. Online Learning
13. Transfer Learning
14. Ensemble Learning


A. Learning Problems

1. Supervised Learning 

Models are fit on training data comprised of inputs and outputs, and are then used to make predictions on test sets where only the inputs are provided. The model's outputs are compared to the withheld target variables and used to estimate the skill of the model.

There are two main types of supervised learning problems.

Classification: predicting a class label
Regression: predicting a numerical label
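To make this concrete, here is a minimal sketch of both problem types using scikit-learn; the synthetic datasets and model choices are assumptions for illustration, not part of the original article.

```python
# A minimal sketch of supervised learning with scikit-learn (illustrative only).
from sklearn.datasets import make_classification, make_regression
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression, LinearRegression
from sklearn.metrics import accuracy_score, mean_squared_error

# Classification: predicting a class label
X, y = make_classification(n_samples=200, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression().fit(X_train, y_train)
print("classification accuracy:", accuracy_score(y_test, clf.predict(X_test)))

# Regression: predicting a numerical label
X, y = make_regression(n_samples=200, n_features=5, noise=0.1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
reg = LinearRegression().fit(X_train, y_train)
print("regression MSE:", mean_squared_error(y_test, reg.predict(X_test)))
```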

2. Unsupervised Learning

Unsupervised learning describes a class of problems that involves using a model to describe or extract relationships in data.

Unsupervised learning operates on the input data alone, without outputs or a target variable.

There are two main types of unsupervised learning problems:

Clustering: finding groups in data
Density Estimation: summarizing the distribution of data

Clustering and density estimation may be performed to learn about patterns in the data.
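As an illustration, here is a minimal sketch of both problems with scikit-learn; the small synthetic two-cluster dataset and the model settings are assumptions.

```python
# A minimal sketch of clustering and density estimation with scikit-learn.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KernelDensity

rng = np.random.RandomState(0)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(5, 1, (100, 2))])

# Clustering: find groups in the data (no labels are used)
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print("cluster sizes:", np.bincount(kmeans.labels_))

# Density estimation: summarize the distribution of the data
kde = KernelDensity(bandwidth=0.5).fit(X)
print("log-density of a new point:", kde.score_samples(np.array([[0.0, 0.0]])))
```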

3. Reinforcement Learning

Reinforcement learning describes a class of problems where an agent operates in an environment and must learn how to act using feedback from that environment.
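A tiny tabular Q-learning sketch on a made-up "corridor" environment illustrates the feedback loop; the environment, rewards, and hyperparameters are assumptions for illustration, not a reference implementation.

```python
# Minimal tabular Q-learning on a toy 5-state corridor (illustrative assumptions only).
import random

n_states, n_actions = 5, 2          # states 0..4, actions: 0 = left, 1 = right
Q = [[0.0] * n_actions for _ in range(n_states)]
alpha, gamma, epsilon = 0.1, 0.9, 0.3

def step(state, action):
    """Move left or right; reaching the last state yields reward 1 and ends the episode."""
    next_state = max(0, min(n_states - 1, state + (1 if action == 1 else -1)))
    reward = 1.0 if next_state == n_states - 1 else 0.0
    return next_state, reward, next_state == n_states - 1

for episode in range(200):
    state, done = 0, False
    while not done:
        # epsilon-greedy action selection: the agent learns from reward feedback
        if random.random() < epsilon:
            action = random.randrange(n_actions)
        else:
            action = Q[state].index(max(Q[state]))
        next_state, reward, done = step(state, action)
        # Q-learning update rule
        Q[state][action] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][action])
        state = next_state

print("learned action values:", Q)
```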


B. Hybrid Learning Problems

The line between supervised learning and unsupervised learning is blurry.

4. Semi-Supervised Learning

Semi-Supervised Learning is supervised learning where the training data contains very few labeled examples and a large number of unlabeled examples.

The goal of a semi-supervised learning model is to make effective use of all of the available data, not just the labeled data as in supervised learning.

Making effective use of the unlabeled data may require drawing on unsupervised methods such as clustering and density estimation. Once groups or patterns are discovered, supervised methods or ideas from supervised learning may be used to label the unlabeled examples.
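As a sketch, scikit-learn's LabelSpreading can propagate the few known labels to the unlabeled examples; the toy dataset and the 90 percent unlabeled split below are assumptions for illustration.

```python
# A minimal semi-supervised learning sketch with scikit-learn's LabelSpreading.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.semi_supervised import LabelSpreading

X, y = make_classification(n_samples=300, n_features=5, random_state=0)

# Hide most labels: by convention, -1 marks an unlabeled example
rng = np.random.RandomState(0)
y_partial = y.copy()
unlabeled = rng.rand(len(y)) < 0.9
y_partial[unlabeled] = -1

# The model uses both the few labeled and the many unlabeled examples
model = LabelSpreading().fit(X, y_partial)
print("accuracy on the originally unlabeled points:",
      (model.transduction_[unlabeled] == y[unlabeled]).mean())
```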

5. Self-Supervised Learning

Self-supervised learning refers to an unsupervised learning problem that is framed as a supervised learning problem in order to apply supervised learning algorithms to solve it.

Common examples include:

Autoencoders
Generative adversarial networks (GANs)
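For example, an autoencoder is trained with a supervised-style loss where the target is the input itself. Below is a minimal Keras sketch; the toy dataset and architecture are assumptions for illustration.

```python
# A minimal autoencoder sketch in Keras: the "label" for each example is the example itself.
import numpy as np
from tensorflow import keras

X = np.random.rand(1000, 20).astype("float32")    # unlabeled data

autoencoder = keras.Sequential([
    keras.layers.Input(shape=(20,)),
    keras.layers.Dense(4, activation="relu"),      # encoder: compress to 4 dimensions
    keras.layers.Dense(20, activation="sigmoid"),  # decoder: reconstruct the input
])
autoencoder.compile(optimizer="adam", loss="mse")

# Supervised-style training where the target is the input itself
autoencoder.fit(X, X, epochs=5, batch_size=32, verbose=0)
print("reconstruction loss:", autoencoder.evaluate(X, X, verbose=0))
```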

6. Multi-Instance Learning

Multi-instance learning is a supervised learning problem where individual examples are unlabeled; instead, bags or groups of samples are labeled.

Instances are in "bags" rather than sets because a given instance may be present one or more times, e.g. duplicates.

In supervised multi-instance learning, a class label is associated with each bag, and the goal of learning is to determine how the class can be inferred from the instances that make up the bag.
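A toy sketch of this idea, assuming the common heuristic that a bag is positive if at least one of its instances is positive; the instance scores and threshold below are made up for illustration.

```python
# A toy multi-instance sketch: bags, not instances, carry the label.
import numpy as np

def bag_prediction(instance_scores):
    """Infer a bag label from per-instance scores by taking the maximum score."""
    return int(max(instance_scores) > 0.5)

# Two hypothetical bags of instance-level scores from some underlying model
bag_a = np.array([0.1, 0.2, 0.9])   # contains one "positive" instance
bag_b = np.array([0.1, 0.3, 0.2])   # contains none

print("bag A label:", bag_prediction(bag_a))   # 1
print("bag B label:", bag_prediction(bag_b))   # 0
```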


C. Statistical Inference

Statistical inference refers to reaching an outcome or decision from data.


7. Inductive Learning

Inductive learning involves using specific examples to learn a general model, e.g. fitting a model to a training dataset.

8. Deductive Inference

Deductive inference involves using a general model to make specific predictions, e.g. using a fit model to make predictions on new data.

9. Transductive Learning

Transductive learning involves using specific training examples to make predictions on specific test examples directly, without building a general model.


[Figure: Relationship between induction, deduction, and transduction. Taken from The Nature of Statistical Learning Theory.]


D. Learning Techniques

10. Multi-Task Learning

Multi-task learning is a type of supervised learning that involves fitting a model on one dataset that addresses multiple related problems.

We may want to learn multiple related models at the same time, which is known as multi-task learning. This allows us to "borrow statistical strength" from tasks with lots of data and share it with tasks that have little data.
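A minimal Keras sketch of this idea uses one shared layer feeding two task-specific heads; the tasks, data, and layer sizes are assumptions for illustration.

```python
# A minimal multi-task sketch in Keras: a shared representation and two output heads.
import numpy as np
from tensorflow import keras

X = np.random.rand(500, 10).astype("float32")
y_class = np.random.randint(0, 2, size=(500, 1))       # task 1: classification
y_reg = np.random.rand(500, 1).astype("float32")       # task 2: regression

inputs = keras.Input(shape=(10,))
shared = keras.layers.Dense(16, activation="relu")(inputs)   # shared "statistical strength"
out_class = keras.layers.Dense(1, activation="sigmoid", name="cls")(shared)
out_reg = keras.layers.Dense(1, name="reg")(shared)

model = keras.Model(inputs, [out_class, out_reg])
model.compile(optimizer="adam",
              loss={"cls": "binary_crossentropy", "reg": "mse"})
model.fit(X, {"cls": y_class, "reg": y_reg}, epochs=3, verbose=0)
```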

11. Active Learning

Active learning is a technique where the model is able to query a human operator during the learning process in order to resolve ambiguity.
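One common way to choose which example to ask about is uncertainty sampling. Below is a minimal sketch where the human operator's answer is simulated by looking up the withheld label; the dataset and loop sizes are assumptions.

```python
# A minimal active-learning sketch using uncertainty sampling with scikit-learn.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=5, random_state=0)

# Start with a few labeled examples from each class; the rest form the unlabeled pool
labeled = list(np.where(y == 0)[0][:5]) + list(np.where(y == 1)[0][:5])
pool = [i for i in range(len(X)) if i not in labeled]

for _ in range(20):
    model = LogisticRegression().fit(X[labeled], y[labeled])
    # Query the pool example the model is least certain about (probability closest to 0.5)
    proba = model.predict_proba(X[pool])
    most_uncertain = pool[int(np.argmin(np.abs(proba[:, 1] - 0.5)))]
    labeled.append(most_uncertain)   # in practice, a human operator supplies this label
    pool.remove(most_uncertain)

print("accuracy after active learning:", model.score(X, y))
```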

12. Online Learning

Traditionally, machine learning is performed offline, which means we have a batch of data and we optimize an equation over it. However, if we have streaming data, we need to perform online learning, updating our estimates as each new data point arrives rather than waiting until "the end".
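A minimal sketch of online learning with scikit-learn's partial_fit, where the stream of batches is simulated from a synthetic dataset (the batch size and model choice are assumptions).

```python
# A minimal online-learning sketch: update the model incrementally as batches arrive.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier

X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
model = SGDClassifier(random_state=0)

# Pretend the data arrives in small batches over time
for start in range(0, len(X), 100):
    X_batch, y_batch = X[start:start + 100], y[start:start + 100]
    model.partial_fit(X_batch, y_batch, classes=np.array([0, 1]))

print("accuracy so far:", model.score(X, y))
```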


13. Transfer Learning

Transfer learning is a machine learning method where a model developed for a task is reused as the starting point for a model on a second task.
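A minimal Keras sketch, assuming an ImageNet-pretrained MobileNetV2 backbone reused as a frozen feature extractor for a hypothetical new 10-class task; the backbone choice, image size, and head are assumptions.

```python
# A minimal transfer-learning sketch in Keras: reuse a pretrained backbone for a new task.
from tensorflow import keras

base = keras.applications.MobileNetV2(input_shape=(160, 160, 3),
                                      include_top=False,
                                      weights="imagenet")
base.trainable = False                      # keep the pretrained weights fixed

model = keras.Sequential([
    base,
    keras.layers.GlobalAveragePooling2D(),
    keras.layers.Dense(10, activation="softmax"),  # new task-specific head
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# model.fit(new_task_images, new_task_labels, epochs=5)  # hypothetical data for the second task
```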

14. Ensemble Learning

Ensemble learning is an approach where two or more models are fit on the same data and the predictions from each model are combined.
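A minimal sketch with scikit-learn's VotingClassifier, which fits several models on the same data and combines their predictions by voting; the toy dataset and choice of base models are assumptions.

```python
# A minimal ensemble-learning sketch: combine predictions from several models by voting.
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=300, n_features=5, random_state=0)

ensemble = VotingClassifier(estimators=[
    ("lr", LogisticRegression()),
    ("tree", DecisionTreeClassifier(random_state=0)),
    ("knn", KNeighborsClassifier()),
])
ensemble.fit(X, y)
print("ensemble training accuracy:", ensemble.score(X, y))
```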








References:

Vapnik, V. (1995). The Nature of Statistical Learning Theory. Springer.
