Dimensionality reduction is an important approach in machine learning. A large number of features in a dataset can cause the learning model to overfit. Three popular dimensionality reduction techniques are commonly used to identify the significant features and reduce the dimension of the dataset.
In this article, we will discuss the practical implementation of these three dimensionality reduction techniques, followed by a comparison of them:
- Principal Component Analysis (PCA)
- Linear Discriminant Analysis (LDA)
- Kernel PCA (KPCA)
- Comparison of PCA, LDA and Kernel PCA
A. Principal Component Analysis
Principal Component Analysis (PCA) is the main linear approach for dimensionality reduction. It performs a linear mapping of the data from a higher-dimensional space to a lower-dimensional space in such a manner that the variance of the data in the low-dimensional representation is maximized.
PCA is an unsupervised machine learning technique, so we do not need to provide a target variable with category labels.
PCA works through the following steps (a minimal NumPy sketch follows the list):
- Calculate the covariance matrix of the features
- Calculate the eigenvectors and eigenvalues of the covariance matrix
- Sort the eigenvectors in descending order of their eigenvalues
- Choose the first k eigenvectors; k is the number of new dimensions
- Transform the original data into the k dimensions
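As an illustration, these steps can be sketched directly in NumPy. This is a minimal sketch on synthetic random data (the array X and the value k = 2 are hypothetical), not part of the wine implementation that follows.
import numpy as np

# Minimal PCA sketch on synthetic data: 5 features reduced to k = 2
rng = np.random.default_rng(0)
X = rng.normal(size = (100, 5))

# Step 1: covariance matrix of the (mean-centred) features
X_centered = X - X.mean(axis = 0)
cov = np.cov(X_centered, rowvar = False)

# Step 2: eigenvalues and eigenvectors of the covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)

# Step 3: sort the eigenvectors in descending order of their eigenvalues
order = np.argsort(eigvals)[::-1]
eigvecs = eigvecs[:, order]

# Steps 4-5: keep the first k eigenvectors and project the data onto them
k = 2
X_reduced = X_centered @ eigvecs[:, :k]
print(X_reduced.shape)   # (100, 2)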
In this implementation, we have used the wine classification dataset, which is publicly available on Kaggle.
#1. Import the libraries
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
#2. Import the dataset
dataset = pd.read_csv('Wine.csv')
X = dataset.iloc[:, 0:13].values
y = dataset.iloc[:, 13].values
#3. Split the dataset into the Training set and Test set
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 0)
#4. Feature Scaling
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
#5. Apply PCA
from sklearn.decomposition import PCA
pca = PCA(n_components = 2)
X_train = pca.fit_transform(X_train)
X_test = pca.transform(X_test)
explained_variance = pca.explained_variance_ratio_
#6. Fit the Logistic Regression to the Training set
from sklearn.linear_model import LogisticRegression
classifier = LogisticRegression(random_state = 0)
classifier.fit(X_train, y_train)
#7. Predict the Test set results
y_pred = classifier.predict(X_test)
#8. Make the Confusion Matrix
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)
-----Result-----
Confusion matrix of the test set predictions
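To inspect this result numerically, the quantities computed in the steps above can be printed; this optional snippet simply reuses the variables defined there.
# Optional: inspect the explained variance and the test set performance
from sklearn.metrics import accuracy_score
print('Explained variance ratio of the 2 components:', explained_variance)
print('Confusion matrix:\n', cm)
print('Test accuracy:', accuracy_score(y_test, y_pred))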
B. Linear Discriminant Analysis
Linear Discriminant Analysis (LDA) is used to find a linear combination of features that characterizes or separates two or more classes of objects or events. It explicitly attempts to model the difference between the classes of data. It works when the measurements made on independent variables for each observation are continuous quantities. When dealing with categorical independent variables, the equivalent technique is discriminant correspondence analysis.
Unlike PCA, LDA requires you to provide both the features and the class labels of the target variable. Hence, despite being a dimensionality reduction technique similar to PCA, it sits within the supervised branch of machine learning.
The goal of LDA is to maximize the separability of the known categories in our target variable.
LDA works through the following steps (a minimal NumPy sketch follows the list):
- Calculate the distance between the mean of each class and the overall mean. This is known as the between-class variance
- Calculate the distance between the samples of each class and their class mean. This is known as the within-class variance
- Construct the lower-dimensional space that maximizes the between-class variance and minimizes the within-class variance
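For intuition, the scatter matrices behind these steps can be written out in NumPy. This is a minimal sketch on synthetic data with three classes (the arrays X and y are hypothetical); the wine implementation that follows uses scikit-learn's LinearDiscriminantAnalysis instead.
import numpy as np

# Minimal LDA sketch: between-class and within-class scatter on synthetic data
rng = np.random.default_rng(0)
X = rng.normal(size = (150, 4))
y = np.repeat([0, 1, 2], 50)

overall_mean = X.mean(axis = 0)
n_features = X.shape[1]
S_b = np.zeros((n_features, n_features))   # between-class scatter
S_w = np.zeros((n_features, n_features))   # within-class scatter

for c in np.unique(y):
    X_c = X[y == c]
    mean_c = X_c.mean(axis = 0)
    diff = (mean_c - overall_mean).reshape(-1, 1)
    S_b += len(X_c) * diff @ diff.T              # class mean vs overall mean
    S_w += (X_c - mean_c).T @ (X_c - mean_c)     # samples vs their class mean

# Directions that maximize between-class and minimize within-class variance
eigvals, eigvecs = np.linalg.eig(np.linalg.inv(S_w) @ S_b)
order = np.argsort(eigvals.real)[::-1]
W = eigvecs[:, order][:, :2].real   # keep 2 discriminant directions
X_lda = X @ W
print(X_lda.shape)   # (150, 2)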
#1. Import the libraries
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
#2. Import the dataset
dataset = pd.read_csv('Wine.csv')
X = dataset.iloc[:, 0:13].values
y = dataset.iloc[:, 13].values
#3. Split the dataset into Training set and Test set
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 0)
#4. Feature Scaling
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
#5. Apply LDA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA
lda = LDA(n_components = 2)
X_train = lda.fit_transform(X_train, y_train)
X_test = lda.transform(X_test)
#6. Fit Logistic Regression to the Training set
from sklearn.linear_model import LogisticRegression
classifier = LogisticRegression(random_state = 0)
classifier.fit(X_train, y_train)
#7. Predict the Test set results
y_pred = classifier.predict(X_test)
#8. Make the Confusion Matrix
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)
-----Result-----
Confusion matrix of the test set predictions
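As an optional check, the two discriminant components of the training set can be plotted and coloured by class; the snippet below reuses the variables defined in the steps above.
# Optional: visualise the training set in the 2-dimensional LDA space
for label in np.unique(y_train):
    plt.scatter(X_train[y_train == label, 0],
                X_train[y_train == label, 1],
                label = 'Class {}'.format(label))
plt.xlabel('LD1')
plt.ylabel('LD2')
plt.legend()
plt.title('Wine training set after LDA')
plt.show()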
C. Kernel Principal Component Analysis
Kernel Principal Component Analysis (KPCA) is an extension of PCA that is applied in non-linear applications by means of the kernel trick. It is capable of constructing nonlinear mappings that maximize the variance in the data.
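Conceptually, Kernel PCA replaces the eigendecomposition of the covariance matrix with an eigendecomposition of a centred kernel matrix. The example below is a minimal sketch of this idea with an RBF kernel on synthetic data (X, gamma and k are hypothetical); the implementation that follows uses scikit-learn's KernelPCA.
import numpy as np

# Minimal RBF Kernel PCA sketch on synthetic data
rng = np.random.default_rng(0)
X = rng.normal(size = (100, 3))
gamma, k = 1.0, 2

# 1. RBF kernel matrix: K[i, j] = exp(-gamma * ||x_i - x_j||^2)
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis = -1)
K = np.exp(-gamma * sq_dists)

# 2. Centre the kernel matrix in the implicit feature space
n = K.shape[0]
one_n = np.full((n, n), 1.0 / n)
K_centered = K - one_n @ K - K @ one_n + one_n @ K @ one_n

# 3. Eigendecomposition of the centred kernel matrix
eigvals, eigvecs = np.linalg.eigh(K_centered)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# 4. Projections onto the top k nonlinear components
X_kpca = eigvecs[:, :k] * np.sqrt(eigvals[:k])
print(X_kpca.shape)   # (100, 2)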
#1. Import the libraries
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
#2. Import the dataset
dataset = pd.read_csv('Social_Network_Ads.csv')
X = dataset.iloc[:, [2, 3]].values
y = dataset.iloc[:, 4].values
#3. Split the dataset into the Training set and Test set
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25, random_state = 0)
#4. Feature Scaling
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
#5. Apply Kernel PCA
from sklearn.decomposition import KernelPCA
kpca = KernelPCA(n_components = 2, kernel = 'rbf')
X_train = kpca.fit_transform(X_train)
X_test = kpca.transform(X_test)
#6. Fit Logistic Regression to the Training set
from sklearn.linear_model import LogisticRegression
classifier = LogisticRegression(random_state = 0)
classifier.fit(X_train, y_train)
#7. Predict the Test set results
y_pred = classifier.predict(X_test)
#8. Make the Confusion Matrix
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)
-----Result-----
Confusion matrix of the test set predictions
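Since the transformed data is two-dimensional, the decision regions of the logistic regression model can also be visualised on the test set; this optional snippet reuses the variables defined in the steps above (the Purchased target of this dataset has two classes).
# Optional: decision regions of the classifier in the Kernel PCA space (test set)
from matplotlib.colors import ListedColormap
X_set, y_set = X_test, y_test
x1, x2 = np.meshgrid(np.arange(X_set[:, 0].min() - 1, X_set[:, 0].max() + 1, 0.01),
                     np.arange(X_set[:, 1].min() - 1, X_set[:, 1].max() + 1, 0.01))
Z = classifier.predict(np.c_[x1.ravel(), x2.ravel()]).reshape(x1.shape)
plt.contourf(x1, x2, Z, alpha = 0.3, cmap = ListedColormap(('red', 'green')))
for i, label in enumerate(np.unique(y_set)):
    plt.scatter(X_set[y_set == label, 0], X_set[y_set == label, 1],
                color = ('red', 'green')[i], label = label)
plt.title('Logistic Regression after Kernel PCA (Test set)')
plt.xlabel('KPC1')
plt.ylabel('KPC2')
plt.legend()
plt.show()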
D. Comparison of PCA, LDA and Kernel PCA
All three dimensionality reduction techniques project the data onto a lower-dimensional space, but they differ in their characteristics and in the way they work.
1. The difference in Strategy
PCA and LDA are applied for dimensionality reduction when the problem at hand is linear, that is, when there is a linear relationship between the input and output variables.
On the other hand, Kernel PCA is applied when the problem is nonlinear, that is, when there is a nonlinear relationship between the input and output variables.
Therefore, PCA and LDA can be applied to the same dataset to compare their results, whereas Kernel PCA was applied to a different (nonlinear) dataset, so its result is not directly comparable with those of PCA and LDA.
Although PCA and LDA both work on linear problems, they differ further. LDA explicitly models the difference between the classes of the data, while PCA does not take class differences into account at all. PCA builds feature combinations that capture the maximum variance of the data, using the covariance matrix and its eigenvalues and eigenvectors, whereas the discriminant analysis in LDA builds feature combinations that best separate the classes.
2. The difference in Results
As we have seen in the practical implementations above, the classification results of the logistic regression model after PCA and after LDA are almost identical. The main reason for this similarity is that the same dataset was used in both implementations, the relationship between the input and output variables is linear, and the task in both cases was simply to reduce the number of input features. The two dimensionality reduction techniques are similar in purpose but follow different strategies and algorithms.
On the other hand, a different dataset was used with Kernel PCA because it is intended for cases where there is a nonlinear relationship between the input and output variables. Consequently, the classification results of the logistic regression model are different when Kernel PCA is used for dimensionality reduction.
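To make the PCA/LDA comparison concrete, both can be run side by side on the same split and the resulting test accuracies compared. This sketch assumes X_train, X_test, y_train and y_test still hold the scaled 13-feature wine split from steps 1 to 4 of the earlier sections (i.e. before PCA or LDA was applied).
# Side-by-side comparison of PCA and LDA on the same scaled wine split
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

for name, reducer in [('PCA', PCA(n_components = 2)),
                      ('LDA', LDA(n_components = 2))]:
    X_tr = reducer.fit_transform(X_train, y_train)   # PCA simply ignores y_train
    X_te = reducer.transform(X_test)
    clf = LogisticRegression(random_state = 0).fit(X_tr, y_train)
    print(name, 'test accuracy:', accuracy_score(y_test, clf.predict(X_te)))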