Showing posts with label 5-Deep Learning.
11/08/2021
Loss and Loss Functions for Training Deep Learning Neural Networks
Neural networks are trained using stochastic gradient descent, which requires that you choose a loss function when designing and configuring your model.
There are many loss functions to choose from, and it can be challenging to know which one to pick, or even what a loss function is and the role it plays when training a neural network.
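To show where the loss enters the picture, here is a minimal sketch using the Keras API (the layer sizes and input shape are illustrative placeholders, not from the post): the loss function is specified when the model is compiled, before training with stochastic gradient descent.

```python
# Minimal sketch: choosing a loss function at compile time in Keras.
# Layer sizes and the input shape below are illustrative placeholders.
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Dense(32, activation="relu", input_shape=(20,)),
    keras.layers.Dense(1),
])

# Mean squared error suits a regression problem; a classification
# model would use a loss such as "binary_crossentropy" instead.
model.compile(optimizer="sgd", loss="mse")
```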
06/08/2021
How to Accelerate Learning of Deep Neural Networks With Batch Normalization
Batch normalization is a technique designed to automatically standardize the inputs to a layer in a deep learning neural network.
Once implemented, batch normalization has the effect of dramatically accelerating the training process of a neural network and, in some cases, improving the performance of the model via a modest regularization effect.
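As a hedged sketch of how this looks in practice (Keras API assumed; sizes are placeholders), batch normalization is commonly inserted between a layer's linear transform and its activation:

```python
# Minimal sketch: BatchNormalization placed between a Dense layer and
# its activation, so the activation receives standardized inputs.
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Dense(64, input_shape=(20,)),   # linear transform
    keras.layers.BatchNormalization(),           # standardize layer inputs
    keras.layers.Activation("relu"),             # then apply the activation
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="sgd", loss="binary_crossentropy")
```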
05/08/2021
LeNet-5 Architecture
LeNet-5 is one of the simplest and earliest convolutional neural network architectures, proposed by Yann LeCun in 1998.
The network has five layers with learnable parameters, hence the name LeNet-5. It has three convolutional layers interleaved with average pooling. After the convolution and average pooling layers come two fully connected layers. Finally, a softmax classifier assigns each image to its class.
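The layer stack described above translates fairly directly into code. This is a minimal sketch in Keras, not LeCun's original implementation; the tanh activations follow the common retelling of the 1998 paper.

```python
# Minimal LeNet-5 sketch: 3 convolutions, 2 average-pooling layers,
# 2 fully connected layers, and a softmax output, for 32x32 gray images.
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Conv2D(6, 5, activation="tanh", input_shape=(32, 32, 1)),  # C1
    keras.layers.AveragePooling2D(2),                                       # S2
    keras.layers.Conv2D(16, 5, activation="tanh"),                          # C3
    keras.layers.AveragePooling2D(2),                                       # S4
    keras.layers.Conv2D(120, 5, activation="tanh"),                         # C5
    keras.layers.Flatten(),
    keras.layers.Dense(84, activation="tanh"),                              # F6
    keras.layers.Dense(10, activation="softmax"),                           # output
])
model.summary()
```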
Convolutional Neural Networks for Machine Learning
These networks preserve the spatial structure of the problem.
CNNs are popular because they have achieved state-of-the-art results on difficult computer vision and natural language processing tasks.
Given a dataset of grayscale images with a standardized size of 32x32 pixels each, a traditional feedforward neural network would require 1024 input weights (plus one bias).
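The weight count is easy to verify. Here is a hedged sketch (Keras API assumed) contrasting the flattened feedforward view with a convolutional layer that keeps the 2-D structure:

```python
# A Dense layer sees the 32x32 image as a flat vector of 1024 inputs,
# so a single output neuron needs 1024 weights plus one bias (1025).
from tensorflow import keras

mlp = keras.Sequential([
    keras.layers.Flatten(input_shape=(32, 32, 1)),
    keras.layers.Dense(1),
])

# A Conv2D layer preserves the spatial structure; one 3x3 filter needs
# only 3*3 weights plus one bias (10), reused across the whole image.
cnn = keras.Sequential([
    keras.layers.Conv2D(1, 3, input_shape=(32, 32, 1)),
])

print(mlp.layers[-1].count_params())  # 1025
print(cnn.layers[-1].count_params())  # 10
```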
03/08/2021
When to Use MLP, CNN, and RNN Neural Networks?
What neural network is appropriate for your predictive modeling problem?
When to Use Multilayer Perceptrons?
- Tabular datasets
- Classification prediction problems
- Regression prediction problems
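For the tabular case, here is a minimal MLP sketch for binary classification (Keras API assumed; the eight input features and layer widths are illustrative placeholders):

```python
# Minimal MLP sketch for a tabular binary classification problem.
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Dense(16, activation="relu", input_shape=(8,)),  # 8 features
    keras.layers.Dense(8, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),  # binary class probability
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
```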
01/08/2021
How to choose an Activation Function for Deep Learning
Activation functions are a critical part of the design of a neural network.
The choice of activation function in the hidden layers will control how well the network model learns the training dataset. The choice of activation function in the output layer will define the type of predictions the model can make.
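A small sketch of both choices (Keras API assumed; sizes are placeholders): ReLU is a common default for hidden layers, and the output activation is matched to the prediction type, here softmax for multi-class probabilities.

```python
# Hidden layer uses ReLU (affects how well the network learns);
# output layer uses softmax (defines the kind of prediction:
# one probability per class for a 3-class problem).
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Dense(32, activation="relu", input_shape=(10,)),
    keras.layers.Dense(3, activation="softmax"),
])
```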