Gradient Descent Algorithm


Gradient Descent Algorithm for Artificial Neural Networks in Machine Learning – 17CS73

Gradient Descent and Delta Rule in ANN

Gradient descent and the delta rule are used to handle training data that is not linearly separable: where the perceptron rule fails to converge on such data, the delta rule converges toward a best-fit approximation to the target concept.

Weights are updated using the following rule:

w_i ← w_i + Δw_i, where Δw_i = η(t − o)x_i

Where,

η is the learning rate, t is the target output, o is the output of the linear unit, and x_i is the i-th input of the training example.
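The update rule above can be sketched as a small Python function; the learning rate, initial weights, and training example here are illustrative values, not taken from the tutorial.

```python
def delta_rule_update(weights, x, target, eta=0.1):
    """Apply one delta-rule update to a linear unit: w_i += eta * (t - o) * x_i."""
    # Linear unit output: o = w . x
    output = sum(w * xi for w, xi in zip(weights, x))
    return [w + eta * (target - output) * xi for w, xi in zip(weights, x)]

weights = [0.0, 0.0]
x = [1.0, 2.0]   # x[0] = 1.0 can serve as the bias input
target = 1.0
new_w = delta_rule_update(weights, x, target)
print(new_w)  # o = 0 here, so each w_i grows by 0.1 * (1.0 - 0.0) * x_i
```

With o = 0 and t = 1, the update moves each weight in the direction of its input, scaled by the error and the learning rate.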

Gradient Descent Algorithm

Gradient descent is an important general paradigm for learning.

It is a strategy for searching through a large or infinite hypothesis space that can be applied whenever

  1. the hypothesis space contains continuously parameterized hypotheses (e.g., the weights in a linear unit), and
  2. the error can be differentiated with respect to these hypothesis parameters.
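Both conditions hold for a linear unit trained under the usual squared-error measure; differentiating the error with respect to each weight gives the gradient that the delta rule follows (a standard derivation, summarized here):

```latex
E(\vec{w}) = \frac{1}{2} \sum_{d \in D} (t_d - o_d)^2,
\qquad o_d = \vec{w} \cdot \vec{x}_d

\frac{\partial E}{\partial w_i}
= \sum_{d \in D} (t_d - o_d)\,\frac{\partial}{\partial w_i}(t_d - \vec{w} \cdot \vec{x}_d)
= -\sum_{d \in D} (t_d - o_d)\, x_{id}

\Delta w_i = -\eta \,\frac{\partial E}{\partial w_i}
= \eta \sum_{d \in D} (t_d - o_d)\, x_{id}
```

Stepping in the direction of the negative gradient is what makes each update decrease the error, and summing over the training set D gives the batch form of the delta rule.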

The key practical difficulties in applying gradient descent are

  1. Converging to a local minimum can sometimes be quite slow (i.e., it can require many thousands of gradient descent steps), and
  2. If there are multiple local minima in the error surface, then there is no guarantee that the procedure will find the global minimum.
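Putting the pieces together, batch gradient descent for a linear unit can be sketched as follows; the data, learning rate, and epoch count are assumptions chosen for the illustration, not values from the tutorial.

```python
def train_linear_unit(examples, eta=0.05, epochs=200):
    """Batch gradient descent on squared error E = 1/2 * sum (t - o)^2.

    examples: list of (input_vector, target) pairs. Returns learned weights.
    """
    n = len(examples[0][0])
    weights = [0.0] * n
    for _ in range(epochs):
        delta = [0.0] * n
        for x, t in examples:
            o = sum(w * xi for w, xi in zip(weights, x))   # linear unit output
            for i, xi in enumerate(x):
                delta[i] += eta * (t - o) * xi             # accumulate over the batch
        weights = [w + d for w, d in zip(weights, delta)]  # one gradient step per epoch
    return weights

# Noisy data around t = 2x + 1 (not exactly realizable); x[0] = 1 supplies the bias.
data = [([1.0, x], 2.0 * x + 1.0 + noise)
        for x, noise in [(0.0, 0.1), (1.0, -0.1), (2.0, 0.05), (3.0, -0.05)]]
w = train_linear_unit(data)
print(w)  # converges toward the best-fit weights, roughly bias ~1 and slope ~2
```

Because the error surface of a linear unit is a paraboloid with a single global minimum, this sketch sidesteps the local-minima difficulty noted above; that difficulty arises for multilayer networks, where the error surface is no longer convex.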

Summary

This tutorial discusses the Gradient Descent Algorithm in Machine Learning.
