
Artificial Intelligence

Unit 1
Optimization
Introduction to Optimization • Gradient Descent • Stochastic Gradient Descent • Adam Optimization
Unit 1 • Chapter 2

Gradient Descent

Video Summary

Gradient descent is an iterative optimization algorithm for finding a minimum of a function. It starts from an initial guess and repeatedly moves in the direction of the negative gradient of the function until a local minimum is reached. The algorithm is simple to implement, but it can be slow to converge, and on functions with many local minima it may settle in a local minimum rather than the global one. Here is a simple implementation of gradient descent:

```python
def gradient_descent(grad, x0, learning_rate, max_iter, tol=1e-6):
    # Initialize the current point.
    x = x0
    # Iterate until the maximum number of iterations is reached or the
    # gradient is small enough to consider the algorithm converged.
    for _ in range(max_iter):
        # Compute the gradient of the function at the current point.
        g = grad(x)
        # Convergence check (x is assumed scalar; use a norm for vectors).
        if abs(g) < tol:
            break
        # Update the current point in the direction of the negative gradient.
        x = x - learning_rate * g
    # Return the current point.
    return x
```
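As a quick sanity check, the sketch above can be applied to a simple quadratic whose minimum is known in closed form (the starting point and learning rate below are arbitrary choices for illustration):

```python
# Minimize f(x) = (x - 3)**2; its gradient is 2 * (x - 3) and its minimum is at x = 3.
x_min = gradient_descent(grad=lambda x: 2 * (x - 3),
                         x0=0.0,
                         learning_rate=0.1,
                         max_iter=1000)
print(x_min)  # prints a value very close to 3.0
```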

Knowledge Check

Which of the following is not a step in gradient descent?

Which of the following is not an optimization algorithm?

What is the goal of gradient descent?