Backtracking line search
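The documents below all concern backtracking line search: repeatedly shrinking a trial step size until the Armijo sufficient-decrease condition holds. As a minimal illustration of the technique this tag refers to (not code from any listed document; function and parameter names are my own, with the common defaults beta = 0.5 and c = 1e-4), a sketch in Python:

```python
import numpy as np

def backtracking_line_search(f, grad_f, x, direction,
                             alpha0=1.0, beta=0.5, c=1e-4):
    """Shrink alpha geometrically until the Armijo condition
    f(x + alpha*d) <= f(x) + c * alpha * grad_f(x)^T d holds."""
    alpha = alpha0
    fx = f(x)
    slope = grad_f(x) @ direction  # directional derivative; negative for a descent direction
    while f(x + alpha * direction) > fx + c * alpha * slope:
        alpha *= beta
    return alpha

# Example: one gradient-descent step on f(x) = x^T x
f = lambda x: x @ x
grad = lambda x: 2 * x
x = np.array([1.0, -2.0])
d = -grad(x)                      # steepest-descent direction
step = backtracking_line_search(f, grad, x, d)
```

On this example the initial step alpha = 1 overshoots (it reflects x through the origin, giving no decrease), so one halving is performed and the accepted step is 0.5, which lands exactly at the minimizer.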
Final Exam Guide
A Line-Search Descent Algorithm for Strict Saddle Functions with Complexity Guarantees∗
Newton Method for Stochastic Control Problems Emmanuel Gobet, Maxime Grangereau
A Field Guide to Forward-Backward Splitting with a FASTA Implementation
Finding Critical and Gradient-Flat Points of Deep Neural Network Loss Functions
arXiv:2108.10249v1 [math.OC] 23 Aug 2021 Adepoints
CS260: Machine Learning Algorithms Lecture 3: Optimization
Composing Scalable Nonlinear Algebraic Solvers
Painless Stochastic Gradient: Interpolation, Line-Search, and Convergence Rates
1 Gradient-Based Optimization
A Quasi-Newton Approach to Nonsmooth Convex Optimization Problems in Machine Learning
Line Search Algorithms
Benchmarking of Bound-Constrained Optimization Software Introduction
Line Search Algorithms for Locally Lipschitz Functions on Riemannian Manifolds
Approximately Exact Line Search∗
Gradient Descent Revisited
Numerical Methods for Unconstrained Optimization and Nonlinear Equations
New Quasi-Newton Optimization Methods for Machine Learning
Part 2. Gradient and Subgradient Methods for Unconstrained Convex Optimization
arXiv:1808.05160v2 [math.OC] 4 Apr 2019
Positive Semidefinite Matrix Factorization Based on Truncated
September 13 5.1 Topics Covered 5.2 Recap of Previous Lecture
Introduction to Optimization
Lecture 5: Gradient Descent Revisited 5.1 Choose Step Size
arXiv:1908.02246v1 [stat.ML]
Optimization by General-Purpose Methods
Choosing the Step Size: Intuitive Line Search Algorithms with Efficient Convergence
Big Batch SGD: Automated Inference Using Adaptive Batch Sizes
The Adaptive Sampling Gradient Method Optimizing Smooth Functions with an Inexact Oracle