Exercises

Test what we’ve learned about optimization algorithms, SciPy, vectors, and matrices.

Exercise 1: Extending binary search

We learned how to generalize gradient descent and Newton’s method to handle more variables, but what about binary search? Let’s work it out ourselves. Adapt the binary search method from the first lesson of this section to solve a problem with two variables, and then use it to solve the following problem:

\min_{x, y}\; x + y \quad \text{s.t.: } x + y > 1

def f(x, y):
    return x + y

def constraint(x, y):
    return x + y > 1

def binary_search(a1, b1, a2, b2, f, cons, tol):
    '''
    Now we need two intervals, one for each variable.
    '''
    # Remove the following line and complete the code.
    pass
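If you get stuck, here is one possible sketch, not the course’s reference solution. It assumes the one-dimensional method bisects an interval and keeps the half that still contains feasible points, and that the objective increases in both variables (as x + y does), so smaller feasible values are always preferable. The helper name binary_search_2d and the values in the example call are only illustrative.

def binary_search_2d(a1, b1, a2, b2, f, cons, tol):
    '''
    Bisect both intervals at once, keeping the feasible halves.
    Assumes f increases in each variable, so the lower ends of the
    intervals are preferable whenever they remain feasible.
    '''
    while (b1 - a1) > tol or (b2 - a2) > tol:
        m1 = (a1 + b1) / 2
        m2 = (a2 + b2) / 2
        if cons(m1, m2):
            # The midpoint is feasible, so smaller values may still work.
            b1, b2 = m1, m2
        else:
            # The midpoint violates the constraint, so move both lower bounds up.
            a1, a2 = m1, m2
    return b1, b2, f(b1, b2)

# Illustrative call: binary_search_2d(0, 2, 0, 2, f, constraint, 1e-6)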

Exercise 2: Maximizing

We stated that gradient descent and Newton’s method can easily be adapted to solve maximization problems. But what conditions must a problem satisfy for these algorithms to find its maximum? Change the methods and solve the following problem with both of them:

\max_x\; -(x + 3)^2 + 5

import numpy as np

# This would be a gradient ascent instead of a descent.
def gradient_ascent(start, gradient, max_iter, learning_rate, tol=0.01):
    # Remove the following line and complete the code.
    pass
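As a point of comparison, a minimal gradient-ascent sketch is shown below; it is an assumption of how the completed exercise might look, not the reference solution. The only change from descent is stepping along +gradient instead of -gradient; the stopping rule (step norm below tol) and the example call are illustrative. Just as descent relies on a convex objective to guarantee reaching a global minimum, ascent relies on a concave one, which -(x + 3)^2 + 5 is.

import numpy as np

def gradient_ascent(start, gradient, max_iter, learning_rate, tol=0.01):
    # Climb along the gradient until the step becomes smaller than tol.
    x = np.asarray(start, dtype=float)
    for _ in range(max_iter):
        step = learning_rate * gradient(x)  # step uphill, not downhill
        x = x + step
        if np.linalg.norm(step) < tol:
            break
    return x

# Illustrative call: the gradient of -(x + 3)^2 + 5 is -2 * (x + 3).
# gradient_ascent(np.array([10.0]), lambda x: -2 * (x + 3), 1000, 0.1)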

Exercise 3: Implementing linear regression

Linear regression is one of the most popular machine learning algorithms. Given a function and a set of inputs and outputs of that function, we want to approximate it with a linear function as accurately as we can.

We call $f$ the original function and $\hat{f}$ the approximation we’re looking for. The input is a vector $\mathbf{x} = (x_1, x_2, ..., x_m)$ ...