Trace a Boundary

Explore why the straight-line decision boundary created by a perceptron is not enough.

We spent most of this course building classifiers: first a perceptron, and now a full-fledged neural network in the last few chapters. And yet, we might struggle to grasp intuitively what makes classifiers work. Why do perceptrons work well on some datasets and not on others? What do neural networks have that perceptrons do not? It’s hard to answer these questions because it’s hard to paint a mental image of a classifier doing its thing.

The next few lessons are all about that mental image. A new concept, called the decision boundary, will help us visualize how perceptrons and neural networks see the world. Not only will this insight nourish our intellect, it will also make it easier for us to build and tune neural networks in the future.

Let’s start by revisiting the perceptron.
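
As a reminder of the mechanics, a perceptron computes a weighted sum of its inputs, adds a bias, and squashes the result through an activation function such as the sigmoid. The sketch below is a minimal NumPy version of that forward pass; the function names and the sigmoid activation are illustrative assumptions, not necessarily the exact code from earlier chapters. The points where the weighted sum equals zero are precisely the decision boundary we’re about to visualize.

```python
import numpy as np

def sigmoid(z):
    # Squash the weighted sum into the (0, 1) range.
    return 1.0 / (1.0 + np.exp(-z))

def classify(x, w, b):
    # Perceptron forward pass: weighted sum plus bias, then activation.
    # Outputs above 0.5 fall on one side of the decision boundary
    # (where np.dot(x, w) + b == 0), outputs below 0.5 on the other.
    return sigmoid(np.dot(x, w) + b)
```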

Draw the boundary for linearly separable data

If we want to understand classification intuitively, then we need a dataset that we can visualize easily. MNIST, with its mind-boggling hundreds of dimensions, is way too complex for that. Instead, we’ll use a simpler, brain-friendly dataset:

Input_A              Input_B              Label
-0.470680718301      -1.905835436960      1
 0.9952553595720      1.4019246363100     0
-0.903484238413      -1.233058043620      1
-1.775876322450      -0.436802254656      1

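To see what linearly separable means here, we can plot the rows above and draw a straight line that keeps the two labels on opposite sides. The sketch below uses NumPy and Matplotlib; the weights and bias are hand-picked for illustration rather than learned by training.

```python
import numpy as np
import matplotlib.pyplot as plt

# The four sample rows from the table above.
X = np.array([
    [-0.470680718301, -1.905835436960],
    [ 0.995255359572,  1.401924636310],
    [-0.903484238413, -1.233058043620],
    [-1.775876322450, -0.436802254656],
])
y = np.array([1, 0, 1, 1])

# Hand-picked weights and bias (not trained), chosen so the line
# w[0] * Input_A + w[1] * Input_B + bias == 0 separates the two labels.
w = np.array([-1.0, -1.0])
bias = 0.0

# Solve the boundary equation for Input_B to draw it as a line.
a = np.linspace(-2, 2, 100)
boundary = -(w[0] * a + bias) / w[1]

plt.scatter(X[y == 1, 0], X[y == 1, 1], marker="^", label="label 1")
plt.scatter(X[y == 0, 0], X[y == 0, 1], marker="s", label="label 0")
plt.plot(a, boundary, linestyle="--", label="decision boundary")
plt.xlabel("Input_A")
plt.ylabel("Input_B")
plt.legend()
plt.show()
```

Because a perceptron’s boundary is always a straight line like this one, datasets that no single line can split are exactly where it breaks down, which is what the rest of this lesson explores.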