# Neural Network

In this lesson, we briefly introduce how to build a neural network with `sklearn`.

Although `sklearn` focuses on traditional Machine Learning, it still provides some methods to build a simple feedforward neural network. In this lesson, we give a brief introduction to how to use them.

## Modeling with `MLPClassifier`

Let’s skip data loading and splitting, and create an `MLPClassifier` object from the `neural_network` module. `MLP` stands for multilayer perceptron. As you can see, the neural network takes a lot of parameters. If you are familiar with Deep Learning, you may know that hyperparameter tuning is a very important and time-consuming part of training a neural network.

Following are the parameters we set below:

- `batch_size`: Neural networks generally use stochastic optimizers, so each update step is computed on a mini-batch of samples; this sets the mini-batch size.
- `solver`: The optimizer is another big topic in Deep Learning; here we just choose the simplest one, stochastic gradient descent (`"sgd"`).
- `shuffle`: Whether to shuffle the samples in each iteration. This increases the randomness of training and can improve training efficiency.
- `tol`: The convergence tolerance; training stops when the change in loss between two iterations is less than this threshold.
- `max_iter`: The maximum number of iterations.
- `learning_rate_init`: The learning rate is the most important hyperparameter in Deep Learning, and is also a very big topic. When `solver="sgd"`, you can choose a learning rate schedule for parameter updates; the default setting is `constant`. Here we only set `learning_rate_init`, so the learning rate stays at `0.001` throughout training.
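To make the schedule options concrete, the sketch below constructs (but does not train) classifiers with each of the three schedules that `MLPClassifier` accepts when `solver="sgd"`. This is only an illustration of the constructor arguments; the specific values are placeholders.

```python
from sklearn.neural_network import MLPClassifier

# Schedules available when solver="sgd":
#   "constant"   -- keep learning_rate_init fixed throughout training (the default)
#   "invscaling" -- gradually decay the rate at each step t
#   "adaptive"   -- keep the rate fixed, then shrink it when the loss stops improving
for schedule in ["constant", "invscaling", "adaptive"]:
    clf = MLPClassifier(solver="sgd",
                        learning_rate=schedule,
                        learning_rate_init=0.001)
    print(clf.learning_rate, clf.learning_rate_init)
```

Since we leave `learning_rate` at its default in this lesson, our network uses the `constant` schedule.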

The code below shows how to create a Neural Network with these parameters. The complete code would be shown at the end of this lesson.

```python
from sklearn.neural_network import MLPClassifier

nn = MLPClassifier(batch_size=32,
                   hidden_layer_sizes=(64, 32),
                   solver="sgd",
                   shuffle=True,
                   tol=1e-3,
                   max_iter=300,
                   learning_rate_init=0.001)
```
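Since data loading and splitting were skipped above, here is a minimal end-to-end sketch that trains this network on a stand-in dataset. The Iris dataset and the standardization step are assumptions for illustration, not part of the lesson's own data; neural networks are sensitive to feature scale, so standardizing the inputs usually helps `sgd` converge.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier

# Iris is used here purely as an example dataset
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# Standardize features; sgd converges poorly on unscaled inputs
scaler = StandardScaler().fit(X_train)
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

nn = MLPClassifier(batch_size=32,
                   hidden_layer_sizes=(64, 32),
                   solver="sgd",
                   shuffle=True,
                   tol=1e-3,
                   max_iter=300,
                   learning_rate_init=0.001,
                   random_state=0)
nn.fit(X_train, y_train)
print("test accuracy:", nn.score(X_test, y_test))
```

`fit` and `score` follow the same estimator interface as the other `sklearn` models covered earlier, which is what makes swapping a neural network into an existing pipeline straightforward.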
