Related Course:
Deep Learning with TensorFlow 2 and Keras

A perceptron is a simple algorithm for binary classification: put simply, it establishes whether an input belongs to a certain category of interest or not.

It also holds an important place in the history of neural networks and artificial intelligence, because Frank Rosenblatt originally conceived it as a device, an actual machine, rather than as an algorithm.

A perceptron is a linear classifier: it separates the two categories with a straight line (more generally, a hyperplane) in the input space.

The input is usually viewed as a feature vector x, which is multiplied by a weight vector w and added to a bias b: y = w · x + b.

The classifier then produces a single binary output from these real-valued inputs by thresholding this linear combination: if the result exceeds zero, the input is assigned to one class, otherwise to the other.
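
As a minimal sketch of this forward pass (using NumPy; the weights w = [1, 1] and bias b = -1.5 are hand-picked assumptions that happen to implement a logical AND):

import numpy as np

def perceptron_predict(x, w, b):
    # linear combination of inputs and weights plus bias, then a threshold
    return int(np.dot(w, x) + b > 0)

# hand-picked parameters for illustration: these implement logical AND
w = np.array([1.0, 1.0])
b = -1.5

print(perceptron_predict(np.array([0, 0]), w, b))  # 0
print(perceptron_predict(np.array([1, 1]), w, b))  # 1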

Single-layer vs. multi-layer perceptrons

Rosenblatt built the single-layer perceptron as a hardware algorithm with no hidden layers. Because it lacks those layers, it cannot build up a hierarchy of features, and therefore it cannot perform non-linear classification; the classic counterexample is the XOR problem, which no single straight line can separate.

A multilayer perceptron (MLP), on the other hand, is a deeper artificial neural network: it is composed of more than one layer of perceptrons. This stack of perceptrons consists of an input layer that receives the signal, an output layer that makes the decision or prediction about the input, and an arbitrary number of hidden layers in between, which provide the true computational power of the MLP.

Multilayer perceptrons are typically applied to supervised learning problems: they train on a set of input-output pairs and learn to model the dependency between those inputs and outputs.

Training adjusts the parameters of the model, its weights and biases, with the sole purpose of minimizing the error.

Backpropagation is what computes those weight and bias adjustments, and the error itself can be measured in a number of ways, for example as the root mean squared error (RMSE).
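
As a quick illustration, RMSE can be computed directly with NumPy; the labels and predictions below are made-up values, used only to show the formula in action:

import numpy as np

# made-up ground-truth labels and model predictions, for illustration only
y_true = np.array([0.0, 1.0, 1.0, 0.0])
y_pred = np.array([0.1, 0.9, 0.8, 0.3])

# root mean squared error: square the residuals, average them, take the root
rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
print(rmse)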

An MLP is therefore trained with two passes, one forward and one backward. In the forward pass, the signal travels from the input layer through the hidden layers to the output layer, and the output layer's decision is measured against the ground-truth labels. In the backward pass, the error is propagated back through the MLP; differentiation provides the gradient of the error with respect to every weight and bias, and any gradient-based optimisation algorithm can then use that error landscape to adjust the parameters.
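
To make the two passes concrete, here is a minimal NumPy sketch of a one-hidden-layer MLP with sigmoid activations and a squared-error loss, trained on the XOR problem mentioned above; the hidden-layer size, learning rate and iteration count are illustrative assumptions rather than values from this article:

import numpy as np

rng = np.random.default_rng(0)

# XOR data: the problem a single-layer perceptron cannot solve
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# one hidden layer with 4 units (an assumption for this sketch)
W1 = rng.normal(size=(2, 4))
b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5  # learning rate, chosen for illustration
for _ in range(5000):
    # forward pass: input -> hidden -> output
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # backward pass: propagate the squared error and compute gradients
    d_out = (out - y) * out * (1 - out)   # error at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)    # error backpropagated to the hidden layer

    # gradient-descent update of all weights and biases
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

# after training, the predictions should be close to the XOR targets
print(out.round())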

Multi-layer perceptron in sklearn

In theory, a multi-layer perceptron can approximate virtually any continuous function, but it is harder to interpret than more transparent models such as linear regression. The example below uses scikit-learn's MLPClassifier to train on two labeled points and then predict the class of new inputs.

from sklearn.neural_network import MLPClassifier

# training data: two points with their binary class labels
X = [[0, 0], [1, 1]]
y = [0, 1]

# create multi-layer perceptron classifier
clf = MLPClassifier(solver='lbfgs', alpha=1e-5,
                    hidden_layer_sizes=(5, 2), random_state=1)

# train
clf.fit(X, y)

# make predictions on unseen inputs
print(clf.predict([[2., 2.]]))
print(clf.predict([[0, -1]]))
print(clf.predict([[1, 2]]))