Perceptron
The Perceptron is inspired by the information processing of a single neural cell called a neuron.
A neuron accepts input signals via its dendrites, which pass the electrical signal down to the cell body.
In a similar way, the Perceptron receives input signals from examples of training data that we weight and combine in a linear equation called the activation.
$activation = \sum_{i} (weight_i \times x_i) + bias$
The activation is then transformed into an output value or prediction using a transfer function, such as the step transfer function.
$prediction = 1.0 \text{ if } activation \geq 0.0 \text{ else } 0.0$
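As a minimal sketch of these two steps (the names `predict`, `weights`, and `bias` below are illustrative choices for this example, not anything prescribed above), the prediction for one row of data might look like this:

```python
# Illustrative sketch: activation followed by the step transfer function.
# Function and variable names are assumptions made for this example.

def predict(row, weights, bias):
    # Activation: weighted sum of the inputs plus the bias.
    activation = bias + sum(w * x for w, x in zip(weights, row))
    # Step transfer function: 1.0 if activation >= 0.0 else 0.0.
    return 1.0 if activation >= 0.0 else 0.0

# Example with two inputs and hand-picked weights.
print(predict([2.0, 1.0], weights=[0.5, -1.0], bias=0.1))  # prints 1.0
```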
In this way, the Perceptron is a classification algorithm for problems with two classes (0 and 1) where a linear equation (i.e. a line or hyperplane) can be used to separate the two classes.
It is closely related to linear regression and logistic regression, which make predictions in a similar way (e.g. a weighted sum of inputs).
The weights of the Perceptron algorithm must be estimated from your training data using stochastic gradient descent.
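A hedged sketch of that estimation step is shown below, assuming the standard perceptron update rule $w_i = w_i + learning\_rate \times (expected - predicted) \times x_i$; the dataset, learning rate, and epoch count are made up purely for illustration.

```python
# Sketch of estimating Perceptron weights with stochastic gradient descent.
# Uses the standard perceptron update rule:
#   w_i = w_i + learning_rate * (expected - predicted) * x_i
# Dataset, learning rate, and epoch count are illustrative assumptions.

def train_weights(dataset, learning_rate=0.1, n_epochs=20):
    n_inputs = len(dataset[0]) - 1          # last column is the class label
    weights = [0.0] * n_inputs
    bias = 0.0
    for _ in range(n_epochs):
        for row in dataset:
            inputs, expected = row[:-1], row[-1]
            activation = bias + sum(w * x for w, x in zip(weights, inputs))
            predicted = 1.0 if activation >= 0.0 else 0.0
            error = expected - predicted
            # Update the bias and each weight in proportion to the error.
            bias += learning_rate * error
            weights = [w + learning_rate * error * x
                       for w, x in zip(weights, inputs)]
    return weights, bias

# Tiny linearly separable example: the logical AND function.
and_data = [[0, 0, 0], [0, 1, 0], [1, 0, 0], [1, 1, 1]]
w, b = train_weights(and_data)
print(w, b)
```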
In Machine Learning, a perceptron model is a supervised learning algorithm for binary classification. Acting as a single neuron, the perceptron takes an input, computes a weighted sum, and assigns the input to one of the two classes.
Representing a biological neuron in the human brain, the perceptron model (or simply a perceptron) acts as an artificial neuron that performs a simplified version of what the brain's neurons do. As a linear ML algorithm, the perceptron performs binary (two-class) classification and learns from the information contained in its inputs, storing it in its weights.
Invented by Frank Rosenblatt in 1957, the perceptron model is a foundational element of Machine Learning and is used primarily for classification.
There are four constituents of a perceptron model (each is labeled in the sketch after this list):
Input values
Weights and bias
Net sum
Activation function
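To make the list concrete, here is an illustrative mapping of the four constituents onto code; the class name and attribute names are assumptions made for this sketch only.

```python
# Illustrative sketch labeling the four constituents of a perceptron model.
# The class and attribute names are assumptions, not a standard API.

class Perceptron:
    def __init__(self, weights, bias):
        self.weights = weights   # constituent 2: weights ...
        self.bias = bias         # ... and bias

    def forward(self, inputs):
        # Constituent 1: input values arrive as a list of numbers.
        # Constituent 3: net sum of the weighted inputs plus the bias.
        net = self.bias + sum(w * x for w, x in zip(self.weights, inputs))
        # Constituent 4: the activation (step) function turns the net sum
        # into a class label.
        return 1.0 if net >= 0.0 else 0.0
```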
The perceptron model enables machines to automatically learn the weight coefficients used to classify the inputs. Also known as the Linear Binary Classifier, the perceptron model is efficient and helpful for arranging input data and sorting it into different classes.
Single Layer Perceptron- The single layer perceptron is defined by its ability to linearly classify inputs. This kind of model uses a single hyperplane and classifies inputs according to the weights it has already learned.
Multi-Layer Perceptron- The multi-layer perceptron is defined by its ability to use multiple layers while classifying inputs. This type is a higher-capacity algorithm that allows machines to classify inputs by passing them through more than one layer of neurons.
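As a minimal sketch of what an extra layer adds, the forward pass below uses hand-picked (not learned) weights to compute the XOR function, a problem no single hyperplane can separate; everything here is illustrative rather than how a multi-layer perceptron is trained in practice.

```python
# Illustrative forward pass through a two-layer network. Weights are
# hand-picked, not learned, to show that a second layer can separate a
# problem a single hyperplane cannot: the XOR function.

def step(activation):
    return 1.0 if activation >= 0.0 else 0.0

def mlp_forward(x1, x2):
    # Hidden layer: two neurons, each a small perceptron of its own.
    h1 = step(1.0 * x1 + 1.0 * x2 - 0.5)    # fires when at least one input is on (OR)
    h2 = step(1.0 * x1 + 1.0 * x2 - 1.5)    # fires when both inputs are on (AND)
    # Output layer combines the hidden outputs with one more perceptron.
    return step(1.0 * h1 - 1.0 * h2 - 0.5)  # OR and not AND, i.e. XOR

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, mlp_forward(a, b))   # prints 0, 1, 1, 0
```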