
Visualizing Rosenblatt perceptron training

Training a single-layer, feed-forward binary classifier that uses the Heaviside step function as its activation
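As a quick refresher, the Heaviside step function collapses the weighted sum into a hard 0/1 decision; a one-line sketch, using the common convention that H(0) = 1:

// Heaviside step activation: 1 for non-negative input, 0 otherwise
const heaviside = (z) => (z >= 0 ? 1 : 0);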

[Interactive graph: training samples plotted on normalized axes from 0 to 1, each point labeled with its entity name, together with the perceptron's decision line.]

Our Perceptron is trained!

Note that it converges to any solution that separates the training set, not necessarily the same one each time. Retrain to see whether the boundary changes.

Now we can use it to classify samples it has never seen.

According to the perceptron, something with {{ ~~(predictedSet[predictedSet.length-1].x1 * maxX) }} teeth and {{ ~~(predictedSet[predictedSet.length-1].x2 * maxY) }} cm looks {{ predictedSet[predictedSet.length-1].cat ? 'nice' : 'NOT nice' }}.

{{ predictedSet.length ? "But w" : "W" }}ould something with {{ ~~(randomValue[0]*maxX) }} teeth and a size of {{ ~~(randomValue[1]*maxY) }} cm be likely to be nice?
p.predict({{ randomValue }})
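The page doesn't show predict's internals here; a minimal sketch of what such a method typically computes, a weighted sum of the normalized inputs passed through the Heaviside step. The names w1, w2 and bias are assumptions, not the demo's actual fields:

// Hypothetical sketch of a perceptron prediction:
// output = heaviside(w1*x1 + w2*x2 + bias)
function predict([x1, x2], { w1, w2, bias }) {
  const z = w1 * x1 + w2 * x2 + bias; // weighted sum of normalized inputs
  return z >= 0 ? 1 : 0;              // 1 = "nice", 0 = "NOT nice"
}

// e.g. predict([0.2, 0.1], { w1: 0.5, w2: -0.8, bias: 0.1 }) returns 1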

Training the Perceptron

We feed it a set of (hypothetical) inputs and expected outputs, according to this table*:

Entity | Teeth | Size (cm) | Friendly
{{item.name}} | {{item.x1}} | {{item.x2}} | {{item.cat ? 'yes' : 'no'}}
* I know it's not accurate, but it's for the sake of the demonstration.

We normalize our values to the range [0, 1] before sending them through training:
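Concretely, each feature is divided by its maximum observed value. A sketch, where maxX and maxY mirror the page's own variable names but the maxima themselves are illustrative:

const maxX = 60;   // illustrative: largest tooth count in the table
const maxY = 500;  // illustrative: largest size (cm) in the table
const normalize = (item) => ({
  ...item,
  x1: item.x1 / maxX, // teeth scaled into [0, 1]
  x2: item.x2 / maxY, // size scaled into [0, 1]
});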


const p = new Perceptron();
{{ code }}
p.learn(updateGraph);
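The {{ code }} placeholder is where the demo injects the normalized training-set setup, and learn's body isn't shown. Below is a sketch of the classic Rosenblatt update rule such a method would apply; the field names w1, w2, bias and the learning rate are assumptions:

// One pass over the samples: nudge the boundary for each misclassified point.
// Update rule: w <- w + lr * (target - prediction) * x
function learnEpoch(p, samples, lr = 0.1) {
  let errors = 0;
  for (const { x1, x2, cat } of samples) {
    const out = p.w1 * x1 + p.w2 * x2 + p.bias >= 0 ? 1 : 0; // Heaviside output
    const err = (cat ? 1 : 0) - out; // -1, 0, or +1
    if (err !== 0) {
      p.w1 += lr * err * x1;
      p.w2 += lr * err * x2;
      p.bias += lr * err;
      errors++;
    }
  }
  return errors; // repeat epochs until this is 0 (needs linearly separable data)
}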
          

The perceptron will try to find a linear separation, a straight line in this two-dimensional feature space, between the positive and negative examples.
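On the graph, that separation is the set of points where the weighted sum is exactly zero; solving w1*x1 + w2*x2 + bias = 0 for x2 gives the line to draw (same assumed names as above, valid when w2 is non-zero):

// Decision boundary: w1*x1 + w2*x2 + bias = 0, solved for x2
const boundaryX2 = (x1, { w1, w2, bias }) => -(w1 * x1 + bias) / w2;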
