What is a Neural Network?
A Neural Network has a basis in biology. This is a neuron; our brains have at least one of these:
In the center is the Body, it has some Dendrites flowing in and some Axons flowing out. If enough electricity flows in via the Dendrites, the Body gets triggered and pushes some electricity out through the Axons. If we stick enough of these together (about 100 billion^{[1]}), we get a Brain.
That’s pretty much it.
If you code this up in JavaScript, you might conceptually build something like this.
We create an Acyclic Graph^{[2]}; big words, but it means a node acts as our Body, with some edges (Dendrites and Axons) going in and out, and the edges can’t loop back around and point at the node they came from.
We pump a number into each edge; this is the amount of electricity we are sending through it, perhaps the strength of a signal from a sensor in our Body.
But some electricity is more important than other electricity. For example, if this neuron were my should I open my umbrella neuron, the signal for I’m feeling rain on my head is more important than the signal for I see clouds. To model this in our artificial neuron, we use weights; a weight is a number we assign to each edge.
To model how the Body decides whether enough electricity has been pumped in, and how much to pump back out, we use something called an activation function.
To understand what happens next, take a look at this equation:

output = activation(input₁ × weight₁ + input₂ × weight₂ + … + inputₙ × weightₙ)
We multiply the electricity by the weight and add them all up across the inputs (the Dendrites). To model the Body, we pass that number into our activation function, and whatever number it returns is what we pass out through the output edges, the Axons.
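Conceptually, a single artificial neuron can be sketched in JavaScript like this (the names here are illustrative, not from any particular library):

```javascript
// A single artificial neuron: a weighted sum of inputs, then an activation function.
function neuron(inputs, weights, activation) {
  let sum = 0;
  for (let i = 0; i < inputs.length; i++) {
    sum += inputs[i] * weights[i]; // electricity × weight, per Dendrite
  }
  return activation(sum); // the Body decides what to push out the Axons
}

// A simple binary activation: fire (1) above 0, stay quiet (0) otherwise.
const binaryStep = (x) => (x > 0 ? 1 : 0);

neuron([0.5, 0.9], [0.2, 0.8], binaryStep); // weighted sum is 0.82, so this fires: 1
```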
There are lots of different types of activation functions, and this is a simple binary one:
If the input signal is above 0, we pass out 1. If the input signal is 0 or below, we pass out 0. It’s simple and easy to calculate, but it results in sharp changes to the output for minimal changes to the input.
There are others; another popular one is the hyperbolic tangent, tanh, which produces a much smoother change in the output signal for a change in the input signal.
Another popular one is ReLU, the Rectified Linear Unit. It is nonlinear but much cheaper for a computer to calculate than tanh, so it is often used as a compromise.
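As a rough sketch, the three activation functions mentioned here look like this in JavaScript:

```javascript
// Three common activation functions, sketched in plain JavaScript.
const binaryStep = (x) => (x > 0 ? 1 : 0); // sharp jump at 0
const tanh = Math.tanh;                    // smooth, outputs between -1 and 1
const relu = (x) => Math.max(0, x);        // nonlinear, but cheap to compute

relu(-2); // 0: negative signals are cut off
relu(3);  // 3: positive signals pass straight through
```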
Layered Neural Network
If you connect enough of these artificial neurons, you get a neural network.
This is what we call a simple four-layer feed-forward densely connected neural network:
That’s a lot of words, let’s break it down step by step.
4 Layer
There are four layers of neurons in this network: the Purple input layer on the left, the Red output layer on the right, and, in between, two of what we call Hidden layers.
Feed Forward
Feed forward^{[3]} means that at no point do the connections form a cycle; each layer connects only to the next layer.
There is nothing tricky where an output edge snakes back and becomes an input to an earlier node. There are Neural Networks architected like this, for instance the Recurrent Neural Network^{[4]}, but we won’t be covering those in this course.
Densely Connected
Densely Connected means that each node in one layer connects to each node in the next layer. For example, this is densely connected:
This is sparsely connected:
Using a Neural Network
Imagine this neural network is now a more sophisticated version of my how wet am I about to get? predictor.
The input layer takes in, as signals, some information:
- The number of raindrops detected per second on a sensor.
- How cloudy it is on a scale of 0 to 10, 10 being a thunderstorm and 0 being a warm sunny day.
- The current temperature.
The output node pumps out the predicted cm of rain that will fall in the next hour.
For all the activation functions, we used tanh.
If you have a fully trained Neural Network, then using it is simple.
We pump the current values of our signals into the input layer; they get multiplied by the weights and passed through activation functions, layer by layer, until the network outputs a number.
Important
I mentally think of it like so:
function rainPredictor(rainDropsPerSecond, cloudiness, temperature) {
  let cmsOfRainInNextHour = 0;
  // maths stuff involving all those weights
  return cmsOfRainInNextHour;
}
The critical thing is that the stuff in the code comment above is a single mathematical expression rather than a complex workflow with conditionals; you won’t find ifs and elses in a Neural Network.
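That forward pass through a densely connected network can be sketched like so, assuming the weights are stored as one matrix per layer (the shapes and names here are illustrative):

```javascript
// A sketch of a forward pass through a densely connected network.
// `layers` is an array of weight matrices; layers[l][j][i] is the weight
// on the edge from node i in layer l to node j in layer l + 1.
function forward(inputs, layers, activation) {
  let signals = inputs;
  for (const weights of layers) {
    // Each node: weighted sum of the previous layer, then the activation function.
    signals = weights.map((nodeWeights) =>
      activation(nodeWeights.reduce((sum, w, i) => sum + w * signals[i], 0))
    );
  }
  return signals;
}

// Tiny example: 3 inputs -> 2 hidden nodes -> 1 output node, all using tanh.
const layers = [
  [[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]], // input layer -> hidden layer
  [[0.7, 0.8]],                       // hidden layer -> output node
];
forward([0, 1.2, 23], layers, Math.tanh); // returns one predicted number
```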
That’s it, that’s all you need to do to use a Neural Network.
Training a Neural Network
The real question is: what should we use as the weights? The weights are crucial to everything; the weights are the Neural Network.
When we first create our network, we initialize the weights with random numbers, usually between 0 and 1.
So an untrained neural network is going to do a pretty lousy job of predicting rain.
Training a Neural Network is the process of tuning those weights so that the neural network gets better at pumping out useful numbers on the other side.
There are several ways of training a neural network, but the most popular and most accessible is to use supervised learning. This is like learning by trial and error. To do this, we need what’s called a labeled data set.
We need some historical data where for a given set of inputs we know what the output was, like so:
| Drops/s | Cloudy | Temp | Rain CM |
|---------|--------|------|---------|
| 0       | 1.2    | 23   | 0       |
| 1.2     | 4.5    | 15   | 1.2     |
The first line is one example: we have 0 drops per second, it’s not very cloudy, and it’s pretty warm, and we can see that an hour later 0 cm of rain fell.
The second line shows a more rainy example.
The first three columns are what we call features; these are the inputs to our neural network model, to our function. The last column is our label; this is what we would want our correctly trained neural network to output if we pumped in the input features.
For each line of our example data, we feed those numbers into our untrained model; we let it multiply all the way through and see what it outputs on the other side.
That’s our untrained model’s prediction; we compare that to the real CM of rainfall from the last column in the dataset.
We calculate how wrong our Neural Network is; we call that number the loss.
We pump in all our examples and get a total loss.
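As a sketch, the total loss over our example data set might be computed like this, assuming a mean squared error loss (one common choice; this chapter doesn’t pin down a specific loss function):

```javascript
// A sketch of computing the total loss over a labeled data set.
// `predict` stands in for our (untrained) neural network function.
const data = [
  { features: [0, 1.2, 23], label: 0 },     // sunny, no rain an hour later
  { features: [1.2, 4.5, 15], label: 1.2 }, // the rainier example
];

function totalLoss(predict, examples) {
  let loss = 0;
  for (const { features, label } of examples) {
    const prediction = predict(features);
    loss += (prediction - label) ** 2; // squared error: how wrong were we?
  }
  return loss / examples.length; // mean squared error over all examples
}

// An untrained "network" that always predicts 0 does badly on rainy days:
totalLoss(() => 0, data); // 0.72
```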
We then use a technique called Back Propagation; this adjusts the weights depending on how wrong our Neural Network was, tweaking each weight slightly in the right direction.
That’s one epoch: one iteration of training through all the data. We go through all our data again and again, running back propagation each time. Each pass tweaks the weights in the right direction, making our neural network more accurate.
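The whole training loop can be sketched like so; here `predict` and `backPropagate` are hypothetical stand-ins for the real forward pass and weight-update step, which we haven’t implemented:

```javascript
// A sketch of the supervised training loop: for each epoch, run every
// labeled example through the network, measure the loss, and nudge the weights.
function train(network, data, epochs) {
  for (let epoch = 0; epoch < epochs; epoch++) {
    for (const { features, label } of data) {
      const prediction = network.predict(features); // forward pass
      const loss = (prediction - label) ** 2;       // how wrong were we?
      network.backPropagate(loss); // hypothetical: tweak the weights slightly
    }
  }
}
```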
Eventually, we end up with a trained neural network that’s pretty accurate, so when I’m out and about, I can use it to predict the rain and whether I should open my umbrella.
Summary
That was a pretty simple Neural Network. They can get much more complicated than that, but the concepts and ideas remain the same.

- A neural network is a mathematical function, an expression; the challenge is how to represent your problem as a mathematical function.
- It contains some weights, which we initialize with random numbers; that’s why an untrained model doesn’t do a very good job.
- Training a neural network is the process of tuning those weights so that it gets better at predicting the right results.
- We initialize a network of nodes with edges and weights.
- The technique we will be using (the simplest and the most popular) is supervised learning, where we take sample data with known results and train the neural network using that.