Let’s start deep learning with neural networks.
In this tutorial you’ll learn how to build a neural network with TensorFlow.

Related Course:
Deep Learning with TensorFlow 2 and Keras

Training

The network will be trained on the MNIST database of handwritten digits, a dataset widely used in computer vision.

The MNIST database contains 28x28 pixel images, each representing a handwritten digit. You can view each digit as a 28x28 array of pixel values.

Each 28x28 array is flattened into a 1-d vector of 28 x 28 = 784 numbers. You’ll see the number 784 later in the code; this is where it comes from.
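
A minimal numpy sketch of this flattening (the zeros stand in for a real image):

import numpy as np

image = np.zeros((28, 28))   # a placeholder 28x28 image
flat = image.reshape(784)    # flatten to a 1-d vector
print(flat.shape)            # (784,)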

This dataset is included in TensorFlow by default:

from tensorflow.examples.tutorials.mnist import input_data

Example

Introduction

Build a simple neural network.
First import the required modules:

from __future__ import print_function

# Load MNIST data
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)

import tensorflow as tf
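
The one_hot=True flag above encodes each label as a vector of ten numbers with a single 1 at the index of the digit. For example, the digit 3 becomes:

[0, 0, 0, 1, 0, 0, 0, 0, 0, 0]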

Then define the model: how many nodes per hidden layer, how many classes the dataset has (10: the digits 0 to 9) and finally the batch size.

nrNodesHiddenLayer1 = 64
nrNodesHiddenLayer2 = 64
nrNodesHiddenLayer3 = 64
nrClasses = 10
batchSize = 100

Then define the placeholders. x holds the flattened input images (hence the 784), y holds the labels.

x = tf.placeholder('float', [None, 784])
y = tf.placeholder('float')

All constants are defined. You can now create the neural network.
Define weights and biases. A bias is a value that’s added to the weighted sum, before the activation function is applied.
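
To make this concrete, here is a minimal numpy sketch of what a single neuron computes (the numbers are illustrative, not part of the model below):

import numpy as np

inputs = np.array([0.5, -0.2, 0.1])
weights = np.array([0.4, 0.3, -0.8])
bias = 0.1

z = np.dot(inputs, weights) + bias   # weighted sum plus bias
activation = max(0.0, z)             # ReLU, the activation used below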

def neuralNetworkModel(data):
    hidden1Layer = {'weights': tf.Variable(tf.random_normal([784, nrNodesHiddenLayer1])),
                    'biases': tf.Variable(tf.random_normal([nrNodesHiddenLayer1]))}

    hidden2Layer = {'weights': tf.Variable(tf.random_normal([nrNodesHiddenLayer1, nrNodesHiddenLayer2])),
                    'biases': tf.Variable(tf.random_normal([nrNodesHiddenLayer2]))}

    hidden3Layer = {'weights': tf.Variable(tf.random_normal([nrNodesHiddenLayer2, nrNodesHiddenLayer3])),
                    'biases': tf.Variable(tf.random_normal([nrNodesHiddenLayer3]))}

    outputLayer = {'weights': tf.Variable(tf.random_normal([nrNodesHiddenLayer3, nrClasses])),
                   'biases': tf.Variable(tf.random_normal([nrClasses]))}

This is not the model yet; these are only randomly initialized numbers.
The method tf.random_normal creates random values drawn from a normal distribution. We just created them in the shape that we need.
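
For example, tf.random_normal([784, 64]) yields a 784x64 weight matrix; a quick sketch:

w = tf.random_normal([784, nrNodesHiddenLayer1])
print(w.shape)   # (784, 64)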

Let’s create the model:

    l1 = tf.add(tf.matmul(data, hidden1Layer['weights']), hidden1Layer['biases'])
    l1 = tf.nn.relu(l1)

    l2 = tf.add(tf.matmul(l1, hidden2Layer['weights']), hidden2Layer['biases'])
    l2 = tf.nn.relu(l2)

    l3 = tf.add(tf.matmul(l2, hidden3Layer['weights']), hidden3Layer['biases'])
    l3 = tf.nn.relu(l3)

    output = tf.matmul(l3, outputLayer['weights']) + outputLayer['biases']
    return output

Values flow into layer one: the input (data) is multiplied by the weights, then the bias is added with tf.add().
This process is repeated in each layer (layer 1 into layer 2, layer 2 into layer 3). Finally the output layer is returned.

The model needs to be trained, so let’s create a function for that. It makes a prediction with the model, then calculates the cost: a measure of how wrong the prediction is. A function like this is called a loss function.

def trainNeuralNetwork(x):
    prediction = neuralNetworkModel(x)
    cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=prediction, labels=y))
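
To see what the cross-entropy measures, here is a minimal numpy sketch for a single example (TensorFlow does this internally; the numbers are illustrative):

import numpy as np

logits = np.array([2.0, 1.0, 0.1])   # raw network outputs
label = np.array([1, 0, 0])          # one-hot target

softmax = np.exp(logits) / np.sum(np.exp(logits))   # probabilities
cross_entropy = -np.sum(label * np.log(softmax))    # small when confident and correct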

The cost needs to be minimized; pick an optimizer for it.

    optimizer = tf.train.AdamOptimizer().minimize(cost)
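
AdamOptimizer uses a learning rate of 0.001 by default; you can also set it explicitly:

    optimizer = tf.train.AdamOptimizer(learning_rate=0.001).minimize(cost)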

Let’s begin training. This is similar to the training we did on the linear regression model.

    no_epochs = 10
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())

        for epoch in range(no_epochs):
            epoch_loss = 0

            for _ in range(int(mnist.train.num_examples / batchSize)):
                epoch_x, epoch_y = mnist.train.next_batch(batchSize)
                _, c = sess.run([optimizer, cost], feed_dict={x: epoch_x, y: epoch_y})
                epoch_loss += c

            print('Epoch', epoch, 'completed out of', no_epochs, 'loss:', epoch_loss)

Then, after training, measure the accuracy on the test set:

        correct = tf.equal(tf.argmax(prediction, 1), tf.argmax(y, 1))
        accuracy = tf.reduce_mean(tf.cast(correct, 'float'))
        print('Accuracy:', accuracy.eval({x: mnist.test.images, y: mnist.test.labels}))
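
tf.argmax returns the index of the largest value, so comparing the argmax of the prediction with the argmax of the one-hot label checks whether the predicted class is correct. A quick numpy sketch:

import numpy as np

pred = np.array([0.1, 0.7, 0.2])   # scores per class
label = np.array([0, 1, 0])        # one-hot target

correct = np.argmax(pred) == np.argmax(label)   # True: both point to class 1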

Let’s call the training function!

trainNeuralNetwork(x)

If you want to get a single prediction (given an image, which class is it?), run this inside the session after training:

import numpy as np

# run inside the training session, after the training loop
x_in = np.expand_dims(mnist.test.images[0], axis=0)
classification = sess.run(tf.argmax(prediction, 1), feed_dict={x: x_in})
print(classification)
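
The result is an array with the predicted class index for the given image, for example [7].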

Neural network example

The complete code of the description above is below. You can tweak the number of neurons to optimize the accuracy of the network.
(I have slightly altered it.)


from __future__ import print_function

# Load MNIST data
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)

import tensorflow as tf

# Define model
nrNodesHiddenLayer1 = 64 # 1st layer of neurons
nrNodesHiddenLayer2 = 64 # 2nd layer of neurons
nrNodesHiddenLayer3 = 64
nrClasses = 10
batchSize = 128

# Placeholders
x = tf.placeholder('float', [None, 784])
y = tf.placeholder('float')

# Create neural network model
def neuralNetworkModel(data):
    hidden1Layer = {'weights': tf.Variable(tf.random_normal([784, nrNodesHiddenLayer1])),
                    'biases': tf.Variable(tf.random_normal([nrNodesHiddenLayer1]))}

    hidden2Layer = {'weights': tf.Variable(tf.random_normal([nrNodesHiddenLayer1, nrNodesHiddenLayer2])),
                    'biases': tf.Variable(tf.random_normal([nrNodesHiddenLayer2]))}

    hidden3Layer = {'weights': tf.Variable(tf.random_normal([nrNodesHiddenLayer2, nrNodesHiddenLayer3])),
                    'biases': tf.Variable(tf.random_normal([nrNodesHiddenLayer3]))}

    outputLayer = {'weights': tf.Variable(tf.random_normal([nrNodesHiddenLayer3, nrClasses])),
                   'biases': tf.Variable(tf.random_normal([nrClasses]))}

    # create flow: multiply by weights, add biases, apply ReLU
    l1 = tf.add(tf.matmul(data, hidden1Layer['weights']), hidden1Layer['biases'])
    l1 = tf.nn.relu(l1)

    l2 = tf.add(tf.matmul(l1, hidden2Layer['weights']), hidden2Layer['biases'])
    l2 = tf.nn.relu(l2)

    l3 = tf.add(tf.matmul(l2, hidden3Layer['weights']), hidden3Layer['biases'])
    l3 = tf.nn.relu(l3)

    output = tf.matmul(l3, outputLayer['weights']) + outputLayer['biases']
    return output

def trainNeuralNetwork(x):
    prediction = neuralNetworkModel(x)
    cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=prediction, labels=y))
    optimizer = tf.train.AdamOptimizer(learning_rate=0.1).minimize(cost)
    no_epochs = 10

    init = tf.global_variables_initializer()

    with tf.Session() as sess:
        sess.run(init)

        for epoch in range(no_epochs):
            epoch_loss = 0

            for _ in range(int(mnist.train.num_examples / batchSize)):
                epoch_x, epoch_y = mnist.train.next_batch(batchSize)
                _, c = sess.run([optimizer, cost], feed_dict={x: epoch_x, y: epoch_y})
                epoch_loss += c

            print('Epoch', epoch, 'completed out of', no_epochs, 'loss:', epoch_loss)

        correct = tf.equal(tf.argmax(prediction, 1), tf.argmax(y, 1))
        accuracy = tf.reduce_mean(tf.cast(correct, 'float'))
        print('Accuracy:', accuracy.eval({x: mnist.test.images, y: mnist.test.labels}))


trainNeuralNetwork(x)