TensorBoard comes with TensorFlow by default, but the tensorboard command may not work out of the box. To show its install location:

pip3 show tensorboard
pip show tensorboard

If the command tensorboard doesn’t do anything, set up an alias in the shell:

alias tensorboard='python3 PATH/tensorboard/main.py'

Where PATH is the location shown by ‘pip show tensorboard’. In my case:

/home/linux/.local/lib/python3.5/site-packages/tensorboard

TensorBoard can create nice graphs from event files. You can create an event file from within your existing code. Add one line of code to the program you made before:

import tensorflow as tf

a = tf.add(2, 6)

with tf.Session() as ses:
    writer = tf.summary.FileWriter('./graphs', ses.graph)
    print(ses.run(a))
writer.close()  # flush the event file to disk

Save as tensor2.py

Then run the commands:

python tensor2.py
tensorboard --logdir="./graphs"

It will then boot up a server:

TensorBoard 1.9.0 at http://linux:6006 (Press CTRL+C to quit)

Variables must always be initialized. If you don’t, you’ll run into an error like this:

tensorflow.python.framework.errors_impl.FailedPreconditionError: Attempting to use uninitialized value Variable_1

Luckily initializing variables is a piece of cake.

A simple example that adds two scalars is below:

import tensorflow as tf

x = tf.Variable(1)
y = tf.Variable(6)
op1 = tf.add(x, y)

init = tf.variables_initializer([x, y], name="init")
with tf.Session() as ses:
    ses.run(init)
    print(ses.run(op1))

You may also initialize variables globally:

init = tf.global_variables_initializer()

If you have multiple sessions, each session has its own copy of the variables.

TensorFlow Linear Regression

Linear Regression in TensorFlow is easy to implement.

In the Linear Regression Model: The goal is to find a relationship between a scalar dependent variable y and independent variables X.

The model is based on real world data and can be used to make predictions. Of course you can use random data, but it makes more sense to use real world data.

Logistic regression uses the sigmoid function for classification problems. What is this function exactly?

The sigmoid function is:

1 / (1 + e^-t)

It’s an s-shaped curve.
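A minimal sketch of this function in plain Python (no TensorFlow needed) shows the s-shape staying between 0 and 1:

```python
import math

def sigmoid(t):
    # 1 / (1 + e^-t), the s-shaped curve from the formula above
    return 1.0 / (1.0 + math.exp(-t))

print(sigmoid(0))    # exactly 0.5, the middle of the curve
print(sigmoid(6))    # close to 1
print(sigmoid(-6))   # close to 0
```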

Why use the sigmoid function for prediction? The s-shaped curve is kind of strange, isn’t it?

Classifications in prediction problems are probabilistic. The model’s output shouldn’t be below zero or above one, and the s-shaped curve guarantees exactly that. Because of these limits, it can be used for binary classification.

Sigmoid function in logistic regression

The function can be used to make predictions.

p(X) = e^(b0 + b1*X) / (1 + e^(b0 + b1*X))

The variable b0 is the bias and b1 is the coefficient for the single input value (X). This can be rewritten as:

ln(odds) = b0 + b1 * X

or

odds = e^(b0 + b1 * X)

To make predictions, you need b0 and b1.
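With made-up coefficients (the b0 and b1 values below are illustrative, not trained ones), the prediction formula can be tried out directly:

```python
import math

# Hypothetical coefficients -- real values come from training
b0 = -1.0  # bias
b1 = 0.5   # coefficient for the single input X

def predict(X):
    # p(X) = e^(b0 + b1*X) / (1 + e^(b0 + b1*X))
    t = b0 + b1 * X
    return math.exp(t) / (1.0 + math.exp(t))

p = predict(4.0)
odds = p / (1.0 - p)
# ln(odds) recovers b0 + b1*X, matching the rewritten formula above
print(p, math.log(odds))
```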

These values are found with the training data. Initially we set them to zero:

W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))

TensorFlow will take care of updating them during training.

Then the model (based on formula) is:

pred = tf.nn.softmax(tf.matmul(x, W) + b) # Softmax

Let’s use logistic regression for handwriting recognition. The MNIST dataset contains 28x28 images of handwritten digits. Each image is flattened into a 784-element 1-d vector.

The problem is:

X: image of a handwritten digit

Y: the digit value

Recognize the digit in the image

The model:

logits = X * w + b

Y_predicted = softmax(logits)

loss = cross_entropy(Y, Y_predicted)

The same in code:

pred = tf.nn.softmax(tf.matmul(x, W) + b)  # Softmax
cost = tf.reduce_mean(-tf.reduce_sum(y*tf.log(pred), reduction_indices=1))

Loss is sometimes called cost.
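The softmax and cross-entropy in the snippet above are plain arithmetic. A NumPy sketch of the same computation (an illustration, not TensorFlow’s implementation):

```python
import numpy as np

def softmax(logits):
    # Subtract the row max for numerical stability; each row sums to 1
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(y_true, y_pred):
    # -sum(y * log(pred)) per sample, averaged over the batch
    return np.mean(-np.sum(y_true * np.log(y_pred), axis=1))

logits = np.array([[2.0, 1.0, 0.1]])
y_true = np.array([[1.0, 0.0, 0.0]])  # one-hot label for class 0
pred = softmax(logits)
print(pred, cross_entropy(y_true, pred))
```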

The code below runs the logistic regression model on the handwriting set. Surprisingly the accuracy is 91.43% for this model. Simply copy and run!

from __future__ import print_function

import tensorflow as tf

# Import MNIST data
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)

# Parameters
learning_rate = 0.01
training_epochs = 25
batch_size = 100
display_step = 1

# Placeholders: 784 pixels in, 10 digit classes out
x = tf.placeholder(tf.float32, [None, 784])
y = tf.placeholder(tf.float32, [None, 10])

# Model weights, initialized to zero
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))

# Model, cost and optimizer
pred = tf.nn.softmax(tf.matmul(x, W) + b)  # Softmax
cost = tf.reduce_mean(-tf.reduce_sum(y*tf.log(pred), reduction_indices=1))
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)

# Initialize the variables (i.e. assign their default value)
init = tf.global_variables_initializer()

# Start training
with tf.Session() as sess:
    # Run the initializer
    sess.run(init)

    # Training cycle
    for epoch in range(training_epochs):
        avg_cost = 0.
        total_batch = int(mnist.train.num_examples/batch_size)
        # Loop over all batches
        for i in range(total_batch):
            batch_xs, batch_ys = mnist.train.next_batch(batch_size)
            # Run optimization op (backprop) and cost op (to get loss value)
            _, c = sess.run([optimizer, cost], feed_dict={x: batch_xs, y: batch_ys})
            # Compute average loss
            avg_cost += c / total_batch
        # Display logs per epoch step
        if (epoch+1) % display_step == 0:
            print("Epoch:", '%04d' % (epoch+1), "cost=", "{:.9f}".format(avg_cost))

The iris dataset is split in two files: the training set and the test set. The network has a training phase. After training is completed, it can be used to make predictions.

What does the iris dataset contain?

It contains 3 classes of plants (0, 1, 2), stored as the last field of each line. Each sample has 4 attributes:

sepal length in cm

sepal width in cm

petal length in cm

petal width in cm

In short: someone grabbed a bunch of plants of different types and measured them. The measurements are stored in text files. You can download the iris dataset on github.

The training set is a simple text file with one sample per line.
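The rows below are illustrative (the exact file layout may differ): four comma-separated measurements followed by the class label.

```python
# Illustrative rows -- sepal length/width and petal length/width in cm,
# followed by the class (0, 1 or 2). Values here are made up for the example.
sample = """5.1,3.5,1.4,0.2,0
7.0,3.2,4.7,1.4,1
6.3,3.3,6.0,2.5,2"""

for line in sample.splitlines():
    *features, label = line.split(',')
    print([float(f) for f in features], int(label))
```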

Create the neural network with one line of code. The second parameter specifies the number of hidden units per layer. All layers are fully connected. [5,10] means the first layer has 5 nodes and the second layer has 10 nodes.

Then specify the number of possible classes with n_classes. In our dataset we have only 3 types of flowers (0,1,2).

The network will be trained on the MNIST database of handwritten digits. It’s widely used in computer vision.

The MNIST database contains 28x28 arrays, each representing a digit. You can view these 28x28 digits as arrays.

Each 28x28 array is flattened into a 1-d vector of 28 x 28 = 784 numbers. You’ll see the number 784 later in the code. This is where it comes from.
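The flattening step can be seen with NumPy (a sketch; the tutorials’ input pipeline does this for you):

```python
import numpy as np

image = np.zeros((28, 28))   # one MNIST digit as a 2-d array
flat = image.reshape(784)    # flattened: 28 * 28 = 784 numbers
print(flat.shape)
```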

This dataset is included in tensorflow by default:

from tensorflow.examples.tutorials.mnist import input_data

Example

Introduction

Build a simple neural network. First import the required modules:

from __future__ import print_function

# Load MNIST data
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)

import tensorflow as tf

Then define the model: how many nodes per hidden layer, how many classes in the dataset (10: 0 to 9) and finally the batch size.

x = tf.placeholder('float', [None, 784])
y = tf.placeholder('float')

All constants are defined. You can now create the neural network. Define weights and biases. A bias is a value that’s added to the sums, before the activation function.

This is not the trained model yet; these are only random numbers. The method tf.random_normal creates random numbers in the shape that we need.

Values are fed into layer one. The input (data) is multiplied by the weights, then the bias is added with tf.add(). This process is repeated in each layer (layer 1 into layer 2, layer 2 into layer 3). Then return the output layer.
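That per-layer step (multiply by the weights, add the bias, repeat) is plain matrix arithmetic. A NumPy sketch with illustrative layer sizes (the names and sizes here are assumptions for the example, not the tutorial’s exact model):

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(data, n_in, n_out):
    # Random weights and biases, in the shape we need
    W = rng.standard_normal((n_in, n_out))
    b = rng.standard_normal(n_out)
    return data @ W + b  # input times weights, plus bias

x = rng.standard_normal((1, 784))  # one flattened image
l1 = layer(x, 784, 64)             # data into layer 1
l2 = layer(l1, 64, 64)             # layer 1 into layer 2
l3 = layer(l2, 64, 64)             # layer 2 into layer 3
out = layer(l3, 64, 10)            # output layer: 10 classes
print(out.shape)
```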

The model needs to be trained. Let’s create a function for that. Make a prediction with the model, then calculate the cost variable. The cost variable measures how wrong the prediction is. This function is called the loss function.

Let’s begin training. This is similar to the training we did on the linear regression model.

no_epochs = 10
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())

    for epoch in range(no_epochs):
        epoch_loss = 0

        for _ in range(int(mnist.train.num_examples/batchSize)):
            epoch_x, epoch_y = mnist.train.next_batch(batchSize)
            _, c = sess.run([optimizer, cost], feed_dict={x: epoch_x, y: epoch_y})
            epoch_loss += c

        print('Epoch', epoch, 'completed out of', no_epochs, 'loss:', epoch_loss)

The complete code for the description above is below. You can tweak the number of neurons to optimize the accuracy of the network. (I have slightly altered it.)

from __future__ import print_function

# Load MNIST data
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)

import tensorflow as tf

# Define model
nrNodesHiddenLayer1 = 64  # 1st layer of neurons
nrNodesHiddenLayer2 = 64  # 2nd layer of neurons
nrNodesHiddenLayer3 = 64  # 3rd layer of neurons
nrClasses = 10
batchSize = 128
no_epochs = 10

# Placeholders
x = tf.placeholder('float', [None, 784])
y = tf.placeholder('float')

def neural_network_model(data):
    # Weights and biases: random numbers in the shape we need
    l1_w = tf.Variable(tf.random_normal([784, nrNodesHiddenLayer1]))
    l1_b = tf.Variable(tf.random_normal([nrNodesHiddenLayer1]))
    l2_w = tf.Variable(tf.random_normal([nrNodesHiddenLayer1, nrNodesHiddenLayer2]))
    l2_b = tf.Variable(tf.random_normal([nrNodesHiddenLayer2]))
    l3_w = tf.Variable(tf.random_normal([nrNodesHiddenLayer2, nrNodesHiddenLayer3]))
    l3_b = tf.Variable(tf.random_normal([nrNodesHiddenLayer3]))
    out_w = tf.Variable(tf.random_normal([nrNodesHiddenLayer3, nrClasses]))
    out_b = tf.Variable(tf.random_normal([nrClasses]))

    # Multiply input by weights, add bias with tf.add(), repeat per layer
    l1 = tf.nn.relu(tf.add(tf.matmul(data, l1_w), l1_b))
    l2 = tf.nn.relu(tf.add(tf.matmul(l1, l2_w), l2_b))
    l3 = tf.nn.relu(tf.add(tf.matmul(l2, l3_w), l3_b))
    return tf.add(tf.matmul(l3, out_w), out_b)

# Prediction, cost and optimizer
prediction = neural_network_model(x)
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(logits=prediction, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)

init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)

    for epoch in range(no_epochs):
        epoch_loss = 0
        for _ in range(int(mnist.train.num_examples/batchSize)):
            epoch_x, epoch_y = mnist.train.next_batch(batchSize)
            _, c = sess.run([optimizer, cost], feed_dict={x: epoch_x, y: epoch_y})
            epoch_loss += c
        print('Epoch', epoch, 'completed out of', no_epochs, 'loss:', epoch_loss)