
TensorBoard

TensorBoard is part of the TensorFlow suite. Yes, TensorFlow is actually a suite: it has TensorFlow (the module), TensorBoard and TensorFlow Serving.

TensorBoard is graph visualization software. You can make nice visualizations with it.

When TensorFlow runs operations, it can write event files. TensorBoard turns these event files into graphs.

Related Course:
Complete Guide to TensorFlow for Deep Learning with Python

Install and run tensorboard

TensorBoard comes with TensorFlow by default, but the tensorboard command may not work right away.
To show its location:

pip3 show tensorboard
pip show tensorboard

If the command tensorboard doesn’t do anything, set up an alias in the shell:

alias tensorboard='python3 PATH/tensorboard/main.py'

Where PATH is the location shown by ‘pip show tensorboard’.
In my case:

/home/linux/.local/lib/python3.5/site-packages/

tensorboard

TensorBoard can create nice graphs from event files. You can create an event file from within your existing code. Add one line of code to the program you made before:

import tensorflow as tf

a = tf.add(2, 6)

with tf.Session() as ses:
    writer = tf.summary.FileWriter('./graphs', ses.graph)
    print(ses.run(a))

Save as tensor2.py

Then run the commands:

python tensor2.py
tensorboard --logdir="./graphs"

It will then boot up a server:

TensorBoard 1.9.0 at http://linux:6006 (Press CTRL+C to quit)

That address is just localhost, so you can open http://127.0.0.1:6006

It will look something like this:

[screenshot: the TensorBoard web interface]

The TensorBoard screen should show up in your browser. It won’t pop up automatically, so open the browser yourself.

TensorFlow Graphs

TensorFlow can create more advanced graphs. A graph doesn’t have to be just 3 nodes.

You can create large graphs and graphs with subgraphs. You can visualize this with TensorBoard.

Related Course:
Complete Guide to TensorFlow for Deep Learning with Python

Graph

In tensorflow we define graphs. These graphs describe the operations that run in the session. The example below creates a graph with more nodes:

import tensorflow as tf

x = 1
y = 6

op1 = tf.add(x,y)
op2 = tf.add(x,y)
op3 = tf.pow(op1, op2)

with tf.Session() as ses:
    writer = tf.summary.FileWriter('./graphs', ses.graph)
    print(ses.run(op3))

Save it and run it with Python. Then open TensorBoard and this graph should show up:

[screenshot: the graph in TensorBoard]

Subgraphs

Subgraphs are also possible. You could have something like:

op1 = tf.add(x,y)
op2 = tf.add(x,y)
op3 = tf.pow(op1, op2)
thing = tf.multiply(x, op1)

A graph can be broken into chunks (subgraphs), and each chunk can run on a different CPU or GPU.
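
As an illustration, here is a minimal sketch of pinning operations to a device with tf.device (this assumes the TF 1.x setup used throughout this article):

import tensorflow as tf

# Pin these ops to the first CPU; '/gpu:0' would target the first GPU.
with tf.device('/cpu:0'):
    a = tf.constant([1.0, 2.0])
    b = tf.constant([3.0, 4.0])
    c = tf.multiply(a, b)

with tf.Session() as ses:
    print(ses.run(c))  # [3. 8.]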

Can you make more than one graph?

You can build more than one graph, but that doesn’t work well distributed: each graph needs its own session, and data can’t be passed between graphs directly.

The session runs the default graph. If you need independent computations, it’s better to keep them as disconnected subgraphs within a single graph.
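
For illustration, a minimal sketch of building a second graph explicitly with tf.Graph (usually not what you want):

import tensorflow as tf

g = tf.Graph()
with g.as_default():        # ops created here go into g, not the default graph
    a = tf.add(3, 5)

with tf.Session(graph=g) as ses:
    print(ses.run(a))       # 8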

Tensorflow

TensorFlow is a deep learning module. It’s created by Google and open-source. It has a Python API and can be used with one or more CPUs or GPUs.

It runs on Windows, iOS, Linux, Raspberry Pi, Android and server farms.

There are many other deep learning libraries (Torch, Theano, Caffe, CNTK), but TensorFlow is the most popular.

Related Course:
Complete Guide to TensorFlow for Deep Learning with Python

install tensorflow

The TensorFlow module is available in the PyPI repository and can be installed with pip.
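
For example, to install it for Python 3:

pip3 install tensorflow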

To start, write this line of code:

import tensorflow as tf

That’s all it takes to load the module.

What’s a tensor

Tensors are data.
A tensor is an n-dimensional array

0-d tensor: scalar (number)
1-d tensor: vector
2-d tensor: matrix
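
A quick sketch of these three, using tf.constant:

import tensorflow as tf

scalar = tf.constant(6)              # 0-d tensor
vector = tf.constant([1, 2, 3])      # 1-d tensor
matrix = tf.constant([[1, 2],
                      [3, 4]])       # 2-d tensor

print(scalar.shape)  # ()
print(vector.shape)  # (3,)
print(matrix.shape)  # (2, 2)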

tensorflow session

Let’s start with a simple program and introduce the concept of sessions. Create a program that adds two numbers (2, 6).

Tensorflow has a method tf.add(x,y). The parameters x and y are tensors. The method returns a tensor.

We can visualize our advanced mathematics (2+6) in a data flow graph:

[diagram: data flow graph with nodes x, y and Add]

Tensorflow (TF) automatically gives names to nodes, in this case x=2 and y=6.

Code shown below:

import tensorflow as tf

x = tf.add(2, 6)
print(x)

If you run it you’ll see:

Tensor("Add:0", shape=(), dtype=int32)

That’s not the value of x, so how do you get it?

Create a session. Within the session, you can get the actual value of the tensor.

import tensorflow as tf

a = tf.add(2, 6)

ses = tf.Session()
print(ses.run(a))
ses.close()

This will then give you

>> 8

You can also write this as:

import tensorflow as tf

a = tf.add(2, 6)

with tf.Session() as ses:
    print(ses.run(a))

So what does tf.Session() do?

A session object evaluates Tensor objects and executes Operation objects.

TensorFlow Data Types

TensorFlow has its own data types. We’ll discuss data types in tensorflow and how to use variables.

TensorFlow accepts Python native types like booleans, strings and numerics (int, float), but you should use the TensorFlow data types instead.

Why? Because otherwise TensorFlow has to infer the type from the Python value.
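
A small sketch of being explicit about the dtype instead of letting TensorFlow infer it:

import tensorflow as tf

a = tf.constant(2, dtype=tf.int32)      # explicit TensorFlow type
b = tf.constant(2.0, dtype=tf.float64)
print(a.dtype)  # <dtype: 'int32'>
print(b.dtype)  # <dtype: 'float64'>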

Related Course:
Complete Guide to TensorFlow for Deep Learning with Python

Data types

There are many data types available: 32 bit and 64 bit numbers, among others. Variables must be initialized (more on that later in the article).

The Tensorflow data types include:

floating point: tf.float32, tf.float64
integers: tf.int8, tf.int16, tf.int32, tf.int64
unsigned integers: tf.uint8, tf.uint16
strings: tf.string
booleans: tf.bool
complex numbers: tf.complex64, tf.complex128
integers with quantized ops: tf.qint8, tf.qint32, tf.quint8

TensorFlow data types integrate seamlessly with numpy:

tf.int64 == np.int64 # True

Tensors

Tensors are a big part of tensorflow. You can create different types of tensors: 0-d tensor (scalar), 1-d tensor (vector) or 2-d tensor (matrix).

Optionally you can also assign a name to your variables. That looks nice in tensorboard but isn’t required.

To create a 0-d tensor:

a = tf.Variable(1, name="scalar")

Making a 1-d tensor is just as easy

b = tf.Variable([1,2,3], name="vector")

To make a 3x3 tensor (matrix)

t_2 = tf.Variable([[0,1,0],[1,1,0],[1,0,1]], name="matrix")

Must initialize variables

Variables must always be initialized. If you don’t, you’ll run into an error like this:

tensorflow.python.framework.errors_impl.FailedPreconditionError: Attempting to use uninitialized value Variable_1

Luckily initializing variables is a piece of cake.

A simple example that adds two scalars is shown below:

import tensorflow as tf

x = tf.Variable(1)
y = tf.Variable(6)
op1 = tf.add(x,y)

init = tf.variables_initializer([x,y], name="init")
with tf.Session() as ses:
    ses.run(init)
    print(ses.run(op1))

You may also initialize variables globally:

init = tf.global_variables_initializer()

If you have multiple sessions, each session has its own copy of the variables.
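
A minimal sketch illustrating this (each session initializes and mutates its own copy):

import tensorflow as tf

v = tf.Variable(10)
add_five = v.assign_add(5)

ses1 = tf.Session()
ses2 = tf.Session()
ses1.run(v.initializer)
ses2.run(v.initializer)

print(ses1.run(add_five))  # 15
print(ses2.run(add_five))  # 15, not 20: ses2 has its own copy of v

ses1.close()
ses2.close()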

TensorFlow Linear Regression

Linear Regression in TensorFlow is easy to implement.

In the Linear Regression Model:
The goal is to find a relationship between a scalar dependent variable y and independent variables X.

The model is based on real world data and can be used to make predictions. Of course you can use random data, but it makes more sense to use real world data.

Related Course:
Complete Guide to TensorFlow for Deep Learning with Python

The model

Consider this example:

X: gdp per capita
Y: life expectancy
Predict Y from X

The model is:

Y_predicted = X * w + b

Linear regression with tensorflow

You can create a linear regression prediction model in a few steps.
If you want you can see the graph with TensorBoard.

You can find the complete code and dataset in this repo.

  1. read the data
    This can be a simple text file with tab separated values.
  2. create placeholders for inputs

    X = tf.placeholder(tf.float32, name='X')
    Y = tf.placeholder(tf.float32, name='Y')
  3. create weights and bias

    w = tf.get_variable('weights', initializer=tf.constant(0.0))
    b = tf.get_variable('bias', initializer=tf.constant(0.0))
  4. make model to predict

    Y_predicted = w * X + b
  5. define loss function

    # can use square error as loss function
    loss = tf.square(Y - Y_predicted, name='loss')
  6. create optimizer

    optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.0003).minimize(loss)
  7. train model (initialize variables, run optimizer; see the sketch after this list)

  8. plot (optional)
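
For step 7, a minimal training loop might look like the sketch below. It assumes the placeholders, variables, loss and optimizer from steps 2 to 6, and that data is a list of (x, y) pairs produced in step 1:

with tf.Session() as ses:
    ses.run(tf.global_variables_initializer())

    for i in range(100):                      # number of training epochs
        for x_val, y_val in data:
            ses.run(optimizer, feed_dict={X: x_val, Y: y_val})

    w_out, b_out = ses.run([w, b])            # the trained parameters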

TensorFlow Logistic Regression

Logistic regression is borrowed from statistics. You can use this for classification problems. Given an image, is it class 0 or class 1?

Logistic regression is named after the function it uses: the logistic function. You may know this function as the sigmoid function.

Related Course:
Complete Guide to TensorFlow for Deep Learning with Python

Introduction

Sigmoid function

Logistic regression uses the sigmoid function for classification problems. What is this function exactly?

The sigmoid function is:

1 / (1 + e^-t)

It’s an s-shaped curve.

Why use the sigmoid function for prediction? The s-shaped curve is kind of strange, isn’t it?

Classifications in prediction problems are probabilistic. The model’s output shouldn’t be below zero or above one, and the s-shaped curve guarantees that. Because of these limits, it can be used for binary classification.
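
A tiny sketch evaluating the function with TensorFlow’s built-in tf.sigmoid:

import tensorflow as tf

t = tf.constant([-6.0, -2.0, 0.0, 2.0, 6.0])
with tf.Session() as ses:
    print(ses.run(tf.sigmoid(t)))  # every value is squashed into (0, 1)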

Sigmoid function in logistic regression

The function can be used to make predictions.

p(X) = e^(b0 + b1*X) / (1 + e^(b0 + b1*X))

The variable b0 is the bias and b1 is the coefficient for the single input value (X).
This can be rewritten as

ln(odds) = b0 + b1 * X

or

odds = e^(b0 + b1 * X)
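
To make the formula concrete, a sketch with hypothetical coefficients (b0 and b1 would normally be learned from data):

import math

b0, b1 = -4.0, 0.9        # hypothetical values, for illustration only

def p(x):
    return math.exp(b0 + b1 * x) / (1 + math.exp(b0 + b1 * x))

print(p(3.0))             # probability of class 1 for input 3.0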

To make predictions, you need b0 and b1.

These values are learned from the training data. In the multiclass case (like the MNIST example below), the bias and coefficients become a matrix W and a vector b. Initially we set them to zero:

W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))

You don’t have to compute them by hand: TensorFlow learns them during training.

Then the model (based on formula) is:

pred = tf.nn.softmax(tf.matmul(x, W) + b) # Softmax

Where’s the exponent?

The softmax function does the equivalent of:

softmax = tf.exp(logits) / tf.reduce_sum(tf.exp(logits), axis)

Logistic regression with handwriting recognition

Let’s use logistic regression for handwriting recognition. The MNIST dataset contains 28x28 images of handwritten digits. Each image is flattened into a 1-d vector of size 784.

The problem is:

  • X: image of a handwritten digit
  • Y: the digit value
  • Recognize the digit in the image

The model:

  • logits = X * w + b
  • Y_predicted = softmax(logits)
  • loss = cross_entropy(Y, Y_predicted)

The same in code:

pred = tf.nn.softmax(tf.matmul(x, W) + b) # Softmax
cost = tf.reduce_mean(-tf.reduce_sum(y*tf.log(pred), reduction_indices=1))

Loss is sometimes called cost.

The code below runs the logistic regression model on the handwriting set. Surprisingly the accuracy is 91.43% for this model. Simply copy and run!


from __future__ import print_function

import tensorflow as tf

# Import MNIST data
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)

# Parameters
learning_rate = 0.01
training_epochs = 25
batch_size = 100
display_step = 1

# tf Graph Input
x = tf.placeholder(tf.float32, [None, 784]) # mnist data image of shape 28*28=784
y = tf.placeholder(tf.float32, [None, 10]) # 0-9 digits recognition => 10 classes

# Set model weights
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))

# Construct model
pred = tf.nn.softmax(tf.matmul(x, W) + b) # Softmax

# Minimize error using cross entropy
cost = tf.reduce_mean(-tf.reduce_sum(y*tf.log(pred), reduction_indices=1))
# Gradient Descent
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)

# Initialize the variables (i.e. assign their default value)
init = tf.global_variables_initializer()

# Start training
with tf.Session() as sess:

    # Run the initializer
    sess.run(init)

    # Training cycle
    for epoch in range(training_epochs):
        avg_cost = 0.
        total_batch = int(mnist.train.num_examples/batch_size)
        # Loop over all batches
        for i in range(total_batch):
            batch_xs, batch_ys = mnist.train.next_batch(batch_size)
            # Run optimization op (backprop) and cost op (to get loss value)
            _, c = sess.run([optimizer, cost], feed_dict={x: batch_xs,
                                                          y: batch_ys})
            # Compute average loss
            avg_cost += c / total_batch
        # Display logs per epoch step
        if (epoch+1) % display_step == 0:
            print("Epoch:", '%04d' % (epoch+1), "cost=", "{:.9f}".format(avg_cost))

    print("Optimization Finished!")

    # Test model
    correct_prediction = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
    # Calculate accuracy
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    print("Accuracy:", accuracy.eval({x: mnist.test.images, y: mnist.test.labels}))

TensorFlow Deep Neural Network with CSV

A neural network can be applied to the classification problem: given an example, determine its class.

Tensorflow has an implementation of a neural network included, which we’ll use on CSV data (the iris dataset).

Related Course:
Complete Guide to TensorFlow for Deep Learning with Python

Iris Dataset

The iris dataset is split in two files: the training set and the test set. The network has a training phase. After training is completed it can be used to predict.

What does the iris dataset contain?

It contains 3 classes of plants (0,1,2), stored as the last value on each line of the file.
It has 4 attributes:

  • sepal length in cm
  • sepal width in cm
  • petal length in cm
  • petal width in cm

In short: someone grabbed a bunch of plants of different types and measured them. The measurements are stored in text files.
You can download the iris dataset on github.

The training set is a simple file that looks like this:

6.4,2.8,5.6,2.2,2
5.0,2.3,3.3,1.0,1
4.9,2.5,4.5,1.7,2
4.9,3.1,1.5,0.1,0
5.7,3.8,1.7,0.3,0
...

The test set looks similar

5.9,3.0,4.2,1.5,1
6.9,3.1,5.4,2.1,2
5.1,3.3,1.7,0.5,0
6.0,3.4,4.5,1.6,1
...

The files have a header, which we’ll ignore.

Neural network on csv data

The csv files can be loaded with these two lines:

training_set = tf.contrib.learn.datasets.base.load_csv_with_header(
    filename=IRIS_TRAINING,
    target_dtype=np.int,
    features_dtype=np.float32)

test_set = tf.contrib.learn.datasets.base.load_csv_with_header(
    filename=IRIS_TEST,
    target_dtype=np.int,
    features_dtype=np.float32)

IRIS_TRAINING and IRIS_TEST are the file names. The target (the class) is loaded as an integer, the features as floats.
Specify that all features have real data:

feature_columns = [tf.contrib.layers.real_valued_column("", dimension=4)]

Create the neural network with one line of code. The second parameter lists the number of hidden units per layer; all layers are fully connected. For example, [5,10] means the first layer has 5 nodes and the second layer has 10 nodes.

Then specify the number of possible classes with n_classes. Our dataset has only 3 types of flowers (0,1,2).

classifier = tf.contrib.learn.DNNClassifier(feature_columns=feature_columns,
                                            hidden_units=[5,10,5],
                                            n_classes=3)

Define the training inputs and fit the model:

# Define the training inputs
def get_train_inputs():
    x = tf.constant(training_set.data)
    y = tf.constant(training_set.target)
    return x, y

# Fit model
classifier.fit(input_fn=get_train_inputs, steps=2000)

Then you can evaluate the classifier

# Define the test inputs
def get_test_inputs():
    x = tf.constant(test_set.data)
    y = tf.constant(test_set.target)
    return x, y

# Evaluate accuracy.
accuracy_score = classifier.evaluate(input_fn=get_test_inputs,
                                     steps=1)["accuracy"]

print("\nTest Accuracy: {0:f}\n".format(accuracy_score))

Then, given a new sample (4 measurements), you can predict the type (class) of flower:

# Classify new flower
def new_samples():
    return np.array([[6.4, 2.7, 5.6, 2.1]], dtype=np.float32)

predictions = list(classifier.predict(input_fn=new_samples))

print("Predicted class: {}\n".format(predictions))

Neural Network on CSV sample

The example below summarizes what we talked about. You can copy this code and run it. Don’t forget to get the iris dataset (train and test).

# DNNClassifier on CSV input dataset.

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import os
import urllib

import numpy as np
import tensorflow as tf

# Data sets
IRIS_TRAINING = "iris_training.csv"
IRIS_TEST = "iris_test.csv"

def main():
    # Load datasets.
    training_set = tf.contrib.learn.datasets.base.load_csv_with_header(
        filename=IRIS_TRAINING,
        target_dtype=np.int,
        features_dtype=np.float32)

    test_set = tf.contrib.learn.datasets.base.load_csv_with_header(
        filename=IRIS_TEST,
        target_dtype=np.int,
        features_dtype=np.float32)

    # Specify that all features have real-value data
    feature_columns = [tf.contrib.layers.real_valued_column("", dimension=4)]

    # Build 3 layer DNN
    classifier = tf.contrib.learn.DNNClassifier(feature_columns=feature_columns,
                                                hidden_units=[5,10,5],
                                                n_classes=3)

    # Define the training inputs
    def get_train_inputs():
        x = tf.constant(training_set.data)
        y = tf.constant(training_set.target)
        return x, y

    # Fit model.
    classifier.fit(input_fn=get_train_inputs, steps=2000)

    # Define the test inputs
    def get_test_inputs():
        x = tf.constant(test_set.data)
        y = tf.constant(test_set.target)
        return x, y

    # Evaluate accuracy.
    accuracy_score = classifier.evaluate(input_fn=get_test_inputs,
                                         steps=1)["accuracy"]

    print("\nTest Accuracy: {0:f}\n".format(accuracy_score))

    # Classify new flower
    def new_samples():
        return np.array([[6.4, 2.7, 5.6, 2.1]], dtype=np.float32)

    predictions = list(classifier.predict(input_fn=new_samples))

    print("Predicted class: {}\n".format(predictions))

if __name__ == "__main__":
    main()

TensorFlow Neural Network

Let’s start Deep Learning with Neural Networks.
In this tutorial you’ll learn how to make a Neural Network in tensorflow.

Related Course:
Complete Guide to TensorFlow for Deep Learning with Python

Training

The network will be trained on the MNIST database of handwritten digits, a classic dataset in computer vision.

The MNIST database contains 28x28 arrays, each representing a handwritten digit.

Each 28x28 array is flattened into a 1-d vector of 28 x 28 = 784 numbers. You’ll see the number 784 later in the code; this is where it comes from.
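
A quick sketch of the flattening (illustrated with numpy):

import numpy as np

image = np.zeros((28, 28), dtype=np.float32)  # a dummy "digit"
vector = image.reshape(784)                   # 28 * 28 = 784 numbers
print(vector.shape)                           # (784,)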

This dataset is included in tensorflow by default:

from tensorflow.examples.tutorials.mnist import input_data

Example

Introduction

Build a simple neural network.
First import the required modules:

from __future__ import print_function

# Load MNIST data
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)

import tensorflow as tf

Then define the model: how many nodes per hidden layer, how many classes in the dataset (10: 0 to 9) and finally the batch size.

nrNodesHiddenLayer1 = 64
nrNodesHiddenLayer2 = 64
nrNodesHiddenLayer3 = 64
nrClasses = 10
batchSize = 100

Then define the placeholders.

x = tf.placeholder('float', [None, 784])
y = tf.placeholder('float')

All constants are defined. You can now create the neural network.
Define weights and biases. A bias is a value that’s added to the weighted sum, before the activation function.

def neuralNetworkModel(data):
    hidden1Layer = {'weights':tf.Variable(tf.random_normal([784, nrNodesHiddenLayer1])),
                    'biases':tf.Variable(tf.random_normal([nrNodesHiddenLayer1]))}

    hidden2Layer = {'weights':tf.Variable(tf.random_normal([nrNodesHiddenLayer1, nrNodesHiddenLayer2])),
                    'biases':tf.Variable(tf.random_normal([nrNodesHiddenLayer2]))}

    hidden3Layer = {'weights':tf.Variable(tf.random_normal([nrNodesHiddenLayer2, nrNodesHiddenLayer3])),
                    'biases':tf.Variable(tf.random_normal([nrNodesHiddenLayer3]))}

    outputLayer = {'weights':tf.Variable(tf.random_normal([nrNodesHiddenLayer3, nrClasses])),
                   'biases':tf.Variable(tf.random_normal([nrClasses]))}

This is not the model yet; these are only random numbers. The method tf.random_normal creates random numbers; we just made them in the shape that we need.

Let’s create the model (this continues the function body):

    l1 = tf.add(tf.matmul(data,hidden1Layer['weights']), hidden1Layer['biases'])
    l1 = tf.nn.relu(l1)

    l2 = tf.add(tf.matmul(l1,hidden2Layer['weights']), hidden2Layer['biases'])
    l2 = tf.nn.relu(l2)

    l3 = tf.add(tf.matmul(l2,hidden3Layer['weights']), hidden3Layer['biases'])
    l3 = tf.nn.relu(l3)

    output = tf.matmul(l3,outputLayer['weights']) + outputLayer['biases']
    return output

Values flow into layer one: the input (data) is multiplied by the weights, and the bias is added with tf.add().
This process is repeated in each layer (layer 1 into layer 2, layer 2 into layer 3). Then the output layer is returned.

The model needs to be trained. Let’s create a function for that: it makes a prediction with the model, then calculates the cost. The cost measures how wrong the prediction is; this function is also called the loss function.

def trainNeuralNetwork(x):
    prediction = neuralNetworkModel(x)
    cost = tf.reduce_mean( tf.nn.softmax_cross_entropy_with_logits(logits=prediction, labels=y) )

The cost needs to be optimized, so pick an optimizer for it:

optimizer = tf.train.AdamOptimizer().minimize(cost)

Let’s begin training. This is similar to the training we did on the linear regression model.

no_epochs = 10
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())

    for epoch in range(no_epochs):
        epoch_loss = 0

        for _ in range(int(mnist.train.num_examples/batchSize)):
            epoch_x, epoch_y = mnist.train.next_batch(batchSize)
            _, c = sess.run([optimizer, cost], feed_dict={x: epoch_x, y: epoch_y})
            epoch_loss += c

        print('Epoch', epoch, 'completed out of',no_epochs,'loss:',epoch_loss)

Then after training, still inside the session, measure the accuracy:

correct = tf.equal(tf.argmax(prediction, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct, 'float'))
print('Accuracy:',accuracy.eval({x:mnist.test.images, y:mnist.test.labels}))

Let’s call the training function!

trainNeuralNetwork(x)

If you want to get a single prediction (given an image, which class is it?):

import numpy as np

# run this inside the session, after training
x_in = np.expand_dims(mnist.test.images[0], axis=0)
classification = sess.run(tf.argmax(prediction, 1), feed_dict={x: x_in})
print(classification)

Neural network example

The complete code from the description above is below. You can tweak the number of neurons to optimize the accuracy of the network.
(I have slightly altered it.)


from __future__ import print_function

# Load MNIST data
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)

import tensorflow as tf

# Define model
nrNodesHiddenLayer1 = 64 # 1st layer of neurons
nrNodesHiddenLayer2 = 64 # 2nd layer of neurons
nrNodesHiddenLayer3 = 64
nrClasses = 10
batchSize = 128

# Placeholders
x = tf.placeholder('float', [None, 784])
y = tf.placeholder('float')

# Create neural network model
def neuralNetworkModel(data):
    hidden1Layer = {'weights':tf.Variable(tf.random_normal([784, nrNodesHiddenLayer1])),
                    'biases':tf.Variable(tf.random_normal([nrNodesHiddenLayer1]))}

    hidden2Layer = {'weights':tf.Variable(tf.random_normal([nrNodesHiddenLayer1, nrNodesHiddenLayer2])),
                    'biases':tf.Variable(tf.random_normal([nrNodesHiddenLayer2]))}

    hidden3Layer = {'weights':tf.Variable(tf.random_normal([nrNodesHiddenLayer2, nrNodesHiddenLayer3])),
                    'biases':tf.Variable(tf.random_normal([nrNodesHiddenLayer3]))}

    outputLayer = {'weights':tf.Variable(tf.random_normal([nrNodesHiddenLayer3, nrClasses])),
                   'biases':tf.Variable(tf.random_normal([nrClasses]))}

    # create flow
    l1 = tf.add(tf.matmul(data,hidden1Layer['weights']), hidden1Layer['biases'])
    l1 = tf.nn.relu(l1)

    l2 = tf.add(tf.matmul(l1,hidden2Layer['weights']), hidden2Layer['biases'])
    l2 = tf.nn.relu(l2)

    l3 = tf.add(tf.matmul(l2,hidden3Layer['weights']), hidden3Layer['biases'])
    l3 = tf.nn.relu(l3)

    output = tf.matmul(l3,outputLayer['weights']) + outputLayer['biases']
    return output

def trainNeuralNetwork(x):
    prediction = neuralNetworkModel(x)
    cost = tf.reduce_mean( tf.nn.softmax_cross_entropy_with_logits(logits=prediction, labels=y) )
    optimizer = tf.train.AdamOptimizer(learning_rate=0.1).minimize(cost)
    no_epochs = 10

    init = tf.global_variables_initializer()

    with tf.Session() as sess:
        sess.run(init)

        for epoch in range(no_epochs):
            epoch_loss = 0

            for _ in range(int(mnist.train.num_examples/batchSize)):
                epoch_x, epoch_y = mnist.train.next_batch(batchSize)
                _, c = sess.run([optimizer, cost], feed_dict={x: epoch_x, y: epoch_y})
                epoch_loss += c

            print('Epoch', epoch, 'completed out of',no_epochs,'loss:',epoch_loss)

        correct = tf.equal(tf.argmax(prediction, 1), tf.argmax(y, 1))
        accuracy = tf.reduce_mean(tf.cast(correct, 'float'))
        print('Accuracy:',accuracy.eval({x:mnist.test.images, y:mnist.test.labels}))


trainNeuralNetwork(x)