Recognizing Handwritten MNIST Digits with Softmax, MLP, and CNN: Reading Notes on 《TensorFlow 实战》 (TensorFlow in Action)
Don't let all that code-writing make you lose the human touch.
0x00 Intro
1. Loading the MNIST dataset
Running `mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)` checks whether the dataset files exist under `MNIST_data/` and downloads them automatically if they don't. If this step is slow, you can download the following four files manually (e.g. with Xunlei) and save them to the `MNIST_data` directory (no need to unzip):
- train-images-idx3-ubyte.gz: training set images (9912422 bytes)
- train-labels-idx1-ubyte.gz: training set labels (28881 bytes)
- t10k-images-idx3-ubyte.gz: test set images (1648877 bytes)
- t10k-labels-idx1-ubyte.gz: test set labels (4542 bytes)
```python
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
```
Extracting MNIST_data/train-images-idx3-ubyte.gz
Extracting MNIST_data/train-labels-idx1-ubyte.gz
Extracting MNIST_data/t10k-images-idx3-ubyte.gz
Extracting MNIST_data/t10k-labels-idx1-ubyte.gz
2. Initializing TensorFlow
By default, the TensorFlow runtime grabs all available GPU memory at once; adding `config.gpu_options.allow_growth = True` makes it allocate GPU memory on demand instead.
```python
import tensorflow as tf
```
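A minimal sketch of the setup described above; the session creation and variable names are assumptions (the later `.run()` / `.eval()` calls imply an `InteractiveSession`):

```python
# Sketch: let TensorFlow grow its GPU memory footprint on demand
# instead of claiming everything upfront.
config = tf.ConfigProto()
config.gpu_options.allow_growth = True

# InteractiveSession installs itself as the default session, which is
# what the later `op.run()` / `tensor.eval()` calls rely on.
sess = tf.InteractiveSession(config=config)
```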
0x01 Softmax
Classification with Softmax Regression alone reaches about 92% accuracy.
1. Defining the variables
The Softmax Regression formula can be written as:

$$y = \mathrm{softmax}(Wx + b)$$

where $x$ is the input data (handwritten digit images), an arbitrary number of 784-dimensional `float32` vectors; $W$ is a $784 \times 10$ (feature dimension × number of classes) `Variable` matrix; $b$ is the bias; and $y$ is the result of the Softmax classification.
The loss function is cross-entropy, defined as:

$$H_{y'}(y) = -\sum_{i} y'_i \log(y_i)$$

and `reduce_mean` averages it over each batch (for what `reduction_indices=[1]` means, see the appendix; this part differs slightly in the newer TensorFlow tutorial).
```python
x = tf.placeholder(tf.float32, [None, 784])
```
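A fuller sketch of the model definition; the zero initialization and variable names are assumptions based on the standard TensorFlow 1.x Softmax example:

```python
x  = tf.placeholder(tf.float32, [None, 784])  # flattened 28x28 images
W  = tf.Variable(tf.zeros([784, 10]))         # weights: features x classes
b  = tf.Variable(tf.zeros([10]))              # one bias per class
y  = tf.nn.softmax(tf.matmul(x, W) + b)       # predicted distribution
y_ = tf.placeholder(tf.float32, [None, 10])   # one-hot ground-truth labels

# Cross-entropy, summed over classes and averaged over the batch.
cross_entropy = tf.reduce_mean(
    -tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))
```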
2. Training
Each training step draws 100 samples as a mini-batch, feeds them to the placeholders (`x`, `y_`), and prints the weights once every 250 steps.
```python
tf.global_variables_initializer().run()
```
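A sketch of the loop itself; the optimizer (`GradientDescentOptimizer` with learning rate 0.5) and step count are assumptions based on the standard example:

```python
# Assumed optimizer and learning rate (standard TF 1.x MNIST example).
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
tf.global_variables_initializer().run()

for i in range(1000):
    # Draw a 100-sample mini-batch and feed it to the placeholders.
    batch_xs, batch_ys = mnist.train.next_batch(100)
    train_step.run(feed_dict={x: batch_xs, y_: batch_ys})
    if i % 250 == 0:
        print(W.eval())  # dump the weights every 250 steps
```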
3. Computing the accuracy
On the difference between `accuracy` (a `Tensor`) and `train_step` (an `Operation`), the official docs note: "Note: the Tensor class will be replaced by Output in the future. Currently these two are aliases for each other."
```python
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
```
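Casting the boolean matches to floats and averaging gives the accuracy, evaluated here on the test set:

```python
# Fraction of test images whose predicted class matches the label.
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print(accuracy.eval(feed_dict={x: mnist.test.images,
                               y_: mnist.test.labels}))
```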
0.9163
0x02 MLP
Classification with a multi-layer perceptron (MLP) reaches about 98% accuracy.
1. Defining the variables
```python
in_units = 784
```
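A sketch of the full MLP definition; the 300-unit hidden layer, truncated-normal initialization, and dropout placeholder are assumptions based on the book's example:

```python
in_units = 784   # input dimension (28x28 images, flattened)
h1_units = 300   # assumed hidden-layer width

# Truncated-normal init breaks symmetry for the ReLU hidden layer;
# the output layer can stay zero-initialized.
W1 = tf.Variable(tf.truncated_normal([in_units, h1_units], stddev=0.1))
b1 = tf.Variable(tf.zeros([h1_units]))
W2 = tf.Variable(tf.zeros([h1_units, 10]))
b2 = tf.Variable(tf.zeros([10]))

x = tf.placeholder(tf.float32, [None, in_units])
keep_prob = tf.placeholder(tf.float32)   # dropout keep probability

hidden1 = tf.nn.relu(tf.matmul(x, W1) + b1)
hidden1_drop = tf.nn.dropout(hidden1, keep_prob)
y  = tf.nn.softmax(tf.matmul(hidden1_drop, W2) + b2)
y_ = tf.placeholder(tf.float32, [None, 10])
```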
2. Training
```python
tf.global_variables_initializer().run()
```
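A sketch of the training loop; Adagrad with learning rate 0.3, 3000 steps, and `keep_prob` 0.75 during training are assumptions based on the book's example:

```python
cross_entropy = tf.reduce_mean(
    -tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))
# Assumed optimizer and hyperparameters.
train_step = tf.train.AdagradOptimizer(0.3).minimize(cross_entropy)
tf.global_variables_initializer().run()

for i in range(3000):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    # Keep 75% of hidden activations during training.
    train_step.run(feed_dict={x: batch_xs, y_: batch_ys, keep_prob: 0.75})
```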
3. Computing the accuracy
```python
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
```
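As before, cast and average; dropout is disabled (`keep_prob: 1.0`) at evaluation time:

```python
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print(accuracy.eval(feed_dict={x: mnist.test.images,
                               y_: mnist.test.labels,
                               keep_prob: 1.0}))
```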
0.9787
0x03 CNN
Classification with a convolutional neural network (CNN) reaches about 99% test accuracy.
1. Defining the variables
```python
def weight_variable(shape):
    return tf.Variable(tf.truncated_normal(shape, stddev=0.1))
```
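The remaining helpers the network below relies on, as a sketch; the constant-0.1 bias and SAME padding follow the standard MNIST deep tutorial:

```python
def bias_variable(shape):
    # Small positive bias to avoid dead ReLU units.
    return tf.Variable(tf.constant(0.1, shape=shape))

def conv2d(x, W):
    # Stride-1 convolution; SAME padding preserves spatial size.
    return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')

def max_pool_2x2(x):
    # 2x2 max pooling halves each spatial dimension.
    return tf.nn.max_pool(x, ksize=[1, 2, 2, 1],
                          strides=[1, 2, 2, 1], padding='SAME')
```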
```python
x = tf.placeholder(tf.float32, [None, in_units])
```
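A sketch of the network itself; the 5×5 filters, 32/64 channels, and 1024-unit fully connected layer are assumptions based on the standard example (`in_units` is 784, as in the MLP section):

```python
y_ = tf.placeholder(tf.float32, [None, 10])
x_image = tf.reshape(x, [-1, 28, 28, 1])   # back to 2-D images

# First conv + pool: 28x28x1 -> 28x28x32 -> 14x14x32.
W_conv1 = weight_variable([5, 5, 1, 32])
b_conv1 = bias_variable([32])
h_pool1 = max_pool_2x2(tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1))

# Second conv + pool: 14x14x32 -> 14x14x64 -> 7x7x64.
W_conv2 = weight_variable([5, 5, 32, 64])
b_conv2 = bias_variable([64])
h_pool2 = max_pool_2x2(tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2))

# Fully connected layer with dropout, then the Softmax output.
W_fc1 = weight_variable([7 * 7 * 64, 1024])
b_fc1 = bias_variable([1024])
h_pool2_flat = tf.reshape(h_pool2, [-1, 7 * 7 * 64])
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)

keep_prob = tf.placeholder(tf.float32)
h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)

W_fc2 = weight_variable([1024, 10])
b_fc2 = bias_variable([10])
y_conv = tf.nn.softmax(tf.matmul(h_fc1_drop, W_fc2) + b_fc2)
```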
2. Training
```python
correct_prediction = tf.equal(tf.argmax(y_conv, 1), tf.argmax(y_, 1))
```
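A sketch of the loop that produced the log below; Adam with learning rate 1e-4, 20000 steps, batch size 50, and `keep_prob` 0.5 are assumptions based on the standard example:

```python
cross_entropy = tf.reduce_mean(
    -tf.reduce_sum(y_ * tf.log(y_conv), reduction_indices=[1]))
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
tf.global_variables_initializer().run()

for i in range(20000):
    batch = mnist.train.next_batch(50)
    if i % 100 == 0:
        # Report training accuracy with dropout disabled.
        train_accuracy = accuracy.eval(
            feed_dict={x: batch[0], y_: batch[1], keep_prob: 1.0})
        print("step %d, training accuracy %g" % (i, train_accuracy))
    train_step.run(feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})
```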
step 0, training accuracy 0.07
step 100, training accuracy 0.84
step 200, training accuracy 0.9
step 300, training accuracy 0.88
step 400, training accuracy 0.95
step 500, training accuracy 0.98
step 600, training accuracy 0.96
step 700, training accuracy 0.94
step 800, training accuracy 1
step 900, training accuracy 0.98
step 1000, training accuracy 0.99
step 1100, training accuracy 0.96
step 1200, training accuracy 0.99
step 1300, training accuracy 0.99
step 1400, training accuracy 0.98
step 1500, training accuracy 1
step 1600, training accuracy 0.98
step 1700, training accuracy 0.97
step 1800, training accuracy 0.99
step 1900, training accuracy 0.98
step 2000, training accuracy 0.98
step 2100, training accuracy 0.98
step 2200, training accuracy 0.99
step 2300, training accuracy 0.98
step 2400, training accuracy 0.99
step 2500, training accuracy 0.97
step 2600, training accuracy 0.97
step 2700, training accuracy 0.97
step 2800, training accuracy 0.99
step 2900, training accuracy 1
step 3000, training accuracy 1
step 3100, training accuracy 1
step 3200, training accuracy 0.98
step 3300, training accuracy 0.99
step 3400, training accuracy 0.98
step 3500, training accuracy 1
step 3600, training accuracy 1
step 3700, training accuracy 0.98
step 3800, training accuracy 1
step 3900, training accuracy 0.98
step 4000, training accuracy 1
step 4100, training accuracy 0.99
step 4200, training accuracy 0.99
step 4300, training accuracy 0.99
step 4400, training accuracy 1
step 4500, training accuracy 0.99
step 4600, training accuracy 1
step 4700, training accuracy 1
step 4800, training accuracy 1
step 4900, training accuracy 0.98
step 5000, training accuracy 0.99
step 5100, training accuracy 1
step 5200, training accuracy 0.98
step 5300, training accuracy 1
step 5400, training accuracy 1
step 5500, training accuracy 1
step 5600, training accuracy 1
step 5700, training accuracy 1
step 5800, training accuracy 1
step 5900, training accuracy 0.99
step 6000, training accuracy 1
step 6100, training accuracy 1
step 6200, training accuracy 1
step 6300, training accuracy 0.99
step 6400, training accuracy 1
step 6500, training accuracy 0.99
step 6600, training accuracy 1
step 6700, training accuracy 1
step 6800, training accuracy 1
step 6900, training accuracy 0.97
...
step 18500, training accuracy 1
step 18600, training accuracy 1
step 18700, training accuracy 1
step 18800, training accuracy 1
step 18900, training accuracy 1
step 19000, training accuracy 1
step 19100, training accuracy 1
step 19200, training accuracy 1
step 19300, training accuracy 1
step 19400, training accuracy 1
step 19500, training accuracy 1
step 19600, training accuracy 1
step 19700, training accuracy 1
step 19800, training accuracy 1
step 19900, training accuracy 1
```python
# NB: evaluating on the full test set in one go instantly fills the GPU memory.
print("test accuracy %g" % accuracy.eval(feed_dict={
    x: mnist.test.images, y_: mnist.test.labels, keep_prob: 1.0}))
```
test accuracy 0.9931
0x04 Appendix
1. About `reduction_indices`
```python
x = [[2, 2, 2], [2, 2, 2]]
```
x  = [[2, 2, 2], [2, 2, 2]]
y0 = [4 4 4]   (x summed along dimension 0)
y1 = [6 6]     (x summed along dimension 1)
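A runnable sketch of this example (TensorFlow 1.x, where `reduction_indices` is the older name for `axis`):

```python
import tensorflow as tf

x = [[2, 2, 2], [2, 2, 2]]
y0 = tf.reduce_sum(x, reduction_indices=[0])  # sum along dimension 0
y1 = tf.reduce_sum(x, reduction_indices=[1])  # sum along dimension 1

with tf.Session() as sess:
    print(sess.run(y0))  # [4 4 4]
    print(sess.run(y1))  # [6 6]
```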