Published: 2019-02-08
Implementing a Convolutional Neural Network with TensorFlow: What is TensorFlow?
TensorFlow is an open-source software library for numerical computation using data-flow graphs. It was originally developed by the Google Brain team within Google's Machine Intelligence research organization for research on machine learning and deep neural networks.
What is a tensor? A tensor is an organized multi-dimensional array; the order (or rank) of a tensor is the number of dimensions of the array required to represent it.
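To make the notion of rank concrete, here is a small sketch using NumPy arrays as a stand-in for tensors (this example is not from the original article; NumPy's `ndim` attribute corresponds to the tensor's order):

```python
import numpy as np

scalar = np.array(5.0)         # rank 0: a single number
vector = np.array([1.0, 2.0])  # rank 1: a 1-D array
matrix = np.eye(3)             # rank 2: a 2-D array
volume = np.zeros((2, 3, 4))   # rank 3: a 3-D array

for t in (scalar, vector, matrix, volume):
    print(t.ndim, t.shape)
```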
Types of tensors
What is a computational graph? The computational graph is a fundamental technique in computational algebra that has proved very productive for deriving algorithms and building software packages for neural networks and other machine learning models. The basic idea is to express a model, for example a feed-forward neural network, as a directed graph representing a sequence of computational steps. Each step in the sequence corresponds to a vertex in the graph; each step is a simple operation that takes some inputs and produces some output from them.
In the illustration below, we have two inputs w1 = x and w2 = y. These inputs flow through the graph, where every node is a mathematical operation, giving us the following outputs:
w3 = cos(x), the cosine operation
w4 = sin(y), the sine operation
w5 = w3 ∙ w4, the multiplication operation
w6 = w1 / w2, the division operation
w7 = w5 + w6, the addition operation

Now that we understand what a computational graph is, let's build our own in TensorFlow.
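The whole graph can be hand-checked step by step in plain Python (using the same constant inputs x = 5.0 and y = 3.0 that the TensorFlow code uses; this is just a sanity check, not part of the original article):

```python
import math

# inputs
w1, w2 = 5.0, 3.0    # x and y
w3 = math.cos(w1)    # cosine operation
w4 = math.sin(w2)    # sine operation
w5 = w3 * w4         # multiplication operation
w6 = w1 / w2         # division operation
w7 = w5 + w6         # addition operation
print(w7)
```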
Code:

```python
# Import the deep learning library
import tensorflow as tf

# Define our computational graph
W1 = tf.constant(5.0, name="x")
W2 = tf.constant(3.0, name="y")
W3 = tf.cos(W1, name="cos")
W4 = tf.sin(W2, name="sin")
W5 = tf.multiply(W3, W4, name="mult")
W6 = tf.divide(W1, W2, name="div")
W7 = tf.add(W5, W6, name="add")

# Open the session
with tf.Session() as sess:
    cos = sess.run(W3)
    sin = sess.run(W4)
    mult = sess.run(W5)
    div = sess.run(W6)
    add = sess.run(W7)

    # Before running TensorBoard, make sure you have generated summary data
    # in a log directory by creating a summary writer
    writer = tf.summary.FileWriter("./Desktop/ComputationGraph", sess.graph)

    # Once you have event files, run TensorBoard and provide the log directory
    # Command: tensorboard --logdir="path/to/logs"
```

Visualization with TensorBoard: What is TensorBoard? TensorBoard is a suite of web applications for inspecting and understanding TensorFlow runs and graphs. This is also one of the biggest advantages of Google's TensorFlow over Facebook's PyTorch.
Visualization of the code above in TensorBoard
With a solid understanding of convolutional neural networks, TensorFlow, and TensorBoard, let's build our first convolutional neural network, one that recognizes handwritten digits from the MNIST dataset.
The MNIST dataset
Our convolutional neural network model will be similar to the LeNet-5 architecture, consisting of convolutional, max-pooling, and non-linearity layers.
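This layer structure also explains the weight dimensions in the code below: each 5×5 convolution uses SAME padding (spatial size unchanged) and each 2×2 max-pool with stride 2 halves the spatial size, so a 28×28 input becomes 14×14 after the first pool and 7×7 after the second; that is why the first fully connected weight matrix has 7 * 7 * 64 input units. A quick hand-check of the shapes:

```python
# Trace the spatial dimensions through the network
# (SAME-padded conv keeps height and width; 2x2 pool with stride 2 halves them)
h = w = 28                 # MNIST input size
channels = 1

channels = 32              # conv1: 5x5, SAME padding -> 28x28x32
h, w = h // 2, w // 2      # pool1: 2x2 -> 14x14x32
channels = 64              # conv2: 5x5, SAME padding -> 14x14x64
h, w = h // 2, w // 2      # pool2: 2x2 -> 7x7x64

flattened = h * w * channels
print(flattened)  # 3136 = 7 * 7 * 64
```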
A 3-D simulation of a convolutional neural network
Code:

```python
# Import the deep learning library
import tensorflow as tf
import time

# Import the MNIST dataset
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)

# Network inputs and outputs
# The network's input is a 28×28 dimensional input
n = 28
m = 28
num_input = n * m  # MNIST data input
num_classes = 10   # MNIST total classes (0-9 digits)

# tf Graph input
X = tf.placeholder(tf.float32, [None, num_input])
Y = tf.placeholder(tf.float32, [None, num_classes])

# Storing the parameters of our LeNet-5 inspired Convolutional Neural Network
weights = {
    "W_ij": tf.Variable(tf.random_normal([5, 5, 1, 32])),
    "W_jk": tf.Variable(tf.random_normal([5, 5, 32, 64])),
    "W_kl": tf.Variable(tf.random_normal([7 * 7 * 64, 1024])),
    "W_lm": tf.Variable(tf.random_normal([1024, num_classes]))
}
biases = {
    "b_ij": tf.Variable(tf.random_normal([32])),
    "b_jk": tf.Variable(tf.random_normal([64])),
    "b_kl": tf.Variable(tf.random_normal([1024])),
    "b_lm": tf.Variable(tf.random_normal([num_classes]))
}

# The hyper-parameters of our Convolutional Neural Network
learning_rate = 1e-3
num_steps = 500
batch_size = 128
display_step = 10

def ConvolutionLayer(x, W, b, strides=1):
    # Convolution Layer
    x = tf.nn.conv2d(x, W, strides=[1, strides, strides, 1], padding='SAME')
    x = tf.nn.bias_add(x, b)
    return x

def ReLU(x):
    # ReLU activation function
    return tf.nn.relu(x)

def PoolingLayer(x, k=2, strides=2):
    # Max Pooling layer
    return tf.nn.max_pool(x, ksize=[1, k, k, 1],
                          strides=[1, strides, strides, 1], padding='SAME')

def Softmax(x):
    # Softmax activation function for the CNN's final output
    return tf.nn.softmax(x)

# Create model
def ConvolutionalNeuralNetwork(x, weights, biases):
    # MNIST data input is a 1-D row vector of 784 features (28×28 pixels)
    # Reshape to match picture format [Height x Width x Channel]
    # Tensor input becomes 4-D: [Batch Size, Height, Width, Channel]
    x = tf.reshape(x, shape=[-1, 28, 28, 1])

    # Convolution Layer
    Conv1 = ConvolutionLayer(x, weights["W_ij"], biases["b_ij"])
    # Non-Linearity
    ReLU1 = ReLU(Conv1)
    # Max Pooling (down-sampling)
    Pool1 = PoolingLayer(ReLU1, k=2)

    # Convolution Layer
    Conv2 = ConvolutionLayer(Pool1, weights["W_jk"], biases["b_jk"])
    # Non-Linearity
    ReLU2 = ReLU(Conv2)
    # Max Pooling (down-sampling)
    Pool2 = PoolingLayer(ReLU2, k=2)

    # Fully connected layer
    # Reshape the conv2 output to fit the fully connected layer input
    FC = tf.reshape(Pool2, [-1, weights["W_kl"].get_shape().as_list()[0]])
    FC = tf.add(tf.matmul(FC, weights["W_kl"]), biases["b_kl"])
    FC = ReLU(FC)

    # Output, class prediction
    output = tf.add(tf.matmul(FC, weights["W_lm"]), biases["b_lm"])
    return output

# Construct model
logits = ConvolutionalNeuralNetwork(X, weights, biases)
prediction = Softmax(logits)

# Softmax cross-entropy loss function
loss_function = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
    logits=logits, labels=Y))

# Optimization using the Adam Gradient Descent optimizer
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
training_process = optimizer.minimize(loss_function)

# Evaluate model
correct_pred = tf.equal(tf.argmax(prediction, 1), tf.argmax(Y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))

# Record how the loss function varies over time during training
cost = tf.summary.scalar("cost", loss_function)
training_accuracy = tf.summary.scalar("accuracy", accuracy)
train_summary_op = tf.summary.merge([cost, training_accuracy])
train_writer = tf.summary.FileWriter("./Desktop/logs",
                                     graph=tf.get_default_graph())

# Initialize the variables (i.e. assign their default value)
init = tf.global_variables_initializer()

# Start training
with tf.Session() as sess:
    # Run the initializer
    sess.run(init)
    start_time = time.time()
    for step in range(1, num_steps + 1):
        batch_x, batch_y = mnist.train.next_batch(batch_size)
        # Run optimization op (backprop)
        sess.run(training_process, feed_dict={X: batch_x, Y: batch_y})
        if step % display_step == 0 or step == 1:
            # Calculate batch loss and accuracy
            loss, acc, summary = sess.run(
                [loss_function, accuracy, train_summary_op],
                feed_dict={X: batch_x, Y: batch_y})
            train_writer.add_summary(summary, step)
            print("Step " + str(step) + ", Minibatch Loss= " +
                  "{:.4f}".format(loss) + ", Training Accuracy= " +
                  "{:.3f}".format(acc))
    end_time = time.time()
    print("Time duration: " + str(int(end_time - start_time)) + " seconds")
    print("Optimization Finished!")

    # Calculate accuracy for 256 MNIST test images
    print("Testing Accuracy:",
          sess.run(accuracy, feed_dict={X: mnist.test.images[:256],
                                        Y: mnist.test.labels[:256]}))
```

The code above looks a bit long, but if you break it down piece by piece it is not hard to follow.
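As a side note, `tf.nn.softmax_cross_entropy_with_logits` fuses the softmax and the cross-entropy loss into one numerically stable op. A minimal NumPy sketch of what it computes per example (an illustration, not TensorFlow's actual implementation):

```python
import numpy as np

def softmax_cross_entropy(logits, labels):
    # Shift by the row max for numerical stability, then apply softmax
    shifted = logits - logits.max(axis=1, keepdims=True)
    probs = np.exp(shifted) / np.exp(shifted).sum(axis=1, keepdims=True)
    # Cross-entropy against the one-hot labels, one value per example
    return -(labels * np.log(probs)).sum(axis=1)

logits = np.array([[2.0, 1.0, 0.1]])
labels = np.array([[1.0, 0.0, 0.0]])  # one-hot: true class is 0
loss = softmax_cross_entropy(logits, labels)
print(loss)
```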
After running the program, the corresponding output should look like this:
```
Step 1, Minibatch Loss= 74470.4844, Training Accuracy= 0.117
Step 10, Minibatch Loss= 20529.4141, Training Accuracy= 0.250
Step 20, Minibatch Loss= 14074.7539, Training Accuracy= 0.531
Step 30, Minibatch Loss= 7168.9839, Training Accuracy= 0.586
Step 40, Minibatch Loss= 4781.1060, Training Accuracy= 0.703
Step 50, Minibatch Loss= 3281.0979, Training Accuracy= 0.766
Step 60, Minibatch Loss= 2701.2451, Training Accuracy= 0.781
Step 70, Minibatch Loss= 2478.7153, Training Accuracy= 0.773
Step 80, Minibatch Loss= 2312.8320, Training Accuracy= 0.820
Step 90, Minibatch Loss= 2143.0774, Training Accuracy= 0.852
Step 100, Minibatch Loss= 1373.9169, Training Accuracy= 0.852
Step 110, Minibatch Loss= 1852.9535, Training Accuracy= 0.852
Step 120, Minibatch Loss= 1845.3500, Training Accuracy= 0.891
Step 130, Minibatch Loss= 1677.2566, Training Accuracy= 0.844
Step 140, Minibatch Loss= 1683.3661, Training Accuracy= 0.875
Step 150, Minibatch Loss= 1859.3821, Training Accuracy= 0.836
Step 160, Minibatch Loss= 1495.4796, Training Accuracy= 0.859
Step 170, Minibatch Loss= 609.3800, Training Accuracy= 0.914
Step 180, Minibatch Loss= 1376.5054, Training Accuracy= 0.891
Step 190, Minibatch Loss= 1085.0363, Training Accuracy= 0.891
Step 200, Minibatch Loss= 1129.7145, Training Accuracy= 0.914
Step 210, Minibatch Loss= 1488.5452, Training Accuracy= 0.906
Step 220, Minibatch Loss= 584.5027, Training Accuracy= 0.930
Step 230, Minibatch Loss= 619.9744, Training Accuracy= 0.914
Step 240, Minibatch Loss= 1575.8933, Training Accuracy= 0.891
Step 250, Minibatch Loss= 1558.5853, Training Accuracy= 0.891
Step 260, Minibatch Loss= 375.0371, Training Accuracy= 0.922
Step 270, Minibatch Loss= 1568.0758, Training Accuracy= 0.859
Step 280, Minibatch Loss= 1172.9205, Training Accuracy= 0.914
Step 290, Minibatch Loss= 1023.5415, Training Accuracy= 0.914
Step 300, Minibatch Loss= 475.9756, Training Accuracy= 0.945
Step 310, Minibatch Loss= 488.8930, Training Accuracy= 0.961
Step 320, Minibatch Loss= 1105.7720, Training Accuracy= 0.914
Step 330, Minibatch Loss= 1111.8589, Training Accuracy= 0.906
Step 340, Minibatch Loss= 842.7805, Training Accuracy= 0.930
Step 350, Minibatch Loss= 1514.0153, Training Accuracy= 0.914
Step 360, Minibatch Loss= 1722.1812, Training Accuracy= 0.875
Step 370, Minibatch Loss= 681.6041, Training Accuracy= 0.891
Step 380, Minibatch Loss= 902.8599, Training Accuracy= 0.930
Step 390, Minibatch Loss= 714.1541, Training Accuracy= 0.930
Step 400, Minibatch Loss= 1654.8883, Training Accuracy= 0.914
Step 410, Minibatch Loss= 696.6915, Training Accuracy= 0.906
Step 420, Minibatch Loss= 536.7183, Training Accuracy= 0.914
Step 430, Minibatch Loss= 1405.9148, Training Accuracy= 0.891
Step 440, Minibatch Loss= 199.4781, Training Accuracy= 0.953
Step 450, Minibatch Loss= 438.3784, Training Accuracy= 0.938
Step 460, Minibatch Loss= 409.6419, Training Accuracy= 0.969
Step 470, Minibatch Loss= 503.1216, Training Accuracy= 0.930
Step 480, Minibatch Loss= 482.6476, Training Accuracy= 0.922
Step 490, Minibatch Loss= 767.3893, Training Accuracy= 0.922
Step 500, Minibatch Loss= 626.8249, Training Accuracy= 0.930
Time duration: 657 seconds
Optimization Finished!
Testing Accuracy: 0.9453125
```
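The final testing accuracy (about 0.945) is produced by the `correct_pred` and `accuracy` ops: take the arg-max of the predictions and of the one-hot labels and average the matches. The same logic in NumPy, on hypothetical toy data (an illustration, not the actual MNIST run):

```python
import numpy as np

# Toy softmax outputs and one-hot labels for 4 examples
predictions = np.array([[0.1, 0.9], [0.8, 0.2], [0.3, 0.7], [0.6, 0.4]])
labels      = np.array([[0,   1  ], [1,   0  ], [1,   0  ], [1,   0  ]])

# A prediction is correct when its arg-max matches the label's arg-max
correct = np.argmax(predictions, axis=1) == np.argmax(labels, axis=1)
accuracy = correct.astype(np.float32).mean()
print(accuracy)  # 3 of 4 correct -> 0.75
```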