
■ Creating a convolutional neural network (MNIST)
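
The listing below builds a two-layer convolutional network for MNIST digit classification: two 4 x 4 convolution + 2 x 2 max-pooling stages (1 -> 16 -> 32 channels), a 256-unit fully connected layer with dropout, and a 10-way softmax output trained with the Adam optimizer. The code targets the TensorFlow 1.x API.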

------------------------------------------------------------------------------------------------------------------------

import tensorflow as tf

from tensorflow.examples.tutorials.mnist import input_data

imageSize            = 28    # MNIST images are 28 x 28 pixels
batchSize            = 100
outputLayerNodeCount = 10    # one output node per digit class (0-9)
learningRate         = 0.001
epochCount           = 10
dropoutRate          = 0.8   # keep probability passed to tf.nn.dropout

mnistDatasets = input_data.read_data_sets("data", one_hot = True)

# Placeholders: input images, one-hot correct labels, and the dropout keep probability.
inputLayerTensor    = tf.placeholder(tf.float32, [None, imageSize, imageSize, 1])
correctOutputTensor = tf.placeholder(tf.float32, [None, outputLayerNodeCount])
dropoutRateTensor   = tf.placeholder(tf.float32)

# Weights: two 4 x 4 convolution kernels (1 -> 16 -> 32 channels), a fully connected
# layer (7 * 7 * 32 -> 256), and the output layer (256 -> 10).
convolutionLayer1WeightVariable   = tf.Variable(tf.random_normal([4, 4, 1, 16              ], stddev = 0.01))
convolutionLayer2WeightVariable   = tf.Variable(tf.random_normal([4, 4, 16, 32             ], stddev = 0.01))
fullyConnectedLayerWeightVariable = tf.Variable(tf.random_normal([7 * 7 * 32, 256          ], stddev = 0.01))
outputLayerWeightVariable         = tf.Variable(tf.random_normal([256, outputLayerNodeCount], stddev = 0.01))

# Convolution layer 1: 28 x 28 x 1 -> 28 x 28 x 16, then 2 x 2 max pooling -> 14 x 14 x 16.
convolutionLayer1OutputTensor   = tf.nn.conv2d(inputLayerTensor, convolutionLayer1WeightVariable, strides = [1, 1, 1, 1], padding = "SAME")
convolutionLayer1OutputTensor   = tf.nn.relu(convolutionLayer1OutputTensor)
convolutionLayer1OutputTensor   = tf.nn.max_pool(convolutionLayer1OutputTensor, ksize = [1, 2, 2, 1], strides = [1, 2, 2, 1], padding = "SAME")

# Convolution layer 2: 14 x 14 x 16 -> 14 x 14 x 32, then 2 x 2 max pooling -> 7 x 7 x 32.
convolutionLayer2OutputTensor   = tf.nn.conv2d(convolutionLayer1OutputTensor, convolutionLayer2WeightVariable, strides = [1, 1, 1, 1], padding = "SAME")
convolutionLayer2OutputTensor   = tf.nn.relu(convolutionLayer2OutputTensor)
convolutionLayer2OutputTensor   = tf.nn.max_pool(convolutionLayer2OutputTensor, ksize = [1, 2, 2, 1], strides = [1, 2, 2, 1], padding = "SAME")

# Fully connected layer: flatten to 7 * 7 * 32, project to 256 units, apply dropout.
fullyConnectedLayerOutputTensor = tf.reshape(convolutionLayer2OutputTensor, [-1, 7 * 7 * 32])
fullyConnectedLayerOutputTensor = tf.matmul(fullyConnectedLayerOutputTensor, fullyConnectedLayerWeightVariable)
fullyConnectedLayerOutputTensor = tf.nn.relu(fullyConnectedLayerOutputTensor)
fullyConnectedLayerOutputTensor = tf.nn.dropout(fullyConnectedLayerOutputTensor, dropoutRateTensor)

# Output layer: 256 -> 10 logits (softmax is applied inside the cost function).
outputLayerOutputTensor         = tf.matmul(fullyConnectedLayerOutputTensor, outputLayerWeightVariable)

# Cost: mean softmax cross entropy between the logits and the one-hot labels.
costTensor = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits = outputLayerOutputTensor, labels = correctOutputTensor))

optimizerOperation = tf.train.AdamOptimizer(learningRate).minimize(costTensor)

with tf.Session() as session:
    session.run(tf.global_variables_initializer())

    totalBatch = int(mnistDatasets.train.num_examples / batchSize)

    for epoch in range(epochCount):
        totalCost = 0

        for batch in range(totalBatch):
            batchInputNDArray, batchCorrectOutputNDArray = mnistDatasets.train.next_batch(batchSize)

            # read_data_sets returns flat 784-vectors; reshape to 28 x 28 x 1 for conv2d.
            batchInputNDArray = batchInputNDArray.reshape(-1, imageSize, imageSize, 1)

            _, cost = session.run([optimizerOperation, costTensor],
                feed_dict = {inputLayerTensor : batchInputNDArray, correctOutputTensor : batchCorrectOutputNDArray, dropoutRateTensor : dropoutRate})

            totalCost += cost

        print("Epoch : ", "%04d" % (epoch + 1), "Average cost : ", "{:.4f}".format(totalCost / totalBatch))

    print("Training complete!")

    # Evaluate on the test set with dropout disabled (keep probability 1).
    scoreTensor    = tf.equal(tf.argmax(outputLayerOutputTensor, 1), tf.argmax(correctOutputTensor, 1))
    accuracyTensor = tf.reduce_mean(tf.cast(scoreTensor, tf.float32))

    print("Accuracy : ", session.run(accuracyTensor,
        feed_dict = {inputLayerTensor : mnistDatasets.test.images.reshape(-1, imageSize, imageSize, 1), correctOutputTensor : mnistDatasets.test.labels, dropoutRateTensor : 1}))

------------------------------------------------------------------------------------------------------------------------
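
A compatibility note: tf.placeholder, tf.Session, and the tensorflow.examples.tutorials.mnist module were removed in TensorFlow 2.x, so the listing above runs as-is only on TensorFlow 1.x. Below is a minimal sketch (not part of the original code) of the adjustments for TensorFlow 2.x: it assumes the tf.compat.v1 shim and loads MNIST through tf.keras.datasets instead of read_data_sets; the next_batch helper then has to be replaced with manual slicing, shown here for a single illustrative batch without shuffling.

------------------------------------------------------------------------------------------------------------------------

import numpy as np
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()   # restore TensorFlow 1.x graph/session semantics

# tf.keras.datasets ships MNIST as uint8 arrays; convert them to the float
# [None, 28, 28, 1] images and one-hot labels the placeholders above expect.
(trainImages, trainLabels), (testImages, testLabels) = tf.keras.datasets.mnist.load_data()

trainImages = trainImages.reshape(-1, 28, 28, 1).astype(np.float32) / 255.0
testImages  = testImages.reshape(-1, 28, 28, 1).astype(np.float32) / 255.0
trainLabels = np.eye(10, dtype = np.float32)[trainLabels]
testLabels  = np.eye(10, dtype = np.float32)[testLabels]

# One illustrative mini batch, replacing mnistDatasets.train.next_batch(batchSize).
batchInputNDArray         = trainImages[0 : 100]
batchCorrectOutputNDArray = trainLabels[0 : 100]

------------------------------------------------------------------------------------------------------------------------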
