■ Demonstrates how to build a convolutional neural network (MNIST).
▶ Example Code (PY)
import numpy as np
import tensorflow as tf

# TensorFlow 1.x API; the tutorials module was removed in TensorFlow 2.x.
import tensorflow.examples.tutorials.mnist as mnist
# Hyperparameters. Note: in TensorFlow 1.x, tf.nn.dropout takes a *keep*
# probability, so 0.8 means 80% of the activations are kept.
batchSize = 128
testSize = 256
imageSize = 28
outputLayerNodeCount = 10
convolutionDropoutRate = 0.8
fullyConnectedDropoutRate = 0.5
# Load MNIST and reshape the flat 784-pixel vectors into 28x28x1 images (NHWC).
mnistDatasets = mnist.input_data.read_data_sets("data", one_hot = True)
trainInputNDArray = mnistDatasets.train.images
trainCorrectOutputNDArray = mnistDatasets.train.labels
testInputNDArray = mnistDatasets.test.images
testCorrectOutputNDArray = mnistDatasets.test.labels
trainInputNDArray = trainInputNDArray.reshape(-1, imageSize, imageSize, 1)
testInputNDArray = testInputNDArray.reshape(-1, imageSize, imageSize, 1)
# Placeholders for input images and one-hot labels.
inputLayerTensor = tf.placeholder("float", [None, imageSize, imageSize, 1])
correctOutputTensor = tf.placeholder("float", [None, outputLayerNodeCount])

# Weights: three 3x3 convolution layers (1 -> 32 -> 64 -> 128 channels),
# a fully connected layer (the 28x28 input shrinks to 4x4 after three
# 2x2 max pools, hence 128 * 4 * 4 inputs), and the output layer.
convolutionLayer1WeightVariable = tf.Variable(tf.random_normal([3, 3, 1, 32], stddev = 0.01))
convolutionLayer2WeightVariable = tf.Variable(tf.random_normal([3, 3, 32, 64], stddev = 0.01))
convolutionLayer3WeightVariable = tf.Variable(tf.random_normal([3, 3, 64, 128], stddev = 0.01))
fullyConnectedLayerWeightVariable = tf.Variable(tf.random_normal([128 * 4 * 4, 625], stddev = 0.01))
outputLayerWeightVariable = tf.Variable(tf.random_normal([625, outputLayerNodeCount], stddev = 0.01))

# Dropout keep probabilities are fed at run time (1.0 at test time).
convolutionDropoutRateTensor = tf.placeholder("float")
fullyConnectedDropoutRateTensor = tf.placeholder("float")
# Convolution block 1: 3x3 conv -> ReLU -> 2x2 max pool -> dropout (28x28 -> 14x14).
convolutionLayer1OutputTensor = tf.nn.conv2d(inputLayerTensor, convolutionLayer1WeightVariable, strides = [1, 1, 1, 1], padding = "SAME")
convolutionLayer1OutputTensorReLU = tf.nn.relu(convolutionLayer1OutputTensor)
convolutionLayer1OutputTensorMaxPool = tf.nn.max_pool(convolutionLayer1OutputTensorReLU, ksize = [1, 2, 2, 1], strides = [1, 2, 2, 1], padding = "SAME")
convolutionLayer1OutputTensorDropout = tf.nn.dropout(convolutionLayer1OutputTensorMaxPool, convolutionDropoutRateTensor)

# Convolution block 2: same structure (14x14 -> 7x7).
convolutionLayer2OutputTensor = tf.nn.conv2d(convolutionLayer1OutputTensorDropout, convolutionLayer2WeightVariable, strides = [1, 1, 1, 1], padding = "SAME")
convolutionLayer2OutputTensorReLU = tf.nn.relu(convolutionLayer2OutputTensor)
convolutionLayer2OutputTensorMaxPool = tf.nn.max_pool(convolutionLayer2OutputTensorReLU, ksize = [1, 2, 2, 1], strides = [1, 2, 2, 1], padding = "SAME")
convolutionLayer2OutputTensorDropout = tf.nn.dropout(convolutionLayer2OutputTensorMaxPool, convolutionDropoutRateTensor)

# Convolution block 3: conv -> ReLU -> max pool (7x7 -> 4x4), then flatten and dropout.
convolutionLayer3OutputTensor = tf.nn.conv2d(convolutionLayer2OutputTensorDropout, convolutionLayer3WeightVariable, strides = [1, 1, 1, 1], padding = "SAME")
convolutionLayer3OutputTensorReLU = tf.nn.relu(convolutionLayer3OutputTensor)
convolutionLayer3OutputTensorMaxPool = tf.nn.max_pool(convolutionLayer3OutputTensorReLU, ksize = [1, 2, 2, 1], strides = [1, 2, 2, 1], padding = "SAME")
convolutionLayer3OutputTensorReshape = tf.reshape(convolutionLayer3OutputTensorMaxPool, [-1, fullyConnectedLayerWeightVariable.get_shape().as_list()[0]])
convolutionLayer3OutputTensorDropout = tf.nn.dropout(convolutionLayer3OutputTensorReshape, convolutionDropoutRateTensor)

# Fully connected layer (2048 -> 625) with ReLU and dropout.
fullyConnectedLayerOutputTensor = tf.matmul(convolutionLayer3OutputTensorDropout, fullyConnectedLayerWeightVariable)
fullyConnectedLayerOutputTensorReLU = tf.nn.relu(fullyConnectedLayerOutputTensor)
fullyConnectedLayerOutputTensorDropout = tf.nn.dropout(fullyConnectedLayerOutputTensorReLU, fullyConnectedDropoutRateTensor)

# Output layer logits (625 -> 10).
outputLayerOutputTensor = tf.matmul(fullyConnectedLayerOutputTensorDropout, outputLayerWeightVariable)
# Softmax cross-entropy loss, RMSProp optimizer, and the predicted class.
costTensor = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits = outputLayerOutputTensor, labels = correctOutputTensor))
optimizerOperation = tf.train.RMSPropOptimizer(0.001, 0.9).minimize(costTensor)
predictTensor = tf.argmax(outputLayerOutputTensor, 1)
with tf.Session() as session:
    session.run(tf.global_variables_initializer())

    # Train for 10 epochs over mini-batches; after each epoch, measure
    # accuracy on a random sample of the test set (keep probabilities 1.0).
    for i in range(10):
        trainingBatch = zip(range(0, len(trainInputNDArray), batchSize), range(batchSize, len(trainInputNDArray) + 1, batchSize))

        for startIndex, endIndex in trainingBatch:
            session.run(optimizerOperation, feed_dict = {inputLayerTensor : trainInputNDArray[startIndex:endIndex],
                correctOutputTensor : trainCorrectOutputNDArray[startIndex:endIndex], convolutionDropoutRateTensor : convolutionDropoutRate,
                fullyConnectedDropoutRateTensor : fullyConnectedDropoutRate})

        testIndexNDArray = np.arange(len(testInputNDArray))
        np.random.shuffle(testIndexNDArray)
        testIndexNDArray = testIndexNDArray[0:testSize]

        print("Epoch : ", i + 1, "Accuracy : ", np.mean(np.argmax(testCorrectOutputNDArray[testIndexNDArray], axis = 1) == session.run(predictTensor,
            feed_dict = {inputLayerTensor : testInputNDArray[testIndexNDArray], correctOutputTensor : testCorrectOutputNDArray[testIndexNDArray],
            convolutionDropoutRateTensor : 1.0, fullyConnectedDropoutRateTensor : 1.0})))

    # Final accuracy over the whole test set (dropout disabled).
    scoreTensor = tf.equal(tf.argmax(outputLayerOutputTensor, 1), tf.argmax(correctOutputTensor, 1))
    accuracyTensor = tf.reduce_mean(tf.cast(scoreTensor, tf.float32))

    print("Accuracy : ", session.run(accuracyTensor, feed_dict = {inputLayerTensor : testInputNDArray, correctOutputTensor : testCorrectOutputNDArray,
        convolutionDropoutRateTensor : 1.0, fullyConnectedDropoutRateTensor : 1.0}))
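The fully connected weight shape `[128 * 4 * 4, 625]` follows from the three 2x2 max pools: with `padding = "SAME"` and stride 2, the spatial size is halved and rounded up each time, so 28 → 14 → 7 → 4. A minimal sketch of that size computation (plain Python, independent of TensorFlow):

```python
import math

# Spatial size after each 2x2 max pool with stride 2 and SAME padding.
# SAME padding rounds the output size up (ceil division).
size = 28
sizes = []

for _ in range(3):
    size = math.ceil(size / 2)
    sizes.append(size)

print(sizes)  # [14, 7, 4]

# The final feature map is 4x4 with 128 channels, so flattening it
# yields 4 * 4 * 128 = 2048 inputs for the fully connected layer.
print(4 * 4 * 128)  # 2048
```

This is why `convolutionLayer3OutputTensorMaxPool` can be reshaped to `[-1, 128 * 4 * 4]` before the matrix multiply.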