The attached example code uses the NanumGothicCoding font.

■ This example shows how to build a multilayer perceptron neural network with four ReLU hidden layers and one softmax output layer.

 

▶ Example Code (PY)

import tensorflow as tf
# MNIST input pipeline bundled with TensorFlow 1.x (removed in TensorFlow 2.x).
import tensorflow.examples.tutorials.mnist as mnist

# Node counts for each layer: 784 input pixels, four hidden layers, 10 output classes.
inputLayerNodeCount   = 784
hiddenLayer1NodeCount = 200
hiddenLayer2NodeCount = 100
hiddenLayer3NodeCount = 60
hiddenLayer4NodeCount = 30
outputLayerNodeCount  = 10

# Directory where the TensorBoard summary logs are written.
summaryLogDirectoryPath = "log_mnist_4_layer_relu_1_layer_softmax"

# Training hyperparameters.
batchSize    = 100
learningRate = 0.005
epochCount   = 10

# Download (if needed) and load the MNIST dataset with one-hot encoded labels.
mnistDatasets = mnist.input_data.read_data_sets("data", one_hot = True)

# Placeholder for a batch of flattened 28x28 input images.
inputLayerTensor = tf.placeholder(tf.float32, [None, inputLayerNodeCount])

# Weights are initialized from a truncated normal distribution; biases start at zero.
hiddenLayer1WeightVariable = tf.Variable(tf.truncated_normal([inputLayerNodeCount  , hiddenLayer1NodeCount], stddev = 0.1))
hiddenLayer1BiasVariable   = tf.Variable(tf.zeros([hiddenLayer1NodeCount]))
hiddenLayer2WeightVariable = tf.Variable(tf.truncated_normal([hiddenLayer1NodeCount, hiddenLayer2NodeCount], stddev = 0.1))
hiddenLayer2BiasVariable   = tf.Variable(tf.zeros([hiddenLayer2NodeCount]))
hiddenLayer3WeightVariable = tf.Variable(tf.truncated_normal([hiddenLayer2NodeCount, hiddenLayer3NodeCount], stddev = 0.1))
hiddenLayer3BiasVariable   = tf.Variable(tf.zeros([hiddenLayer3NodeCount]))
hiddenLayer4WeightVariable = tf.Variable(tf.truncated_normal([hiddenLayer3NodeCount, hiddenLayer4NodeCount], stddev = 0.1))
hiddenLayer4BiasVariable   = tf.Variable(tf.zeros([hiddenLayer4NodeCount]))
outputLayerWeightVariable  = tf.Variable(tf.truncated_normal([hiddenLayer4NodeCount, outputLayerNodeCount ], stddev = 0.1))
outputLayerBiasVariable    = tf.Variable(tf.zeros([outputLayerNodeCount]))

# Forward pass: four ReLU hidden layers, then a linear output layer whose logits feed the softmax.
hiddenLayer1OutputTensor       = tf.nn.relu(tf.matmul(inputLayerTensor        , hiddenLayer1WeightVariable) + hiddenLayer1BiasVariable)
hiddenLayer2OutputTensor       = tf.nn.relu(tf.matmul(hiddenLayer1OutputTensor, hiddenLayer2WeightVariable) + hiddenLayer2BiasVariable)
hiddenLayer3OutputTensor       = tf.nn.relu(tf.matmul(hiddenLayer2OutputTensor, hiddenLayer3WeightVariable) + hiddenLayer3BiasVariable)
hiddenLayer4OutputTensor       = tf.nn.relu(tf.matmul(hiddenLayer3OutputTensor, hiddenLayer4WeightVariable) + hiddenLayer4BiasVariable)
outputLayerOutputTensor        =            tf.matmul(hiddenLayer4OutputTensor, outputLayerWeightVariable ) + outputLayerBiasVariable
outputLayerOutputTensorSoftmax = tf.nn.softmax(outputLayerOutputTensor)

# Placeholder for the one-hot encoded correct labels.
correctOutputTensor = tf.placeholder(tf.float32, [None, outputLayerNodeCount])

# Cross-entropy cost, computed from the raw logits for numerical stability,
# then averaged over the batch and multiplied by 100.
costTensor = tf.nn.softmax_cross_entropy_with_logits(logits = outputLayerOutputTensor, labels = correctOutputTensor)
costTensor = tf.reduce_mean(costTensor) * 100

# Fraction of predictions whose argmax matches the correct label.
correctPredictionTensor = tf.equal(tf.argmax(outputLayerOutputTensorSoftmax, 1), tf.argmax(correctOutputTensor, 1))
accuracyTensor          = tf.reduce_mean(tf.cast(correctPredictionTensor, tf.float32))

# Adam optimizer minimizing the cross-entropy cost.
optimizerOperation = tf.train.AdamOptimizer(learningRate).minimize(costTensor)

# Scalar summaries for visualizing cost and accuracy in TensorBoard.
tf.summary.scalar("cost"    , costTensor    )
tf.summary.scalar("accuracy", accuracyTensor)

summaryTensor = tf.summary.merge_all()

with tf.Session() as session:
    session.run(tf.global_variables_initializer())
    fileWriter = tf.summary.FileWriter(summaryLogDirectoryPath, graph = tf.get_default_graph())
    batchCount = int(mnistDatasets.train.num_examples / batchSize)
    # Train for epochCount epochs, logging a summary after every mini-batch step.
    for epoch in range(epochCount):
        for batch in range(batchCount):
            batchInputNDArray, batchCorrectOutputNDArray = mnistDatasets.train.next_batch(batchSize)
            _, summary = session.run([optimizerOperation, summaryTensor], feed_dict = {inputLayerTensor : batchInputNDArray, correctOutputTensor : batchCorrectOutputNDArray})
            fileWriter.add_summary(summary, epoch * batchCount + batch)
        print("Epoch : ", epoch)
    # Evaluate accuracy on the held-out test set.
    print("Accuracy : ", accuracyTensor.eval(feed_dict = {inputLayerTensor : mnistDatasets.test.images, correctOutputTensor : mnistDatasets.test.labels}))
    print("Training complete.")