

Let's Build an AI That Recognizes Handwritten Digits: Table of Contents

  1. Preparing and checking the image data
  2. Loading an image
  3. Loading the list of images
  4. Preparing the training and test data
  5. Opening the saved data
  6. Building the model
  7. Training the model
  8. Evaluating the model
  9. Recognizing the training data (1)
  10. Recognizing the training data (2)
  11. Recognizing the training data (3)
  12. Recognizing the test data
  13. Saving the model and weight parameters
  14. Loading the trained model
  15. Loading the trained model and recognizing digits

Training the Model

Let's feed the training data into the model we built and train it. In Keras, training is as simple as calling model.fit().

Training the model (06-train.py)
import numpy as np
from keras.utils import to_categorical
from keras.models import Sequential
from keras.layers import Dense

# Open and load the saved data files
x_train = np.load('train_X_data.npy')
y_train = np.load('train_Y_data.npy')
x_test = np.load('test_X_data.npy')
y_test = np.load('test_Y_data.npy')

# One-hot encode the correct labels
y_train = to_categorical(y_train, 10)
y_test = to_categorical(y_test, 10)

# Build the model
model = Sequential()
model.add(Dense(128, activation='relu', input_dim=225))  # input_dim = 15 x 15 = 225
model.add(Dense(10, activation='softmax'))

# Compile the model
model.compile(optimizer='rmsprop',
              loss='categorical_crossentropy',
              metrics=['accuracy'])

model.summary()

# Train the model (this single call also displays the training progress)
model.fit(x_train, y_train,
          batch_size=20,
          epochs=30,
          verbose=1)
Using TensorFlow backend.
Model: "sequential_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
dense_1 (Dense)              (None, 128)               28928
_________________________________________________________________
dense_2 (Dense)              (None, 10)                1290
=================================================================
Total params: 30,218
Trainable params: 30,218
Non-trainable params: 0
_________________________________________________________________
Epoch 1/30
80/80 [==============================] - 0s 2ms/step - loss: 2.3014 - accuracy: 0.1625
Epoch 2/30
80/80 [==============================] - 0s 62us/step - loss: 1.7425 - accuracy: 0.5000
Epoch 3/30
80/80 [==============================] - 0s 237us/step - loss: 1.4521 - accuracy: 0.6625
Epoch 4/30
80/80 [==============================] - 0s 125us/step - loss: 1.2113 - accuracy: 0.8500
Epoch 5/30
80/80 [==============================] - 0s 100us/step - loss: 1.0363 - accuracy: 0.8750
Epoch 6/30
80/80 [==============================] - 0s 100us/step - loss: 0.8712 - accuracy: 0.8875
Epoch 7/30
80/80 [==============================] - 0s 100us/step - loss: 0.7511 - accuracy: 0.9125
Epoch 8/30
80/80 [==============================] - 0s 100us/step - loss: 0.6510 - accuracy: 0.9375
Epoch 9/30
80/80 [==============================] - 0s 100us/step - loss: 0.5552 - accuracy: 0.9250
Epoch 10/30
80/80 [==============================] - 0s 112us/step - loss: 0.4797 - accuracy: 0.9625
Epoch 11/30
80/80 [==============================] - 0s 112us/step - loss: 0.4069 - accuracy: 0.9875
Epoch 12/30
80/80 [==============================] - 0s 112us/step - loss: 0.3509 - accuracy: 1.0000
Epoch 13/30
80/80 [==============================] - 0s 100us/step - loss: 0.3064 - accuracy: 0.9750
Epoch 14/30
80/80 [==============================] - 0s 100us/step - loss: 0.2646 - accuracy: 1.0000
Epoch 15/30
80/80 [==============================] - 0s 112us/step - loss: 0.2298 - accuracy: 1.0000
Epoch 16/30
80/80 [==============================] - 0s 112us/step - loss: 0.1951 - accuracy: 1.0000
Epoch 17/30
80/80 [==============================] - 0s 87us/step - loss: 0.1712 - accuracy: 1.0000
Epoch 18/30
80/80 [==============================] - 0s 87us/step - loss: 0.1463 - accuracy: 1.0000
Epoch 19/30
80/80 [==============================] - 0s 100us/step - loss: 0.1263 - accuracy: 1.0000
Epoch 20/30
80/80 [==============================] - 0s 100us/step - loss: 0.1136 - accuracy: 1.0000
Epoch 21/30
80/80 [==============================] - 0s 87us/step - loss: 0.0980 - accuracy: 1.0000
Epoch 22/30
80/80 [==============================] - 0s 237us/step - loss: 0.0820 - accuracy: 1.0000
Epoch 23/30
80/80 [==============================] - 0s 224us/step - loss: 0.0763 - accuracy: 1.0000
Epoch 24/30
80/80 [==============================] - 0s 125us/step - loss: 0.0626 - accuracy: 1.0000
Epoch 25/30
80/80 [==============================] - 0s 100us/step - loss: 0.0584 - accuracy: 1.0000
Epoch 26/30
80/80 [==============================] - 0s 100us/step - loss: 0.0471 - accuracy: 1.0000
Epoch 27/30
80/80 [==============================] - 0s 100us/step - loss: 0.0427 - accuracy: 1.0000
Epoch 28/30
80/80 [==============================] - 0s 100us/step - loss: 0.0356 - accuracy: 1.0000
Epoch 29/30
80/80 [==============================] - 0s 100us/step - loss: 0.0314 - accuracy: 1.0000
Epoch 30/30
80/80 [==============================] - 0s 112us/step - loss: 0.0272 - accuracy: 1.0000
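
The Param # column in the summary above follows from the standard formula for a Dense layer: inputs × units + units (one bias per unit). As a quick sanity check of the numbers reported for this model:

```python
# Parameter count for a Dense layer: inputs * units + units (bias terms)
dense_1 = 225 * 128 + 128   # 225 pixel inputs -> 128 hidden units
dense_2 = 128 * 10 + 10     # 128 hidden units -> 10 output classes
total = dense_1 + dense_2

print(dense_1)  # 28928
print(dense_2)  # 1290
print(total)    # 30218
```

These match the 28,928 and 1,290 entries and the total of 30,218 trainable parameters shown by model.summary().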

The 80 training samples are split into batches of 20, and the parameters are optimized so that the error becomes smaller; after each pass, the loss and the recognition accuracy (accuracy) at that point are reported. In the first pass (epoch), the loss was 2.3014 and the accuracy 16.25%. Repeating this training 30 times, you can see the loss gradually decrease and the recognition accuracy improve; in the end, the model recognizes the training data with 100% accuracy. Note that because the data was shuffled into a random order when it was created, your results will not necessarily match these exactly.
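
The batch/epoch bookkeeping described above can be checked with a little arithmetic: with 80 samples and batch_size=20, each epoch performs 4 parameter updates, so 30 epochs perform 120 updates in total.

```python
import math

n_samples = 80    # size of the training set
batch_size = 20   # samples per gradient update (batch_size in model.fit)
epochs = 30       # number of passes over the whole training set

# One parameter update per batch; the last batch may be smaller, hence ceil
steps_per_epoch = math.ceil(n_samples / batch_size)
total_updates = steps_per_epoch * epochs

print(steps_per_epoch)  # 4 updates per epoch
print(total_updates)    # 120 updates over the whole run
```

This is why the progress bar counts 80/80 samples each epoch while the loss moves in four small steps per pass.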

Back to the table of contents