Hyperparameter Optimization of a Keras Classification Model with Optuna - MNIST
Deep Learning & Machine Learning/Keras | 2023. 11. 5. 12:27
This is example code that uses Optuna to optimize the hyperparameters of a Keras classification model on the MNIST dataset.
First written on 2023. 11. 5
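If Optuna is not installed yet, it can be installed with pip install optuna. The example follows the usual Optuna pattern: an objective function asks the trial object for candidate hyperparameter values, trains and evaluates a model with them, and returns the score to maximize; the study then repeats this for a fixed number of trials. Below is a minimal sketch of that pattern with a toy objective (this sketch is illustrative only and is not part of the original example):

import optuna

def objective(trial):
    # Each trial suggests a candidate value and returns the score to maximize
    x = trial.suggest_float("x", -10.0, 10.0)
    return -(x - 2.0) ** 2  # largest when x is close to 2

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=20)
print(study.best_params, study.best_value)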
Here is the execution result.
(tensorflow-dev) webnautes@webnautesui-MacBookAir keras_example % /Users/webnautes/miniforge3/envs/tensorflow-dev/bin/python /Users/webnautes/keras_example/optuna_mnist.py
[I 2023-11-05 12:06:30,500] A new study created in memory with name: no-name-42c2fbc2-e7d7-4e40-a7af-f9c08dc199a5
Metal device set to: Apple M1

systemMemory: 16.00 GB
maxCacheSize: 5.33 GB

2023-11-05 12:06:30.507666: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:305] Could not identify NUMA node of platform GPU ID 0, defaulting to 0. Your kernel may not have been built with NUMA support.
2023-11-05 12:06:30.508013: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:271] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 0 MB memory) -> physical PluggableDevice (device: 0, name: METAL, pci bus id: <undefined>)
2023-11-05 12:06:31.499977: W tensorflow/core/platform/profile_utils/cpu_utils.cc:128] Failed to get CPU frequency: 0 Hz
2023-11-05 12:06:31.860377: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:113] Plugin optimizer for device_type GPU is enabled.
2023-11-05 12:07:44.675712: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:113] Plugin optimizer for device_type GPU is enabled.
[I 2023-11-05 12:07:47,456] Trial 0 finished with value: 0.9894000291824341 and parameters: {'learning_rate': 0.00021544434836655099, 'num_filters1': 32, 'num_filters2': 32, 'kernel_size': 4, 'dense_units': 192}. Best is trial 0 with value: 0.9894000291824341.
2023-11-05 12:07:48.444306: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:113] Plugin optimizer for device_type GPU is enabled.
2023-11-05 12:08:51.064880: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:113] Plugin optimizer for device_type GPU is enabled.
[I 2023-11-05 12:08:53,752] Trial 1 finished with value: 0.9911999702453613 and parameters: {'learning_rate': 0.0007349619283166518, 'num_filters1': 16, 'num_filters2': 16, 'kernel_size': 5, 'dense_units': 128}. Best is trial 1 with value: 0.9911999702453613.
2023-11-05 12:08:54.587205: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:113] Plugin optimizer for device_type GPU is enabled.
2023-11-05 12:10:16.848442: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:113] Plugin optimizer for device_type GPU is enabled.
[I 2023-11-05 12:10:19,640] Trial 2 finished with value: 0.9897000193595886 and parameters: {'learning_rate': 0.0004093314073937259, 'num_filters1': 32, 'num_filters2': 48, 'kernel_size': 5, 'dense_units': 64}. Best is trial 1 with value: 0.9911999702453613.
2023-11-05 12:10:20.353493: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:113] Plugin optimizer for device_type GPU is enabled.
2023-11-05 12:12:00.459286: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:113] Plugin optimizer for device_type GPU is enabled.
[I 2023-11-05 12:12:03,450] Trial 3 finished with value: 0.989799976348877 and parameters: {'learning_rate': 0.00023004696767179203, 'num_filters1': 48, 'num_filters2': 48, 'kernel_size': 4, 'dense_units': 128}. Best is trial 1 with value: 0.9911999702453613.
2023-11-05 12:12:03.939290: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:113] Plugin optimizer for device_type GPU is enabled.
2023-11-05 12:13:25.562616: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:113] Plugin optimizer for device_type GPU is enabled.
[I 2023-11-05 12:13:28,293] Trial 4 finished with value: 0.9882000088691711 and parameters: {'learning_rate': 0.003659211925380224, 'num_filters1': 32, 'num_filters2': 48, 'kernel_size': 5, 'dense_units': 64}. Best is trial 1 with value: 0.9911999702453613.
2023-11-05 12:13:29.346637: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:113] Plugin optimizer for device_type GPU is enabled.
2023-11-05 12:14:51.605530: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:113] Plugin optimizer for device_type GPU is enabled.
[I 2023-11-05 12:14:54,476] Trial 5 finished with value: 0.9898999929428101 and parameters: {'learning_rate': 0.0007277234012592447, 'num_filters1': 48, 'num_filters2': 16, 'kernel_size': 3, 'dense_units': 64}. Best is trial 1 with value: 0.9911999702453613.
2023-11-05 12:14:55.166363: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:113] Plugin optimizer for device_type GPU is enabled.
2023-11-05 12:16:09.619551: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:113] Plugin optimizer for device_type GPU is enabled.
[I 2023-11-05 12:16:12,539] Trial 6 finished with value: 0.9918000102043152 and parameters: {'learning_rate': 0.0007298226903438897, 'num_filters1': 32, 'num_filters2': 48, 'kernel_size': 3, 'dense_units': 192}. Best is trial 6 with value: 0.9918000102043152.
2023-11-05 12:16:13.083236: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:113] Plugin optimizer for device_type GPU is enabled.
2023-11-05 12:17:26.845638: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:113] Plugin optimizer for device_type GPU is enabled.
[I 2023-11-05 12:17:29,655] Trial 7 finished with value: 0.9897000193595886 and parameters: {'learning_rate': 0.0015667090957721314, 'num_filters1': 32, 'num_filters2': 48, 'kernel_size': 3, 'dense_units': 64}. Best is trial 6 with value: 0.9918000102043152.
2023-11-05 12:17:30.355420: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:113] Plugin optimizer for device_type GPU is enabled.
2023-11-05 12:18:32.657156: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:113] Plugin optimizer for device_type GPU is enabled.
[I 2023-11-05 12:18:35,375] Trial 8 finished with value: 0.9904999732971191 and parameters: {'learning_rate': 0.0011610811602267334, 'num_filters1': 16, 'num_filters2': 16, 'kernel_size': 4, 'dense_units': 192}. Best is trial 6 with value: 0.9918000102043152.
2023-11-05 12:18:36.245704: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:113] Plugin optimizer for device_type GPU is enabled.
2023-11-05 12:20:28.384978: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:113] Plugin optimizer for device_type GPU is enabled.
[I 2023-11-05 12:20:31,383] Trial 9 finished with value: 0.9789999723434448 and parameters: {'learning_rate': 0.00837925111006837, 'num_filters1': 64, 'num_filters2': 48, 'kernel_size': 5, 'dense_units': 64}. Best is trial 6 with value: 0.9918000102043152.
Best Parameters: {'learning_rate': 0.0007298226903438897, 'num_filters1': 32, 'num_filters2': 48, 'kernel_size': 3, 'dense_units': 192}
Best Accuracy: 0.9918000102043152
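Besides reading the console log, the finished study object can also be inspected programmatically. The following is a small sketch that is not part of the original post; it assumes the study object created in the code below, and that pandas and plotly are installed, since Optuna's dataframe and visualization helpers rely on them:

# Sketch: inspect the finished study (assumes pandas and plotly are available)
df = study.trials_dataframe()  # one row per trial: trial number, value, and params_* columns
print(df[["number", "value", "params_learning_rate", "params_dense_units"]])

fig = optuna.visualization.plot_optimization_history(study)  # trial values and running best
fig.show()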
Here is the full code.
import optuna
import tensorflow as tf
from tensorflow.keras.datasets import mnist
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense
from tensorflow.keras.models import Sequential
from tensorflow.keras.optimizers import Adam


def objective(trial):
    # Hyperparameters to be tuned by Optuna
    learning_rate = trial.suggest_float("learning_rate", 1e-4, 1e-2, log=True)
    num_filters1 = trial.suggest_int("num_filters1", 16, 64, step=16)
    num_filters2 = trial.suggest_int("num_filters2", 16, 64, step=16)
    kernel_size = trial.suggest_int("kernel_size", 3, 5)
    dense_units = trial.suggest_int("dense_units", 64, 256, step=64)

    model = build_model(learning_rate, num_filters1, num_filters2, kernel_size, dense_units)
    test_accuracy = train_and_evaluate(model)

    # The returned value is what the study maximizes
    return test_accuracy


def build_model(learning_rate, num_filters1, num_filters2, kernel_size, dense_units):
    # Simple CNN: two Conv/MaxPool blocks followed by a dense classifier
    model = Sequential([
        Conv2D(num_filters1, kernel_size, activation='relu', input_shape=(28, 28, 1)),
        MaxPooling2D(2),
        Conv2D(num_filters2, kernel_size, activation='relu'),
        MaxPooling2D(2),
        Flatten(),
        Dense(dense_units, activation='relu'),
        Dense(10, activation='softmax')
    ])

    optimizer = Adam(learning_rate=learning_rate)
    model.compile(optimizer=optimizer,
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])

    return model


def train_and_evaluate(model):
    # Load MNIST, add a channel dimension, and scale pixels to [0, 1]
    (x_train, y_train), (x_test, y_test) = mnist.load_data()
    x_train = x_train.reshape(-1, 28, 28, 1) / 255.0
    x_test = x_test.reshape(-1, 28, 28, 1) / 255.0

    model.fit(x_train, y_train, epochs=5, batch_size=32, verbose=0)
    _, test_accuracy = model.evaluate(x_test, y_test, verbose=0)

    return test_accuracy


study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=10)

best_params = study.best_params
best_accuracy = study.best_value

print("Best Parameters:", best_params)
print("Best Accuracy:", best_accuracy)
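As a possible follow-up step that is not included in the original code, the best hyperparameters found by the study can be fed back into the same build_model function to train a final model:

# Sketch: retrain a final model with the best parameters found by the study
best = study.best_params
final_model = build_model(
    best["learning_rate"],
    best["num_filters1"],
    best["num_filters2"],
    best["kernel_size"],
    best["dense_units"],
)
final_accuracy = train_and_evaluate(final_model)  # trains for 5 epochs and evaluates on the test set
print("Final test accuracy:", final_accuracy)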
Related post
Hyperparameter Optimization of a Keras Classification Model with Optuna - iris
https://webnautes.tistory.com/2131