2024/12/1 - train

Tags: train, code

 

ViT - train with EmoSet

result
Epoch [9/100], Loss: 0.0443
Test Accuracy after Epoch 9: 19.05%
model_epoch_9_accuracy_19.05.pth
337051.3KB
 

ViT - train with Monet EmoSet

result
model_epoch_29_accuracy_19.05.pth
337051.6KB
 

ViT - train with Crawling dataset

result
💡
Device: cuda
Some weights of ViTForImageClassification were not initialized from the model checkpoint at google/vit-base-patch16-224 and are newly initialized because the shapes did not match:
  • classifier.bias: found shape torch.Size([1000]) in the checkpoint and torch.Size([6]) in the model instantiated
  • classifier.weight: found shape torch.Size([1000, 768]) in the checkpoint and torch.Size([6, 768]) in the model instantiated
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Exception ignored in: <function _MultiProcessingDataLoaderIter.__del__ at 0x7f030812c4c0>
Traceback (most recent call last):
  File "/home/rnjsxodhks/anaconda3/envs/UGRP/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 1477, in __del__
    self._shutdown_workers()
  File "/home/rnjsxodhks/anaconda3/envs/UGRP/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 1460, in _shutdown_workers
    if w.is_alive():
  File "/home/rnjsxodhks/anaconda3/envs/UGRP/lib/python3.9/multiprocessing/process.py", line 160, in is_alive
    assert self._parent_pid == os.getpid(), 'can only test a child process'
AssertionError: can only test a child process
(The same traceback is printed four times.)
Epoch [1/100], Train Loss: 1.4068, Test Loss: 2.2409, Test Accuracy: 16.67%
Top 10 Models (by accuracy): Rank 1: Accuracy = 16.67%, Model Path = /home/rnjsxodhks/code/UGRP/ViT with Crawling top models survey/model_epoch_1_accuracy_16.67.pth Epoch [2/100], Train Loss: 0.9329, Test Loss: 2.4076, Test Accuracy: 9.52% Epoch [3/100], Train Loss: 0.5858, Test Loss: 2.9339, Test Accuracy: 16.67% Epoch [4/100], Train Loss: 0.3357, Test Loss: 2.8058, Test Accuracy: 16.67% Epoch [5/100], Train Loss: 0.1854, Test Loss: 2.6044, Test Accuracy: 35.71% Epoch [6/100], Train Loss: 0.1129, Test Loss: 3.8690, Test Accuracy: 19.05% Epoch [7/100], Train Loss: 0.0844, Test Loss: 3.6341, Test Accuracy: 19.05% Epoch [8/100], Train Loss: 0.0721, Test Loss: 3.8714, Test Accuracy: 21.43% Epoch [9/100], Train Loss: 0.0688, Test Loss: 4.1343, Test Accuracy: 21.43% Epoch [10/100], Train Loss: 0.0587, Test Loss: 4.4832, Test Accuracy: 19.05% Epoch [11/100], Train Loss: 0.0575, Test Loss: 4.2719, Test Accuracy: 16.67%
Top 10 Models (by accuracy): Rank 1: Accuracy = 35.71%, Model Path = /home/rnjsxodhks/code/UGRP/ViT with Crawling top models survey/model_epoch_5_accuracy_35.71.pth Rank 2: Accuracy = 21.43%, Model Path = /home/rnjsxodhks/code/UGRP/ViT with Crawling top models survey/model_epoch_8_accuracy_21.43.pth Rank 3: Accuracy = 21.43%, Model Path = /home/rnjsxodhks/code/UGRP/ViT with Crawling top models survey/model_epoch_9_accuracy_21.43.pth Rank 4: Accuracy = 19.05%, Model Path = /home/rnjsxodhks/code/UGRP/ViT with Crawling top models survey/model_epoch_6_accuracy_19.05.pth Rank 5: Accuracy = 19.05%, Model Path = /home/rnjsxodhks/code/UGRP/ViT with Crawling top models survey/model_epoch_7_accuracy_19.05.pth Rank 6: Accuracy = 19.05%, Model Path = /home/rnjsxodhks/code/UGRP/ViT with Crawling top models survey/model_epoch_10_accuracy_19.05.pth Rank 7: Accuracy = 16.67%, Model Path = /home/rnjsxodhks/code/UGRP/ViT with Crawling top models survey/model_epoch_1_accuracy_16.67.pth Rank 8: Accuracy = 16.67%, Model Path = /home/rnjsxodhks/code/UGRP/ViT with Crawling top models survey/model_epoch_3_accuracy_16.67.pth Rank 9: Accuracy = 16.67%, Model Path = /home/rnjsxodhks/code/UGRP/ViT with Crawling top models survey/model_epoch_4_accuracy_16.67.pth Rank 10: Accuracy = 16.67%, Model Path = /home/rnjsxodhks/code/UGRP/ViT with Crawling top models survey/model_epoch_11_accuracy_16.67.pth Epoch [12/100], Train Loss: 0.0540, Test Loss: 4.7256, Test Accuracy: 19.05% Epoch [13/100], Train Loss: 0.0516, Test Loss: 4.6765, Test Accuracy: 19.05% Epoch [14/100], Train Loss: 0.0494, Test Loss: 4.6145, Test Accuracy: 21.43% Epoch [15/100], Train Loss: 0.0487, Test Loss: 4.6528, Test Accuracy: 19.05% Epoch [16/100], Train Loss: 0.0479, Test Loss: 5.1164, Test Accuracy: 14.29% Epoch [17/100], Train Loss: 0.0462, Test Loss: 4.9214, Test Accuracy: 19.05% Epoch [18/100], Train Loss: 0.0440, Test Loss: 4.5749, Test Accuracy: 21.43% Epoch [19/100], Train Loss: 0.0422, Test Loss: 4.7251, Test Accuracy: 21.43% Epoch [20/100], Train Loss: 0.0421, Test Loss: 5.0658, Test Accuracy: 19.05% Epoch [21/100], Train Loss: 0.0402, Test Loss: 4.8625, Test Accuracy: 21.43%
Top 10 Models (by accuracy): Rank 1: Accuracy = 35.71%, Model Path = /home/rnjsxodhks/code/UGRP/ViT with Crawling top models survey/model_epoch_5_accuracy_35.71.pth Rank 2: Accuracy = 21.43%, Model Path = /home/rnjsxodhks/code/UGRP/ViT with Crawling top models survey/model_epoch_8_accuracy_21.43.pth Rank 3: Accuracy = 21.43%, Model Path = /home/rnjsxodhks/code/UGRP/ViT with Crawling top models survey/model_epoch_9_accuracy_21.43.pth Rank 4: Accuracy = 21.43%, Model Path = /home/rnjsxodhks/code/UGRP/ViT with Crawling top models survey/model_epoch_14_accuracy_21.43.pth Rank 5: Accuracy = 21.43%, Model Path = /home/rnjsxodhks/code/UGRP/ViT with Crawling top models survey/model_epoch_18_accuracy_21.43.pth Rank 6: Accuracy = 21.43%, Model Path = /home/rnjsxodhks/code/UGRP/ViT with Crawling top models survey/model_epoch_19_accuracy_21.43.pth Rank 7: Accuracy = 21.43%, Model Path = /home/rnjsxodhks/code/UGRP/ViT with Crawling top models survey/model_epoch_21_accuracy_21.43.pth Rank 8: Accuracy = 19.05%, Model Path = /home/rnjsxodhks/code/UGRP/ViT with Crawling top models survey/model_epoch_6_accuracy_19.05.pth Rank 9: Accuracy = 19.05%, Model Path = /home/rnjsxodhks/code/UGRP/ViT with Crawling top models survey/model_epoch_7_accuracy_19.05.pth Rank 10: Accuracy = 19.05%, Model Path = /home/rnjsxodhks/code/UGRP/ViT with Crawling top models survey/model_epoch_10_accuracy_19.05.pth Epoch [22/100], Train Loss: 0.0380, Test Loss: 5.0102, Test Accuracy: 23.81% Epoch [23/100], Train Loss: 0.0378, Test Loss: 5.2412, Test Accuracy: 19.05% Epoch [24/100], Train Loss: 0.0369, Test Loss: 5.0292, Test Accuracy: 21.43% Epoch [25/100], Train Loss: 0.0375, Test Loss: 5.0925, Test Accuracy: 19.05% Epoch [26/100], Train Loss: 0.0360, Test Loss: 5.1585, Test Accuracy: 21.43% Epoch [27/100], Train Loss: 0.0358, Test Loss: 5.0841, Test Accuracy: 19.05% Epoch [28/100], Train Loss: 0.0353, Test Loss: 4.9480, Test Accuracy: 23.81% Epoch [29/100], Train Loss: 0.0365, Test Loss: 5.1721, Test Accuracy: 21.43% Epoch [30/100], Train Loss: 0.0362, Test Loss: 5.1561, Test Accuracy: 23.81% Epoch [31/100], Train Loss: 0.0346, Test Loss: 5.2556, Test Accuracy: 19.05%
Top 10 Models (by accuracy): Rank 1: Accuracy = 35.71%, Model Path = /home/rnjsxodhks/code/UGRP/ViT with Crawling top models survey/model_epoch_5_accuracy_35.71.pth Rank 2: Accuracy = 23.81%, Model Path = /home/rnjsxodhks/code/UGRP/ViT with Crawling top models survey/model_epoch_22_accuracy_23.81.pth Rank 3: Accuracy = 23.81%, Model Path = /home/rnjsxodhks/code/UGRP/ViT with Crawling top models survey/model_epoch_28_accuracy_23.81.pth Rank 4: Accuracy = 23.81%, Model Path = /home/rnjsxodhks/code/UGRP/ViT with Crawling top models survey/model_epoch_30_accuracy_23.81.pth Rank 5: Accuracy = 21.43%, Model Path = /home/rnjsxodhks/code/UGRP/ViT with Crawling top models survey/model_epoch_8_accuracy_21.43.pth Rank 6: Accuracy = 21.43%, Model Path = /home/rnjsxodhks/code/UGRP/ViT with Crawling top models survey/model_epoch_9_accuracy_21.43.pth Rank 7: Accuracy = 21.43%, Model Path = /home/rnjsxodhks/code/UGRP/ViT with Crawling top models survey/model_epoch_14_accuracy_21.43.pth Rank 8: Accuracy = 21.43%, Model Path = /home/rnjsxodhks/code/UGRP/ViT with Crawling top models survey/model_epoch_18_accuracy_21.43.pth Rank 9: Accuracy = 21.43%, Model Path = /home/rnjsxodhks/code/UGRP/ViT with Crawling top models survey/model_epoch_19_accuracy_21.43.pth Rank 10: Accuracy = 21.43%, Model Path = /home/rnjsxodhks/code/UGRP/ViT with Crawling top models survey/model_epoch_21_accuracy_21.43.pth Epoch [32/100], Train Loss: 0.0349, Test Loss: 5.5148, Test Accuracy: 23.81% Epoch [33/100], Train Loss: 0.0340, Test Loss: 5.3969, Test Accuracy: 21.43% Epoch [34/100], Train Loss: 0.2909, Test Loss: 2.5258, Test Accuracy: 4.76% Epoch [35/100], Train Loss: 1.6083, Test Loss: 1.9889, Test Accuracy: 7.14% Epoch [36/100], Train Loss: 1.6194, Test Loss: 2.0071, Test Accuracy: 7.14% Epoch [37/100], Train Loss: 1.6199, Test Loss: 2.0256, Test Accuracy: 7.14% Epoch [38/100], Train Loss: 1.6152, Test Loss: 1.9641, Test Accuracy: 7.14% Epoch [39/100], Train Loss: 1.6158, Test Loss: 2.0246, Test Accuracy: 7.14% Epoch [40/100], Train Loss: 1.6149, Test Loss: 1.9889, Test Accuracy: 7.14% Epoch [41/100], Train Loss: 1.6117, Test Loss: 2.0890, Test Accuracy: 7.14%
Top 10 Models (by accuracy): Rank 1: Accuracy = 35.71%, Model Path = /home/rnjsxodhks/code/UGRP/ViT with Crawling top models survey/model_epoch_5_accuracy_35.71.pth Rank 2: Accuracy = 23.81%, Model Path = /home/rnjsxodhks/code/UGRP/ViT with Crawling top models survey/model_epoch_22_accuracy_23.81.pth Rank 3: Accuracy = 23.81%, Model Path = /home/rnjsxodhks/code/UGRP/ViT with Crawling top models survey/model_epoch_28_accuracy_23.81.pth Rank 4: Accuracy = 23.81%, Model Path = /home/rnjsxodhks/code/UGRP/ViT with Crawling top models survey/model_epoch_30_accuracy_23.81.pth Rank 5: Accuracy = 23.81%, Model Path = /home/rnjsxodhks/code/UGRP/ViT with Crawling top models survey/model_epoch_32_accuracy_23.81.pth Rank 6: Accuracy = 21.43%, Model Path = /home/rnjsxodhks/code/UGRP/ViT with Crawling top models survey/model_epoch_8_accuracy_21.43.pth Rank 7: Accuracy = 21.43%, Model Path = /home/rnjsxodhks/code/UGRP/ViT with Crawling top models survey/model_epoch_9_accuracy_21.43.pth Rank 8: Accuracy = 21.43%, Model Path = /home/rnjsxodhks/code/UGRP/ViT with Crawling top models survey/model_epoch_14_accuracy_21.43.pth Rank 9: Accuracy = 21.43%, Model Path = /home/rnjsxodhks/code/UGRP/ViT with Crawling top models survey/model_epoch_18_accuracy_21.43.pth Rank 10: Accuracy = 21.43%, Model Path = /home/rnjsxodhks/code/UGRP/ViT with Crawling top models survey/model_epoch_19_accuracy_21.43.pth Epoch [42/100], Train Loss: 1.6178, Test Loss: 1.9364, Test Accuracy: 7.14% Epoch [43/100], Train Loss: 1.6015, Test Loss: 2.5079, Test Accuracy: 7.14% Epoch [44/100], Train Loss: 1.6024, Test Loss: 2.4495, Test Accuracy: 4.76% Epoch [45/100], Train Loss: 1.6061, Test Loss: 2.2984, Test Accuracy: 4.76% Epoch [46/100], Train Loss: 1.6330, Test Loss: 2.1480, Test Accuracy: 9.52% Epoch [47/100], Train Loss: 1.6227, Test Loss: 2.0524, Test Accuracy: 7.14% Epoch [48/100], Train Loss: 1.6197, Test Loss: 1.9304, Test Accuracy: 7.14% Epoch [49/100], Train Loss: 1.6173, Test Loss: 1.9269, Test Accuracy: 7.14% Epoch [50/100], Train Loss: 1.6168, Test Loss: 1.9827, Test Accuracy: 7.14% Epoch [51/100], Train Loss: 1.6228, Test Loss: 2.0278, Test Accuracy: 7.14%
Top 10 Models (by accuracy): Rank 1: Accuracy = 35.71%, Model Path = /home/rnjsxodhks/code/UGRP/ViT with Crawling top models survey/model_epoch_5_accuracy_35.71.pth Rank 2: Accuracy = 23.81%, Model Path = /home/rnjsxodhks/code/UGRP/ViT with Crawling top models survey/model_epoch_22_accuracy_23.81.pth Rank 3: Accuracy = 23.81%, Model Path = /home/rnjsxodhks/code/UGRP/ViT with Crawling top models survey/model_epoch_28_accuracy_23.81.pth Rank 4: Accuracy = 23.81%, Model Path = /home/rnjsxodhks/code/UGRP/ViT with Crawling top models survey/model_epoch_30_accuracy_23.81.pth Rank 5: Accuracy = 23.81%, Model Path = /home/rnjsxodhks/code/UGRP/ViT with Crawling top models survey/model_epoch_32_accuracy_23.81.pth Rank 6: Accuracy = 21.43%, Model Path = /home/rnjsxodhks/code/UGRP/ViT with Crawling top models survey/model_epoch_8_accuracy_21.43.pth Rank 7: Accuracy = 21.43%, Model Path = /home/rnjsxodhks/code/UGRP/ViT with Crawling top models survey/model_epoch_9_accuracy_21.43.pth Rank 8: Accuracy = 21.43%, Model Path = /home/rnjsxodhks/code/UGRP/ViT with Crawling top models survey/model_epoch_14_accuracy_21.43.pth Rank 9: Accuracy = 21.43%, Model Path = /home/rnjsxodhks/code/UGRP/ViT with Crawling top models survey/model_epoch_18_accuracy_21.43.pth Rank 10: Accuracy = 21.43%, Model Path = /home/rnjsxodhks/code/UGRP/ViT with Crawling top models survey/model_epoch_19_accuracy_21.43.pth Epoch [52/100], Train Loss: 1.6212, Test Loss: 2.0425, Test Accuracy: 7.14% Epoch [53/100], Train Loss: 1.6182, Test Loss: 1.9116, Test Accuracy: 7.14% Epoch [54/100], Train Loss: 1.6192, Test Loss: 2.0485, Test Accuracy: 7.14% Epoch [55/100], Train Loss: 1.6156, Test Loss: 1.8641, Test Accuracy: 7.14% Epoch [56/100], Train Loss: 1.6179, Test Loss: 1.9567, Test Accuracy: 7.14% Epoch [57/100], Train Loss: 1.6111, Test Loss: 1.9115, Test Accuracy: 7.14% Epoch [58/100], Train Loss: 1.6046, Test Loss: 1.9273, Test Accuracy: 11.90% Epoch [59/100], Train Loss: 1.5900, Test Loss: 2.3081, Test Accuracy: 7.14% Epoch [60/100], Train Loss: 1.6153, Test Loss: 1.9833, Test Accuracy: 7.14% Epoch [61/100], Train Loss: 1.5976, Test Loss: 2.4895, Test Accuracy: 7.14%
Top 10 Models (by accuracy): Rank 1: Accuracy = 35.71%, Model Path = /home/rnjsxodhks/code/UGRP/ViT with Crawling top models survey/model_epoch_5_accuracy_35.71.pth Rank 2: Accuracy = 23.81%, Model Path = /home/rnjsxodhks/code/UGRP/ViT with Crawling top models survey/model_epoch_22_accuracy_23.81.pth Rank 3: Accuracy = 23.81%, Model Path = /home/rnjsxodhks/code/UGRP/ViT with Crawling top models survey/model_epoch_28_accuracy_23.81.pth Rank 4: Accuracy = 23.81%, Model Path = /home/rnjsxodhks/code/UGRP/ViT with Crawling top models survey/model_epoch_30_accuracy_23.81.pth Rank 5: Accuracy = 23.81%, Model Path = /home/rnjsxodhks/code/UGRP/ViT with Crawling top models survey/model_epoch_32_accuracy_23.81.pth Rank 6: Accuracy = 21.43%, Model Path = /home/rnjsxodhks/code/UGRP/ViT with Crawling top models survey/model_epoch_8_accuracy_21.43.pth Rank 7: Accuracy = 21.43%, Model Path = /home/rnjsxodhks/code/UGRP/ViT with Crawling top models survey/model_epoch_9_accuracy_21.43.pth Rank 8: Accuracy = 21.43%, Model Path = /home/rnjsxodhks/code/UGRP/ViT with Crawling top models survey/model_epoch_14_accuracy_21.43.pth Rank 9: Accuracy = 21.43%, Model Path = /home/rnjsxodhks/code/UGRP/ViT with Crawling top models survey/model_epoch_18_accuracy_21.43.pth Rank 10: Accuracy = 21.43%, Model Path = /home/rnjsxodhks/code/UGRP/ViT with Crawling top models survey/model_epoch_19_accuracy_21.43.pth Epoch [62/100], Train Loss: 1.6246, Test Loss: 2.0226, Test Accuracy: 7.14% Epoch [63/100], Train Loss: 1.6148, Test Loss: 2.0086, Test Accuracy: 7.14% Epoch [64/100], Train Loss: 1.6109, Test Loss: 1.9559, Test Accuracy: 7.14% Epoch [65/100], Train Loss: 1.6127, Test Loss: 2.0627, Test Accuracy: 7.14% Epoch [66/100], Train Loss: 1.6076, Test Loss: 2.0722, Test Accuracy: 7.14% Epoch [67/100], Train Loss: 1.6080, Test Loss: 1.8943, Test Accuracy: 7.14% Epoch [68/100], Train Loss: 1.5970, Test Loss: 2.0656, Test Accuracy: 7.14% Epoch [69/100], Train Loss: 1.5998, Test Loss: 2.1303, Test Accuracy: 7.14% Epoch [70/100], Train Loss: 1.5880, Test Loss: 1.8300, Test Accuracy: 33.33% Epoch [71/100], Train Loss: 1.5840, Test Loss: 1.8887, Test Accuracy: 7.14%
Top 10 Models (by accuracy): Rank 1: Accuracy = 35.71%, Model Path = /home/rnjsxodhks/code/UGRP/ViT with Crawling top models survey/model_epoch_5_accuracy_35.71.pth Rank 2: Accuracy = 33.33%, Model Path = /home/rnjsxodhks/code/UGRP/ViT with Crawling top models survey/model_epoch_70_accuracy_33.33.pth Rank 3: Accuracy = 23.81%, Model Path = /home/rnjsxodhks/code/UGRP/ViT with Crawling top models survey/model_epoch_22_accuracy_23.81.pth Rank 4: Accuracy = 23.81%, Model Path = /home/rnjsxodhks/code/UGRP/ViT with Crawling top models survey/model_epoch_28_accuracy_23.81.pth Rank 5: Accuracy = 23.81%, Model Path = /home/rnjsxodhks/code/UGRP/ViT with Crawling top models survey/model_epoch_30_accuracy_23.81.pth Rank 6: Accuracy = 23.81%, Model Path = /home/rnjsxodhks/code/UGRP/ViT with Crawling top models survey/model_epoch_32_accuracy_23.81.pth Rank 7: Accuracy = 21.43%, Model Path = /home/rnjsxodhks/code/UGRP/ViT with Crawling top models survey/model_epoch_8_accuracy_21.43.pth Rank 8: Accuracy = 21.43%, Model Path = /home/rnjsxodhks/code/UGRP/ViT with Crawling top models survey/model_epoch_9_accuracy_21.43.pth Rank 9: Accuracy = 21.43%, Model Path = /home/rnjsxodhks/code/UGRP/ViT with Crawling top models survey/model_epoch_14_accuracy_21.43.pth Rank 10: Accuracy = 21.43%, Model Path = /home/rnjsxodhks/code/UGRP/ViT with Crawling top models survey/model_epoch_18_accuracy_21.43.pth Epoch [72/100], Train Loss: 1.5762, Test Loss: 1.7051, Test Accuracy: 50.00% Epoch [73/100], Train Loss: 1.5557, Test Loss: 1.8310, Test Accuracy: 35.71% Epoch [74/100], Train Loss: 1.5501, Test Loss: 1.7750, Test Accuracy: 45.24% Epoch [75/100], Train Loss: 1.5479, Test Loss: 1.8943, Test Accuracy: 38.10% Epoch [76/100], Train Loss: 1.5219, Test Loss: 1.8631, Test Accuracy: 38.10% Epoch [77/100], Train Loss: 1.5146, Test Loss: 2.0852, Test Accuracy: 21.43% Epoch [78/100], Train Loss: 1.5030, Test Loss: 1.8591, Test Accuracy: 33.33% Epoch [79/100], Train Loss: 1.4821, Test Loss: 2.2685, Test Accuracy: 19.05% Epoch [80/100], Train Loss: 1.4738, Test Loss: 2.5035, Test Accuracy: 14.29% Epoch [81/100], Train Loss: 1.4542, Test Loss: 2.7322, Test Accuracy: 9.52%
Top 10 Models (by accuracy): Rank 1: Accuracy = 50.00%, Model Path = /home/rnjsxodhks/code/UGRP/ViT with Crawling top models survey/model_epoch_72_accuracy_50.00.pth Rank 2: Accuracy = 45.24%, Model Path = /home/rnjsxodhks/code/UGRP/ViT with Crawling top models survey/model_epoch_74_accuracy_45.24.pth Rank 3: Accuracy = 38.10%, Model Path = /home/rnjsxodhks/code/UGRP/ViT with Crawling top models survey/model_epoch_75_accuracy_38.10.pth Rank 4: Accuracy = 38.10%, Model Path = /home/rnjsxodhks/code/UGRP/ViT with Crawling top models survey/model_epoch_76_accuracy_38.10.pth Rank 5: Accuracy = 35.71%, Model Path = /home/rnjsxodhks/code/UGRP/ViT with Crawling top models survey/model_epoch_5_accuracy_35.71.pth Rank 6: Accuracy = 35.71%, Model Path = /home/rnjsxodhks/code/UGRP/ViT with Crawling top models survey/model_epoch_73_accuracy_35.71.pth Rank 7: Accuracy = 33.33%, Model Path = /home/rnjsxodhks/code/UGRP/ViT with Crawling top models survey/model_epoch_70_accuracy_33.33.pth Rank 8: Accuracy = 33.33%, Model Path = /home/rnjsxodhks/code/UGRP/ViT with Crawling top models survey/model_epoch_78_accuracy_33.33.pth Rank 9: Accuracy = 23.81%, Model Path = /home/rnjsxodhks/code/UGRP/ViT with Crawling top models survey/model_epoch_22_accuracy_23.81.pth Rank 10: Accuracy = 23.81%, Model Path = /home/rnjsxodhks/code/UGRP/ViT with Crawling top models survey/model_epoch_28_accuracy_23.81.pth Epoch [82/100], Train Loss: 1.4377, Test Loss: 2.6550, Test Accuracy: 9.52% Epoch [83/100], Train Loss: 1.4121, Test Loss: 2.4003, Test Accuracy: 11.90% Epoch [84/100], Train Loss: 1.4099, Test Loss: 2.1126, Test Accuracy: 35.71% Epoch [85/100], Train Loss: 1.4040, Test Loss: 2.3205, Test Accuracy: 11.90% Epoch [86/100], Train Loss: 1.3911, Test Loss: 2.6614, Test Accuracy: 9.52% Epoch [87/100], Train Loss: 1.3861, Test Loss: 2.7220, Test Accuracy: 4.76% Epoch [88/100], Train Loss: 1.4656, Test Loss: 2.3437, Test Accuracy: 11.90% Epoch [89/100], Train Loss: 1.4279, Test Loss: 2.6180, Test Accuracy: 9.52% Epoch [90/100], Train Loss: 1.4053, Test Loss: 2.4638, Test Accuracy: 9.52% Epoch [91/100], Train Loss: 1.3716, Test Loss: 2.6106, Test Accuracy: 11.90%
Top 10 Models (by accuracy): Rank 1: Accuracy = 50.00%, Model Path = /home/rnjsxodhks/code/UGRP/ViT with Crawling top models survey/model_epoch_72_accuracy_50.00.pth Rank 2: Accuracy = 45.24%, Model Path = /home/rnjsxodhks/code/UGRP/ViT with Crawling top models survey/model_epoch_74_accuracy_45.24.pth Rank 3: Accuracy = 38.10%, Model Path = /home/rnjsxodhks/code/UGRP/ViT with Crawling top models survey/model_epoch_75_accuracy_38.10.pth Rank 4: Accuracy = 38.10%, Model Path = /home/rnjsxodhks/code/UGRP/ViT with Crawling top models survey/model_epoch_76_accuracy_38.10.pth Rank 5: Accuracy = 35.71%, Model Path = /home/rnjsxodhks/code/UGRP/ViT with Crawling top models survey/model_epoch_5_accuracy_35.71.pth Rank 6: Accuracy = 35.71%, Model Path = /home/rnjsxodhks/code/UGRP/ViT with Crawling top models survey/model_epoch_73_accuracy_35.71.pth Rank 7: Accuracy = 35.71%, Model Path = /home/rnjsxodhks/code/UGRP/ViT with Crawling top models survey/model_epoch_84_accuracy_35.71.pth Rank 8: Accuracy = 33.33%, Model Path = /home/rnjsxodhks/code/UGRP/ViT with Crawling top models survey/model_epoch_70_accuracy_33.33.pth Rank 9: Accuracy = 33.33%, Model Path = /home/rnjsxodhks/code/UGRP/ViT with Crawling top models survey/model_epoch_78_accuracy_33.33.pth Rank 10: Accuracy = 23.81%, Model Path = /home/rnjsxodhks/code/UGRP/ViT with Crawling top models survey/model_epoch_22_accuracy_23.81.pth Epoch [92/100], Train Loss: 1.3544, Test Loss: 2.8767, Test Accuracy: 7.14% Epoch [93/100], Train Loss: 1.3365, Test Loss: 2.9999, Test Accuracy: 4.76% Epoch [94/100], Train Loss: 1.3179, Test Loss: 3.4855, Test Accuracy: 11.90% Epoch [95/100], Train Loss: 1.3102, Test Loss: 3.2873, Test Accuracy: 9.52% Epoch [96/100], Train Loss: 1.2849, Test Loss: 2.9020, Test Accuracy: 14.29% Epoch [97/100], Train Loss: 1.2720, Test Loss: 2.9844, Test Accuracy: 11.90% Epoch [98/100], Train Loss: 1.2309, Test Loss: 2.8711, Test Accuracy: 19.05% Epoch [99/100], Train Loss: 1.2233, Test Loss: 3.2766, Test Accuracy: 9.52% Epoch [100/100], Train Loss: 1.2049, Test Loss: 3.0082, Test Accuracy: 19.05% Train and Test loss graph saved to /home/rnjsxodhks/code/UGRP/ViT with Crawling top models survey/train_test_loss_graph.png
model_epoch_19_accuracy_38.10.pth
337051.6KB
[Image: train and test loss graph (train_test_loss_graph.png)]
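The repeated "AssertionError: can only test a child process" messages at the top of the crawling-run log come from DataLoader worker teardown, not from the training loop itself. A minimal workaround sketch (not part of the original run, assuming the datasets and processor have been created as in the run cell further below) is to build the loaders without worker processes, using the create_dataloader helper defined in the script that follows:

# Hedged workaround sketch: no worker processes, so no shutdown assertion is triggered.
train_loader = create_dataloader(train_dataset, processor, batch_size=32, shuffle=True, num_workers=0)
test_loader = create_dataloader(test_dataset, processor, batch_size=32, shuffle=False, num_workers=0)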
 
import torch
import os
import numpy as np
from PIL import Image
from transformers import AutoModelForImageClassification, AutoImageProcessor
from datasets import load_dataset
from torch.utils.data import DataLoader
from torch.optim import AdamW
import torch.nn as nn
from torchvision import transforms
from peft import get_peft_model, LoraConfig
def setup_device():
    """Check for an available GPU and set the device."""
    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
    print("Device:", device)
    return device


def merge_datasets(dataset1, dataset2):
    """Merge two datasets."""
    return dataset1 + dataset2


def load_datasets(train_path, test_path):
    train_dataset = load_dataset(train_path, split="train")
    test_dataset = load_dataset(test_path, split="train")
    return train_dataset, test_dataset


def load_datasets_crawling(train_path, test_path):
    test_dataset = load_dataset(test_path, split="train")
    train_dataset = load_dataset(train_path, split="train").map(lambda x: {"image": x["image"].convert("RGB")})
    return train_dataset, test_dataset


def prepare_model(device, num_labels=6):
    """Initialize the model and image processor."""
    processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224", use_fast=True)
    model = AutoModelForImageClassification.from_pretrained(
        "google/vit-base-patch16-224",
        num_labels=num_labels,
        ignore_mismatched_sizes=True
    ).to(device)
    config = LoraConfig(
        r=8,
        lora_alpha=16,
        lora_dropout=0.1,
        target_modules=["query", "key", "value", "vit.encoder.layer.*.attention.output.dense"],
        bias="none"
    )
    model = get_peft_model(model, config)
    return model, processor


def create_dataloader(dataset, processor, batch_size=32, shuffle=True, num_workers=4):
    """Create a DataLoader."""
    def collate_fn(batch):
        images = [item['image'] for item in batch]
        labels = [item['label'] for item in batch]
        inputs = processor(images=images, return_tensors="pt")
        inputs['labels'] = torch.tensor(labels, dtype=torch.long)
        return inputs
    return DataLoader(dataset, batch_size=batch_size, shuffle=shuffle, collate_fn=collate_fn, num_workers=num_workers)


def create_dataloader_with_augmentation(dataset, processor, batch_size=32, shuffle=True, num_workers=4):
    augmentation = transforms.Compose([
        transforms.RandomHorizontalFlip(),
        transforms.RandomRotation(15),
        transforms.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2, hue=0.1),
        transforms.ToTensor()
    ])
    def collate_fn(batch):
        images = [augmentation(item['image']) for item in batch]
        labels = [item['label'] for item in batch]
        inputs = processor(images=images, return_tensors="pt")
        inputs['labels'] = torch.tensor(labels, dtype=torch.long)
        return inputs
    return DataLoader(dataset, batch_size=batch_size, shuffle=shuffle, collate_fn=collate_fn, num_workers=num_workers)


def add_noise_to_images(images, noise_level=0.1):
    """Add random noise to images."""
    noisy_images = []
    for img in images:
        img_np = np.array(img)
        noise = np.random.normal(0, noise_level, img_np.shape)
        noisy_img = np.clip(img_np + noise, 0, 255).astype(np.uint8)
        noisy_images.append(Image.fromarray(noisy_img))
    return noisy_images


# Applied inside a collate_fn (note: relies on a module-level `processor`,
# which only exists after prepare_model() has been called in the run cell below)
def collate_fn_with_noise(batch):
    images = add_noise_to_images([item['image'] for item in batch])
    labels = [item['label'] for item in batch]
    inputs = processor(images=images, return_tensors="pt")
    inputs['labels'] = torch.tensor(labels, dtype=torch.long)
    return inputs


def evaluate_model(model, data_loader, device, valid_label_indices):
    """Evaluate the model."""
    model.eval()
    correct = 0
    total = 0
    with torch.no_grad():
        for batch in data_loader:
            inputs = {k: v.to(device) for k, v in batch.items()}
            outputs = model(**inputs)
            _, preds = torch.max(outputs.logits, 1)
            for pred, label in zip(preds, inputs['labels']):
                if pred.item() in valid_label_indices:
                    if pred.item() == label.item():
                        correct += 1
                total += 1
    return 100 * correct / total


def save_top_models(epoch, accuracy, model, top_models, directory):
    """Save the best-performing models to the given directory."""
    os.makedirs(directory, exist_ok=True)
    model_filename = f"model_epoch_{epoch + 1}_accuracy_{accuracy:.2f}.pth"
    model_path = os.path.join(directory, model_filename)
    top_models.append((accuracy, model_path))
    top_models = sorted(top_models, key=lambda x: x[0], reverse=True)[:10]
    torch.save(model.state_dict(), model_path)
    if epoch % 10 == 0:
        print("\nTop 10 Models (by accuracy):")
        for i, (acc, path) in enumerate(top_models, 1):
            print(f"Rank {i}: Accuracy = {acc:.2f}%, Model Path = {path}")
    return top_models


def train_model(num_epochs, train_loader, test_loader, model, device, optimizer, criterion, valid_label_indices, directory):
    """Model training loop."""
    top_models = []
    for epoch in range(num_epochs):
        model.train()
        running_loss = 0.0
        for batch in train_loader:
            optimizer.zero_grad()
            inputs = {k: v.to(device) for k, v in batch.items()}
            outputs = model(**inputs)
            loss = outputs.loss
            loss.backward()
            optimizer.step()
            running_loss += loss.item()
        print(f"Epoch [{epoch+1}/{num_epochs}], Loss: {running_loss/len(train_loader):.4f}")
        test_accuracy = evaluate_model(model, test_loader, device, valid_label_indices)
        print(f"Test Accuracy after Epoch {epoch+1}: {test_accuracy:.2f}%")
        top_models = save_top_models(epoch, test_accuracy, model, top_models, directory)
    return top_models


# Main execution function
# (the batch_size, criterion, and optimizer arguments are overridden internally)
def execute_functions(train_url, test_url, batch_size, criterion, optimizer):
    device = setup_device()
    train_dataset, test_dataset = load_datasets(train_url, test_url)
    valid_label_indices = [0, 1, 2, 3, 4, 5]
    model, processor = prepare_model(device)

    # Train-dataset variants
    # combined_dataset = merge_datasets(train_dataset, test_dataset)
    # train_loader = create_dataloader(combined_dataset, processor, batch_size=32, shuffle=True)
    train_loader = create_dataloader(train_dataset, processor, batch_size=32, shuffle=True)
    # train_loader = create_dataloader_with_augmentation(train_dataset, processor, batch_size=32, shuffle=True)
    # train_loader = DataLoader(train_dataset, batch_size=32, shuffle=True, collate_fn=collate_fn_with_noise, num_workers=4)
    test_loader = create_dataloader(test_dataset, processor, batch_size=32, shuffle=False)

    optimizer = AdamW(model.parameters(), lr=1e-4)
    criterion = nn.CrossEntropyLoss()

    # Get the save directory from the user (created relative to the working directory)
    save_directory = input("Enter the directory name to save models: ")
    save_directory = os.path.join(os.getcwd(), save_directory)

    top_models = train_model(
        num_epochs=100,
        train_loader=train_loader,
        test_loader=test_loader,
        model=model,
        device=device,
        optimizer=optimizer,
        criterion=criterion,
        valid_label_indices=valid_label_indices,
        directory=save_directory
    )
    print("Finished Training")
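For reference, a sketch of how execute_functions could be invoked. The dataset identifiers below are placeholders (the actual EmoSet repositories are not shown in this note), and the batch_size, criterion, and optimizer arguments are effectively ignored because the function re-creates them internally:

# Hypothetical call with placeholder dataset ids
execute_functions(
    train_url="<hf-user>/<train-dataset>",   # placeholder
    test_url="<hf-user>/<test-dataset>",     # placeholder
    batch_size=32,
    criterion=None,   # re-created inside as nn.CrossEntropyLoss()
    optimizer=None,   # re-created inside as AdamW(lr=1e-4)
)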
# Run used for the crawling-dataset experiment above
device = setup_device()
train_dataset, test_dataset = load_datasets_crawling(
    "xodhks/crawling-emotions-in-google-train",
    "xodhks/ugrp-survey-test"
)
valid_label_indices = [0, 1, 2, 3, 4, 5]
model, processor = prepare_model(device)

# Train-dataset variants
# combined_dataset = merge_datasets(train_dataset, test_dataset)
# train_loader = create_dataloader(combined_dataset, processor, batch_size=32, shuffle=True)
train_loader = create_dataloader(train_dataset, processor, batch_size=32, shuffle=True)
# train_loader = create_dataloader_with_augmentation(train_dataset, processor, batch_size=32, shuffle=True)
# train_loader = DataLoader(train_dataset, batch_size=32, shuffle=True, collate_fn=collate_fn_with_noise, num_workers=4)
test_loader = create_dataloader(test_dataset, processor, batch_size=32, shuffle=False)

optimizer = AdamW(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# Save directory (created relative to the working directory)
save_directory = "ViT with Crawling top models survey"
save_directory = os.path.join(os.getcwd(), save_directory)

top_models = train_model(
    num_epochs=100,
    train_loader=train_loader,
    test_loader=test_loader,
    model=model,
    device=device,
    optimizer=optimizer,
    criterion=criterion,
    valid_label_indices=valid_label_indices,
    directory=save_directory,
    # title="ViT with Crawling"  # removed: train_model() as defined above does not accept a `title` argument
)
print("Finished Training")
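A minimal sketch (not from the original notebook) of re-loading one of the saved checkpoints for inference. It assumes prepare_model() from the script above is used so that the LoRA-wrapped parameter names line up; the image path is a placeholder, and the checkpoint path is one of the files reported in the log:

from PIL import Image
import torch

device = setup_device()
model, processor = prepare_model(device, num_labels=6)

# Best checkpoint reported in the crawling-run log (epoch 72, 50.00% test accuracy)
ckpt = "ViT with Crawling top models survey/model_epoch_72_accuracy_50.00.pth"
model.load_state_dict(torch.load(ckpt, map_location=device))
model.eval()

image = Image.open("example.jpg").convert("RGB")   # placeholder input image
inputs = processor(images=image, return_tensors="pt").to(device)
with torch.no_grad():
    logits = model(**inputs).logits
print("Predicted label index:", logits.argmax(-1).item())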