- Look into datasets (focusing on emotion datasets…)
- Revise the code
- Keep working on the survey when time allows
Datasets
- EMOTIC
    - A dataset of emotions expressed by people in everyday scenes
    - Each annotated person is labeled with 26 categorical emotions (affection, anger, annoyance, anticipation, aversion, confidence, disapproval, disconnection, disquietment, doubt/confusion, embarrassment, engagement, esteem, excitement, fatigue, fear, happiness, pain, peace, pleasure, sadness, sensitivity, suffering, surprise, sympathy, yearning) and 3 continuous dimensions (valence, arousal, dominance)
    - Code repo: https://github.com/rkosti/emotic (sample images in src/various_images.png)
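As a note for later preprocessing, a minimal sketch of how one EMOTIC-style annotation could be represented in code. `EmoticAnnotation` and its field names are hypothetical (not the official EMOTIC tooling), and the continuous dimensions are assumed here to use EMOTIC's 1-10 rating scale:

```python
from dataclasses import dataclass, field

# The 26 EMOTIC categorical emotions listed above.
EMOTIC_CATEGORIES = [
    "Affection", "Anger", "Annoyance", "Anticipation", "Aversion",
    "Confidence", "Disapproval", "Disconnection", "Disquietment",
    "Doubt/Confusion", "Embarrassment", "Engagement", "Esteem",
    "Excitement", "Fatigue", "Fear", "Happiness", "Pain", "Peace",
    "Pleasure", "Sadness", "Sensitivity", "Suffering", "Surprise",
    "Sympathy", "Yearning",
]

@dataclass
class EmoticAnnotation:
    """Hypothetical container for one annotated person in an image."""
    # multi-hot vector over the 26 categorical emotions
    categories: list = field(default_factory=lambda: [0] * len(EMOTIC_CATEGORIES))
    # continuous dimensions (assumed 1-10 scale, midpoint as default)
    valence: float = 5.0
    arousal: float = 5.0
    dominance: float = 5.0

# usage: mark "Happiness" and "Excitement" for one person
ann = EmoticAnnotation(valence=7.5, arousal=6.0, dominance=5.5)
for emo in ("Happiness", "Excitement"):
    ann.categories[EMOTIC_CATEGORIES.index(emo)] = 1
```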

- ArtEmis
    - A dataset for inferring emotions from artworks
    - Each annotation is labeled with one of 8 emotions (amusement, awe, contentment, excitement, anger, disgust, fear, sadness), plus a "something else" option
    - Dataset: ArtEmis Dataset V2.0
    - Paper: "It is Okay to Not be Okay: Overcoming Emotional Bias in Affective Image Captioning by Contrastive Data Collection"
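If these labels end up being used with a classifier like the one below, a small hedged sketch of a name-to-index mapping (variable names are mine, and the "something else" utterances would be filtered out beforehand):

```python
# Hypothetical label mapping for an 8-way ArtEmis emotion classifier.
ARTEMIS_EMOTIONS = [
    "amusement", "awe", "contentment", "excitement",
    "anger", "disgust", "fear", "sadness",
]
EMOTION_TO_INDEX = {name: i for i, name in enumerate(ARTEMIS_EMOTIONS)}

# usage: convert a raw label string to a class index for CrossEntropyLoss
label = EMOTION_TO_INDEX["awe"]  # -> 1
```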

Code Revision
While running the training script, this error kept coming up:

> This probably means that you are not using fork to start your child processes and you have forgotten to use the proper idiom in the main module

Looked into it: on Windows, multiprocessing starts child processes with spawn rather than fork, so every DataLoader worker re-imports the main module; without a guard, the top-level code runs again in each worker and process creation loops indefinitely.

Fix: wrap the entry point in `if __name__ == "__main__":` — a minimal sketch of the idiom first, then the revised code.
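A minimal sketch of the guard idiom in isolation (toy dataset, hypothetical `main()`); the actual revised script follows:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

def main():
    ds = TensorDataset(torch.randn(8, 3), torch.zeros(8, dtype=torch.long))
    # num_workers > 0 starts worker processes; on Windows each worker
    # re-imports this module, so nothing below may run at import time.
    loader = DataLoader(ds, batch_size=4, num_workers=2)
    for x, y in loader:
        print(x.shape, y.shape)

if __name__ == "__main__":  # executed only in the parent process
    main()
```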
```python
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision.transforms as transforms
import torchvision.datasets as datasets
from torch.utils.data import Subset, DataLoader
from torch.utils.data.dataset import Dataset
from PIL import Image
import timm

# Load a pretrained ViT model and its preprocessing transform
def load_vit_model():
    model = timm.create_model('vit_base_patch16_224', pretrained=True)
    preprocess = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize(mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5)),
    ])
    return model, preprocess

class FilteredCIFAR10(Dataset):
    def __init__(self, root, train=True, transform=None, download=False):
        self.cifar10 = datasets.CIFAR10(root=root, train=train, transform=transform, download=download)
        self.data = []
        self.targets = []
        for img, target in zip(self.cifar10.data, self.cifar10.targets):
            if target < 8:  # only keep classes 0-7
                self.data.append(img)
                self.targets.append(target)

    def __len__(self):
        return len(self.data)

    def __getitem__(self, idx):
        img, target = self.data[idx], self.targets[idx]
        img = Image.fromarray(img)
        if self.cifar10.transform:
            img = self.cifar10.transform(img)
        return img, target

class ViTFineTuning(nn.Module):
    def __init__(self, base_model, num_classes):
        super(ViTFineTuning, self).__init__()
        self.base_model = base_model
        self.fc = nn.Linear(self.base_model.head.in_features, num_classes)  # add a new fully connected layer
        self.base_model.head = self.fc  # replace the model's head with the new FC layer

    def forward(self, x):
        x = self.base_model.forward_features(x)
        x = x[:, 0]  # use only the class token (first token)
        x = self.fc(x)
        return x

if __name__ == "__main__":
    # Use the GPU if available
    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

    # Load the ViT model and preprocessing transform
    base_model, preprocess = load_vit_model()
    model = ViTFineTuning(base_model, num_classes=8).to(device)  # num_classes changed to 8

    # Train only some layers and freeze the rest
    for name, param in model.named_parameters():
        if 'head' not in name and 'blocks.10' not in name and 'blocks.11' not in name:
            # freeze all parameters except the last two blocks and the head
            param.requires_grad = False

    criterion = nn.CrossEntropyLoss()
    optimizer = optim.SGD(filter(lambda p: p.requires_grad, model.parameters()), lr=0.01, momentum=0.9)
    scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=7, gamma=0.1)  # learning-rate scheduler

    # Load the data with the preprocessing transform
    train_dataset = FilteredCIFAR10(root='./data', train=True, download=True, transform=preprocess)
    test_dataset = FilteredCIFAR10(root='./data', train=False, download=True, transform=preprocess)

    # Select 100 samples each from the train and test sets
    train_subset = Subset(train_dataset, range(100))
    test_subset = Subset(test_dataset, range(100))

    # Data loaders
    train_loader = DataLoader(train_subset, batch_size=64, shuffle=True, num_workers=2)
    test_loader = DataLoader(test_subset, batch_size=64, shuffle=False, num_workers=2)

    # Training loop (kept simple)
    num_epochs = 20
    for epoch in range(num_epochs):
        model.train()
        for images, labels in train_loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            outputs = model(images)
            loss = criterion(outputs, labels)
            loss.backward()
            optimizer.step()
        scheduler.step()  # step the LR scheduler once per epoch
        print(f'Epoch [{epoch+1}/{num_epochs}], Loss: {loss.item():.4f}')

        # Evaluate test accuracy after each epoch
        model.eval()
        with torch.no_grad():
            correct = 0
            total = 0
            for images, labels in test_loader:
                images, labels = images.to(device), labels.to(device)
                outputs = model(images)
                _, predicted = torch.max(outputs.data, 1)
                total += labels.size(0)
                correct += (predicted == labels).sum().item()
            accuracy = 100 * correct / total
            print(f'Accuracy after epoch {epoch+1}: {accuracy:.2f} %')

    print('Finished Training')

    # Final accuracy
    model.eval()
    with torch.no_grad():
        correct = 0
        total = 0
        for images, labels in test_loader:
            images, labels = images.to(device), labels.to(device)
            outputs = model(images)
            _, predicted = torch.max(outputs.data, 1)
            total += labels.size(0)
            correct += (predicted == labels).sum().item()
        print(f'Final Test Accuracy: {100 * correct / total:.2f} %')
```
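As a quick sanity check on the freezing logic, a small helper (my addition, not part of the original script) that reports how many parameters still receive gradients; calling it right after the freezing loop should show only the head and blocks 10-11 contributing:

```python
def count_trainable(model):
    """Report trainable vs. total parameter counts for a frozen model."""
    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    total = sum(p.numel() for p in model.parameters())
    print(f'Trainable params: {trainable:,} / {total:,} ({100 * trainable / total:.1f} %)')
```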
Results

```
Epoch [1/20], Loss: 2.0976
Accuracy after epoch 1: 54.00 %
Epoch [2/20], Loss: 0.5800
Accuracy after epoch 2: 88.00 %
Epoch [3/20], Loss: 0.0765
Accuracy after epoch 3: 93.00 %
Epoch [4/20], Loss: 0.0103
Accuracy after epoch 4: 92.00 %
Epoch [5/20], Loss: 0.0034
Accuracy after epoch 5: 91.00 %
Epoch [6/20], Loss: 0.0018
Accuracy after epoch 6: 92.00 %
Epoch [7/20], Loss: 0.0004
Accuracy after epoch 7: 91.00 %
Epoch [8/20], Loss: 0.0001
Accuracy after epoch 8: 91.00 %
Epoch [9/20], Loss: 0.0004
Accuracy after epoch 9: 91.00 %
Epoch [10/20], Loss: 0.0001
Accuracy after epoch 10: 91.00 %
Epoch [11/20], Loss: 0.0003
Accuracy after epoch 11: 91.00 %
Epoch [12/20], Loss: 0.0006
Accuracy after epoch 12: 91.00 %
Epoch [13/20], Loss: 0.0007
Accuracy after epoch 13: 91.00 %
Epoch [14/20], Loss: 0.0002
Accuracy after epoch 14: 91.00 %
Epoch [15/20], Loss: 0.0006
Accuracy after epoch 15: 91.00 %
Epoch [16/20], Loss: 0.0006
Accuracy after epoch 16: 91.00 %
Epoch [17/20], Loss: 0.0001
Accuracy after epoch 17: 91.00 %
Epoch [18/20], Loss: 0.0001
Accuracy after epoch 18: 91.00 %
Epoch [19/20], Loss: 0.0001
Accuracy after epoch 19: 91.00 %
Epoch [20/20], Loss: 0.0001
Accuracy after epoch 20: 91.00 %
Finished Training
Final Test Accuracy: 91.00 %
```