01. Part 8-1
1-1. Tensor + NumPy + GPU 1 - PyTorch Tensor data types
1-2. Tensor + NumPy + GPU 2 - numpy() & from_numpy()
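A minimal sketch of the Tensor/NumPy conversions and GPU transfer covered in 1-1/1-2 (illustrative values; assumes PyTorch and NumPy are installed):

```python
import numpy as np
import torch

# Tensors carry a dtype; float32 is PyTorch's default for floating-point data.
x = torch.tensor([1.0, 2.0, 3.0])
print(x.dtype)  # torch.float32

# from_numpy() shares memory with the NumPy array (and keeps its dtype, float64 here).
arr = np.array([1.0, 2.0, 3.0])
t = torch.from_numpy(arr)

# numpy() converts back; the tensor must live on the CPU.
back = t.numpy()

# Move to the GPU only if one is available.
device = "cuda" if torch.cuda.is_available() else "cpu"
x_gpu = x.to(device)
```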
2-1. LinearRegression 1 - nn.Module & nn.Parameter
2-2. LinearRegression 2 - PyTorch model-training process & Saving+Loading model params
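A minimal sketch of the nn.Module / nn.Parameter pattern, the training loop, and saving/loading of model parameters covered in 2-1/2-2 (the toy data and file name are illustrative assumptions):

```python
import torch
from torch import nn

class LinearRegressionModel(nn.Module):
    def __init__(self):
        super().__init__()
        # nn.Parameter registers the tensors as learnable parameters.
        self.weight = nn.Parameter(torch.randn(1))
        self.bias = nn.Parameter(torch.randn(1))

    def forward(self, x):
        return self.weight * x + self.bias

model = LinearRegressionModel()
loss_fn = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Toy data: y = 0.7 * x + 0.3
X = torch.arange(0, 1, 0.02).unsqueeze(1)
y = 0.7 * X + 0.3

for epoch in range(100):
    model.train()
    y_pred = model(X)
    loss = loss_fn(y_pred, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Save and load only the learned parameters (the state_dict).
torch.save(model.state_dict(), "linear_model.pth")
model.load_state_dict(torch.load("linear_model.pth"))
```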
3-1. Binary Classification 1 - nn.Sequential & nn.Sigmoid
3-2. Binary Classification 2 - nn.Sequential & nn.BCEWithLogitsLoss
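A minimal sketch of an nn.Sequential binary classifier trained with nn.BCEWithLogitsLoss (which folds the sigmoid into the loss), as in 3-1/3-2; the toy data and layer sizes are illustrative:

```python
import torch
from torch import nn

model = nn.Sequential(
    nn.Linear(2, 8),
    nn.ReLU(),
    nn.Linear(8, 1),  # one raw logit per sample; no final nn.Sigmoid needed
)
loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

X = torch.randn(100, 2)
y = (X.sum(dim=1) > 0).float().unsqueeze(1)  # toy binary labels

for epoch in range(100):
    logits = model(X)
    loss = loss_fn(logits, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# At prediction time, apply sigmoid explicitly to turn logits into probabilities.
probs = torch.sigmoid(model(X))
preds = (probs > 0.5).float()
```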
4-1. Multi-class Classification 1 - nn.CrossEntropyLoss
4-2. Multi-class Classification 2 - TorchMetrics & Non-linearity
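A minimal sketch of a multi-class setup with nn.CrossEntropyLoss, a non-linearity between the linear layers, and a TorchMetrics accuracy metric, as in 4-1/4-2 (assumes torchmetrics is installed; data and sizes are illustrative):

```python
import torch
from torch import nn
from torchmetrics.classification import MulticlassAccuracy

NUM_CLASSES = 3
model = nn.Sequential(
    nn.Linear(2, 16),
    nn.ReLU(),                   # non-linearity between the linear layers
    nn.Linear(16, NUM_CLASSES),  # raw logits; CrossEntropyLoss applies softmax internally
)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
accuracy = MulticlassAccuracy(num_classes=NUM_CLASSES)

X = torch.randn(300, 2)
y = torch.randint(0, NUM_CLASSES, (300,))  # class indices, not one-hot

for epoch in range(50):
    logits = model(X)
    loss = loss_fn(logits, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(accuracy(logits.argmax(dim=1), y))
```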
5-1. Classification & Regression Summary 1 - Classification (Titanic)
5-2. Classification & Regression Summary 2 - Regression (Boston house price)
02. Part 8-2
6-1. TorchVision & DataLoader 1 - Fashion-MNIST & DataLoader
6-2. TorchVision & DataLoader 2 - nn.Flatten & DataLoader for mini-batch
6-3. TorchVision & DataLoader 3 - nn.Conv2d & nn.MaxPool2d
6-4. TorchVision & DataLoader 4 - Model performance comparison & prediction visualization
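A minimal sketch of loading Fashion-MNIST from torchvision and feeding a small nn.Conv2d + nn.MaxPool2d + nn.Flatten model with a mini-batch DataLoader, as in 6-1 through 6-3 (layer sizes are illustrative):

```python
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

train_data = datasets.FashionMNIST(
    root="data", train=True, download=True, transform=transforms.ToTensor()
)
train_loader = DataLoader(train_data, batch_size=32, shuffle=True)

model = nn.Sequential(
    nn.Conv2d(1, 10, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(kernel_size=2),  # 28x28 -> 14x14
    nn.Flatten(),                 # flatten feature maps into a vector
    nn.Linear(10 * 14 * 14, 10),  # 10 Fashion-MNIST classes
)

images, labels = next(iter(train_loader))
print(images.shape, model(images).shape)  # [32, 1, 28, 28] -> [32, 10]
```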
7-1. CNN with ImageFolder & DataLoader 1 - How to use ImageFolder
7-2. CNN with ImageFolder & DataLoader 2 - CNN without data-augmentation
7-3. CNN with ImageFolder & DataLoader 3 - TrivialAugment
7-4. CNN with ImageFolder & DataLoader 4 - CNN with data-augmentation
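A minimal sketch of torchvision's ImageFolder combined with TrivialAugmentWide for data augmentation, as in 7-1 through 7-4 (the "data/train" directory layout, one sub-folder per class, is an assumption):

```python
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

train_transform = transforms.Compose([
    transforms.Resize((64, 64)),
    transforms.TrivialAugmentWide(),  # TrivialAugment policy for data augmentation
    transforms.ToTensor(),
])

train_data = datasets.ImageFolder(root="data/train", transform=train_transform)
train_loader = DataLoader(train_data, batch_size=32, shuffle=True)

print(train_data.classes)  # class names inferred from the sub-folder names
```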
8. Converting source code to modules
9-1. Transfer-learning 1 - Introduction to torchvision.models
9-2. Transfer-learning 2 - Loading the pre-trained weights
9-3. Transfer-learning 3 - Loading the pre-trained model
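A minimal sketch of loading pretrained weights from torchvision.models and swapping the classifier head for transfer learning, as in 9-1 through 9-3 (assumes torchvision >= 0.13; the 3-class head is an illustrative assumption):

```python
from torch import nn
from torchvision import models

weights = models.EfficientNet_B0_Weights.DEFAULT  # pretrained ImageNet weights
model = models.efficientnet_b0(weights=weights)

# Freeze the feature extractor so only the new head is trained.
for param in model.features.parameters():
    param.requires_grad = False

# Replace the classifier head to match the target number of classes.
model.classifier = nn.Sequential(
    nn.Dropout(p=0.2),
    nn.Linear(in_features=1280, out_features=3),
)
```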
03. Part 8-3
10-1. Tracking multiple experiments 1 - Introduction to Tensorboard
10-2. Tracking multiple experiments 2 - Tensorboard for EfficientNet
10-3. Tracking multiple experiments 3 - Prepare the multiple experiments
10-4. Tracking multiple experiments 4 - Run the experiments & Compare the results
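A minimal sketch of logging metrics with TensorBoard's SummaryWriter so several runs can be compared side by side, as in 10-1 through 10-4 (assumes tensorboard is installed; the log-directory name and metric values are placeholders):

```python
from torch.utils.tensorboard import SummaryWriter

# One writer (one log directory) per experiment keeps the runs separate.
writer = SummaryWriter(log_dir="runs/effnetb0_10_epochs")

for epoch in range(10):
    train_loss = 0.5 / (epoch + 1)   # placeholder values standing in for real metrics
    test_acc = 0.80 + 0.01 * epoch
    writer.add_scalar("Loss/train", train_loss, epoch)
    writer.add_scalar("Accuracy/test", test_acc, epoch)

writer.close()
# Then compare the runs with: tensorboard --logdir runs
```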
11-1. (Adv) Tracking multiple experiments 1 - EfficientNet_B2 & ConvNeXt_Tiny
11-2. (Adv) Tracking multiple experiments 2 - Run the multiple experiments
11-3. (Adv) Tracking multiple experiments 3 - Compare the results & Predict on samples
11-4. (Adv) Tracking multiple experiments 4 - torch.utils.data.Subset
11-5. (Adv) Tracking multiple experiments 5 - Use the full-dataset to get 99% accuracy
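A minimal sketch of torch.utils.data.Subset, used in 11-4 to experiment on a fraction of the data before switching to the full dataset in 11-5 (the dataset choice here is illustrative):

```python
import torch
from torch.utils.data import Subset
from torchvision import datasets, transforms

full_data = datasets.FashionMNIST(
    root="data", train=True, download=True, transform=transforms.ToTensor()
)

# Keep a random 10% of the samples by index.
num_samples = int(0.1 * len(full_data))
indices = torch.randperm(len(full_data))[:num_samples]
small_data = Subset(full_data, indices.tolist())

print(len(full_data), len(small_data))
```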
12-1. Compare the model performance 1 - Introduction to Vision Transformer
12-2. Compare the model performance 2 - Train a ViT model on the full-dataset
12-3. Compare the model performance 3 - Model size & Inference speed
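A minimal sketch of loading a pretrained Vision Transformer from torchvision and checking its parameter count and on-disk size, along the lines of 12-1 through 12-3 (assumes torchvision >= 0.13; the saved file name is illustrative):

```python
import pathlib
import torch
from torchvision import models

weights = models.ViT_B_16_Weights.DEFAULT
vit = models.vit_b_16(weights=weights)

# Model size: number of parameters and the size of the saved state_dict.
num_params = sum(p.numel() for p in vit.parameters())
torch.save(vit.state_dict(), "vit_b_16.pth")
size_mb = pathlib.Path("vit_b_16.pth").stat().st_size / (1024 ** 2)
print(f"{num_params:,} parameters, {size_mb:.1f} MB on disk")
```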
04. Part 8-4
13-1. Model deployment 1 - Prepare the dataset with random_split
13-2. Model deployment 2 - Introduction to Gradio
13-3. Model deployment 3 - Check the trained model + Weights & Biases
13-4. Model deployment 4 - Create a Gradio app with PyTorch model
13-5. Model deployment 5 - Deploy the Gradio app with Hugging Face Spaces
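A minimal sketch of a Gradio image-classification demo wrapped around a PyTorch model, as in 13-2/13-4 (assumes gradio is installed; the class names, transform, and tiny placeholder model stand in for the trained model from the course):

```python
import gradio as gr
import torch
from torch import nn
from torchvision import transforms

class_names = ["pizza", "steak", "sushi"]  # illustrative labels

# Placeholder transform and model; in the course these come from the trained model.
transform = transforms.Compose([transforms.Resize((64, 64)), transforms.ToTensor()])
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, len(class_names)))
model.eval()

def predict(img):
    x = transform(img).unsqueeze(0)
    with torch.inference_mode():
        probs = torch.softmax(model(x), dim=1).squeeze()
    return {name: float(probs[i]) for i, name in enumerate(class_names)}

demo = gr.Interface(
    fn=predict,
    inputs=gr.Image(type="pil"),
    outputs=gr.Label(num_top_classes=3),
    title="PyTorch image classifier",
)

demo.launch()  # on Hugging Face Spaces the same app.py is launched automatically
```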
Recent Update: 2023. 02. 16