From 2d20c734c1fdab75fae3ddd8c7b1a1b2acc158a0 Mon Sep 17 00:00:00 2001
From: keon
Date: Tue, 22 May 2018 11:48:10 -0400
Subject: [PATCH] add cnn notebooks

---
 .../02-neural-network.ipynb                 |   9 +-
 .../03-overfitting-and-regularization.ipynb | 414 ++++++++++++++++++
 .../02-cifar-cnn.ipynb                      |  32 ++
 .../03-torchvision-models.ipynb             |  32 ++
 README.md                                   |  12 +-
 5 files changed, 485 insertions(+), 14 deletions(-)
 create mode 100644 04-Neural-Network-For-Fashion/03-overfitting-and-regularization.ipynb
 create mode 100644 05-CNN-For-Image-Classification/02-cifar-cnn.ipynb
 create mode 100644 05-CNN-For-Image-Classification/03-torchvision-models.ipynb

diff --git a/04-Neural-Network-For-Fashion/02-neural-network.ipynb b/04-Neural-Network-For-Fashion/02-neural-network.ipynb
index f17ce2d..bd7cf82 100644
--- a/04-Neural-Network-For-Fashion/02-neural-network.ipynb
+++ b/04-Neural-Network-For-Fashion/02-neural-network.ipynb
@@ -4,7 +4,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "# 4. Classifying Fashion Items with Deep Learning\n",
+    "# 4.2 Classifying Fashion Items with a Neural Network\n",
     "We classify fashion items using the Fashion MNIST dataset and the artificial neural network covered earlier.\n",
     "\n",
     "This tutorial is based on PyTorch's official tutorial (https://github.com/pytorch/examples/blob/master/mnist/main.py)."
@@ -378,13 +378,6 @@
     "    train(model, device, train_loader, optimizer, epoch)\n",
     "    test(model, device, test_loader)"
    ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {},
-   "outputs": [],
-   "source": []
   }
  ],
  "metadata": {
diff --git a/04-Neural-Network-For-Fashion/03-overfitting-and-regularization.ipynb b/04-Neural-Network-For-Fashion/03-overfitting-and-regularization.ipynb
new file mode 100644
index 0000000..169f4bf
--- /dev/null
+++ b/04-Neural-Network-For-Fashion/03-overfitting-and-regularization.ipynb
@@ -0,0 +1,414 @@
+{
+ "cells": [
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "# 4.3 Overfitting and Regularization\n",
+    "\n",
+    "A machine learning model that fits its training data too closely tends to perform poorly on data it has never seen; this is called overfitting. In this notebook we train a Fashion MNIST classifier and use a regularization technique, dropout, to curb it.\n",
+    "\n",
+    "This tutorial is based on PyTorch's official tutorial (https://github.com/pytorch/examples/blob/master/mnist/main.py).\n",
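+    "\n",
+    "The model below fights overfitting with dropout, which randomly zeroes a fraction of activations during training. As a minimal preview of the call used later in `Net.forward` (the drop probability `p` defaults to 0.5, and the `training` flag switches dropout off at evaluation time):\n",
+    "\n",
+    "```python\n",
+    "x = F.dropout(x, p=0.5, training=self.training)  # same as the call in Net.forward, with the default p spelled out\n",
+    "```"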
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 1,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "import torch\n",
+    "import torchvision\n",
+    "import torch.nn.functional as F\n",
+    "from torch import nn, optim\n",
+    "from torchvision import transforms, datasets"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 2,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "use_cuda = torch.cuda.is_available()\n",
+    "device = torch.device(\"cuda\" if use_cuda else \"cpu\")"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 3,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "batch_size = 64"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 4,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "transform = transforms.Compose([\n",
+    "    transforms.ToTensor(),\n",
+    "    transforms.Normalize((0.1307,), (0.3081,))\n",
+    "])"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Loading the Dataset"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 5,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "trainset = datasets.FashionMNIST(\n",
+    "    root = './.data/',\n",
+    "    train = True,\n",
+    "    download = True,\n",
+    "    transform = transform\n",
+    ")\n",
+    "testset = datasets.FashionMNIST(\n",
+    "    root = './.data/',\n",
+    "    train = False,\n",
+    "    download = True,\n",
+    "    transform = transform\n",
+    ")\n",
+    "\n",
+    "train_loader = torch.utils.data.DataLoader(\n",
+    "    dataset = trainset,\n",
+    "    batch_size = batch_size,\n",
+    "    shuffle = True,\n",
+    ")\n",
+    "test_loader = torch.utils.data.DataLoader(\n",
+    "    dataset = testset,\n",
+    "    batch_size = batch_size,\n",
+    "    shuffle = True,\n",
+    ")"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Training on Fashion MNIST with a Neural Network\n",
+    "\n",
+    "The input `x` has the shape `[batch size, channels, height, width]`.\n",
+    "If you call `x.size()`, you will see `[64, 1, 28, 28]`.\n",
+    "Fashion MNIST images are 28 x 28 pixels with a single grayscale channel,\n",
+    "so the input x has 28 x 28 x 1 = 784 features in total.\n",
+    "\n",
+    "The model we will use is a neural network with three layers."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 6,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "class Net(nn.Module):\n",
+    "    def __init__(self):\n",
+    "        super(Net, self).__init__()\n",
+    "        self.fc1 = nn.Linear(784, 256)\n",
+    "        self.fc2 = nn.Linear(256, 256)\n",
+    "        self.fc3 = nn.Linear(256, 10)\n",
+    "\n",
+    "    def forward(self, x):\n",
+    "        x = x.view(-1, 784)\n",
+    "        x = F.relu(self.fc1(x))\n",
+    "        x = F.relu(self.fc2(x))\n",
+    "        x = F.dropout(x, training=self.training)  # active only while self.training is True\n",
+    "        x = self.fc3(x)\n",
+    "        return x"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Hyperparameters\n",
+    "\n",
+    "The `to()` function moves the model's parameters to the place you specify.\n",
+    "On a single CPU it is usually unnecessary, but to use a GPU you must send the model there with `to(\"cuda\")`.\n",
+    "If you skip this, everything stays on the CPU and you lose the speedup of GPU training.\n",
+    "\n",
+    "As the optimization algorithm we will use `optim.SGD`, which comes built into PyTorch.\n",
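+    "\n",
+    "As an aside (not used in the code below), `optim.SGD` also accepts a `weight_decay` argument that applies L2 regularization to the parameters, another common defense against overfitting. A hypothetical variant of the optimizer defined below:\n",
+    "\n",
+    "```python\n",
+    "# not used in this notebook: same settings as below, plus L2 weight decay\n",
+    "optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.5, weight_decay=1e-4)\n",
+    "```"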
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 7,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "model = Net().to(device)\n",
+    "optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.5)\n",
+    "epochs = 10\n",
+    "log_interval = 100"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Training"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 8,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "def train(model, device, train_loader, optimizer, epoch):\n",
+    "    model.train()\n",
+    "    for batch_idx, (data, target) in enumerate(train_loader):\n",
+    "        data, target = data.to(device), target.to(device)\n",
+    "        optimizer.zero_grad()\n",
+    "        output = model(data)\n",
+    "        loss = F.cross_entropy(output, target)\n",
+    "        loss.backward()\n",
+    "        optimizer.step()\n",
+    "        if batch_idx % log_interval == 0:\n",
+    "            print('Train Epoch: {} [{}/{} ({:.0f}%)]\\tLoss: {:.6f}'.format(\n",
+    "                epoch, batch_idx * len(data), len(train_loader.dataset),\n",
+    "                100. * batch_idx / len(train_loader), loss.item()))"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Testing\n",
+    "\n",
+    "No matter how well training went, a model that performs poorly on real data is useless.\n",
+    "What we really want is not a model optimized for the training data, but one that performs well on data in general.\n",
+    "Performing well on data beyond the training set is called \"generalization\",\n",
+    "and the numeric measure of how well a model adapts to unseen data is called the \"generalization error\".\n",
+    "\n",
+    "To see how well our model generalizes,\n",
+    "and to know when to stop training,\n",
+    "we will measure its performance on the test set at the end of every epoch."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 9,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "def test(model, device, test_loader):\n",
+    "    model.eval()\n",
+    "    test_loss = 0\n",
+    "    correct = 0\n",
+    "    with torch.no_grad():\n",
+    "        for data, target in test_loader:\n",
+    "            data, target = data.to(device), target.to(device)\n",
+    "            output = model(data)\n",
+    "            test_loss += F.cross_entropy(output, target, size_average=False).item()  # sum up batch loss\n",
+    "            pred = output.max(1, keepdim=True)[1]  # get the index of the max log-probability\n",
+    "            correct += pred.eq(target.view_as(pred)).sum().item()\n",
+    "\n",
+    "    test_loss /= len(test_loader.dataset)\n",
+    "    print('\\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\\n'.format(\n",
+    "        test_loss, correct, len(test_loader.dataset),\n",
+    "        100. * correct / len(test_loader.dataset)))"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Running the Code\n",
+    "\n",
+    "Everything is now in place. Let's run the code and check that training actually works!\n",
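+    "\n",
+    "If you wanted the loop to stop on its own once the test loss stops improving, a hypothetical early-stopping sketch could look like this (it assumes `test()` is changed to return the average test loss, which the version above does not do):\n",
+    "\n",
+    "```python\n",
+    "best_loss = float('inf')\n",
+    "patience, bad_epochs = 3, 0\n",
+    "for epoch in range(1, epochs + 1):\n",
+    "    train(model, device, train_loader, optimizer, epoch)\n",
+    "    loss = test(model, device, test_loader)  # assumed to return test_loss\n",
+    "    if loss < best_loss:\n",
+    "        best_loss, bad_epochs = loss, 0\n",
+    "    else:\n",
+    "        bad_epochs += 1\n",
+    "        if bad_epochs >= patience:\n",
+    "            break  # no improvement for `patience` epochs in a row\n",
+    "```"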
+ ] + }, + { + "cell_type": "code", + "execution_count": 10, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Train Epoch: 1 [0/60000 (0%)]\tLoss: 2.297793\n", + "Train Epoch: 1 [6400/60000 (11%)]\tLoss: 0.879420\n", + "Train Epoch: 1 [12800/60000 (21%)]\tLoss: 0.707845\n", + "Train Epoch: 1 [19200/60000 (32%)]\tLoss: 0.491728\n", + "Train Epoch: 1 [25600/60000 (43%)]\tLoss: 0.746743\n", + "Train Epoch: 1 [32000/60000 (53%)]\tLoss: 0.598782\n", + "Train Epoch: 1 [38400/60000 (64%)]\tLoss: 0.523441\n", + "Train Epoch: 1 [44800/60000 (75%)]\tLoss: 0.399858\n", + "Train Epoch: 1 [51200/60000 (85%)]\tLoss: 0.577800\n", + "Train Epoch: 1 [57600/60000 (96%)]\tLoss: 0.614767\n", + "\n", + "Test set: Average loss: 0.5003, Accuracy: 8229/10000 (82%)\n", + "\n", + "Train Epoch: 2 [0/60000 (0%)]\tLoss: 0.451823\n", + "Train Epoch: 2 [6400/60000 (11%)]\tLoss: 0.364500\n", + "Train Epoch: 2 [12800/60000 (21%)]\tLoss: 0.264642\n", + "Train Epoch: 2 [19200/60000 (32%)]\tLoss: 0.635316\n", + "Train Epoch: 2 [25600/60000 (43%)]\tLoss: 0.487202\n", + "Train Epoch: 2 [32000/60000 (53%)]\tLoss: 0.658737\n", + "Train Epoch: 2 [38400/60000 (64%)]\tLoss: 0.532459\n", + "Train Epoch: 2 [44800/60000 (75%)]\tLoss: 0.288026\n", + "Train Epoch: 2 [51200/60000 (85%)]\tLoss: 0.449904\n", + "Train Epoch: 2 [57600/60000 (96%)]\tLoss: 0.482623\n", + "\n", + "Test set: Average loss: 0.4316, Accuracy: 8439/10000 (84%)\n", + "\n", + "Train Epoch: 3 [0/60000 (0%)]\tLoss: 0.331988\n", + "Train Epoch: 3 [6400/60000 (11%)]\tLoss: 0.245480\n", + "Train Epoch: 3 [12800/60000 (21%)]\tLoss: 0.491709\n", + "Train Epoch: 3 [19200/60000 (32%)]\tLoss: 0.395424\n", + "Train Epoch: 3 [25600/60000 (43%)]\tLoss: 0.382458\n", + "Train Epoch: 3 [32000/60000 (53%)]\tLoss: 0.322766\n", + "Train Epoch: 3 [38400/60000 (64%)]\tLoss: 0.301913\n", + "Train Epoch: 3 [44800/60000 (75%)]\tLoss: 0.318479\n", + "Train Epoch: 3 [51200/60000 (85%)]\tLoss: 0.267777\n", + "Train Epoch: 3 [57600/60000 (96%)]\tLoss: 0.416187\n", + "\n", + "Test set: Average loss: 0.4085, Accuracy: 8511/10000 (85%)\n", + "\n", + "Train Epoch: 4 [0/60000 (0%)]\tLoss: 0.347194\n", + "Train Epoch: 4 [6400/60000 (11%)]\tLoss: 0.420034\n", + "Train Epoch: 4 [12800/60000 (21%)]\tLoss: 0.557871\n", + "Train Epoch: 4 [19200/60000 (32%)]\tLoss: 0.157676\n", + "Train Epoch: 4 [25600/60000 (43%)]\tLoss: 0.444332\n", + "Train Epoch: 4 [32000/60000 (53%)]\tLoss: 0.443144\n", + "Train Epoch: 4 [38400/60000 (64%)]\tLoss: 0.518663\n", + "Train Epoch: 4 [44800/60000 (75%)]\tLoss: 0.300901\n", + "Train Epoch: 4 [51200/60000 (85%)]\tLoss: 0.314376\n", + "Train Epoch: 4 [57600/60000 (96%)]\tLoss: 0.365263\n", + "\n", + "Test set: Average loss: 0.3886, Accuracy: 8594/10000 (86%)\n", + "\n", + "Train Epoch: 5 [0/60000 (0%)]\tLoss: 0.258622\n", + "Train Epoch: 5 [6400/60000 (11%)]\tLoss: 0.323684\n", + "Train Epoch: 5 [12800/60000 (21%)]\tLoss: 0.242623\n", + "Train Epoch: 5 [19200/60000 (32%)]\tLoss: 0.304207\n", + "Train Epoch: 5 [25600/60000 (43%)]\tLoss: 0.430161\n", + "Train Epoch: 5 [32000/60000 (53%)]\tLoss: 0.380617\n", + "Train Epoch: 5 [38400/60000 (64%)]\tLoss: 0.453532\n", + "Train Epoch: 5 [44800/60000 (75%)]\tLoss: 0.639302\n", + "Train Epoch: 5 [51200/60000 (85%)]\tLoss: 0.345414\n", + "Train Epoch: 5 [57600/60000 (96%)]\tLoss: 0.440118\n", + "\n", + "Test set: Average loss: 0.3828, Accuracy: 8623/10000 (86%)\n", + "\n", + "Train Epoch: 6 [0/60000 (0%)]\tLoss: 0.297733\n", + "Train Epoch: 6 [6400/60000 (11%)]\tLoss: 0.313359\n", + 
"Train Epoch: 6 [12800/60000 (21%)]\tLoss: 0.296536\n", + "Train Epoch: 6 [19200/60000 (32%)]\tLoss: 0.194678\n", + "Train Epoch: 6 [25600/60000 (43%)]\tLoss: 0.354127\n", + "Train Epoch: 6 [32000/60000 (53%)]\tLoss: 0.360125\n", + "Train Epoch: 6 [38400/60000 (64%)]\tLoss: 0.367713\n", + "Train Epoch: 6 [44800/60000 (75%)]\tLoss: 0.301126\n", + "Train Epoch: 6 [51200/60000 (85%)]\tLoss: 0.361885\n", + "Train Epoch: 6 [57600/60000 (96%)]\tLoss: 0.286500\n", + "\n", + "Test set: Average loss: 0.3586, Accuracy: 8701/10000 (87%)\n", + "\n", + "Train Epoch: 7 [0/60000 (0%)]\tLoss: 0.276611\n", + "Train Epoch: 7 [6400/60000 (11%)]\tLoss: 0.427675\n", + "Train Epoch: 7 [12800/60000 (21%)]\tLoss: 0.251460\n", + "Train Epoch: 7 [19200/60000 (32%)]\tLoss: 0.242846\n", + "Train Epoch: 7 [25600/60000 (43%)]\tLoss: 0.379731\n", + "Train Epoch: 7 [32000/60000 (53%)]\tLoss: 0.467081\n", + "Train Epoch: 7 [38400/60000 (64%)]\tLoss: 0.359526\n", + "Train Epoch: 7 [44800/60000 (75%)]\tLoss: 0.286382\n", + "Train Epoch: 7 [51200/60000 (85%)]\tLoss: 0.251880\n", + "Train Epoch: 7 [57600/60000 (96%)]\tLoss: 0.212195\n", + "\n", + "Test set: Average loss: 0.3587, Accuracy: 8714/10000 (87%)\n", + "\n", + "Train Epoch: 8 [0/60000 (0%)]\tLoss: 0.289801\n", + "Train Epoch: 8 [6400/60000 (11%)]\tLoss: 0.317002\n", + "Train Epoch: 8 [12800/60000 (21%)]\tLoss: 0.265676\n", + "Train Epoch: 8 [19200/60000 (32%)]\tLoss: 0.374449\n", + "Train Epoch: 8 [25600/60000 (43%)]\tLoss: 0.283620\n", + "Train Epoch: 8 [32000/60000 (53%)]\tLoss: 0.423949\n", + "Train Epoch: 8 [38400/60000 (64%)]\tLoss: 0.340573\n", + "Train Epoch: 8 [44800/60000 (75%)]\tLoss: 0.410054\n", + "Train Epoch: 8 [51200/60000 (85%)]\tLoss: 0.416204\n", + "Train Epoch: 8 [57600/60000 (96%)]\tLoss: 0.374269\n", + "\n", + "Test set: Average loss: 0.3445, Accuracy: 8747/10000 (87%)\n", + "\n", + "Train Epoch: 9 [0/60000 (0%)]\tLoss: 0.291795\n", + "Train Epoch: 9 [6400/60000 (11%)]\tLoss: 0.330994\n", + "Train Epoch: 9 [12800/60000 (21%)]\tLoss: 0.228712\n", + "Train Epoch: 9 [19200/60000 (32%)]\tLoss: 0.232579\n", + "Train Epoch: 9 [25600/60000 (43%)]\tLoss: 0.269398\n", + "Train Epoch: 9 [32000/60000 (53%)]\tLoss: 0.231449\n", + "Train Epoch: 9 [38400/60000 (64%)]\tLoss: 0.278414\n", + "Train Epoch: 9 [44800/60000 (75%)]\tLoss: 0.235476\n", + "Train Epoch: 9 [51200/60000 (85%)]\tLoss: 0.263410\n", + "Train Epoch: 9 [57600/60000 (96%)]\tLoss: 0.325505\n", + "\n", + "Test set: Average loss: 0.3491, Accuracy: 8738/10000 (87%)\n", + "\n", + "Train Epoch: 10 [0/60000 (0%)]\tLoss: 0.255661\n", + "Train Epoch: 10 [6400/60000 (11%)]\tLoss: 0.358245\n", + "Train Epoch: 10 [12800/60000 (21%)]\tLoss: 0.256200\n", + "Train Epoch: 10 [19200/60000 (32%)]\tLoss: 0.244674\n", + "Train Epoch: 10 [25600/60000 (43%)]\tLoss: 0.242888\n", + "Train Epoch: 10 [32000/60000 (53%)]\tLoss: 0.384486\n", + "Train Epoch: 10 [38400/60000 (64%)]\tLoss: 0.199211\n", + "Train Epoch: 10 [44800/60000 (75%)]\tLoss: 0.213740\n", + "Train Epoch: 10 [51200/60000 (85%)]\tLoss: 0.239941\n", + "Train Epoch: 10 [57600/60000 (96%)]\tLoss: 0.319814\n", + "\n", + "Test set: Average loss: 0.3401, Accuracy: 8738/10000 (87%)\n", + "\n" + ] + } + ], + "source": [ + "for epoch in range(1, epochs + 1):\n", + " train(model, device, train_loader, optimizer, epoch)\n", + " test(model, device, test_loader)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3", + "language": "python", 
+ "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.5.2" + } + }, + "nbformat": 4, + "nbformat_minor": 2 +} diff --git a/05-CNN-For-Image-Classification/02-cifar-cnn.ipynb b/05-CNN-For-Image-Classification/02-cifar-cnn.ipynb new file mode 100644 index 0000000..9e2543a --- /dev/null +++ b/05-CNN-For-Image-Classification/02-cifar-cnn.ipynb @@ -0,0 +1,32 @@ +{ + "cells": [ + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.5.2" + } + }, + "nbformat": 4, + "nbformat_minor": 2 +} diff --git a/05-CNN-For-Image-Classification/03-torchvision-models.ipynb b/05-CNN-For-Image-Classification/03-torchvision-models.ipynb new file mode 100644 index 0000000..9e2543a --- /dev/null +++ b/05-CNN-For-Image-Classification/03-torchvision-models.ipynb @@ -0,0 +1,32 @@ +{ + "cells": [ + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.5.2" + } + }, + "nbformat": 4, + "nbformat_minor": 2 +} diff --git a/README.md b/README.md index efed5d0..8b71b19 100644 --- a/README.md +++ b/README.md @@ -23,16 +23,16 @@ * [Hello World] 신경망 모델 구현하기 * [Hello World] 모델 저장, 재사용 4. [딥러닝으로 패션 아이템 구분하기](04-Neural-Network-For-Fashion) - Fashion MNIST 데이터셋과 앞서 배운 인공신경망을 이용하여 패션아이템을 구분해봅니다. - * [개념] Fashion MNIST 데이터셋 설명 - * [프로젝트 1] [Fashion MNIST 학습하기](04-Neural-Network-For-Fashion/4-fasion-mnist.ipynb) + * [개념] [Fashion MNIST 데이터셋 설명](04-Neural-Network-For-Fashion/01-fasion-mnist.ipynb) + * [프로젝트 1] [Fashion MNIST 학습하기](04-Neural-Network-For-Fashion/02-neural-network.ipynb) * [팁] 성능 측정법 알아보기 (Train/Validation/Test) - * [프로젝트 2] Dropout + * [프로젝트 2] [오버피팅과 정규화](04-Neural-Network-For-Fashion/02-overfitting-and-regularization.ipynb) * 더 보기 5. [이미지 인식능력이 탁월한 CNN](05-CNN-For-Image-Classification) * [개념] CNN 기초 - * [프로젝트 1] 모델 구현하기 - * [프로젝트 2] 컬러 데이터셋에 적용하기 - * [팁] 토치비전으로 복잡한 모델 사용하기 + * [프로젝트 1] [CNN 모델 구현하기](05-CNN-For-Image-Classification/01-cnn.ipynb) + * [프로젝트 2] [컬러 데이터셋에 적용하기](05-CNN-For-Image-Classification/02-cifar-cnn.ipynb) + * [팁] [토치비전으로 복잡한 모델 사용하기](05-CNN-For-Image-Classification/03-torcivision-models.ipynb) * 더 보기 6. [신경망 깊게 쌓아보기](06-Getting-Deeper) - CNN의 발전사와 함께 발전된 형태의 모델들을 알아봅니다. * [개념] 복잡한 CNN모델들