Classifying CIFAR-10 with a CNN (PyTorch)
CIFAR-10 contains 60,000 color images of size 32*32, split into 10 classes with 6,000 images per class.
The training set has 50,000 images and the test set has 10,000 images.
First, load the dataset.
import numpy as np
import torch
import torch.optim as optim
from torchvision import datasets
import torchvision.transforms as transforms

# convert images to tensors and normalize each channel from [0, 1] to [-1, 1]
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])

trainset = datasets.CIFAR10(root='./data', train=True,
                            download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,
                                          shuffle=True, num_workers=2)

testset = datasets.CIFAR10(root='./data', train=False,
                           download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=4,
                                         shuffle=False, num_workers=2)
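As a quick sanity check (a small sketch that is not part of the original post), the counts described above can be confirmed directly from the loaded datasets:

# confirm the dataset sizes and image shape described above
print(len(trainset), len(testset))   # 50000 10000
images, labels = next(iter(trainloader))
print(images.shape)                  # torch.Size([4, 3, 32, 32])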
Next, define the network architecture.
import torch.nn.functional as F
import torch.nn as nn

class classifier(nn.Module):
    def __init__(self):
        super().__init__()
        # input: 3*32*32 color image
        self.conv1 = nn.Conv2d(3, 16, 3, padding=1)   # after conv + pool: 16*16*16
        self.conv2 = nn.Conv2d(16, 32, 3, padding=1)  # after conv + pool: 32*8*8
        self.pool = nn.MaxPool2d(2, 2)
        self.fc1 = nn.Linear(32 * 8 * 8, 512)
        self.fc2 = nn.Linear(512, 10)
        self.dropout = nn.Dropout(0.2)  # dropout to reduce overfitting

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = x.view(-1, 32 * 8 * 8)  # flatten the feature maps into a vector
        x = self.dropout(x)
        x = F.relu(self.fc1(x))
        x = self.dropout(x)
        x = self.fc2(x)
        return x
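To double-check the 32 * 8 * 8 flattening size assumed by fc1, a minimal sketch (the random dummy batch below is purely illustrative) can push a fake input through the network:

# pass a dummy batch (batch_size=4, 3x32x32 images) through an untrained model
dummy = torch.randn(4, 3, 32, 32)
net = classifier()
print(net(dummy).shape)  # expected: torch.Size([4, 10]), one logit per class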
Start training!
model = classifier()
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

epochs = 10
for e in range(epochs):
    train_loss = 0
    for data, target in trainloader:
        optimizer.zero_grad()
        output = model(data)
        loss = criterion(output, target)
        loss.backward()
        optimizer.step()
        # loss.item() is the mean loss of the batch, so multiply by the batch size
        # to accumulate the total loss over the epoch
        train_loss += loss.item() * data.size(0)
    train_loss = train_loss / len(trainloader.dataset)
    print('Epoch: {} \t Training Loss:{:.6f}'.format(e+1, train_loss))
Here is the loss printed during training:
Epoch: 1 Training Loss:1.366521
Epoch: 2 Training Loss:1.063830
Epoch: 3 Training Loss:0.916826
Epoch: 4 Training Loss:0.799573
Epoch: 5 Training Loss:0.708303
Epoch: 6 Training Loss:0.627443
Epoch: 7 Training Loss:0.564043
Epoch: 8 Training Loss:0.503542
Epoch: 9 Training Loss:0.465513
Epoch: 10 Training Loss:0.418729
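Before evaluating, the trained weights can be saved and restored later (a minimal sketch; the file name cifar_cnn.pt is just an example and not part of the original post):

# save the trained weights (the path is arbitrary)
torch.save(model.state_dict(), 'cifar_cnn.pt')
# restore them into a freshly constructed model when needed
model = classifier()
model.load_state_dict(torch.load('cifar_cnn.pt'))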
Let's see how the model performs on the test set.
classes = ('plane', 'car', 'bird', 'cat', 'deer',
           'dog', 'frog', 'horse', 'ship', 'truck')

model.eval()  # disable dropout during evaluation
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
with torch.no_grad():
    for data in testloader:
        images, labels = data
        outputs = model(images)
        _, predicted = torch.max(outputs, 1)
        c = (predicted == labels).squeeze()
        for i in range(4):  # batch_size is 4
            label = labels[i]
            class_correct[label] += c[i].item()
            class_total[label] += 1

for i in range(10):
    print('Accuracy of %5s : %2d %%' % (
        classes[i], 100 * class_correct[i] / class_total[i]))
And its output:
Accuracy of plane : 74 %
Accuracy of car : 76 %
Accuracy of bird : 55 %
Accuracy of cat : 56 %
Accuracy of deer : 54 %
Accuracy of dog : 54 %
Accuracy of frog : 81 %
Accuracy of horse : 72 %
Accuracy of ship : 74 %
Accuracy of truck : 68 %
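The same tallies also give an overall test accuracy (a small addition, not part of the original post):

# overall accuracy across all 10 classes, reusing the per-class tallies
print('Overall accuracy: %.2f %%' % (100 * sum(class_correct) / sum(class_total)))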
Reposted from: https://www.cnblogs.com/MartinLwx/p/10549229.html