
Learning PyTorch from Scratch (20): Residual Networks (ResNet)

Residual Networks (ResNet)

ResNet was proposed by Kaiming He et al. in 2015 and won that year's ImageNet competition. It was a milestone: residual networks opened up a new direction for network design.
Where GoogLeNet's idea is to make each layer wider, ResNet's idea is to make the network deeper.

Paper: https://arxiv.org/abs/1512.03385
The paper points out that simply increasing depth does not keep improving performance — the so-called degradation problem. Note that this is not overfitting: the deeper network has higher error even on the *training* set than a shallower one.

This suggests that the deeper network is failing to learn good parameters.

He's answer was the residual structure, also called a shortcut connection:


Previously a stack of layers had to learn the desired mapping directly (input x, output F(x)); with a shortcut it only has to learn the residual F(x) - x. So why is the residual easier to learn?
For an analysis of why residual networks work, see: https://zhuanlan.zhihu.com/p/80226180
There is no single agreed-upon answer yet; I lean toward the ensemble interpretation:

A residual network can be viewed as an ensemble assembled from a collection of paths, where different paths pass through different subsets of layers. Andreas Veit et al. ran a series of lesion-study experiments: at test time they deleted some layers of a residual network (dropping part of the paths), or swapped the order of certain modules (changing the structure, dropping some paths while introducing new ones). The results show that performance varies smoothly with the number of valid paths (no dramatic collapse as the paths change), suggesting that the unrolled paths are somewhat independent and redundant — so the residual network behaves like an ensemble.
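Another intuition, separate from the ensemble view (it comes from follow-up work such as "Identity Mappings in Deep Residual Networks", not from the original paper), is gradient flow. Writing the block's output as the residual plus the identity:

```latex
H(x) = F(x) + x
\qquad\Longrightarrow\qquad
\frac{\partial H}{\partial x} = \frac{\partial F}{\partial x} + I
```

Because of the identity term I, gradients arriving at a block's output always reach its input unattenuated, however small ∂F/∂x is, so very deep stacks remain trainable.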

Model structure

I can't keep up with He's reasoning, and honestly, who cares exactly why it works — deep learning has no shortage of alchemy, so let the researchers figure that one out. We use deep learning to build products and actually raise productivity.
Let's look at the ResNet architecture.

Implementing the residual block

We implement the 34-layer variant from the paper.
Looking carefully at the two figures above, the residual blocks use kernel_size=3 convolutions. The model halves height and width (downsamples) before conv3_1, conv4_1 and conv5_1. For conv2_x the downsampling is done by a max pool with stride=2; everywhere else it is done by a conv2d with stride=2.

ResNet uses a stacking scheme similar to VGG, except that where VGG stacks consecutive convolutions, ResNet stacks consecutive residual blocks. As in VGG, each later stage doubles the channel count relative to the previous one while halving height and width.
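As a sanity check on that halving pattern, we can trace the feature-map size through the network with the standard convolution output formula. This is a standalone plain-Python sketch assuming the paper's 224x224 ImageNet input (the training run below uses 48x48 images instead, which is why the model code uses AdaptiveAvgPool2d):

```python
def conv_out(size, kernel, stride, padding):
    # standard conv/pool output-size formula
    return (size + 2 * padding - kernel) // stride + 1

size = 224                          # input H = W = 224
size = conv_out(size, 7, 2, 3)      # conv1: 7x7, stride 2 -> 112
size = conv_out(size, 3, 2, 1)      # conv2_x max pool: 3x3, stride 2 -> 56
for _ in range(3):                  # conv3_1, conv4_1, conv5_1 each halve H,W
    size = conv_out(size, 3, 2, 1)  # first 3x3 conv of the stage, stride 2
print(size)                         # 7 -> matches the final 512 x 7 x 7 feature maps
```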

The code was not written in one go; to see how it evolved step by step, browse the commit history on GitHub.

Two things to watch in the residual block:

  • the first convolution takes a stride argument to perform the downsampling
  • if the convolutions change the input's shape, the input must go through a 1x1 convolution so the addition still works
class Residual(nn.Module):
    def __init__(self,in_channels,out_channels,stride=1):
        super(Residual,self).__init__()
        self.stride = stride
        self.conv1 = nn.Conv2d(in_channels,out_channels,kernel_size=3,stride=stride,padding=1)
        self.bn1 = nn.BatchNorm2d(out_channels)
        self.relu = nn.ReLU(inplace=True)
        self.conv2 = nn.Conv2d(out_channels,out_channels,kernel_size=3,padding=1)
        self.bn2 = nn.BatchNorm2d(out_channels)

        # if the convolutions change x's shape, e.g. x:[1,64,56,56] --> [1,128,28,28],
        # a 1x1 convolution is needed to bring x to the same shape
        if in_channels != out_channels or stride != 1:  # stride != 1 also changes shape, so it needs the projection too
            self.conv1x1 = nn.Conv2d(in_channels,out_channels,kernel_size=1,stride=stride)
        else:
            self.conv1x1 = None
            

    def forward(self,x):
        # print(x.shape)
        o1 = self.relu(self.bn1(self.conv1(x)))
        # print(o1.shape)
        o2 = self.bn2(self.conv2(o1))
        # print(o2.shape)

        if self.conv1x1:
            x = self.conv1x1(x) 

        out = self.relu(o2 + x)
        return out

After the convolutional stages finish extracting features, each image yields 512 feature maps of size 7x7. Global average pooling turns these into 512 features, and the fully connected layer combines them linearly into num_classes class scores.
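Global average pooling simply averages each feature map down to a single number, so 512 maps give a 512-dimensional vector regardless of the maps' spatial size. A tiny plain-Python illustration (two 2x2 maps standing in for the 512 7x7 ones):

```python
def global_avg_pool(feature_maps):
    # feature_maps: list of 2-D maps (each a list of rows); one scalar per map
    return [sum(sum(row) for row in fmap) / (len(fmap) * len(fmap[0]))
            for fmap in feature_maps]

# two tiny 2x2 "feature maps"
fmaps = [[[1.0, 3.0], [5.0, 7.0]],
         [[2.0, 2.0], [2.0, 2.0]]]
print(global_avg_pool(fmaps))  # [4.0, 2.0]
```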

Now let's implement the 34-layer ResNet.

class ResNet(nn.Module):
    def __init__(self,in_channels,num_classes):
        super(ResNet,self).__init__()
        self.conv1 = nn.Sequential(
            nn.Conv2d(in_channels,64,kernel_size=7,stride=2,padding=3),
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True)
        )

        self.conv2 = nn.Sequential(
            nn.MaxPool2d(kernel_size=3,stride=2,padding=1),
            Residual(64,64),
            Residual(64,64),
            Residual(64,64),
        )

        self.conv3 = nn.Sequential(
            Residual(64,128,stride=2),
            Residual(128,128),
            Residual(128,128),
            Residual(128,128),
            Residual(128,128),
        )

        self.conv4 = nn.Sequential(
            Residual(128,256,stride=2),
            Residual(256,256),
            Residual(256,256),
            Residual(256,256),
            Residual(256,256),
            Residual(256,256),
        )

        self.conv5 = nn.Sequential(
            Residual(256,512,stride=2),
            Residual(512,512),
            Residual(512,512),
        )

        # self.avg_pool = nn.AvgPool2d(kernel_size=7)
        self.avg_pool = nn.AdaptiveAvgPool2d(1) # replaces the fixed-size AvgPool2d so inputs of any size work
        self.fc = nn.Linear(512,num_classes)

    def forward(self,x):
        out = self.conv1(x)
        out = self.conv2(out)
        out = self.conv3(out)
        out = self.conv4(out)
        out = self.conv5(out)
        
        out = self.avg_pool(out)
        out = out.view((x.shape[0],-1))

        out = self.fc(out)

        return out
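A quick tally of where the name "34-layer" comes from, counted the way the paper does (convolutions plus the final fully connected layer; the 1x1 projection shortcuts, BatchNorm and pooling layers are not counted):

```python
blocks = [3, 4, 6, 3]            # residual blocks in conv2..conv5, as in the code above
block_convs = 2 * sum(blocks)    # each Residual block holds two 3x3 convolutions
total = 1 + block_convs + 1      # stem conv1 + block convs + final fc layer
print(total)                     # 34
```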

From here on it's the familiar routine.

Loading the data

batch_size,num_workers=32,2
train_iter,test_iter = learntorch_utils.load_data(batch_size,num_workers,resize=48)
print('load data done,batch_size:%d' % batch_size)

Defining the model

net = ResNet(1,10).cuda()

Defining the loss function

l = nn.CrossEntropyLoss()
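nn.CrossEntropyLoss fuses log-softmax with negative log-likelihood, which is why the network's raw logits go in directly and the model has no softmax layer at the end. The same quantity computed by hand for a single sample (pure Python, just to show the formula):

```python
import math

def cross_entropy(logits, target):
    # log-sum-exp gives the log of the softmax denominator;
    # the loss is that minus the logit of the true class
    log_z = math.log(sum(math.exp(v) for v in logits))
    return log_z - logits[target]

loss = cross_entropy([2.0, 1.0, 0.1], target=0)
print(round(loss, 4))  # 0.417
```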

Defining the optimizer

opt = torch.optim.Adam(net.parameters(),lr=0.01)

Defining the evaluation function

num_epochs=5
def test():
    acc_sum = 0
    sample_count = 0
    with torch.no_grad(): # no gradients needed during evaluation
        for X,y in test_iter:
            X,y = X.cuda(),y.cuda()
            y_hat = net(X)
            acc_sum += (y_hat.argmax(dim=1) == y).float().sum().item()
            sample_count += y.shape[0] # count actual samples so a short final batch is handled correctly

    test_acc = acc_sum/sample_count

    # print('test acc:%f' % test_acc)
    return test_acc

Training

def train():
    for epoch in range(num_epochs):
        train_l_sum,batch,train_acc_sum=0,1,0
        start = time.time()
        for X,y in train_iter:
            X,y = X.cuda(),y.cuda() # move tensors to GPU memory
            y_hat = net(X)  # forward pass
            loss = l(y_hat,y) # compute the loss; nn.CrossEntropyLoss applies softmax internally
            opt.zero_grad() # clear accumulated gradients
            loss.backward() # backward pass: compute gradients
            opt.step() # update parameters from the gradients

            # running statistics
            train_l_sum += loss.item()
            train_acc_sum += (y_hat.argmax(dim=1) == y).float().sum().item()
            train_loss = train_l_sum/(batch*batch_size) # note: loss.item() is already a batch mean, so this divides that mean again by batch_size
            train_acc = train_acc_sum/(batch*batch_size)
            
            if batch % 100 == 0: # log training statistics every 100 batches
                print('epoch %d,batch %d,train_loss %.3f,train_acc:%.3f' % (epoch,batch,train_loss,train_acc))

            if batch % 300 == 0: # run evaluation every 300 batches
                test_acc = test()
                print('epoch %d,batch %d,test_acc:%.3f' % (epoch,batch,test_acc))

            batch += 1

        end = time.time()
        time_per_epoch =  end - start
        print('epoch %d,batch_size %d,train_loss %f,time %f' % 
                (epoch,batch_size ,train_l_sum/(batch*batch_size),time_per_epoch))
        test()

train()

The output:

load data done,batch_size:32
epoch 0,batch 100,train_loss 0.082,train_acc:0.185
epoch 0,batch 200,train_loss 0.065,train_acc:0.297
epoch 0,batch 300,train_loss 0.053,train_acc:0.411
epoch 0,batch 300,test_acc:0.684
epoch 0,batch 400,train_loss 0.046,train_acc:0.487
epoch 0,batch 500,train_loss 0.041,train_acc:0.539
epoch 0,batch 600,train_loss 0.038,train_acc:0.578
epoch 0,batch 600,test_acc:0.763
epoch 0,batch 700,train_loss 0.035,train_acc:0.604
epoch 0,batch 800,train_loss 0.033,train_acc:0.628
epoch 0,batch 900,train_loss 0.031,train_acc:0.647
epoch 0,batch 900,test_acc:0.729
epoch 0,batch 1000,train_loss 0.030,train_acc:0.661
epoch 0,batch 1100,train_loss 0.029,train_acc:0.674
epoch 0,batch 1200,train_loss 0.028,train_acc:0.686
epoch 0,batch 1200,test_acc:0.802
epoch 0,batch 1300,train_loss 0.027,train_acc:0.696