
Two patterns for building fully connected neural networks in PyTorch

Building a neural network in PyTorch is simple and clear. Here are two patterns I use regularly:

import torch
import torch.nn as nn

First:

class NN(nn.Module):
    def __init__(self):
        super(NN, self).__init__()
        self.model = nn.Sequential(
            nn.Linear(30, 40),
            nn.ReLU(),
            nn.Linear(40, 60),
            nn.Tanh(),
            nn.Linear(60, 10),
            nn.Softmax(dim=1),  # dim=1: normalize over the feature dimension
        )
        # Initialize the first Linear layer after the Sequential is built;
        # the trailing underscore means the operation works in place
        self.model[0].weight.data.uniform_(-3e-3, 3e-3)
        self.model[0].bias.data.uniform_(-1, 1)

    def forward(self, states):
        return self.model(states)

This first pattern puts the whole network into a single Sequential; parameters can then be set after the network is built. self.model[0].weight.data.uniform_(-3e-3, 3e-3) initializes the first Linear layer's weights from a uniform distribution over (-3e-3, 3e-3), and self.model[0].bias.data.uniform_(-1, 1) initializes its bias uniformly over (-1, 1) (note the trailing underscore: uniform_ is the in-place variant).
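If every Linear layer should get the same treatment, indexing them one by one gets tedious. A minimal sketch using the standard Module.apply API (the init_weights helper is just an illustrative name):

def init_weights(m):
    # Give every Linear layer the same uniform initialization
    if isinstance(m, nn.Linear):
        m.weight.data.uniform_(-3e-3, 3e-3)
        m.bias.data.uniform_(-1, 1)

net = NN()
net.model.apply(init_weights)  # apply() visits every submodule recursively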

Second:

class NN1(nn.Module):
    def __init__(self):
        super(NN1, self).__init__()
        self.Linear1 = nn.Linear(30, 40)
        self.Linear1.weight.data.fill_(-0.1)
        # self.Linear1.weight.data.uniform_(-3e-3, 3e-3)
        self.Linear1.bias.data.fill_(-0.1)
        self.layer1 = nn.Sequential(self.Linear1, nn.ReLU())

        self.Linear2 = nn.Linear(40, 60)
        self.layer2 = nn.Sequential(self.Linear2, nn.Tanh())

        self.Linear3 = nn.Linear(60, 10)
        self.layer3 = nn.Sequential(self.Linear3, nn.Softmax(dim=1))

    def forward(self, states):
        # No self.model here, so chain the three layers explicitly
        return self.layer3(self.layer2(self.layer1(states)))

In this pattern, parameters can be set immediately after each linear layer is defined; for the first linear layer that is self.Linear1.weight.data.fill_(-0.1) and self.Linear1.bias.data.fill_(-0.1). Note that forward must chain layer1, layer2, and layer3 explicitly, since NN1 defines no self.model.
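Both classes build the same 30 -> 40 -> 60 -> 10 architecture, so a quick sanity check is to push a random batch through each (a minimal sketch, assuming a recent PyTorch where plain tensors can be fed to modules directly):

x = torch.randn(5, 30)   # a batch of 5 samples with 30 features
print(NN()(x).shape)     # torch.Size([5, 10])
print(NN1()(x).shape)    # torch.Size([5, 10])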

You can inspect what the parameters look like after these definitions:

Net = NN()
print("0:", Net.model[0])
print("weight:", type(Net.model[0].weight))
print("weight:", type(Net.model[0].weight.data))
print("bias", Net.model[0].bias.data)
print("1:", Net.model[1])
# print("weight:", Net.model[1].weight.data)
print("2:", Net.model[2])
print("3:", Net.model[3])
# print(Net.model[-1])

Net1 = NN1()
print(Net1.Linear1.weight.data)

Output:

0: Linear (30 -> 40)
weight: <class 'torch.nn.parameter.Parameter'>
weight: <class 'torch.FloatTensor'>
bias 
-0.6287
-0.6573
-0.0452
 0.9594
-0.7477
 0.1363
-0.1594
-0.1586
 0.0360
 0.7375
 0.2501
-0.1371
 0.8359
-0.9684
-0.3886
 0.7200
-0.3906
 0.4911
 0.8081
-0.5449
 0.9872
 0.2004
 0.0969
-0.9712
 0.0873
 0.4562
-0.4857
-0.6013
 0.1651
 0.3315
-0.7033
-0.7440
 0.6487
 0.9802
-0.5977
 0.3245
 0.7563
 0.5596
 0.2303
-0.3836
[torch.FloatTensor of size 40]

1: ReLU ()
2: Linear (40 -> 60)
3: Tanh ()

-0.1000 -0.1000 -0.1000  ...  -0.1000 -0.1000 -0.1000
-0.1000 -0.1000 -0.1000  ...  -0.1000 -0.1000 -0.1000
-0.1000 -0.1000 -0.1000  ...  -0.1000 -0.1000 -0.1000
          ...             ⋱             ...          
-0.1000 -0.1000 -0.1000  ...  -0.1000 -0.1000 -0.1000
-0.1000 -0.1000 -0.1000  ...  -0.1000 -0.1000 -0.1000
-0.1000 -0.1000 -0.1000  ...  -0.1000 -0.1000 -0.1000
[torch.FloatTensor of size 40x30]


Note that self.Linear1.weight has type torch.nn.parameter.Parameter, i.e. it is registered as a parameter of the network, while self.Linear1.weight.data is the underlying tensor holding the values (a FloatTensor in the old PyTorch release used here; a plain torch.Tensor in current releases).
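To make that distinction concrete, a short sketch (standard PyTorch; in current releases in-place initialization is usually wrapped in torch.no_grad() so autograd does not track it):

net1 = NN1()
w = net1.Linear1.weight
print(isinstance(w, nn.Parameter))  # True: registered as a learnable parameter
print(type(w.data))                 # the raw tensor holding the values

with torch.no_grad():
    w.uniform_(-0.1, 0.1)  # in-place re-initialization, hidden from autograd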