
Deep Learning & PyTorch Notes (1): Linear Regression Model

First, define the model:
import torch
import torch.nn as nn
from torch.autograd import Variable
import matplotlib.pyplot as plt

class LinearRegression(nn.Module):
    def __init__(self):
        super(LinearRegression, self).__init__()  # initialize the nn.Module base class
        self.linear = nn.Linear(1, 1)  # one input feature, one output feature

    def forward(self, x):
        out = self.linear(x)
        return out

model = LinearRegression()
The layer and the loss function both inherit from nn.Module; nn.Linear(1, 1) is the linear layer that realizes the regression model (one input feature, one output).
The forward method defines the forward pass. It is invoked from inside nn.Module's __call__ method, so the forward pass can be run directly as model(x).
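The post never shows how x_train and y_train are built, so here is a minimal sketch (the data-generating line y = 2x + 3 and the tensor shapes are assumptions, not from the original); it also shows that model(x) goes through __call__ and into forward:

# assumed synthetic data: y = 2x + 3 plus Gaussian noise, shape (100, 1)
x_train = torch.unsqueeze(torch.linspace(0, 10, 100), dim=1)
y_train = 2 * x_train + 3 + 0.5 * torch.randn(x_train.size())

out = model(x_train)   # nn.Module.__call__ dispatches to forward
print(out.shape)       # torch.Size([100, 1]) -- untrained predictions

nn.Linear(1, 1) expects each row to be one sample with one feature, which is why the tensors are shaped (N, 1).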
criterion = nn.MSELoss()  # mean squared error loss
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)  # plain SGD on the weight and bias
num_epochs = 1000
for epoch in range(num_epochs):
    inputs = Variable(x_train)   # Variable kept from the original; plain tensors also work in modern PyTorch
    target = Variable(y_train)
    out = model(inputs)            # forward pass
    loss = criterion(out, target)  # MSE between prediction and target
    optimizer.zero_grad()          # clear gradients from the previous step
    loss.backward()                # backpropagate
    optimizer.step()               # update the parameters

    # print the current weight and bias
    print(str(model.state_dict()['linear.weight']) + "\t" + str(model.state_dict()['linear.bias']))

    if (epoch + 1) % 20 == 0:
        print('Epoch[{}/{}], loss: {:.6f}'.format(epoch + 1, num_epochs, loss.item()))
model.eval()  # switch to evaluation mode
predict = model(Variable(x_train))
predict = predict.data.numpy()
# a = np.column_stack((x_train, y_train, predict))
# print(a)
plt.plot(x_train.numpy(), y_train.numpy(), 'bo', label='Original data')
plt.plot(x_train.numpy(), predict, 'r-', label='Fitted line')
plt.legend()
plt.show()
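After training, the learned weight and bias should approach the slope and intercept of the training data (with the synthetic data assumed above, roughly 2 and 3, though the bias may need more epochs to get close); a quick check:

# read out the fitted parameters (assumes the synthetic data sketched earlier)
w = model.linear.weight.item()
b = model.linear.bias.item()
print('learned weight: {:.4f}, bias: {:.4f}'.format(w, b))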