Note
Click here to download the full example code
PyTorch: nn
Created On: Dec 03, 2020 | Last Updated: Jun 14, 2022 | Last Verified: Nov 05, 2024
A third order polynomial, trained to predict \(y=\sin(x)\) from \(-\pi\) to \(\pi\) by minimizing squared Euclidean distance.
This implementation uses the nn package from PyTorch to build the network. PyTorch autograd makes it easy to define computational graphs and take gradients, but raw autograd can be a bit too low-level for defining complex neural networks; this is where the nn package can help. The nn package defines a set of Modules, which you can think of as a neural network layer that produces output from input and may hold some trainable weights.
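Concretely, the model below learns coefficients \(a, b, c, d\) of a third order polynomial, and the quantity being minimized is the summed squared error over the \(N = 2000\) sample points (a sketch of the objective, matching the reduction='sum' loss used in the code):
\[
\hat{y}(x) = a + b x + c x^2 + d x^3, \qquad
L = \sum_{i=1}^{N} \bigl(\hat{y}(x_i) - \sin(x_i)\bigr)^2 .
\]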
99 1127.20458984375
199 751.5488891601562
299 502.2117614746094
399 336.67578125
499 226.7469482421875
599 153.72512817382812
699 105.20501708984375
799 72.95524597167969
899 51.5127067565918
999 37.25068664550781
1099 27.761220932006836
1199 21.4445858001709
1299 17.238258361816406
1399 14.436026573181152
1499 12.568317413330078
1599 11.322880744934082
1699 10.491962432861328
1799 9.937276840209961
1899 9.566793441772461
1999 9.319189071655273
Result: y = 0.010401931591331959 + 0.8371671438217163 x + -0.0017945060972124338 x^2 + -0.09054619818925858 x^3
import torch
import math
# Create Tensors to hold input and outputs.
x = torch.linspace(-math.pi, math.pi, 2000)
y = torch.sin(x)
# For this example, the output y is a linear function of (x, x^2, x^3), so
# we can consider it as a linear layer neural network. Let's prepare the
# tensor (x, x^2, x^3).
p = torch.tensor([1, 2, 3])
xx = x.unsqueeze(-1).pow(p)
# In the above code, x.unsqueeze(-1) has shape (2000, 1), and p has shape
# (3,), for this case, broadcasting semantics will apply to obtain a tensor
# of shape (2000, 3)
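# A minimal sanity check of the shapes described above (added, not part of the
# original example): each row of xx is [x_i, x_i**2, x_i**3].
assert xx.shape == (2000, 3)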
# Use the nn package to define our model as a sequence of layers. nn.Sequential
# is a Module which contains other Modules, and applies them in sequence to
# produce its output. The Linear Module computes output from input using a
# linear function, and holds internal Tensors for its weight and bias.
# The Flatten layer flattens the output of the linear layer to a 1D tensor,
# to match the shape of `y`.
model = torch.nn.Sequential(
    torch.nn.Linear(3, 1),
    torch.nn.Flatten(0, 1)
)
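# Shape sketch (added note): xx of shape (2000, 3) -> Linear(3, 1) -> (2000, 1)
# -> Flatten(0, 1) -> (2000,), which matches the shape of y.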
# The nn package also contains definitions of popular loss functions; in this
# case we will use Mean Squared Error (MSE) as our loss function.
loss_fn = torch.nn.MSELoss(reduction='sum')
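# Added note: reduction='sum' sums the squared errors over all 2000 samples
# instead of averaging them (the default reduction='mean'), which is also why a
# very small learning rate such as 1e-6 is used below.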
learning_rate = 1e-6
for t in range(2000):
    # Forward pass: compute predicted y by passing x to the model. Module objects
    # override the __call__ operator so you can call them like functions. When
    # doing so you pass a Tensor of input data to the Module and it produces
    # a Tensor of output data.
    y_pred = model(xx)

    # Compute and print loss. We pass Tensors containing the predicted and true
    # values of y, and the loss function returns a Tensor containing the
    # loss.
    loss = loss_fn(y_pred, y)
    if t % 100 == 99:
        print(t, loss.item())

    # Zero the gradients before running the backward pass.
    model.zero_grad()

    # Backward pass: compute gradient of the loss with respect to all the learnable
    # parameters of the model. Internally, the parameters of each Module are stored
    # in Tensors with requires_grad=True, so this call will compute gradients for
    # all learnable parameters in the model.
    loss.backward()

    # Update the weights using gradient descent. Each parameter is a Tensor, so
    # we can access its gradients like we did before.
    with torch.no_grad():
        for param in model.parameters():
            param -= learning_rate * param.grad
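    # Added note: the in-place update above must run under torch.no_grad();
    # the parameters have requires_grad=True, so modifying them in-place while
    # autograd is tracking operations would raise a RuntimeError.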
# You can access the first layer of `model` like accessing the first item of a list
linear_layer = model[0]
# For the linear layer, its parameters are stored as `weight` and `bias`.
print(f'Result: y = {linear_layer.bias.item()} + {linear_layer.weight[:, 0].item()} x + {linear_layer.weight[:, 1].item()} x^2 + {linear_layer.weight[:, 2].item()} x^3')
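As a quick follow-up (a minimal sketch, assuming the script above has just run and model is still in scope), the fitted polynomial can be compared with the true function at a single test point:
x_test = torch.tensor([math.pi / 4])
xx_test = x_test.unsqueeze(-1).pow(torch.tensor([1, 2, 3]))
with torch.no_grad():
    y_test = model(xx_test)
print(f'model(pi/4) = {y_test.item():.4f}, sin(pi/4) = {math.sin(math.pi / 4):.4f}')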