Building the simplest basic neural network with PyTorch
Published: 2025-02-05 03:09:03

A simple fully connected neural network built with PyTorch, used here for a regression task.

1. Code implementation

import torch
import torch.nn as nn
import torch.optim as optim

# Generate sample data (regression task)
torch.manual_seed(0)  # set the random seed for reproducibility
X = torch.randn(100, 1)  # 100 samples, 1 feature
y = 3 * X + 2 + torch.randn(100, 1) * 0.1  # linear relation + noise

# Define the network architecture
class SimpleNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.hidden = nn.Linear(1, 10)  # input layer to hidden layer
        self.relu = nn.ReLU()           # activation function
        self.output = nn.Linear(10, 1)  # hidden layer to output layer

    def forward(self, x):
        x = self.hidden(x)
        x = self.relu(x)
        x = self.output(x)
        return x

# Instantiate the model, loss function, and optimizer
model = SimpleNN()
criterion = nn.MSELoss()              # mean squared error loss
optimizer = optim.SGD(model.parameters(), lr=0.01)  # stochastic gradient descent

# Training loop
epochs = 500
for epoch in range(epochs):
    # Forward pass
    predictions = model(X)
    loss = criterion(predictions, y)

    # Backward pass and optimization
    optimizer.zero_grad()  # clear accumulated gradients
    loss.backward()        # compute gradients
    optimizer.step()       # update parameters

    # Print progress every 50 epochs
    if (epoch + 1) % 50 == 0:
        print(f'Epoch [{epoch+1}/{epochs}], Loss: {loss.item():.4f}')

# Test predictions
test_inputs = torch.tensor([[0.5], [1.0], [2.0]], dtype=torch.float32)
with torch.no_grad():  # disable gradient tracking
    predictions = model(test_inputs)
    print("\nTest predictions:")
    for x, pred in zip(test_inputs, predictions):
        print(f"Input: {x.item():.2f} -> Predicted: {pred.item():.2f}")

Output:

Epoch [50/500], Loss: 0.9824
Epoch [100/500], Loss: 0.2654
Epoch [150/500], Loss: 0.1076
Epoch [200/500], Loss: 0.0675
Epoch [250/500], Loss: 0.0531
Epoch [300/500], Loss: 0.0443
Epoch [350/500], Loss: 0.0380
Epoch [400/500], Loss: 0.0325
Epoch [450/500], Loss: 0.0279
Epoch [500/500], Loss: 0.0242

Test predictions:
Input: 0.50 -> Predicted: 3.62
Input: 1.00 -> Predicted: 5.01
Input: 2.00 -> Predicted: 7.86
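Since the data was generated from y = 3x + 2, the predictions above are close to that line (e.g. 5.01 at x = 1.00, versus the true value 5.00). As a quick sanity check, the same data can also be fit with a single nn.Linear(1, 1) layer and no hidden layer, in which case the learned weight and bias can be read off directly and compared to 3 and 2. This is a supplementary sketch using the same synthetic data, not part of the original article:

```python
import torch
import torch.nn as nn
import torch.optim as optim

torch.manual_seed(0)
X = torch.randn(100, 1)
y = 3 * X + 2 + torch.randn(100, 1) * 0.1

model = nn.Linear(1, 1)  # plain linear regression: y = w*x + b
criterion = nn.MSELoss()
optimizer = optim.SGD(model.parameters(), lr=0.1)

for _ in range(500):
    optimizer.zero_grad()
    loss = criterion(model(X), y)
    loss.backward()
    optimizer.step()

# The learned parameters should land near the true values w = 3, b = 2
print(f"weight: {model.weight.item():.2f}, bias: {model.bias.item():.2f}")
```

Because the underlying relation is linear, the hidden layer in SimpleNN is not strictly necessary here; it becomes useful when the target function is nonlinear.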

Code explanation:
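The training loop follows PyTorch's standard three-step pattern: optimizer.zero_grad() clears gradients accumulated from the previous iteration, loss.backward() backpropagates to fill each parameter's .grad, and optimizer.step() applies the update w ← w − lr · grad. As an illustration, here is that SGD update done by hand on a single parameter (a toy sketch, not the library internals):

```python
import torch

# One SGD step by hand on a single parameter: w <- w - lr * grad
w = torch.tensor([1.0], requires_grad=True)
lr = 0.01

loss = (w * 2.0 - 4.0) ** 2   # toy loss, minimized at w = 2
loss.backward()               # fills w.grad; here grad = 4 * (2w - 4) = -8

with torch.no_grad():
    w -= lr * w.grad          # the update optimizer.step() would apply
w.grad.zero_()                # the reset optimizer.zero_grad() performs

print(w.item())               # 1.0 - 0.01 * (-8) = 1.08
```

Forgetting the zero_grad() call makes gradients accumulate across iterations, which is the most common bug in hand-written training loops.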