Deep Learning with Neural Networks (CNN/RNN/GAN): Algorithm Principles + Practice

m0_74210484 2023-06-04 13:15:20

Beginner Tutorial

Installing PyTorch

```
pip install torch
```
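A quick way to confirm the install worked is to import the package and print its version (the CUDA check simply reports whether a GPU build is usable):

```python
import torch

print(torch.__version__)          # the installed PyTorch version string
print(torch.cuda.is_available())  # True only if a CUDA-capable GPU is usable
```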

Creating Tensors

```python
import torch

x = torch.Tensor([[1, 2], [3, 4]])
print(x)
```
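Besides constructing a tensor from nested lists, a few other common constructors are worth knowing (a quick sketch; the variable names are illustrative):

```python
import torch

z = torch.zeros(2, 3)                  # 2x3 tensor filled with zeros
r = torch.rand(2, 3)                   # 2x3 tensor of uniform random values in [0, 1)
a = torch.arange(0, 6).reshape(2, 3)   # values 0..5 reshaped into a 2x3 tensor
print(z.shape, r.shape, a.shape)       # all three are torch.Size([2, 3])
```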

Autograd

```python
import torch

x = torch.tensor([1., 2.], requires_grad=True)
y = x.sum()
y.backward()
print(x.grad)  # tensor([1., 1.])
```
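The example above differentiates a plain sum, so every gradient entry is 1. Autograd is more interesting on a non-linear function; a minimal sketch for y = sum(x²), whose gradient with respect to x is 2x:

```python
import torch

x = torch.tensor([1., 2., 3.], requires_grad=True)
y = (x ** 2).sum()   # y = x1^2 + x2^2 + x3^2
y.backward()         # computes dy/dx = 2x
print(x.grad)        # tensor([2., 4., 6.])
```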

Advanced Tutorial

Defining a Neural Network Model

```python
import torch.nn as nn
import torch.nn.functional as F  # needed for F.relu and F.log_softmax

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.fc1 = nn.Linear(784, 128)
        self.fc2 = nn.Linear(128, 64)
        self.fc3 = nn.Linear(64, 10)

    def forward(self, x):
        x = x.view(-1, 784)  # flatten 28x28 images into 784-dim vectors
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return F.log_softmax(x, dim=1)
```

Training the Neural Network Model

```python
import torch
import torch.nn as nn
import torch.optim as optim
from torchvision import datasets, transforms

# Define data preprocessing
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.1307,), (0.3081,))
])

# Load the dataset
trainset = datasets.MNIST('../data', train=True, download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True)

# Define the model and optimizer (Net is the class from the previous section)
net = Net()
criterion = nn.NLLLoss()  # the model already outputs log-probabilities, so NLLLoss is the matching criterion
optimizer = optim.SGD(net.parameters(), lr=0.01, momentum=0.5)

# Train the model
for epoch in range(10):
    running_loss = 0.0
    for i, data in enumerate(trainloader, 0):
        inputs, labels = data
        optimizer.zero_grad()
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
        if i % 100 == 99:
            print('[%d, %5d] loss: %.3f' % (epoch + 1, i + 1, running_loss / 100))
            running_loss = 0.0
```
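One subtlety in the training loop is the choice of loss function. `nn.CrossEntropyLoss` internally combines `log_softmax` and `nn.NLLLoss`, so it expects raw logits; since `Net.forward` already ends in `F.log_softmax`, care is needed not to apply `log_softmax` twice. A self-contained sketch of the equivalence (the logits and labels here are made-up illustrative values):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# CrossEntropyLoss on raw logits == NLLLoss on log-probabilities
logits = torch.tensor([[2.0, 0.5, 0.1], [0.2, 1.5, 0.3]])
labels = torch.tensor([0, 1])

ce = nn.CrossEntropyLoss()(logits, labels)
nll = nn.NLLLoss()(F.log_softmax(logits, dim=1), labels)
print(torch.allclose(ce, nll))  # True
```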
