# Deep learning code bug

weixin_43997158 2021-07-16 20:17:48

```python
from keras.models import Sequential  # the Sequential import was missing
from keras.layers import LSTM

model = Sequential()
```

```python
import torch
import torch.nn.functional as F

class Mask(torch.nn.Module):
    def __init__(self):
        super().__init__()  # required; without it submodule registration fails
        self.con = torch.nn.Conv2d(1, 100, kernel_size=(3, 3), stride=(2, 2))
        self.con2 = torch.nn.Conv2d(100, 64, kernel_size=(3, 3))
        self.pool = torch.nn.MaxPool2d(2)
        self.l1 = torch.nn.Linear(1600, 50)
        self.l2 = torch.nn.Linear(50, 2)
        self.dropout = torch.nn.Dropout(p=0.2)

    def forward(self, x):
        batch_size = x.size(0)
        x = self.con(x)
        x = F.relu(x)
        x = self.pool(x)
        x = self.con2(x)
        x = F.relu(x)
        x = self.pool(x)
        x = x.view(batch_size, -1)
        x = self.l1(x)
        x = F.relu(x)
        # x = self.dropout(x)
        x = self.l2(x)
        # note: if the loss is CrossEntropyLoss, this softmax is redundant
        # (and hurts training) because that loss expects raw logits
        x = F.softmax(x, dim=1)
        return x
```

```python
def train():
    running_loss = 0.0
    for batch_idx, (inputs, target) in enumerate(train_Loader):
        inputs = inputs.to(device)
        target = target.to(device)
        optimizer.zero_grad()  # missing in the original; gradients accumulate across batches otherwise
        output = model(inputs.float())  # forward pass
        _, predicts = torch.max(output.data, dim=1)
        print(predicts)
        accuracy = (predicts.cpu() == target.cpu()).sum().item()
        loss = criterion(output, target.long())  # compute loss
        loss.backward()  # backward pass
        optimizer.step()  # update parameters
        running_loss = running_loss + loss.item()  # .item() frees the graph; summing tensors leaks memory
```
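For reference, a minimal self-contained sketch of the two fixes that loops like the one above usually need: calling `optimizer.zero_grad()` every batch, and accumulating the loss as a Python float via `.item()`. The model, data, and names such as `train_loader` here are dummy placeholders, not the poster's actual setup:

```python
import torch

# Dummy stand-ins (hypothetical) for the pieces the post does not show.
model = torch.nn.Linear(4, 2)
criterion = torch.nn.CrossEntropyLoss()  # expects raw logits, so no softmax in forward()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
device = torch.device("cpu")

data = torch.randn(32, 4)
labels = torch.randint(0, 2, (32,))
train_loader = torch.utils.data.DataLoader(
    torch.utils.data.TensorDataset(data, labels), batch_size=8)

def train():
    running_loss = 0.0
    for inputs, target in train_loader:
        inputs, target = inputs.to(device), target.to(device)
        optimizer.zero_grad()        # reset gradients each batch; they accumulate otherwise
        output = model(inputs)       # forward pass (raw logits)
        loss = criterion(output, target)
        loss.backward()              # backward pass
        optimizer.step()             # parameter update
        running_loss += loss.item()  # .item() detaches; summing tensors keeps every graph alive
    return running_loss

print(train())
```

Feeding softmax outputs into `CrossEntropyLoss`, as the posted `forward()` does, applies log-softmax twice and typically stalls training; returning `self.l2(x)` directly is the usual pattern.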

