Question about Discriminator training in the official PyTorch DCGAN tutorial

KIngJameS233 2021-09-15 13:54:15

https://pytorch.org/tutorials/beginner/dcgan_faces_tutorial.html

# Training Loop
# (Excerpt from the tutorial; the earlier setup cells define netD, netG,
# criterion, optimizerD, optimizerG, dataloader, device, nz, num_epochs,
# real_label, fake_label, fixed_noise, and torchvision.utils as vutils.)

# Lists to keep track of progress
img_list = []
G_losses = []
D_losses = []
iters = 0

print("Starting Training Loop...")
# For each epoch
for epoch in range(num_epochs):
    # For each batch in the dataloader
    for i, data in enumerate(dataloader, 0):

        ############################
        # (1) Update D network: maximize log(D(x)) + log(1 - D(G(z)))
        ###########################
        ## Train with all-real batch
        netD.zero_grad()
        # Format batch
        real_cpu = data[0].to(device)
        b_size = real_cpu.size(0)
        label = torch.full((b_size,), real_label, dtype=torch.float, device=device)
        # Forward pass real batch through D
        output = netD(real_cpu).view(-1)
        # Calculate loss on all-real batch
        errD_real = criterion(output, label)
        # Calculate gradients for D in backward pass
        errD_real.backward()
        D_x = output.mean().item()

        ## Train with all-fake batch
        # Generate batch of latent vectors
        noise = torch.randn(b_size, nz, 1, 1, device=device)
        # Generate fake image batch with G
        fake = netG(noise)
        label.fill_(fake_label)
        # Classify all fake batch with D
        output = netD(fake.detach()).view(-1)
        # Calculate D's loss on the all-fake batch
        errD_fake = criterion(output, label)
        # Calculate the gradients for this batch, accumulated (summed) with previous gradients
        errD_fake.backward()
        D_G_z1 = output.mean().item()
        # Compute error of D as sum over the fake and the real batches
        # Question: how does the line below relate to the parameter update?
        # The errD variable doesn't seem to appear anywhere else.
        errD = errD_real + errD_fake
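        # (Note: errD is referenced again below, but only for logging via
        # errD.item(); the optimizer step relies on the gradients already
        # accumulated by the two backward() calls above, not on errD itself.)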
        # Update D
        optimizerD.step()

        ############################
        # (2) Update G network: maximize log(D(G(z)))
        ###########################
        netG.zero_grad()
        label.fill_(real_label)  # fake labels are real for generator cost
        # Since we just updated D, perform another forward pass of all-fake batch through D
        output = netD(fake).view(-1)
        # Calculate G's loss based on this output
        errG = criterion(output, label)
        # Calculate gradients for G
        errG.backward()
        D_G_z2 = output.mean().item()
        # Update G
        optimizerG.step()

        # Output training stats
        if i % 50 == 0:
            print('[%d/%d][%d/%d]\tLoss_D: %.4f\tLoss_G: %.4f\tD(x): %.4f\tD(G(z)): %.4f / %.4f'
                  % (epoch, num_epochs, i, len(dataloader),
                     errD.item(), errG.item(), D_x, D_G_z1, D_G_z2))

        # Save Losses for plotting later
        G_losses.append(errG.item())
        D_losses.append(errD.item())

        # Check how the generator is doing by saving G's output on fixed_noise
        if (iters % 500 == 0) or ((epoch == num_epochs-1) and (i == len(dataloader)-1)):
            with torch.no_grad():
                fake = netG(fixed_noise).detach().cpu()
            img_list.append(vutils.make_grid(fake, padding=2, normalize=True))

        iters += 1
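
On the question in the comment above: in PyTorch, calling backward() several times accumulates gradients into each parameter's .grad buffer, so by the time optimizerD.step() runs, netD's gradients already equal the gradient of errD_real + errD_fake. The line errD = errD_real + errD_fake itself only feeds the logging below (errD.item() in the print statement and in D_losses). A minimal, self-contained sketch (illustrative names, stock PyTorch only) showing the accumulation:

import torch

# One scalar parameter and two losses, standing in for errD_real / errD_fake.
w = torch.tensor([1.0], requires_grad=True)
opt = torch.optim.SGD([w], lr=0.1)

loss_real = (2 * w).sum()   # d(loss_real)/dw = 2
loss_fake = (3 * w).sum()   # d(loss_fake)/dw = 3

opt.zero_grad()
loss_real.backward()        # w.grad == tensor([2.])
loss_fake.backward()        # accumulates: w.grad == tensor([5.])
print(w.grad)               # tensor([5.]), the gradient of loss_real + loss_fake
opt.step()                  # the update sees the summed gradient

Equivalently, the tutorial could call (errD_real + errD_fake).backward() once; the two separate backward() calls produce the same gradients for netD, since fake is detached in the all-fake pass.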