# PlainUSR: Lightweight Real-Time Image Super-Resolution (RepMBCConv + LIA + PlainU-Net)

## 1. Architecture

```text
LR input (H × W × 3)
  ↓ Bicubic upsample to 4H × 4W
  ↓ PlainU-Net
  ├── Down1: RepMBCConv (3→64) → LIA
  ├── Down2: RepMBCConv (64→128, stride 2) → LIA
  ├── Up:    Bilinear (128→128) + cat(Down1)
  ├── Up1:   RepMBCConv (192→64) → LIA
  └── Up2:   RepMBCConv (64→3) → Tanh
  ↓ HR output (4H × 4W × 3)
```

| Module | Params | Speed (RTX 3060) |
| --- | --- | --- |
| Lightweight conv (RepMBCConv) | ~1.2K / layer | fast (reparameterized fusion) |
| LIA | ~4K | ~0.1 ms |
| PlainU-Net (whole) | ~1.5M | ~2 ms (480p→4K) |
| EDSR | ~43M | ~25 ms |
| SwinIR | ~12M | ~40 ms |

## 2. Environment

```bash
conda create -n plainusr python=3.8 -y
conda activate plainusr
pip install torch torchvision matplotlib opencv-python
```

## 3. Data (DIV2K)

```text
DIV2K/
├── DIV2K_train_HR/   # 800 images
└── DIV2K_valid_HR/   # 100 images
```

```python
import os
import random

import torch
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms
from PIL import Image


class SRDataset(Dataset):
    def __init__(self, hr_dir, scale=4, patch_size=64):
        self.hr_dir = hr_dir
        self.images = sorted(os.listdir(hr_dir))
        self.scale = scale
        self.patch_size = patch_size
        self.to_tensor = transforms.ToTensor()

    def __len__(self):
        return len(self.images)

    def __getitem__(self, idx):
        hr = Image.open(os.path.join(self.hr_dir, self.images[idx])).convert("RGB")
        # Random HR patch of (patch_size × scale) so every sample in a batch
        # has the same shape
        ps = self.patch_size * self.scale
        left = random.randint(0, hr.width - ps)
        top = random.randint(0, hr.height - ps)
        hr = hr.crop((left, top, left + ps, top + ps))
        # Bicubic downsample to get the LR patch; PlainUSR upsamples it back
        # to 4H × 4W internally (see section 4.3)
        lr = hr.resize((self.patch_size, self.patch_size), Image.BICUBIC)
        # Normalize to [-1, 1]
        return self.to_tensor(lr) * 2 - 1, self.to_tensor(hr) * 2 - 1
```

## 4. Model

### 4.1 RepMBCConv (reparameterized lightweight convolution)

```python
import torch.nn as nn
import torch.nn.functional as F


class RepMBCConv(nn.Module):
    """Multi-branch (3×3 + 1×1) during training → fused into a single conv for inference."""

    def __init__(self, in_ch, out_ch, kernel=3, stride=1, padding=1):
        super().__init__()
        # 3×3 branch, kept dense so it can be fused with the 1×1 branch
        # (a true depthwise conv would require out_ch divisible by in_ch)
        self.dw_conv = nn.Conv2d(in_ch, out_ch, kernel, stride, padding, bias=False)
        # 1×1 branch; it must share the stride so the two outputs can be added
        self.pw_conv = nn.Conv2d(in_ch, out_ch, 1, stride, 0, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.relu = nn.ReLU()
        self.fused_conv = None  # set by fuse() at deployment time
        # Initialization
        nn.init.kaiming_normal_(self.dw_conv.weight, mode="fan_out")
        nn.init.kaiming_normal_(self.pw_conv.weight, mode="fan_out")

    def forward(self, x):
        if self.fused_conv is not None:  # single-branch inference path
            return self.relu(self.fused_conv(x))
        return self.relu(self.bn(self.dw_conv(x)) + self.bn(self.pw_conv(x)))

    def fuse(self):
        """Merge the two branches into one conv for inference (in place)."""
        device = self.dw_conv.weight.device
        # Fold BN into each branch
        dw_w, dw_b = self._fuse_bn(self.dw_conv, self.bn)
        pw_w, pw_b = self._fuse_bn(self.pw_conv, self.bn)
        # Zero-pad the 1×1 kernel to 3×3, then merge the weights (dw_conv + pw_conv)
        pad = self.dw_conv.padding
        pw_w_pad = F.pad(pw_w, [pad[0]] * 4)
        fused_w = dw_w + pw_w_pad
        fused_b = dw_b + pw_b
        fused_conv = nn.Conv2d(
            self.dw_conv.in_channels,
            self.dw_conv.out_channels,
            self.dw_conv.kernel_size,
            self.dw_conv.stride,
            self.dw_conv.padding,
            bias=True,
        ).to(device)
        fused_conv.weight.data = fused_w
        fused_conv.bias.data = fused_b
        self.fused_conv = fused_conv

    def _fuse_bn(self, conv, bn):
        w = conv.weight
        mean = bn.running_mean
        var = bn.running_var
        gamma = bn.weight
        beta = bn.bias
        eps = bn.eps
        std = torch.sqrt(var + eps)
        w_fused = w * (gamma / std).view(-1, 1, 1, 1)
        b_fused = beta - mean * gamma / std
        return w_fused, b_fused
```
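Before building the network out of this block, it is worth checking that the fusion is exact. Below is a minimal sanity-check sketch (the channel counts, input shape, and tolerance are arbitrary choices, and it reuses the class and imports above): in `eval()` mode BatchNorm runs on its frozen statistics, so the fused single conv should reproduce the two-branch output up to floating-point error.

```python
# Sanity check: the fused single-branch path should match the two-branch
# training-time forward once BN uses its running statistics.
m = RepMBCConv(16, 32, stride=2)
m.eval()  # BN must use running stats, otherwise fusion is not exact
x = torch.randn(1, 16, 48, 48)
with torch.no_grad():
    y_two_branch = m(x)  # bn(dw(x)) + bn(pw(x)) → relu
    m.fuse()             # folds BN and merges the 3×3 and 1×1 branches
    y_fused = m(x)       # single fused conv → relu
print(torch.allclose(y_two_branch, y_fused, atol=1e-5))  # expected: True
```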
### 4.2 LIA (local importance attention)

```python
class LocalImportanceAttention(nn.Module):
    """Channel-importance gating: global average pooling + MLP + sigmoid."""

    def __init__(self, channels, reduction=4):
        super().__init__()
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        y = self.avg_pool(x).view(b, c)       # squeeze: per-channel statistics
        y = self.fc(y).view(b, c, 1, 1)       # excite: per-channel weights
        return x * y
```

### 4.3 PlainU-Net + PlainUSR

```python
class PlainU_NET(nn.Module):
    """Lightweight U-Net."""

    def __init__(self, in_ch=3, out_ch=3, base_ch=64):
        super().__init__()
        self.down1 = nn.Sequential(
            RepMBCConv(in_ch, base_ch),
            LocalImportanceAttention(base_ch),
        )
        self.down2 = nn.Sequential(
            RepMBCConv(base_ch, base_ch * 2, stride=2),
            LocalImportanceAttention(base_ch * 2),
        )
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.conv_up = nn.Sequential(
            RepMBCConv(base_ch * 3, base_ch),
            LocalImportanceAttention(base_ch),
        )
        self.out = nn.Sequential(
            RepMBCConv(base_ch, out_ch),
            nn.Tanh(),
        )

    def forward(self, x):
        e1 = self.down1(x)             # 64 ch, full resolution
        e2 = self.down2(e1)            # 128 ch, 1/2
        u = self.up(e2)                # 128 ch, 1/1
        u = torch.cat([u, e1], dim=1)  # 192 ch
        u = self.conv_up(u)            # 64 ch
        return self.out(u)


class PlainUSR(nn.Module):
    def __init__(self, scale=4):
        super().__init__()
        self.scale = scale
        self.backbone = PlainU_NET()

    def forward(self, x):
        # Bicubic upsample to the target size, then refine with the U-Net
        x = F.interpolate(x, scale_factor=self.scale, mode="bicubic", align_corners=False)
        return self.backbone(x)


model = PlainUSR(scale=4)
print(f"Params: {sum(p.numel() for p in model.parameters()) / 1e6:.2f}M")  # ~1.5M
```

## 5. Training

```python
import torch.optim as optim

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = PlainUSR(scale=4).to(device)
criterion = nn.L1Loss()
optimizer = optim.Adam(model.parameters(), lr=1e-4)

train_ds = SRDataset("DIV2K/DIV2K_train_HR", scale=4)
train_loader = DataLoader(train_ds, batch_size=16, shuffle=True, num_workers=4)

num_epochs = 100
for epoch in range(num_epochs):
    model.train()
    total_loss = 0.0
    for lr, hr in train_loader:
        lr, hr = lr.to(device), hr.to(device)
        optimizer.zero_grad()
        sr = model(lr)
        loss = criterion(sr, hr)
        loss.backward()
        optimizer.step()
        total_loss += loss.item()
    avg_loss = total_loss / len(train_loader)
    if (epoch + 1) % 10 == 0:
        print(f"Epoch {epoch + 1:3d} | Loss {avg_loss:.5f}")
```

### Training curve

```text
Epoch  10 | Loss 0.0832
Epoch  20 | Loss 0.0541
Epoch  30 | Loss 0.0428
Epoch  40 | Loss 0.0367
Epoch  50 | Loss 0.0331
Epoch  70 | Loss 0.0285
Epoch 100 | Loss 0.0243
```

## 6. Inference + Reparameterization

```python
def inference(model, lr_path, output_path):
    model.eval()
    # Fuse weights (training multi-branch → inference single branch)
    for m in model.modules():
        if isinstance(m, RepMBCConv):
            m.fuse()
    img = Image.open(lr_path).convert("RGB")
    lr_t = transforms.ToTensor()(img).unsqueeze(0).to(device)
    lr_t = lr_t * 2 - 1
    with torch.no_grad():
        sr_t = model(lr_t)
    sr_img = (sr_t.squeeze(0).cpu() + 1) / 2
    sr_img = transforms.ToPILImage()(sr_img.clamp(0, 1))
    sr_img.save(output_path)
```

## 7. Results

| Dataset | PSNR (dB) | SSIM | Inference time (480p) |
| --- | --- | --- | --- |
| Set5 | 31.42 | 0.895 | 1.8 ms |
| Set14 | 28.15 | 0.812 | 1.8 ms |
| BSD100 | 27.04 | 0.793 | 1.8 ms |
| Urban100 | 25.83 | 0.824 | 1.8 ms |

| Method | PSNR (Set5, ×4) | Params | Speed |
| --- | --- | --- | --- |
| Bicubic | 28.43 | - | 0 ms |
| EDSR | 32.46 | 43M | 25 ms |
| SwinIR | 32.92 | 12M | 40 ms |
| PlainUSR | 31.42 | 1.5M | 1.8 ms |

## 8. Optimization

| Problem | Cause | Fix |
| --- | --- | --- |
| PSNR below EDSR | only 1.5M parameters | increase base_ch to 96 (~3.2M) |
| Textures not sharp enough | L1 loss is over-smooth | add a perceptual loss (VGG16, layer 8) |
| Accuracy drop after reparameterization fusion | BN-fusion error | fine-tune the fused weights on a 100-image calibration set |
| Slow training | bicubic upsampling of full images | pre-downsample the LR images before training |

## 9. Summary

The PlainUSR super-resolution pipeline: bicubic upsampling → RepMBCConv (multi-branch training / single-branch inference) + LIA (channel attention) + PlainU-Net (down → up + skip) → Tanh output. It has only 1.5M parameters (3.5% of EDSR), runs 480p inference in 1.8 ms (RTX 3060), and reaches PSNR 31.42 (Set5, ×4). It is recommended for lightweight scenarios (mobile / real-time video); if you need the highest PSNR, use HAT/SwinIR instead. After the 100 training epochs, call fuse() on every RepMBCConv (as the inference function does) to merge the reparameterized branches before deployment.

## Code link and detailed walkthrough

Feishu link: https://ecn6838atsup.feishu.cn/wiki/EhRtwBe1CiqlSEkHGUwc5AP9nQe?from=from_copylink

Password: 946m228 (the link works; be careful not to copy extra spaces).