Putting image segmentation into practice involves building, training, and evaluating a semantic segmentation model. We will implement a simplified version of the U-Net architecture (a common choice for segmentation tasks, widely used in areas such as medical imaging) in PyTorch. Although the focus is on U-Net, the same principles apply to many other architectures such as FCN or DeepLab. We assume you have a working Python environment with PyTorch, TorchVision, and libraries such as NumPy and Matplotlib installed.

## 1. Data Preparation

Semantic segmentation requires images together with pixel-level masks. Each pixel in a mask is labeled with the class it belongs to (for example, 0 for background, 1 for road, 2 for building). For this exercise you can use a standard dataset such as Pascal VOC or Cityscapes, or even create a simple synthetic dataset. We assume the following directory structure:

```
data/
├── images/
│   ├── 0001.png
│   ├── 0002.png
│   └── ...
└── masks/
    ├── 0001.png
    ├── 0002.png
    └── ...
```

We need a custom PyTorch `Dataset` class to load the images and masks.

```python
import torch
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms
from PIL import Image
import os
import numpy as np

class SegmentationDataset(Dataset):
    def __init__(self, image_dir, mask_dir, transform=None, mask_transform=None):
        self.image_dir = image_dir
        self.mask_dir = mask_dir
        self.image_filenames = sorted(os.listdir(image_dir))
        self.mask_filenames = sorted(os.listdir(mask_dir))
        self.transform = transform
        self.mask_transform = mask_transform
        # Basic check: make sure the image and mask lists match
        assert len(self.image_filenames) == len(self.mask_filenames), \
            "Number of images and masks must be the same."
        # (Optional) add stricter checks here (e.g., matching filenames)

    def __len__(self):
        return len(self.image_filenames)

    def __getitem__(self, idx):
        img_path = os.path.join(self.image_dir, self.image_filenames[idx])
        mask_path = os.path.join(self.mask_dir, self.mask_filenames[idx])

        image = Image.open(img_path).convert("RGB")
        mask = Image.open(mask_path).convert("L")  # Assume the mask is grayscale

        if self.transform:
            image = self.transform(image)

        if self.mask_transform:
            # Important: apply the same geometric transforms to the image and the mask,
            # but never normalize mask values the way image values are normalized.
            # Random transforms require careful handling; for simplicity only
            # resizing / tensor conversion is assumed here.
            mask = self.mask_transform(mask)

        # Convert the mask to a LongTensor of class indices for CrossEntropyLoss
        if isinstance(mask, torch.Tensor):
            mask = mask.squeeze(0).long()
        else:
            mask = torch.from_numpy(np.array(mask)).long()

        return image, mask

# Define transforms (adjust size and normalization as needed)
image_transform = transforms.Compose([
    transforms.Resize((128, 128)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
])

mask_transform = transforms.Compose([
    transforms.Resize((128, 128), interpolation=transforms.InterpolationMode.NEAREST),  # NEAREST for masks
    transforms.PILToTensor()  # Keeps integer labels; ToTensor would rescale them to [0, 1]
])

# Create the Dataset and DataLoader
# Replace with your actual data paths
train_dataset = SegmentationDataset('data/images', 'data/masks',
                                    transform=image_transform, mask_transform=mask_transform)
# val_dataset = SegmentationDataset('data_val/images', 'data_val/masks',
#                                   transform=image_transform, mask_transform=mask_transform)  # for validation

train_loader = DataLoader(train_dataset, batch_size=8, shuffle=True, num_workers=4)
# val_loader = DataLoader(val_dataset, batch_size=8, shuffle=False, num_workers=4)
```

Note the use of `transforms.InterpolationMode.NEAREST` when resizing masks: it prevents the interpolation from creating invalid class labels between existing ones. The mask pipeline also uses `PILToTensor` rather than `ToTensor`, because `ToTensor` would rescale the integer labels to floats in [0, 1]; the mask should end up as a `LongTensor` of class indices.
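The transform pipelines above only resize. If you add random augmentations, the same random parameters must be applied to the image and its mask. One common pattern (a minimal sketch using `torchvision.transforms.functional`; the helper name `paired_random_flip` is illustrative and not part of the code above) is to draw the random decision once and apply the corresponding deterministic functional ops to both:

```python
import random
import torchvision.transforms.functional as TF

def paired_random_flip(image, mask, p=0.5):
    """Apply the same random horizontal flip to a PIL image and its PIL mask."""
    if random.random() < p:          # draw the random decision once ...
        image = TF.hflip(image)      # ... then apply the identical op to both
        mask = TF.hflip(mask)
    return image, mask
```

Such paired transforms would run inside `SegmentationDataset.__getitem__` on the PIL image/mask pair, before the per-input `transform` and `mask_transform` pipelines are applied.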
## 2. Defining the U-Net Model

Let's build a simplified U-Net. It consists of an encoder (contracting path) that captures contextual information and a decoder (expanding path) that recovers precise localization through upsampling and skip connections.

```python
import torch.nn as nn
import torch.nn.functional as F

class DoubleConv(nn.Module):
    """(Convolution => BatchNorm => ReLU) * 2"""
    def __init__(self, in_channels, out_channels, mid_channels=None):
        super().__init__()
        if not mid_channels:
            mid_channels = out_channels
        self.double_conv = nn.Sequential(
            nn.Conv2d(in_channels, mid_channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(mid_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(mid_channels, out_channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True)
        )

    def forward(self, x):
        return self.double_conv(x)

class Down(nn.Module):
    """Downscaling: max pooling followed by a double conv"""
    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.maxpool_conv = nn.Sequential(
            nn.MaxPool2d(2),
            DoubleConv(in_channels, out_channels)
        )

    def forward(self, x):
        return self.maxpool_conv(x)

class Up(nn.Module):
    """Upscaling followed by a double conv"""
    def __init__(self, in_channels, out_channels, bilinear=True):
        super().__init__()
        # With bilinear upsampling, use ordinary convolutions to reduce the channel count
        if bilinear:
            self.up = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=True)
            self.conv = DoubleConv(in_channels, out_channels, in_channels // 2)
        else:
            self.up = nn.ConvTranspose2d(in_channels, in_channels // 2, kernel_size=2, stride=2)
            self.conv = DoubleConv(in_channels, out_channels)

    def forward(self, x1, x2):
        x1 = self.up(x1)
        # Inputs are NCHW; pad x1 so its spatial size matches x2 before concatenation
        diffY = x2.size()[2] - x1.size()[2]
        diffX = x2.size()[3] - x1.size()[3]
        x1 = F.pad(x1, [diffX // 2, diffX - diffX // 2,
                        diffY // 2, diffY - diffY // 2])
        # If you run into padding issues, see
        # https://github.com/HaiyongJiang/U-Net-Pytorch-Unstructured-Buggy/commit/0e854509c2cea854e5474a7ae105f32e70a5168b
        # https://github.com/xiaopeng-liao/Pytorch-UNet/commit/8ebac70e633bac59fc22bb5195e513d5832fb3bd
        x = torch.cat([x2, x1], dim=1)
        return self.conv(x)

class OutConv(nn.Module):
    def __init__(self, in_channels, out_channels):
        super(OutConv, self).__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=1)

    def forward(self, x):
        return self.conv(x)

class UNet(nn.Module):
    def __init__(self, n_channels, n_classes, bilinear=True):
        super(UNet, self).__init__()
        self.n_channels = n_channels
        self.n_classes = n_classes
        self.bilinear = bilinear

        self.inc = DoubleConv(n_channels, 64)
        self.down1 = Down(64, 128)
        self.down2 = Down(128, 256)
        self.down3 = Down(256, 512)
        factor = 2 if bilinear else 1
        self.down4 = Down(512, 1024 // factor)
        self.up1 = Up(1024, 512 // factor, bilinear)
        self.up2 = Up(512, 256 // factor, bilinear)
        self.up3 = Up(256, 128 // factor, bilinear)
        self.up4 = Up(128, 64, bilinear)
        self.outc = OutConv(64, n_classes)

    def forward(self, x):
        x1 = self.inc(x)
        x2 = self.down1(x1)
        x3 = self.down2(x2)
        x4 = self.down3(x3)
        x5 = self.down4(x4)
        x = self.up1(x5, x4)
        x = self.up2(x, x3)
        x = self.up3(x, x2)
        x = self.up4(x, x1)
        logits = self.outc(x)
        return logits

# Instantiate the model
# n_channels=3 for RGB images, n_classes = number of segmentation classes (e.g., 2 for binary segmentation)
num_classes = 2  # Example: background + foreground
model = UNet(n_channels=3, n_classes=num_classes)
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model.to(device)
```

This U-Net implementation uses standard convolutional blocks, max pooling for downsampling, and either bilinear upsampling or transposed convolutions for upsampling. The skip connections concatenate feature maps from the encoder path with the upsampled feature maps in the decoder path, helping recover fine details lost during downsampling.
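Before training, a quick sanity check with a dummy forward pass can confirm that the model is wired correctly. The sketch below assumes the 128×128 input size and the two-class model instantiated above; the output should have shape `[batch, n_classes, H, W]`.

```python
# Minimal sanity check: one fake 128x128 RGB image through the untrained model
dummy = torch.randn(1, 3, 128, 128).to(device)
with torch.no_grad():
    out = model(dummy)
print(out.shape)  # expected: torch.Size([1, 2, 128, 128]) for num_classes = 2
```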
## 3. Loss Function and Optimizer

For multi-class semantic segmentation, the standard loss function is cross-entropy applied per pixel; each pixel is treated as its own classification problem. If your dataset is highly imbalanced (for example, small objects against a large background), consider weighted cross-entropy or Dice loss.

$$ \text{CrossEntropyLoss}(output, target) = -\sum_{c=1}^{C} target_c \log(\text{softmax}(output)_c) $$

where $C$ is the number of classes, $output$ contains the model's raw logits for a pixel, and $target$ is the one-hot encoded ground-truth label for that pixel (PyTorch's `nn.CrossEntropyLoss` works directly with integer targets, though). We will use the Adam optimizer.

```python
import torch.optim as optim

# Loss function
# `ignore_index` is useful if some labels should be ignored (e.g., boundary pixels)
criterion = nn.CrossEntropyLoss()  # ignore_index=255
# Optimizer
optimizer = optim.Adam(model.parameters(), lr=1e-4)  # The learning rate may need tuning
```

## 4. Training Loop

The training loop iterates over the dataset, performs forward and backward passes, and updates the model weights.

```python
num_epochs = 25  # Adjust as needed
train_losses = []

model.train()  # Set the model to training mode
for epoch in range(num_epochs):
    running_loss = 0.0
    for i, (images, masks) in enumerate(train_loader):
        images = images.to(device)
        masks = masks.to(device)  # Shape: [batch_size, height, width]

        # Zero the parameter gradients
        optimizer.zero_grad()

        # Forward pass
        outputs = model(images)  # Shape: [batch_size, num_classes, height, width]

        # Compute the loss
        loss = criterion(outputs, masks)

        # Backward pass and optimization
        loss.backward()
        optimizer.step()

        running_loss += loss.item()
        if (i + 1) % 50 == 0:  # Print status every 50 mini-batches
            print(f'Epoch [{epoch+1}/{num_epochs}], Step [{i+1}/{len(train_loader)}], '
                  f'Loss: {loss.item():.4f}')

    epoch_loss = running_loss / len(train_loader)
    train_losses.append(epoch_loss)
    print(f'Epoch [{epoch+1}/{num_epochs}] completed. Average Loss: {epoch_loss:.4f}')

print('Finished Training')

# Save the trained model (optional)
# torch.save(model.state_dict(), 'unet_segmentation_model.pth')
```

This is a basic training loop. In practice you would also add:

- a validation loop to monitor performance on unseen data;
- learning rate scheduling;
- evaluation metrics such as IoU computed during validation;
- model checkpointing.

## 5. Evaluation Metric: Intersection over Union (IoU)

The most common metric for segmentation is Intersection over Union (IoU), also known as the Jaccard index. It measures, for a given class, the overlap between the predicted segmentation mask ($A$) and the ground-truth mask ($B$).

$$ IoU = J(A, B) = \frac{|A \cap B|}{|A \cup B|} = \frac{\text{area of intersection}}{\text{area of union}} $$

The mean IoU (mIoU), the average of the per-class IoU values, is what is usually reported.

```python
def calculate_iou(pred, target, num_classes, smooth=1e-6):
    """Compute the IoU for each class."""
    pred = torch.argmax(pred, dim=1)  # Convert logits to predicted class indices [B, H, W]
    pred = pred.contiguous().view(-1)
    target = target.contiguous().view(-1)

    iou_per_class = []
    for clas in range(num_classes):  # IoU for each class
        pred_inds = (pred == clas)
        target_inds = (target == clas)
        intersection = (pred_inds[target_inds]).long().sum().item()
        union = pred_inds.long().sum().item() + target_inds.long().sum().item() - intersection

        if union == 0:
            # Class absent from both prediction and ground truth
            iou_per_class.append(float('nan'))  # or 0 or 1, depending on the convention
        else:
            iou = (intersection + smooth) / (union + smooth)
            iou_per_class.append(iou)
    return np.array(iou_per_class)

def calculate_miou(pred_loader, model, num_classes, device):
    """Compute the mean IoU over a dataset by averaging per-batch IoU values."""
    model.eval()  # Set the model to evaluation mode
    total_iou = np.zeros(num_classes)
    class_counts = np.zeros(num_classes)  # Number of batches with a valid IoU per class

    with torch.no_grad():
        for images, masks in pred_loader:
            images = images.to(device)
            masks = masks.to(device)   # Ground-truth masks
            outputs = model(images)    # Model predictions (logits)

            iou = calculate_iou(outputs.cpu(), masks.cpu(), num_classes)
            # Accumulate per class, skipping NaNs for classes absent from this batch.
            # Averaging batch IoU is only an approximation of the true mIoU.
            for c in range(num_classes):
                if not np.isnan(iou[c]):
                    total_iou[c] += iou[c]
                    class_counts[c] += 1

    # Mean IoU per class, ignoring classes that never appeared in the data
    with np.errstate(invalid='ignore'):
        mean_iou_per_class = total_iou / class_counts
    mean_iou = np.nanmean(mean_iou_per_class)  # Average over the classes that are present

    print(f'Mean IoU: {mean_iou:.4f}')
    print(f'IoU per class: {mean_iou_per_class}')
    return mean_iou

# Example usage after training (assuming you have a val_loader)
# mIoU = calculate_miou(val_loader, model, num_classes, device)
```

A more accurate mIoU implementation accumulates per-class intersection and union counts over all batches and divides only once at the end, rather than averaging per-batch IoU values; this matters especially when some classes are missing from some batches.
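As a concrete illustration of that accumulation-based approach, here is a minimal sketch (our own helper, `calculate_miou_accumulated`, assuming the same `model`, `device`, and integer-labeled masks as above): it keeps running per-class intersection and union counts over the whole loader and computes the IoU ratios only once at the end.

```python
def calculate_miou_accumulated(loader, model, num_classes, device):
    """mIoU from intersection/union counts accumulated over the entire dataset."""
    model.eval()
    intersection = np.zeros(num_classes, dtype=np.int64)
    union = np.zeros(num_classes, dtype=np.int64)

    with torch.no_grad():
        for images, masks in loader:
            preds = torch.argmax(model(images.to(device)), dim=1).cpu().reshape(-1)
            masks = masks.reshape(-1)
            for c in range(num_classes):
                pred_c = (preds == c)
                target_c = (masks == c)
                intersection[c] += (pred_c & target_c).sum().item()
                union[c] += (pred_c | target_c).sum().item()

    # Per-class IoU; classes that never appear (union == 0) are excluded from the mean
    iou = np.where(union > 0, intersection / np.maximum(union, 1), np.nan)
    return np.nanmean(iou), iou

# Example usage:
# miou, per_class_iou = calculate_miou_accumulated(val_loader, model, num_classes, device)
```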
## 6. Visualizing Predictions

Visualizing model outputs helps you assess performance qualitatively.

```python
import matplotlib.pyplot as plt

def visualize_predictions(dataset, model, device, num_samples=5):
    model.eval()
    samples_shown = 0
    fig, axes = plt.subplots(num_samples, 3, figsize=(10, num_samples * 3))
    fig.suptitle("Image / Ground Truth / Prediction")

    # Re-load the same data without normalization so the raw images can be displayed
    vis_dataset = SegmentationDataset(
        dataset.image_dir, dataset.mask_dir,
        transform=transforms.Compose([transforms.Resize((128, 128))]),
        mask_transform=transforms.Compose([
            transforms.Resize((128, 128), interpolation=transforms.InterpolationMode.NEAREST)
        ]))

    # Normalization applied to the image before it is fed to the model
    input_transform = transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
    ])

    with torch.no_grad():
        for i in range(len(vis_dataset)):
            if samples_shown >= num_samples:
                break
            raw_image, raw_mask = vis_dataset[i]  # Un-normalized image and mask
            input_image = input_transform(raw_image).unsqueeze(0).to(device)  # Prepare for the model

            output = model(input_image)
            pred_mask = torch.argmax(output.squeeze(0), dim=0).cpu().numpy()

            axes[samples_shown, 0].imshow(raw_image)
            axes[samples_shown, 0].set_title("Image")
            axes[samples_shown, 0].axis('off')

            axes[samples_shown, 1].imshow(raw_mask, cmap='gray')  # Adjust the colormap if needed
            axes[samples_shown, 1].set_title("Ground Truth")
            axes[samples_shown, 1].axis('off')

            axes[samples_shown, 2].imshow(pred_mask, cmap='gray')  # Adjust the colormap to the number of classes
            axes[samples_shown, 2].set_title("Prediction")
            axes[samples_shown, 2].axis('off')

            samples_shown += 1

    plt.tight_layout(rect=[0, 0.03, 1, 0.95])  # Leave room for the suptitle
    plt.show()

# Example usage:
# visualize_predictions(train_dataset, model, device)  # Use the training or validation dataset
```

This visualization function displays, side by side, a few examples of the original image, the ground-truth mask, and the model's predicted mask.

Optionally, you can plot the training loss curve to check convergence:

*[Chart: training loss per epoch — average loss vs. epoch over the 25 epochs.]*

The training loss curve shows a steady downward trend over the 25 training epochs.

## Next Steps and Things to Try

This hands-on exercise is a starting point. To improve your segmentation model, consider:

- Data augmentation: apply spatial and color augmentations (e.g., rotations, flips, brightness changes) consistently to both images and masks.
- More advanced architectures: implement, or use pre-built versions of, DeepLabV3+ or other more recent models.
- Transfer learning: initialize the U-Net encoder with weights pretrained on a large dataset such as ImageNet.
- Different loss functions: experiment with Dice loss or Focal loss, particularly for imbalanced datasets (a Dice loss sketch follows at the end of this section).
- Hyperparameter tuning: systematically tune the learning rate, batch size, optimizer, and network depth.
- Post-processing: apply techniques such as Conditional Random Fields (CRFs) to refine the predicted segmentation boundaries.

Building an effective segmentation model involves careful data preparation, an appropriate architecture choice, a correct loss function implementation, and thorough evaluation. This hands-on exercise provides the basic building blocks for tackling a wide range of segmentation problems.
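For the Dice loss suggestion above, here is a minimal multi-class soft Dice loss sketch (our own illustration under the assumptions of this tutorial: logits of shape `[B, C, H, W]` and integer masks of shape `[B, H, W]`; the class name `DiceLoss` is not part of the code above). It converts logits to per-class probabilities with softmax, one-hot encodes the targets, and returns 1 minus the mean Dice coefficient.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DiceLoss(nn.Module):
    """Soft multi-class Dice loss for [B, C, H, W] logits and [B, H, W] integer targets."""
    def __init__(self, smooth=1e-6):
        super().__init__()
        self.smooth = smooth

    def forward(self, logits, targets):
        num_classes = logits.shape[1]
        probs = F.softmax(logits, dim=1)                             # [B, C, H, W]
        targets_onehot = F.one_hot(targets, num_classes)             # [B, H, W, C]
        targets_onehot = targets_onehot.permute(0, 3, 1, 2).float()  # [B, C, H, W]

        dims = (0, 2, 3)  # sum over batch and spatial dims, keep the class dim
        intersection = (probs * targets_onehot).sum(dims)
        cardinality = probs.sum(dims) + targets_onehot.sum(dims)
        dice = (2.0 * intersection + self.smooth) / (cardinality + self.smooth)
        return 1.0 - dice.mean()
```

In practice, Dice loss is often combined with cross-entropy (for example, summing the two losses) rather than used on its own.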