5/22/2023

Pytorch permute — Swin Transformer walkthrough

At the top level the model is assembled as: PatchEmbed module -> BasicLayer modules (nn.ModuleList) -> norm.

PatchEmbed (Patch Partition + Linear Embedding)

PatchEmbed splits the input image into non-overlapping patches and linearly projects each patch to a token. Both steps are done at once by a strided convolution, and the result is flattened and transposed into a sequence of tokens:

import torch.nn as nn
from timm.models.layers import to_2tuple

class PatchEmbed(nn.Module):
    """Image to Patch Embedding.

    Args:
        img_size (int): Input image size. Default: 224.
        patch_size (int): Patch token size. Default: 4.
        in_chans (int): Number of input image channels. Default: 3.
        embed_dim (int): Number of linear projection output channels. Default: 96.
        norm_layer (nn.Module, optional): Normalization layer. Default: None.
    """

    def __init__(self, img_size=224, patch_size=4, in_chans=3, embed_dim=96, norm_layer=None):
        super().__init__()
        img_size = to_2tuple(img_size)
        patch_size = to_2tuple(patch_size)
        patches_resolution = [img_size[0] // patch_size[0], img_size[1] // patch_size[1]]
        self.img_size = img_size
        self.patch_size = patch_size
        self.patches_resolution = patches_resolution
        self.num_patches = patches_resolution[0] * patches_resolution[1]
        self.in_chans = in_chans
        self.embed_dim = embed_dim

        # Patch partition + linear embedding in one op: a conv with
        # kernel_size == stride == patch_size projects each patch to embed_dim channels.
        self.proj = nn.Conv2d(in_chans, embed_dim, kernel_size=patch_size, stride=patch_size)
        self.norm = norm_layer(embed_dim) if norm_layer is not None else None

    def forward(self, x):
        B, C, H, W = x.shape
        # FIXME look at relaxing size constraints
        assert H == self.img_size[0] and W == self.img_size[1], \
            f"Input image size ({H}*{W}) doesn't match model ({self.img_size[0]}*{self.img_size[1]})."
        x = self.proj(x).flatten(2).transpose(1, 2)  # B Ph*Pw C
        if self.norm is not None:
            x = self.norm(x)
        return x

Swin Transformer Block

class BasicLayer(nn.Module): ...
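The shape bookkeeping in PatchEmbed can be checked without torch at all. Below is a minimal pure-Python sketch (the function name `patch_embed_shapes` is my own, not from the Swin code) that traces how a (B, C, H, W) image becomes a (B, Ph*Pw, embed_dim) token sequence under the default hyperparameters:

```python
def patch_embed_shapes(img_size=224, patch_size=4, in_chans=3, embed_dim=96, batch=1):
    """Trace PatchEmbed's shapes: (B, C, H, W) -> (B, Ph*Pw, embed_dim)."""
    ph = img_size // patch_size  # patches along height
    pw = img_size // patch_size  # patches along width
    num_patches = ph * pw
    in_shape = (batch, in_chans, img_size, img_size)
    # Conv2d(kernel=stride=patch_size) -> (B, embed_dim, Ph, Pw);
    # .flatten(2)                      -> (B, embed_dim, Ph*Pw);
    # .transpose(1, 2)                 -> (B, Ph*Pw, embed_dim)
    out_shape = (batch, num_patches, embed_dim)
    return in_shape, out_shape, num_patches

print(patch_embed_shapes())  # ((1, 3, 224, 224), (1, 3136, 96), 3136)
```

With img_size=224 and patch_size=4 this gives a 56x56 patch grid, i.e. 3136 tokens of dimension 96, which is exactly the sequence length the first BasicLayer receives.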