Rethinking Neural Network Activation Functions: Reproducing Dynamic ReLU and the ACON Activation Function

李长安, published 2023/03/17 12:32:12
[Abstract] Rethinking neural network activation functions: reproducing Dynamic ReLU and the ACON activation function.


I. Dynamic ReLU

1. Paper Overview

  ReLU is an important milestone in deep learning: simple but powerful, it dramatically improved the performance of neural networks. Many ReLU variants have since been proposed, such as Leaky ReLU and PReLU, but in all of them the final parameters are fixed, just as in the original. The paper therefore asks a natural question: could the ReLU parameters be adjusted according to the input features?

As shown in the figure above, the Dynamic ReLU proposed in the paper is a piecewise function that significantly increases a network's representational power at the cost of only a small amount of extra computation. The figure below contrasts Dynamic ReLU with other kinds of activation functions:
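Concretely, following the paper's notation, DY-ReLU replaces the fixed max(x, 0) with an input-dependent piecewise-linear function whose coefficients are produced by a hyper-function θ(x) computed from the input:

$$y_c = \max_{1 \le k \le K}\big\{a_c^k(x)\, x_c + b_c^k(x)\big\}$$

where K is the number of linear pieces (K = 2 in this article's code) and $a_c^k(x)$, $b_c^k(x)$ are the dynamic slopes and intercepts for channel c.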

2. Reproduction Details

  The paper provides three forms of DY-ReLU with different sharing schemes over spatial positions and channels, as shown in the figure below:

2.1 DY-ReLU-A

  Spatial- and channel-shared: the computation is shown in Figure 2(a) of the paper. Only 2K parameters need to be output, so this variant is the cheapest to compute but also the least expressive.

2.2 DY-ReLU-B

  Spatial-shared and channel-wise: 2KC parameters are output.

2.3 DY-ReLU-C

  Neither spatial positions nor channels are shared (spatial and channel-wise): every element of every channel gets its own activation function. This is the most expressive variant, but the number of parameters to output (2KCHW) is too large; producing them directly with a fully connected layer as above would add far too much extra computation.

The figure below shows the image-classification results for the three variants.

3. Code Implementation

  Note that only the DY-ReLU-A and DY-ReLU-B variants are implemented here.

import paddle
import paddle.nn as nn

class DyReLU(nn.Layer):
    def __init__(self, channels, reduction=4, k=2, conv_type='2d'):
        super(DyReLU, self).__init__()
        self.channels = channels
        self.k = k
        self.conv_type = conv_type
        assert self.conv_type in ['1d', '2d']

        self.fc1 = nn.Linear(channels, channels // reduction)
        self.relu = nn.ReLU()
        self.fc2 = nn.Linear(channels // reduction, 2*k)
        self.sigmoid = nn.Sigmoid()

        self.register_buffer('lambdas', paddle.to_tensor([1.]*k + [0.5]*k))
        self.register_buffer('init_v', paddle.to_tensor([1.] + [0.]*(2*k - 1)))

    def get_relu_coefs(self, x):
        theta = paddle.mean(x, axis=-1)
        if self.conv_type == '2d':
            theta = paddle.mean(theta, axis=-1)
        theta = self.fc1(theta)
        theta = self.relu(theta)
        theta = self.fc2(theta)
        theta = 2 * self.sigmoid(theta) - 1
        return theta

    def forward(self, x):
        raise NotImplementedError

class DyReLUA(DyReLU):
    def __init__(self, channels, reduction=4, k=2, conv_type='2d'):
        super(DyReLUA, self).__init__(channels, reduction, k, conv_type)
        self.fc2 = nn.Linear(channels // reduction, 2*k)

    def forward(self, x):
        assert x.shape[1] == self.channels
        theta = self.get_relu_coefs(x)

        # [B, 2k]: the first k values are slopes, the last k are intercepts
        relu_coefs = theta.reshape([-1, 2 * self.k]) * self.lambdas + self.init_v

        if self.conv_type == '1d':
            # BxCxL -> LxCxBx1
            x_perm = x.transpose([2, 1, 0]).unsqueeze(-1)
            output = x_perm * relu_coefs[:, :self.k] + relu_coefs[:, self.k:]
            # LxCxBxk -> BxCxL
            result = paddle.max(output, axis=-1).transpose([2, 1, 0])
        elif self.conv_type == '2d':
            # BxCxHxW -> HxWxCxBx1
            x_perm = x.transpose([2, 3, 1, 0]).unsqueeze(-1)
            output = x_perm * relu_coefs[:, :self.k] + relu_coefs[:, self.k:]
            # HxWxCxBxk -> BxCxHxW
            result = paddle.max(output, axis=-1).transpose([3, 2, 0, 1])

        return result

class DyReLUB(DyReLU):
    def __init__(self, channels, reduction=4, k=2, conv_type='2d'):
        super(DyReLUB, self).__init__(channels, reduction, k, conv_type)
        self.fc2 = nn.Linear(channels // reduction, 2*k*channels)

    def forward(self, x):
        assert x.shape[1] == self.channels
        theta = self.get_relu_coefs(x)

        relu_coefs = theta.reshape([-1, self.channels, 2*self.k]) * self.lambdas + self.init_v

        if self.conv_type == '1d':
            # BxCxL -> LxBxCx1
            x_perm = x.transpose([2, 0, 1]).unsqueeze(-1)
            output = x_perm * relu_coefs[:, :, :self.k] + relu_coefs[:, :, self.k:]
            # LxBxCxk -> BxCxL
            result = paddle.max(output, axis=-1).transpose([1, 2, 0])

        elif self.conv_type == '2d':
            # BxCxHxW -> HxWxBxCx1
            x_perm = x.transpose([2, 3, 0, 1]).unsqueeze(-1)
            output = x_perm * relu_coefs[:, :, :self.k] + relu_coefs[:, :, self.k:]
            # HxWxBxCxk -> BxCxHxW
            result = paddle.max(output, axis=-1).transpose([2, 3, 0, 1])

        return result
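As a quick sanity check of the implementation above (a throwaway sketch; the sizes are purely illustrative), DyReLUB applied to a random NCHW feature map should preserve the input shape:

dy_relu = DyReLUB(channels=16, conv_type='2d')
feat = paddle.randn([4, 16, 8, 8])     # NCHW feature map
out = dy_relu(feat)
print(out.shape)                       # expected: [4, 16, 8, 8]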

4. Comparison Experiment

  For an easy comparison with an intuitive network structure, this project uses AlexNet as the experimental model and Cifar10 as the dataset, and compares results by swapping the activation functions inside the network.

import math
import paddle
import paddle.nn as nn
import paddle.nn.functional as F

from paddle.nn import Linear, Dropout, ReLU
from paddle.nn import Conv2D, MaxPool2D
from paddle.nn.initializer import Uniform
from paddle import ParamAttr
from paddle.utils.download import get_weights_path_from_url

model_urls = {
    "alexnet": (
        "https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/AlexNet_pretrained.pdparams",
        "7f0f9f737132e02732d75a1459d98a43", )
}

__all__ = []


class ConvPoolLayer(nn.Layer):
    def __init__(self,
                 input_channels,
                 output_channels,
                 filter_size,
                 stride,
                 padding,
                 stdv,
                 groups=1,
                 act=None):
        super(ConvPoolLayer, self).__init__()

        # The original ReLU (used when act == "relu") is replaced by DyReLUB here
        self.relu = DyReLUB(output_channels, conv_type='2d')

        self._conv = Conv2D(
            in_channels=input_channels,
            out_channels=output_channels,
            kernel_size=filter_size,
            stride=stride,
            padding=padding,
            groups=groups,
            weight_attr=ParamAttr(initializer=Uniform(-stdv, stdv)),
            bias_attr=ParamAttr(initializer=Uniform(-stdv, stdv)))
        self._pool = MaxPool2D(kernel_size=3, stride=2, padding=0)

    def forward(self, inputs):
        x = self._conv(inputs)
        if self.relu is not None:
            x = self.relu(x)
            # print(x.shape)
        x = self._pool(x)
        return x


class AlexNet_dyr(nn.Layer):
    """AlexNet model from
    `"ImageNet Classification with Deep Convolutional Neural Networks"
    <https://proceedings.neurips.cc/paper/2012/file/c399862d3b9d6b76c8436e924a68c45b-Paper.pdf>`_
    Args:
        num_classes (int): Output dim of last fc layer. Default: 1000.
    Examples:
        .. code-block:: python
            from paddle.vision.models import AlexNet
            alexnet = AlexNet()
    """

    def __init__(self, num_classes=1000):
        super(AlexNet_dyr, self).__init__()
        self.num_classes = num_classes

        stdv = 1.0 / math.sqrt(3 * 11 * 11)
        self._conv1 = ConvPoolLayer(3, 64, 11, 4, 2, stdv, act="relu")
        stdv = 1.0 / math.sqrt(64 * 5 * 5)
        self._conv2 = ConvPoolLayer(64, 192, 5, 1, 2, stdv, act="relu")
        stdv = 1.0 / math.sqrt(192 * 3 * 3)
        self._conv3 = Conv2D(
            192,
            384,
            3,
            stride=1,
            padding=1,
            weight_attr=ParamAttr(initializer=Uniform(-stdv, stdv)),
            bias_attr=ParamAttr(initializer=Uniform(-stdv, stdv)))
        stdv = 1.0 / math.sqrt(384 * 3 * 3)
        self._conv4 = Conv2D(
            384,
            256,
            3,
            stride=1,
            padding=1,
            weight_attr=ParamAttr(initializer=Uniform(-stdv, stdv)),
            bias_attr=ParamAttr(initializer=Uniform(-stdv, stdv)))
        stdv = 1.0 / math.sqrt(256 * 3 * 3)
        self._conv5 = ConvPoolLayer(256, 256, 3, 1, 1, stdv, act="relu")

        if self.num_classes > 0:
            stdv = 1.0 / math.sqrt(256 * 6 * 6)
            self._drop1 = Dropout(p=0.5, mode="downscale_in_infer")
            self._fc6 = Linear(
                in_features=256 * 6 * 6,
                out_features=4096,
                weight_attr=ParamAttr(initializer=Uniform(-stdv, stdv)),
                bias_attr=ParamAttr(initializer=Uniform(-stdv, stdv)))

            self._drop2 = Dropout(p=0.5, mode="downscale_in_infer")
            self._fc7 = Linear(
                in_features=4096,
                out_features=4096,
                weight_attr=ParamAttr(initializer=Uniform(-stdv, stdv)),
                bias_attr=ParamAttr(initializer=Uniform(-stdv, stdv)))
            self._fc8 = Linear(
                in_features=4096,
                out_features=num_classes,
                weight_attr=ParamAttr(initializer=Uniform(-stdv, stdv)),
                bias_attr=ParamAttr(initializer=Uniform(-stdv, stdv)))

    def forward(self, inputs):
        x = self._conv1(inputs)
        x = self._conv2(x)
        x = self._conv3(x)
        x = F.relu(x)
        x = self._conv4(x)
        x = F.relu(x)
        x = self._conv5(x)

        if self.num_classes > 0:
            x = paddle.flatten(x, start_axis=1, stop_axis=-1)
            x = self._drop1(x)
            x = self._fc6(x)
            x = F.relu(x)
            x = self._drop2(x)
            x = self._fc7(x)
            x = F.relu(x)
            x = self._fc8(x)

        return x
alexdyr = AlexNet_dyr(num_classes=10)

paddle.summary(alexdyr,(1,3,224,224))
import paddle

from paddle.metric import Accuracy
from paddle.vision.transforms import Compose, Normalize, Resize, Transpose, ToTensor

callback = paddle.callbacks.VisualDL(log_dir='visualdl_log_dir_alex_dyrelu')

normalize = Normalize(mean=[0.5, 0.5, 0.5],
                      std=[0.5, 0.5, 0.5],
                      data_format='CHW')
transform = Compose([Resize(size=(224, 224)), ToTensor(), normalize])

cifar10_train = paddle.vision.datasets.Cifar10(mode='train',
                                               transform=transform)
cifar10_test = paddle.vision.datasets.Cifar10(mode='test',
                                              transform=transform)

# Build the training data loader
train_loader = paddle.io.DataLoader(cifar10_train, batch_size=512, shuffle=True, drop_last=True)

# Build the test data loader
test_loader = paddle.io.DataLoader(cifar10_test, batch_size=512, shuffle=True, drop_last=True)

alexdyr = paddle.Model(AlexNet_dyr(num_classes=10))
optim = paddle.optimizer.Adam(learning_rate=0.001, parameters=alexdyr.parameters())

alexdyr.prepare(
    optim,
    paddle.nn.CrossEntropyLoss(),
    Accuracy()
    )

alexdyr.fit(train_data=train_loader,
        eval_data=test_loader,
        epochs=12,
        callbacks=callback,
        verbose=1
        )
import paddle
from paddle.vision.models import AlexNet
from paddle.metric import Accuracy
from paddle.vision.transforms import Compose, Normalize, Resize, Transpose, ToTensor

callback = paddle.callbacks.VisualDL(log_dir='visualdl_log_dir_alexnet')

normalize = Normalize(mean=[0.5, 0.5, 0.5],
                      std=[0.5, 0.5, 0.5],
                      data_format='CHW')
transform = Compose([Resize(size=(224, 224)), ToTensor(), normalize])

cifar10_train = paddle.vision.datasets.Cifar10(mode='train',
                                               transform=transform)
cifar10_test = paddle.vision.datasets.Cifar10(mode='test',
                                              transform=transform)

# Build the training data loader
train_loader = paddle.io.DataLoader(cifar10_train, batch_size=512, shuffle=True, drop_last=True)

# Build the test data loader
test_loader = paddle.io.DataLoader(cifar10_test, batch_size=512, shuffle=True, drop_last=True)

alex = paddle.Model(AlexNet(num_classes=10))
optim = paddle.optimizer.Adam(learning_rate=0.001, parameters=alex.parameters())

alex.prepare(
    optim,
    paddle.nn.CrossEntropyLoss(),
    Accuracy()
    )

alex.fit(train_data=train_loader,
        eval_data=test_loader,
        epochs=12,
        callbacks=callback,
        verbose=1
        )

5. Experimental Comparison Results

6. Summary

  This project reproduces Dynamic ReLU proposed by Microsoft. The activation is a piecewise function that can significantly improve a network's representational power while adding only a small amount of extra computation. The paper proposes three variants; only variants A and B are reproduced here. Comparing the loss and accuracy curves of the two experiments shows that Dynamic ReLU can effectively improve model performance.

II. The ACON Activation Function



  In this paper the authors propose ACON, a simple and effective activation function that can decide whether or not to activate a neuron. Building on ACON, the authors further propose Meta-ACON, which introduces a switching factor to learn the switch between the non-linear (activated) and linear (not activated) states. Experiments show that this activation yields clear improvements for deep models on image classification, object detection and semantic segmentation.

Paper link | Code link

Smooth Maximum

  The activation functions we commonly use today are essentially MAX functions. Taking ReLU as an example, it can be written as:

$$\mathrm{ReLU}(x) = \max(x, 0)$$

The smooth, differentiable variant of the MAX function is called the smooth maximum, defined as:

$$S_\beta(x_1, \ldots, x_n) = \frac{\sum_{i=1}^{n} x_i\, e^{\beta x_i}}{\sum_{i=1}^{n} e^{\beta x_i}}$$

Here we only consider the smooth maximum with two inputs, i.e. n = 2, which gives:

$$S_\beta\big(\eta_a(x), \eta_b(x)\big) = \big(\eta_a(x) - \eta_b(x)\big)\cdot \sigma\!\big(\beta\,(\eta_a(x) - \eta_b(x))\big) + \eta_b(x)$$

where σ denotes the sigmoid function. Considering the smooth form of ReLU, substituting η_a(x) = x and η_b(x) = 0 into the formula gives

$$f_{\text{ACON-A}}(x) = x \cdot \sigma(\beta x),$$

which is exactly the Swish activation function! So Swish is a smooth approximation of ReLU, and the paper calls it ACON-A. More generally, taking η_a(x) = p_1 x and η_b(x) = p_2 x with learnable p_1 and p_2 gives ACON-C:

$$f_{\text{ACON-C}}(x) = (p_1 - p_2)\,x \cdot \sigma\!\big(\beta\,(p_1 - p_2)\,x\big) + p_2\, x$$

The first derivative of ACON-C is:

$$\frac{d f_{\text{ACON-C}}(x)}{dx} = (p_1 - p_2)\,\sigma(u)\,\big[1 + u\,(1 - \sigma(u))\big] + p_2, \qquad u = \beta\,(p_1 - p_2)\,x,$$

which tends to p_1 as x → +∞ and to p_2 as x → −∞. Setting the second derivative to zero and solving gives the bounds of the first derivative:

$$\max_x \frac{d f_{\text{ACON-C}}(x)}{dx} \approx 1.0998\,p_1 - 0.0998\,p_2, \qquad \min_x \frac{d f_{\text{ACON-C}}(x)}{dx} \approx 1.0998\,p_2 - 0.0998\,p_1$$

Learnable bounds are essential for easing optimization: these learnable upper and lower bounds of the gradient are the key to the improved results.
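As a quick numeric sanity check of the n = 2 case above (a throwaway sketch, not part of the original article), the two-input smooth maximum of x and 0 should coincide with the Swish form x · sigmoid(βx):

import paddle
import paddle.nn.functional as F

beta = 1.5
x = paddle.linspace(-5.0, 5.0, 11)

# two-input smooth maximum S_beta(x, 0); note that e^{beta * 0} = 1
smooth_max = (x * paddle.exp(beta * x)) / (paddle.exp(beta * x) + 1.0)
# ACON-A / Swish form
swish = x * F.sigmoid(beta * x)

print(paddle.allclose(smooth_max, swish, atol=1e-6))  # expected: True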

Reproduction with the PaddlePaddle Framework

  • Some API notes

paddle.static.create_parameter

This OP creates a parameter: a learnable variable that has a gradient and can be optimized. In dynamic-graph mode the equivalent is paddle.create_parameter, which is what the code below uses.
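For example, a minimal sketch (the shape and initializer here are purely illustrative) of creating a learnable per-channel tensor:

import paddle
import paddle.nn as nn

# one learnable scalar per channel, initialized to 1.0
scale = paddle.create_parameter(shape=[1, 64, 1, 1], dtype='float32',
                                default_initializer=nn.initializer.Constant(1.0))
print(scale.stop_gradient)  # False: the parameter receives gradients during training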

Following the official ACON code, I reproduced the Paddle version of ACON-C as shown below. By comparison, the Paddle API is a little more concise and lets you specify the initializer directly when creating the parameter. There are in fact many initialization schemes; feel free to look them up.

import paddle
from paddle import nn
import paddle.nn.functional as F

class AconC(nn.Layer):
    """ ACON activation (activate or not).
    # AconC: (p1*x-p2*x) * sigmoid(beta*(p1*x-p2*x)) + p2*x, beta is a learnable parameter
    # according to "Activate or Not: Learning Customized Activation" <https://arxiv.org/pdf/2009.04759.pdf>.
    """

    def __init__(self, width):
        super().__init__()
        
        # p1 and p2 are learnable per-channel slopes; beta is the learnable switching factor, initialized to 1
        self.p1 = paddle.create_parameter([1, width, 1, 1], dtype='float32', default_initializer=nn.initializer.Normal())
        self.p2 = paddle.create_parameter([1, width, 1, 1], dtype='float32', default_initializer=nn.initializer.Normal())
        self.beta = paddle.create_parameter([1, width, 1, 1], dtype='float32', default_initializer=nn.initializer.Constant(value=1.0))

    def forward(self, x):
        return (self.p1 * x - self.p2 * x) * F.sigmoid(self.beta * (self.p1 * x - self.p2 * x)) + self.p2 * x
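A quick, hedged smoke test of AconC (illustrative shapes only): the output shape should match the input, and with p1 = 1, p2 = 0 and a large beta the function approaches ReLU.

acon = AconC(width=64)
feat = paddle.randn([2, 64, 7, 7])
print(acon(feat).shape)  # expected: [2, 64, 7, 7]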

Building the network

class dcn2(paddle.nn.Layer):
    def __init__(self, num_classes=1):
        super(dcn2, self).__init__()

        self.conv1 = paddle.nn.Conv2D(in_channels=3, out_channels=32, kernel_size=(3, 3), stride=1, padding = 1)
        # self.pool1 = paddle.nn.MaxPool2D(kernel_size=2, stride=2)

        self.conv2 = paddle.nn.Conv2D(in_channels=32, out_channels=64, kernel_size=(3,3),  stride=2, padding = 0)
        # self.pool2 = paddle.nn.MaxPool2D(kernel_size=2, stride=2)

        self.conv3 = paddle.nn.Conv2D(in_channels=64, out_channels=64, kernel_size=(3,3), stride=2, padding = 0)

        self.acon1 = AconC(64)
      

        self.conv4 = paddle.nn.Conv2D(in_channels=64, out_channels=64, kernel_size=(3,3), stride=2, padding = 1)

        self.flatten = paddle.nn.Flatten()

        self.linear1 = paddle.nn.Linear(in_features=1024, out_features=64)
        self.linear2 = paddle.nn.Linear(in_features=64, out_features=num_classes)

    def forward(self, x):
        x = self.conv1(x)
        x = F.relu(x)
        # x = self.pool1(x)
        # print(x.shape)
        x = self.conv2(x)
        x = F.relu(x)
        # x = self.pool2(x)
        # print(x.shape)

        x = self.conv3(x)
        x = self.acon1(x)
        # print(x.shape)
        
        # offsets = self.offsets(x)
        # masks = self.mask(x)
        # print(offsets.shape)
        # print(masks.shape)
        x = self.conv4(x)
        x = F.relu(x)
        # print(x.shape)

        x = self.flatten(x)
        x = self.linear1(x)
        x = F.relu(x)
        x = self.linear2(x)
        return x

Visualizing the network structure

cnn3 = dcn2()

model3 = paddle.Model(cnn3)

model3.summary((64, 3, 32, 32))
---------------------------------------------------------------------------
 Layer (type)       Input Shape          Output Shape         Param #    
===========================================================================
   Conv2D-1      [[64, 3, 32, 32]]     [64, 32, 32, 32]         896      
   Conv2D-2      [[64, 32, 32, 32]]    [64, 64, 15, 15]       18,496     
   Conv2D-3      [[64, 64, 15, 15]]     [64, 64, 7, 7]        36,928     
    AconC-1       [[64, 64, 7, 7]]      [64, 64, 7, 7]          192      
   Conv2D-4       [[64, 64, 7, 7]]      [64, 64, 4, 4]        36,928     
   Flatten-1      [[64, 64, 4, 4]]        [64, 1024]             0       
   Linear-1         [[64, 1024]]           [64, 64]           65,600     
   Linear-2          [[64, 64]]            [64, 1]              65       
===========================================================================
Total params: 159,105
Trainable params: 159,105
Non-trainable params: 0
---------------------------------------------------------------------------
Input size (MB): 0.75
Forward/backward pass size (MB): 27.13
Params size (MB): 0.61
Estimated Total Size (MB): 28.48
---------------------------------------------------------------------------

{'total_params': 159105, 'trainable_params': 159105}
class dcn3(paddle.nn.Layer):
    def __init__(self, num_classes=1):
        super(dcn3, self).__init__()

        self.conv1 = paddle.nn.Conv2D(in_channels=3, out_channels=32, kernel_size=(3, 3), stride=1, padding = 1)
        # self.pool1 = paddle.nn.MaxPool2D(kernel_size=2, stride=2)

        self.conv2 = paddle.nn.Conv2D(in_channels=32, out_channels=64, kernel_size=(3,3),  stride=2, padding = 0)
        # self.pool2 = paddle.nn.MaxPool2D(kernel_size=2, stride=2)

        self.conv3 = paddle.nn.Conv2D(in_channels=64, out_channels=64, kernel_size=(3,3), stride=2, padding = 0)

        self.conv4 = paddle.nn.Conv2D(in_channels=64, out_channels=64, kernel_size=(3,3), stride=2, padding = 1)

        self.flatten = paddle.nn.Flatten()

        self.linear1 = paddle.nn.Linear(in_features=1024, out_features=64)
        self.linear2 = paddle.nn.Linear(in_features=64, out_features=num_classes)

    def forward(self, x):
        x = self.conv1(x)
        x = F.relu(x)

        x = self.conv2(x)
        x = F.relu(x)
        # print(x.shape)

        x = self.conv3(x)
        x = F.relu(x)
        # print(x.shape)
        
        x = self.conv4(x)
        x = F.relu(x)
        # print(x.shape)

        x = self.flatten(x)
        x = self.linear1(x)
        x = F.relu(x)
        x = self.linear2(x)
        return x
cnn4 = dcn3()

model4 = paddle.Model(cnn4)

model4.summary((64, 3, 32, 32))
---------------------------------------------------------------------------
 Layer (type)       Input Shape          Output Shape         Param #    
===========================================================================
   Conv2D-1      [[64, 3, 32, 32]]     [64, 32, 32, 32]         896      
   Conv2D-2      [[64, 32, 32, 32]]    [64, 64, 15, 15]       18,496     
   Conv2D-3      [[64, 64, 15, 15]]     [64, 64, 7, 7]        36,928     
   Conv2D-4       [[64, 64, 7, 7]]      [64, 64, 4, 4]        36,928     
   Flatten-1      [[64, 64, 4, 4]]        [64, 1024]             0       
   Linear-1         [[64, 1024]]           [64, 64]           65,600     
   Linear-2          [[64, 64]]            [64, 1]              65       
===========================================================================
Total params: 158,913
Trainable params: 158,913
Non-trainable params: 0
---------------------------------------------------------------------------
Input size (MB): 0.75
Forward/backward pass size (MB): 25.59
Params size (MB): 0.61
Estimated Total Size (MB): 26.95
---------------------------------------------------------------------------

{'total_params': 158913, 'trainable_params': 158913}

Meta-ACON

As mentioned above, the ACON family of activations uses the value of β to control whether a neuron is activated (β = 0 means not activated). We therefore need to design an adaptive function that computes β for ACON. Following the channel-wise design used in the code below, β is generated by a small network that first averages the input over the spatial dimensions and then applies two 1×1 convolutions:

$$\beta_c = \sigma\!\Big(W_1\, W_2\, \frac{1}{HW}\sum_{h=1}^{H}\sum_{w=1}^{W} x_{c,h,w}\Big)$$

import paddle
from paddle import nn
import paddle.nn.functional as F

class MetaAconC(nn.Layer):
    r""" ACON activation (activate or not).
    # MetaAconC: (p1*x-p2*x) * sigmoid(beta*(p1*x-p2*x)) + p2*x, beta is generated by a small network
    # according to "Activate or Not: Learning Customized Activation" <https://arxiv.org/pdf/2009.04759.pdf>.
    """

    def __init__(self, width, r=16):
        super().__init__()
        self.fc1 = nn.Conv2D(width, max(r, width // r), kernel_size=1, stride=1)
        self.bn1 = nn.BatchNorm2D(max(r, width // r))
        self.fc2 = nn.Conv2D(max(r, width // r), width, kernel_size=1, stride=1)
        self.bn2 = nn.BatchNorm2D(width)

        self.p1 = paddle.create_parameter([1, width, 1, 1], dtype='float32', default_initializer=nn.initializer.Normal())
        self.p2 = paddle.create_parameter([1, width, 1, 1], dtype='float32', default_initializer=nn.initializer.Normal())

    def forward(self, x):
        beta = F.sigmoid(
            self.bn2(self.fc2(self.bn1(self.fc1(x.mean(axis=2, keepdim=True).mean(axis=3, keepdim=True))))))
            # self.bn2(self.fc2(self.bn1(self.fc1(x.mean().mean())))))
        return (self.p1 * x - self.p2 * x) * F.sigmoid(beta * (self.p1 * x - self.p2 * x)) + self.p2 * x
class dcn2(paddle.nn.Layer):
    def __init__(self, num_classes=1):
        super(dcn2, self).__init__()

        self.conv1 = paddle.nn.Conv2D(in_channels=3, out_channels=32, kernel_size=(3, 3), stride=1, padding = 1)
        # self.pool1 = paddle.nn.MaxPool2D(kernel_size=2, stride=2)

        self.conv2 = paddle.nn.Conv2D(in_channels=32, out_channels=64, kernel_size=(3,3),  stride=2, padding = 0)
        # self.pool2 = paddle.nn.MaxPool2D(kernel_size=2, stride=2)

        self.conv3 = paddle.nn.Conv2D(in_channels=64, out_channels=64, kernel_size=(3,3), stride=2, padding = 0)

        self.acon1 = MetaAconC(64)
      

        self.conv4 = paddle.nn.Conv2D(in_channels=64, out_channels=64, kernel_size=(3,3), stride=2, padding = 1)

        self.flatten = paddle.nn.Flatten()

        self.linear1 = paddle.nn.Linear(in_features=1024, out_features=64)
        self.linear2 = paddle.nn.Linear(in_features=64, out_features=num_classes)

    def forward(self, x):
        x = self.conv1(x)
        x = F.relu(x)
        # x = self.pool1(x)
        # print(x.shape)
        x = self.conv2(x)
        x = F.relu(x)
        # x = self.pool2(x)
        # print(x.shape)

        x = self.conv3(x)
        x = self.acon1(x)
        # print(x.shape)
        
        
        x = self.conv4(x)
        x = F.relu(x)
        # print(x.shape)

        x = self.flatten(x)
        x = self.linear1(x)
        x = F.relu(x)
        x = self.linear2(x)
        return x
cnn3 = dcn2()

model3 = paddle.Model(cnn3)

model3.summary((64, 3, 32, 32))

Summary

This tutorial focuses on the concrete reproduction; the effectiveness of these activation functions on the various computer-vision tasks has not been validated here, so please evaluate them against your own needs. Comparing the parameter counts of the two networks above, using ACON-C increases the number of parameters slightly (by 192 in this example, i.e. 3 per channel for p1, p2 and beta). The tutorial has shown the reproduction code for ACON-C and Meta-ACON and how to apply them inside a network structure; they can be used as drop-in replacements.
