
Introduction to Deep Learning 05 --- Different Types of Gradient Methods


1. Types of gradient methods

Mini-batch gradient descent (MBGD), batch gradient descent (BGD), stochastic gradient descent (SGD), vanilla gradient descent, Momentum (SGD-M), AdaGrad, Adam, Nesterov, AdaDelta, Nadam, and so on...

2. Drawbacks of SGD

SGD updates the weights W by taking a step against the gradient of the loss L (η is the learning rate):

W ← W − η · ∂L/∂W
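
Written as code, and using the same update(params, grads) interface as the optimizer classes later in this post, SGD is just the following minimal class (a sketch; the comparison in section 6 assumes an SGD class with exactly this interface):

class SGD:
    """Plain gradient descent: W <- W - lr * dL/dW."""

    def __init__(self, lr=0.01):
        self.lr = lr

    def update(self, params, grads):
        # params and grads are dicts of NumPy arrays sharing the same keys
        for key in params.keys():
            params[key] -= self.lr * grads[key]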

Let's look at its drawback through an example: find the minimum of the following function:

f(x, y) = x²/20 + y²

import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import numpy as np

def fun(x, y):
    return np.power(x, 2) / 20 + np.power(y, 2)

fig1 = plt.figure()  # create the figure window
ax = fig1.add_subplot(projection='3d')  # 3D axes (Axes3D(fig1) no longer auto-attaches in recent matplotlib)
X = np.arange(-10, 10, 0.1)
Y = np.arange(-10, 10, 0.1)
X, Y = np.meshgrid(X, Y)
Z = fun(X, Y)
ax.plot_surface(X, Y, Z, rstride=1, cstride=1, cmap=plt.cm.cool)
ax.contourf(X, Y, Z, zdir='z', offset=-2, cmap=plt.cm.coolwarm)
ax.set_xlabel('x label', color='r')
ax.set_ylabel('y label', color='g')
ax.set_zlabel('z label', color='b')
plt.show()

[Figure: 3D surface plot of f(x, y) = x²/20 + y²]

Let's look at the gradient of this function:

X = np.arange(-10, 10, 1)
Y = np.arange(-10, 10, 2)
U, V = np.meshgrid(-X/10, -2*Y)  # negative gradient components: -df/dx = -x/10, -df/dy = -2y
fig, ax = plt.subplots()
q = ax.quiver(X, Y, U, V)
ax.quiverkey(q, X=0.3, Y=1.1, U=10, label='Quiver key')
plt.show()

[Figure: quiver plot of the negative gradient field]

X = np.arange(-10, 10, 0.1)
Y = np.arange(-3, 3, 0.1)
X, Y = np.meshgrid(X, Y)
plt.contourf(X, Y, fun(X, Y), cmap=plt.cm.coolwarm)
plt.show()

[Figure: contour plot of f(x, y)]

[Note]: As we can see, the gradient of this function is large along the y axis and small along the x axis (the slope is steep in y, gentle in x). Although the minimum lies at (0, 0), at many points the slope does not point toward (0, 0). Next, let's simulate the mini-batch gradient descent (MBGD) we used before (on this single toy function it reduces to plain gradient descent, labeled SGD in the code) and see what its search path looks like:

X_SGD = [-7.0]
Y_SGD = [2.0]
x = - 7.0
y = 2.0
N = 40
lr = 0.9
for i in range(N):
    x -= x / 10 * lr   # df/dx = x/10
    y -= 2 * y * lr    # df/dy = 2y
    X_SGD.append(x)
    Y_SGD.append(y)

X = np.arange(-10, 10, 0.01)
Y = np.arange(-3, 3, 0.01)
X, Y = np.meshgrid(X, Y)
plt.plot(X_SGD, Y_SGD, color='red', marker='o', linestyle='solid')
plt.contourf(X, Y, fun(X, Y), cmap=plt.cm.coolwarm)
plt.show()

[Figure: SGD search path on the contour plot, zigzagging toward (0, 0)]

[Note]: As the figure shows, SGD moves in a zigzag, which is a very inefficient path. In other words, when a function is anisotropic (for example, stretched out in one direction), the search path becomes very inefficient. The root cause of SGD's inefficiency is that the gradient direction does not point toward the minimum.
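
To make "the gradient does not point toward the minimum" concrete, we can compute the angle between the direction SGD actually moves in (the negative gradient) and the direction toward (0, 0) at the start point (-7, 2). This is a small sketch using only NumPy; the start point matches the simulation above:

import numpy as np

x, y = -7.0, 2.0                                   # start point of the simulation above
neg_grad = np.array([-x / 10, -2 * y])             # direction SGD moves in: -grad f = (-x/10, -2y)
to_min = np.array([0.0, 0.0]) - np.array([x, y])   # direction from (x, y) toward the minimum (0, 0)

cos_angle = neg_grad @ to_min / (np.linalg.norm(neg_grad) * np.linalg.norm(to_min))
print(np.degrees(np.arccos(cos_angle)))            # about 64 degrees: far from aiming straight at (0, 0)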

3. Momentum (SGD-M)

Like momentum in physics, think of a ball rolling down a slope:

Momentum keeps a velocity v in addition to the weights W (α is the momentum coefficient, η the learning rate):

v ← α·v − η · ∂L/∂W
W ← W + v

import matplotlib.pyplot as plt
import numpy as np

def df(x, y):
    return x / 10.0, 2.0 * y

def fun(x, y):
    return np.power(x, 2) / 20 + np.power(y, 2)

class Momentum:

    def __init__(self, lr=0.01, momentum=0.9):
        self.lr = lr
        self.momentum = momentum
        self.v = None

    def update(self, params, grads):
        if self.v is None:
            self.v = {}
            for key, val in params.items():
                self.v[key] = np.zeros_like(val)

        for key in params.keys():
            # v <- momentum * v - lr * grad, then W <- W + v
            self.v[key] = self.momentum * self.v[key] - self.lr * grads[key]
            params[key] += self.v[key]


# Create a Momentum instance
SGD_M = Momentum(lr=0.1)

init_pos = (-7.0, 2.0)
X_Momentum = [init_pos[0]]
Y_Momentum = [init_pos[1]]
N = 24

params = {}
params['x'], params['y'] = init_pos[0], init_pos[1]
grads = {}
grads['x'], grads['y'] = 0, 0

for i in range(N):
    grads['x'], grads['y'] = df(params['x'], params['y'])
    SGD_M.update(params, grads)

    X_Momentum.append(params['x'])
    Y_Momentum.append(params['y'])

# Plot the path
X = np.arange(-10, 10, 0.01)
Y = np.arange(-3, 3, 0.01)
X, Y = np.meshgrid(X, Y)
plt.plot(X_Momentum, Y_Momentum, color='red', marker='o', linestyle='solid')
plt.contourf(X, Y, fun(X, Y), cmap=plt.cm.coolwarm)
plt.show()

[Figure: Momentum search path on the contour plot]

[Note]: The SGD-M update path looks like a ball rolling around inside a bowl. Compared with SGD, the "degree" of zigzagging is reduced. The reason is that, although the force along the x axis is very small, it always acts in the same direction, so the ball keeps accelerating that way. Along the y axis, on the other hand, the force is large but alternates between the positive and negative directions, so the contributions cancel each other out and the y velocity stays unstable and never builds up. As a result, the path approaches the x axis faster and the zigzag motion is weakened.
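
The "same-direction force accumulates, alternating force cancels" argument can be checked by printing the velocity v of a Momentum instance over a few steps. This is a small sketch that reuses the Momentum class and df defined above:

opt = Momentum(lr=0.1)
p = {'x': -7.0, 'y': 2.0}
g = {'x': 0.0, 'y': 0.0}
for i in range(10):
    g['x'], g['y'] = df(p['x'], p['y'])
    opt.update(p, g)
    # v['x'] keeps its sign and grows steadily (the small x-force accumulates);
    # v['y'] first grows, then shrinks and flips sign as the y-force alternates
    print(i, round(opt.v['x'], 4), round(opt.v['y'], 4))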

4. AdaGrad

In neural-network training, the value of the learning rate matters a great deal. If it is too small, training takes far too long; if it is too large, training diverges and fails. One effective trick concerning the learning rate is "learning rate decay": as training proceeds, the learning rate is gradually reduced (learn a lot at first, then less and less). AdaGrad develops this idea further: it gives each individual parameter its own "custom-tailored" learning rate (the "Ada" in AdaGrad stands for Adaptive), so that parameters whose elements have already moved a lot get a smaller learning rate.
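
Before looking at AdaGrad's rule itself, here is what plain learning-rate decay looks like on the same toy function. This is a minimal sketch assuming a simple 1/(1 + decay·t) schedule; the decay_rate value is only an illustration:

lr0 = 1.0          # initial learning rate
decay_rate = 0.1   # hypothetical decay strength, just for illustration

x, y = -7.0, 2.0
for t in range(1, 31):
    lr = lr0 / (1.0 + decay_rate * t)   # the learning rate shrinks as training proceeds
    x -= lr * (x / 10)                  # same gradients as before: df/dx = x/10
    y -= lr * (2 * y)                   # df/dy = 2y
print(x, y)
# Note: the same lr is shared by every parameter; AdaGrad instead adapts it per parameter.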

AdaGrad keeps a sum h of squared gradients and divides the learning rate by its square root, element-wise:

h ← h + (∂L/∂W) ⊙ (∂L/∂W)
W ← W − η · (1/√h) · ∂L/∂W

import matplotlib.pyplot as plt
import numpy as np

def df(x, y):
    return x / 10.0, 2.0 * y

def fun(x, y):
    return np.power(x, 2) / 20 + np.power(y, 2)

class AdaGrad:

    def __init__(self, lr=0.01):
        self.lr = lr
        self.h = None

    def update(self, params, grads):
        if self.h is None:
            self.h = {}
            for key, val in params.items():
                self.h[key] = np.zeros_like(val)

        for key in params.keys():
            self.h[key] += grads[key] * grads[key]  # accumulate squared gradients
            params[key] -= self.lr * grads[key] / (np.sqrt(self.h[key]) + 1e-7)  # 1e-7 avoids division by zero


# Create an AdaGrad instance
adagrad = AdaGrad(lr=1.5)

init_pos = (-7.0, 2.0)
X_adagrad = [init_pos[0]]
Y_adagrad = [init_pos[1]]
N = 20

params = {}
params['x'], params['y'] = init_pos[0], init_pos[1]
grads = {}
grads['x'], grads['y'] = 0, 0

for i in range(N):
    grads['x'], grads['y'] = df(params['x'], params['y'])
    adagrad.update(params, grads)

    X_adagrad.append(params['x'])
    Y_adagrad.append(params['y'])

# Plot the path
X = np.arange(-10, 10, 0.01)
Y = np.arange(-3, 3, 0.01)
X, Y = np.meshgrid(X, Y)
plt.plot(X_adagrad, Y_adagrad, color='red', marker='o', linestyle='solid')
plt.contourf(X, Y, fun(X, Y), cmap=plt.cm.coolwarm)
plt.show()

[Figure: AdaGrad search path on the contour plot]

[Note]: As the figure shows, the function value moves toward the minimum efficiently. Since the gradient is large along the y axis, the first steps are large, but the update is then scaled down in proportion to that large accumulated movement, so the step size shrinks and the search becomes much more efficient.
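
The "large first step, then quickly damped" behaviour can be verified directly from the Y_adagrad list recorded above by printing the step length along y (a small sketch reusing the variables from this section):

# Step length along y: about 1.5 on the first step, then rapidly shrinking
for i in range(5):
    print(i, round(abs(Y_adagrad[i + 1] - Y_adagrad[i]), 4))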

5. Adam

Put simply, Adam is roughly a combination of Momentum (SGD-M) and AdaGrad.
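
For reference, the standard Adam update combines a Momentum-style moving average m of the gradient with an AdaGrad-style moving average v of the squared gradient, plus bias corrections (ε is a small constant, t the iteration count):

m ← β1·m + (1 − β1)·∂L/∂W
v ← β2·v + (1 − β2)·(∂L/∂W)²
m̂ = m / (1 − β1^t),  v̂ = v / (1 − β2^t)
W ← W − η·m̂ / (√v̂ + ε)

The code below uses an equivalent rearranged form that folds the bias corrections into an effective learning rate lr_t: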

import matplotlib.pyplot as plt
import numpy as np

def df(x, y):
    return x / 10.0, 2.0 * y

def fun(x, y):
    return np.power(x, 2) / 20 + np.power(y, 2)

class Adam:

    def __init__(self, lr=0.001, beta1=0.9, beta2=0.999):
        self.lr = lr
        self.beta1 = beta1
        self.beta2 = beta2
        self.iter = 0
        self.m = None
        self.v = None

    def update(self, params, grads):
        if self.m is None:
            self.m, self.v = {}, {}
            for key, val in params.items():
                self.m[key] = np.zeros_like(val)
                self.v[key] = np.zeros_like(val)

        self.iter += 1
        # fold the bias corrections for m and v into an effective learning rate
        lr_t = self.lr * np.sqrt(1.0 - self.beta2 ** self.iter) / (1.0 - self.beta1 ** self.iter)

        for key in params.keys():
            self.m[key] += (1 - self.beta1) * (grads[key] - self.m[key])
            self.v[key] += (1 - self.beta2) * (grads[key] ** 2 - self.v[key])
            params[key] -= lr_t * self.m[key] / (np.sqrt(self.v[key]) + 1e-7)

# Create an Adam instance
adam = Adam(lr=0.3)

init_pos = (-7.0, 2.0)
X_adam = [init_pos[0]]
Y_adam = [init_pos[1]]
N = 25

params = {}
params['x'], params['y'] = init_pos[0], init_pos[1]
grads = {}
grads['x'], grads['y'] = 0, 0

for i in range(N):
    grads['x'], grads['y'] = df(params['x'], params['y'])
    adam.update(params, grads)

    X_adam.append(params['x'])
    Y_adam.append(params['y'])

# Plot the path
X = np.arange(-10, 10, 0.01)
Y = np.arange(-3, 3, 0.01)
X, Y = np.meshgrid(X, Y)
plt.plot(X_adam, Y_adam, color='red', marker='o', linestyle='solid')
plt.contourf(X, Y, fun(X, Y), cmap=plt.cm.coolwarm)
plt.show()

[Figure: Adam search path on the contour plot]

6. Comparing the four methods: MBGD, SGD-M, AdaGrad, Adam

Let's put the four methods in one figure and roughly compare their search paths. Note that the learning rate differs between methods and needs to be tuned for each one:

# Initial starting position
init_pos = (-7.0, 2.0)
params = {}
params['x'], params['y'] = init_pos[0], init_pos[1]
grads = {}
grads['x'], grads['y'] = 0, 0

# Instantiate the four optimizers (SGD is the plain gradient-descent class from section 2;
# the other classes are defined above, all sharing the same update(params, grads) interface)
from collections import OrderedDict

optimizers = OrderedDict()
optimizers["SGD"] = SGD(lr=0.95)
optimizers["Momentum"] = Momentum(lr=0.1)
optimizers["AdaGrad"] = AdaGrad(lr=1.5)
optimizers["Adam"] = Adam(lr=0.3)

# Subplot index (which of the four panels we are drawing)
idx = 1

# Run each method and record its path
for key in optimizers:
    optimizer = optimizers[key]
    x_history = []
    y_history = []
    params['x'], params['y'] = init_pos[0], init_pos[1]

    for i in range(30):
        x_history.append(params['x'])
        y_history.append(params['y'])

        grads['x'], grads['y'] = df(params['x'], params['y'])
        optimizer.update(params, grads)

    x = np.arange(-10, 10, 0.01)
    y = np.arange(-3, 3, 0.01)
    X, Y = np.meshgrid(x, y)
    Z = fun(X, Y)

    # Clip large values so the contours near the minimum remain visible
    mask = Z > 7
    Z[mask] = 0

    plt.subplot(2, 2, idx)
    idx += 1
    plt.plot(x_history, y_history, marker='o', color="red", linestyle='solid')
    plt.contourf(X, Y, Z, cmap=plt.cm.coolwarm)
    plt.plot(0, 0, '+')
    plt.title(key)

plt.show()

[Figure: 2x2 grid of the search paths of SGD, Momentum, AdaGrad, and Adam]

[Note]: At present there is no method that performs best on every problem; each has problems it handles well and problems it handles poorly.

[Analysis]: Next, let's compare how the number of training iterations affects the four methods. Example: a 5-layer neural network with 100 neurons per layer and ReLU activations:

import matplotlib.pyplot as plt
import numpy as np
from Deep_Learning_From_Scratch.dataset.mnist import load_mnist
from Deep_Learning_From_Scratch.common.util import smooth_curve
from Deep_Learning_From_Scratch.common.multi_layer_net import MultiLayerNet
from Deep_Learning_From_Scratch.common.optimizer import *

# ======================== 0: Load the MNIST data ===============================
(x_train, t_train), (x_test, t_test) = load_mnist(normalize=True)

train_size = x_train.shape[0]
batch_size = 128
max_iterations = 2000

# ======================== 1: Experiment setup ===============================
optimizers = {}
optimizers['SGD'] = SGD()
optimizers['Momentum'] = Momentum()
optimizers['AdaGrad'] = AdaGrad()
optimizers['Adam'] = Adam()

networks = {}
train_loss = {}
for key in optimizers.keys():
    networks[key] = MultiLayerNet(
        input_size=784, hidden_size_list=[100, 100, 100, 100],
        output_size=10)
    train_loss[key] = []    

# =========================== 2: Training ==================================
for i in range(max_iterations):
    batch_mask = np.random.choice(train_size, batch_size)
    x_batch = x_train[batch_mask]
    t_batch = t_train[batch_mask]
    
    for key in optimizers.keys():
        grads = networks[key].gradient(x_batch, t_batch)
        optimizers[key].update(networks[key].params, grads)
    
        loss = networks[key].loss(x_batch, t_batch)
        train_loss[key].append(loss)
    
    if i % 100 == 0:
        print( "===========" + "iteration:" + str(i) + "===========")
        for key in optimizers.keys():
            loss = networks[key].loss(x_batch, t_batch)
            print(key + ":" + str(loss))

# =========================== 3: Plot the results ==================================
markers = {"SGD": "o", "Momentum": "x", "AdaGrad": "s", "Adam": "D"}
x = np.arange(max_iterations)
for key in optimizers.keys():
    plt.plot(x, smooth_curve(train_loss[key]), marker=markers[key], markevery=100, label=key)
plt.xlabel("iterations")
plt.ylabel("loss")
plt.ylim(0, 1)
plt.legend()
plt.show()

[Figure: training loss curves of SGD, Momentum, AdaGrad, and Adam on MNIST]

7. Problems with the common methods

[Figure: summary of problems with the common methods]
