Python Implementations of Newton's Method and the Steepest Descent Method

1 Newton's Method

1.1 Python Program for Newton's Method

from sympy import symbols, diff
import numpy as np

# Newton's method for a function of two variables.
# x_init is the starting point (x1, x2); step is the number of iterations.
def newton_dou(step, x_init, obj):
    x_new = x_init
    for i in range(step):
        point = {x1: x_new[0], x2: x_new[1]}
        # gradient evaluated at the current point
        gradient_obj = np.array([diff(obj, x1).subs(point),
                                 diff(obj, x2).subs(point)], dtype=float)
        # Hessian matrix evaluated at the current point
        hessian_obj = np.array([[diff(obj, x1, 2).subs(point), diff(obj, x1, x2).subs(point)],
                                [diff(obj, x2, x1).subs(point), diff(obj, x2, 2).subs(point)]],
                               dtype=float)
        inverse = np.linalg.inv(hessian_obj)  # invert the Hessian
        x_new = x_new - np.matmul(inverse, gradient_obj)  # Newton update
        print(x_new)
    return x_new

x0 = np.array([0, 0], dtype=float)
x1 = symbols("x1")
x2 = symbols("x2")
newton_dou(5, x0, x1**2 + 2*x2**2 - 2*x1*x2 - 2*x2)

1.2 Analysis of the Newton's Method Results

The program produces the following output:

[1. 1.]
[1. 1.]
[1. 1.]
[1. 1.]
[1. 1.]
Process finished with exit code 0

Direct calculation shows that the minimizer of f(x) = x1^2 + 2*x2^2 - 2*x1*x2 - 2*x2 is (1, 1). Because f is quadratic, Newton's method converges to the minimizer in a single iteration, and all later iterates remain at (1, 1).
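As a sanity check, the single-step convergence can be reproduced with plain NumPy, using the gradient and constant Hessian of this particular f derived by hand (the function name `grad` and the hard-coded matrix `H` below are our own, not part of the program above):

```python
import numpy as np

# Hand-derived derivatives of f(x) = x1^2 + 2*x2^2 - 2*x1*x2 - 2*x2:
# gradient g(x) = [2*x1 - 2*x2, 4*x2 - 2*x1 - 2]; the Hessian H is constant.
def grad(x):
    return np.array([2.0 * x[0] - 2.0 * x[1], 4.0 * x[1] - 2.0 * x[0] - 2.0])

H = np.array([[2.0, -2.0], [-2.0, 4.0]])

x0 = np.zeros(2)
x_new = x0 - np.linalg.solve(H, grad(x0))  # one Newton step from (0, 0)
print(x_new)  # → [1. 1.]
```

For a quadratic function the second-order Taylor model is exact, which is why one Newton step lands on the minimizer from any starting point.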

2 The Steepest Descent Method

2.1 Python Program for the Steepest Descent Method

# !/usr/bin/python
# -*- coding:utf-8 -*-
from sympy import symbols, diff, solve, sqrt
import numpy as np
import matplotlib.pyplot as plt

def sinvar(fun):
    return solve(diff(fun))  # stationary points of a one-variable expression

def value_enter(fun, x, i):
    # evaluate the i-th component of the symbolic vector fun at the point x
    return fun[i].subs(x1, x[0]).subs(x2, x[1])

def gradient_l2(grad, x_now):
    # Euclidean norm of the gradient at x_now
    return sqrt(pow(value_enter(grad, x_now, 0), 2) + pow(value_enter(grad, x_now, 1), 2))

def msd(x_init, obj):
    t = symbols('t')
    gradient_obj = np.array([diff(obj, x1), diff(obj, x2)])  # symbolic gradient
    x_new = x_init
    error = gradient_l2(gradient_obj, x_new)
    plt.plot(x_new[0], x_new[1], 'ro')
    while error > 0.001:
        # gradient evaluated at the current point
        gradient_obj_x = np.array([value_enter(gradient_obj, x_new, 0),
                                   value_enter(gradient_obj, x_new, 1)])
        # exact line search: minimize obj along the negative gradient direction
        x_eta = x_new - t * gradient_obj_x
        eta = sinvar(obj.subs(x1, x_eta[0]).subs(x2, x_eta[1]))[0]
        x_new = x_new - eta * gradient_obj_x  # descent update
        print(x_new)
        error = gradient_l2(gradient_obj, x_new)
        plt.plot(x_new[0], x_new[1], 'ro')
    plt.show()
    return x_new

x_0 = np.array([0, 0], dtype=float)
x1 = symbols("x1")
x2 = symbols("x2")
result = msd(x_0, x1**2 + 2*x2**2 - 2*x1*x2 - 2*x2)
print(result)

2.2 Analysis of the Steepest Descent Results

The program produces the following output:

[0 0.500000000000000]
[0.500000000000000 0.500000000000000]
[0.500000000000000 0.750000000000000]
[0.750000000000000 0.750000000000000]
[0.750000000000000 0.875000000000000]
[0.875000000000000 0.875000000000000]
[0.875000000000000 0.937500000000000]
[0.937500000000000 0.937500000000000]
[0.937500000000000 0.968750000000000]
[0.968750000000000 0.968750000000000]
[0.968750000000000 0.984375000000000]
[0.984375000000000 0.984375000000000]
[0.984375000000000 0.992187500000000]
[0.992187500000000 0.992187500000000]
[0.992187500000000 0.996093750000000]
[0.996093750000000 0.996093750000000]
[0.996093750000000 0.998046875000000]
[0.998046875000000 0.998046875000000]
[0.998046875000000 0.999023437500000]
[0.999023437500000 0.999023437500000]
[0.999023437500000 0.999511718750000]
[0.999023437500000 0.999511718750000]

Process finished with exit code 0

The stopping criterion chosen for the steepest descent method is the gradient norm: iteration halts once it drops below 0.001. Under this criterion the iterates do not fully converge to (1, 1); the points visited are plotted in Figure 1. Tightening the tolerance increases the iteration count and run time, and the iterates approach (1, 1) ever more closely.
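The trade-off between tolerance and iteration count can be illustrated with a NumPy-only version of the same method. For a quadratic f(x) = ½xᵀHx − bᵀx, the exact line-search step has the closed form t = gᵀg / (gᵀHg), so no symbolic solve is needed. The H and b below are hand-derived so that f expands to x1² + 2x2² − 2x1x2 − 2x2, and the helper name `steepest_descent` is our own, not part of the program above:

```python
import numpy as np

# f(x) = 0.5 * x^T H x - b^T x with these H and b expands to
# x1^2 + 2*x2^2 - 2*x1*x2 - 2*x2 (derived by hand).
H = np.array([[2.0, -2.0], [-2.0, 4.0]])
b = np.array([0.0, 2.0])

def steepest_descent(x, tol):
    """Exact-line-search steepest descent until the gradient norm <= tol."""
    g = H @ x - b                      # gradient of the quadratic
    steps = 0
    while np.linalg.norm(g) > tol:
        t = (g @ g) / (g @ H @ g)      # closed-form exact step length
        x = x - t * g
        g = H @ x - b
        steps += 1
    return x, steps

for tol in (1e-3, 1e-6, 1e-9):
    x_star, n = steepest_descent(np.zeros(2), tol)
    print(f"tol={tol:g}: {n} iterations, x = {x_star}")
```

Because the convergence is linear, the iteration count grows only with log(1/tol): each extra digit of accuracy costs a roughly fixed number of additional iterations.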

Figure 1  Iterates of the steepest descent method
