Support Vector Machine Series (1): SVM in the Linearly Separable Case

Linear Support Vector Machines in the Linearly Separable Case

Problem Description

Assume we have a learning set of data, \mathscr{L}= \{(\mathbf{x}_i,y_i):i=1,2,\cdots,n\}, where \mathbf{x}_i\in \mathbb{R}^r and y_i\in\{1,-1\}. The binary classification problem is to use \mathscr{L} to construct a function f: \mathbb{R}^r \rightarrow \mathbb{R} so that
\begin{align*} C(\mathbf{x})&= \mathrm{sign}(f(\mathbf{x}))\\ &=\mathrm{sign}(\beta_0 + \mathbf{x}^{\top}{\boldsymbol\beta}) \end{align*} is a classifier.

If \mathscr{L} is linearly separable, then the optimization problem is given by
\begin{align*} \min\limits_{\beta_0,{\boldsymbol\beta}} \ &\dfrac{1}{2}\|{\boldsymbol\beta}\|^2\\ \qquad \qquad s.t. \quad& y_i (\beta_0 + \mathbf{x}_i^{\top} {\boldsymbol\beta})\geq 1 ,\qquad i=1,2,\cdots,n \end{align*}
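As a concrete check, this primal problem is a small constrained quadratic program and can be solved numerically. The sketch below uses SciPy's SLSQP solver on a hypothetical toy dataset in \mathbb{R}^2; the data and all variable names are illustrative, not part of the derivation, and a dedicated QP solver would be the usual choice in practice.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical linearly separable toy data in R^2.
X = np.array([[2.0, 2.0], [3.0, 3.0], [2.0, 3.0],
              [0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
y = np.array([1.0, 1.0, 1.0, -1.0, -1.0, -1.0])

# Decision variables w = (beta_0, beta_1, beta_2).
def objective(w):
    return 0.5 * np.dot(w[1:], w[1:])       # (1/2) * ||beta||^2

def margin_constraints(w):
    # y_i (beta_0 + x_i^T beta) - 1 >= 0 for every i
    return y * (w[0] + X @ w[1:]) - 1

res = minimize(objective, x0=np.zeros(3), method="SLSQP",
               constraints=[{"type": "ineq", "fun": margin_constraints}])
beta_0, beta = res.x[0], res.x[1:]
```

For this toy configuration the optimum can be worked out by hand: the active points are (2,2), (1,0), (0,1), giving {\boldsymbol\beta}^*=(2/3,2/3)^{\top} and \beta_0^*=-5/3, which the solver should reproduce.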

Primal Problem via Lagrange Multipliers

By introducing Lagrange multipliers, the primal function is given by
\begin{align*} F_P(\beta_0,{\boldsymbol\beta},{\boldsymbol\alpha})&= \frac{1}{2}\|{\boldsymbol\beta}\|^2 + \sum_{i=1}^{n}\alpha_i [1-y_i(\beta_0 + \mathbf{x}_i^{\top} {\boldsymbol\beta})] \end{align*}
where {\boldsymbol\alpha}= \begin{pmatrix} \alpha_1&\cdots&\alpha_n \end{pmatrix}^{\top}\succeq \mathbf{0} is the vector of Lagrange multipliers. So the primal problem is equivalent to
\begin{align*} \min\limits_{\beta_0,{\boldsymbol\beta}}\max\limits_{{\boldsymbol \alpha}}\ & F_P(\beta_0,{\boldsymbol\beta},{\boldsymbol\alpha}) \\ s.t. \quad& {\boldsymbol\alpha} \succeq \mathbf{0} \end{align*}

Since this is a convex optimization problem, the Karush–Kuhn–Tucker (KKT) conditions give necessary and sufficient conditions for a solution:
\begin{align*} \dfrac{\partial F_P(\beta_0, {\boldsymbol\beta},{\boldsymbol\alpha})}{\partial\beta_0} & =- \sum_{i=1}^{n}\alpha_i y_i=0\\ \dfrac{\partial F_P(\beta_0, {\boldsymbol\beta},{\boldsymbol\alpha})}{\partial{\boldsymbol\beta}} & ={\boldsymbol\beta}- \sum_{i=1}^{n}\alpha_i y_i \mathbf{x}_i =\mathbf{0}\\ 1-y_i(\beta_0 + \mathbf{x}_i^{\top} {\boldsymbol\beta})&\leq 0\\ \alpha_i&\geq 0\\ \alpha_i [1-y_i(\beta_0 + \mathbf{x}_i^{\top} {\boldsymbol\beta})]&=0 \end{align*}
where the last three conditions hold for every i=1,2,\cdots,n.
The first two KKT conditions yield
\begin{align*} \sum_{i=1}^{n}\alpha_iy_i&=0\\ {\boldsymbol\beta} &= \sum_{i=1}^{n}\alpha_iy_i \mathbf{x}_i \end{align*}
and \beta_0 is implicitly determined by the KKT complementarity condition: choose any i for which \alpha_i \neq 0 and solve for \beta_0 (it is numerically safer to average the values of \beta_0 obtained from all such equations).

Applying the KKT conditions to the primal function simplifies it into the dual function
\begin{align*} F_D({\boldsymbol\alpha})&= \min\limits_{\beta_0,{\boldsymbol\beta}}F_P(\beta_0,{\boldsymbol\beta},{\boldsymbol\alpha}) \\ &= \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n}\alpha_i\alpha_jy_iy_j \mathbf{x}_i^{\top}\mathbf{x}_j - \sum_{i=1}^{n} \sum_{j=1}^{n}\alpha_i\alpha_jy_iy_j \mathbf{x}_i^{\top}\mathbf{x}_j+ \sum_{i=1}^{n}\alpha_i\\ &= -\frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n}\alpha_i\alpha_jy_iy_j \mathbf{x}_i^{\top}\mathbf{x}_j + \sum_{i=1}^{n}\alpha_i\\ &= -\frac{1}{2}{\boldsymbol\alpha}^{\top}\mathbf{H}{\boldsymbol\alpha}+ \mathbf{1}^{\top}_n {\boldsymbol\alpha} \end{align*}
where \mathbf{H}= \big( \langle y_i \mathbf{x}_i, y_j \mathbf{x}_j\rangle \big)_{n\times n} is the matrix with (i,j) entry y_iy_j\,\mathbf{x}_i^{\top}\mathbf{x}_j.
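The matrix \mathbf{H} can be assembled in one line once the rows y_i\mathbf{x}_i are stacked; a minimal NumPy sketch on hypothetical toy data:

```python
import numpy as np

# Hypothetical toy data: three positive and three negative points in R^2.
X = np.array([[2.0, 2.0], [3.0, 3.0], [2.0, 3.0],
              [0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
y = np.array([1.0, 1.0, 1.0, -1.0, -1.0, -1.0])

Yx = y[:, None] * X   # row i is y_i * x_i
H = Yx @ Yx.T         # H[i, j] = y_i * y_j * <x_i, x_j>
```

Note that \mathbf{H} is symmetric and positive semidefinite, which is what makes the dual a concave maximization problem.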

Dual Problem

When the KKT conditions are satisfied, the primal problem is equivalent to the dual problem
\begin{align*} \max\limits_{{\boldsymbol \alpha}}\min\limits_{\beta_0,{\boldsymbol\beta}}\ &F_P(\beta_0,{\boldsymbol\beta},{\boldsymbol\alpha}) \\ s.t. \quad& {\boldsymbol\alpha} \succeq \mathbf{0} \end{align*}
i.e.
\begin{align*} \max\limits_{{\boldsymbol \alpha}}\ &F_D({\boldsymbol\alpha}) \\ s.t. \quad& {\boldsymbol\alpha} \succeq \mathbf{0}\\ &{\boldsymbol\alpha}^{\top}\mathbf{y}=0 \\ \end{align*}
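The dual can likewise be solved numerically by minimizing -F_D({\boldsymbol\alpha}) subject to {\boldsymbol\alpha}\succeq\mathbf{0} and {\boldsymbol\alpha}^{\top}\mathbf{y}=0. A sketch with SciPy's SLSQP solver on hypothetical toy data (in practice an SVM-specific solver such as SMO would be used):

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical linearly separable toy data in R^2.
X = np.array([[2.0, 2.0], [3.0, 3.0], [2.0, 3.0],
              [0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
y = np.array([1.0, 1.0, 1.0, -1.0, -1.0, -1.0])
H = (y[:, None] * X) @ (y[:, None] * X).T   # H_ij = y_i y_j x_i^T x_j

def neg_dual(a):
    # -F_D(alpha) = (1/2) alpha^T H alpha - 1^T alpha
    return 0.5 * a @ H @ a - a.sum()

res = minimize(neg_dual, x0=np.zeros(len(y)), method="SLSQP",
               bounds=[(0, None)] * len(y),                      # alpha >= 0
               constraints=[{"type": "eq", "fun": lambda a: a @ y}])  # alpha^T y = 0
alpha = res.x
beta = (alpha * y) @ X        # beta* = sum_i alpha_i* y_i x_i
```

For this configuration the hand-computed optimum is \alpha^*=(4/9,0,0,0,2/9,2/9) with {\boldsymbol\beta}^*=(2/3,2/3)^{\top}, so only three multipliers are nonzero.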

If {\boldsymbol\alpha}^* solves this optimization problem, then
{\boldsymbol\beta}^*= \sum_{i=1}^{n} \alpha_i^* y_i \mathbf{x}_i
By the KKT complementarity condition,
{\boldsymbol\beta}^*= \sum_{i\in SV} \alpha_i^* y_i \mathbf{x}_i
where SV\subseteq\{1,2,\cdots,n\} is the index set of the support vectors, i.e., those i with \alpha_i^*>0.

Then \beta_0^*= \dfrac{1}{|SV|} \sum_{i\in SV} \dfrac{1-y_i \mathbf{x}_i^{\top}{\boldsymbol\beta}^*}{y_i} = \dfrac{1}{|SV|} \sum_{i\in SV} \left(y_i - \mathbf{x}_i^{\top}{\boldsymbol\beta}^*\right), since 1/y_i = y_i for y_i\in\{-1,1\}.
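A sketch of this averaging step, assuming the support vectors and {\boldsymbol\beta}^* have already been obtained from the dual (the numeric values here are hypothetical):

```python
import numpy as np

# Hypothetical support vectors and the beta* assumed to come from the dual.
X_sv = np.array([[2.0, 2.0], [1.0, 0.0], [0.0, 1.0]])
y_sv = np.array([1.0, -1.0, -1.0])
beta = np.array([2/3, 2/3])            # assumed optimal beta*

# (1 - y_i x_i^T beta)/y_i = y_i - x_i^T beta, since y_i in {-1, +1}
b0_each = y_sv - X_sv @ beta           # one estimate of beta_0 per support vector
beta_0 = b0_each.mean()                # average for numerical stability
```

In exact arithmetic every support vector yields the same \beta_0^*; averaging simply guards against round-off in a numerical solution.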

Since
\begin{align*} \|{\boldsymbol\beta}^*\|^2&= \sum_{i=1}^{n}\alpha_i^*y_i \mathbf{x}_i^{\top} {\boldsymbol\beta}^* \\ &=\sum_{i=1}^{n}\alpha_i^*(1-y_i\beta_0^*)\\ &=\sum_{i=1}^{n}\alpha_i^* \end{align*}
where the second equality uses the complementarity condition and the last uses \sum_{i=1}^{n}\alpha_i^* y_i=0,
the maximum margin is given by \dfrac{2}{\|{\boldsymbol\beta}^*\|}=\dfrac{2}{\sqrt{\sum_{i=1}^{n}\alpha_i^*}}.
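As a small worked check with hypothetical optimal multipliers (only the support-vector multipliers are listed; all other \alpha_i^* are zero):

```python
import math

# Hypothetical optimal multipliers for three support vectors.
alpha_sv = [4/9, 2/9, 2/9]                    # sums to 8/9 = ||beta*||^2
margin = 2 / math.sqrt(sum(alpha_sv))         # 2 / ||beta*||
```

Here \sum_i\alpha_i^* = 8/9, so the margin is 2/\sqrt{8/9} = 3/\sqrt{2}\approx 2.12.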
