Meaning of the LogisticRegression parameters in scikit-learn

Link: http://70b86a48.wiz03.com/share/s/1MK6F81-vQ1i2DFlsT0ux-iU2qccii0xCkjZ2Si7Lw1pfOQ3


Original docstring:

class sklearn.linear_model.LogisticRegression(penalty='l2', dual=False, tol=0.0001, C=1.0, fit_intercept=True, intercept_scaling=1, class_weight=None, random_state=None, solver='liblinear', max_iter=100, multi_class='ovr', verbose=0, warm_start=False, n_jobs=1)

Parameters:

penalty : str, 'l1' or 'l2', default: 'l2'

Used to specify the norm used in the penalization. The 'newton-cg', 'sag' and 'lbfgs' solvers support only l2 penalties.

New in version 0.19: l1 penalty with SAGA solver (allowing 'multinomial' + L1)
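As a quick check of the solver/penalty compatibility described above, the sketch below (toy data and illustrative settings, not a recommendation) fits an l1-penalized model with the 'saga' solver; passing penalty='l1' to 'newton-cg', 'sag' or 'lbfgs' would raise an error instead.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Toy binary problem (assumed data, for illustration only).
X, y = make_classification(n_samples=200, n_features=10, random_state=0)

# 'saga' (and 'liblinear') accept penalty='l1'; the l1 norm tends to
# drive some coefficients exactly to zero, giving a sparse model.
clf = LogisticRegression(penalty='l1', solver='saga', max_iter=5000)
clf.fit(X, y)
n_zero = int((clf.coef_ == 0).sum())
print(n_zero, "coefficients were zeroed out by the l1 penalty")
```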

dual : bool, default: False

Dual or primal formulation. Dual formulation is only implemented for l2 penalty with liblinear solver. Prefer dual=False when n_samples > n_features.
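A minimal sketch of the one supported combination, on made-up data shaped so the dual formulation is the natural fit (more features than samples):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# n_features > n_samples, the regime where dual=True is preferred.
# dual=True is only implemented for solver='liblinear' with penalty='l2'.
X, y = make_classification(n_samples=50, n_features=200,
                           n_informative=10, random_state=0)
clf = LogisticRegression(solver='liblinear', penalty='l2', dual=True)
clf.fit(X, y)
print(clf.coef_.shape)
```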

tol : float, default: 1e-4

Tolerance for stopping criteria.

C : float, default: 1.0

Inverse of regularization strength; must be a positive float. Like in support vector machines, smaller values specify stronger regularization.
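Because C is the *inverse* of the regularization strength, a smaller C shrinks the weights harder. A quick illustration on toy data (the specific C values are arbitrary):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, n_features=5, random_state=0)

# Smaller C = stronger regularization = smaller weight vector.
weak = LogisticRegression(C=100.0, max_iter=1000).fit(X, y)
strong = LogisticRegression(C=0.01, max_iter=1000).fit(X, y)
print(np.linalg.norm(weak.coef_), ">", np.linalg.norm(strong.coef_))
```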

fit_intercept : bool, default: True

Specifies if a constant (a.k.a. bias or intercept) should be added to the decision function.

intercept_scaling : float, default: 1.

Useful only when the solver 'liblinear' is used and self.fit_intercept is set to True. In this case, x becomes [x, self.intercept_scaling], i.e. a "synthetic" feature with constant value equal to intercept_scaling is appended to the instance vector. The intercept becomes intercept_scaling * synthetic_feature_weight.

Note: the synthetic feature weight is subject to l1/l2 regularization as all other features. To lessen the effect of regularization on the synthetic feature weight (and therefore on the intercept), intercept_scaling has to be increased.
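A small sketch of the knob in action (illustrative data with shifted features, so a nonzero intercept is needed; the scaling values are arbitrary):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Shifted data, so the decision boundary needs a sizeable intercept.
X, y = make_classification(n_samples=200, n_features=5, shift=4.0,
                           random_state=0)

# liblinear regularizes the synthetic intercept feature like any other;
# a larger intercept_scaling lessens that shrinkage on the intercept.
small = LogisticRegression(solver='liblinear', intercept_scaling=1.0).fit(X, y)
large = LogisticRegression(solver='liblinear', intercept_scaling=100.0).fit(X, y)
print(small.intercept_, large.intercept_)
```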

class_weight : dict or 'balanced', default: None

Weights associated with classes in the form {class_label: weight}. If not given, all classes are supposed to have weight one.

The "balanced" mode uses the values of y to automatically adjust weights inversely proportional to class frequencies in the input data as n_samples / (n_classes * np.bincount(y)).

Note that these weights will be multiplied with sample_weight (passed through the fit method) if sample_weight is specified.

New in version 0.17: class_weight='balanced'
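The 'balanced' formula above is easy to verify by hand. The sketch below computes it directly on a small imbalanced label vector and checks it against scikit-learn's own helper:

```python
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

# Imbalanced toy labels: four 0s, two 1s.
y = np.array([0, 0, 0, 0, 1, 1])

# n_samples / (n_classes * np.bincount(y)), exactly as documented.
manual = len(y) / (len(np.unique(y)) * np.bincount(y))

# scikit-learn's helper implements the same heuristic.
auto = compute_class_weight('balanced', classes=np.unique(y), y=y)
print(manual, auto)  # the minority class gets the larger weight
```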

random_state : int, RandomState instance or None, optional, default: None

The seed of the pseudo random number generator to use when shuffling the data. If int, random_state is the seed used by the random number generator; if RandomState instance, random_state is the random number generator; if None, the random number generator is the RandomState instance used by np.random. Used when solver == 'sag' or 'liblinear'.

solver : {'newton-cg', 'lbfgs', 'liblinear', 'sag', 'saga'}, default: 'liblinear'

Algorithm to use in the optimization problem.

For small datasets, 'liblinear' is a good choice, whereas 'sag' and 'saga' are faster for large ones.

For multiclass problems, only 'newton-cg', 'sag', 'saga' and 'lbfgs' handle multinomial loss; 'liblinear' is limited to one-versus-rest schemes.

'newton-cg', 'lbfgs' and 'sag' only handle L2 penalty, whereas 'liblinear' and 'saga' handle L1 penalty.

Note that ‘sag’ and ‘saga’ fast convergence is only guaranteed on features with approximately the same scale. You can preprocess the data with a scaler from sklearn.preprocessing.

New in version 0.17: Stochastic Average Gradient descent solver.

New in version 0.19: SAGA solver.
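The scaling caveat for 'sag'/'saga' is easy to act on with a pipeline. A minimal sketch on toy data, standardizing before the solver runs:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# 'sag'/'saga' only converge quickly when features share roughly the
# same scale, so standardize before fitting.
pipe = make_pipeline(StandardScaler(),
                     LogisticRegression(solver='saga', max_iter=1000))
pipe.fit(X, y)
print(pipe.score(X, y))
```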

max_iter : int, default: 100

Useful only for the newton-cg, sag and lbfgs solvers. Maximum number of iterations taken for the solvers to converge.

multi_class : str, {'ovr', 'multinomial'}, default: 'ovr'

Multiclass option can be either ‘ovr’ or ‘multinomial’. If the option chosen is ‘ovr’, then a binary problem is fit for each label. Else the loss minimised is the multinomial loss fit across the entire probability distribution. Does not work for liblinear solver.

New in version 0.18: Stochastic Average Gradient descent solver for 'multinomial' case.
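What "fit across the entire probability distribution" means in practice: one row of coefficients per class, and predict_proba rows that sum to 1. The sketch below uses a multinomial-capable solver ('lbfgs'); it omits the explicit multi_class='multinomial' flag of the 0.19-era signature, since recent scikit-learn releases deprecate that parameter and use the multinomial loss for such solvers by default.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Toy 3-class problem (illustrative data).
X, y = make_classification(n_samples=300, n_features=8, n_informative=5,
                           n_classes=3, random_state=0)

# A multinomial-capable solver fits one weight row per class jointly,
# and predict_proba returns a full distribution over the 3 classes.
clf = LogisticRegression(solver='lbfgs', max_iter=1000).fit(X, y)
proba = clf.predict_proba(X[:5])
print(clf.coef_.shape, proba.sum(axis=1))
```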

verbose : int, default: 0

For the liblinear and lbfgs solvers set verbose to any positive number for verbosity.

warm_start : bool, default: False

When set to True, reuse the solution of the previous call to fit as initialization, otherwise, just erase the previous solution. Useless for liblinear solver.

New in version 0.17: warm_start to support lbfgs, newton-cg, sag, saga solvers.
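A minimal sketch of the reuse behavior (toy data; the deliberately low max_iter is only there to make the warm start visible and may emit a convergence warning):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=300, n_features=10, random_state=0)

# With warm_start=True, each fit() resumes from the previous solution
# instead of restarting from scratch (has no effect with 'liblinear').
clf = LogisticRegression(solver='lbfgs', warm_start=True, max_iter=20)
clf.fit(X, y)   # first call: starts from zero coefficients
clf.fit(X, y)   # second call: starts from the coefficients above
print(clf.n_iter_)
```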

n_jobs : int, default: 1

Number of CPU cores used when parallelizing over classes if multi_class='ovr'. This parameter is ignored when the solver is set to 'liblinear' regardless of whether 'multi_class' is specified or not. If given a value of -1, all cores are used.
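To close, a sketch combining several of the parameters above on an imbalanced toy dataset; the chosen values are illustrative, not recommendations:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Imbalanced toy data (80/20 split between the two classes).
X, y = make_classification(n_samples=400, n_features=12,
                           weights=[0.8, 0.2], random_state=0)

# l2 penalty, default C, 'balanced' class weights, lbfgs solver.
clf = LogisticRegression(penalty='l2', C=1.0, fit_intercept=True,
                         class_weight='balanced', solver='lbfgs',
                         max_iter=200, random_state=0)
clf.fit(X, y)
print(clf.score(X, y))
```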

最后編輯于
?著作權(quán)歸作者所有,轉(zhuǎn)載或內(nèi)容合作請聯(lián)系作者
【社區(qū)內(nèi)容提示】社區(qū)部分內(nèi)容疑似由AI輔助生成,瀏覽時請結(jié)合常識與多方信息審慎甄別。
平臺聲明:文章內(nèi)容(如有圖片或視頻亦包括在內(nèi))由作者上傳并發(fā)布,文章內(nèi)容僅代表作者本人觀點,簡書系信息發(fā)布平臺,僅提供信息存儲服務(wù)。
