Hands-On Machine Learning: Table of Contents

Chapter 1: The Machine Learning Landscape
Chapter 2: End-to-End Machine Learning Project
Chapter 3: Classification
Chapter 4: Training Models
Chapter 5: Support Vector Machines
Chapter 6: Decision Trees
Chapter 7: Ensemble Learning and Random Forests
Chapter 8: Dimensionality Reduction
Chapter 9: Up and Running with TensorFlow
Chapter 10: Introduction to Artificial Neural Networks
Chapter 11: Training Deep Neural Nets
Chapter 12: Distributing TensorFlow Across Devices and Servers
Chapter 13: Convolutional Neural Networks
Chapter 14: Recurrent Neural Networks
Chapter 15: Autoencoders
Chapter 16: Reinforcement Learning


Translation status:
Book introduction
Chapter 1: The Machine Learning Landscape
Chapter 2: End-to-End Machine Learning Project (translation not yet finished)
For the remaining chapters, I plan to read the English edition for now.


Preface

Part I. The Fundamentals of Machine Learning

1. The Machine Learning Landscape

  1. What Is Machine Learning?
  2. Why Use Machine Learning?
  3. Types of Machine Learning Systems
  • Supervised/Unsupervised Learning
  • Batch and Online Learning
  • Instance-Based Versus Model-Based Learning
  4. Main Challenges of Machine Learning
  • Insufficient Quantity of Training Data
  • Nonrepresentative Training Data
  • Poor-Quality Data
  • Irrelevant Features
  • Overfitting the Training Data
  • Underfitting the Training Data
    Stepping Back
  5. Testing and Validating
  6. Exercises

2. End-to-End Machine Learning Project

  1. Working with Real Data
  2. Look at the Big Picture
  • Frame the Problem
  • Select a Performance Measure
  • Check the Assumptions
  3. Get the Data
  • Create the Workspace
  • Download the Data
  • Take a Quick Look at the Data Structure
  • Create a Test Set
  4. Discover and Visualize the Data to Gain Insights
  • Visualizing Geographical Data
  • Looking for Correlations
  • Experimenting with Attribute Combinations
  5. Prepare the Data for Machine Learning Algorithms
  • Data Cleaning
  • Handling Text and Categorical Attributes
  • Custom Transformers
  • Feature Scaling
  • Transformation Pipelines
  6. Select and Train a Model
  • Training and Evaluating on the Training Set
  • Better Evaluation Using Cross-Validation
  7. Fine-Tune Your Model
  • Grid Search
  • Randomized Search
  • Ensemble Methods
  • Analyze the Best Models and Their Errors
  • Evaluate Your System on the Test Set
  8. Launch, Monitor, and Maintain Your System
  9. Try It Out!
    Exercises

3. Classification

  1. MNIST (Mixed National Institute of Standards and Technology database)
  2. Training a Binary Classifier
  3. Performance Measures
  • Measuring Accuracy Using Cross-Validation
  • Confusion Matrix
  • Precision and Recall
  • Precision/Recall Tradeoff
  • The ROC Curve
  4. Multiclass Classification
  5. Error Analysis
  6. Multilabel Classification
  7. Multioutput Classification
    Exercises

4. Training Models

  1. Linear Regression
  • The Normal Equation
  • Computational Complexity
  2. Gradient Descent
  • Batch Gradient Descent
  • Stochastic Gradient Descent
  • Mini-batch Gradient Descent
  3. Polynomial Regression
  4. Learning Curves
  5. Regularized Linear Models
  • Ridge Regression
  • Lasso Regression
  • Elastic Net
  • Early Stopping
  6. Logistic Regression
  • Estimating Probabilities
  • Training and Cost Function
  • Decision Boundaries
  • Softmax Regression
    Exercises

5. Support Vector Machines (SVM)

  1. Linear SVM Classification
  • Soft Margin Classification
  2. Nonlinear SVM Classification
  • Polynomial Kernel
  • Adding Similarity Features
  • Gaussian RBF Kernel
  • Computational Complexity
  3. SVM Regression
  4. Under the Hood
  • Decision Function and Predictions
  • Training Objective
  • Quadratic Programming
  • The Dual Problem
  • Kernelized SVM
  • Online SVMs
    Exercises

6. Decision Trees

  1. Training and Visualizing a Decision Tree
  2. Making Predictions
  3. Estimating Class Probabilities
  4. The CART Training Algorithm
  5. Computational Complexity
  6. Gini Impurity or Entropy?
  7. Regularization Hyperparameters
  8. Regression
  9. Instability
    Exercises

7. Ensemble Learning and Random Forests

  1. Voting Classifiers
  2. Bagging and Pasting
  • Bagging and Pasting in Scikit-Learn
  • Out-of-Bag Evaluation
  3. Random Patches and Random Subspaces
  4. Random Forests
  • Extra-Trees
  • Feature Importance
  5. Boosting
  • AdaBoost
  • Gradient Boosting
  6. Stacking
    Exercises

8. Dimensionality Reduction

  1. The Curse of Dimensionality
  2. Main Approaches for Dimensionality Reduction
  • Projection
  • Manifold Learning
  3. PCA
  • Preserving the Variance
  • Principal Components
  • Projecting Down to d Dimensions
  • Using Scikit-Learn
  • Explained Variance Ratio
  • Choosing the Right Number of Dimensions
  • PCA for Compression
  • Incremental PCA
  • Randomized PCA
  4. Kernel PCA
  • Selecting a Kernel and Tuning Hyperparameters
  5. LLE
  6. Other Dimensionality Reduction Techniques
    Exercises

Part II. Neural Networks and Deep Learning

9. Up and Running with TensorFlow

  1. Installation
  2. Creating Your First Graph and Running It in a Session
  3. Managing Graphs
  4. Lifecycle of a Node Value
  5. Linear Regression with TensorFlow
  6. Implementing Gradient Descent
  • Manually Computing the Gradients
  • Using autodiff
  • Using an Optimizer
  7. Feeding Data to the Training Algorithm
  8. Saving and Restoring Models
  9. Visualizing the Graph and Training Curves Using TensorBoard
  10. Name Scopes
  11. Modularity
  12. Sharing Variables
    Exercises

10. Introduction to Artificial Neural Networks

  1. From Biological to Artificial Neurons
  • Biological Neurons
  • Logical Computations with Neurons
  • The Perceptron
  • Multi-Layer Perceptron and Backpropagation
  2. Training an MLP with TensorFlow’s High-Level API
  3. Training a DNN Using Plain TensorFlow
  • Construction Phase
  • Execution Phase
  • Using the Neural Network
  4. Fine-Tuning Neural Network Hyperparameters
  • Number of Hidden Layers
  • Number of Neurons per Hidden Layer
  • Activation Functions
    Exercises

11. Training Deep Neural Nets

  1. Vanishing/Exploding Gradients Problems
  • Xavier and He Initialization
  • Nonsaturating Activation Functions
  • Batch Normalization
  • Gradient Clipping
  2. Reusing Pretrained Layers
  • Reusing a TensorFlow Model
  • Reusing Models from Other Frameworks
  • Freezing the Lower Layers
  • Caching the Frozen Layers
  • Tweaking, Dropping, or Replacing the Upper Layers
  • Model Zoos
  • Unsupervised Pretraining
  • Pretraining on an Auxiliary Task
  3. Faster Optimizers
  • Momentum Optimization
  • Nesterov Accelerated Gradient
  • AdaGrad
  • RMSProp
  • Adam Optimization
  • Learning Rate Scheduling
  4. Avoiding Overfitting Through Regularization
  • Early Stopping
  • ℓ1 and ℓ2 Regularization
  • Dropout
  • Max-Norm Regularization
  • Data Augmentation
  5. Practical Guidelines
    Exercises

12. Distributing TensorFlow Across Devices and Servers

  1. Multiple Devices on a Single Machine
  • Installation
  • Managing the GPU RAM
  • Placing Operations on Devices
  • Parallel Execution
  • Control Dependencies
  2. Multiple Devices Across Multiple Servers
  • Opening a Session
  • The Master and Worker Services
  • Pinning Operations Across Tasks
  • Sharding Variables Across Multiple Parameter Servers
  • Sharing State Across Sessions Using Resource Containers
  • Asynchronous Communication Using TensorFlow Queues
  • Loading Data Directly from the Graph
  3. Parallelizing Neural Networks on a TensorFlow Cluster
  • One Neural Network per Device
  • In-Graph Versus Between-Graph Replication
  • Model Parallelism
  • Data Parallelism
    Exercises

13. Convolutional Neural Networks

  1. The Architecture of the Visual Cortex
  2. Convolutional Layer
  • Filters
  • Stacking Multiple Feature Maps
  • TensorFlow Implementation
  • Memory Requirements
  3. Pooling Layer
  4. CNN Architectures
  • LeNet-5
  • AlexNet
  • GoogLeNet
  • ResNet
    Exercises

14. Recurrent Neural Networks

  1. Recurrent Neurons
  • Memory Cells
  • Input and Output Sequences
  2. Basic RNNs in TensorFlow
  • Static Unrolling Through Time
  • Dynamic Unrolling Through Time
  • Handling Variable-Length Input Sequences
  • Handling Variable-Length Output Sequences
  3. Training RNNs
  • Training a Sequence Classifier
  • Training to Predict Time Series
  • Creative RNN
  4. Deep RNNs
  • Distributing a Deep RNN Across Multiple GPUs
  • Applying Dropout
  • The Difficulty of Training over Many Time Steps
  5. LSTM Cell
  • Peephole Connections
  6. GRU Cell
  7. Natural Language Processing
  • Word Embeddings
  • An Encoder–Decoder Network for Machine Translation
    Exercises

15. Autoencoders

  1. Efficient Data Representations
  2. Performing PCA with an Undercomplete Linear Autoencoder
  3. Stacked Autoencoders
  • TensorFlow Implementation
  • Tying Weights
  • Training One Autoencoder at a Time
  • Visualizing the Reconstructions
  • Visualizing Features
  4. Unsupervised Pretraining Using Stacked Autoencoders
  5. Denoising Autoencoders
  • TensorFlow Implementation
  6. Sparse Autoencoders
  • TensorFlow Implementation
  7. Variational Autoencoders
  • Generating Digits
  8. Other Autoencoders
    Exercises

16. Reinforcement Learning

  1. Learning to Optimize Rewards
  2. Policy Search
  3. Introduction to OpenAI Gym
  4. Neural Network Policies
  5. Evaluating Actions: The Credit Assignment Problem
  6. Policy Gradients
  7. Markov Decision Processes
  8. Temporal Difference Learning and Q-Learning
  • Exploration Policies
  • Approximate Q-Learning
  9. Learning to Play Ms. Pac-Man Using Deep Q-Learning
    Exercises
    Thank You!

A. Exercise Solutions

B. Machine Learning Project Checklist

C. SVM Dual Problem

D. Autodiff

E. Other Popular ANN Architectures

Index
