

Piggy Metrics

A simple solution for personal finances.

This is a proof-of-concept application which demonstrates the Microservice Architecture Pattern using Spring Boot, Spring Cloud and Docker. With a pretty neat user interface, by the way.


Functional services

PiggyMetrics is decomposed into three core microservices. All of them are independently deployable applications, organized around their own business domains.

<img width="880" alt="Functional services" src="https://cloud.githubusercontent.com/assets/6069066/13900465/730f2922-ee20-11e5-8df0-e7b51c668847.png">

Account service

Contains general user input logic and validation: incomes/expenses items, savings and account settings.

| Method | Path                | Description                                               | User authenticated | Available from UI |
|--------|---------------------|-----------------------------------------------------------|:------------------:|:-----------------:|
| GET    | /accounts/{account} | Get specified account data                                |                    |                   |
| GET    | /accounts/current   | Get current account data                                  | ×                  | ×                 |
| GET    | /accounts/demo      | Get demo account data (incomes/expenses items, etc.)      |                    | ×                 |
| PUT    | /accounts/current   | Save current account data                                 | ×                  | ×                 |
| POST   | /accounts/          | Register new account                                      |                    | ×                 |
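The endpoints above suggest an account aggregate roughly like the following. This is a hedged sketch only: the `Item` and `Account` field names and the `netFlow` helper are illustrative assumptions, not the project's actual domain model.

```java
import java.math.BigDecimal;
import java.util.List;

// Hypothetical shape of the account aggregate implied by the API above.
class AccountSketch {
    record Item(String title, BigDecimal amount, String currency) {}

    record Account(String name, List<Item> incomes, List<Item> expenses, BigDecimal saving) {
        // Net flow for the period: total incomes minus total expenses.
        BigDecimal netFlow() {
            BigDecimal in = incomes.stream().map(Item::amount)
                    .reduce(BigDecimal.ZERO, BigDecimal::add);
            BigDecimal out = expenses.stream().map(Item::amount)
                    .reduce(BigDecimal.ZERO, BigDecimal::add);
            return in.subtract(out);
        }
    }

    public static void main(String[] args) {
        Account demo = new Account("demo",
                List.of(new Item("salary", new BigDecimal("3000"), "USD")),
                List.of(new Item("rent", new BigDecimal("1200"), "USD")),
                new BigDecimal("500"));
        System.out.println(demo.netFlow()); // 1800
    }
}
```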

Statistics service

Performs calculations on major statistics parameters and captures time series for each account. Each datapoint contains values normalized to a base currency and time period. This data is used to track cash flow dynamics during the account lifetime.

| Method | Path                  | Description                                                   | User authenticated | Available from UI |
|--------|-----------------------|---------------------------------------------------------------|:------------------:|:-----------------:|
| GET    | /statistics/{account} | Get specified account statistics                              |                    |                   |
| GET    | /statistics/current   | Get current account statistics                                | ×                  | ×                 |
| GET    | /statistics/demo      | Get demo account statistics                                   |                    | ×                 |
| PUT    | /statistics/{account} | Create or update time series datapoint for specified account  |                    |                   |
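The normalization step described above can be sketched in plain Java: every value is converted to a base currency and scaled to a common time period. The exchange rates, the per-day period and the method name below are illustrative assumptions, not the service's actual code.

```java
import java.math.BigDecimal;
import java.math.RoundingMode;
import java.util.Map;

// Sketch of normalizing a datapoint value to a base currency and time period.
class NormalizationSketch {
    // Illustrative exchange rates; a real service would fetch current rates.
    static final Map<String, BigDecimal> RATES_TO_USD = Map.of(
            "USD", BigDecimal.ONE,
            "EUR", new BigDecimal("1.10"));

    // Normalize a monthly amount in some currency to a per-day USD value.
    static BigDecimal normalizeMonthlyToDailyUsd(BigDecimal amount, String currency) {
        BigDecimal usd = amount.multiply(RATES_TO_USD.get(currency));
        return usd.divide(new BigDecimal("30"), 2, RoundingMode.HALF_UP);
    }

    public static void main(String[] args) {
        // 300 EUR per month -> 330.00 USD per month -> 11.00 USD per day
        System.out.println(normalizeMonthlyToDailyUsd(new BigDecimal("300"), "EUR"));
    }
}
```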

Notification service

Stores user contact information and notification settings (like remind and backup frequency). A scheduled worker collects the required information from the other services and sends e-mail messages to subscribed customers.

| Method | Path                             | Description                                | User authenticated | Available from UI |
|--------|----------------------------------|--------------------------------------------|:------------------:|:-----------------:|
| GET    | /notifications/settings/current  | Get current account notification settings  | ×                  | ×                 |
| PUT    | /notifications/settings/current  | Save current account notification settings | ×                  | ×                 |

Notes

  • Each microservice has its own database, so there is no way to bypass the API and access persistence data directly.
  • In this project, I use MongoDB as the primary database for each service. It could also make sense to have a polyglot persistence architecture, i.e. to choose the type of database that is best suited to each service.
  • Service-to-service communication is quite simplified: microservices talk using only a synchronous REST API. A common practice in real-world systems is to use a combination of interaction styles. For example, perform a synchronous GET request to retrieve data and use an asynchronous approach via a message broker for create/update operations, in order to decouple services and buffer messages. However, that brings us into the eventual consistency world.

Infrastructure services

There are a number of common patterns in distributed systems that help make the core services described above work. [Spring Cloud](http://projects.spring.io/spring-cloud/) provides powerful tools that enhance the behaviour of Spring Boot applications to implement those patterns. I'll cover them briefly.

<img width="880" alt="Infrastructure services" src="https://cloud.githubusercontent.com/assets/6069066/13906840/365c0d94-eefa-11e5-90ad-9d74804ca412.png">

Config service

Spring Cloud Config is a horizontally scalable centralized configuration service for distributed systems. It uses a pluggable repository layer that currently supports local storage, Git and Subversion.

In this project, I use the native profile, which simply loads config files from the local classpath. You can see the shared directory in the Config service resources. Now, when the Notification service requests its configuration, the Config service responds with shared/notification-service.yml and shared/application.yml (which is shared between all client applications).
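For reference, a Config Server using the native profile might be set up roughly like this. This is a sketch: the classpath:/shared location mirrors the shared directory mentioned above, but the exact property values are assumptions, not copied from the project.

```yaml
spring:
  profiles:
    active: native          # load configuration from the local classpath
  cloud:
    config:
      server:
        native:
          search-locations: classpath:/shared
server:
  port: 8888                # the URI clients point at (http://config:8888)
```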

Client side usage

Just add the spring-cloud-starter-config dependency and autoconfiguration will do the rest.

Now you don't need any embedded properties in your application. Just provide the application name and the Config service URI in bootstrap.yml:

```yaml
spring:
  application:
    name: notification-service
  cloud:
    config:
      uri: http://config:8888
      fail-fast: true
```
With Spring Cloud Config, you can change application configuration dynamically.

For example, the [EmailService bean](https://github.com/jinweibin/PiggyMetrics/blob/master/notification-service/src/main/java/com/piggymetrics/notification/service/EmailServiceImpl.java) is annotated with @RefreshScope. That means you can change the e-mail text and subject without rebuilding and restarting the Notification service.

First, change the required properties in the Config server. Then perform a refresh request to the Notification service:

```shell
curl -H "Authorization: Bearer #token#" -XPOST http://127.0.0.1:8000/notifications/refresh
```

Also, you could use Git webhooks to automate this process.

Notes

  • Dynamic refresh has some limitations. @RefreshScope doesn't work with @Configuration classes and doesn't affect @Scheduled methods.
  • The fail-fast property means that a Spring Boot application will fail its startup immediately if it cannot connect to the Config service.
  • There are significant security notes below.

Auth service

Authorization responsibilities are extracted into a separate server, which grants OAuth2 tokens for the backend resource services. The Auth Server is used for user authorization as well as for secure machine-to-machine communication inside the perimeter.

In this project, user authorization uses the Password credentials grant type (since it's only used by the native PiggyMetrics UI), and microservices are authorized with the Client Credentials grant.

Spring Cloud Security provides convenient annotations and autoconfiguration to make this really easy to implement on both the server and the client side. You can learn more in the documentation and check the configuration details in the Auth Server code.

From the client side, everything works exactly the same as with traditional session-based authorization: you can retrieve the Principal object from the request and check user roles and other stuff with expression-based access control and the @PreAuthorize annotation.

Each client in PiggyMetrics (account service, statistics service, notification service and browser) has a scope: server for the backend services, and ui for the browser. So we can also protect controllers from external access, for example:

```java
@PreAuthorize("#oauth2.hasScope('server')")
@RequestMapping(value = "accounts/{name}", method = RequestMethod.GET)
public List<DataPoint> getStatisticsByAccountName(@PathVariable String name) {
    return statisticsService.findByAccountName(name);
}
```

API Gateway

As you can see, there are three core services which expose external APIs to clients. In a real-world system, this number can grow very quickly, as can the complexity of the whole system. Actually, hundreds of services might be involved in rendering one complex webpage.

In theory, a client could make requests to each of the microservices directly. But obviously there are challenges and limitations with this option: the client has to know all the endpoint addresses, perform an HTTP request for each piece of information separately, and then merge the results on the client side. Another problem is that the backend might use protocols that are not web-friendly.

Usually a much better approach is to use an API Gateway. It is a single entry point into the system, used to handle requests by routing them to the appropriate backend service or by invoking multiple backend services and aggregating the results. Besides that, it can also be used for authentication, monitoring, stress and canary testing, service migration, static response handling and active traffic management.

Netflix open-sourced such an edge service, and now with Spring Cloud we can enable it with one @EnableZuulProxy annotation. In this project, I use Zuul to store static content (the UI application) and to route requests to the appropriate microservices. Here's a simple prefix-based routing configuration for the Notification service:

```yaml
zuul:
  routes:
    notification-service:
      path: /notifications/**
      serviceId: notification-service
      stripPrefix: false
```

The configuration above means that all requests starting with /notifications will be routed to the Notification service. There is no hardcoded address, as you can see: Zuul uses the Service discovery mechanism to locate Notification service instances, together with [load balancing](https://github.com/jinweibin/PiggyMetrics/blob/master/README.md#http-client-load-balancer-and-circuit-breaker).

Service discovery

Another commonly known architecture pattern is service discovery. It allows automatic detection of the network locations of service instances, which could change dynamically because of failures, upgrades or auto-scaling.

The key part of service discovery is the registry. This project uses Netflix Eureka as the service registry. Eureka is a good example of the client-side discovery pattern, where the client is responsible for determining the locations of available service instances (using a registry server) and load-balancing requests across them.

With Spring Boot, you can easily build a Eureka registry with the spring-cloud-starter-eureka-server dependency, the @EnableEurekaServer annotation and simple configuration properties.

Client support is enabled with the @EnableDiscoveryClient annotation and a bootstrap.yml with the application name:

```yaml
spring:
  application:
    name: notification-service
```

Now, on application startup, it will register with the Eureka server and provide metadata such as host and port, health indicator URL, home page, etc. Eureka receives heartbeat messages from each instance belonging to a service. If heartbeats fail over a configurable timetable, the instance will be removed from the registry.

Also, Eureka provides a simple interface where you can track running services and the number of available instances: http://localhost:8761
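The register/heartbeat/evict cycle described above can be pictured as a toy registry in plain Java. This is only a sketch of the pattern, not Eureka's implementation; the 90-second TTL is an assumption matching three missed 30-second heartbeats.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of a service registry with heartbeat-based eviction.
class RegistrySketch {
    static final long TTL_MS = 90_000; // e.g. 3 missed 30-second heartbeats

    final Map<String, Long> lastHeartbeat = new HashMap<>();

    void register(String instance, long nowMs)  { lastHeartbeat.put(instance, nowMs); }

    void heartbeat(String instance, long nowMs) { lastHeartbeat.put(instance, nowMs); }

    // Drop every instance whose last heartbeat is older than the TTL.
    void evictExpired(long nowMs) {
        lastHeartbeat.values().removeIf(ts -> nowMs - ts > TTL_MS);
    }

    boolean isRegistered(String instance) { return lastHeartbeat.containsKey(instance); }

    public static void main(String[] args) {
        RegistrySketch registry = new RegistrySketch();
        registry.register("notification-service:8000", 0);
        registry.evictExpired(60_000);  // still within the TTL
        System.out.println(registry.isRegistered("notification-service:8000")); // true
        registry.evictExpired(120_000); // three heartbeats missed
        System.out.println(registry.isRegistered("notification-service:8000")); // false
    }
}
```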

Load balancer, Circuit breaker and Http client

Netflix OSS provides another great set of tools.

Ribbon

Ribbon is a client-side load balancer which gives you a lot of control over the behaviour of HTTP and TCP clients. Compared to a traditional load balancer, there is no need for an additional hop for every over-the-wire invocation - you can contact the desired service directly.

Out of the box, it natively integrates with Spring Cloud and Service Discovery. The Eureka client provides a dynamic list of available servers so Ribbon can balance between them.
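Client-side balancing over the server list Eureka reports can be pictured as a simple round-robin chooser. This is a toy sketch of the idea, not Ribbon's actual load-balancing rules; the instance names are made up.

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Minimal client-side round-robin balancer over a dynamic server list.
class RoundRobinSketch {
    private final AtomicInteger position = new AtomicInteger();

    // Pick the next server from the current list, wrapping around.
    String choose(List<String> servers) {
        int i = Math.floorMod(position.getAndIncrement(), servers.size());
        return servers.get(i);
    }

    public static void main(String[] args) {
        RoundRobinSketch lb = new RoundRobinSketch();
        List<String> instances = List.of("stat-1:7000", "stat-2:7000");
        System.out.println(lb.choose(instances)); // stat-1:7000
        System.out.println(lb.choose(instances)); // stat-2:7000
        System.out.println(lb.choose(instances)); // stat-1:7000
    }
}
```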

Hystrix

Hystrix is an implementation of the Circuit Breaker pattern, which gives us control over latency and failures from dependencies accessed over the network. The main idea is to stop cascading failures in a distributed environment with a large number of microservices. That helps to fail fast and recover as soon as possible - important aspects of fault-tolerant systems that self-heal.

Besides circuit breaker control, with Hystrix you can add a fallback method that will be called to obtain a default value in case the main command fails.

Moreover, Hystrix generates metrics on execution outcomes and latency for each command, which we can use to monitor system behaviour.
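The fail-fast-with-fallback behaviour can be sketched as a minimal circuit breaker in plain Java. This is a toy model of the pattern, not Hystrix itself; the consecutive-failure threshold and the missing half-open/sleep-window logic are simplifying assumptions.

```java
import java.util.function.Supplier;

// Toy circuit breaker with a fallback: after N consecutive failures the
// circuit opens and calls go straight to the fallback without touching
// the failing dependency.
class CircuitBreakerSketch {
    private final int failureThreshold;
    private int consecutiveFailures = 0;

    CircuitBreakerSketch(int failureThreshold) { this.failureThreshold = failureThreshold; }

    boolean isOpen() { return consecutiveFailures >= failureThreshold; }

    <T> T call(Supplier<T> command, Supplier<T> fallback) {
        if (isOpen()) {
            return fallback.get(); // fail fast
        }
        try {
            T result = command.get();
            consecutiveFailures = 0; // a success resets the failure count
            return result;
        } catch (RuntimeException e) {
            consecutiveFailures++;
            return fallback.get();
        }
    }

    public static void main(String[] args) {
        CircuitBreakerSketch breaker = new CircuitBreakerSketch(2);
        Supplier<String> failing = () -> { throw new RuntimeException("timeout"); };
        System.out.println(breaker.call(failing, () -> "fallback")); // fallback
        System.out.println(breaker.call(failing, () -> "fallback")); // fallback
        System.out.println(breaker.isOpen()); // true
    }
}
```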

Feign

Feign is a declarative Http client which seamlessly integrates with Ribbon and Hystrix. Actually, with a single spring-cloud-starter-feign dependency and the @EnableFeignClients annotation, you have a full set of load balancer, circuit breaker and Http client with a sensible ready-to-go default configuration.

Here is an example from Account Service:

```java
@FeignClient(name = "statistics-service")
public interface StatisticsServiceClient {

    @RequestMapping(method = RequestMethod.PUT, value = "/statistics/{accountName}", consumes = MediaType.APPLICATION_JSON_UTF8_VALUE)
    void updateStatistics(@PathVariable("accountName") String accountName, Account account);

}
```
  • Everything you need is just an interface
  • You can share @RequestMapping part between Spring MVC controller and Feign methods
  • Above example specifies just desired service id - statistics-service, thanks to autodiscovery through Eureka (but obviously you can access any resource with a specific url)

Monitor dashboard

In this project configuration, each microservice with Hystrix on board pushes metrics to Turbine via Spring Cloud Bus (with an AMQP broker). The Monitoring project is just a small Spring Boot application with Turbine and Hystrix Dashboard.

See below how to get it up and running.

Let's see our system behaviour under load: the Account service calls the Statistics service, and it responds with a varying imitation delay. The response timeout threshold is set to 1 second.

<img width="880" src="https://cloud.githubusercontent.com/assets/6069066/14194375/d9a2dd80-f7be-11e5-8bcc-9a2fce753cfe.png">

<img width="212" src="https://cloud.githubusercontent.com/assets/6069066/14127349/21e90026-f628-11e5-83f1-60108cb33490.gif"> <img width="212" src="https://cloud.githubusercontent.com/assets/6069066/14127348/21e6ed40-f628-11e5-9fa4-ed527bf35129.gif"> <img width="212" src="https://cloud.githubusercontent.com/assets/6069066/14127346/21b9aaa6-f628-11e5-9bba-aaccab60fd69.gif"> <img width="212" src="https://cloud.githubusercontent.com/assets/6069066/14127350/21eafe1c-f628-11e5-8ccd-a6b6873c046a.gif">

  • 0 ms delay: Well-behaving system. The throughput is about 22 requests/second. Small number of active threads in the Statistics service. The median service time is about 50 ms.
  • 500 ms delay: The number of active threads is growing. We can see the purple number of thread-pool rejections and therefore about 30-40% of errors, but the circuit is still closed.
  • 800 ms delay: Half-open state: the ratio of failed commands is more than 50%, so the circuit breaker kicks in. After the sleep window amount of time, the next request is let through.
  • 1100 ms delay: 100 percent of the requests fail. The circuit is now permanently open. Retrying after the sleep time won't close the circuit again, because a single request is too slow.

Log analysis

Centralized logging can be very useful when attempting to identify problems in a distributed environment. The Elasticsearch, Logstash and Kibana stack lets you search and analyze your logs, utilization and network activity data with ease. A ready-to-go Docker configuration is described in my other project.

Distributed tracing

Analyzing problems in distributed systems can be difficult, for example, tracing requests that propagate from one microservice to another. It can be quite a challenge to try to find out how a request travels through the system, especially if you don't have any insight into the implementation of a microservice. Even when there is logging, it is hard to tell which action correlates to a single request.

Spring Cloud Sleuth solves this problem by providing support for distributed tracing. It adds two types of IDs to the logging: traceId and spanId. The spanId represents a basic unit of work, for example sending an HTTP request. The traceId contains a set of spans forming a tree-like structure. For example, with a distributed big-data store, a trace might be formed by a PUT request. Using the traceId and spanId for each operation we know when and where our application is as it processes a request, making reading our logs much easier.

The logs are as follows; notice the [appname,traceId,spanId,exportable] entries from the Slf4J MDC:

```
2018-07-26 23:13:49.381  WARN [gateway,3216d0de1384bb4f,3216d0de1384bb4f,false] 2999 --- [nio-4000-exec-1] o.s.c.n.z.f.r.s.AbstractRibbonCommand    : The Hystrix timeout of 20000ms for the command account-service is set lower than the combination of the Ribbon read and connect timeout, 80000ms.
2018-07-26 23:13:49.562  INFO [account-service,3216d0de1384bb4f,404ff09c5cf91d2e,false] 3079 --- [nio-6000-exec-1] c.p.account.service.AccountServiceImpl   : new account has been created: test
```

  • appname: The name of the application that logged the span, from the property spring.application.name
  • traceId: This is an ID that is assigned to a single request, job, or action
  • spanId: The ID of a specific operation that took place
  • exportable: Whether the log should be exported to Zipkin
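The [appname,traceId,spanId,exportable] prefix can be mimicked with a few lines of plain Java. This is a sketch of the log format only; Sleuth generates and propagates its own 64-bit IDs, and the newId helper below is an illustrative stand-in.

```java
import java.util.UUID;

// Sketch of assembling a Sleuth-style log prefix: [appname,traceId,spanId,exportable].
class TracePrefixSketch {
    // Illustrative 16-hex-character ID; Sleuth uses its own ID generation.
    static String newId() {
        return UUID.randomUUID().toString().replace("-", "").substring(0, 16);
    }

    static String logPrefix(String appName, String traceId, String spanId, boolean exportable) {
        return "[" + appName + "," + traceId + "," + spanId + "," + exportable + "]";
    }

    public static void main(String[] args) {
        String traceId = newId(); // shared by every span of the same request
        String spanId = newId();  // one per unit of work, e.g. an HTTP call
        System.out.println(logPrefix("account-service", traceId, spanId, false));
    }
}
```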

Security

An advanced security configuration is beyond the scope of this proof-of-concept project. For a more realistic simulation of a real system, consider using HTTPS and a JCE keystore to encrypt microservice passwords and Config server properties content (see the documentation for details).

Infrastructure automation

Deploying microservices, with their interdependence, is a much more complex process than deploying a monolithic application. It is important to have a fully automated infrastructure. We can achieve the following benefits with a Continuous Delivery approach:

  • The ability to release software anytime
  • Any build could end up being a release
  • Build artifacts once - deploy as needed

Here is a simple Continuous Delivery workflow, implemented in this project:

<img width="880" src="https://cloud.githubusercontent.com/assets/6069066/14159789/0dd7a7ce-f6e9-11e5-9fbb-a7fe0f4431e3.png">

In this configuration, Travis CI builds tagged images for each successful git push. So there is always a latest image for each microservice on Docker Hub, plus older images tagged with the git commit hash. It's easy to deploy any of them and quickly roll back, if needed.

How to run all the things?

Keep in mind that you are going to start 8 Spring Boot applications, 4 MongoDB instances and RabbitMQ. Make sure you have 4 GB of RAM available on your machine. You can always run just the vital services though: Gateway, Registry, Config, Auth Service and Account Service.

Before you start

  • Install Docker and Docker Compose.
  • Export environment variables: CONFIG_SERVICE_PASSWORD, NOTIFICATION_SERVICE_PASSWORD, STATISTICS_SERVICE_PASSWORD, ACCOUNT_SERVICE_PASSWORD, MONGODB_PASSWORD (make sure they were exported: printenv)
  • Make sure to build the project: mvn package [-DskipTests]

Production mode

In this mode, all the latest images will be pulled from Docker Hub.
Just copy docker-compose.yml and hit docker-compose up.

Development mode

If you'd like to build images yourself (with some changes in the code, for example), you have to clone the whole repository and build the artifacts with Maven. Then run docker-compose -f docker-compose.yml -f docker-compose.dev.yml up

docker-compose.dev.yml inherits docker-compose.yml with the additional ability to build images locally and expose all containers' ports for convenient development.

Important endpoints

Notes

All Spring Boot applications require an already running Config Server for startup. But we can start all containers simultaneously because of the depends_on docker-compose option.

Also, the Service Discovery mechanism needs some time after all applications start up. A service is not available for discovery by clients until the instance, the Eureka server and the client all have the same metadata in their local caches, so it could take 3 heartbeats. The default heartbeat period is 30 seconds, so discovery can take up to about 90 seconds with the defaults.

Contributions are welcome!

PiggyMetrics is open source and would greatly appreciate your help. Feel free to suggest and implement improvements.
