Storm Cluster Installation Notes

March 19, 2015, 23:28:19

These notes mainly follow the documentation section of the official Storm website.

1. First, install a ZooKeeper cluster. Follow the official ZooKeeper docs or any of the many guides online (it is straightforward).
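As a rough sketch of what step 1 involves, a minimal `zoo.cfg` for a two-node ensemble might look like the following (hostnames borrowed from the `storm.yaml` used later in these notes; `dataDir` is an illustrative path, and note that a production ensemble normally uses an odd number of servers for quorum):

```
# zoo.cfg - minimal illustrative ensemble configuration
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/home/nacey/zookeeper/data
clientPort=2181
server.1=nacey-master:2888:3888
server.2=nacey-node2:2888:3888
```

Each server also needs a `myid` file under `dataDir` containing its server number.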

2. Install the dependencies on the Storm worker (supervisor) machines and on the nimbus machine: JDK 1.6+ and Python 2.6+.

3. Download a Storm binary release; I used version 0.9.3 here.

4. Extract the Storm tarball to a directory of your choice (referred to below as STORM_DIR).

5. Add STORM_DIR to your environment variables, and create a directory named `logs` under it; Storm's log output then goes into this `logs` directory. Logging is configured in `cluster.xml` under the `logback` directory.
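Steps 4–5 can be sketched in shell as follows (the STORM_DIR path is an assumption; adjust it to wherever you actually unpack the tarball):

```shell
# Illustrative location -- change to match your extraction target.
STORM_DIR="$HOME/apache-storm-0.9.3"
# tar -xzf apache-storm-0.9.3.tar.gz -C "$HOME"   # step 4 (run with the real tarball)

export STORM_DIR
export PATH="$STORM_DIR/bin:$PATH"   # makes the `storm` command available

# Step 5: Storm's logback cluster.xml writes its logs here.
mkdir -p "$STORM_DIR/logs"
echo "STORM_DIR=$STORM_DIR"
```

To make this persistent, put the two `export` lines in your shell profile (e.g. `~/.bashrc`).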

6. Edit `conf/storm.yaml` as follows:


```yaml
storm.zookeeper.servers:
     - "nacey-master"
     - "nacey-node2"
nimbus.host: "nacey-master"
storm.local.dir: "/home/nacey/storm/data"
supervisor.slots.ports:
     - 6700
     - 6701
     - 6702
     - 6703
ui.port: 8081
```

Note that every key must be followed by a space before its value (i.e. a space after the colon); otherwise Storm reports a file-load error at startup.
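For example, the difference is literally one space after the colon:

```yaml
# Wrong -- no space after the colon, so the file fails to load:
#   ui.port:8081
# Right:
ui.port: 8081
# List values likewise need a "- " prefix indented under the key:
storm.zookeeper.servers:
     - "nacey-master"
```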

The settings in this file override the corresponding entries in `defaults.yaml`. The defaults are:


```yaml
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

########### These all have default values as shown
########### Additional configuration goes into storm.yaml

java.library.path: "/usr/local/lib:/opt/local/lib:/usr/lib"

### storm.* configs are general configurations
# the local dir is where jars are kept
storm.local.dir: "storm-local"
storm.zookeeper.servers:
    - "localhost"
storm.zookeeper.port: 2181
storm.zookeeper.root: "/storm"
storm.zookeeper.session.timeout: 20000
storm.zookeeper.connection.timeout: 15000
storm.zookeeper.retry.times: 5
storm.zookeeper.retry.interval: 1000
storm.zookeeper.retry.intervalceiling.millis: 30000
storm.zookeeper.auth.user: null
storm.zookeeper.auth.password: null
storm.cluster.mode: "distributed" # can be distributed or local
storm.local.mode.zmq: false
storm.thrift.transport: "backtype.storm.security.auth.SimpleTransportPlugin"
storm.principal.tolocal: "backtype.storm.security.auth.DefaultPrincipalToLocal"
storm.group.mapping.service: "backtype.storm.security.auth.ShellBasedGroupsMapping"
storm.messaging.transport: "backtype.storm.messaging.netty.Context"
storm.nimbus.retry.times: 5
storm.nimbus.retry.interval.millis: 2000
storm.nimbus.retry.intervalceiling.millis: 60000
storm.auth.simple-white-list.users: []
storm.auth.simple-acl.users: []
storm.auth.simple-acl.users.commands: []
storm.auth.simple-acl.admins: []
storm.meta.serialization.delegate: "backtype.storm.serialization.ThriftSerializationDelegate"

### nimbus.* configs are for the master
nimbus.host: "localhost"
nimbus.thrift.port: 6627
nimbus.thrift.threads: 64
nimbus.thrift.max_buffer_size: 1048576
nimbus.childopts: "-Xmx1024m"
nimbus.task.timeout.secs: 30
nimbus.supervisor.timeout.secs: 60
nimbus.monitor.freq.secs: 10
nimbus.cleanup.inbox.freq.secs: 600
nimbus.inbox.jar.expiration.secs: 3600
nimbus.task.launch.secs: 120
nimbus.reassign: true
nimbus.file.copy.expiration.secs: 600
nimbus.topology.validator: "backtype.storm.nimbus.DefaultTopologyValidator"
nimbus.credential.renewers.freq.secs: 600

### ui.* configs are for the master
ui.host: 0.0.0.0
ui.port: 8080
ui.childopts: "-Xmx768m"
ui.actions.enabled: true
ui.filter: null
ui.filter.params: null
ui.users: null
ui.header.buffer.bytes: 4096
ui.http.creds.plugin: backtype.storm.security.auth.DefaultHttpCredentialsPlugin

logviewer.port: 8000
logviewer.childopts: "-Xmx128m"
logviewer.cleanup.age.mins: 10080
logviewer.appender.name: "A1"

logs.users: null

drpc.port: 3772
drpc.worker.threads: 64
drpc.max_buffer_size: 1048576
drpc.queue.size: 128
drpc.invocations.port: 3773
drpc.invocations.threads: 64
drpc.request.timeout.secs: 600
drpc.childopts: "-Xmx768m"
drpc.http.port: 3774
drpc.https.port: -1
drpc.https.keystore.password: ""
drpc.https.keystore.type: "JKS"
drpc.http.creds.plugin: backtype.storm.security.auth.DefaultHttpCredentialsPlugin
drpc.authorizer.acl.filename: "drpc-auth-acl.yaml"
drpc.authorizer.acl.strict: false

transactional.zookeeper.root: "/transactional"
transactional.zookeeper.servers: null
transactional.zookeeper.port: null

### supervisor.* configs are for node supervisors
# Define the amount of workers that can be run on this machine. Each worker is assigned a port to use for communication
supervisor.slots.ports:
    - 6700
    - 6701
    - 6702
    - 6703
supervisor.childopts: "-Xmx256m"
supervisor.run.worker.as.user: false
# how long supervisor will wait to ensure that a worker process is started
supervisor.worker.start.timeout.secs: 120
# how long between heartbeats until supervisor considers that worker dead and tries to restart it
supervisor.worker.timeout.secs: 30
# how many seconds to sleep for before shutting down threads on worker
supervisor.worker.shutdown.sleep.secs: 1
# how frequently the supervisor checks on the status of the processes it's monitoring and restarts if necessary
supervisor.monitor.frequency.secs: 3
# how frequently the supervisor heartbeats to the cluster state (for nimbus)
supervisor.heartbeat.frequency.secs: 5
supervisor.enable: true
supervisor.supervisors: []
supervisor.supervisors.commands: []

### worker.* configs are for task workers
worker.childopts: "-Xmx768m"
worker.gc.childopts: ""
worker.heartbeat.frequency.secs: 1

# control how many worker receiver threads we need per worker
topology.worker.receiver.thread.count: 1

task.heartbeat.frequency.secs: 3
task.refresh.poll.secs: 10
task.credentials.poll.secs: 30

zmq.threads: 1
zmq.linger.millis: 5000
zmq.hwm: 0

storm.messaging.netty.server_worker_threads: 1
storm.messaging.netty.client_worker_threads: 1
storm.messaging.netty.buffer_size: 5242880 # 5MB buffer
# Since nimbus.task.launch.secs and supervisor.worker.start.timeout.secs are 120, other workers should also wait at least that long before giving up on connecting to the other worker. The reconnection period need also be bigger than storm.zookeeper.session.timeout (default is 20s), so that we can abort the reconnection when the target worker is dead.
storm.messaging.netty.max_retries: 300
storm.messaging.netty.max_wait_ms: 1000
storm.messaging.netty.min_wait_ms: 100

# If the Netty messaging layer is busy (netty internal buffer not writable), the Netty client will try to batch message as more as possible up to the size of storm.messaging.netty.transfer.batch.size bytes, otherwise it will try to flush message as soon as possible to reduce latency.
storm.messaging.netty.transfer.batch.size: 262144
# Sets the backlog value to specify when the channel binds to a local address
storm.messaging.netty.socket.backlog: 500
# We check with this interval that whether the Netty channel is writable and try to write pending messages if it is.
storm.messaging.netty.flush.check.interval.ms: 10

# By default, the Netty SASL authentication is set to false. Users can override and set it true for a specific topology.
storm.messaging.netty.authentication: false

# default number of seconds group mapping service will cache user group
storm.group.mapping.service.cache.duration.secs: 120

### topology.* configs are for specific executing storms
topology.enable.message.timeouts: true
topology.debug: false
topology.workers: 1
topology.acker.executors: null
topology.tasks: null
# maximum amount of time a message has to complete before it's considered failed
topology.message.timeout.secs: 30
topology.multilang.serializer: "backtype.storm.multilang.JsonSerializer"
topology.skip.missing.kryo.registrations: false
topology.max.task.parallelism: null
topology.max.spout.pending: null
topology.state.synchronization.timeout.secs: 60
topology.stats.sample.rate: 0.05
topology.builtin.metrics.bucket.size.secs: 60
topology.fall.back.on.java.serialization: true
topology.worker.childopts: null
topology.executor.receive.buffer.size: 1024 # batched
topology.executor.send.buffer.size: 1024 # individual messages
topology.receiver.buffer.size: 8 # setting it too high causes a lot of problems (heartbeat thread gets starved, throughput plummets)
topology.transfer.buffer.size: 1024 # batched
topology.tick.tuple.freq.secs: null
topology.worker.shared.thread.pool.size: 4
topology.disruptor.wait.strategy: "com.lmax.disruptor.BlockingWaitStrategy"
topology.spout.wait.strategy: "backtype.storm.spout.SleepSpoutWaitStrategy"
topology.sleep.spout.wait.strategy.time.ms: 1
topology.error.throttle.interval.secs: 10
topology.max.error.report.per.interval: 5
topology.kryo.factory: "backtype.storm.serialization.DefaultKryoFactory"
topology.tuple.serializer: "backtype.storm.serialization.types.ListDelegateSerializer"
topology.trident.batch.emit.interval.millis: 500
topology.testing.always.try.serialize: false
topology.classpath: null
topology.environment: null
topology.bolts.outgoing.overflow.buffer.enable: false

dev.zookeeper.path: "/tmp/dev-storm-zookeeper"
```

Note: if a machine's network interfaces include an IPv6 address, Storm binds to the IPv6 address by default at startup, but Storm cannot actually use IPv6, so you need to add `-Djava.net.preferIPv4Stack=true` to the startup script (`bin/storm`). With IPv6 in play the processes may appear to start normally, yet visiting the Storm UI yields a page error like the following:

Storm UI getting Internal Server Error :org.apache.thrift7.transport.TTransportException: java.net.ConnectException: Connection refused

After making the change above, the error goes away.
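Instead of editing the `bin/storm` launcher, the same JVM flag can also be appended to the per-daemon `childopts` keys in `storm.yaml`. This is an alternative approach, not what the original note did; the heap sizes shown are the defaults from `defaults.yaml` above:

```yaml
nimbus.childopts: "-Xmx1024m -Djava.net.preferIPv4Stack=true"
ui.childopts: "-Xmx768m -Djava.net.preferIPv4Stack=true"
supervisor.childopts: "-Xmx256m -Djava.net.preferIPv4Stack=true"
worker.childopts: "-Xmx768m -Djava.net.preferIPv4Stack=true"
```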

7. Start the daemons:

```shell
storm nimbus &
storm supervisor &
storm ui &
```
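The ad-hoc `&` launches above die with the shell session. A slightly more durable sketch detaches each daemon with `nohup` and captures its output (the helper name and `.out` file names are illustrative, not part of Storm):

```shell
# start_daemon NAME CMD... : detach CMD and log its output to NAME.out
start_daemon() {
  name="$1"; shift
  nohup "$@" > "$name.out" 2>&1 &
  echo "$name started (pid $!)"
}
start_daemon nimbus     storm nimbus
start_daemon supervisor storm supervisor
start_daemon ui         storm ui
```

For production use, a process supervisor such as supervisord or a systemd unit is the more common choice, since Nimbus and the supervisors are designed to be fail-fast and expect something to restart them.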

8. Test: open http://{nimbus host}:8081 in a browser (the `ui.port` configured above; the default is 8080). The Storm UI should load and show something like:

Storm UI

Cluster Summary

| Version | Nimbus uptime | Supervisors | Used slots | Free slots | Total slots | Executors | Tasks |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 0.9.3 | 1m 45s | 1 | 0 | 4 | 4 | 0 | 0 |

Topology summary

(empty)

Supervisor summary

| Id | Host | Uptime | Slots | Used slots |
| --- | --- | --- | --- | --- |
| cbe9e1bd-5e43-4749-b187-c9a2c89081ba | nacey-master | 1m 18s | 4 | 0 |

Nimbus Configuration (excerpt)

| Key | Value |
| --- | --- |
| dev.zookeeper.path | /tmp/dev-storm-zookeeper |
| drpc.childopts | -Xmx768m |
| drpc.invocations.port | 3773 |
| drpc.port | 3772 |
| drpc.queue.size | 128 |
| drpc.request.timeout.secs | 600 |
| drpc.worker.threads | 64 |
| java.library.path | /usr/local/lib:/opt/local/lib:/usr/lib |
| logviewer.appender.name | A1 |
| logviewer.childopts | -Xmx128m |
| logviewer.port | 8000 |
| nimbus.childopts | -Xmx1024m |
| nimbus.cleanup.inbox.freq.secs | |

Then package this configured Storm directory and push it to the other machines in the cluster.

On each of those machines, just run `storm supervisor`.

http://blog.csdn.net/nacey5201/article/details/44467755
