OpenStack Installation Guide
Deployment Architecture
To better illustrate how the OpenStack components are deployed in a distributed fashion, and how the logical networks differ, this lab does not use an all-in-one deployment. Instead, the services are spread across multiple nodes, which also makes later study and experimentation easier.

網(wǎng)絡(luò)拓?fù)?/h2>
網(wǎng)絡(luò)拓?fù)?/div>
環(huán)境準(zhǔn)備
本實(shí)驗(yàn)采用Virtualbox Windows 版作為虛擬化平臺,模擬相應(yīng)的物理網(wǎng)絡(luò)和物理服務(wù)器,如果需要部署到真實(shí)的物理環(huán)境,此步驟可以直接替換為在物理機(jī)上相應(yīng)的配置,其原理相同。
Virtualbox 下載地址:https://www.virtualbox.org/wiki/Downloads
虛擬網(wǎng)絡(luò)
需要新建3個(gè)虛擬網(wǎng)絡(luò)Net0、Net1和Net2,其在virtual box 中對應(yīng)配置如下。
Net0:
Network name: VirtualBox host-only Ethernet Adapter#2
Purpose: administrator / management network
IP block: 10.20.0.0/24
DHCP: disable
Linux device: eth0
Net1:
Network name: VirtualBox host-only Ethernet Adapter#3
Purpose: public network
DHCP: disable
IP block: 172.16.0.0/24
Linux device: eth1
Net2:
Network name: VirtualBox host-only Ethernet Adapter#4
Purpose: Storage/private network
DHCP: disable
IP block: 192.168.4.0/24
Linux device: eth2
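On the host, the same networks can also be created with VBoxManage instead of the GUI. A sketch for Net0 (the adapter name and the host-side address 10.20.0.1 are assumptions; VirtualBox assigns adapter numbers itself, so check the name printed by hostonlyif create); repeat for Net1 and Net2:

```shell
# Create a new host-only adapter; VirtualBox prints the assigned name.
VBoxManage hostonlyif create
# Give the host side an address in the Net0 block (assumed adapter name).
VBoxManage hostonlyif ipconfig "VirtualBox Host-Only Ethernet Adapter #2" --ip 10.20.0.1 --netmask 255.255.255.0
# The plan requires DHCP disabled on this network.
VBoxManage dhcpserver modify --ifname "VirtualBox Host-Only Ethernet Adapter #2" --disable
```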
Virtual Machines
Create three virtual machines, VM0, VM1 and VM2, configured as follows.
VM0:
Name: controller0
vCPU: 1
Memory: 1G
Disk: 30G
Networks: Net0
VM1:
Name: network0
vCPU: 1
Memory: 1G
Disk: 30G
Networks: Net0, Net1, Net2
VM2:
Name: compute0
vCPU: 2
Memory: 2G
Disk: 30G
Networks: Net0, Net2
Network Settings
controller0
eth0: 10.20.0.10 (management network)
eth1: (disabled)
eth2: (disabled)
network0
eth0: 10.20.0.20 (management network)
eth1: 172.16.0.20 (public/external network)
eth2: 192.168.4.20 (private network)
compute0
eth0: 10.20.0.30 (management network)
eth1: (disabled)
eth2: 192.168.4.30 (private network)
compute1 (optional)
eth0: 10.20.0.31 (management network)
eth1: (disabled)
eth2: 192.168.4.31 (private network)
Operating System Preparation
This lab uses the Linux distribution CentOS 6.5 x86_64. During installation choose the "Basic" package set; after the system is installed, configure the following additional YUM repositories.
ISO download: http://mirrors.163.com/centos/6.5/isos/x86_64/CentOS-6.5-x86_64-bin-DVD1.iso
EPEL repository: http://dl.fedoraproject.org/pub/epel/6/x86_64/
RDO repository: https://repos.fedorapeople.org/repos/openstack/EOL/openstack-icehouse/
Run the following commands to set up the repositories automatically. Once they are in place, update all RPM packages; because the kernel gets upgraded, reboot the system afterwards.
yum install -y http://repos.fedorapeople.org/repos/openstack/openstack-icehouse/rdo-release-icehouse-4.noarch.rpm
yum install -y http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
# If the URLs above are no longer available, try the following instead
wget https://raw.githubusercontent.com/naototty/centos7-rdo-icehouse/master/rdo-release-icehouse-4.noarch.rpm --user-agent="Mozilla/5.0 (X11;U;Linux i686;en-US;rv:1.9.0.3) Geco/2008092416 Firefox/3.0.3" --no-check-certificate
rpm -ivh rdo-release-icehouse-4.noarch.rpm
wget https://raw.githubusercontent.com/mu228/ssr/master/epel-release-6-8.noarch.rpm
rpm -ivh epel-release-6-8.noarch.rpm
yum update -y
reboot
The installation and configuration can now begin.
Common Configuration (all nodes)
Run the following commands on every node.
Edit the hosts file
vi /etc/hosts
127.0.0.1 localhost
::1 localhost
10.20.0.10 controller0
10.20.0.20 network0
10.20.0.30 compute0
Disable SELinux
vi /etc/selinux/config
SELINUX=disabled
Install the NTP service
yum install ntp -y
service ntpd start
chkconfig ntpd on
Edit the NTP configuration to synchronize time from controller0 (on every node except controller0).
vi /etc/ntp.conf
server 10.20.0.10
Synchronize immediately and check that time synchronization works (on every node except controller0).
ntpdate -u 10.20.0.10
service ntpd restart
ntpq -p
Flush the firewall rules
vi /etc/sysconfig/iptables
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
COMMIT
Restart the firewall and check that the rules took effect
service iptables restart
iptables -L
Install openstack-utils so that configuration files can be edited directly from the command line
yum install -y openstack-utils
Base Services Installation and Configuration (controller0 node)
The base services are the NTP service, a MySQL database and an AMQP message broker; this lab uses MySQL and Qpid as the implementations.
Edit the NTP configuration to synchronize from the local clock (127.127.1.0).
vi /etc/ntp.conf
server 127.127.1.0
fudge 127.127.1.0 stratum 10
Restart the NTP service
service ntpd restart
Install MySQL
yum install -y mysql mysql-server MySQL-python
# CentOS 7 has no MySQL packages; use MariaDB instead
yum install -y mariadb-server
Edit the MySQL configuration
vi /etc/my.cnf
[mysqld]
bind-address = 0.0.0.0
default-storage-engine = innodb
innodb_file_per_table
collation-server = utf8_general_ci
init-connect = 'SET NAMES utf8'
character-set-server = utf8
Start the MySQL service
service mysqld start
chkconfig mysqld on
# On CentOS 7
service mariadb start
chkconfig mariadb on
Set the MySQL root password interactively; use "openstack" as the password
mysql_secure_installation
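mysql_secure_installation prompts interactively; the same hardening can also be scripted. A rough non-interactive equivalent, assuming a fresh install whose root password is still empty:

```shell
mysql -u root <<'SQL'
-- set the root password to "openstack"
UPDATE mysql.user SET Password = PASSWORD('openstack') WHERE User = 'root';
-- drop anonymous users and the test database, as the interactive tool does
DELETE FROM mysql.user WHERE User = '';
DROP DATABASE IF EXISTS test;
FLUSH PRIVILEGES;
SQL
```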
Install the Qpid message broker and configure it so that clients can use it without authentication
yum install -y qpid-cpp-server
vi /etc/qpidd.conf
auth=no
After changing the configuration, start the Qpid daemon and enable it at boot
service qpidd start
chkconfig qpidd on
Controller Node Installation (controller0)
Set the hostname
vi /etc/sysconfig/network
HOSTNAME=controller0
Configure the network interface
vi /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static
IPADDR=10.20.0.10
NETMASK=255.255.255.0
After editing the network configuration, restart the network service
service network restart
Keystone Installation and Configuration
Install the Keystone packages
yum install openstack-keystone python-keystoneclient -y
Set the admin token for Keystone
ADMIN_TOKEN=$(openssl rand -hex 10)
echo $ADMIN_TOKEN
openstack-config --set /etc/keystone/keystone.conf DEFAULT admin_token $ADMIN_TOKEN
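The admin token is nothing special, just 10 random bytes rendered as 20 lowercase hex characters; a quick sanity check:

```shell
# Generate a token the same way and verify its shape.
ADMIN_TOKEN=$(openssl rand -hex 10)
echo "$ADMIN_TOKEN" | grep -Eq '^[0-9a-f]{20}$' && echo "token format ok"
# prints "token format ok"
```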
Configure the database connection
openstack-config --set /etc/keystone/keystone.conf sql connection mysql://keystone:openstack@controller0/keystone
openstack-config --set /etc/keystone/keystone.conf DEFAULT debug True
openstack-config --set /etc/keystone/keystone.conf DEFAULT verbose True
Configure Keystone to use PKI tokens
keystone-manage pki_setup --keystone-user keystone --keystone-group keystone
chown -R keystone:keystone /etc/keystone/ssl
chmod -R o-rwx /etc/keystone/ssl
Create the Keystone database and grant privileges
mysql -uroot -popenstack -e "CREATE DATABASE keystone;"
mysql -uroot -popenstack -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'openstack';"
mysql -uroot -popenstack -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'controller0' IDENTIFIED BY 'openstack';"
mysql -uroot -popenstack -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'openstack';"
Initialize the Keystone database
su -s /bin/sh -c "keystone-manage db_sync" keystone
Alternatively, initialize the database directly with the openstack-db tool
openstack-db --init --service keystone --password openstack
Start the Keystone service
service openstack-keystone start
chkconfig openstack-keystone on
Set the authentication credentials
export OS_SERVICE_TOKEN=$ADMIN_TOKEN
export OS_SERVICE_ENDPOINT=http://controller0:35357/v2.0
Create the tenants used by the administrator and by the system services
keystone tenant-create --name=admin --description="Admin Tenant"
keystone tenant-create --name=service --description="Service Tenant"
Create the admin user
keystone user-create --name=admin --pass=admin --email=admin@example.com
Create the admin role
keystone role-create --name=admin
Assign the admin role to the admin user
keystone user-role-add --user=admin --tenant=admin --role=admin
Register the Keystone service in the service catalog
keystone service-create --name=keystone --type=identity --description="Keystone Identity Service"
Create the endpoint for the Keystone service
keystone endpoint-create \
--service-id=$(keystone service-list | awk '/ identity / {print $2}') \
--publicurl=http://controller0:5000/v2.0 \
--internalurl=http://controller0:5000/v2.0 \
--adminurl=http://controller0:35357/v2.0
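The `--service-id=$(keystone service-list | awk '/ identity / {print $2}')` substitution just picks the id column out of the table the client prints: awk selects the row containing " identity ", and `$2` is the first cell after the leading `|`. A small illustration against a mock table (the id is made up):

```shell
# Mock `keystone service-list` output with a hypothetical id.
sample='+----------------------------------+----------+----------+----------------------------+
|                id                |   name   |   type   |        description         |
+----------------------------------+----------+----------+----------------------------+
| 1a2b3c4d5e6f47a8b9c0d1e2f3a4b5c6 | keystone | identity | Keystone Identity Service  |
+----------------------------------+----------+----------+----------------------------+'
echo "$sample" | awk '/ identity / {print $2}'
# prints 1a2b3c4d5e6f47a8b9c0d1e2f3a4b5c6
```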
Verify the Keystone installation
Unset the token environment variables first; otherwise they interfere with authenticating as the new user.
unset OS_SERVICE_TOKEN OS_SERVICE_ENDPOINT
First verify from the command line
keystone --os-username=admin --os-password=admin --os-auth-url=http://controller0:35357/v2.0 token-get
keystone --os-username=admin --os-password=admin --os-tenant-name=admin --os-auth-url=http://controller0:35357/v2.0 token-get
Then authenticate through environment variables; save the credentials to a file
vi ~/keystonerc
export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://controller0:35357/v2.0
Source the file to make it take effect
source ~/keystonerc
keystone token-get
Keystone installation is complete.
Glance Installation and Configuration
Install the Glance packages
yum install openstack-glance python-glanceclient -y
Configure the Glance database connection
openstack-config --set /etc/glance/glance-api.conf DEFAULT sql_connection mysql://glance:openstack@controller0/glance
openstack-config --set /etc/glance/glance-registry.conf DEFAULT sql_connection mysql://glance:openstack@controller0/glance
Initialize the Glance database
openstack-db --init --service glance --password openstack
Create the glance user
keystone user-create --name=glance --pass=glance --email=glance@example.com
Assign it the admin role in the service tenant
keystone user-role-add --user=glance --tenant=service --role=admin
Create the glance service
keystone service-create --name=glance --type=image --description="Glance Image Service"
Create the endpoint for Glance
keystone endpoint-create \
--service-id=$(keystone service-list | awk '/ image / {print $2}') \
--publicurl=http://controller0:9292 \
--internalurl=http://controller0:9292 \
--adminurl=http://controller0:9292
Edit the glance-api and glance-registry configuration files with openstack-config
openstack-config --set /etc/glance/glance-api.conf DEFAULT debug True
openstack-config --set /etc/glance/glance-api.conf DEFAULT verbose True
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_uri http://controller0:5000
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_host controller0
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_port 35357
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_protocol http
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_user glance
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_password glance
openstack-config --set /etc/glance/glance-api.conf paste_deploy flavor keystone
openstack-config --set /etc/glance/glance-registry.conf DEFAULT debug True
openstack-config --set /etc/glance/glance-registry.conf DEFAULT verbose True
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_uri http://controller0:5000
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_host controller0
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_port 35357
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_protocol http
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken admin_user glance
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken admin_password glance
openstack-config --set /etc/glance/glance-registry.conf paste_deploy flavor keystone
Start the two Glance services
service openstack-glance-api start
service openstack-glance-registry start
chkconfig openstack-glance-api on
chkconfig openstack-glance-registry on
Download the CirrOS image to verify that Glance was installed successfully
wget http://cdn.download.cirros-cloud.net/0.3.1/cirros-0.3.1-x86_64-disk.img
glance image-create --progress --name="CirrOS 0.3.1" --disk-format=qcow2 --container-format=ovf --is-public=true < cirros-0.3.1-x86_64-disk.img
List the image just uploaded
glance image-list
If the image information is displayed, the installation succeeded.
Nova Installation and Configuration
yum install -y openstack-nova-api openstack-nova-cert openstack-nova-conductor \
openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler python-novaclient
Create the nova user in Keystone and assign the role
keystone user-create --name=nova --pass=nova --email=nova@example.com
keystone user-role-add --user=nova --tenant=service --role=admin
Register the service in Keystone
keystone service-create --name=nova --type=compute --description="Nova Compute Service"
Register the endpoint in Keystone
keystone endpoint-create \
--service-id=$(keystone service-list | awk '/ compute / {print $2}') \
--publicurl=http://controller0:8774/v2/%\(tenant_id\)s \
--internalurl=http://controller0:8774/v2/%\(tenant_id\)s \
--adminurl=http://controller0:8774/v2/%\(tenant_id\)s
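The `%\(tenant_id\)s` part of these URLs is not filled in here: it is a template placeholder that the services substitute with the caller's tenant id on each request (the backslashes only stop the shell from treating the parentheses specially). Roughly, with a made-up tenant id:

```shell
template='http://controller0:8774/v2/%(tenant_id)s'
tenant='f0e1d2c3b4a5968778695a4b3c2d1e0f'   # hypothetical tenant id
echo "$template" | sed "s/%(tenant_id)s/$tenant/"
# prints http://controller0:8774/v2/f0e1d2c3b4a5968778695a4b3c2d1e0f
```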
Configure the Nova MySQL connection
openstack-config --set /etc/nova/nova.conf database connection mysql://nova:openstack@controller0/nova
Initialize the database
openstack-db --init --service nova --password openstack
Configure nova.conf
openstack-config --set /etc/nova/nova.conf DEFAULT debug True
openstack-config --set /etc/nova/nova.conf DEFAULT verbose True
openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend qpid
openstack-config --set /etc/nova/nova.conf DEFAULT qpid_hostname controller0
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 10.20.0.10
openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_listen 10.20.0.10
openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_proxyclient_address 10.20.0.10
openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://controller0:5000
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_host controller0
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_protocol http
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_port 35357
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_user nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_password nova
Add the Keystone credentials to api-paste.ini
openstack-config --set /etc/nova/api-paste.ini filter:authtoken paste.filter_factory keystoneclient.middleware.auth_token:filter_factory
openstack-config --set /etc/nova/api-paste.ini filter:authtoken auth_host controller0
openstack-config --set /etc/nova/api-paste.ini filter:authtoken admin_tenant_name service
openstack-config --set /etc/nova/api-paste.ini filter:authtoken admin_user nova
openstack-config --set /etc/nova/api-paste.ini filter:authtoken admin_password nova
Start the services
service openstack-nova-api start
service openstack-nova-cert start
service openstack-nova-consoleauth start
service openstack-nova-scheduler start
service openstack-nova-conductor start
service openstack-nova-novncproxy start
Enable them at boot
chkconfig openstack-nova-api on
chkconfig openstack-nova-cert on
chkconfig openstack-nova-consoleauth on
chkconfig openstack-nova-scheduler on
chkconfig openstack-nova-conductor on
chkconfig openstack-nova-novncproxy on
Check that the services are healthy
nova-manage service list
[root@controller0 ~]# nova-manage service list
Binary Host Zone Status State Updated_At
nova-consoleauth controller0 internal enabled :-) 2013-11-12 11:14:56
nova-cert controller0 internal enabled :-) 2013-11-12 11:14:56
nova-scheduler controller0 internal enabled :-) 2013-11-12 11:14:56
nova-conductor controller0 internal enabled :-) 2013-11-12 11:14:56
Check the processes
[root@controller0 ~]# ps -ef|grep nova
nova 7240 1 1 23:11 ? 00:00:02 /usr/bin/python /usr/bin/nova-api --logfile /var/log/nova/api.log
nova 7252 1 1 23:11 ? 00:00:01 /usr/bin/python /usr/bin/nova-cert --logfile /var/log/nova/cert.log
nova 7264 1 1 23:11 ? 00:00:01 /usr/bin/python /usr/bin/nova-consoleauth --logfile /var/log/nova/consoleauth.log
nova 7276 1 1 23:11 ? 00:00:01 /usr/bin/python /usr/bin/nova-scheduler --logfile /var/log/nova/scheduler.log
nova 7288 1 1 23:11 ? 00:00:01 /usr/bin/python /usr/bin/nova-conductor --logfile /var/log/nova/conductor.log
nova 7300 1 0 23:11 ? 00:00:00 /usr/bin/python /usr/bin/nova-novncproxy --web /usr/share/novnc/
nova 7336 7240 0 23:11 ? 00:00:00 /usr/bin/python /usr/bin/nova-api --logfile /var/log/nova/api.log
nova 7351 7240 0 23:11 ? 00:00:00 /usr/bin/python /usr/bin/nova-api --logfile /var/log/nova/api.log
nova 7352 7240 0 23:11 ? 00:00:00 /usr/bin/python /usr/bin/nova-api --logfile /var/log/nova/api.log
Neutron Server Installation and Configuration
Install the Neutron server packages
yum install -y openstack-neutron openstack-neutron-ml2 python-neutronclient
Create the corresponding Neutron user, service and endpoint in Keystone
keystone user-create --name neutron --pass neutron --email neutron@example.com
keystone user-role-add --user neutron --tenant service --role admin
keystone service-create --name neutron --type network --description "OpenStack Networking"
keystone endpoint-create \
--service-id $(keystone service-list | awk '/ network / {print $2}') \
--publicurl http://controller0:9696 \
--adminurl http://controller0:9696 \
--internalurl http://controller0:9696
Create the Neutron database in MySQL
mysql -uroot -popenstack -e "CREATE DATABASE neutron;"
mysql -uroot -popenstack -e "GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'openstack';"
mysql -uroot -popenstack -e "GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'openstack';"
mysql -uroot -popenstack -e "GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'controller0' IDENTIFIED BY 'openstack';"
Configure the MySQL connection
openstack-config --set /etc/neutron/neutron.conf database connection mysql://neutron:openstack@controller0/neutron
Configure Neutron's Keystone authentication
openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_uri http://controller0:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_host controller0
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_protocol http
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_port 35357
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_user neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_password neutron
Configure Qpid for Neutron, plus notifications to Nova
openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_backend neutron.openstack.common.rpc.impl_qpid
openstack-config --set /etc/neutron/neutron.conf DEFAULT qpid_hostname controller0
openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_status_changes True
openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_data_changes True
openstack-config --set /etc/neutron/neutron.conf DEFAULT nova_url http://controller0:8774/v2
openstack-config --set /etc/neutron/neutron.conf DEFAULT nova_admin_username nova
openstack-config --set /etc/neutron/neutron.conf DEFAULT nova_admin_tenant_id $(keystone tenant-list | awk '/ service / { print $2 }')
openstack-config --set /etc/neutron/neutron.conf DEFAULT nova_admin_password nova
openstack-config --set /etc/neutron/neutron.conf DEFAULT nova_admin_auth_url http://controller0:35357/v2.0
Configure the Neutron ML2 plugin to use Open vSwitch
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
openstack-config --set /etc/neutron/neutron.conf DEFAULT core_plugin ml2
openstack-config --set /etc/neutron/neutron.conf DEFAULT service_plugins router
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers gre
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types gre
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers openvswitch
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_gre tunnel_id_ranges 1:1000
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_security_group True
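After the commands above, the relevant sections of /etc/neutron/plugins/ml2/ml2_conf.ini should read roughly as follows (rendering assumed; openstack-config writes `key = value` pairs into the named sections):

```ini
[ml2]
type_drivers = gre
tenant_network_types = gre
mechanism_drivers = openvswitch

[ml2_type_gre]
tunnel_id_ranges = 1:1000

[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group = True
```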
Configure Nova to use Neutron as its network service
openstack-config --set /etc/nova/nova.conf DEFAULT network_api_class nova.network.neutronv2.api.API
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_url http://controller0:9696
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_auth_strategy keystone
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_admin_tenant_name service
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_admin_username neutron
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_admin_password neutron
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_admin_auth_url http://controller0:35357/v2.0
openstack-config --set /etc/nova/nova.conf DEFAULT linuxnet_interface_driver nova.network.linux_net.LinuxOVSInterfaceDriver
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
openstack-config --set /etc/nova/nova.conf DEFAULT security_group_api neutron
openstack-config --set /etc/nova/nova.conf DEFAULT service_neutron_metadata_proxy true
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_metadata_proxy_shared_secret METADATA_SECRET
Restart the Nova services on the controller
service openstack-nova-api restart
service openstack-nova-scheduler restart
service openstack-nova-conductor restart
Start the Neutron server
service neutron-server start
chkconfig neutron-server on
Network Node Installation (network0 node)
Set the hostname
vi /etc/sysconfig/network
HOSTNAME=network0
Configure the network interfaces
vi /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static
IPADDR=10.20.0.20
NETMASK=255.255.255.0
vi /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static
IPADDR=172.16.0.20
NETMASK=255.255.255.0
vi /etc/sysconfig/network-scripts/ifcfg-eth2
DEVICE=eth2
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static
IPADDR=192.168.4.20
NETMASK=255.255.255.0
After editing the network configuration, restart the network service
service network restart
Install the Neutron packages
yum install -y openstack-neutron openstack-neutron-ml2 openstack-neutron-openvswitch
Enable IP forwarding
vi /etc/sysctl.conf
net.ipv4.ip_forward=1
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
Apply the settings immediately
sysctl -p
Configure Neutron's Keystone authentication
openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_uri http://controller0:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_host controller0
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_protocol http
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_port 35357
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_user neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_password neutron
Configure Qpid
openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_backend neutron.openstack.common.rpc.impl_qpid
openstack-config --set /etc/neutron/neutron.conf DEFAULT qpid_hostname controller0
Configure Neutron to use ML2 + Open vSwitch + GRE
openstack-config --set /etc/neutron/neutron.conf DEFAULT core_plugin ml2
openstack-config --set /etc/neutron/neutron.conf DEFAULT service_plugins router
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers gre
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types gre
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers openvswitch
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_gre tunnel_id_ranges 1:1000
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs local_ip 192.168.4.20
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs tunnel_type gre
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs enable_tunneling True
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_security_group True
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
cp /etc/init.d/neutron-openvswitch-agent /etc/init.d/neutron-openvswitch-agent.orig
sed -i 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g' /etc/init.d/neutron-openvswitch-agent
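The sed command rewrites the plugin config path inside the init script so the agent reads plugin.ini; commas serve as the s-command delimiter so the slashes in the paths need no escaping. Its effect on a sample line (the line itself is illustrative, not quoted from the script):

```shell
line='daemon --user neutron $exec --config-file /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini'
echo "$line" | sed 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g'
# prints daemon --user neutron $exec --config-file /etc/neutron/plugin.ini
```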
Configure the L3 agent
openstack-config --set /etc/neutron/l3_agent.ini DEFAULT interface_driver neutron.agent.linux.interface.OVSInterfaceDriver
openstack-config --set /etc/neutron/l3_agent.ini DEFAULT use_namespaces True
Configure the DHCP agent
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT interface_driver neutron.agent.linux.interface.OVSInterfaceDriver
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT dhcp_driver neutron.agent.linux.dhcp.Dnsmasq
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT use_namespaces True
Configure the metadata agent
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT auth_url http://controller0:5000/v2.0
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT auth_region regionOne
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT admin_tenant_name service
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT admin_user neutron
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT admin_password neutron
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT nova_metadata_ip controller0
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT metadata_proxy_shared_secret METADATA_SECRET
service openvswitch start
chkconfig openvswitch on
ovs-vsctl add-br br-int
ovs-vsctl add-br br-ex
ovs-vsctl add-port br-ex eth1
Modify the eth1 and br-ex network configuration
vi /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
ONBOOT=yes
BOOTPROTO=none
PROMISC=yes
vi /etc/sysconfig/network-scripts/ifcfg-br-ex
DEVICE=br-ex
TYPE=Bridge
ONBOOT=no
BOOTPROTO=none
Restart the network service
service network restart
Assign an IP address to br-ex
ip link set br-ex up
sudo ip addr add 172.16.0.20/24 dev br-ex
Start the Neutron services
service neutron-openvswitch-agent start
service neutron-l3-agent start
service neutron-dhcp-agent start
service neutron-metadata-agent start
chkconfig neutron-openvswitch-agent on
chkconfig neutron-l3-agent on
chkconfig neutron-dhcp-agent on
chkconfig neutron-metadata-agent on
Compute Node Installation (compute0 node)
Set the hostname
vi /etc/sysconfig/network
HOSTNAME=compute0
Configure the network interfaces
vi /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static
IPADDR=10.20.0.30
NETMASK=255.255.255.0
Per the deployment plan, eth1 on compute0 is unused, so leave it disabled.
vi /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
TYPE=Ethernet
ONBOOT=no
NM_CONTROLLED=yes
BOOTPROTO=none
vi /etc/sysconfig/network-scripts/ifcfg-eth2
DEVICE=eth2
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static
IPADDR=192.168.4.30
NETMASK=255.255.255.0
After editing the network configuration, restart the network service
service network restart
Install the Nova compute package
yum install -y openstack-nova-compute
Configure Nova
openstack-config --set /etc/nova/nova.conf database connection mysql://nova:openstack@controller0/nova
openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://controller0:5000
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_host controller0
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_protocol http
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_port 35357
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_user nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_password nova
openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend qpid
openstack-config --set /etc/nova/nova.conf DEFAULT qpid_hostname controller0
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 10.20.0.30
openstack-config --set /etc/nova/nova.conf DEFAULT vnc_enabled True
openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_listen 0.0.0.0
openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_proxyclient_address 10.20.0.30
openstack-config --set /etc/nova/nova.conf DEFAULT novncproxy_base_url http://controller0:6080/vnc_auto.html
openstack-config --set /etc/nova/nova.conf libvirt virt_type qemu
openstack-config --set /etc/nova/nova.conf DEFAULT glance_host controller0
Start the compute node services
service libvirtd start
service messagebus start
service openstack-nova-compute start
chkconfig libvirtd on
chkconfig messagebus on
chkconfig openstack-nova-compute on
On the controller node, check that the compute service has registered
nova-manage service list
The compute node service now appears in the list
[root@controller0 ~]# nova-manage service list
Binary Host Zone Status State Updated_At
nova-consoleauth controller0 internal enabled :-) 2014-07-19 09:04:18
nova-cert controller0 internal enabled :-) 2014-07-19 09:04:19
nova-conductor controller0 internal enabled :-) 2014-07-19 09:04:20
nova-scheduler controller0 internal enabled :-) 2014-07-19 09:04:20
nova-compute compute0 nova enabled :-) 2014-07-19 09:04:19
Install the Neutron ML2 plugin and the Open vSwitch agent
yum install -y openstack-neutron-ml2 openstack-neutron-openvswitch
Configure Neutron's Keystone authentication
openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_uri http://controller0:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_host controller0
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_protocol http
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_port 35357
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_user neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_password neutron
Configure Qpid for Neutron
openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_backend neutron.openstack.common.rpc.impl_qpid
openstack-config --set /etc/neutron/neutron.conf DEFAULT qpid_hostname controller0
Configure Neutron to use ML2 with Open vSwitch and GRE
openstack-config --set /etc/neutron/neutron.conf DEFAULT core_plugin ml2
openstack-config --set /etc/neutron/neutron.conf DEFAULT service_plugins router
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers gre
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types gre
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers openvswitch
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_gre tunnel_id_ranges 1:1000
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs local_ip 192.168.4.30
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs tunnel_type gre
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs enable_tunneling True
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_security_group True
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
cp /etc/init.d/neutron-openvswitch-agent /etc/init.d/neutron-openvswitch-agent.orig
sed -i 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g' /etc/init.d/neutron-openvswitch-agent
Configure Nova to use Neutron for network services
openstack-config --set /etc/nova/nova.conf DEFAULT network_api_class nova.network.neutronv2.api.API
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_url http://controller0:9696
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_auth_strategy keystone
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_admin_tenant_name service
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_admin_username neutron
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_admin_password neutron
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_admin_auth_url http://controller0:35357/v2.0
openstack-config --set /etc/nova/nova.conf DEFAULT linuxnet_interface_driver nova.network.linux_net.LinuxOVSInterfaceDriver
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
openstack-config --set /etc/nova/nova.conf DEFAULT security_group_api neutron
openstack-config --set /etc/nova/nova.conf DEFAULT service_neutron_metadata_proxy true
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_metadata_proxy_shared_secret METADATA_SECRET
service openvswitch start
chkconfig openvswitch on
ovs-vsctl add-br br-int
service openstack-nova-compute restart
service neutron-openvswitch-agent start
chkconfig neutron-openvswitch-agent on
Check that the agents started correctly
neutron agent-list
When the agents are running correctly, the output looks like this:
[root@controller0 ~]# neutron agent-list
+--------------------------------------+--------------------+----------+-------+----------------+
| id | agent_type | host | alive | admin_state_up |
+--------------------------------------+--------------------+----------+-------+----------------+
| 2c5318db-6bc2-4d09-b728-bbdd677b1e72 | L3 agent | network0 | :-) | True |
| 4a79ff75-6205-46d0-aec1-37f55a8d87ce | Open vSwitch agent | network0 | :-) | True |
| 5a5bd885-4173-4515-98d1-0edc0fdbf556 | Open vSwitch agent | compute0 | :-) | True |
| 5c9218ce-0ebd-494a-b897-5e2df0763837 | DHCP agent | network0 | :-) | True |
| 76f2069f-ba84-4c36-bfc0-3c129d49cbb1 | Metadata agent | network0 | :-) | True |
+--------------------------------------+--------------------+----------+-------+----------------+
Create the initial networks
Create the external network
neutron net-create ext-net --shared --router:external=True
Add a subnet to the external network
neutron subnet-create ext-net --name ext-subnet \
--allocation-pool start=172.16.0.100,end=172.16.0.200 \
--disable-dhcp --gateway 172.16.0.1 172.16.0.0/24
Create the tenant network
First create the demo user and tenant, and assign the role
keystone user-create --name=demo --pass=demo --email=demo@example.com
keystone tenant-create --name=demo --description="Demo Tenant"
keystone user-role-add --user=demo --role=_member_ --tenant=demo
Create the tenant network demo-net
neutron net-create demo-net
Add a subnet to the tenant network
neutron subnet-create demo-net --name demo-subnet --gateway 192.168.1.1 192.168.1.0/24
Create a router for the tenant network and connect it to the external network
neutron router-create demo-router
Attach demo-net to the router
neutron router-interface-add demo-router demo-subnet
Set the default gateway for demo-router
neutron router-gateway-set demo-router ext-net
Boot an instance
nova boot --flavor m1.tiny --image $(nova image-list | awk '/ CirrOS / {print $2}') \
--nic net-id=$(neutron net-list | awk '/ demo-net / {print $2}') \
--security-group default demo-instance1
Dashboard installation
Install the Dashboard packages
yum install memcached python-memcached mod_wsgi openstack-dashboard
Configure memcached caching
vi /etc/openstack-dashboard/local_settings
CACHES = {
'default': {
'BACKEND' : 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION' : '127.0.0.1:11211'
}
}
Configure the Keystone hostname
vi /etc/openstack-dashboard/local_settings
OPENSTACK_HOST = "controller0"
Start the Dashboard services
service httpd start
service memcached start
chkconfig httpd on
chkconfig memcached on
Verify in a browser; user name: admin, password: admin
Cinder installation
Cinder controller installation
First install the Cinder API on the controller0 node
yum install openstack-cinder -y
Configure the Cinder database connection
openstack-config --set /etc/cinder/cinder.conf database connection mysql://cinder:openstack@controller0/cinder
Initialize the database
mysql -uroot -popenstack -e "CREATE DATABASE cinder;"
mysql -uroot -popenstack -e "GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'openstack';"
mysql -uroot -popenstack -e "GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'openstack';"
mysql -uroot -popenstack -e "GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'controller0' IDENTIFIED BY 'openstack';"
su -s /bin/sh -c "cinder-manage db sync" cinder
Alternatively, initialize the database with the openstack-db tool
openstack-db --init --service cinder --password openstack
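Every service database in this guide gets the same three GRANT statements (for localhost, %, and controller0). The repeated SQL can be generated with a small helper; the grant_sql function name is purely illustrative and not part of the guide:

```shell
# Emit the three GRANT statements used throughout this guide for a
# given service name and password (hypothetical helper, for clarity).
grant_sql() {
  svc=$1; pass=$2
  for host in localhost '%' controller0; do
    echo "GRANT ALL PRIVILEGES ON ${svc}.* TO '${svc}'@'${host}' IDENTIFIED BY '${pass}';"
  done
}

# Each emitted line could then be passed to mysql -uroot -p<pass> -e "<sql>"
grant_sql cinder openstack
```

The same helper would cover the keystone, glance, nova and neutron databases created elsewhere in this guide.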
Create the cinder service user in Keystone
keystone user-create --name=cinder --pass=cinder --email=cinder@example.com
keystone user-role-add --user=cinder --tenant=service --role=admin
Register a cinder service in Keystone
keystone service-create --name=cinder --type=volume --description="OpenStack Block Storage"
Create an endpoint for cinder
keystone endpoint-create \
--service-id=$(keystone service-list | awk '/ volume / {print $2}') \
--publicurl=http://controller0:8776/v1/%\(tenant_id\)s \
--internalurl=http://controller0:8776/v1/%\(tenant_id\)s \
--adminurl=http://controller0:8776/v1/%\(tenant_id\)s
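The $(keystone service-list | awk '/ volume / {print $2}') subshell used in the endpoint commands simply pulls the id column out of the CLI's ASCII table. A self-contained sketch of the same awk pattern against a made-up table (the ids below are invented for illustration):

```shell
# Fake "keystone service-list" style table; ids are placeholders.
list='+--------+----------+----------+
|   id   |   name   |   type   |
+--------+----------+----------+
| abc123 | cinder | volume |
| def456 | cinderv2 | volumev2 |
+--------+----------+----------+'

# " volume " (with surrounding spaces) matches only the v1 row, and
# $2 is the id because the leading "|" counts as field 1.
id=$(echo "$list" | awk '/ volume / {print $2}')
echo "$id"
```

Note that the spaces around "volume" keep the pattern from also matching the volumev2 row.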
Register a cinderv2 service in Keystone
keystone service-create --name=cinderv2 --type=volumev2 --description="OpenStack Block Storage v2"
Create an endpoint for cinderv2
keystone endpoint-create \
--service-id=$(keystone service-list | awk '/ volumev2 / {print $2}') \
--publicurl=http://controller0:8776/v2/%\(tenant_id\)s \
--internalurl=http://controller0:8776/v2/%\(tenant_id\)s \
--adminurl=http://controller0:8776/v2/%\(tenant_id\)s
Configure Cinder Keystone authentication
openstack-config --set /etc/cinder/cinder.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_uri http://controller0:5000
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_host controller0
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_protocol http
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_port 35357
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken admin_user cinder
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken admin_password cinder
Configure Qpid
openstack-config --set /etc/cinder/cinder.conf DEFAULT rpc_backend cinder.openstack.common.rpc.impl_qpid
openstack-config --set /etc/cinder/cinder.conf DEFAULT qpid_hostname controller0
Start the Cinder controller services
service openstack-cinder-api start
service openstack-cinder-scheduler start
chkconfig openstack-cinder-api on
chkconfig openstack-cinder-scheduler on
Cinder block storage node installation
Before performing the steps below, remember to apply the common configuration first (NTP, hosts, etc.).
Before starting, create a new disk on cinder0 for block storage allocation, e.g.:
/dev/sdb
Hostname setup
vi /etc/sysconfig/network
HOSTNAME=cinder0
Network interface configuration
vi /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static
IPADDR=10.20.0.40
NETMASK=255.255.255.0
vi /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static
IPADDR=172.16.0.40
NETMASK=255.255.255.0
vi /etc/sysconfig/network-scripts/ifcfg-eth2
DEVICE=eth2
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static
IPADDR=192.168.4.40
NETMASK=255.255.255.0
After editing the network configuration files, restart the network service
service network restart
Network topology
(figure: network topology including the cinder0 node)
Install the Cinder packages
yum install -y openstack-cinder scsi-target-utils
Create the LVM physical volume and volume group backing Cinder block storage
pvcreate /dev/sdb
vgcreate cinder-volumes /dev/sdb
Add a filter entry to the devices section in the /etc/lvm/lvm.conf file to keep LVM from scanning devices used by virtual machines
vi /etc/lvm/lvm.conf
devices {
...
filter = [ "a/sda1/", "a/sdb/", "r/.*/"]
...
}
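LVM evaluates the filter patterns in order and the first match wins: "a" accepts the device, "r" rejects it, so the trailing r/.*/ rejects anything not accepted earlier. A minimal simulation of that first-match-wins logic (this is an illustrative sketch, not LVM's actual matching code):

```shell
# First-match-wins evaluation of the filter above:
# accept sda1, accept sdb, reject everything else.
lvm_filter() {
  for pat in 'a:sda1' 'a:sdb' 'r:.*'; do
    action=${pat%%:*}
    regex=${pat#*:}
    if echo "$1" | grep -Eq "$regex"; then
      if [ "$action" = a ]; then echo accept; else echo reject; fi
      return
    fi
  done
}

lvm_filter /dev/sdb    # accept (the cinder-volumes PV)
lvm_filter /dev/sdc    # reject (falls through to r/.*/)
```

This is why the node's root disk partition sda1 must be listed explicitly: without it, the final reject-all pattern would hide it from LVM too.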
Configure Keystone authentication
openstack-config --set /etc/cinder/cinder.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_uri http://controller0:5000
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_host controller0
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_protocol http
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_port 35357
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken admin_user cinder
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken admin_password cinder
Configure Qpid
openstack-config --set /etc/cinder/cinder.conf DEFAULT rpc_backend cinder.openstack.common.rpc.impl_qpid
openstack-config --set /etc/cinder/cinder.conf DEFAULT qpid_hostname controller0
Configure the database connection
openstack-config --set /etc/cinder/cinder.conf database connection mysql://cinder:openstack@controller0/cinder
Configure the Glance server
openstack-config --set /etc/cinder/cinder.conf DEFAULT glance_host controller0
Configure my_ip for cinder-volume; this IP determines which network interface carries the storage traffic
openstack-config --set /etc/cinder/cinder.conf DEFAULT my_ip 192.168.4.40
Configure the iSCSI target service to discover Block Storage volumes
vi /etc/tgt/targets.conf
include /etc/cinder/volumes/*
Start the cinder-volume service
service openstack-cinder-volume start
service tgtd start
chkconfig openstack-cinder-volume on
chkconfig tgtd on
Swift installation
Install the storage node
Before performing the steps below, remember to apply the common configuration first (NTP, hosts, etc.).
Before starting, create a new disk on swift0 for Swift data storage, e.g.:
/dev/sdb
Once the disk is created, boot the OS and partition the new disk
fdisk /dev/sdb
mkfs.xfs /dev/sdb1
echo "/dev/sdb1 /srv/node/sdb1 xfs noatime,nodiratime,nobarrier,logbufs=8 0 0" >> /etc/fstab
mkdir -p /srv/node/sdb1
mount /srv/node/sdb1
chown -R swift:swift /srv/node
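The mkfs/fstab/mount sequence above is repeated verbatim on every storage node in this guide. A tiny sketch that emits the fstab entry for a given device and mount point (the fstab_line name is made up here; the mount options simply mirror the ones used above):

```shell
# Build the fstab entry used for swift data disks in this guide.
fstab_line() {
  echo "$1 $2 xfs noatime,nodiratime,nobarrier,logbufs=8 0 0"
}

fstab_line /dev/sdb1 /srv/node/sdb1
```

Appending its output to /etc/fstab with >> reproduces the echo command shown above.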
Hostname setup
vi /etc/sysconfig/network
HOSTNAME=swift0
Network interface configuration
vi /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static
IPADDR=10.20.0.50
NETMASK=255.255.255.0
vi /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static
IPADDR=172.16.0.50
NETMASK=255.255.255.0
vi /etc/sysconfig/network-scripts/ifcfg-eth2
DEVICE=eth2
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static
IPADDR=192.168.4.50
NETMASK=255.255.255.0
After editing the network configuration files, restart the network service
service network restart
Network topology
(The Cinder node part is omitted here.)
(figure: network topology including the swift0 node)
Install the Swift storage node packages
yum install -y openstack-swift-account openstack-swift-container openstack-swift-object xfsprogs xinetd
Configure the object, container and account configuration files
openstack-config --set /etc/swift/account-server.conf DEFAULT bind_ip 10.20.0.50
openstack-config --set /etc/swift/container-server.conf DEFAULT bind_ip 10.20.0.50
openstack-config --set /etc/swift/object-server.conf DEFAULT bind_ip 10.20.0.50
Configure the directories to synchronize in the rsyncd configuration file
vi /etc/rsyncd.conf
uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = 192.168.4.50
[account]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/account.lock
[container]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/container.lock
[object]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/object.lock
vi /etc/xinetd.d/rsync
disable = no
service xinetd start
chkconfig xinetd on
Create the swift recon cache directory and set its permissions:
mkdir -p /var/swift/recon
chown -R swift:swift /var/swift/recon
Install the swift-proxy service
Create a user for Swift in Keystone
keystone user-create --name=swift --pass=swift --email=swift@example.com
Grant the swift user the admin role
keystone user-role-add --user=swift --tenant=service --role=admin
Register an object storage service for Swift
keystone service-create --name=swift --type=object-store --description="OpenStack Object Storage"
Add an endpoint for Swift
keystone endpoint-create \
--service-id=$(keystone service-list | awk '/ object-store / {print $2}') \
--publicurl='http://controller0:8080/v1/AUTH_%(tenant_id)s' \
--internalurl='http://controller0:8080/v1/AUTH_%(tenant_id)s' \
--adminurl=http://controller0:8080
Install the swift-proxy packages
yum install -y openstack-swift-proxy memcached python-swiftclient python-keystone-auth-token
Add the configuration; once it is complete, copy the file to every storage node
openstack-config --set /etc/swift/swift.conf swift-hash swift_hash_path_prefix xrfuniounenqjnw
openstack-config --set /etc/swift/swift.conf swift-hash swift_hash_path_suffix fLIbertYgibbitZ
scp /etc/swift/swift.conf root@10.20.0.50:/etc/swift/
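The swift_hash_path_prefix/suffix values are site-secret salts mixed into the ring's consistent hashing, and they must be identical on every node; the literal strings above are only examples. One way to generate a random value of your own (assuming standard coreutils):

```shell
# 8 random bytes rendered as 16 hex characters.
suffix=$(head -c 8 /dev/urandom | od -An -tx1 | tr -d ' \n')
echo "$suffix"
```

Once set and deployed, these values must never change, or every object's ring location changes with them.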
Change the default memcached listen address
vi /etc/sysconfig/memcached
OPTIONS="-l 10.20.0.10"
Start memcached
service memcached restart
chkconfig memcached on
Modify the proxy server configuration
vi /etc/swift/proxy-server.conf
openstack-config --set /etc/swift/proxy-server.conf filter:keystone operator_roles Member,admin,swiftoperator
openstack-config --set /etc/swift/proxy-server.conf filter:authtoken auth_host controller0
openstack-config --set /etc/swift/proxy-server.conf filter:authtoken auth_port 35357
openstack-config --set /etc/swift/proxy-server.conf filter:authtoken admin_user swift
openstack-config --set /etc/swift/proxy-server.conf filter:authtoken admin_tenant_name service
openstack-config --set /etc/swift/proxy-server.conf filter:authtoken admin_password swift
openstack-config --set /etc/swift/proxy-server.conf filter:authtoken delay_auth_decision true
Build the ring files
cd /etc/swift
swift-ring-builder account.builder create 18 3 1
swift-ring-builder container.builder create 18 3 1
swift-ring-builder object.builder create 18 3 1
swift-ring-builder account.builder add z1-10.20.0.50:6002R10.20.0.50:6005/sdb1 100
swift-ring-builder container.builder add z1-10.20.0.50:6001R10.20.0.50:6004/sdb1 100
swift-ring-builder object.builder add z1-10.20.0.50:6000R10.20.0.50:6003/sdb1 100
swift-ring-builder account.builder
swift-ring-builder container.builder
swift-ring-builder object.builder
swift-ring-builder account.builder rebalance
swift-ring-builder container.builder rebalance
swift-ring-builder object.builder rebalance
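In "create 18 3 1" the arguments are part_power, replica count and min_part_hours: the ring is divided into 2^18 partitions, each stored 3 times, and a partition is not moved again within 1 hour of its last move. Device strings such as z1-10.20.0.50:6002R10.20.0.50:6005/sdb1 pack zone, service ip:port, replication ip:port and device name into one token. A small sketch unpacking both (shell arithmetic and sed only; the parsing is illustrative, not swift's own code):

```shell
# 2^part_power partitions in the ring.
part_power=18
partitions=$((1 << part_power))
echo "$partitions"    # 262144

# Split a ring device string into zone and device name.
dev='z1-10.20.0.50:6002R10.20.0.50:6005/sdb1'
zone=$(echo "$dev" | sed 's/^z\([0-9]*\)-.*/\1/')
device=$(echo "$dev" | sed 's|.*/||')
echo "zone=$zone device=$device"
```

The trailing 100 on each add command is the device weight, which controls how many partitions the device is assigned relative to the others.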
Copy the ring files to the storage node
scp *ring.gz root@10.20.0.50:/etc/swift/
Fix the ownership of the Swift configuration files on the proxy server and the storage node
ssh root@10.20.0.50 "chown -R swift:swift /etc/swift"
chown -R swift:swift /etc/swift
Start the proxy service on controller0
service openstack-swift-proxy start
chkconfig openstack-swift-proxy on
Start the storage services on swift0
service openstack-swift-object start
service openstack-swift-object-replicator start
service openstack-swift-object-updater start
service openstack-swift-object-auditor start
service openstack-swift-container start
service openstack-swift-container-replicator start
service openstack-swift-container-updater start
service openstack-swift-container-auditor start
service openstack-swift-account start
service openstack-swift-account-replicator start
service openstack-swift-account-reaper start
service openstack-swift-account-auditor start
Enable the services at boot
chkconfig openstack-swift-object on
chkconfig openstack-swift-object-replicator on
chkconfig openstack-swift-object-updater on
chkconfig openstack-swift-object-auditor on
chkconfig openstack-swift-container on
chkconfig openstack-swift-container-replicator on
chkconfig openstack-swift-container-updater on
chkconfig openstack-swift-container-auditor on
chkconfig openstack-swift-account on
chkconfig openstack-swift-account-replicator on
chkconfig openstack-swift-account-reaper on
chkconfig openstack-swift-account-auditor on
Verify the Swift installation on the controller node
swift stat
Upload two files as a test
swift upload myfiles test.txt
swift upload myfiles test2.txt
Download the files just uploaded
swift download myfiles
Add a new Swift storage node
Before starting, create a new disk on swift1 for Swift data storage, e.g.:
/dev/sdb
Once the disk is created, boot the OS and partition the new disk
fdisk /dev/sdb
mkfs.xfs /dev/sdb1
echo "/dev/sdb1 /srv/node/sdb1 xfs noatime,nodiratime,nobarrier,logbufs=8 0 0" >> /etc/fstab
mkdir -p /srv/node/sdb1
mount /srv/node/sdb1
chown -R swift:swift /srv/node
Hostname setup
vi /etc/sysconfig/network
HOSTNAME=swift1
Network interface configuration
vi /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static
IPADDR=10.20.0.51
NETMASK=255.255.255.0
vi /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static
IPADDR=172.16.0.51
NETMASK=255.255.255.0
vi /etc/sysconfig/network-scripts/ifcfg-eth2
DEVICE=eth2
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static
IPADDR=192.168.4.51
NETMASK=255.255.255.0
After editing the network configuration files, restart the network service
service network restart
yum install -y openstack-swift-account openstack-swift-container openstack-swift-object xfsprogs xinetd
Configure the object, container and account configuration files
openstack-config --set /etc/swift/account-server.conf DEFAULT bind_ip 10.20.0.51
openstack-config --set /etc/swift/container-server.conf DEFAULT bind_ip 10.20.0.51
openstack-config --set /etc/swift/object-server.conf DEFAULT bind_ip 10.20.0.51
Configure the directories to synchronize in the rsyncd configuration file
vi /etc/rsyncd.conf
uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = 192.168.4.51
[account]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/account.lock
[container]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/container.lock
[object]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/object.lock
vi /etc/xinetd.d/rsync
disable = no
service xinetd start
chkconfig xinetd on
Create the swift recon cache directory and set its permissions:
mkdir -p /var/swift/recon
chown -R swift:swift /var/swift/recon
Rebalance the rings to include the new node
swift-ring-builder account.builder add z1-10.20.0.51:6002R10.20.0.51:6005/sdb1 100
swift-ring-builder container.builder add z1-10.20.0.51:6001R10.20.0.51:6004/sdb1 100
swift-ring-builder object.builder add z1-10.20.0.51:6000R10.20.0.51:6003/sdb1 100
swift-ring-builder account.builder
swift-ring-builder container.builder
swift-ring-builder object.builder
swift-ring-builder account.builder rebalance
swift-ring-builder container.builder rebalance
swift-ring-builder object.builder rebalance
At this point, all nodes of the OpenStack core components are installed!
References
Luo Yong, "OpenStack Manual Installation Guide (Icehouse)"
OS preparation
This lab uses the Linux distribution CentOS 6.5 x86_64. During OS installation, choose the "Basic" package set; after the system is installed, the following YUM repositories also need to be configured.
ISO download: http://mirrors.163.com/centos/6.5/isos/x86_64/CentOS-6.5-x86_64-bin-DVD1.iso
EPEL repository: http://dl.fedoraproject.org/pub/epel/6/x86_64/
RDO repository: https://repos.fedorapeople.org/repos/openstack/EOL/openstack-icehouse/
To configure them automatically, run the following commands. After the repositories are installed, update all RPM packages; because the kernel is upgraded, the operating system must be rebooted.
yum install -y http://repos.fedorapeople.org/repos/openstack/openstack-icehouse/rdo-release-icehouse-4.noarch.rpm
yum install -y http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
// If the links above are gone, try the following instead
wget https://raw.githubusercontent.com/naototty/centos7-rdo-icehouse/master/rdo-release-icehouse-4.noarch.rpm --user-agent="Mozilla/5.0 (X11;U;Linux i686;en-US;rv:1.9.0.3) Geco/2008092416 Firefox/3.0.3" --no-check-certificate
rpm -ivh rdo-release-icehouse-4.noarch.rpm
wget https://raw.githubusercontent.com/mu228/ssr/master/epel-release-6-8.noarch.rpm
rpm -ivh epel-release-6-8.noarch.rpm
yum update -y
reboot
Now the installation and configuration can begin.
Common configuration (all nodes)
The following commands must be executed on every node.
Edit the hosts file
vi /etc/hosts
127.0.0.1 localhost
::1 localhost
10.20.0.10 controller0
10.20.0.20 network0
10.20.0.30 compute0
Disable SELinux
vi /etc/selinux/config
SELINUX=disabled
Install the NTP service
yum install ntp -y
service ntpd start
chkconfig ntpd on
Edit the NTP configuration to synchronize time from controller0 (on every node except controller0).
vi /etc/ntp.conf
server 10.20.0.10
Synchronize immediately and check that time synchronization is configured correctly (on every node except controller0).
ntpdate -u 10.20.0.10
service ntpd restart
ntpq -p
Clear the firewall rules
vi /etc/sysconfig/iptables
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
COMMIT
Restart the firewall and verify the rules took effect
service iptables restart
iptables -L
Install openstack-utils so configuration files can be modified directly from the command line later
yum install -y openstack-utils
Basic services installation and configuration (controller0 node)
The basic services include the NTP service, the MySQL database service and the AMQP service; this guide uses MySQL and Qpid as the implementations of the latter two.
Edit the NTP configuration to synchronize from the local clock 127.127.1.0.
vi /etc/ntp.conf
server 127.127.1.0
Restart the NTP service
service ntpd restart
Install the MySQL service
yum install -y mysql mysql-server MySQL-python
// CentOS 7 has no mysql package; install mariadb instead
yum install -y mariadb-server
Edit the MySQL configuration
vi /etc/my.cnf
[mysqld]
bind-address = 0.0.0.0
default-storage-engine = innodb
innodb_file_per_table
collation-server = utf8_general_ci
init-connect = 'SET NAMES utf8'
character-set-server = utf8
Start the MySQL service
service mysqld start
chkconfig mysqld on
// CentOS 7:
service mariadb start
chkconfig mariadb on
Interactively set the MySQL root password to "openstack"
mysql_secure_installation
Install the Qpid message service and configure it so clients do not need authentication
yum install -y qpid-cpp-server
vi /etc/qpidd.conf
auth=no
After changing the configuration, restart the Qpid daemon
service qpidd start
chkconfig qpidd on
Controller node installation (controller0)
Hostname setup
vi /etc/sysconfig/network
HOSTNAME=controller0
Network interface configuration
vi /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static
IPADDR=10.20.0.10
NETMASK=255.255.255.0
After editing the network configuration files, restart the network service
service network restart
Keystone installation and configuration
Install the Keystone packages
yum install openstack-keystone python-keystoneclient -y
Set the admin token for Keystone
ADMIN_TOKEN=$(openssl rand -hex 10)
echo $ADMIN_TOKEN
openstack-config --set /etc/keystone/keystone.conf DEFAULT admin_token $ADMIN_TOKEN
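openssl rand -hex 10 emits 10 random bytes as 20 hexadecimal characters; any sufficiently random string works as the admin token. A quick shape check (assumes the openssl CLI is installed, which the CentOS base system provides):

```shell
# Generate a token and confirm it is 20 hex characters long.
ADMIN_TOKEN=$(openssl rand -hex 10)
echo "${#ADMIN_TOKEN}"    # 20
```

The token is a shared secret, so keep the shell history and keystone.conf readable only by root and the keystone user.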
Configure the database connection
openstack-config --set /etc/keystone/keystone.conf sql connection mysql://keystone:openstack@controller0/keystone
openstack-config --set /etc/keystone/keystone.conf DEFAULT debug True
openstack-config --set /etc/keystone/keystone.conf DEFAULT verbose True
Configure Keystone to use PKI tokens
keystone-manage pki_setup --keystone-user keystone --keystone-group keystone
chown -R keystone:keystone /etc/keystone/ssl
chmod -R o-rwx /etc/keystone/ssl
Create the Keystone database
mysql -uroot -popenstack -e "CREATE DATABASE keystone;"
mysql -uroot -popenstack -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'openstack';"
mysql -uroot -popenstack -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'controller0' IDENTIFIED BY 'openstack';"
mysql -uroot -popenstack -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'openstack';"
Initialize the Keystone database
su -s /bin/sh -c "keystone-manage db_sync" keystone
Alternatively, initialize the database with the openstack-db tool
openstack-db --init --service keystone --password openstack
Start the Keystone service
service openstack-keystone start
chkconfig openstack-keystone on
Set the authentication credentials
export OS_SERVICE_TOKEN=$ADMIN_TOKEN
export OS_SERVICE_ENDPOINT=http://controller0:35357/v2.0
Create tenants for the administrator and for the system services
keystone tenant-create --name=admin --description="Admin Tenant"
keystone tenant-create --name=service --description="Service Tenant"
Create the administrator user
keystone user-create --name=admin --pass=admin --email=admin@example.com
Create the administrator role
keystone role-create --name=admin
Assign the "admin" role to the administrator user
keystone user-role-add --user=admin --tenant=admin --role=admin
Register the Keystone identity service
keystone service-create --name=keystone --type=identity --description="Keystone Identity Service"
Create the endpoint associated with the Keystone service
keystone endpoint-create \
--service-id=$(keystone service-list | awk '/ identity / {print $2}') \
--publicurl=http://controller0:5000/v2.0 \
--internalurl=http://controller0:5000/v2.0 \
--adminurl=http://controller0:35357/v2.0
Verify the Keystone installation
Unset the earlier token variables; otherwise they interfere with authenticating as the new user.
unset OS_SERVICE_TOKEN OS_SERVICE_ENDPOINT
First verify from the command line
keystone --os-username=admin --os-password=admin --os-auth-url=http://controller0:35357/v2.0 token-get
keystone --os-username=admin --os-password=admin --os-tenant-name=admin --os-auth-url=http://controller0:35357/v2.0 token-get
Then authenticate via environment variables, saving the credentials to a file
vi ~/keystonerc
export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://controller0:35357/v2.0
Source the file to make it take effect
source keystonerc
keystone token-get
Keystone installation is complete.
Glance installation and configuration
Install the Glance packages
yum install openstack-glance python-glanceclient -y
Configure the Glance database connection
openstack-config --set /etc/glance/glance-api.conf DEFAULT sql_connection mysql://glance:openstack@controller0/glance
openstack-config --set /etc/glance/glance-registry.conf DEFAULT sql_connection mysql://glance:openstack@controller0/glance
Initialize the Glance database
openstack-db --init --service glance --password openstack
Create the glance user
keystone user-create --name=glance --pass=glance --email=glance@example.com
and grant it the admin role in the service tenant
keystone user-role-add --user=glance --tenant=service --role=admin
Create the glance service
keystone service-create --name=glance --type=image --description="Glance Image Service"
Create the Glance endpoint
keystone endpoint-create \
--service-id=$(keystone service-list | awk '/ image / {print $2}') \
--publicurl=http://controller0:9292 \
--internalurl=http://controller0:9292 \
--adminurl=http://controller0:9292
Use openstack-utils to edit the Glance API and registry configuration files
openstack-config --set /etc/glance/glance-api.conf DEFAULT debug True
openstack-config --set /etc/glance/glance-api.conf DEFAULT verbose True
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_uri http://controller0:5000
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_host controller0
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_port 35357
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_protocol http
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_user glance
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_password glance
openstack-config --set /etc/glance/glance-api.conf paste_deploy flavor keystone
openstack-config --set /etc/glance/glance-registry.conf DEFAULT debug True
openstack-config --set /etc/glance/glance-registry.conf DEFAULT verbose True
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_uri http://controller0:5000
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_host controller0
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_port 35357
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_protocol http
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken admin_user glance
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken admin_password glance
openstack-config --set /etc/glance/glance-registry.conf paste_deploy flavor keystone
Start the two Glance services
service openstack-glance-api start
service openstack-glance-registry start
chkconfig openstack-glance-api on
chkconfig openstack-glance-registry on
Download the CirrOS image to verify that the Glance installation succeeded
wget http://cdn.download.cirros-cloud.net/0.3.1/cirros-0.3.1-x86_64-disk.img
glance image-create --progress --name="CirrOS 0.3.1" --disk-format=qcow2 --container-format=ovf --is-public=true < cirros-0.3.1-x86_64-disk.img
List the image just uploaded
glance image-list
If the image information is displayed, the installation succeeded.
Nova installation and configuration
yum install -y openstack-nova-api openstack-nova-cert openstack-nova-conductor \
openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler python-novaclient
Create the Nova user and service in Keystone
keystone user-create --name=nova --pass=nova --email=nova@example.com
keystone user-role-add --user=nova --tenant=service --role=admin
Register the service in Keystone
keystone service-create --name=nova --type=compute --description="Nova Compute Service"
Register the endpoint in Keystone
keystone endpoint-create \
--service-id=$(keystone service-list | awk '/ compute / {print $2}') \
--publicurl=http://controller0:8774/v2/%\(tenant_id\)s \
--internalurl=http://controller0:8774/v2/%\(tenant_id\)s \
--adminurl=http://controller0:8774/v2/%\(tenant_id\)s
Configure the Nova MySQL connection
openstack-config --set /etc/nova/nova.conf database connection mysql://nova:openstack@controller0/nova
Initialize the database
openstack-db --init --service nova --password openstack
Configure nova.conf
openstack-config --set /etc/nova/nova.conf DEFAULT debug True
openstack-config --set /etc/nova/nova.conf DEFAULT verbose True
openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend qpid
openstack-config --set /etc/nova/nova.conf DEFAULT qpid_hostname controller0
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 10.20.0.10
openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_listen 10.20.0.10
openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_proxyclient_address 10.20.0.10
openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://controller0:5000
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_host controller0
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_protocol http
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_port 35357
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_user nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_password nova
Add the Keystone authentication settings to api-paste.ini
openstack-config --set /etc/nova/api-paste.ini filter:authtoken paste.filter_factory keystoneclient.middleware.auth_token:filter_factory
openstack-config --set /etc/nova/api-paste.ini filter:authtoken auth_host controller0
openstack-config --set /etc/nova/api-paste.ini filter:authtoken admin_tenant_name service
openstack-config --set /etc/nova/api-paste.ini filter:authtoken admin_user nova
openstack-config --set /etc/nova/api-paste.ini filter:authtoken admin_password nova
Start the services
service openstack-nova-api start
service openstack-nova-cert start
service openstack-nova-consoleauth start
service openstack-nova-scheduler start
service openstack-nova-conductor start
service openstack-nova-novncproxy start
Enable the services at boot
chkconfig openstack-nova-api on
chkconfig openstack-nova-cert on
chkconfig openstack-nova-consoleauth on
chkconfig openstack-nova-scheduler on
chkconfig openstack-nova-conductor on
chkconfig openstack-nova-novncproxy on
Check that the services are healthy
nova-manage service list
[root@controller0 ~]# nova-manage service list
Binary Host Zone Status State Updated_At
nova-consoleauth controller0 internal enabled :-) 2013-11-12 11:14:56
nova-cert controller0 internal enabled :-) 2013-11-12 11:14:56
nova-scheduler controller0 internal enabled :-) 2013-11-12 11:14:56
nova-conductor controller0 internal enabled :-) 2013-11-12 11:14:56
Check the processes
[root@controller0 ~]# ps -ef|grep nova
nova 7240 1 1 23:11 ? 00:00:02 /usr/bin/python /usr/bin/nova-api --logfile /var/log/nova/api.log
nova 7252 1 1 23:11 ? 00:00:01 /usr/bin/python /usr/bin/nova-cert --logfile /var/log/nova/cert.log
nova 7264 1 1 23:11 ? 00:00:01 /usr/bin/python /usr/bin/nova-consoleauth --logfile /var/log/nova/consoleauth.log
nova 7276 1 1 23:11 ? 00:00:01 /usr/bin/python /usr/bin/nova-scheduler --logfile /var/log/nova/scheduler.log
nova 7288 1 1 23:11 ? 00:00:01 /usr/bin/python /usr/bin/nova-conductor --logfile /var/log/nova/conductor.log
nova 7300 1 0 23:11 ? 00:00:00 /usr/bin/python /usr/bin/nova-novncproxy --web /usr/share/novnc/
nova 7336 7240 0 23:11 ? 00:00:00 /usr/bin/python /usr/bin/nova-api --logfile /var/log/nova/api.log
nova 7351 7240 0 23:11 ? 00:00:00 /usr/bin/python /usr/bin/nova-api --logfile /var/log/nova/api.log
nova 7352 7240 0 23:11 ? 00:00:00 /usr/bin/python /usr/bin/nova-api --logfile /var/log/nova/api.log
Neutron server installation and configuration
Install the Neutron server packages
yum install -y openstack-neutron openstack-neutron-ml2 python-neutronclient
Create the corresponding Neutron user and service in Keystone
keystone user-create --name neutron --pass neutron --email neutron@example.com
keystone user-role-add --user neutron --tenant service --role admin
keystone service-create --name neutron --type network --description "OpenStack Networking"
keystone endpoint-create \
--service-id $(keystone service-list | awk '/ network / {print $2}') \
--publicurl http://controller0:9696 \
--adminurl http://controller0:9696 \
--internalurl http://controller0:9696
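The `$(keystone service-list | awk '/ network / {print $2}')` substitution used above simply pulls the ID column out of the CLI's table output. A minimal sketch of how that extraction behaves, run against a captured sample of the table (the IDs below are fabricated for illustration):

```shell
# Emulate `keystone service-list | awk '/ network / {print $2}'`
# against sample table output. awk splits on whitespace, so in a row
# like "| <id> | neutron | network | ... |" the ID is field $2, and
# "/ network /" (with surrounding spaces) matches only the type column.
sample='+----------------------------------+----------+----------+-----------------------+
|                id                |   name   |   type   |      description      |
+----------------------------------+----------+----------+-----------------------+
| 1b2f0b6c63184e0f8b0e3f1f0a9d4c21 | neutron  | network  | OpenStack Networking  |
| 9c7a2e4d5f6b4a3c8d1e0f2a3b4c5d6e | keystone | identity | OpenStack Identity    |
+----------------------------------+----------+----------+-----------------------+'

service_id=$(echo "$sample" | awk '/ network / {print $2}')
echo "$service_id"
```

The same pattern is reused throughout this manual for tenant IDs, image IDs, and network IDs.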
Create the Neutron database in MySQL
mysql -uroot -popenstack -e "CREATE DATABASE neutron;"
mysql -uroot -popenstack -e "GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'openstack';"
mysql -uroot -popenstack -e "GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'openstack';"
mysql -uroot -popenstack -e "GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'controller0' IDENTIFIED BY 'openstack';"
Configure the database connection
openstack-config --set /etc/neutron/neutron.conf database connection mysql://neutron:openstack@controller0/neutron
Configure Neutron Keystone authentication
openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_uri http://controller0:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_host controller0
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_protocol http
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_port 35357
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_user neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_password neutron
Configure Neutron Qpid messaging and Nova notifications
openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_backend neutron.openstack.common.rpc.impl_qpid
openstack-config --set /etc/neutron/neutron.conf DEFAULT qpid_hostname controller0
openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_status_changes True
openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_data_changes True
openstack-config --set /etc/neutron/neutron.conf DEFAULT nova_url http://controller0:8774/v2
openstack-config --set /etc/neutron/neutron.conf DEFAULT nova_admin_username nova
openstack-config --set /etc/neutron/neutron.conf DEFAULT nova_admin_tenant_id $(keystone tenant-list | awk '/ service / { print $2 }')
openstack-config --set /etc/neutron/neutron.conf DEFAULT nova_admin_password nova
openstack-config --set /etc/neutron/neutron.conf DEFAULT nova_admin_auth_url http://controller0:35357/v2.0
Configure the Neutron ML2 plug-in with Open vSwitch
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
openstack-config --set /etc/neutron/neutron.conf DEFAULT core_plugin ml2
openstack-config --set /etc/neutron/neutron.conf DEFAULT service_plugins router
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers gre
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types gre
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers openvswitch
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_gre tunnel_id_ranges 1:1000
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_security_group True
Configure Nova to use Neutron as the network service
openstack-config --set /etc/nova/nova.conf DEFAULT network_api_class nova.network.neutronv2.api.API
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_url http://controller0:9696
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_auth_strategy keystone
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_admin_tenant_name service
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_admin_username neutron
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_admin_password neutron
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_admin_auth_url http://controller0:35357/v2.0
openstack-config --set /etc/nova/nova.conf DEFAULT linuxnet_interface_driver nova.network.linux_net.LinuxOVSInterfaceDriver
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
openstack-config --set /etc/nova/nova.conf DEFAULT security_group_api neutron
openstack-config --set /etc/nova/nova.conf DEFAULT service_neutron_metadata_proxy true
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_metadata_proxy_shared_secret METADATA_SECRET
Restart the Nova services on the controller
service openstack-nova-api restart
service openstack-nova-scheduler restart
service openstack-nova-conductor restart
Start the Neutron server
service neutron-server start
chkconfig neutron-server on
Network node installation (network0 node)
Set the hostname
vi /etc/sysconfig/network
HOSTNAME=network0
Configure the network interfaces
vi /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static
IPADDR=10.20.0.20
NETMASK=255.255.255.0
vi /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static
IPADDR=172.16.0.20
NETMASK=255.255.255.0
vi /etc/sysconfig/network-scripts/ifcfg-eth2
DEVICE=eth2
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static
IPADDR=192.168.4.20
NETMASK=255.255.255.0
After modifying the network configuration files, restart the network service
service network restart
Install the Neutron packages
yum install -y openstack-neutron openstack-neutron-ml2 openstack-neutron-openvswitch
Enable IP forwarding
vi /etc/sysctl.conf
net.ipv4.ip_forward=1
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
Apply the settings immediately
sysctl -p
Configure Neutron Keystone authentication
openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_uri http://controller0:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_host controller0
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_protocol http
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_port 35357
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_user neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_password neutron
Configure Qpid
openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_backend neutron.openstack.common.rpc.impl_qpid
openstack-config --set /etc/neutron/neutron.conf DEFAULT qpid_hostname controller0
Configure Neutron to use ML2 + Open vSwitch + GRE
openstack-config --set /etc/neutron/neutron.conf DEFAULT core_plugin ml2
openstack-config --set /etc/neutron/neutron.conf DEFAULT service_plugins router
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers gre
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types gre
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers openvswitch
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_gre tunnel_id_ranges 1:1000
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs local_ip 192.168.4.20
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs tunnel_type gre
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs enable_tunneling True
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_security_group True
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
cp /etc/init.d/neutron-openvswitch-agent /etc/init.d/neutronopenvswitch-agent.orig
sed -i 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g' /etc/init.d/neutron-openvswitch-agent
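The `sed` one-liner above rewrites the plugin path inside the agent's init script so that it reads `/etc/neutron/plugin.ini` (the symlink created earlier) instead of the Open vSwitch-specific file. Its effect can be previewed safely on a representative line before touching the real script:

```shell
# Preview the init-script substitution on a sample line; the real
# target is /etc/init.d/neutron-openvswitch-agent. Commas are used as
# the sed delimiter because the pattern itself contains slashes.
line='config="/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini"'
patched=$(echo "$line" | sed 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g')
echo "$patched"
```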
Configure the L3 agent
openstack-config --set /etc/neutron/l3_agent.ini DEFAULT interface_driver neutron.agent.linux.interface.OVSInterfaceDriver
openstack-config --set /etc/neutron/l3_agent.ini DEFAULT use_namespaces True
Configure the DHCP agent
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT interface_driver neutron.agent.linux.interface.OVSInterfaceDriver
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT dhcp_driver neutron.agent.linux.dhcp.Dnsmasq
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT use_namespaces True
Configure the metadata agent
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT auth_url http://controller0:5000/v2.0
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT auth_region regionOne
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT admin_tenant_name service
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT admin_user neutron
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT admin_password neutron
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT nova_metadata_ip controller0
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT metadata_proxy_shared_secret METADATA_SECRET
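METADATA_SECRET is a placeholder: whatever value you pick must be identical in nova.conf on the controller (`neutron_metadata_proxy_shared_secret`) and in metadata_agent.ini here (`metadata_proxy_shared_secret`), or instances will get 500 errors from the metadata service. A hedged sanity check, sketched against throwaway sample files (on a live system, point the two variables at the real `/etc/nova/nova.conf` and `/etc/neutron/metadata_agent.ini`):

```shell
# Check that the metadata proxy secret matches between Nova and the
# metadata agent. Sample files stand in for the real config paths.
nova_conf=$(mktemp); agent_conf=$(mktemp)
printf '[DEFAULT]\nneutron_metadata_proxy_shared_secret = METADATA_SECRET\n' > "$nova_conf"
printf '[DEFAULT]\nmetadata_proxy_shared_secret = METADATA_SECRET\n' > "$agent_conf"

# Simple "key = value" lookup (ignores ini sections; illustrative only)
get_opt() {
  awk -F' *= *' -v k="$2" '$1==k {print $2}' "$1"
}

nova_secret=$(get_opt "$nova_conf" neutron_metadata_proxy_shared_secret)
agent_secret=$(get_opt "$agent_conf" metadata_proxy_shared_secret)
if [ "$nova_secret" = "$agent_secret" ]; then
  echo "secrets match"
else
  echo "secrets differ" >&2
fi
rm -f "$nova_conf" "$agent_conf"
```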
service openvswitch start
chkconfig openvswitch on
ovs-vsctl add-br br-int
ovs-vsctl add-br br-ex
ovs-vsctl add-port br-ex eth1
Modify the eth1 and br-ex network configuration
vi /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
ONBOOT=yes
BOOTPROTO=none
PROMISC=yes
vi /etc/sysconfig/network-scripts/ifcfg-br-ex
DEVICE=br-ex
TYPE=Bridge
ONBOOT=no
BOOTPROTO=none
Restart the network service
service network restart
Assign an IP address to br-ex
ip link set br-ex up
ip addr add 172.16.0.20/24 dev br-ex
Start the Neutron services
service neutron-openvswitch-agent start
service neutron-l3-agent start
service neutron-dhcp-agent start
service neutron-metadata-agent start
chkconfig neutron-openvswitch-agent on
chkconfig neutron-l3-agent on
chkconfig neutron-dhcp-agent on
chkconfig neutron-metadata-agent on
Compute node installation (compute0 node)
Set the hostname
vi /etc/sysconfig/network
HOSTNAME=compute0
Configure the network interfaces
vi /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static
IPADDR=10.20.0.30
NETMASK=255.255.255.0
vi /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static
IPADDR=172.16.0.30
NETMASK=255.255.255.0
vi /etc/sysconfig/network-scripts/ifcfg-eth2
DEVICE=eth2
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static
IPADDR=192.168.4.30
NETMASK=255.255.255.0
After modifying the network configuration files, restart the network service
service network restart
Install the Nova compute packages
yum install -y openstack-nova-compute
Configure Nova
openstack-config --set /etc/nova/nova.conf database connection mysql://nova:openstack@controller0/nova
openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://controller0:5000
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_host controller0
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_protocol http
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_port 35357
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_user nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_password nova
openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend qpid
openstack-config --set /etc/nova/nova.conf DEFAULT qpid_hostname controller0
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 10.20.0.30
openstack-config --set /etc/nova/nova.conf DEFAULT vnc_enabled True
openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_listen 0.0.0.0
openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_proxyclient_address 10.20.0.30
openstack-config --set /etc/nova/nova.conf DEFAULT novncproxy_base_url http://controller0:6080/vnc_auto.html
openstack-config --set /etc/nova/nova.conf libvirt virt_type qemu
openstack-config --set /etc/nova/nova.conf DEFAULT glance_host controller0
Start the compute node services
service libvirtd start
service messagebus start
service openstack-nova-compute start
chkconfig libvirtd on
chkconfig messagebus on
chkconfig openstack-nova-compute on
On the controller node, verify that the compute service has started
nova-manage service list
The compute node's service now appears in the list
[root@controller0 ~]# nova-manage service list
Binary Host Zone Status State Updated_At
nova-consoleauth controller0 internal enabled :-) 2014-07-19 09:04:18
nova-cert controller0 internal enabled :-) 2014-07-19 09:04:19
nova-conductor controller0 internal enabled :-) 2014-07-19 09:04:20
nova-scheduler controller0 internal enabled :-) 2014-07-19 09:04:20
nova-compute compute0 nova enabled :-) 2014-07-19 09:04:19
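With several nodes it is easy to overlook a dead service in this listing: `:-)` in the State column means the service is heartbeating, `XXX` means it has stopped. A quick filter over the command's output, shown here against a canned sample (pipe the real `nova-manage service list` into the same awk on a live controller):

```shell
# Flag dead services (State column == XXX) in nova-manage output.
# State is the 5th whitespace-separated field in each data row.
sample='Binary Host Zone Status State Updated_At
nova-consoleauth controller0 internal enabled :-) 2014-07-19 09:04:18
nova-compute compute0 nova enabled XXX 2014-07-19 08:01:02'

dead=$(echo "$sample" | awk '$5=="XXX" {print $1"@"$2}')
echo "dead services: ${dead:-none}"
```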
Install the Neutron ML2 plug-in and Open vSwitch agent
yum install -y openstack-neutron-ml2 openstack-neutron-openvswitch
Configure Neutron Keystone authentication
openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_uri http://controller0:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_host controller0
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_protocol http
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_port 35357
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_user neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_password neutron
Configure Neutron Qpid
openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_backend neutron.openstack.common.rpc.impl_qpid
openstack-config --set /etc/neutron/neutron.conf DEFAULT qpid_hostname controller0
Configure Neutron to use ML2 with Open vSwitch and GRE
openstack-config --set /etc/neutron/neutron.conf DEFAULT core_plugin ml2
openstack-config --set /etc/neutron/neutron.conf DEFAULT service_plugins router
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers gre
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types gre
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers openvswitch
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_gre tunnel_id_ranges 1:1000
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs local_ip 192.168.4.30
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs tunnel_type gre
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs enable_tunneling True
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_security_group True
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
cp /etc/init.d/neutron-openvswitch-agent /etc/init.d/neutronopenvswitch-agent.orig
sed -i 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g' /etc/init.d/neutron-openvswitch-agent
Configure Nova to use Neutron for network services
openstack-config --set /etc/nova/nova.conf DEFAULT network_api_class nova.network.neutronv2.api.API
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_url http://controller0:9696
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_auth_strategy keystone
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_admin_tenant_name service
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_admin_username neutron
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_admin_password neutron
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_admin_auth_url http://controller0:35357/v2.0
openstack-config --set /etc/nova/nova.conf DEFAULT linuxnet_interface_driver nova.network.linux_net.LinuxOVSInterfaceDriver
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
openstack-config --set /etc/nova/nova.conf DEFAULT security_group_api neutron
openstack-config --set /etc/nova/nova.conf DEFAULT service_neutron_metadata_proxy true
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_metadata_proxy_shared_secret METADATA_SECRET
service openvswitch start
chkconfig openvswitch on
ovs-vsctl add-br br-int
service openstack-nova-compute restart
service neutron-openvswitch-agent start
chkconfig neutron-openvswitch-agent on
Verify the agents started correctly
neutron agent-list
Output when everything started correctly:
[root@controller0 ~]# neutron agent-list
+--------------------------------------+--------------------+----------+-------+----------------+
| id | agent_type | host | alive | admin_state_up |
+--------------------------------------+--------------------+----------+-------+----------------+
| 2c5318db-6bc2-4d09-b728-bbdd677b1e72 | L3 agent | network0 | :-) | True |
| 4a79ff75-6205-46d0-aec1-37f55a8d87ce | Open vSwitch agent | network0 | :-) | True |
| 5a5bd885-4173-4515-98d1-0edc0fdbf556 | Open vSwitch agent | compute0 | :-) | True |
| 5c9218ce-0ebd-494a-b897-5e2df0763837 | DHCP agent | network0 | :-) | True |
| 76f2069f-ba84-4c36-bfc0-3c129d49cbb1 | Metadata agent | network0 | :-) | True |
+--------------------------------------+--------------------+----------+-------+----------------+
Create the initial networks
Create the external network
neutron net-create ext-net --shared --router:external=True
Add a subnet to the external network
neutron subnet-create ext-net --name ext-subnet \
  --allocation-pool start=172.16.0.100,end=172.16.0.200 \
  --disable-dhcp --gateway 172.16.0.1 172.16.0.0/24
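The allocation pool must sit inside the subnet's CIDR and leave the gateway address out. That arithmetic can be checked in pure shell, no OpenStack needed (values taken from the command above):

```shell
# Verify the floating-IP pool 172.16.0.100-200 lies inside
# 172.16.0.0/24 and excludes the gateway 172.16.0.1.
ip2int() {
  IFS=. read -r a b c d <<EOF
$1
EOF
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

net=$(ip2int 172.16.0.0); bits=24
start=$(ip2int 172.16.0.100); end=$(ip2int 172.16.0.200)
gw=$(ip2int 172.16.0.1)
lo=$(( net + 1 ))                        # first usable host address
hi=$(( net + (1 << (32 - bits)) - 2 ))   # last usable host address
ok=yes
[ "$start" -ge "$lo" ] && [ "$end" -le "$hi" ] || ok=no
[ "$gw" -lt "$start" ] || [ "$gw" -gt "$end" ] || ok=no
echo "pool-valid=$ok"
```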
Create the tenant network
First create the demo user and tenant, and assign the role relationship
keystone user-create --name=demo --pass=demo --email=demo@example.com
keystone tenant-create --name=demo --description="Demo Tenant"
keystone user-role-add --user=demo --role=_member_ --tenant=demo
Create the tenant network demo-net
neutron net-create demo-net
Add a subnet to the tenant network
neutron subnet-create demo-net --name demo-subnet --gateway 192.168.1.1 192.168.1.0/24
Create a router for the tenant network and connect it to the external network
neutron router-create demo-router
Attach demo-net to the router
neutron router-interface-add demo-router demo-subnet
Set the default gateway for demo-router
neutron router-gateway-set demo-router ext-net
Launch an instance
nova boot --flavor m1.tiny --image $(nova image-list | awk '/ cirros / {print $2}') \
  --nic net-id=$(neutron net-list | awk '/ demo-net / {print $2}') \
  --security-group default demo-instance1
Dashboard installation
Install the Dashboard packages
yum install memcached python-memcached mod_wsgi openstack-dashboard
Configure memcached
vi /etc/openstack-dashboard/local_settings
CACHES = {
'default': {
'BACKEND' : 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION' : '127.0.0.1:11211'
}
}
Configure the Keystone hostname
vi /etc/openstack-dashboard/local_settings
OPENSTACK_HOST = "controller0"
Start the Dashboard services
service httpd start
service memcached start
chkconfig httpd on
chkconfig memcached on
Open a browser to verify; username: admin, password: admin
Cinder installation
Cinder controller installation
First install the Cinder API packages on the controller0 node
yum install openstack-cinder -y
Configure the Cinder database connection
openstack-config --set /etc/cinder/cinder.conf database connection mysql://cinder:openstack@controller0/cinder
Initialize the database
mysql -uroot -popenstack -e "CREATE DATABASE cinder;"
mysql -uroot -popenstack -e "GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'openstack';"
mysql -uroot -popenstack -e "GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'openstack';"
mysql -uroot -popenstack -e "GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'controller0' IDENTIFIED BY 'openstack';"
su -s /bin/sh -c "cinder-manage db sync" cinder
Or initialize the database with the openstack-db tool
openstack-db --init --service cinder --password openstack
Create the cinder service user in Keystone
keystone user-create --name=cinder --pass=cinder --email=cinder@example.com
keystone user-role-add --user=cinder --tenant=service --role=admin
Register a cinder service in Keystone
keystone service-create --name=cinder --type=volume --description="OpenStack Block Storage"
Create an endpoint for the cinder service
keystone endpoint-create \
  --service-id=$(keystone service-list | awk '/ volume / {print $2}') \
  --publicurl=http://controller0:8776/v1/%\(tenant_id\)s \
  --internalurl=http://controller0:8776/v1/%\(tenant_id\)s \
  --adminurl=http://controller0:8776/v1/%\(tenant_id\)s
Register a cinderv2 service in Keystone
keystone service-create --name=cinderv2 --type=volumev2 --description="OpenStack Block Storage v2"
Create an endpoint for the cinderv2 service
keystone endpoint-create \
  --service-id=$(keystone service-list | awk '/ volumev2 / {print $2}') \
  --publicurl=http://controller0:8776/v2/%\(tenant_id\)s \
  --internalurl=http://controller0:8776/v2/%\(tenant_id\)s \
  --adminurl=http://controller0:8776/v2/%\(tenant_id\)s
Configure Cinder Keystone authentication
openstack-config --set /etc/cinder/cinder.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_uri http://controller0:5000
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_host controller0
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_protocol http
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_port 35357
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken admin_user cinder
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken admin_password cinder
Configure Qpid
openstack-config --set /etc/cinder/cinder.conf DEFAULT rpc_backend cinder.openstack.common.rpc.impl_qpid
openstack-config --set /etc/cinder/cinder.conf DEFAULT qpid_hostname controller0
Start the Cinder controller services
service openstack-cinder-api start
service openstack-cinder-scheduler start
chkconfig openstack-cinder-api on
chkconfig openstack-cinder-scheduler on
Cinder block storage node installation
Before performing the steps below, don't forget to complete the common setup (NTP, hosts file, etc.)!
Before configuring, create a new disk on cinder0 for block allocation, e.g.:
/dev/sdb
Set the hostname
vi /etc/sysconfig/network
HOSTNAME=cinder0
Configure the network interfaces
vi /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static
IPADDR=10.20.0.40
NETMASK=255.255.255.0
vi /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static
IPADDR=172.16.0.40
NETMASK=255.255.255.0
vi /etc/sysconfig/network-scripts/ifcfg-eth2
DEVICE=eth2
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static
IPADDR=192.168.4.40
NETMASK=255.255.255.0
After modifying the network configuration files, restart the network service
service network restart
網(wǎng)絡(luò)拓?fù)?/h2>
include-cinder
Install the Cinder packages
yum install -y openstack-cinder scsi-target-utils
Create the LVM physical volume and volume group that back Cinder block storage
pvcreate /dev/sdb
vgcreate cinder-volumes /dev/sdb
Add a filter entry to the devices section in the /etc/lvm/lvm.conf file to keep LVM from scanning devices used by virtual machines:
vi /etc/lvm/lvm.conf
devices {
...
filter = [ "a/sda1/", "a/sdb/", "r/.*/"]
...
}
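The filter is evaluated first-match-wins: `a/sda1/` and `a/sdb/` accept those devices, and the trailing catch-all `r/.*/` rejects everything else, so the virtual disks of instances hosted on this node are never scanned. A shell emulation of that first-match logic (illustrative only; real LVM applies the regexes itself):

```shell
# Emulate LVM's first-match-wins evaluation of
#   filter = [ "a/sda1/", "a/sdb/", "r/.*/" ]
# "a" patterns accept, "r" patterns reject; first match decides.
lvm_filter() {
  dev=$1
  for rule in 'a:sda1' 'a:sdb' 'r:.*'; do
    action=${rule%%:*}
    pattern=${rule#*:}
    if echo "$dev" | grep -Eq "$pattern"; then
      [ "$action" = a ] && echo accept || echo reject
      return
    fi
  done
}

lvm_filter /dev/sdb    # accepted: backs the cinder-volumes VG
lvm_filter /dev/sdc    # rejected by the catch-all r/.*/
```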
Configure Keystone authentication
openstack-config --set /etc/cinder/cinder.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_uri http://controller0:5000
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_host controller0
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_protocol http
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_port 35357
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken admin_user cinder
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken admin_password cinder
Configure Qpid
openstack-config --set /etc/cinder/cinder.conf DEFAULT rpc_backend cinder.openstack.common.rpc.impl_qpid
openstack-config --set /etc/cinder/cinder.conf DEFAULT qpid_hostname controller0
Configure the database connection
openstack-config --set /etc/cinder/cinder.conf database connection mysql://cinder:openstack@controller0/cinder
Configure the Glance server
openstack-config --set /etc/cinder/cinder.conf DEFAULT glance_host controller0
Set my_ip for cinder-volume; this IP determines which NIC carries the storage traffic
openstack-config --set /etc/cinder/cinder.conf DEFAULT my_ip 192.168.4.40
Configure the iSCSI target service to discover Block Storage volumes
vi /etc/tgt/targets.conf
include /etc/cinder/volumes/*
Start the cinder-volume services
service openstack-cinder-volume start
service tgtd start
chkconfig openstack-cinder-volume on
chkconfig tgtd on
Swift installation
Install the storage node
Before performing the steps below, don't forget to complete the common setup (NTP, hosts file, etc.)!
Before configuring, create a new disk on swift0 to store Swift data, for example:
/dev/sdb
After the disk is created, boot the OS and partition the new disk
fdisk /dev/sdb
mkfs.xfs /dev/sdb1
echo "/dev/sdb1 /srv/node/sdb1 xfs noatime,nodiratime,nobarrier,logbufs=8 0 0" >> /etc/fstab
mkdir -p /srv/node/sdb1
mount /srv/node/sdb1
chown -R swift:swift /srv/node
Set the hostname
vi /etc/sysconfig/network
HOSTNAME=swift0
Configure the network interfaces
vi /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static
IPADDR=10.20.0.50
NETMASK=255.255.255.0
vi /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static
IPADDR=172.16.0.50
NETMASK=255.255.255.0
vi /etc/sysconfig/network-scripts/ifcfg-eth2
DEVICE=eth2
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static
IPADDR=192.168.4.50
NETMASK=255.255.255.0
After modifying the network configuration files, restart the network service
service network restart
網(wǎng)絡(luò)拓?fù)?/h2>
這里省去Cinder 節(jié)點(diǎn)部分
include-swift
Install the Swift storage node packages
yum install -y openstack-swift-account openstack-swift-container openstack-swift-object xfsprogs xinetd
Configure the object, container, and account configuration files
openstack-config --set /etc/swift/account-server.conf DEFAULT bind_ip 10.20.0.50
openstack-config --set /etc/swift/container-server.conf DEFAULT bind_ip 10.20.0.50
openstack-config --set /etc/swift/object-server.conf DEFAULT bind_ip 10.20.0.50
Configure the directories to synchronize in the rsyncd configuration file
vi /etc/rsyncd.conf
uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = 192.168.4.50
[account]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/account.lock
[container]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/container.lock
[object]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/object.lock
vi /etc/xinetd.d/rsync
disable = no
service xinetd start
chkconfig xinetd on
Create the swift recon cache directory and set its permissions:
mkdir -p /var/swift/recon
chown -R swift:swift /var/swift/recon
Install the swift-proxy service
Create a user for swift in Keystone
keystone user-create --name=swift --pass=swift --email=swift@example.com
Grant the admin role to the swift user
keystone user-role-add --user=swift --tenant=service --role=admin
Register an object storage service for swift
keystone service-create --name=swift --type=object-store --description="OpenStack Object Storage"
Add an endpoint for swift
keystone endpoint-create \
  --service-id=$(keystone service-list | awk '/ object-store / {print $2}') \
  --publicurl='http://controller0:8080/v1/AUTH_%(tenant_id)s' \
  --internalurl='http://controller0:8080/v1/AUTH_%(tenant_id)s' \
  --adminurl=http://controller0:8080
Install the swift-proxy packages
yum install -y openstack-swift-proxy memcached python-swiftclient python-keystone-auth-token
Edit the configuration file; once complete, copy it to every storage node
openstack-config --set /etc/swift/swift.conf swift-hash swift_hash_path_prefix xrfuniounenqjnw
openstack-config --set /etc/swift/swift.conf swift-hash swift_hash_path_suffix fLIbertYgibbitZ
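The prefix and suffix above are sample salts used when hashing object paths onto the ring. On a real deployment, use unique random values, keep them secret, and never change them after data is stored, or existing objects become unreachable. One way to generate them, assuming `/dev/urandom` and `od` are available (they are on CentOS):

```shell
# Generate random hash-path salts for /etc/swift/swift.conf.
# 16 random bytes -> 32 hex characters each; run once per cluster.
prefix=$(od -vAn -N16 -tx1 /dev/urandom | tr -d ' \n')
suffix=$(od -vAn -N16 -tx1 /dev/urandom | tr -d ' \n')
echo "swift_hash_path_prefix = $prefix"
echo "swift_hash_path_suffix = $suffix"
```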
scp /etc/swift/swift.conf root@10.20.0.50:/etc/swift/
Change the default listen address of memcached
vi /etc/sysconfig/memcached
OPTIONS="-l 10.20.0.10"
Start memcached
service memcached restart
chkconfig memcached on
Modify the proxy server configuration
vi /etc/swift/proxy-server.conf
openstack-config --set /etc/swift/proxy-server.conf filter:keystone operator_roles Member,admin,swiftoperator
openstack-config --set /etc/swift/proxy-server.conf filter:authtoken auth_host controller0
openstack-config --set /etc/swift/proxy-server.conf filter:authtoken auth_port 35357
openstack-config --set /etc/swift/proxy-server.conf filter:authtoken admin_user swift
openstack-config --set /etc/swift/proxy-server.conf filter:authtoken admin_tenant_name service
openstack-config --set /etc/swift/proxy-server.conf filter:authtoken admin_password swift
openstack-config --set /etc/swift/proxy-server.conf filter:authtoken delay_auth_decision true
Build the ring files
cd /etc/swift
swift-ring-builder account.builder create 18 3 1
swift-ring-builder container.builder create 18 3 1
swift-ring-builder object.builder create 18 3 1
swift-ring-builder account.builder add z1-10.20.0.50:6002R10.20.0.50:6005/sdb1 100
swift-ring-builder container.builder add z1-10.20.0.50:6001R10.20.0.50:6004/sdb1 100
swift-ring-builder object.builder add z1-10.20.0.50:6000R10.20.0.50:6003/sdb1 100
swift-ring-builder account.builder
swift-ring-builder container.builder
swift-ring-builder object.builder
swift-ring-builder account.builder rebalance
swift-ring-builder container.builder rebalance
swift-ring-builder object.builder rebalance
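In `swift-ring-builder <file> create 18 3 1`, the three arguments are the partition power (2^18 = 262144 partitions), the replica count (3), and the minimum hours between moves of any partition (1). The sizing arithmetic behind the partition power, sketched in shell (the per-disk rule of thumb is an assumption from common Swift deployment practice, not from this manual):

```shell
# swift-ring-builder <builder> create PART_POWER REPLICAS MIN_PART_HOURS
part_power=18
replicas=3
partitions=$(( 1 << part_power ))   # total partitions in the ring
echo "partitions=$partitions"

# Rule of thumb: target roughly 100 partition-replicas per disk, so
# this part power comfortably serves up to about this many disks:
max_disks=$(( partitions * replicas / 100 ))
echo "max_disks~=$max_disks"
```

Pick the partition power for the cluster's eventual size; it cannot be changed after the ring is in use.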
Copy the ring files to the storage node
scp *ring.gz root@10.20.0.50:/etc/swift/
Fix the ownership of the Swift configuration files on the proxy server and the storage node
ssh root@10.20.0.50 "chown -R swift:swift /etc/swift"
chown -R swift:swift /etc/swift
Start the proxy service on controller0
service openstack-swift-proxy start
chkconfig openstack-swift-proxy on
Start the storage services on swift0
service openstack-swift-object start
service openstack-swift-object-replicator start
service openstack-swift-object-updater start
service openstack-swift-object-auditor start
service openstack-swift-container start
service openstack-swift-container-replicator start
service openstack-swift-container-updater start
service openstack-swift-container-auditor start
service openstack-swift-account start
service openstack-swift-account-replicator start
service openstack-swift-account-reaper start
service openstack-swift-account-auditor start
Enable the services at boot
chkconfig openstack-swift-object on
chkconfig openstack-swift-object-replicator on
chkconfig openstack-swift-object-updater on
chkconfig openstack-swift-object-auditor on
chkconfig openstack-swift-container on
chkconfig openstack-swift-container-replicator on
chkconfig openstack-swift-container-updater on
chkconfig openstack-swift-container-auditor on
chkconfig openstack-swift-account on
chkconfig openstack-swift-account-replicator on
chkconfig openstack-swift-account-reaper on
chkconfig openstack-swift-account-auditor on
Verify the Swift installation on the controller node
swift stat
Upload two files as a test
swift upload myfiles test.txt
swift upload myfiles test2.txt
Download the files that were just uploaded
swift download myfiles
Add a new Swift storage node
Before configuring, create a new disk on swift1 to store Swift data, for example:
/dev/sdb
After the disk is created, boot the OS and partition the new disk
fdisk /dev/sdb
mkfs.xfs /dev/sdb1
echo "/dev/sdb1 /srv/node/sdb1 xfs noatime,nodiratime,nobarrier,logbufs=8 0 0" >> /etc/fstab
mkdir -p /srv/node/sdb1
mount /srv/node/sdb1
chown -R swift:swift /srv/node
Set the hostname
vi /etc/sysconfig/network
HOSTNAME=swift1
Configure the network interfaces
vi /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static
IPADDR=10.20.0.51
NETMASK=255.255.255.0
vi /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static
IPADDR=172.16.0.51
NETMASK=255.255.255.0
vi /etc/sysconfig/network-scripts/ifcfg-eth2
DEVICE=eth2
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static
IPADDR=192.168.4.51
NETMASK=255.255.255.0
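Every node in this guide follows the same addressing pattern (10.20.0.X on the management network, 172.16.0.X public, 192.168.4.X storage), so the three ifcfg files only differ in the host octet. A hypothetical helper that generates them, shown here writing into a scratch directory rather than /etc/sysconfig/network-scripts:

```shell
# Write ifcfg-eth0..eth2 for a node with host octet $2 into directory $1.
gen_ifcfg() {
  dir="$1"; octet="$2"; i=0
  for net in 10.20.0 172.16.0 192.168.4; do
    cat > "$dir/ifcfg-eth$i" <<EOF
DEVICE=eth$i
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static
IPADDR=$net.$octet
NETMASK=255.255.255.0
EOF
    i=$((i+1))
  done
}
dir=$(mktemp -d)
gen_ifcfg "$dir" 51   # produces the three files shown above
```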
After the network configuration files are modified, restart the network service
service network restart
Install the Swift storage packages
yum install -y openstack-swift-account openstack-swift-container openstack-swift-object xfsprogs xinetd
Set the bind address in the object, container, and account server configuration files
openstack-config --set /etc/swift/account-server.conf DEFAULT bind_ip 10.20.0.51
openstack-config --set /etc/swift/container-server.conf DEFAULT bind_ip 10.20.0.51
openstack-config --set /etc/swift/object-server.conf DEFAULT bind_ip 10.20.0.51
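Each openstack-config call above simply writes one key into the [DEFAULT] section of the named file, so after running them every one of the three files contains the fragment:

```ini
[DEFAULT]
bind_ip = 10.20.0.51
```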
Configure the directories to be synchronized in the rsyncd configuration file
vi /etc/rsyncd.conf
uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = 192.168.4.51
[account]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/account.lock
[container]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/container.lock
[object]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/object.lock
Enable the rsync service in xinetd
vi /etc/xinetd.d/rsync
disable = no
service xinetd start
chkconfig xinetd on
Create the swift recon cache directory and set its permissions:
mkdir -p /var/swift/recon
chown -R swift:swift /var/swift/recon
On the node that holds the ring builder files (controller0, in /etc/swift), add the new device to the account, container, and object rings, then verify and rebalance them
swift-ring-builder account.builder add z1-10.20.0.51:6002R10.20.0.51:6005/sdb1 100
swift-ring-builder container.builder add z1-10.20.0.51:6001R10.20.0.51:6004/sdb1 100
swift-ring-builder object.builder add z1-10.20.0.51:6000R10.20.0.51:6003/sdb1 100
swift-ring-builder account.builder
swift-ring-builder container.builder
swift-ring-builder object.builder
swift-ring-builder account.builder rebalance
swift-ring-builder container.builder rebalance
swift-ring-builder object.builder rebalance
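The add arguments above pack several fields into one device string, of the form z&lt;zone&gt;-&lt;ip&gt;:&lt;port&gt;R&lt;replication_ip&gt;:&lt;replication_port&gt;/&lt;device&gt;, followed by the weight (for better failure isolation a new node is often placed in its own zone, e.g. z2, though this guide keeps z1). A throwaway parser, purely illustrative and not part of Swift, that splits the object-ring entry used above:

```shell
# Split a ring "add" device string into its fields.
parse_dev() {
  spec="$1"
  zone="${spec%%-*}"        # zone, e.g. z1
  rest="${spec#*-}"
  device="${rest##*/}"      # device under /srv/node, e.g. sdb1
  rest="${rest%/*}"
  ip_port="${rest%%R*}"     # storage ip:port
  repl="${rest#*R}"         # replication ip:port
  echo "zone=$zone ip_port=$ip_port replication=$repl device=$device"
}
parse_dev "z1-10.20.0.51:6000R10.20.0.51:6003/sdb1"
```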
After rebalancing, copy the updated account.ring.gz, container.ring.gz, and object.ring.gz from /etc/swift to every node running Swift services, and restart those services so the new node takes effect. With that, all nodes of the OpenStack core components are installed!
References
羅勇, 《OpenStack 手動安裝手冊(Icehouse)》 (Luo Yong, OpenStack Manual Installation Guide, Icehouse)