In the Cloud¶
第一章 随便说些什么¶
本人书写过程中也在接受新的思考方式方面的训练,所以从开始到结尾难免有些易察觉到的写作方式变化。
我写这个的目的是尽量多地解释概念、给出可重现的操作与学习指导,并提供发现和解决问题的思路。其中大部分操作细节我尽量使用现有公司的产品(比如Mirantis、Cloudera),而不是从源码部署,因为版本升级带来的配置细节问题交给他们处理会更通用些。同时我现在更倾向于使用现实的模型来解释或者构建一个原理或者架构,从而方便记忆和扩展。
1.1 我所看到的¶
从07年那会儿,甚至更早,拥有千万用户(包括盗版受害者在内)的行业先锋VMware,又有Google数据中心以及Amazon各种在线服务,这些实打实的东西遵循计算能力的摩尔定律,再顺应日益增长的商业需求,就有了“云”和“大数据”这两个让许多企业再次躁动的概念。
不管大家都怎么看,先引用一句关于“大数据”的经典,虽然与云计算不太相干:
“Big Data is like teenage sex: everyone talks about it, nobody really knows how to do it, everyone thinks everyone else is doing it, so everyone claims they are doing it.”
我始终认为认知很大一部分都来自于实践,即先有实践,才来理论,同时理论可以被提高被改变,然后再用于实践。
那就这样,但行好事,莫问前程。
作为公司的一名小技术人员,从客户交流到现场部署、从SDN交换机到Neutron、从QEMU到IAAS,我都有过了解或接触,并将其记录成文,是总结,亦是探索。期间难免会有这样或者那样的想法,而将这些想法实践出来的过程是会让人觉得生活依旧是美好的。
接下来是以我目前水平能看到的:
- 和多数行业一样,不管哪个公司,他们技术和关系,在不同客户那里都有不同的权重,这点的南北差异尤为明显;
- 以我所遇到的客户来看,部分客户不会明确的说使用“云”,他们的需求在多数情况下可以有这样或者那样的传统方案替代;明确要求使用“云”的客户,总会要求周边指标(用户接入、终端协议、USB重定向、视频流占用带宽、UPS、存储复用等)以及各种各样的定制,这是国内许多公有私有混合云厂商的一个痛处;
- 云计算的本意不是虚拟机,也不是管理方便的集群,国内好多公司搞的云计算都是在卖传统计算机集群所能提供的服务,要有公司来建分水岭。
- 尽量不要从技术角度出发去向客户推销技术,而是先关心客户当前面临的问题,再据此提供相应的技术。
所以,在需求上的灵活变通,也是有些许重要性的,不要把技术人员的固执带到客户那。
1.2 今天天气怎样¶
7:50 手机闹铃响了,随之而来的是你订阅的RSS、Flipboard推送的新闻。随手翻阅下,完后Siri说今天天气挺暖和的,可以穿你最喜欢的高领毛衣,起床。
8:30 高速公路上,地图告诉你前方拥堵,正好这时领导打来电话,你告诉Siri说“接听”。挂了电话以后,红灯亮了,汽车记录下你这次堵车的时间,并默默再次计算预计到时。
9:20 到达公司门口,没有保安,取而代之的是连接公司LDAP服务器的瞳孔识别。
10:00 收到信息,来自楼下快件派发柜,你订购的球鞋到了。
10:30 客户预约的拜访,你告诉Siri说把公司地址导航路线发送给客户张三。
11:10 通过公司IM告诉前台临时允许车牌号为K9031的车辆进来并发送车位信息给客户。
11:20 和客户在临时会客厅谈话,期间你把演说内容通过手势(食指一划)推送到客户邮箱,他们感到很意外(加分点)。
12:30 午餐时间,Siri还记得你在决心减肥期间,提醒你不要吃最喜欢的红烧肉,末了来一句“两个人被老虎追,谁最危险?Bingo,胖的那个”。
14:30 有客户如约送来了10T的财务数据,你通知技术部小李对这些数据进行方案II处理,百分之30的处理任务交给武汉机房的机器,因为这些从机柜到主板都是你依照节能环保的原则主持设计的。
16:00 Siri告诉你,抱歉,六点钟有局部降雨。
17:00 你把今天的备忘录存进公司派发给你的虚拟桌面,下班。
19:00 老婆大人的晚饭做好了,你也再次连上虚拟桌面把备忘录整理归档。
20:30 跑步结束,它告诉你这几周换季期间,要增加运动量,你说“可以”。
23:00 休息,Siri通过文字悄悄告诉你,明天结婚纪念日,记住。
所有已知未知的暗流,让人有了继续折腾下去的欲望。
我是一个着重实践的人,是的,比如磁盘不拆开的话我可能就对扇区、磁道、柱面没什么深刻概念(其实拆了也没多深刻)。所以,这一系列文章将尽量从操作或者现象中总结规律。一些原理性的东西会尽量用我觉得容易理解的形式表现出来。

整本“书”的结构将会是这个样子:
介绍下主流“云”的现状,会引用许多现成(尽量)客观的信息片段;畅想一下;从中选择部分组成一个完整系统,进行搭建(或模拟)并本地调优(避免过早优化);避免烂尾,结尾送两首短诗吧。
1.3 根据需求来架构¶
所有我们所需要的东西,至此有了一个整体的印象。接下来,我列举几个关键词,从中挑出一部分来组建我们“云”。
- 存储:将要构建的基础设施的基础,所以一定要对其可用性及速度有所保证。
- 虚拟化:通过软件模拟芯片以及处理器的运行结构,作为八九十年代生人(忽然觉得时间好快),多数人最熟悉的应该是红白机模拟器了吧。
- 集群:这个概念里有并行还有分布,两者比较明显的区别是是否在某一时间段内有共同的计算目的。
- 授权服务:现代数据中心的多数应用,都需要统一的用户数据,如何安全地使用统一的用户数据,也是我们需要考虑的。
- 安全与可信:安全的问题,很重要。不要等到哪天信用卡账户被盗刷才发现安全的重要性;可信,即是服务提供者与接受者具有可信的第三方提供支持。
- 可扩展:这里有两个层面,一个基础设施自身计算与存储能力扩增,另一个是与其他云计算平台的模块级别兼容。
- 外围设备:对于特殊的应用(环境监视、认证、GPU依赖应用等),我们需要一些外围设备来辅助完成(usb设备、显卡等)。对于重要设备或者非常依赖总线带宽的设备(令牌、显卡等)的安装,推荐将其与实施应用的物理机直接绑定,对于其他设备(U盘、摄像头等)推荐使用某一既定物理机进行基于TCP/IP的透传。
- 电源管理:由于集群中存在中央管理,所以有必要使用栅栏(fencing)去关闭与管理失去联系的机器,防止其自建中央管理。
- 编排(orchestration):即对虚拟机或应用等资源进行自动化的预配置与调度,我们的目的之一就是达到服务的快速响应。
如此划分的意图是什么呢?基础设施,在保证安全可靠的前提下,对本地资源实现最大化利用,也是我们追求的指标之一。还要注意,确保虚拟机状态监控无有遗漏,宿主机部署保证安全,否则,后期很痛苦,因为你永远无法控制用户使用你的环境做什么。
接下来,看看即将部署的各个层之间的关系:

如你所见,存储与计算是在同一节点上,所有管理服务以虚拟机形态运行,统统高可用。但是,这个架构一定存在一个弊端吧?没错,从整体服务的角度来看,虚拟机作主体的架构中存在一定程度的管理上的不便。传统集群只要关心物理设施及与其绑定的应用即可,它们在某个区的几号柜的那一层;而虚拟机们则可能有些“任性”,没有绑定的情况下会在集群中的某台宿主机中进行迁移。所以,我们需要一个完备的集群管理及报告系统,也会需要一个DataWare House来统计用户行为。
第二章 一个可靠的存储后端¶
2.1 谈谈分布式存储¶
计算机领域中有诸多有意思的东西可以把玩,在这儿且看看分布式存储。
集群文件系统
在某些场景下又可以称作网络文件系统、并行文件系统,在70年代由IBM提出并实现原型。
有几种方法可以实现集群形式,但多数仅仅是节点直连存储而不是将存储之上的文件系统进行合理“分布”。分布式文件系统同时挂载于多个服务器上,并在它们之间共享,可以提供类似于位置无关的数据定位或冗余等特点。并行文件系统是一种集群式的文件系统,它将数据分布于多个节点,其主要目的是提供冗余和提高读写性能。
共享磁盘(Shared-disk)/Storage-area network(SAN)
从应用程序使用的文件级别,到SAN之间的块级别的操作,诸如权限控制和传输,都是发生在客户端节点上。共享磁盘(Shared-disk)文件系统,在并行控制上做了很多工作,以至于其拥有比较一致连贯的文件系统视图,从而避免了多个客户端试图同时访问同一设备时数据损坏或丢失的情况发生。其中有种技术叫做围栏(Fencing),就是在某个或某些节点异常或失去响应时,集群自动将这些节点隔离(关机、断网、自恢复),保证其他节点数据访问的正确性。元数据(Metadata)类似目录,可以让所有的机器都能查找使用所有信息,在不同的架构中有不同的保存方式,有的均匀分布于集群,有的存储在中央节点。
实现的方式有iSCSI,AoE,FC,Infiniband等,比较著名的产品有Redhat GFS、Sun QFS、Vmware VMFS等。
分布式文件系统
分布式文件系统则不是块级别的共享的形式了,所有加进来的存储(文件系统)都是整个文件系统的一部分,所有数据的传输也是依靠网络来的。
它的设计有这么几个原则:
- 访问透明 客户端在其上的文件操作与本地文件系统无异
- 位置透明 其上的文件不代表其存储位置,只要给了全名就能访问
- 并发透明 所有客户端持有的文件系统的状态在任何时候都是一致的,不会出现A修改了F文件,但是B愣了半天才发现。
- 失败透明 理解为阻塞操作,不成功不回头。
- 异构性 文件系统可以在多种硬件以及操作系统下部署使用。
- 扩展性 随时添加进新的节点,无视其资格新旧。
- 冗余透明 客户端不需要了解文件存在于多个节点上这一事实。
- 迁移透明 客户端不需要了解文件根据负载均衡策略迁移的状况。
实现的方式有NFS、CIFS、SMB、NCP等,比较著名的产品有Google GFS、Hadoop HDFS、GlusterFS、Lustre等。
FUSE(Filesystem in Userspace)
FUSE是类UNIX系统下的一种机制,可以让普通用户创建、修改、访问文件系统,其功能是充当连接内核接口与用户空间程序的一座“桥”,目前普遍存在于多个操作系统中,比如Linux、BSD、Solaris、OSX、Android等。
FUSE来源于AVFS,不同于传统文件系统从磁盘读写数据,FUSE在文件系统或磁盘中有“转换”的角色,本身并不会存储数据。
在Linux系统中的实现有很多,比如各种要挂载ntfs文件系统使用到的ntfs-3g,以及即将要用到的glusterfs-fuse。

2.2 Glusterfs简述¶
接下来,说一下我所看到的glusterfs。
首先它可以基于以太网或者Infiniband构建大规模分布式文件系统,其设计符合奥卡姆剃刀原则,即“ 若无必要,勿增实体 ”;它的源码部分遵循GPLv3,另一部分遵循GPLv2/LGPLv3;统一对象视图,与UNIX设计哲学类似,所有皆对象;跨平台兼容性高,可作为hadoop、openstack、ovirt、Amazon EC2的后端。

Note
砖块(brick):即服务器节点上导出的一个目录,作为glusterfs的最基本单元。
卷(volume):用户最终使用的、由砖块组成的逻辑卷。
GFID:glusterfs中的每一个文件或者目录都有一个独立的128位GFID,与普通文件系统中的inode类似。
节点(peer):即集群中含有砖块并参与构建卷的计算机。
功能介绍¶
具体功能特性请参考 Glusterfs features 。
组合方式¶
gluster支持四种存储逻辑卷组合:普通分布式(Distributed)、条带(Striped)、冗余(Replicated)、条带冗余(Striped-Replicated)
- 普通分布式
- 条带
- 冗余
- 条带冗余
Translator¶
Translator是glusterfs设计时的核心之一,它具有以下功能:
- 将用户发来的请求转化为对存储的请求,可以是一对一、一对多或者一对零(cache)。
- 可用修改请求类型、路径、标志,甚至是数据(加密)。
- 拦截请求(访问控制)。
- 生成新请求(预取)。
类型
根据translator的类型,可用将其分为如下类型:
Translator 类型 | 功能 |
---|---|
Storage | 访问本地文件系统。 |
Debug | 提供调试信息。 |
Cluster | 处理集群环境下的读写请求。 |
Encryption | 加密/解密传送中的数据。 |
Protocol | 负责客户端与服务端之间的网络通信。 |
Performance | IO参数调节。 |
Bindings | 增加可扩展性,比如Python接口。 |
System | 负责系统访问,比如文件系统控制接口访问。 |
Scheduler | 调度集群环境下的文件访问请求。 |
Features | 提供额外文件特性,比如quota、锁机制等。 |
AFR¶
AFR(Automatic File Replication)是translator的一种,它使用额外机制去控制跟踪文件操作,用于跨砖块复制数据。
支持跨网备份
- 局域网备份
- 内网备份
- 广域网备份
其中,它有以下特点:
- 保持数据一致性
- 发生脑裂时自动恢复,应保证至少一个节点有正确数据
- 为读类操作(read/stat/readdir等)提供最新的数据
DHT¶
DHT(Distributed Hash Table)是glusterfs的真正核心。它决定将每个文件放置至砖块的位置。不同于多副本或者条带模式,它的功能是路由,而不是分割或者拷贝。
工作方式
分布式哈希表的核心是一致性哈希算法,又名环形哈希。它具有的一个性质是当一个存储空间被加入或者删除时,现有的映射关系的改变尽可能小。
假设我们的哈希算出一个32位的哈希值,即一个[0,2^32-1]的空间,现将它首尾相接,即构成一个环形。
假如我们有四个存储砖块,每一个砖块B都有一个哈希值H,假设四个文件及其哈希值表示为(k,v),那么他们在哈希环上即如此表示:

每一个文件哈希k顺时针移动遇到一个H后,就将文件k保存至B。

上图表示的是理想环境下文件与砖块的存储映射,当有砖块失效时,存储位置的映射也就发生了改变。比如砖块B3失效,那么原先映射到B3的文件会继续顺时针移动,保存至B4上。

当砖块数目发生改变时,为了服务器能平摊负载,我们需要一次rebalance来稍许改变映射关系。rebalance的技巧即是创建一个虚拟的存储位置B’,使所有砖块及其虚拟砖块尽量都存储有文件。
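为了便于理解,下面用一小段Python模拟上述环形哈希的定位与失效重映射过程(仅为示意:哈希函数、砖块与文件名均为假设,与glusterfs的实际实现无关):
import hashlib

def ring_hash(name, space=2**32):
    # 用md5模拟一个32位的哈希环空间
    return int(hashlib.md5(name.encode()).hexdigest(), 16) % space

def locate(filename, bricks):
    # 文件哈希顺时针遇到的第一个砖块即为存放位置,超过环尾则回绕到环首
    h = ring_hash(filename)
    ring = sorted((ring_hash(b), b) for b in bricks)
    for brick_hash, brick in ring:
        if brick_hash >= h:
            return brick
    return ring[0][1]

bricks = ["B1", "B2", "B3", "B4"]
files = ["f1", "f2", "f3", "f4"]
print(dict((f, locate(f, bricks)) for f in files))
# 模拟B3失效:只有原先落在B3上的文件会被重新映射,其余映射关系不变
print(dict((f, locate(f, [b for b in bricks if b != "B3"])) for f in files))
可以看到,砖块增删时只影响哈希环上相邻的一段文件,这也是rebalance开销可控的原因。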


2.3 搭建Glusterfs作为基础存储¶
既然要搭建一个稳健的基础存储,那么glusterfs推荐使用distributed striped replicated方式,这里使用4台预装CentOS 6(SELINUX设置为permissive)的机器进行演示。
添加DNS或者修改hosts文件¶
鉴于笔者所在环境中暂时没有配置独立的DNS,此处先修改hosts文件以完成配置,注意每台机器都要添加:
/etc/hosts
127.0.0.1 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6
192.168.10.101 gs1.example.com
192.168.10.102 gs2.example.com
192.168.10.103 gs3.example.com
192.168.10.104 gs4.example.com
同样地在所有机器上添加repo:
/etc/yum.repos.d/gluster_epel.repo
[epel]
name=Extra Packages for Enterprise Linux 6 - $basearch
#baseurl=http://download.fedoraproject.org/pub/epel/6/$basearch
mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-6&arch=$basearch
failovermethod=priority
enabled=1
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6
[glusterfs-epel]
name=GlusterFS is a clustered file-system capable of scaling to several petabytes.
baseurl=http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/epel-$releasever/$basearch/
enabled=1
skip_if_unavailable=1
gpgcheck=0
gpgkey=http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/pub.key
[glusterfs-noarch-epel]
name=GlusterFS is a clustered file-system capable of scaling to several petabytes.
baseurl=http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/epel-$releasever/noarch
enabled=1
skip_if_unavailable=1
gpgcheck=0
gpgkey=http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/pub.key
[glusterfs-source-epel]
name=GlusterFS is a clustered file-system capable of scaling to several petabytes. - Source
baseurl=http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/epel-$releasever/SRPMS
enabled=0
skip_if_unavailable=1
gpgcheck=1
gpgkey=http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/pub.key
准备磁盘作为砖块¶
在所有节点上安装glusterfs服务端、客户端以及xfs用户空间工具,并启动相关服务:
# yum install -y glusterfs glusterfs-fuse glusterfs-server xfsprogs
# /etc/init.d/glusterd start
# /etc/init.d/glusterfsd start
# chkconfig glusterfsd on
# chkconfig glusterd on
假设每台机器除系统盘之外都有2块1T SATA硬盘,我们需要对其进行分区、格式化并挂载(以下fdisk操作对/dev/sdb、/dev/sdc各执行一次,空行表示接受默认的起止扇区):
# fdisk /dev/sdX << EOF
n
p
1


w
EOF
格式化并挂载:
# mkfs.xfs -i size=512 /dev/sdb1
# mkfs.xfs -i size=512 /dev/sdc1
# mkdir /gluster_brick_root1
# mkdir /gluster_brick_root2
# echo -e "/dev/sdb1\t/gluster_brick_root1\txfs\tdefaults\t0 0\n/dev/sdc1\t/gluster_brick_root2\txfs\tdefaults\t0 0" >> /etc/fstab
# mount -a
# mkdir /gluster_brick_root1/data
# mkdir /gluster_brick_root2/data
Note
为什么要用XFS?
XFS具有元数据日志功能,可以快速恢复数据;同时,可以在线扩容及碎片整理。其他文件系统比如EXT3,EXT4未做充分测试。
添加卷¶
在其中任意一台机器上,比如gs2.example.com,执行
# gluster peer probe gs1.example.com
# gluster peer probe gs3.example.com
# gluster peer probe gs4.example.com
使用砖块进行卷的构建:
# gluster
> volume create gluster-vol1 stripe 2 replica 2 \
gs1.example.com:/gluster_brick_root1/data gs2.example.com:/gluster_brick_root1/data \
gs1.example.com:/gluster_brick_root2/data gs2.example.com:/gluster_brick_root2/data \
gs3.example.com:/gluster_brick_root1/data gs4.example.com:/gluster_brick_root1/data \
gs3.example.com:/gluster_brick_root2/data gs4.example.com:/gluster_brick_root2/data force
> volume start gluster-vol1 # 启动卷
> volume status gluster-vol1 # 查看卷状态
Status of volume: gluster-vol1
Gluster process Port Online Pid
------------------------------------------------------------------------------
Brick gs1.example.com:/gluster_brick_root1/data 49152 Y 1984
Brick gs2.example.com:/gluster_brick_root1/data 49152 Y 1972
Brick gs1.example.com:/gluster_brick_root2/data 49153 Y 1995
Brick gs2.example.com:/gluster_brick_root2/data 49153 Y 1983
Brick gs3.example.com:/gluster_brick_root1/data 49152 Y 1961
Brick gs4.example.com:/gluster_brick_root1/data 49152 Y 1975
Brick gs3.example.com:/gluster_brick_root2/data 49153 Y 1972
Brick gs4.example.com:/gluster_brick_root2/data 49153 Y 1986
NFS Server on localhost 2049 Y 1999
Self-heal Daemon on localhost N/A Y 2006
NFS Server on gs2.example.com 2049 Y 2007
Self-heal Daemon on gs2.example.com N/A Y 2014
NFS Server on gs4.example.com 2049 Y 1995
Self-heal Daemon on gs4.example.com N/A Y 2002
NFS Server on gs3.example.com 2049 Y 1986
Self-heal Daemon on gs3.example.com N/A Y 1993
Task Status of Volume gluster-vol1
------------------------------------------------------------------------------
There are no active volume tasks
> volume info all # 查看所有卷信息
Volume Name: gluster-vol1
Type: Distributed-Striped-Replicate
Volume ID: bc8e102c-2b35-4748-ab71-7cf96ce083f3
Status: Started
Number of Bricks: 2 x 2 x 2 = 8
Transport-type: tcp
Bricks:
Brick1: gs1.example.com:/gluster_brick_root1/data
Brick2: gs2.example.com:/gluster_brick_root1/data
Brick3: gs1.example.com:/gluster_brick_root2/data
Brick4: gs2.example.com:/gluster_brick_root2/data
Brick5: gs3.example.com:/gluster_brick_root1/data
Brick6: gs4.example.com:/gluster_brick_root1/data
Brick7: gs3.example.com:/gluster_brick_root2/data
Brick8: gs4.example.com:/gluster_brick_root2/data
挂载卷¶
当以glusterfs方式挂载时,客户端的hosts文件里需要能解析集群中任一节点的主机名:
挂载glusterfs的客户端/etc/hosts
127.0.0.1 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6
192.168.1.81 gs1.example.com
安装gluster-fuse,将gluster卷作为glusterfs挂载,并写入1M文件查看其在各砖块分配:
# yum install glusterfs glusterfs-fuse
# mount.glusterfs 192.168.1.81:/gluster-vol1 /mnt
# cd /mnt
# dd if=/dev/zero of=a.img bs=1k count=1k
# cp a.img b.img; cp a.img c.img; cp a.img d.img
在四台服务端分别查看:
[root@gs1 ~]# ls -lh /gluster_brick_root*
/gluster_brick_root1/data/:
total 1.0M
-rw-r--r--. 2 root root 512K Apr 22 17:13 a.img
-rw-r--r--. 2 root root 512K Apr 22 17:13 d.img
/gluster_brick_root2/data/:
total 1.0M
-rw-r--r--. 2 root root 512K Apr 22 17:13 a.img
-rw-r--r--. 2 root root 512K Apr 22 17:13 d.img
[root@gs2 ~]# ls -lh /gluster_brick_root*
/gluster_brick_root1/data/:
total 1.0M
-rw-r--r--. 2 root root 512K Apr 22 17:13 a.img
-rw-r--r--. 2 root root 512K Apr 22 17:13 d.img
/gluster_brick_root2/data/:
total 1.0M
-rw-r--r--. 2 root root 512K Apr 22 17:13 a.img
-rw-r--r--. 2 root root 512K Apr 22 17:13 d.img
[root@gs3 ~]# ls -lh /gluster_brick_root*
/gluster_brick_root1/data/:
total 1.0M
-rw-r--r--. 2 root root 512K Apr 22 17:13 b.img
-rw-r--r--. 2 root root 512K Apr 22 17:13 c.img
/gluster_brick_root2/data/:
total 1.0M
-rw-r--r--. 2 root root 512K Apr 22 17:13 b.img
-rw-r--r--. 2 root root 512K Apr 22 17:13 c.img
[root@gs4 ~]# ls -lh /gluster_brick_root*
/gluster_brick_root1/data/:
total 1.0M
-rw-r--r--. 2 root root 512K Apr 22 17:13 b.img
-rw-r--r--. 2 root root 512K Apr 22 17:13 c.img
/gluster_brick_root2/data/:
total 1.0M
-rw-r--r--. 2 root root 512K Apr 22 17:13 b.img
-rw-r--r--. 2 root root 512K Apr 22 17:13 c.img
至此,所有配置结束。
2.4 Glusterfs应用示例及技巧¶
参数调整¶
Option | Description | Default Value | Available Options |
---|---|---|---|
auth.allow | IP addresses of the clients which should be allowed to access the volume. | * (allow all) | Valid IP address which includes wild card patterns including *, such as 192.168.1.* |
auth.reject | IP addresses of the clients which should be denied to access the volume. | NONE (reject none) | Valid IP address which includes wild card patterns including *, such as 192.168.2.* |
client.grace-timeout | Specifies the duration for the lock state to be maintained on the client after a network disconnection. | 10 | 10 - 1800 secs |
cluster.self-heal-window-size | Specifies the maximum number of blocks per file on which self-heal would happen simultaneously. | 16 | 0 - 1025 blocks |
cluster.data-self-heal-algorithm | Specifies the type of self-heal. If you set the option as “full”, the entire file is copied from source to destinations. If the option is set to “diff” the file blocks that are not in sync are copied to destinations. Reset uses a heuristic model. If the file does not exist on one of the subvolumes, or a zero-byte file exists (created by entry self-heal) the entire content has to be copied anyway, so there is no benefit from using the “diff” algorithm. If the file size is about the same as page size, the entire file can be read and written with a few operations, which will be faster than “diff” which has to read checksums and then read and write. | reset | full/diff/reset |
cluster.min-free-disk | Specifies the percentage of disk space that must be kept free. Might be useful for non-uniform bricks | 10% | Percentage of required minimum free disk space |
cluster.stripe-block-size | Specifies the size of the stripe unit that will be read from or written to. | 128 KB (for all files) | size in bytes |
cluster.self-heal-daemon | Allows you to turn-off proactive self-heal on replicated | On | On/Off |
cluster.ensure-durability | This option makes sure the data/metadata is durable across abrupt shutdown of the brick. | On | On/Off |
diagnostics.brick-log-level | Changes the log-level of the bricks. | INFO | DEBUG/WARNING/ERROR/CRITICAL/NONE/TRACE |
diagnostics.client-log-level | Changes the log-level of the clients. | INFO | DEBUG/WARNING/ERROR/CRITICAL/NONE/TRACE |
diagnostics.latency-measurement | Statistics related to the latency of each operation would be tracked. | Off | On/Off |
diagnostics.dump-fd-stats | Statistics related to file-operations would be tracked. | Off | On |
features.read-only | Enables you to mount the entire volume as read-only for all the clients (including NFS clients) accessing it. | Off | On/Off |
features.lock-heal | Enables self-healing of locks when the network disconnects. | On | On/Off |
features.quota-timeout | For performance reasons, quota caches the directory sizes on client. You can set timeout indicating the maximum duration of directory sizes in cache, from the time they are populated, during which they are considered valid | 0 | 0 - 3600 secs |
geo-replication.indexing | Use this option to automatically sync the changes in the filesystem from Master to Slave. | Off | On/Off |
network.frame-timeout | The time frame after which the operation has to be declared as dead, if the server does not respond for a particular operation. | 1800 (30 mins) | 1800 secs |
network.ping-timeout | The time duration for which the client waits to check if the server is responsive. When a ping timeout happens, there is a network disconnect between the client and server. All resources held by server on behalf of the client get cleaned up. When a reconnection happens, all resources will need to be re-acquired before the client can resume its operations on the server. Additionally, the locks will be acquired and the lock tables updated. This reconnect is a very expensive operation and should be avoided. | 42 Secs | 42 Secs |
nfs.enable-ino32 | For 32-bit nfs clients or applications that do not support 64-bit inode numbers or large files, use this option from the CLI to make Gluster NFS return 32-bit inode numbers instead of 64-bit inode numbers. | Off | On/Off |
nfs.volume-access | Set the access type for the specified sub-volume. | read-write | read-write/read-only |
nfs.trusted-write | If there is an UNSTABLE write from the client, STABLE flag will be returned to force the client to not send a COMMIT request. In some environments, combined with a replicated GlusterFS setup, this option can improve write performance. This flag allows users to trust Gluster replication logic to sync data to the disks and recover when required. COMMIT requests if received will be handled in a default manner by fsyncing. STABLE writes are still handled in a sync manner. | Off | On/Off |
nfs.trusted-sync | All writes and COMMIT requests are treated as async. This implies that no write requests are guaranteed to be on server disks when the write reply is received at the NFS client. Trusted sync includes trusted-write behavior. | Off | On/Off |
nfs.export-dir | This option can be used to export specified comma separated subdirectories in the volume. The path must be an absolute path. Along with path allowed list of IPs/hostname can be associated with each subdirectory. If provided connection will allowed only from these IPs. Format: <dir>[(hostspec[hostspec...])][,...]. Where hostspec can be an IP address, hostname or an IP range in CIDR notation. Note: Care must be taken while configuring this option as invalid entries and/or unreachable DNS servers can introduce unwanted delay in all the mount calls. | No sub directory exported. | Absolute path with allowed list of IP/hostname |
nfs.export-volumes | Enable/Disable exporting entire volumes, instead if used in conjunction with nfs3.export-dir, can allow setting up only subdirectories as exports. | On | On/Off |
nfs.rpc-auth-unix | Enable/Disable the AUTH_UNIX authentication type. This option is enabled by default for better interoperability. However, you can disable it if required. | On | On/Off |
nfs.rpc-auth-null | Enable/Disable the AUTH_NULL authentication type. It is not recommended to change the default value for this option. | On | On/Off |
nfs.rpc-auth-allow<IP- Addresses> | Allow a comma separated list of addresses and/or hostnames to connect to the server. By default, all clients are disallowed. This allows you to define a general rule for all exported volumes. | Reject All | IP address or Host name |
nfs.rpc-auth-reject<IP- Addresses> | Reject a comma separated list of addresses and/or hostnames from connecting to the server. By default, all connections are disallowed. This allows you to define a general rule for all exported volumes. | Reject All | IP address or Host name |
nfs.ports-insecure | Allow client connections from unprivileged ports. By default only privileged ports are allowed. This is a global setting in case insecure ports are to be enabled for all exports using a single option. | Off | On/Off |
nfs.addr-namelookup | Turn-off name lookup for incoming client connections using this option. In some setups, the name server can take too long to reply to DNS queries resulting in timeouts of mount requests. Use this option to turn off name lookups during address authentication. Note, turning this off will prevent you from using hostnames in rpc-auth.addr.* filters. | On | On/Off |
nfs.register-with-portmap | For systems that need to run multiple NFS servers, you need to prevent more than one from registering with portmap service. Use this option to turn off portmap registration for Gluster NFS. | On | On/Off |
nfs.port <PORT- NUMBER> | Use this option on systems that need Gluster NFS to be associated with a non-default port number. | NA | 38465- 38467 |
nfs.disable | Turn-off volume being exported by NFS | Off | On/Off |
performance.write-behind-window-size | Size of the per-file write-behind buffer. | 1MB | Write-behind cache size |
performance.io-thread-count | The number of threads in IO threads translator. | 16 | 0-65 |
performance.flush-behind | If this option is set ON, instructs write-behind translator to perform flush in background, by returning success (or any errors, if any of previous writes were failed) to application even before flush is sent to backend filesystem. | On | On/Off |
performance.cache-max-file-size | Sets the maximum file size cached by the io-cache translator. Can use the normal size descriptors of KB, MB, GB,TB or PB (for example, 6GB). Maximum size uint64. | 2 ^ 64 -1 bytes | size in bytes |
performance.cache-min-file-size | Sets the minimum file size cached by the io-cache translator. Values same as “max” above | 0B | size in bytes |
performance.cache-refresh-timeout | The cached data for a file will be retained till ‘cache-refresh-timeout’ seconds, after which data re-validation is performed. | 1s | 0-61 |
performance.cache-size | Size of the read cache. | 32 MB | size in bytes |
server.allow-insecure | Allow client connections from unprivileged ports. By default only privileged ports are allowed. This is a global setting in case insecure ports are to be enabled for all exports using a single option. | On | On/Off |
server.grace-timeout | Specifies the duration for the lock state to be maintained on the server after a network disconnection. | 10 | 10 - 1800 secs |
server.statedump-path | Location of the state dump file. | tmp directory of the brick | New directory path |
storage.health-check-interval | Number of seconds between health-checks done on the filesystem that is used for the brick(s). Defaults to 30 seconds, set to 0 to disable. | 30 secs | Number of seconds (0 to disable) |
具体参数参考 gluster_doc 。
文件权限¶
glusterfs在创建卷时会更改砖块所有者为root.root,对于某些应用请注意更改砖块目录所有者(比如在/etc/rc.local中添加chown,不要更改砖块下隐藏目录.glusterfs)。
砖块组合¶
网上现有的部分文档中所述的砖块划分方式,是将整个磁盘划分为砖块,此种划分方式在某些场景下不是很好(比如存储复用),可以在/brickX下创建目录,比如data1,同时在创建glusterfs卷的时候使用HOST:/brickX/data1作为砖块,以合理利用存储空间。
normal、replica、striped卷组合¶
砖块的划分排序:striped(normal)优先,replica在striped(normal)基础上做冗余;计算大小时,同一replica组中的brick合并为一个砖块,一个striped组可看做一个有效块。
假设我们有4个主机,8个砖块,每个砖块都是5GB,如下图:
创建卷时使用如下命令:
# gluster volume create gluster-vol1 stripe 2 replica 2 \
host1:/brick1 host1:/brick2 host2:/brick1 host2:/brick2 \
host3:/brick1 host3:/brick2 host4:/brick1 host4:/brick2 force
砖块将会按照如下进行组合:
然而,创建卷时使用如下命令:
# gluster volume create gluster-vol1 stripe 2 replica 2 \
host1:/brick1 host2:/brick1 host3:/brick1 host4:/brick1 \
host1:/brick2 host2:/brick2 host3:/brick2 host4:/brick2 force
砖块将会按照如下进行组合:
作为nfs挂载¶
由于glusterfs占用了2049端口,所以其与nfs server一般不能共存于同一台服务器,除非更改nfs服务端口。
# mount -t nfs -o vers=3 server1:/volume1 /mnt
作为cifs挂载¶
先在某一服务器或者客户端将其挂载,再以cifs方式导出:
/etc/samba/smb.conf
[glustertest]
comment = For testing a Gluster volume exported through CIFS
path = /mnt/glusterfs
read only = no
guest ok = yes
修复裂脑(split-brain)¶
裂脑发生以后,各节点信息可能会出现不一致。可以通过以下步骤查看并修复。
- 定位裂脑文件
通过命令
# gluster volume heal gluster-vol1 info split-brain
或者查看在客户端仍然是Input/Output错误的文件。
- 关闭已经打开的文件或者虚机
- 确定正确副本
- 恢复扩展属性
登录到后台,查看脑裂文件的MD5sum和时间,判断哪个副本是需要保留的。 然后删除不再需要的副本即可。(glusterfs采用硬链接方式,所以需要同时删除.glusterfs下面的硬链接文件)
首先检查文件的md5值,并且和其他的节点比较,确认是否需要删除此副本。
[root@hostd data0]# md5sum 1443f429-7076-4792-9cb7-06b1ee38d828/images/5c881816-6cdc-4d8a-a8c8-4b068a917c2f/80f33212-7adb-4e24-9f01-336898ae1a2c
6c6b704ce1c0f6d22204449c085882e2 1443f429-7076-4792-9cb7-06b1ee38d828/images/5c881816-6cdc-4d8a-a8c8-4b068a917c2f/80f33212-7adb-4e24-9f01-336898ae1a2c
通过ls -i 和find -inum 找到此文件及其硬连接文件。
删除两个文件
[root@hostd data0]# find -inum 12976365 |xargs rm -rf
脑裂文件恢复完成,此文件可以在挂载点上读写。
砖块复用¶
当卷正在被使用,其中一个砖块被删除,而用户试图再次将其用于卷时,可能会出现“/bricks/app or a prefix of it is already part of a volume”。
解决方法:
# setfattr -x trusted.glusterfs.volume-id $brick_path
# setfattr -x trusted.gfid $brick_path
# rm -rf $brick_path/.glusterfs
高可用业务IP¶
由于挂载存储时需要指定集群中的任意IP,所以我们可以使用Heartbeat/CTDB/Pacemaker等集群软件来保证业务IP的高可用。
可参考
http://clusterlabs.org/wiki/Debian_Lenny_HowTo#Configure_an_IP_resource
第三章 合适的虚拟化平台¶
3.1 虚拟化平台简介¶
Welcome to the core!
云计算目前主流实现有SaaS(Software-as-a-service)、PaaS(Platform-as-a-service)和IaaS(Infrastructure-as-a-service)。IaaS和PaaS都算作基础件,SaaS可以与基础件自由组合或者单独使用。
虚拟化技术已经很受重视而且被推到了一个浪尖。如今诸多开源虚拟化平台,比如XenServer、CloudStack、OpenStack、Eucalyptus、oVirt、OpenVZ、Docker、LXC等,我们都看花了眼,些许慌乱不知哪个适合自己了。
各平台的实现方式包括:全虚拟化、半虚拟化以及操作系统级(容器)虚拟化等。
IaaS云计算平台,综合来说具有以下特性:
- 虚拟化:虚拟化作为云计算平台的核心,是资源利用的主要形式之一。网络、存储、CPU乃至GPU等主要通过虚拟主机进行实体化。
- 分布式:分布式可利用共享的存储,通过网络将资源进行整合,是实现资源化的必备条件。
- 高可用:对于规模庞大的云平台,提供存储、管理节点、重要服务的高可用性是十分必要的。笔者在写这篇文章时,oVirt 3.4已经可以做到管理节点的高可用。
- 兼容性:云计算平台众多,各家有各家的特点,同一数据中心部署不同的平台的可能性极大,因此,主要服务(比如虚拟网络、存储、虚机等)要有一定兼容性,比如oVirt可以利用OpenStack的Neutron提供的虚拟网络、Foreman可以方便地在oVirt上部署新机器等。另外,也有DeltaCloud、libvirt等API,用户可以利用它们自由地实现自己的云综合管理工具。
- 资源池化:网络、存储、CPU或者GPU可以综合或者单独划分资源池,通过配额进行分配,从而保证其合理利用。
- 安全性:现代企业对于安全性的要求已经十分苛刻,除去传统数据加密、访问控制,甚至对于社会工程也要有一定防护能力;用户数据对非企业管理员要具有防护性,即使将虚拟机磁盘文件拷贝出来也不能直接获取其内容。
- 需求导向性:在计算水平上,优质资源最先提供给重要服务;服务水平上,平台具有可定制能力。
oVirt¶
oVirt目前两种部署方式:
- 管理独占一台物理机
- 高可用管理引擎(hosted engine)
Note
常见名词
管理引擎(engine):提供平台web管理、api,各种扩展服务,vdsm以及libvirt服务的重要节点。
宿主机(node/host):为平台的功能提供支持,主要是虚拟化能力。
数据中心(data center):以数据共享方式(Shared/Local)划分的容器,可以包含多个集群。
存储域(storage domain):平台所依赖的各种存储空间。
逻辑网络(logic network):物理网络或者虚拟网络的抽象代表。
池(pool):用于批量创建虚拟机。
集群策略(cluster policy):宿主机/虚拟机运行或者迁移时所遵循的原则。
DWH/Reports:可以查看当前状态报告(需要ovirt-engine-reports)。
可信服务: 需要OpenAttestation 。
电源管理: 如果没有物理电源控制器,可以直接指定某一台主机为代理机,以构建更稳健的集群。
CloudStack¶
可能也不错,我没用过。
3.2 搭建oVirt管理引擎¶
搭建oVirt平台的步骤:
- 安装Redhat类操作系统(Redhat、CentOS、Fedora)
- 从yum安装oVirt,并执行engine-setup,或者直接从oVirt提供的 iso 进行安装
- 添加宿主机
- 添加存储域
- 创建虚拟机
系统准备¶
所有机器的SELINUX都设置为permissive。
/etc/selinux/config
SELINUX=permissive
# setenforce 0
如有需要,清除iptables规则。
# iptables -F
# service iptables save
每台机器上都要添加作为虚拟机运行的engine的FQDN,此处为ha.example.com。
# echo -e '192.168.10.100\tha.example.com' >> /etc/hosts
存储可以使用之前的glusterfs,方式为NFS_V3,注意将brick的权限设置为vdsm.kvm或者36:36。
Note
普通NFS服务器设置
因为考虑到NFS4的权限配置较为复杂,推荐NFS使用V3,修改nfsmount.conf中的Version为3。
# gluster volume create gluster-vol1 replica 2 \
gs1.example.com:/gluster_brick0 gs2.example.com:/gluster_brick0 \
gs3.example.com:/gluster_brick0 gs4.example.com:/gluster_brick0 \
gs1.example.com:/gluster_brick1 gs2.example.com:/gluster_brick1 \
gs3.example.com:/gluster_brick1 gs4.example.com:/gluster_brick1 force
由于管理端以及节点的网络服务依赖于 network 而非 NetworkManager ,我们需要启用前者禁用后者,在每一台服务器上都进行如下类似配置修改网络。
/etc/sysconfig/network-scripts/ifcfg-eth0
NAME=eth0
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=static
# 注意修改此处的IP
IPADDR=192.168.10.101
NETMASK=255.255.255.0
GATEWAY=192.168.10.1
DNS1=192.168.10.1
# chkconfig NetworkManager off
# chkconfig network on
# service NetworkManager stop; service network restart
添加repo¶
Note
oVirt3.4.2 特别说明
2014年六七月的初次安装oVirt的用户可能会遇到添加宿主机失败的问题,暂时解决办法为卸载python-pthreading-0.1.3-1及以后的版本,安装老版本,比如 ftp://ftp.icm.edu.pl/vol/rzm2/linux-fedora/linux/epel/6/i386/python-pthreading-0.1.3-0.el6.noarch.rpm ,再尝试安装vdsm并添加宿主机。
使用rpm:
# yum localinstall http://plain.resources.ovirt.org/releases/ovirt-release/ovirt-release34.rpm
# yum install ovirt-hosted-engine-setup
或者手动添加:
[ovirt-stable]
name=Latest oVirt Releases
baseurl=http://resources.ovirt.org/releases/stable/rpm/EL/$releasever/
enabled=1
skip_if_unavailable=1
gpgcheck=0
[ovirt-3.4-stable]
name=Latest oVirt 3.4.z Releases
baseurl=http://resources.ovirt.org/releases/3.4/rpm/EL/$releasever/
enabled=1
skip_if_unavailable=1
gpgcheck=0
[epel]
name=Extra Packages for Enterprise Linux 6 - $basearch
#baseurl=http://download.fedoraproject.org/pub/epel/6/$basearch
mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-6&arch=$basearch
failovermethod=priority
enabled=1
gpgcheck=0
[ovirt-glusterfs-epel]
name=GlusterFS is a clustered file-system capable of scaling to several petabytes.
baseurl=http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/epel-$releasever/$basearch/
enabled=1
skip_if_unavailable=1
gpgcheck=0
[ovirt-glusterfs-noarch-epel]
name=GlusterFS is a clustered file-system capable of scaling to several petabytes.
baseurl=http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/epel-$releasever/noarch
enabled=1
skip_if_unavailable=1
gpgcheck=0
从下面两种方式中选择之一进行搭建
搭建普通oVirt虚拟化平台¶
笔者写此文时oVirt已经更新到3.4。
在此,我们会用到之前创建的distributed-replicated存储,这样可以使系统服务的可用性有所提高。
对于初次使用oVirt的用户,建议使用此种搭建方式,太折腾的话就吓走好多目标读者了 。
使用之前的四台机器,分别为gs1.example.com,gs2.example.com,gs3.example.com和gs4.example.com,其中,将gs1作为管理机安装ovirt-engine,其余三台作为节点(node),存储使用已经创建好的glusterfs。

在gs1上运行如下命令。
# yum install ovirt-engine
# engine-setup --offline
[ INFO ] Stage: Initializing
[ INFO ] Stage: Environment setup
Configuration files: ['/etc/ovirt-engine-setup.conf.d/10-packaging.conf']
Log file: /var/log/ovirt-engine/setup/ovirt-engine-setup-20140508054649.log
Version: otopi-1.2.0 (otopi-1.2.0-1.el6)
[ INFO ] Stage: Environment packages setup
[ INFO ] Stage: Programs detection
[ INFO ] Stage: Environment setup
[ INFO ] Stage: Environment customization
--== PRODUCT OPTIONS ==--
--== PACKAGES ==--
--== NETWORK CONFIGURATION ==--
Host fully qualified DNS name of this server [gs1.example.com]:
Setup can automatically configure the firewall on this system.
Note: automatic configuration of the firewall may overwrite current settings.
Do you want Setup to configure the firewall? (Yes, No) [Yes]:
The following firewall managers were detected on this system: iptables
Firewall manager to configure (iptables): iptables
[ INFO ] iptables will be configured as firewall manager.
--== DATABASE CONFIGURATION ==--
Where is the Engine database located? (Local, Remote) [Local]:
Setup can configure the local postgresql server automatically for the engine to run. This may conflict with existing applications.
Would you like Setup to automatically configure postgresql and create Engine database, or prefer to perform that manually? (Automatic, Manual) [Automatic]:
--== OVIRT ENGINE CONFIGURATION ==--
Application mode (Both, Virt, Gluster) [Both]:
Default storage type: (NFS, FC, ISCSI, POSIXFS) [NFS]:
Engine admin password:
Confirm engine admin password:
--== PKI CONFIGURATION ==--
Organization name for certificate [example.com]:
--== APACHE CONFIGURATION ==--
Setup can configure apache to use SSL using a certificate issued from the internal CA.
Do you wish Setup to configure that, or prefer to perform that manually? (Automatic, Manual) [Automatic]:
Setup can configure the default page of the web server to present the application home page. This may conflict with existing applications.
Do you wish to set the application as the default page of the web server? (Yes, No) [Yes]:
--== SYSTEM CONFIGURATION ==--
Configure WebSocket Proxy on this machine? (Yes, No) [Yes]:
Configure an NFS share on this server to be used as an ISO Domain? (Yes, No) [Yes]: no
--== MISC CONFIGURATION ==--
--== END OF CONFIGURATION ==--
[ INFO ] Stage: Setup validation
--== CONFIGURATION PREVIEW ==--
Engine database name : engine
Engine database secured connection : False
Engine database host : localhost
Engine database user name : engine
Engine database host name validation : False
Engine database port : 5432
PKI organization : example.com
Application mode : both
Firewall manager : iptables
Update Firewall : True
Configure WebSocket Proxy : True
Host FQDN : gs1.example.com
Datacenter storage type : nfs
Configure local Engine database : True
Set application as default page : True
Configure Apache SSL : True
Please confirm installation settings (OK, Cancel) [OK]: ok
[ INFO ] Stage: Transaction setup
[ INFO ] Stopping engine service
[ INFO ] Stopping websocket-proxy service
[ INFO ] Stage: Misc configuration
[ INFO ] Stage: Package installation
[ INFO ] Stage: Misc configuration
[ INFO ] Initializing PostgreSQL
[ INFO ] Creating PostgreSQL 'engine' database
[ INFO ] Configuring PostgreSQL
[ INFO ] Creating Engine database schema
[ INFO ] Creating CA
[ INFO ] Configuring WebSocket Proxy
[ INFO ] Generating post install configuration file '/etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf'
[ INFO ] Stage: Transaction commit
[ INFO ] Stage: Closing up
--== SUMMARY ==--
SSH fingerprint: 1B:FD:08:A2:FD:83:20:8A:65:F5:0D:F6:CB:BF:46:C7
Internal CA 28:7E:D6:6B:F7:F2:6C:B5:60:27:44:C3:7F:3C:22:63:E5:68:DD:F4
Web access is enabled at:
http://gs1.example.com:80/ovirt-engine
https://gs1.example.com:443/ovirt-engine
Please use the user "admin" and password specified in order to login into oVirt Engine
--== END OF SUMMARY ==--
[ INFO ] Starting engine service
[ INFO ] Restarting httpd
[ INFO ] Generating answer file '/var/lib/ovirt-engine/setup/answers/20140508054842-setup.conf'
[ INFO ] Stage: Clean up
Log file is located at /var/log/ovirt-engine/setup/ovirt-engine-setup-20140508054649.log
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ INFO ] Execution of setup completed successfully
至此,管理节点安装结束,参考 3.3 添加节点以及存储域 加入节点以及存储域。
搭建管理端高可用oVirt(hosted engine)¶
高可用,我们可以这么划分:
- 存储的高可用:传统存储使用DRBD/Heartbeat或者独立的存储设备保证高可用,在灵活性、可扩展性、成本上都有一定局限。在与主机同台使用Ceph或者Glusterfs可以较好地保证资源充分利用地同时,又满足了高度可用的要求。
- 管理高可用:因为比如oVirt、OpenStack这种拥有大型数据库的设施不像存储设施那样高效的同步,需要独立的管理运行在集群中的某一台机器上来同步集群消息,所以,管理端的高可用也是十分必要的。
- 虚拟机/服务高可用:虚拟机在宕机时可自动重启,在主机资源紧张时可用迁移到其他负载较低的主机上,从而保证服务的质量以及连续性。

- 宿主机的CPU架构建议选择Westmere(Westmere E56xx/L56xx/X56xx)、Nehalem(Intel Core i7 9xx)、Penryn(Intel Core 2 Duo P9xxx)或者Conroe(Intel Celeron_4x0)中的之一。
- CPU Family table 参阅
- Intel Architecture and Processor Identification With CPUID Model and Family Numbers
- 建议参考前面“搭建普通oVirt虚拟化平台”一节提前安装含有oVirt管理引擎的虚拟机,硬盘格式为RAW,从而在安装管理机时作为OVF导入或者覆盖虚拟磁盘,减少失败风险时间。
安装ovirt-hosted-engine-setup,并回答一些问题,注意高亮部分:
# hosted-engine --deploy
[ INFO ] Stage: Initializing
Continuing will configure this host for serving as hypervisor and create a VM where you have to install oVirt Engine afterwards.
Are you sure you want to continue? (Yes, No)[Yes]: yes
[ INFO ] Generating a temporary VNC password.
[ INFO ] Stage: Environment setup
Configuration files: []
Log file: /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20140508182241.log
Version: otopi-1.2.0 (otopi-1.2.0-1.el6)
[ INFO ] Hardware supports virtualization
[ INFO ] Bridge ovirtmgmt already created
[ INFO ] Stage: Environment packages setup
[ INFO ] Stage: Programs detection
[ INFO ] Stage: Environment setup
[ INFO ] Stage: Environment customization
--== STORAGE CONFIGURATION ==--
During customization use CTRL-D to abort.
Please specify the storage you would like to use (nfs3, nfs4)[nfs3]:
# 此处的存储域只存储hosted-engine的相关文件,不作为主数据域
# 建议挂载gluster的nfs时使用localhost:data形式
Please specify the full shared storage connection path to use (example: host:/path): 192.168.10.101:/gluster-vol1/ovirt_data/hosted_engine
[ INFO ] Installing on first host
Please provide storage domain name. [hosted_storage]:
Local storage datacenter name is an internal name and currently will not be shown in engine's admin UI.Please enter local datacenter name [hosted_datacenter]:
--== SYSTEM CONFIGURATION ==--
--== NETWORK CONFIGURATION ==--
iptables was detected on your computer, do you wish setup to configure it? (Yes, No)[Yes]: no
Please indicate a pingable gateway IP address [192.168.10.1]:
--== VM CONFIGURATION ==--
# 虚拟engine的安装方式
Please specify the device to boot the VM from (cdrom, disk, pxe) [cdrom]:
The following CPU types are supported by this host:
- model_Conroe: Intel Conroe Family
Please specify the CPU type to be used by the VM [model_Conroe]:
Please specify path to installation media you would like to use [None]: /tmp/centos.iso
Please specify the number of virtual CPUs for the VM [Defaults to minimum requirement: 2]:
Please specify the disk size of the VM in GB [Defaults to minimum requirement: 25]:
You may specify a MAC address for the VM or accept a randomly generated default [00:16:3e:59:9b:e2]:
Please specify the memory size of the VM in MB [Defaults to minimum requirement: 4096]: 4096
Please specify the console type you would like to use to connect to the VM (vnc, spice) [vnc]:
--== HOSTED ENGINE CONFIGURATION ==--
Enter the name which will be used to identify this host inside the Administrator Portal [hosted_engine_1]:
Enter 'admin@internal' user password that will be used for accessing the Administrator Portal:
Confirm 'admin@internal' user password:
Please provide the FQDN for the engine you would like to use.
This needs to match the FQDN that you will use for the engine installation within the VM.
Note: This will be the FQDN of the VM you are now going to create,
it should not point to the base host or to any other existing machine.
Engine FQDN: ha.example.com
[WARNING] Failed to resolve ha.example.com using DNS, it can be resolved only locally
Please provide the name of the SMTP server through which we will send notifications [localhost]:
Please provide the TCP port number of the SMTP server [25]:
Please provide the email address from which notifications will be sent [root@localhost]:
Please provide a comma-separated list of email addresses which will get notifications [root@localhost]:
[ INFO ] Stage: Setup validation
--== CONFIGURATION PREVIEW ==--
Engine FQDN : ha.example.com
Bridge name : ovirtmgmt
SSH daemon port : 22
Gateway address : 192.168.10.1
Host name for web application : hosted_engine_1
Host ID : 1
Image size GB : 25
Storage connection : 192.168.10.101:/gluster-vol1/ovirt_data/hosted_data/
Console type : vnc
Memory size MB : 4096
MAC address : 00:16:3e:59:9b:e2
Boot type : cdrom
Number of CPUs : 2
ISO image (for cdrom boot) : /tmp/centos.iso
CPU Type : model_Conroe
Please confirm installation settings (Yes, No)[No]: yes
[ INFO ] Generating answer file '/etc/ovirt-hosted-engine/answers.conf'
[ INFO ] Stage: Transaction setup
[ INFO ] Stage: Misc configuration
[ INFO ] Stage: Package installation
[ INFO ] Stage: Misc configuration
[ INFO ] Configuring libvirt
[ INFO ] Configuring VDSM
[ INFO ] Starting vdsmd
[ INFO ] Waiting for VDSM hardware info
[ INFO ] Waiting for VDSM hardware info
[ INFO ] Waiting for VDSM hardware info
[ INFO ] Waiting for VDSM hardware info
[ INFO ] Creating Storage Domain
[ INFO ] Creating Storage Pool
[ INFO ] Connecting Storage Pool
[ INFO ] Verifying sanlock lockspace initialization
[ INFO ] Initializing sanlock lockspace
[ INFO ] Initializing sanlock metadata
[ INFO ] Creating VM Image
[ INFO ] Disconnecting Storage Pool
[ INFO ] Start monitoring domain
[ INFO ] Configuring VM
[ INFO ] Updating hosted-engine configuration
[ INFO ] Stage: Transaction commit
[ INFO ] Stage: Closing up
The following network ports should be opened:
tcp:5900
tcp:5901
udp:5900
udp:5901
An example of the required configuration for iptables can be found at:
/etc/ovirt-hosted-engine/iptables.example
In order to configure firewalld, copy the files from
/etc/ovirt-hosted-engine/firewalld to /etc/firewalld/services
and execute the following commands:
firewall-cmd -service hosted-console
[ INFO ] Creating VM
You can now connect to the VM with the following command:
/usr/bin/remote-viewer vnc://localhost:5900
Use temporary password "2067OGHU" to connect to vnc console.
Please note that in order to use remote-viewer you need to be able to run graphical applications.
This means that if you are using ssh you have to supply the -Y flag (enables trusted X11 forwarding).
Otherwise you can run the command from a terminal in your preferred desktop environment.
If you cannot run graphical applications you can connect to the graphic console from another host or connect to the console using the following command:
virsh -c qemu+tls://192.168.1.150/system console HostedEngine
If you need to reboot the VM you will need to start it manually using the command:
hosted-engine --vm-start
You can then set a temporary password using the command:
hosted-engine --add-console-password
The VM has been started. Install the OS and shut down or reboot it. To continue please make a selection:
(1) Continue setup - VM installation is complete
(2) Reboot the VM and restart installation
(3) Abort setup
# 需要在另外一个有图形能力的terminal中运行
# "remote-viewer vnc://192.168.10.101:5900"连接虚拟机。
# 完成engine-setup后关闭虚拟机;可以在虚拟机运行状态下执行
# "hosted-engine --add-console-password"更换控制台密码。
# 如果之前选择cdrom进行安装的话,此处可以在gs1上用已经安装好engine的
# 虚拟磁盘进行覆盖,类似
# "mount -t nfs 192.168.10.101:192.168.10.101:/gluster-vol1/ovirt_data/hosted_data/ /mnt; mv engine-disk.raw /mnt/ovirt_data/hosted_data/.../vm_UUID"
(1, 2, 3)[1]: 1
Waiting for VM to shut down...
[ INFO ] Creating VM
You can now connect to the VM with the following command:
/usr/bin/remote-viewer vnc://localhost:5900
Use temporary password "2067OGHU" to connect to vnc console.
Please note that in order to use remote-viewer you need to be able to run graphical applications.
This means that if you are using ssh you have to supply the -Y flag (enables trusted X11 forwarding).
Otherwise you can run the command from a terminal in your preferred desktop environment.
If you cannot run graphical applications you can connect to the graphic console from another host or connect to the console using the following command:
virsh -c qemu+tls://192.168.1.150/system console HostedEngine
If you need to reboot the VM you will need to start it manually using the command:
hosted-engine --vm-start
You can then set a temporary password using the command:
hosted-engine --add-console-password
Please install and setup the engine in the VM.
You may also be interested in installing ovirt-guest-agent-common package in the VM.
To continue make a selection from the options below:
(1) Continue setup - engine installation is complete
(2) Power off and restart the VM
(3) Abort setup
# 此处参考第一次操作,连接虚拟机控制台后进行"engine-setup --offline"以安装engine
(1, 2, 3)[1]: 1
[ INFO ] Engine replied: DB Up!Welcome to Health Status!
[ INFO ] Waiting for the host to become operational in the engine. This may take several minutes...
[ INFO ] Still waiting for VDSM host to become operational...
[ INFO ] The VDSM Host is now operational
Please shutdown the VM allowing the system to launch it as a monitored service.
# 到此,需要连接虚拟机控制台关闭虚拟机
The system will wait until the VM is down.
[ INFO ] Enabling and starting HA services
Hosted Engine successfully set up
[ INFO ] Stage: Clean up
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
此时,运行 "hosted-engine --vm-start" 以启动虚拟管理机。
Note
- 若要重新部署
# vdsClient -s 0 list
# vdsClient -s 0 destroy <ID of the vm you get from the first command>
- 若要添加第二台机器
# yum install ovirt-hosted-engine-setup
# hosted-engine --deploy
然后指定存储路径即可自动判断此为第2+台机器。
3.3 添加节点以及存储域¶
你看到这的话应该已经有了一个数据中心、几个宿主机,也可能有一个虚拟机(engine),还差一个存储虚拟机镜像的地方就可以拥有基本的oVirt平台了。
添加节点(宿主机)¶
对于前文搭建的普通oVirt平台或者hosted engine高可用平台,你可能需要添加更多节点以支持更好的SLA(service level agreement)。 添加节点目前有三种方式:
- 通过oVirt的节点ISO安装系统后加入。
- 直接将现有CentOS或者Fedora转化为节点(可以为当前管理机)。
- 指定使用外部提供者(Foreman)。
在此我们使用第二种方法。

添加存储域¶
存储域有3种,Data(数据域)、ISO(ISO域)、Export(导出域)。
其中,数据域为必需,在创建任何虚拟机之前需要有一个可用的数据域用于存储虚拟磁盘以及快照文件;ISO域中可以存放ISO和VFD格式的系统镜像或者驱动文件,可在多个数据中心间共享;导出域用于导出或导入OVF格式的虚机。
而根据数据域的存储类型,我们有5种(NFS、POSIX兼容、Glusterfs、iSCSI、光纤)可选,在此,选择glusterfs导出的NFS。

Note
确保存储域目录被vdsm.kvm可读,即所有者为36:36,或者vdsm.kvm。 导出域在已加入数据中心后不可共享,如果它意外损坏,请参考 http://blog.lofyer.org/blog/2014/05/11/cloud-6-5-advanced-ovirt/ 手动修复,或者删除dom_md/metadata中的POOL_UUID的值以及_SHA_CKSUM行。 若要使用oVirt的gluster支持,请安装vdsm-gluster包。
3.4 连接虚拟机¶
虚拟机运行后,通过web界面,你可用使用以下几种方式连接虚拟机(可通过控制台选项进行修改):

Spice-Html5¶
首先在服务器端打开spice代理:
# engine-setup --otopi-environment="OVESETUP_CONFIG/websocketProxyConfig=bool:True" # 如果未setup或者要在其他机器setup,可做此步
# yum install -y numpy # 安装numpy以加速转换
# engine-config -s WebSocketProxy="192.168.10.100:6100"
# service ovirt-websocket-proxy restart
# service ovirt-engine restart
连接之前,要信任以下两处https证书:
然后点击控制台按钮即可在浏览器的新标签中打开spice-html5桌面。
浏览器插件¶
对于Redhat系列系统,可安装spice-xpi插件;Windows系统可以安装SpiceX的控件。
本地客户端¶
访问 virt-manager官网 下载virt-viewer客户端,使用它打开下载到本地的console.vv文件。
spice proxy/gateway - squid代理¶
设置squid代理,将所有spice端口代理至3128端口。
# yum install squid
修改/etc/squid/squid.conf。
# 第5行
acl localhost src 192.168.0.150
# 第41行修改为
http_access allow CONNECT Safe_ports
#acl spice_servers dst 192.168.10.0/24
#http_access allow spice_servers
启用squid服务。
# chkconfig squid on
# service squid restart
设置engine的SpiceProxy:
# engine-config -s SpiceProxyDefault="http://FQDN_or_外网IP:3128"
# service ovirt-engine restart
可通过集群设置中设置所有宿主机的Spice代理,或者在虚拟机设置中单一设置某台虚拟机通过代理访问。
RDP插件(仅适用于Windows虚机及IE浏览器)¶
如果虚拟机的操作系统选择为Windows,并且内部启动了远程桌面服务,使用IE浏览器访问用户或者管理员入口时,可以启用RDP控件。
3.5 oVirt使用进阶¶
数据库修改非同步数据¶
如果出现网络错误,很有可能导致数据不同步,从而导致界面上虚拟机状态一直处于异常状态,这点OpenStack也有一样的缺点。连接引擎数据库,修改其中的vm_dynamic, image, vm_static等数据表即可。
engine-config参数配置¶
平台安装完以后,可用通过engine-config命令进行详细参数配置。
# 查看设置说明
# engine-config -l
# 查看当前设置
# engine-config -a
示例:重设管理员密码
# engine-config -s AdminPassword=interactive
Please enter a password:     # 密码
Please reenter password:     # 密码
ovirt-shell与API¶
Restful API(Application Programming Interface)是oVirt的一大特点,用户可以通过它将其与第三方的界面或者应用进行集成。访问 http://192.168.10.100/api?rsdl 以获取其用法。
Note
访问API使用GET、POST、PUT、DELETE方法
获取内容时使用GET; 添加新内容或者执行动作使用POST; 更新内容使用PUT; 删除内容使用DELETE;
详细用法参考 http://www.ovirt.org/Api,SDK示例参考 http://www.ovirt.org/Testing/PythonApi
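作为补充,下面用python-requests给出一个调用上述Restful API列出虚拟机的最小示意(地址与账号沿用本文示例,requests库需自行安装;示例中关闭了证书校验,生产环境应使用平台的ca.crt):
import requests

resp = requests.get(
    "https://192.168.10.100/api/vms",
    auth=("admin@internal", "admin"),   # 与下文curl示例相同的账号
    verify=False,                       # 示例环境为自签名证书,仅作演示
)
resp.raise_for_status()
print(resp.text)                        # 返回XML,可用xml.etree.ElementTree进一步解析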
ovirt-shell则是全部使用Restful API写成的shell,通过它可以完成图形界面所不能提供的功能。
# ovirt-shell -I -u admin@internal -l https://192.168.10.100/api
============================================================================
>>> connected to oVirt manager 3.4.0.0 <<<
============================================================================
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Welcome to oVirt shell
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
[oVirt shell (connected)]#
示例:使用ovirt-shell或者API来连接虚拟机。
- 获取虚拟机列表及其所在宿主机
- ovirtshell
# ovirt-shell -I -u admin@internal -l https://192.168.10.100/api -E "list vms"
id   : 124e8020-c9d7-4e86-81e1-0d4e28ff1cd4
name : aaa

# ovirt-shell -I -u admin@internal -l https://192.168.10.100/api -E "show vm aaa"
id                  : 124e8020-c9d7-4e86-81e1-0d4e28ff1cd4
name                : aaa
...
display-address     : 192.168.10.100
...
display-port        : 5912
display-secure_port : 5913
...
- restapi
# curl -u admin@internal:admin https://192.168.10.100/api/vms | less
<name>aaa</name>
<description></description>
...
<display>
    <type>spice</type>
    <address>192.168.10.100</address>
    <port>5912</port>
    <secure_port>5913</secure_port>
    <monitors>1</monitors>
    <single_qxl_pci>false</single_qxl_pci>
    <allow_override>false</allow_override>
    <smartcard_enabled>false</smartcard_enabled>
</display>
...
- 获取/设置控制台密码
- ovirtshell
# ovirt-shell -I -u admin@internal -l https://192.168.10.100/api -E "action vm aaa ticket"
# [oVirt shell (connected)]# action vm aaa ticket
status-state : complete
ticket-expiry: 7200
ticket-value : MfY9P5kpmNpw
vm-id        : 124e8020-c9d7-4e86-81e1-0d4e28ff1cd4
- restapi
# curl -k -u admin@internal:admin https://192.168.10.100/api/vms/124e8020-c9d7-4e86-81e1-0d4e28ff1cd4/ticket -X POST -H "Content-type: application/xml" -d '<action><ticket><expiry>120</expiry></ticket></action>'
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<action>
    <ticket>
        <value>jRUqhrks6JiT</value>
        <expiry>120</expiry>
    </ticket>
    ...
    <status>
        <state>complete</state>
    </status>
</action>
- 连接虚拟机
除了以上获取的显示端口、宿主机IP,我们需要额外获取一个根证书。
# wget http://192.168.10.100/ca.crt
- ovirt-shell
# ovirt-shell -I -u admin@internal -l https://192.168.10.100/api \ -E "console aaa"
- virt-viewer
# remote-viewer --spice-ca-file=ca.crt "spice://192.168.10.100?port=5912&tls-port=5913&password=jRUqhrks6JiT"
主机hooks¶
主机hooks位于各个宿主机上,用于扩展oVirt的平台功能,比如网络、USB设备、SRIOV等。原理是在某一事件触发时(比如虚拟机启动之前),修改libvirt启动XML文件、环境变量或者主机配置,从而改变qemu的启动参数。
更多hooks内容请参考 vdsm-hooks 。
示例:使用libvirt内部网络
- 准备所需文件。
拷贝 extnet_vnic.py 至/usr/libexec/vdsm/hooks/before_vm_start/,不要忘记添加可执行权限。
- 查看,添加libvirt网络。
# virsh net-list
在 extnet_vnic.py 文件中 newnet = os.environ.get(‘extnet’) 前一行添加如下代码,替换其中的 default 为要使用的libvirt网络, 其他的hooks脚本多数可以这样修改,也可以在engine-config的CustomProperty中指定 :
# 注意此处使用双引号
params = "default"
# 大部分hooks检查的是engine-config中设置的环境变量,简便起见,此处直接在hooks脚本中设置环境变量
os.environ.__setitem__("extnet", params)
- 虚拟机一定要添加网络配置,否则会启动失败,在虚拟机启动时,第一个网络配置文件会被替换为 default 网络。
Note
如果忘记了libvirt密码,可以用以下命令重置。
# saslpasswd2 -a libvirt root
第三方界面¶
oVirt现在自带的gwt界面对浏览器和客户的要求较高,有一些人由于这个原因抛弃它使用第三方的web界面,参考 oVirt Dash 、 ovirt sample-portals 。
使用virt-install安装系统¶
示例:
# virt-install \
--name centos7 \
--ram 2048 \
--disk path=/var/kvm/images/centos7.img,format=qcow2 \
--vcpus 2 \
--os-type linux \
--os-variant rhel7 \
--graphics none \
--console pty,target_type=serial \
--location 'http://mirrors.aliyun.com/centos/7/os/x86_64/' \
--extra-args 'console=ttyS0,115200n8 serial'
虚拟机文件系统扩容¶
oVirt 3.4 磁盘可以进行在线扩容,但是对于磁盘内的文件系统需要单独支持,在此列出常用Linux以及Windows扩容方法。
Linux文件系统扩容(镜像扩容后操作)
在镜像扩容后进行如下操作。
- 重写分区表
# fdisk /dev/sda
> d
> 3
> n
> 3
> w
然后
# partprobe
或者
# reboot
- 在线扩容文件系统
# resize2fs /dev/sda3
Linux文件系统扩容(lvm)
在创建好一个新的分区或者一个新的磁盘后,将新空间(比如10G)添加到PV,VG,扩容LV,然后扩容文件系统。
# vgextend vg_livecd /dev/sdb
# lvextend /dev/vg_livecd/lv_root -L +10G
# resize2fs /dev/vg_livecd/lv_root
Linux文件系统扩容(libguestfs)
具体内容请参考 libguestfs site 。
- 检视磁盘
# virt-filesystems --all --long -h -a hda.img
- 创建待扩容磁盘副本,同时扩容10G(假设原磁盘大小为10G)
对于RAW格式:
# truncate -r hda.img hda-new.img
# truncate -s +10G hda-new.img
对于QCOW2等有压缩格式:
# qemu-img create -f qcow2 -o preallocation=metadata hda-new.img 20G
- 扩展分区尺寸
普通分区扩展,/boot分区扩容200M,其余全部给/分区:
# virt-resize --resize /dev/sda1=+200M --expand /dev/sda2 hda.img hda-new.img
LVM分区扩展,扩容lv_root逻辑卷:
# virt-resize --expand /dev/sda2 --LV-expand /dev/vg_livecd/lv_root hda.qcow2 hda-new.qcow2
FAT/NTFS扩容
XP使用Paragon Partion Manager;Windows 7 在磁盘管理中即可扩容。
P2V/V2P¶
V2V
在此以ESXi迁移至oVirt为例。
- 在oVirt上创建一个NFS导出域
- 安装libguestfs,并创建.netrc文件,文件内容为ESXi的登陆信息:
~/.netrc
machine 192.168.1.135 login root password 1234567
- 开始迁移,确保ESXi的虚拟机已正常关闭:
# virt-v2v -ic esx://192.168.1.135/?no_verify=1 -o rhev -os 192.168.1.111:/export --network mgmtnet myvm
myvm_myvm: 100% [====================================================]D 0h04m48s
virt-v2v: myvm configured with virtio drivers.
- 从导出域导入虚拟机并运行:
导入虚拟机:
运行虚拟机:
P2V
oVirt的P2V方式我所知的有三种,一是使用VMWare的P2V工具转化为VM以后,再通过virt-v2v转化为oVirt的VM,二是使用clonezilla或者ghost制作系统后,再将其安装到oVirt中,三则是使用virt-p2v工具。笔者在此使用virt-p2v工具示例,成文时只在CentOS 6.5以上版本测试,CentOS 7未测试,RHEL 7.1以上有此工具。
- 在服务器端安装所需包:
# yum install -y virt-p2v virt-v2v
- 将/usr/share/virt-v2v/中的ISO文件dd到U盘或者烧录到光盘,然后在要转化的机器上启动。
- 修改服务器端/etc/virt-v2v.conf,修改rhevm的配置,形如:
<virt-v2v>
  <profile name="rhev-sparse">
    <method>rhev</method>
    <storage format='qcow2' allocation='sparse'>
      nfs.example.com:/ovirt-export
    </storage>
    <network type="default">
      <network type="network" name="ovirtmgmt"/>
    </network>
  </profile>
</virt-v2v>
- 开始转换(我是使用libvirt的profile示例)
- 转换完成后关闭计算机,并修改虚拟机的配置以完全适应OS。
UI 插件¶
详细内容请参考 oVirt官网关于UI插件的介绍 以及附录部分内容,在此我仅使用ShellInABox举例,你可以考虑将上面的libguestfs扩容加进来,更多UI Plugin请 git clone git://gerrit.ovirt.org/samples-uiplugins.git 。
ShellInABox oVirt UI plugin
- 在宿主机上安装ShellInABox。
# yum install shellinabox
# chkconfig shellinaboxd on
修改shellinabox配置 OPTS :
/etc/sysconfig/shellinaboxd
OPTS="--disable-ssl --service /:SSH"
- 拷贝uiplugin文件并启动服务。
# git clone git://gerrit.ovirt.org/samples-uiplugins.git
# cp -r samples-uiplugins/shellinabox-plugin/* /usr/share/ovirt-engine/ui-plugins/
# service shellinaboxd start
# service ovirt-engine restart
Note
shellinabox插件链接问题
由于3.3到3.4之后链接的变化,shellinabox.json中的 /webadmin/webadmin/plugin/ShellBoxPlugin/start.html 需要替换为 plugin/ShellBoxPlugin/start.html 。
Note
shellinabox的root登陆问题
# echo -e "pts/0\npts/1\npts/2" >> /etc/securetty
使用SysPrep/Cloud-Init重置虚拟机信息¶
初始化Redhat系列Linux
# touch /.unconfigured
# rm -i /etc/ssh/ssh_host_*
# echo "HOSTNAME=localhost.localdomain" >> /etc/sysconfig/network
# rm -i /etc/udev/rules.d/70-persistent*
删除 /etc/sysconfig/network-scripts/ifcfg-* 文件中的 HWADDR= 字段,删除 /var/log/* 、 /root/.log* ,然后关机即可。
修改密码
# virt-sysprep --root-password password:123456 -a os.img
初始化Windows 7
- 使用http://www.drbl-winroll.org工具批量修改。
- 使用位于 C:\Windows\system32\sysprep 目录下的工具。
与其他平台集成、与认证服务器集成¶
oVirt平台目前可以使用Foreman,OpenStack Network,OpenStack Image的部分功能,具体实施请参阅附录一内容。
与认证服务器集成时很有可能遇到各种问题,比如与AD集成时不能使用Administrator用户进行engine-manage-domains,与IPA集成时需要修改minss之类的参数,与LDAP集成时需要Kerberos或者使用3.5版本中的aaa认证插件。
后端(libvirt/qemu/kernel)优化¶
如果你觉得现有平台的性能达不到预期,或者有其他的需求,可以从以下几方面进行调节或优化。
- qemu : 我写了 一系列qemu的脚本 ,你可以调节里面的参数直接启动虚拟机进行调试或者优化。
- libvirt : libvirt 的目的是统一各种虚拟化后端的调用方式(kvm/xen/lxc/OpenVZ/VirtualBox/VMWare/Hyper-V等等),主要的一个特性是用统一的描述文件来定义虚拟机配置(xml文件),在Linux下你可以使用 Virt Manager 进行调试,相关文档参考 libvirt ref 。
- kernel : 基本上大部分的内核相关配置都可以通过 /sys 或者 /proc 进行调节,而针对内核所谓的“优化”,在非大规模部署的情况下,其优势很难体现出来,还有一方面,目前的KVM效率,CPU、内存管理、网络等方面都比较优秀,在I/O方面还有部分不足,可以在 VIRTIO 上进行相关的优化,还有开启 HugePage 等操作。
第四章 数据抓取与机器学习算法¶
在开始这一章之前,你可能需要补习一下数学知识;还有熟悉下常见工具(语言),不必多年开发经验,会处理常见数据结构、能格式化文件即可。
建议先通读一下 Scrapy 中文文档 ,这样你会省去好多Google的时间;在 知乎 上有许多关于 大数据 、 数据挖掘 的讨论,你可以去看看了解一些业内的动态。
另外,可以使用 Nutch 来爬取,并用 Solr 来构建一个简单的搜索引擎,它们可以跟下一章节的Hadoop集成。
还有一个比较重要的点– Model Thinking ,你需要的不只是建模的知识,还要有建模的思想。数据和算法并不是最重要的,重要的是你如何利用这些数据通过你设计的模型来输出对你有用的结果。
不要以编程开始你的机器学习之旅,这样容易使思维受限于语言,从模型和结果思考去达到你的目的,编程只是手段之一。
4.1 数据收集¶
数据收集是学习数据分析的开始。我们每一天都在产生、接触各种各样的数据。
为了省去一些学习的麻烦,我找了一些 “大”数据 ,有些上百TB的数据对非行业内的人来说可能毫无意义,但对学习者来说还是比较实用的,先从这些数据开始吧。
4.2 爬虫示例¶
58同城¶
我简单写了一个 收集58同城中上海出租房信息的爬虫 ,包括的条目有: 描述 、 位置 、 价格 、 房间数 、 URL 。
由于这些信息都可以在地图上表示出来,那我除了画统计图以外还会画它们在地图上的表示。
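下面给出一个用Scrapy实现这类爬虫的最小骨架(仅为示意:入口URL与CSS选择器均为假设的占位,需要按58同城实际页面结构调整):
import scrapy

class RentSpider(scrapy.Spider):
    name = "sh58_rent"
    start_urls = ["http://sh.58.com/zufang/"]   # 假设的入口URL

    def parse(self, response):
        # 以下选择器均为占位示例,需按实际页面结构填写
        for house in response.css("div.listitem"):
            yield {
                "desc": house.css("a.title::text").extract_first(),
                "location": house.css("span.addr::text").extract_first(),
                "price": house.css("b.price::text").extract_first(),
                "rooms": house.css("span.room::text").extract_first(),
                "url": response.urljoin(house.css("a.title::attr(href)").extract_first()),
            }
        # 翻页:跟进“下一页”链接(同样是占位选择器)
        next_page = response.css("a.next::attr(href)").extract_first()
        if next_page:
            yield scrapy.Request(response.urljoin(next_page), callback=self.parse)
用 scrapy runspider 运行并通过 -o 参数即可把抓到的条目导出为JSON或CSV文件。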
4.3 numpy 快查¶
import numpy as np
a = np.arange(1,5)
data_type = [('name','S10'), ('height', 'float'), ('age', int)]
values = [('Arthur', 1.8, 41), ('Lancelot', 1.9, 38),
('Galahad', 1.7, 38)]
b = np.array(values, dtype=data_type)
c = np.arange(6,10)
# 符号
np.sign(a)
# 数组最大值
a.max()
# 数组最小值
a.min()
# 区间峰峰值
a.ptp()
# 乘积
a.prod()
# 累积
a.cumprod()
# 平均值
a.mean()
# 中值
np.median(a)
# 差分
np.diff(a)
# 方差
np.var(a)
# 元素条件查找,返回index的array
np.where(a>2)
# 返回第2,3,4个元素的array
np.take(a, np.array([1,2,3]))
# 排序
np.msort(a)
np.sort(b, kind='mergesort', order='height')
# 均分,元素个数为奇数的array不可均分为两份
np.split(a,2)
# 创建单位矩阵
np.eye(3)
# 最小二乘,参数为[x,y,degree],degree为多项式的最高次幂,返回值为所有次幂的系数
np.polyfit(a,c,1)
4.4 监督学习常用算法及Python实现¶
信息分类基础¶
信息的不稳定性为熵(entropy),而信息增益为有无样本特征对分类问题影响的大小。比如,抛硬币正反两面各有50%概率,此时不稳定性最大,熵为1;太阳明天照常升起,则是必然,此事不稳定性最小,熵为0。
假设事件X的某一取值x_i发生的概率为p(x_i),则该取值的信息(自信息)定义为:
l(x_i) = -log2 p(x_i)
整个信息的熵即为所有取值自信息的期望:
H = -∑ p(x_i) * log2 p(x_i)
如何找到最好的分类特征:
def chooseBestFeatureToSplit(dataSet):
numFeatures = len(dataSet[0]) - 1 #the last column is used for the labels
baseEntropy = calcShannonEnt(dataSet)
bestInfoGain = 0.0; bestFeature = -1
for i in range(numFeatures): #iterate over all the features
featList = [example[i] for example in dataSet]#create a list of all the examples of this feature
uniqueVals = set(featList) #get a set of unique values
newEntropy = 0.0
for value in uniqueVals:
subDataSet = splitDataSet(dataSet, i, value)
prob = len(subDataSet)/float(len(dataSet))
newEntropy += prob * calcShannonEnt(subDataSet)
infoGain = baseEntropy - newEntropy #calculate the info gain; ie reduction in entropy
if (infoGain > bestInfoGain): #compare this to the best gain so far
bestInfoGain = infoGain #if better than current best, set to best
bestFeature = i
return bestFeature #returns an integer
其中,dataSet为所有特征向量,calcShannonEnt()计算数据集的香农熵,splitDataSet()取出第i个特征等于value的样本并去掉该特征列;infoGain即为信息增益,chooseBestFeatureToSplit返回最好的特征的索引值。
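上面代码中用到的 calcShannonEnt() 与 splitDataSet() 并未列出,下面给出一组与之配套的参考实现(示意):
from math import log

def calcShannonEnt(dataSet):
    # 统计每个类别出现的次数,再按香农熵公式累加
    numEntries = len(dataSet)
    labelCounts = {}
    for featVec in dataSet:
        currentLabel = featVec[-1]
        labelCounts[currentLabel] = labelCounts.get(currentLabel, 0) + 1
    shannonEnt = 0.0
    for key in labelCounts:
        prob = float(labelCounts[key]) / numEntries
        shannonEnt -= prob * log(prob, 2)
    return shannonEnt

def splitDataSet(dataSet, axis, value):
    # 取出第axis个特征等于value的样本,并去掉该特征列
    retDataSet = []
    for featVec in dataSet:
        if featVec[axis] == value:
            reducedFeatVec = featVec[:axis]
            reducedFeatVec.extend(featVec[axis+1:])
            retDataSet.append(reducedFeatVec)
    return retDataSet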
K邻近算法¶
kNN的算法模型如下:
对于未知类别属性的数据且集中的每个点依次执行以下操作:
- 计算已知类别数据集中的点与当前点之间的距离
- 按照距离递增依次排序
- 选取与当前点距离最小的k个点
- 确定前k个点所在类别的出现频率
- 返回前k个点出现频率最高的类别作为当前点的预测分类
代码参考如下:
import operator
from numpy import tile

def classify0(inX, dataSet, labels, k):
dataSetSize = dataSet.shape[0]
diffMat = tile(inX, (dataSetSize,1)) - dataSet
sqDiffMat = diffMat**2
sqDistances = sqDiffMat.sum(axis=1)
distances = sqDistances**0.5
sortedDistIndicies = distances.argsort()
classCount={}
for i in range(k):
voteIlabel = labels[sortedDistIndicies[i]]
classCount[voteIlabel] = classCount.get(voteIlabel,0) + 1
sortedClassCount = sorted(classCount.iteritems(), key=operator.itemgetter(1), reverse=True)
return sortedClassCount[0][0]
其中,inX为输入向量,dataSet为训练数据集,labels为数据集对应的分类标签,k为选取的近邻个数,可调。距离计算公式为d0 = ((x-x0)**2 + (y-y0)**2)**0.5。
此种算法的优点为精度高、对异常值不敏感、但缺点也比较明显,即数据量大时开支相对较大,适用于数值-标称型数据。
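作为补充,下面是一个极小的调用示例(数据为虚构,仅用于演示):
from numpy import array

group = array([[1.0, 1.1], [1.0, 1.0], [0.0, 0.0], [0.0, 0.1]])
labels = ['A', 'A', 'B', 'B']
# 距离[0.2, 0.1]最近的3个点中'B'出现次数最多,因此预测为'B'
print classify0([0.2, 0.1], group, labels, 3)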
决策树¶
决策树即列出一系列选择,根据训练集中的大量形似(A、B、C)以及结果D的向量来预测新输入(A’、B’、C’)的结果D’。
首先创建一个决策树:
def createTree(dataSet,labels):
classList = [example[-1] for example in dataSet]
if classList.count(classList[0]) == len(classList):
return classList[0] #stop splitting when all of the classes are equal
if len(dataSet[0]) == 1: #stop splitting when there are no more features in dataSet
return majorityCnt(classList)
bestFeat = chooseBestFeatureToSplit(dataSet)
bestFeatLabel = labels[bestFeat]
myTree = {bestFeatLabel:{}}
del(labels[bestFeat])
featValues = [example[bestFeat] for example in dataSet]
uniqueVals = set(featValues)
for value in uniqueVals:
subLabels = labels[:] #copy all of labels, so trees don't mess up existing labels
myTree[bestFeatLabel][value] = createTree(splitDataSet(dataSet, bestFeat, value),subLabels)
return myTree
After the most informative feature bestFeat is found, the data set is split on each of its values to build a subtree for that branch; bestFeat is then removed from the remaining labels and the process recurses, until every branch either runs out of features or contains only one class.
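createTree also calls majorityCnt, which is not listed above; a minimal sketch, assuming classList is the list of class labels remaining in a branch:
import operator

def majorityCnt(classList):
    classCount = {}
    for vote in classList:                 # count the votes for each class label
        classCount[vote] = classCount.get(vote, 0) + 1
    # return the label with the most votes
    sortedClassCount = sorted(classCount.iteritems(), key=operator.itemgetter(1), reverse=True)
    return sortedClassCount[0][0]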
原始数据比如:
no-surfacing flippers fish
1 yes yes yes
2 yes yes yes
3 yes no no
4 no yes no
5 no yes no
会生成如下决策树:
no-surfacing?
/ \
no/ \yes
fish(no) flippers?
/ \
no/ \yes
fish(no) fish(yes)
表示成JSON格式,即python字典:
{'no surfacing': {0: 'no', 1: {'flippers': {0: 'no', 1: 'yes'}}}}
构建决策树的方法比较多,也可使用C4.5和CART算法。
接下来使用决策树进行分类:
def classify(inputTree,featLabels,testVec):
firstStr = inputTree.keys()[0]
secondDict = inputTree[firstStr]
featIndex = featLabels.index(firstStr)
key = testVec[featIndex]
valueOfFeat = secondDict[key]
if isinstance(valueOfFeat, dict):
classLabel = classify(valueOfFeat, featLabels, testVec)
else: classLabel = valueOfFeat
return classLabel
Here featLabels is the list of feature names (the decision nodes being tested) and testVec holds their values, e.g. classify(myTree, ['no surfacing', 'flippers'], [1, 1]), which for the tree above returns 'yes'.
使用pickle对决策树进行序列化存储:
def storeTree(inputTree,filename):
import pickle
fw = open(filename,'w')
pickle.dump(inputTree,fw)
fw.close()
Here dump takes an optional protocol argument, 0 (ASCII, the default) or a binary protocol; pickle.load reads the tree back in; dumps and loads can likewise be used to work directly on strings.
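The matching read-back helper, as a minimal sketch (the file name below is arbitrary):
def grabTree(filename):
    import pickle
    fr = open(filename)
    return pickle.load(fr)

storeTree(myTree, 'classifierStorage.txt') followed by grabTree('classifierStorage.txt') returns the same dictionary.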
此种算法计算复杂度不高,对中间值缺失不敏感,但可能会产生过拟合的问题。
朴素贝叶斯¶
The Bayesian model is built on conditional probability, and naive Bayes additionally assumes the features are statistically independent; the idea can be put roughly like this:
总共7个石子在A、B两个桶中,A桶中有2黑2白,B桶中有2黑1白。已知条件为石子来自B桶,那么它是白色石子的概率可表示为:
P(white|B)=P(B|white)P(white)/P(B)
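With the numbers above this can be checked: P(B|white) = 1/3 (one of the three white stones is in bucket B), P(white) = 3/7 and P(B) = 3/7, so P(white|B) = (1/3 * 3/7) / (3/7) = 1/3, which agrees with counting directly inside bucket B (1 white stone out of 3).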
接下来,定义两个事件A、B,P(A|B)与P(B|A)相互转化的过程即为:
P(B|A)=P(A|B)P(B)/P(A)
而朴素贝叶斯可以这样描述:
Let x = {a1, a2, ..., am} be the item to classify, where each a is a feature attribute of x, and let the set of classes be C = {y1, y2, ..., yn}. If P(yk|x) = max(P(y1|x), P(y2|x), ..., P(yn|x)), then x belongs to class yk.
整个算法核心即是等式P(yi|x)=P(x|yi)P(yi)/P(x)。
首先构建一个分类训练函数(二元分类):
def trainNB0(trainMatrix,trainCategory):
numTrainDocs = len(trainMatrix)
numWords = len(trainMatrix[0])
pBad = sum(trainCategory)/float(numTrainDocs)
p0Num = ones(numWords); p1Num = ones(numWords) #change to ones()
p0Denom = 2.0; p1Denom = 2.0 #change to 2.0
for i in range(numTrainDocs):
if trainCategory[i] == 1:
p1Num += trainMatrix[i]
p1Denom += sum(trainMatrix[i])
else:
p0Num += trainMatrix[i]
p0Denom += sum(trainMatrix[i])
p1Vect = log(p1Num/p1Denom) #change to log()
p0Vect = log(p0Num/p0Denom) #change to log()
return p0Vect,p1Vect,pBad
Here trainMatrix holds the boolean (bag-of-words) vectors of the whole training set. For example, take two books A and B, where A contains the two words x and y, and B contains the two words x and z; A is a good book (labelled 0) and B is a bad one (labelled 1). Sorting all the words gives the vocabulary ['x','y','z'], so A's vector is [1,1,0] and B's is [1,0,1]; the trainMatrix passed to this function is therefore [[1,1,0],[1,0,1]] and trainCategory is [0,1].
函数返回的为概率集的向量。
分类函数:
def classifyNB(vec2Classify, p0Vec, p1Vec, pClass1):
p1 = sum(vec2Classify * p1Vec) + log(pClass1) #element-wise mult
p0 = sum(vec2Classify * p0Vec) + log(1.0 - pClass1)
if p1 > p0:
return 1
else:
return 0
Here vec2Classify is the vector to be classified, in the same form as a row of trainMatrix, and the remaining three arguments are the values returned by trainNB0. p1 and p0 are the (log-domain) scores of the two classes; comparing them decides the classification.
测试用例:
def testingNB():
listOPosts,listClasses = loadDataSet()
myVocabList = createVocabList(listOPosts)
trainMat=[]
for postinDoc in listOPosts:
trainMat.append(setOfWords2Vec(myVocabList, postinDoc))
p0V,p1V,pAb = trainNB0(array(trainMat),array(listClasses))
testEntry = ['love', 'my', 'dalmation']
thisDoc = array(setOfWords2Vec(myVocabList, testEntry))
print testEntry,'classified as: ',classifyNB(thisDoc,p0V,p1V,pAb)
testEntry = ['stupid', 'garbage']
thisDoc = array(setOfWords2Vec(myVocabList, testEntry))
print testEntry,'classified as: ',classifyNB(thisDoc,p0V,p1V,pAb)
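testingNB relies on loadDataSet, createVocabList and setOfWords2Vec, which are not listed here; minimal sketches of the latter two follow (loadDataSet is assumed to return a list of tokenised posts together with their 0/1 labels):
def createVocabList(dataSet):
    vocabSet = set([])
    for document in dataSet:            # union of all words that appear anywhere
        vocabSet = vocabSet | set(document)
    return list(vocabSet)

def setOfWords2Vec(vocabList, inputSet):
    returnVec = [0]*len(vocabList)      # one slot per vocabulary word
    for word in inputSet:
        if word in vocabList:
            returnVec[vocabList.index(word)] = 1
    return returnVec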
Overall, the naive Bayes classifier remains effective even when there is little data, but it is fairly sensitive to how the input data is prepared.
Logistic回归¶
在统计学中,线性回归是利用称为线性回归方程的最小二乘函数对一个或多个自变量和因变量之间关系进行建模的一种回归分析。这种函数是一个或多个称为回归系数的模型参数的线性组合。只有一个自变量的情况称为简单回归,大于一个自变量情况的叫做多元回归。( 维基百科 )
先介绍两个重要的数学概念。
最小二乘法则
最小二乘法(又称最小平方法)是一种数学优化技术。它通过最小化误差的平方和寻找数据的最佳函数匹配。
利用最小二乘法可以简便地求得未知的数据,并使得这些求得的数据与实际数据之间误差的平方和为最小。
示例1
有四个数据点(1,6)、(2,5)、(3,7)、(4,10),我们希望找到一条直线y=a+bx与这四个点最匹配。
Least squares makes the sum of the squared errors as small as possible, i.e. we look for the minimum of
S(a,b) = [6-(a+b)]^2 + [5-(a+2b)]^2 + [7-(a+3b)]^2 + [10-(a+4b)]^2
Taking the partial derivatives of S(a,b) with respect to a and b and setting them to zero gives
8a + 20b = 56
20a + 60b = 154
which solves to a = 3.5 and b = 1.4.
所以直线y=3.5+1.4x是最佳的。
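The same result can be checked with the np.polyfit call from the numpy quick reference above:
import numpy as np
np.polyfit(np.array([1, 2, 3, 4]), np.array([6, 5, 7, 10]), 1)
# -> array([ 1.4,  3.5]), i.e. y = 1.4x + 3.5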
In function form the objective is
S = sum_i (y_i - f(t_i))^2 -> min
and in Euclidean-norm form
min ||y - f(t)||_2
Linear function model
A typical class of model functions is the linear one. The simplest linear form is
y = b0 + b1*t
Written in matrix form this is the problem
min_b ||X*b - y||^2
The parameter solution of this model can be given directly:
b1 = sum_i (t_i - mean_t)(y_i - mean_y) / sum_i (t_i - mean_t)^2,   b0 = mean_y - b1*mean_t
where mean_t = (1/n) * sum_i t_i is the arithmetic mean of the t values (and mean_y that of the y values). It can also be solved in the form
b1 = (n*sum t_i*y_i - sum t_i * sum y_i) / (n*sum t_i^2 - (sum t_i)^2)
示例2
随机选定10艘战舰,并分析它们的长度与宽度,寻找它们长度与宽度之间的关系。由下面的描点图可以直观地看出,一艘战舰的长度(t)与宽度(y)基本呈线性关系。散点图如下:

以下图表列出了各战舰的数据,随后步骤是采用最小二乘法确定两变量间的线性关系。

Following the example given above, the sums of t_i, y_i, t_i*y_i and t_i^2 are computed from the table, and the corresponding means are obtained.
b1 is then determined from
b1 = (n*sum t_i*y_i - sum t_i * sum y_i) / (n*sum t_i^2 - (sum t_i)^2)
It turns out that for every 1 m change in a warship's length, the corresponding width changes by roughly 16 cm. The constant term b0 is then obtained from
b0 = mean_y - b1*mean_t
可以看出点的拟合非常好,长度和宽度的相关性大约为96.03%。 利用Matlab得到拟合直线:

Sigmoid函数
The Sigmoid function behaves like a smooth approximation of the unit step function; its formula is
sigma(z) = 1 / (1 + e^(-z))
We write the input as z, which is obtained from
z = w0*x0 + w1*x1 + ... + wn*xn
or, in vector notation,
z = w^T * x
where the vector x is the classifier's input data and the vector w holds the best coefficients we are trying to find.
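gradAscent below calls a sigmoid helper, which is not listed; a minimal numpy sketch:
from numpy import exp

def sigmoid(inX):
    # works elementwise on numpy arrays and matrices as well as on scalars
    return 1.0 / (1 + exp(-inX))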
基于优化方法确定回归系数
梯度上升/下降法
The idea of gradient ascent/descent is that to find the maximum of a function, the best direction to search is along the function's gradient; the gradient of f(x,y) is written as
grad f(x,y) = ( ∂f/∂x , ∂f/∂y )
可以这样理解此算法:
从前有一座山,一个懒人要爬山,他从山脚下的任意位置向山顶出发,并且知道等高线图的每个环上都有一个宿营点,他希望在这些宿营点之间修建一条笔直的路,并且路到两旁的宿营点的垂直距离差的平方和尽可能小。每到一个等高线圈,他都会根据他在上一个等高线的距离的变化量来调节他的在等高线上的位置,从而使公路满足要求。
返回回归系数:
from numpy import mat, shape, ones

def gradAscent(dataMatIn, classLabels):
dataMatrix = mat(dataMatIn) #convert to NumPy matrix
labelMat = mat(classLabels).transpose() #convert to NumPy matrix
m,n = shape(dataMatrix)
alpha = 0.001
maxCycles = 500
weights = ones((n,1))
for k in range(maxCycles): #heavy on matrix operations
h = sigmoid(dataMatrix*weights) #matrix mult
error = (labelMat - h) #vector subtraction
weights = weights + alpha * dataMatrix.transpose()* error #matrix mult
return weights
Here the error vector multiplied by the transpose of the data matrix gives the gradient, so each iteration moves the weights one step of size alpha along that direction.
待修改。
SVM¶
SVM (Support Vector Machines). Fully understanding the theory behind support vector machines requires a fair amount of mathematics.
AdaBoost¶
4.5 无监督学习¶
4.6 数据可视化¶
第五章 数据处理平台¶
5.1 Hadoop简介¶
Compared with the currently more fashionable Storm and Spark, Hadoop is more valuable from a beginner's point of view. Hadoop is more than MapReduce: it also includes YARN for resource scheduling and HDFS, a file system designed specifically for MR workloads, so I think it is more representative at the basic learning stage. As for Storm and Spark, I am not yet clear about their respective strengths and weaknesses; I only know that the former suits continuous streaming input, while the latter suits models with complex, multi-stage MR pipelines.
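To give a concrete feel for the MapReduce model mentioned above, here is a minimal Hadoop Streaming style word-count sketch; the two scripts are ordinary Python programs reading stdin and writing stdout, and the file names mapper.py and reducer.py are arbitrary:
# mapper.py
import sys
for line in sys.stdin:
    for word in line.strip().split():
        print '%s\t%s' % (word, 1)          # emit (word, 1) pairs

# reducer.py
import sys
current_word, current_count = None, 0
for line in sys.stdin:
    word, count = line.strip().split('\t', 1)
    if word == current_word:
        current_count += int(count)
    else:
        if current_word is not None:
            print '%s\t%s' % (current_word, current_count)
        current_word, current_count = word, int(count)
if current_word is not None:
    print '%s\t%s' % (current_word, current_count)

Hadoop Streaming sorts the mapper output by key before it reaches the reducer, which is what makes the simple current_word loop above work.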
5.2 模块部署(单机/集群)¶
现在部署Hadoop的方式比过去更加容易,就我所知,你可以使用 Cloudera Manager 或者 Puppet 去完成企业级的部署;如果你需要概念证明类的工作,可以直接使用 Hortonworks 的虚拟机镜像 或者 Cloudera的虚拟机镜像 ,或者 MapR ,在接下来的章节中我会使用rpm包进行安装,而不是按照 官方文档 去部署。
Hue:Hadoop User Experience ,即web UI
单节点部署¶
集群部署¶
5.3 本地数据处理¶
5.4 实时数据处理¶
5.5 与Storm/Spark配合使用¶
附录一 OpenStack概念、部署、与高级网络应用¶
首先在这里我会使用RDO快速部署一个具有基本功能的OpenStack环境,如果你想要更完整的部署(比如Heat、Trove组件),可以参考 官方文档 。
也可以使用Mirantis来进行快速部署,参考 Mirantis ;StackOps部署,参考 StackOps 。
如果不想安装任何东西,只是在线使用的话,可以访问 http://trystack.org/ 。
要学习更多关于OpenStack的内容,可以参考 陈沙克的日志 。
API使用请参考http://developer.openstack.org/api-ref.html 以及 http://docs.openstack.org/developer/openstack-projects.html 。
关于在ubuntu/debian上部署OpenStack请参考 Server-World 。
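To get a feel for the APIs referenced above, the following is a minimal sketch (not taken from the official docs) that requests a Keystone v2.0 token over plain HTTP with the requests library; the address and the admin/admin credentials are the ones used in the deployment below and are assumptions here:
import json
import requests

auth_url = 'http://192.168.77.50:5000/v2.0/tokens'
body = {'auth': {'tenantName': 'admin',
                 'passwordCredentials': {'username': 'admin', 'password': 'admin'}}}
resp = requests.post(auth_url, data=json.dumps(body),
                     headers={'Content-Type': 'application/json'})
token = resp.json()['access']['token']['id']
print token   # pass this as X-Auth-Token when calling the other service APIs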
OpenStack 部署¶
在开始之前需要将这些关键组件关系理清。
- nova:提供compute服务,即保证虚拟机运行的必须服务之一,虚拟机运行于所有提供compute服务的主机之上。
- neutron:提供network服务,同时提供Open vSwitch、L3、DHCP代理服务。
- cinder:提供块存储服务,可配合swift使用。
- swift:提供对象存储服务,目录与文件皆视为对象,外部可以方便取用。
- glance:提供镜像管理服务。
- ceilometer:主要功能是监测、收集用户对资源的使用情况,以方便计费、报警等。
- heat:orchestration模块驱动服务,即从模板根据需求更改配置创建新虚拟机。
- keystone:身份认证服务。
- trove:数据库服务。
- sahara:provisions data-processing (Hadoop/Spark) clusters on demand.
- ironic:bare-metal provisioning service, i.e. deploying images onto physical machines rather than virtual machines.
Mirantis Fuel 部署¶
现在国内使用Mirantis部署的人已经越来越多了,而我们同样使用它来一次快速部署。精通了Mirantis就掌握了大规模集群的部署方式,不只是OpenStack。
RDO 快速部署¶
使用 RDO 来部署OpenStack。
Note
安装说明
在一台安装有RedHat系列(CentOS)系统上部署,将selinux置为permissive;禁用NetworkManager,启用network服务,详细配置请参考以前章节。
如果安装失败,请查看你的CentOS或者Fedora是否为最新发行版。笔者在写时使用的是CentOS6,目前为CentOS7。
# yum install -y http://rdo.fedorapeople.org/rdo-release.rpm
# yum install -y epel-release
# yum update -y
# yum install -y openstack-packstack
# packstack --allinone
请耐心等待,以上过程预计花费一到两小时。有关此次部署的详细信息,在安装完成后可以看到:
**** Installation completed successfully ******
Additional information:
* A new answerfile was created in: /root/packstack-answers-20140730-110621.txt
* Time synchronization installation was skipped. Please note that unsynchronized time on server instances might be problem for some OpenStack components.
* File /root/keystonerc_admin has been created on OpenStack client host 192.168.2.160. To use the command line tools you need to source the file.
* To access the OpenStack Dashboard browse to http://192.168.2.160/dashboard .
Please, find your login credentials stored in the keystonerc_admin in your home directory.
* To use Nagios, browse to http://192.168.2.160/nagios username: nagiosadmin, password: ea65dc070f034776
* Because of the kernel update the host 192.168.2.160 requires reboot.
* The installation log file is available at: /var/tmp/packstack/20140730-110621-upxlZJ/openstack-setup.log
* The generated manifests are available at: /var/tmp/packstack/20140730-110621-upxlZJ/manifests
可修改 /root/packstack-answers-20140730-110621.txt 内容以 增加计算节点 ;同理可增加网络节点(待实验)。
Note
If you later change the passwords of admin/demo/services, do not forget to update them in this answer file as well, then re-run: # packstack --answer-file=/root/packstack-answers-20140730-110621.txt
添加计算节点¶
使用Neutron网络¶
分步详细部署¶
CentOS 7以及Ubuntu等发行版部署OpenStack的过程基本一致,在此以CentOS 7示例。
机器准备¶
控制节点:controller0(192.168.77.50)
计算节点:controller0(192.168.77.50),compute0(192.168.77.51)
网络节点:neutron0(192.168.77.30)
存储节点:cinder0(192.168.77.60),swift0(192.168.77.70),swift-stor0(192.168.77.71),swift-stor1(192.168.77.72),swift-stor2(192.168.77.73)
Heat节点:heat0(192.168.77.80)
每台机器首先将selinux设置为permissive或者disable、打开ssh服务、禁用防火墙(可安装iptables-services服务,关闭firewalld,iptables -F后再service iptables save)、关闭NetworkManager服务、打开network服务并配置IP。
Note
About the rc files
Environment files such as os_bootstrap and admin_keystone are used below: os_bootstrap carries the bootstrap token used while creating the basic keystone resources, and admin_keystone holds the credentials of the admin user created there.
Note
认证服务
OpenStack中的keystone服务负责绝大部分authentication的工作,其中属于service组的用户(nova、glance)也是基于keystone认证的,所以不要认为service中的服务仅仅是一个服务而已。
# cat os_bootstrap
export SERVICE_TOKEN=admin
export SERVICE_ENDPOINT=http://192.168.77.50:35357/v2.0/
# cat admin_keystone
export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://localhost:35357/v2.0/
export PS1='[\u@\h \W(keystone)]\$ '
初始化控制节点¶
在控制节点controller0,配置源、数据库、RabbitMQ、Memcached。
[root@controller0 ~]# yum -y install http://repos.fedorapeople.org/repos/openstack/openstack-kilo/rdo-release-kilo.rpm epel-release
[root@controller0 ~]# yum install -y galera mariadb-galera-server rabbitmq-server memcached
[root@controller0 ~]# systemctl start mariadb
[root@controller0 ~]# systemctl enable mariadb
[root@controller0 ~]# systemctl start rabbitmq-server
[root@controller0 ~]# systemctl enable rabbitmq-server
[root@controller0 ~]# systemctl start memcached
[root@controller0 ~]# systemctl enable memcached
# 初始化mysql
[root@controller0 ~]# mysql_secure_installation
/usr/bin/mysql_secure_installation: line 379: find_mysql_client: command not found
NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB
SERVERS IN PRODUCTION USE! PLEASE READ EACH STEP CAREFULLY!
In order to log into MariaDB to secure it, we'll need the current
password for the root user. If you've just installed MariaDB, and
you haven't set the root password yet, the password will be blank,
so you should just press enter here.
Enter current password for root (enter for none):
OK, successfully used password, moving on...
Setting the root password ensures that nobody can log into the MariaDB
root user without the proper authorisation.
# 设置mysql的root密码
Set root password? [Y/n] y
New password:
Re-enter new password:
Password updated successfully!
Reloading privilege tables..
... Success!
By default, a MariaDB installation has an anonymous user, allowing anyone
to log into MariaDB without having to have a user account created for
them. This is intended only for testing, and to make the installation
go a bit smoother. You should remove them before moving into a
production environment.
# remove anonymous users
Remove anonymous users? [Y/n] y
... Success!
Normally, root should only be allowed to connect from 'localhost'. This
ensures that someone cannot guess at the root password from the network.
# disallow root login remotely
Disallow root login remotely? [Y/n] y
... Success!
By default, MariaDB comes with a database named 'test' that anyone can
access. This is also intended only for testing, and should be removed
before moving into a production environment.
# remove test database
Remove test database and access to it? [Y/n] y
- Dropping test database...
... Success!
- Removing privileges on test database...
... Success!
Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.
# reload privilege tables
Reload privilege tables now? [Y/n] y
... Success!
Cleaning up...
All done! If you've completed all of the above steps, your MariaDB
installation should now be secure.
Thanks for using MariaDB!
# 重置rabbitmq密码
[root@controller0 ~]# rabbitmqctl change_password guest password
配置KeyStone¶
初始化Keystone¶
# 安装keystone
[root@controller0 ~]# yum install -y openstack-keystone openstack-utils
# 添加数据库
[root@controller0 ~]# mysql -u root -p
Enter password:
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 10
Server version: 5.5.40-MariaDB-wsrep MariaDB Server, wsrep_25.11.r4026
Copyright (c) 2000, 2014, Oracle, Monty Program Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]> create database keystone;
Query OK, 1 row affected (0.00 sec)
MariaDB [(none)]> grant all privileges on keystone.* to keystone@'localhost' identified by 'password';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> grant all privileges on keystone.* to keystone@'%' identified by 'password';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> flush privileges;
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> exit
Bye
配置keystone¶
[root@controller0 ~]# vi /etc/keystone/keystone.conf
# line 13: 超级管理员密码为admin,此密码仅供设置keystone,在生产环境中应该禁用
admin_token=admin
# line 418: database
connection=mysql://keystone:password@localhost/keystone
# line 1434: token格式
# 可能不要
token_format=PKI
# line 1624: signing
certfile=/etc/keystone/ssl/certs/signing_cert.pem
keyfile=/etc/keystone/ssl/private/signing_key.pem
ca_certs=/etc/keystone/ssl/certs/ca.pem
ca_key=/etc/keystone/ssl/private/cakey.pem
key_size=2048
valid_days=3650
cert_subject=/C=CN/ST=Di/L=Jiang/O=InTheCloud/CN=controller0.lofyer.org
# 设置证书,同步数据库
[root@controller0 ~]# keystone-manage pki_setup --keystone-user keystone --keystone-group keystone
[root@controller0 ~]# keystone-manage db_sync
# 删除日志文件并启动,否则可能因为log文件权限问题而报错
[root@controller0 ~]# rm /var/log/keystone/keystone.log
[root@controller0 ~]# systemctl start openstack-keystone
[root@controller0 ~]# systemctl enable openstack-keystone
添加用户、角色、服务与endpoint¶
将超级管理员配置保存到文件,方便以后管理:
[root@controller0 ~]# cat os_bootstrap
export SERVICE_TOKEN=admin
export SERVICE_ENDPOINT=http://192.168.77.50:35357/v2.0/
[root@controller0 ~]# source os_bootstrap
添加admin及service的tenant组:
[root@controller0 ~]# keystone tenant-create --name admin --description "Admin Tenant" --enabled true
+-------------+----------------------------------+
| Property | Value |
+-------------+----------------------------------+
| description | Admin Tenant |
| enabled | True |
| id | c0c4e7b797bb41798202b55872fba074 |
| name | admin |
+-------------+----------------------------------+
[root@controller0 ~]# keystone tenant-create --name service --description "Service Tenant" --enabled true
+-------------+----------------------------------+
| Property | Value |
+-------------+----------------------------------+
| description | Service Tenant |
| enabled | True |
| id | 9acf83020ae34047b6f1e320c352ae44 |
| name | service |
+-------------+----------------------------------+
[root@controller0 ~]# keystone tenant-list
+----------------------------------+---------+---------+
| id | name | enabled |
+----------------------------------+---------+---------+
| c0c4e7b797bb41798202b55872fba074 | admin | True |
| 9acf83020ae34047b6f1e320c352ae44 | service | True |
+----------------------------------+---------+---------+
创建角色:
# 创建admin角色
[root@controller0 ~]# keystone role-create --name admin
+----------+----------------------------------+
| Property | Value |
+----------+----------------------------------+
| id | 95c4b8fb8d97424eb52a4e8a00a357e7 |
| name | admin |
+----------+----------------------------------+
# 创建Member角色
[root@controller0 ~]# keystone role-create --name Member
+----------+----------------------------------+
| Property | Value |
+----------+----------------------------------+
| id | aa8c08c0ff63422881c7662472b173e6 |
| name | Member |
+----------+----------------------------------+
[root@controller0 ~]# keystone role-list
+----------------------------------+----------+
| id | name |
+----------------------------------+----------+
| aa8c08c0ff63422881c7662472b173e6 | Member |
| 9fe2ff9ee4384b1894a90878d3e92bab | _member_ |
| 95c4b8fb8d97424eb52a4e8a00a357e7 | admin |
+----------------------------------+----------+
添加用户并赋予角色:
# 添加admin用户至admin组,此处的密码仅仅是admin用户密码,与之前的admin_token可以不同
[root@controller0 ~]# keystone user-create --tenant admin --name admin --pass admin --enabled true
+----------+----------------------------------+
| Property | Value |
+----------+----------------------------------+
| email | |
| enabled | True |
| id | cf11b4425218431991f095c2f58578a0 |
| name | admin |
| tenantId | c0c4e7b797bb41798202b55872fba074 |
| username | admin |
+----------+----------------------------------+
# 赋予admin用户以admin角色
[root@controller0 ~]# keystone user-role-add --user admin --tenant admin --role admin
# 添加即将用到的glance、nova用户与服务
[root@controller0 ~]# keystone user-create --tenant service --name glance --pass servicepassword --enabled true
+----------+----------------------------------+
| Property | Value |
+----------+----------------------------------+
| email | |
| enabled | True |
| id | 2dcaa8929688442dbc1df30bee8921eb |
| name | glance |
| tenantId | 9acf83020ae34047b6f1e320c352ae44 |
| username | glance |
+----------+----------------------------------+
[root@controller0 ~]# keystone user-role-add --user glance --tenant service --role admin
[root@controller0 ~]# keystone user-create --tenant service --name nova --pass servicepassword --enabled true
+----------+----------------------------------+
| Property | Value |
+----------+----------------------------------+
| email | |
| enabled | True |
| id | 566fe34145af4390b0aadb906131a9e8 |
| name | nova |
| tenantId | 9acf83020ae34047b6f1e320c352ae44 |
| username | nova |
+----------+----------------------------------+
[root@controller0 ~]# keystone user-role-add --user nova --tenant service --role admin
添加服务:
[root@controller0 ~]# keystone service-create --name=keystone --type=identity --description="Keystone Identity Service"
+-------------+----------------------------------+
| Property | Value |
+-------------+----------------------------------+
| description | Keystone Identity Service |
| enabled | True |
| id | b3ea5d31edce4c10b3b4c18359de0d09 |
| name | keystone |
| type | identity |
+-------------+----------------------------------+
[root@controller0 ~]# keystone service-create --name=glance --type=image --description="Glance Image Service"
+-------------+----------------------------------+
| Property | Value |
+-------------+----------------------------------+
| description | Glance Image Service |
| enabled | True |
| id | 6afe8a067e2945fca023f85c7760ae53 |
| name | glance |
| type | image |
+-------------+----------------------------------+
[root@controller0 ~]# keystone service-create --name=nova --type=compute --description="Nova Compute Service"
+-------------+----------------------------------+
| Property | Value |
+-------------+----------------------------------+
| description | Nova Compute Service |
| enabled | True |
| id | 80edb3d3914644c4b0570fd8d8dabdaa |
| name | nova |
| type | compute |
+-------------+----------------------------------+
[root@controller0 ~]# keystone service-list
+----------------------------------+----------+----------+---------------------------+
| id | name | type | description |
+----------------------------------+----------+----------+---------------------------+
| 6afe8a067e2945fca023f85c7760ae53 | glance | image | Glance Image Service |
| b3ea5d31edce4c10b3b4c18359de0d09 | keystone | identity | Keystone Identity Service |
| 80edb3d3914644c4b0570fd8d8dabdaa | nova | compute | Nova Compute Service |
+----------------------------------+----------+----------+---------------------------+
添加endpoint:
[root@controller0 ~]# export my_host=192.168.77.50
# 添加keystone的endpoint
[root@controller0 ~]# keystone endpoint-create --region RegionOne \
--service keystone \
--publicurl "http://$my_host:\$(public_port)s/v2.0" \
--internalurl "http://$my_host:\$(public_port)s/v2.0" \
--adminurl "http://$my_host:\$(admin_port)s/v2.0"
+-------------+-------------------------------------------+
| Property | Value |
+-------------+-------------------------------------------+
| adminurl | http://192.168.77.50:$(admin_port)s/v2.0 |
| id | 09c263fa9b3c4a58bcead0b2f5aba1a1 |
| internalurl | http://192.168.77.50:$(public_port)s/v2.0 |
| publicurl | http://192.168.77.50:$(public_port)s/v2.0 |
| region | RegionOne |
| service_id | b3ea5d31edce4c10b3b4c18359de0d09 |
+-------------+-------------------------------------------+
# 添加glance的endpoint
[root@controller0 ~]# keystone endpoint-create --region RegionOne \
--service glance \
--publicurl "http://$my_host:9292/v1" \
--internalurl "http://$my_host:9292/v1" \
--adminurl "http://$my_host:9292/v1"
+-------------+----------------------------------+
| Property | Value |
+-------------+----------------------------------+
| adminurl | http://192.168.77.50:9292/v1 |
| id | 975ff2836b264e299c669372076666ee |
| internalurl | http://192.168.77.50:9292/v1 |
| publicurl | http://192.168.77.50:9292/v1 |
| region | RegionOne |
| service_id | 6afe8a067e2945fca023f85c7760ae53 |
+-------------+----------------------------------+
# 添加nova的endpoint
keystone endpoint-create --region RegionOne \
--service nova \
--publicurl "http://$my_host:\$(compute_port)s/v2/\$(tenant_id)s" \
--internalurl "http://$my_host:\$(compute_port)s/v2/\$(tenant_id)s" \
--adminurl "http://$my_host:\$(compute_port)s/v2/\$(tenant_id)s"
+-------------+--------------------------------------------------------+
| Property | Value |
+-------------+--------------------------------------------------------+
| adminurl | http://192.168.77.50:$(compute_port)s/v2/$(tenant_id)s |
| id | 194b7ddd24c94a0ebf79cd7275478dfc |
| internalurl | http://192.168.77.50:$(compute_port)s/v2/$(tenant_id)s |
| publicurl | http://192.168.77.50:$(compute_port)s/v2/$(tenant_id)s |
| region | RegionOne |
| service_id | 80edb3d3914644c4b0570fd8d8dabdaa |
+-------------+--------------------------------------------------------+
[root@controller0 ~]# keystone endpoint-list
+----------------------------------+-----------+--------------------------------------------------------+--------------------------------------------------------+--------------------------------------------------------+----------------------------------+
| id | region | publicurl | internalurl | adminurl | service_id |
+----------------------------------+-----------+--------------------------------------------------------+--------------------------------------------------------+--------------------------------------------------------+----------------------------------+
| 09c263fa9b3c4a58bcead0b2f5aba1a1 | RegionOne | http://192.168.77.50:$(public_port)s/v2.0 | http://192.168.77.50:$(public_port)s/v2.0 | http://192.168.77.50:$(admin_port)s/v2.0 | b3ea5d31edce4c10b3b4c18359de0d09 |
| 194b7ddd24c94a0ebf79cd7275478dfc | RegionOne | http://192.168.77.50:$(compute_port)s/v2/$(tenant_id)s | http://192.168.77.50:$(compute_port)s/v2/$(tenant_id)s | http://192.168.77.50:$(compute_port)s/v2/$(tenant_id)s | 80edb3d3914644c4b0570fd8d8dabdaa |
| 975ff2836b264e299c669372076666ee | RegionOne | http://192.168.77.50:9292/v1 | http://192.168.77.50:9292/v1 | http://192.168.77.50:9292/v1 | 6afe8a067e2945fca023f85c7760ae53 |
+----------------------------------+-----------+--------------------------------------------------------+--------------------------------------------------------+--------------------------------------------------------+----------------------------------+
配置Glance¶
初始化glance¶
# 安装glance
[root@controller0 ~]# yum install -y openstack-glance
# 初始化数据库
[root@controller0 ~]# mysql -u root -p
Enter password:
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 16
Server version: 5.5.40-MariaDB-wsrep MariaDB Server, wsrep_25.11.r4026
Copyright (c) 2000, 2014, Oracle, Monty Program Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]> create database glance;
Query OK, 1 row affected (0.00 sec)
MariaDB [(none)]> grant all privileges on glance.* to glance@'localhost' identified by 'password';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> grant all privileges on glance.* to glance@'%' identified by 'password';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> flush privileges;
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> exit
Bye
配置glance¶
[root@controller0 ~]# vi /etc/glance/glance-registry.conf
# line 165: database
connection=mysql://glance:password@localhost/glance
# line 245: 添加keystone认证信息
[keystone_authtoken]
identity_uri=http://192.168.77.50:35357
admin_tenant_name=service
admin_user=glance
admin_password=servicepassword
# line 259: paste_deploy
flavor=keystone
[root@controller0 ~]# vi /etc/glance/glance-api.conf
# line 240: 修改rabbit用户密码
rabbit_userid=guest
rabbit_password=password
# line 339: database
connection=mysql://glance:password@localhost/glance
# line 433: 添加keystone认证信息
[keystone_authtoken]
auth_uri = http://192.168.77.50:35357/v2.0
identity_uri=http://192.168.77.50:5000
admin_tenant_name=service
admin_user=glance
admin_password=servicepassword
revocation_cache_time=10
# line 448: paste_deploy
flavor=keystone
[root@controller0 ~]# glance-manage db_sync
# 删除日志文件并启动,否则可能因为log文件权限问题而报错
[root@controller0 ~]# rm /var/log/glance/api.log
[root@controller0 ~]# for service in api registry; do
systemctl start openstack-glance-$service
systemctl enable openstack-glance-$service
done
配置Nova¶
初始化nova¶
[root@controller0 ~]# yum install -y openstack-nova
[root@controller0 ~]# mysql -u root -p
Enter password:
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 18
Server version: 5.5.40-MariaDB-wsrep MariaDB Server, wsrep_25.11.r4026
Copyright (c) 2000, 2014, Oracle, Monty Program Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]> create database nova;
Query OK, 1 row affected (0.00 sec)
MariaDB [(none)]> grant all privileges on nova.* to nova@'localhost' identified by 'password';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> grant all privileges on nova.* to nova@'%' identified by 'password';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> flush privileges;
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> exit
Bye
配置nova¶
基本配置:
[root@controller0 ~]# mv /etc/nova/nova.conf /etc/nova/nova.conf.org
[root@controller0 ~]# vi /etc/nova/nova.conf
# 新建以下内容
[DEFAULT]
# RabbitMQ服务信息
rabbit_host=192.168.77.50
rabbit_port=5672
rabbit_userid=guest
rabbit_password=password
notification_driver=nova.openstack.common.notifier.rpc_notifier
rpc_backend=rabbit
# 本计算节点IP
my_ip=192.168.77.50
# 是否支持ipv6
use_ipv6=false
state_path=/var/lib/nova
enabled_apis=ec2,osapi_compute,metadata
osapi_compute_listen=0.0.0.0
osapi_compute_listen_port=8774
rootwrap_config=/etc/nova/rootwrap.conf
api_paste_config=api-paste.ini
auth_strategy=keystone
lock_path=/var/lib/nova/tmp
log_dir=/var/log/nova
# Memcached服务信息
memcached_servers=192.168.77.50:11211
scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler
[glance]
# Glance服务信息
host=192.168.77.50
port=9292
protocol=http
[database]
# connection info for MariaDB
connection=mysql://nova:password@localhost/nova
[keystone_authtoken]
# Keystone server's hostname or IP
auth_uri = http://192.168.77.50:35357/v2.0
identity_uri=http://192.168.77.50:5000
admin_user=nova
# Nova user's password added in Keystone
admin_password=servicepassword
admin_tenant_name=service
signing_dir=/var/lib/nova/keystone-signing
[root@controller0 ~]# chmod 640 /etc/nova/nova.conf
[root@controller0 ~]# chgrp nova /etc/nova/nova.conf
Next we configure the network service. nova-network is not the officially recommended choice, but it is relatively simple, so it is still written out here; you can revise it later when you reach 配置Neutron(推荐), or skip it entirely (pay attention to which services and configuration files are involved):
[root@controller0 ~]# vi /etc/nova/nova.conf
# 在DEFAULT段中添加如下内容
# nova-network
network_driver=nova.network.linux_net
libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtGenericVIFDriver
linuxnet_interface_driver=nova.network.linux_net.LinuxBridgeInterfaceDriver
firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver
network_api_class=nova.network.api.API
security_group_api=nova
network_manager=nova.network.manager.FlatDHCPManager
network_size=254
allow_same_net_traffic=False
multi_host=True
send_arp_for_ha=True
share_dhcp_address=True
force_dhcp_release=True
# 指定public网络接口
public_interface=eno16777736
# 任意桥接接口
flat_network_bridge=br100
# 创建dummy接口
flat_interface=dummy0
# 添加用于flat-DHCP的虚拟接口
[root@controller0 ~]# cat > /etc/sysconfig/network-scripts/ifcfg-dummy0 <<EOF
DEVICE=dummy0
BOOTPROTO=none
ONBOOT=yes
TYPE=Ethernet
NM_CONTROLLED=no
EOF
# 加载dummy模块,用于虚拟机内网流量路由
[root@controller0 ~]# echo "alias dummy0 dummy" > /etc/modprobe.d/dummy.conf
[root@controller0 ~]# ifconfig dummy0 up
Enable the services; if you are not using nova-network, leave network out of the list below:
[root@controller0 ~]# nova-manage db sync
[root@controller0 ~]# for service in api objectstore conductor scheduler cert consoleauth compute network; do
systemctl start openstack-nova-$service
systemctl enable openstack-nova-$service
done
[root@controller0 ~]# nova service-list
+----+------------------+-------------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary | Host | Zone | Status | State | Updated_at | Disabled Reason |
+----+------------------+-------------+----------+---------+-------+----------------------------+-----------------+
| 1 | nova-conductor | controller0 | internal | enabled | up | 2015-06-19T01:40:07.000000 | - |
| 2 | nova-scheduler | controller0 | internal | enabled | up | 2015-06-19T01:40:08.000000 | - |
| 3 | nova-cert | controller0 | internal | enabled | up | 2015-06-19T01:40:10.000000 | - |
| 4 | nova-consoleauth | controller0 | internal | enabled | up | 2015-06-19T01:40:11.000000 | - |
| 5 | nova-compute | controller0 | nova | enabled | up | 2015-06-19T01:40:14.000000 | - |
| 6 | nova-network | controller0 | internal | enabled | up | 2015-06-19T01:40:15.000000 | - |
+----+------------------+-------------+----------+---------+-------+----------------------------+-----------------+
添加镜像¶
# 以admin用户认证
[root@controller0 ~]# cat ~/admin_keystone
export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://192.168.77.50:35357/v2.0/
export PS1='[\u@\h \W(keystone)]\$ '
[root@controller0 ~]# source ~/admin_keystone
# 如果可以执行下面的命令,说明认证成功,否则请检查其配置文件
[root@controller0 ~(keystone)]# glance image-list
+----+------+-------------+------------------+------+--------+
| ID | Name | Disk Format | Container Format | Size | Status |
+----+------+-------------+------------------+------+--------+
+----+------+-------------+------------------+------+--------+
导入之前已经创建好的镜像:
[root@controller0 ~(keystone)]# glance image-create --name="centos7" --is-public=true --disk-format=qcow2 --container-format=bare < rhel7.0.qcow2
+------------------+--------------------------------------+
| Property | Value |
+------------------+--------------------------------------+
| checksum | 0ffb6f101c28af38804f79287f15e7e9 |
| container_format | bare |
| created_at | 2015-06-18T09:34:50.000000 |
| deleted | False |
| deleted_at | None |
| disk_format | qcow2 |
| id | 7f1f376c-0dff-44a3-87e8-d13883f795fc |
| is_public | True |
| min_disk | 0 |
| min_ram | 0 |
| name | centos7 |
| owner | c0c4e7b797bb41798202b55872fba074 |
| protected | False |
| size | 21478375424 |
| status | active |
| updated_at | 2015-06-18T09:41:28.000000 |
| virtual_size | None |
+------------------+--------------------------------------+
[root@controller0 ~(keystone)]# glance image-list
+--------------------------------------+---------+-------------+------------------+-------------+--------+
| ID | Name | Disk Format | Container Format | Size | Status |
+--------------------------------------+---------+-------------+------------------+-------------+--------+
| 7f1f376c-0dff-44a3-87e8-d13883f795fc | centos7 | qcow2 | bare | 21478375424 | active |
+--------------------------------------+---------+-------------+------------------+-------------+--------+
配置Nova Network(可选)¶
如果使用nova-network请参考此处,否则请忽略:
# 以admin用户认证
[root@controller0 ~]# cat ~/admin_keystone
export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://192.168.77.50:35357/v2.0/
export PS1='[\u@\h \W(keystone)]\$ '
[root@controller0 ~]# source ~/admin_keystone
# 创建实例的内网
[root@controller0 ~(keystone)]# nova-manage network create --label neutron01 --dns1 10.0.0.1 --fixed_range_v4=10.1.0.0/24
[root@controller0 ~(keystone)]# nova-manage network list
id IPv4 IPv6 start address DNS1 DNS2 VlanID project uuid
1 10.1.0.0/24 None 10.1.0.2 10.0.0.1 None None None d5bac5d4-7d1f-49ea-98d7-ea9039e75740
# 建立安全规则
# 允许ssh访问
[root@controller0 ~(keystone)]# nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range | Source Group |
+-------------+-----------+---------+-----------+--------------+
| tcp | 22 | 22 | 0.0.0.0/0 | |
+-------------+-----------+---------+-----------+--------------+
# 允许ping
[root@controller0 ~(keystone)]# nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range | Source Group |
+-------------+-----------+---------+-----------+--------------+
| icmp | -1 | -1 | 0.0.0.0/0 | |
+-------------+-----------+---------+-----------+--------------+
[root@controller0 ~(keystone)]# nova secgroup-list-rules default
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range | Source Group |
+-------------+-----------+---------+-----------+--------------+
| tcp | 22 | 22 | 0.0.0.0/0 | |
| icmp | -1 | -1 | 0.0.0.0/0 | |
+-------------+-----------+---------+-----------+--------------+
# 允许虚拟机启动时使用floating-ip
[root@controller0 ~(keystone)]# vi /etc/nova/nova.conf
# 在DEFAULT段中添加
auto_assign_floating_ip=true
# 重启nova-network
[root@controller0 ~(keystone)]# systemctl restart openstack-nova-network
# reserve part of 10.0.0.0/24 (here 10.0.0.249-10.0.0.254, six addresses) as floating IPs for instances
[root@controller0 ~(keystone)]# nova-manage floating create --ip_range=10.0.0.248/29
[root@controller0 ~(keystone)]# nova-manage floating list
None 10.0.0.249 None nova eno16777736
None 10.0.0.250 None nova eno16777736
None 10.0.0.251 None nova eno16777736
None 10.0.0.252 None nova eno16777736
None 10.0.0.253 None nova eno16777736
None 10.0.0.254 None nova eno16777736
# 测试启动
[root@controller0 ~(keystone)]# nova boot --flavor 2 --image centos7iso --security_group default centos7iso
+--------------------------------------+---------------------------------------------------+
| Property | Value |
+--------------------------------------+---------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-SRV-ATTR:host | - |
| OS-EXT-SRV-ATTR:hypervisor_hostname | - |
| OS-EXT-SRV-ATTR:instance_name | instance-00000003 |
| OS-EXT-STS:power_state | 0 |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | - |
| OS-SRV-USG:terminated_at | - |
| accessIPv4 | |
| accessIPv6 | |
| adminPass | 3qF4JPhERims |
| config_drive | |
| created | 2015-06-19T03:47:15Z |
| flavor | m1.small (2) |
| hostId | |
| id | a0cae25e-4629-48da-a054-99aed02baff9 |
| image | centos7iso (d8d93d5f-56cf-4ce6-a2d1-f856fca529e2) |
| key_name | - |
| metadata | {} |
| name | centos7iso |
| os-extended-volumes:volumes_attached | [] |
| progress | 0 |
| security_groups | default |
| status | BUILD |
| tenant_id | c0c4e7b797bb41798202b55872fba074 |
| updated | 2015-06-19T03:47:15Z |
| user_id | cf11b4425218431991f095c2f58578a0 |
+--------------------------------------+---------------------------------------------------+
[root@controller0 ~(keystone)]# nova list
+--------------------------------------+------------+--------+------------+-------------+-------------------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+------------+--------+------------+-------------+-------------------------------+
| a0cae25e-4629-48da-a054-99aed02baff9 | centos7iso | BUILD | spawning | NOSTATE | neutron01=10.1.0.2, 10.0.0.249|
+--------------------------------------+------------+--------+------------+-------------+-------------------------------+
# 添加另一个floating-ip
[root@controller0 ~(keystone)]# nova add-floating-ip centos7iso 10.0.0.250
[root@controller0 ~(keystone)]# nova list
+--------------------------------------+------------+--------+------------+-------------+--------------------------------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+------------+--------+------------+-------------+--------------------------------------------+
| a0cae25e-4629-48da-a054-99aed02baff9 | centos7iso | BUILD | spawning | NOSTATE | neutron01=10.1.0.2, 10.0.0.249, 10.0.0.250 |
+--------------------------------------+------------+--------+------------+-------------+--------------------------------------------+
配置Horizon¶
添加web界面。
# 安装必需包
[root@controller0 ~(keystone)]# yum --enablerepo=openstack-kilo,epel -y install openstack-dashboard openstack-nova-novncproxy
# 配置vnc
[root@controller0 ~(keystone)]# vi /etc/nova/nova.conf
# 于DEFAULT段中添加
vnc_enabled=true
novncproxy_host=0.0.0.0
novncproxy_port=6080
# replace the IP address to your own IP
novncproxy_base_url=http://192.168.77.50:6080/vnc_auto.html
vncserver_listen=192.168.77.50
vncserver_proxyclient_address=192.168.77.50
# 使能dashboard
[root@controller0 ~(keystone)]# vi /etc/openstack-dashboard/local_settings
# line 15: 允许所有人访问
ALLOWED_HOSTS = ['*']
# line 134:
OPENSTACK_HOST = "192.168.77.50"
# 启用服务
[root@controller0 ~(keystone)]# systemctl start openstack-nova-novncproxy
[root@controller0 ~(keystone)]# systemctl restart openstack-nova-compute
[root@controller0 ~(keystone)]# systemctl restart httpd
[root@controller0 ~(keystone)]# systemctl enable openstack-nova-novncproxy
[root@controller0 ~(keystone)]# systemctl enable httpd
添加计算节点¶
现在开始加入第二个计算节点compute0:
# 安装必需包
[root@compute0 ~]# yum install -y openstack-nova-compute openstack-nova-api openstack-nova-network
# 配置nova
[root@compute0 ~]# mv /etc/nova/nova.conf /etc/nova/nova.conf.org
[root@compute0 ~]# vi /etc/nova/nova.conf
[DEFAULT]
rabbit_host=192.168.77.50
rabbit_port=5672
rabbit_userid=guest
rabbit_password=password
notification_driver=nova.openstack.common.notifier.rpc_notifier
rpc_backend=rabbit
my_ip=192.168.77.51
use_ipv6=false
state_path=/var/lib/nova
enabled_apis=ec2,osapi_compute,metadata
osapi_compute_listen=0.0.0.0
osapi_compute_listen_port=8774
rootwrap_config=/etc/nova/rootwrap.conf
api_paste_config=api-paste.ini
auth_strategy=keystone
lock_path=/var/lib/nova/tmp
log_dir=/var/log/nova
memcached_servers=192.168.77.50:11211
scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler
[glance]
host=192.168.77.50
port=9292
protocol=http
[database]
connection=mysql://nova:password@192.168.77.50/nova
[keystone_authtoken]
auth_uri = http://192.168.77.50:35357/v2.0
identity_uri=http://192.168.77.50:5000
admin_user=nova
# Nova user's password added in Keystone
admin_password=servicepassword
admin_tenant_name=service
signing_dir=/var/lib/nova/keystone-signing
[root@compute0 ~]# chmod 640 /etc/nova/nova.conf
[root@compute0 ~]# chgrp nova /etc/nova/nova.conf
配置nova-network:
[root@controller0 ~]# vi /etc/nova/nova.conf
# 在DEFAULT段中添加如下内容
# nova-network
network_driver=nova.network.linux_net
libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtGenericVIFDriver
linuxnet_interface_driver=nova.network.linux_net.LinuxBridgeInterfaceDriver
firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver
network_api_class=nova.network.api.API
security_group_api=nova
network_manager=nova.network.manager.FlatDHCPManager
network_size=254
allow_same_net_traffic=False
multi_host=True
send_arp_for_ha=True
share_dhcp_address=True
force_dhcp_release=True
# 指定public网络接口
public_interface=eno16777736
# 任意桥接接口
flat_network_bridge=br100
# 创建dummy接口
flat_interface=dummy0
# 如果需要自动floating-ip
auto_assign_floating_ip=true
启动服务,如果不需要nova-network请同样省略数组中的network:
[root@compute0 ~]# for service in metadata-api compute network; do systemctl start openstack-nova-$service; systemctl enable openstack-nova-$service; done
[root@compute0 ~]# nova-manage service list
Binary Host Zone Status State Updated_At
nova-conductor controller0 internal enabled :-) 2015-06-19 05:31:48
nova-scheduler controller0 internal enabled :-) 2015-06-19 05:31:48
nova-cert controller0 internal enabled :-) 2015-06-19 05:31:51
nova-consoleauth controller0 internal enabled :-) 2015-06-19 05:31:52
nova-compute controller0 nova enabled :-) 2015-06-19 05:31:45
nova-network controller0 internal enabled :-) 2015-06-19 05:31:44
nova-compute compute0 nova enabled :-) 2015-06-19 05:31:50
nova-network compute0 internal enabled :-) 2015-06-19 05:31:51
配置Neutron(推荐)¶
If you have followed the order above (excluding the nova-network parts), there should now be two compute nodes.
那么我们的配置如下:
+------------------+ | +------------------------+
| [ contoller0 ] | | | [ neutron0 ] |
| Keystone |192.168.77.50 | 192.168.77.30| DHCP Agent |
| Glance |---------------+---------------| L3 Agent |
| Nova API |eth0 | eth0| L2 Agent |
| Neutron Server | | | Metadata Agent |
| Nova Compute | | +------------------------+
+------------------+ |
eth0|192.168.77.51
+--------------------+
| [ compute0 ] |
| Nova Compute |
| L2 Agent |
+--------------------+
控制节点controller0配置¶
安装neutron
neutron依赖于各种插件(openvswitch、linuxbridge等),我们在此使用openvswitch。
# 安装neutron
[root@controller0 ~(keystone)]# yum install -y openstack-neutron openstack-neutron-ml2
# 初始化数据库
[root@controller0 ~(keystone)]# mysql -u root -p
Enter password:
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 14
Server version: 5.5.40-MariaDB-wsrep MariaDB Server, wsrep_25.11.r4026
Copyright (c) 2000, 2014, Oracle, Monty Program Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]> create database neutron_ml2;
Query OK, 1 row affected (0.00 sec)
MariaDB [(none)]> grant all privileges on neutron_ml2.* to neutron@'localhost' identified by 'password';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> grant all privileges on neutron_ml2.* to neutron@'%' identified by 'password';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> flush privileges;
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> exit
Bye
# 创建neutron服务
[root@controller0 ~(keystone)]# keystone user-create --tenant service --name neutron --pass servicepassword --enabled true
+----------+----------------------------------+
| Property | Value |
+----------+----------------------------------+
| email | |
| enabled | True |
| id | 6dafe1f763de44778fa9c4848da7d20f |
| name | neutron |
| tenantId | 9acf83020ae34047b6f1e320c352ae44 |
| username | neutron |
+----------+----------------------------------+
[root@controller0 ~(keystone)]# keystone user-role-add --user neutron --tenant service --role admin
[root@controller0 ~(keystone)]# keystone service-create --name=neutron --type=network --description="Neutron Network Service"
+-------------+----------------------------------+
| Property | Value |
+-------------+----------------------------------+
| description | Neutron Network Service |
| enabled | True |
| id | 534492ae3d48407bb3b2a90607f43461 |
| name | neutron |
| type | network |
+-------------+----------------------------------+
[root@controller0 ~(keystone)]# export neutron_server=192.168.77.50
[root@controller0 ~(keystone)]# keystone endpoint-create --region RegionOne --service neutron --publicurl "http://$neutron_server:9696/" --internalurl "http://$neutron_server:9696/" --adminurl "http://$neutron_server:9696/"
+-------------+----------------------------------+
| Property | Value |
+-------------+----------------------------------+
| adminurl | http://192.168.77.50:9696/ |
| id | 74fd6b095c16452d97ffcb2b1fd0dad3 |
| internalurl | http://192.168.77.50:9696/ |
| publicurl | http://192.168.77.50:9696/ |
| region | RegionOne |
| service_id | 534492ae3d48407bb3b2a90607f43461 |
+-------------+----------------------------------+
# 配置neutron
[root@controller0 ~(keystone)]# vi /etc/neutron/neutron.conf
# [DEFAULT]
# line 62: 后端插件
core_plugin=ml2
# line 69: 服务插件
service_plugins=router
# line 84: 认证方式
auth_strategy=keystone
# line 110: 取消注释
dhcp_agent_notification=True
# line 339: 控制节点的nova端
nova_url=http://192.168.77.50:8774/v2
# line 345: nova用户名
nova_admin_username=nova
# line 348: service用户的tenant id(可使用keystone tenant-list查看)
nova_admin_tenant_id=9acf83020ae34047b6f1e320c352ae44
# line 357: nova用户的service密码
nova_admin_password=servicepassword
# line 360: keystone认证端
nova_admin_auth_url=http://192.168.77.50:35357/v2.0
# [oslo_messaging_rabbit]
# line 445: rabbitMQ服务器
rabbit_host=192.168.77.50
# line 449: rabbitMQ端口
rabbit_port=5672
# line 458: rabbitMQ用户信息
rabbit_userid=guest
rabbit_password=password
# line 464: rpc后端,可从AMQ或者RABBITMQ中选择
rpc_backend=rabbit
# line 551: 控制信息交换格式
control_exchange=neutron
# line 688: keystone认证信息,由于auth_uri以后会被identity_uri代替,并且auth_host等信息也不必要了,但为兼容性起见,此处我给予保留
[keystone_authtoken]
auth_uri = http://192.168.77.50:35357/v2.0
identity_uri=http://192.168.77.50:5000
admin_tenant_name = service
admin_user = neutron
admin_password = servicepassword
# line 708: 数据库
connection = mysql://neutron:password@192.168.77.50/neutron_ml2
# 配置ml2插件
[root@controller0 ~(keystone)]# vi /etc/neutron/plugins/ml2/ml2_conf.ini
# line 7: 网络支持
type_drivers = flat,vlan,gre
tenant_network_types = vlan,gre
mechanism_drivers = openvswitch
# line 93: 启用安全组
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
# 配置nova节点以支持neutron
[root@controller0 ~(keystone)]# vi /etc/nova/nova.conf
# add in the [DEFAULT] section
# nova-network
#network_driver=nova.network.linux_net
#libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtGenericVIFDriver
#linuxnet_interface_driver=nova.network.linux_net.LinuxBridgeInterfaceDriver
#firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver
#network_api_class=nova.network.api.API
#security_group_api=nova
#network_manager=nova.network.manager.FlatDHCPManager
#network_size=254
#allow_same_net_traffic=False
#multi_host=True
#send_arp_for_ha=True
#share_dhcp_address=True
#force_dhcp_release=True
## specify nic for public
#public_interface=eno16777736
## specify any name you like for bridge
#flat_network_bridge=br100
## specify nic for flat DHCP bridge
#flat_interface=dummy0
#auto_assign_floating_ip=true
# neutron-network
network_api_class=nova.network.neutronv2.api.API
security_group_api=neutron
# 在末尾添加neutron用户认证信息
[neutron]
url = http://192.168.77.50:9696
auth_strategy = keystone
admin_auth_url = http://192.168.77.50:35357/v2.0
admin_tenant_name = service
admin_username = neutron
admin_password = servicepassword
[root@controller0 ~(keystone)]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
[root@controller0 ~(keystone)]# neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini upgrade head
# 启用neutron-server服务,如果之前有配置nova-network,在此需禁用
[root@controller0 ~(keystone)]# systemctl stop openstack-nova-network
[root@controller0 ~(keystone)]# systemctl disable openstack-nova-network
[root@controller0 ~(keystone)]# systemctl start neutron-server
[root@controller0 ~(keystone)]# systemctl enable neutron-server
[root@controller0 ~(keystone)]# systemctl restart openstack-nova-api
网络节点neutron0配置¶
在节点neutron0上,我们进行如下配置。
# 安装必需包
[root@neutron0 ~]# yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-openvswitch
# 打开ip_forward
[root@neutron0 ~]# echo 'net.ipv4.ip_forward=1' >> /etc/sysctl.conf
[root@neutron0 ~]# echo 'net.ipv4.conf.default.rp_filter=0' >> /etc/sysctl.conf
[root@neutron0 ~]# echo 'net.ipv4.conf.all.rp_filter=0' >> /etc/sysctl.conf
[root@neutron0 ~]# sysctl -p
# 配置neutron
[root@neutron0 ~]# vi /etc/neutron/neutron.conf
# line 60
core_plugin=ml2
# line 69
service_plugins=router
# line 84
auth_strategy=keystone
# line 110
dhcp_agent_notification=True
# line 444: rabbitMQ信息
rabbit_host=192.168.77.50
# line 448
rabbit_port=5672
# line 457
rabbit_userid=guest
# line 460
rabbit_password=password
# line 545
rpc_backend=rabbit
# line 550
control_exchange=neutron
# line 687: keystone认证信息
[keystone_authtoken]
auth_uri = http://192.168.77.50:35357/v2.0
identity_uri=http://192.168.77.50:5000
admin_tenant_name = service
admin_user = neutron
admin_password = servicepassword
# 配置三层交换代理
[root@neutron0 ~]# vi /etc/neutron/l3_agent.ini
# line 19: uncomment
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
# line 25: uncomment
use_namespaces = True
# line 63: add (it's OK to keep value empty (set it if needed))
external_network_bridge =
# 配置dhcp代理
[root@neutron0 ~]# vi /etc/neutron/dhcp_agent.ini
# line 27: uncomment
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
# line 31: uncomment
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
# line 37: uncomment
use_namespaces = True
# 配置元数据代理
[root@neutron0 ~]# vi /etc/neutron/metadata_agent.ini
# line 6: change (specify endpoint of keystone)
auth_url = http://192.168.77.50:35357/v2.0
# line 12: change (auth info ofr keystone)
admin_tenant_name = service
admin_user = neutron
admin_password = servicepassword
# line 20: uncomment and specify Nova API server
nova_metadata_ip = 192.168.77.50
# line 23: uncomment
nova_metadata_port = 8775
# line 43: uncomment and specify any secret key you like
metadata_proxy_shared_secret = metadata_secret
# 配置ml2
[root@neutron0 ~]# vi /etc/neutron/plugins/ml2/ml2_conf.ini
# line 7: add
type_drivers = flat,vlan,gre
tenant_network_types = vlan,gre
mechanism_drivers = openvswitch
# line 92: uncomment and add
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
[root@neutron0 ~]# mv /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini.org
[root@neutron0 ~]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
[root@neutron0 ~]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini
[root@neutron0 ~]# systemctl start openvswitch
[root@neutron0 ~]# systemctl enable openvswitch
[root@neutron0 ~]# ovs-vsctl add-br br-int
[root@neutron0 ~]# for service in dhcp-agent l3-agent metadata-agent openvswitch-agent; do
systemctl start neutron-$service
systemctl enable neutron-$service
done
计算节点compute0配置¶
在除controller0的另一个计算节点compute0上,我们进行如下配置。
# 安装必需包
[root@compute0 ~]# yum install -y openstack-neutron openstack-neutron-ml2 openstack-neutron-openvswitch
# 调节rp_filter
[root@compute0 ~]# echo 'net.ipv4.conf.default.rp_filter=0' >> /etc/sysctl.conf
[root@compute0 ~]# echo 'net.ipv4.conf.all.rp_filter=0' >> /etc/sysctl.conf
[root@compute0 ~]# sysctl -p
# 配置neutron
[root@compute0 ~]# vi /etc/neutron/neutron.conf
# line 60
core_plugin=ml2
# line 69
service_plugins=router
# line 84
auth_strategy=keystone
# line 110
dhcp_agent_notification=True
# line 444: rabbitMQ信息
rabbit_host=192.168.77.50
# line 448
rabbit_port=5672
# line 457
rabbit_userid=guest
# line 460
rabbit_password=password
# line 545
rpc_backend=rabbit
# line 550
control_exchange=neutron
# line 687: keystone认证信息
[keystone_authtoken]
auth_uri = http://192.168.77.50:35357/v2.0
identity_uri=http://192.168.77.50:5000
admin_tenant_name = service
admin_user = neutron
admin_password = servicepassword
[root@compute0 ~]# vi /etc/neutron/plugins/ml2/ml2_conf.ini
# line 7: add
type_drivers = flat,vlan,gre
tenant_network_types = vlan,gre
mechanism_drivers = openvswitch
# line 69: uncomment and add
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
[root@compute0 ~]# mv /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini.org
[root@compute0 ~]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
[root@compute0 ~]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini
# 配置nova
[root@compute0 ~]# vi /etc/nova/nova.conf
# add in the [DEFAULT] section
linuxnet_interface_driver=nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver=nova.virt.firewall.NoopFirewallDriver
network_api_class=nova.network.neutronv2.api.API
security_group_api=neutron
# specify the Neutron endpoint
neutron_url=http://192.168.77.50:9696
# specify the auth info for keystone
neutron_auth_strategy=keystone
neutron_admin_tenant_name=service
neutron_admin_username=neutron
neutron_admin_password=servicepassword
neutron_admin_auth_url=http://192.168.77.50:35357/v2.0
metadata_listen=0.0.0.0
# specify the Control node
metadata_host=192.168.77.50
service_neutron_metadata_proxy=True
# specify the metadata secret key (it is just the value you set in the Network node)
neutron_metadata_proxy_shared_secret=metadata_secret
vif_plugging_is_fatal=false
vif_plugging_timeout=0
# 启用服务,如果之前有配置nova-network,在此需禁用
[root@compute0 ~]# systemctl stop openstack-nova-network
[root@compute0 ~]# systemctl disable openstack-nova-network
[root@compute0 ~]# systemctl start openvswitch
[root@compute0 ~]# systemctl enable openvswitch
[root@compute0 ~]# ovs-vsctl add-br br-int
[root@compute0 ~]# systemctl restart openstack-nova-compute
[root@compute0 ~]# systemctl restart openstack-nova-metadata-api
[root@compute0 ~]# systemctl start neutron-openvswitch-agent
[root@compute0 ~]# systemctl enable neutron-openvswitch-agent
使用Neutron¶
+-------------+ +----+----+
| Name Server | | Gateway |
+------+------+ +----+----+
|192.168.77.2 |192.168.77.2
| |
+--------------+-----------------+------------------------+
| | | |
| | | |192.168.77.200-192.168.77.254
eth0|192.168.77.50 | 192.168.77.30| eth0 +--------+-------+
+--------+---------+ | +-----------+----------+ | Virtual Router |
| [ controller0 ] | | | [ neutron0 ] | +--------+-------+
| Keystone | | | DHCP Agent | 192.168.100.1
| Glance | | eth2| L3 Agent |eth1 | 192.168.100.0/24
| Nova API | | | L2 Agent | | +-----------------+
| Neutron Server | | | Metadata Agent | | +---| Virtual Machine |
+------------------+ | +----------------------+ | | +-----------------+
| | | +-----------------+
| +----------------------+ +-------+---| Virtual Machine |
| eth0| [ compute0 ] |eth1 | +-----------------+
+-----| Nova Compute | | +-----------------+
192.168.77.51| L2 Agent | |---| Virtual Machine |
+----------------------+ +-----------------+
其中,controller0、compute0都有两个物理网口,neutron0有三个物理网口。
修改控制节点配置:
[root@controller0 ~(keystone)]# vi /etc/neutron/plugins/ml2/ml2_conf.ini
# line 64
[ml2_type_vlan]
network_vlan_ranges = physnet1:1000:2999
# 末尾添加
[ovs]
tenant_network_type = vlan
bridge_mappings = physnet1:br-eth1
[root@controller0 ~(keystone)]# systemctl restart neutron-server
在网络节点和计算节点同时添加eth1作内网:
# 添加一个桥
[root@neutron0 ~]# ovs-vsctl add-br br-eth1
# 将eno33554984网口附加到桥,即对应eth1
[root@neutron0 ~]# ovs-vsctl add-port br-eth1 eno33554984
[root@neutron0 ~]# vi /etc/neutron/plugins/ml2/ml2_conf.ini
# line 64
[ml2_type_vlan]
network_vlan_ranges = physnet1:1000:2999
# 末尾添加
[ovs]
tenant_network_type = vlan
bridge_mappings = physnet1:br-eth1
[root@neutron0 ~]# systemctl restart neutron-openvswitch-agent
在网络节点添加eth2作外网:
[root@neutron0 ~]# ovs-vsctl add-br br-ext
# eno50332208对应eth2
[root@neutron0 ~]# ovs-vsctl add-port br-ext eno50332208
[root@neutron0 ~]# vi /etc/neutron/l3_agent.ini
# line 63
external_network_bridge = br-ext
[root@neutron0 ~]# systemctl restart neutron-l3-agent
在任意节点修改(neutron的配置属于集群全局配置,此处在控制节点修改,其他节点也可):
# create a virtual router
[root@controller0 ~(keystone)]# neutron router-create router01
Created a new router:
+-----------------------+--------------------------------------+
| Field | Value |
+-----------------------+--------------------------------------+
| admin_state_up | True |
| distributed | False |
| external_gateway_info | |
| ha | False |
| id | 8bf0184c-1cd8-4993-b3e0-7be94aaf2757 |
| name | router01 |
| routes | |
| status | ACTIVE |
| tenant_id | c0c4e7b797bb41798202b55872fba074 |
+-----------------------+--------------------------------------+
[root@controller0 ~(keystone)]# Router_ID=`neutron router-list | grep router01 | awk '{ print $2 }'`
# 创建内网
[root@controller0 ~(keystone)]# neutron net-create int_net
Created a new network:
+---------------------------+--------------------------------------+
| Field | Value |
+---------------------------+--------------------------------------+
| admin_state_up | True |
| id | 532e391d-562d-4499-8dee-48ca31345466 |
| mtu | 0 |
| name | int_net |
| provider:network_type | vlan |
| provider:physical_network | physnet1 |
| provider:segmentation_id | 1000 |
| router:external | False |
| shared | False |
| status | ACTIVE |
| subnets | |
| tenant_id | c0c4e7b797bb41798202b55872fba074 |
+---------------------------+--------------------------------------+
# 创建内网子网
[root@controller0 ~(keystone)]# neutron subnet-create --gateway 192.168.100.1 --dns-nameserver 192.168.77.2 int_net 192.168.100.0/24
Created a new subnet:
+-------------------+------------------------------------------------------+
| Field | Value |
+-------------------+------------------------------------------------------+
| allocation_pools | {"start": "192.168.100.2", "end": "192.168.100.254"} |
| cidr | 192.168.100.0/24 |
| dns_nameservers | 192.168.77.2 |
| enable_dhcp | True |
| gateway_ip | 192.168.100.1 |
| host_routes | |
| id | c08dcadf-f632-44b7-9a10-8a3a89c86853 |
| ip_version | 4 |
| ipv6_address_mode | |
| ipv6_ra_mode | |
| name | |
| network_id | 532e391d-562d-4499-8dee-48ca31345466 |
| subnetpool_id | |
| tenant_id | c0c4e7b797bb41798202b55872fba074 |
+-------------------+------------------------------------------------------+
[root@controller0 ~(keystone)]# Int_Subnet_ID=`neutron net-list | grep int_net | awk '{ print $6 }'`
# 将内网实例附加到路由
[root@controller0 ~(keystone)]# neutron router-interface-add $Router_ID $Int_Subnet_ID
Added interface a2e9bedc-0505-45da-8f87-4a82928a6206 to router 8bf0184c-1cd8-4993-b3e0-7be94aaf2757.
# 创建外网
[root@controller0 ~(keystone)]# neutron net-create ext_net --router:external
Created a new network:
+---------------------------+--------------------------------------+
| Field | Value |
+---------------------------+--------------------------------------+
| admin_state_up | True |
| id | e041481d-f8b8-42a7-b87b-3d346167ef21 |
| mtu | 0 |
| name | ext_net |
| provider:network_type | vlan |
| provider:physical_network | physnet1 |
| provider:segmentation_id | 1001 |
| router:external | True |
| shared | False |
| status | ACTIVE |
| subnets | |
| tenant_id | c0c4e7b797bb41798202b55872fba074 |
+---------------------------+--------------------------------------+
# 创建外网子网
[root@controller0 ~(keystone)]# neutron subnet-create ext_net --allocation-pool start=192.168.77.200,end=192.168.77.254 --gateway 192.168.77.2 --dns-nameserver 192.168.77.2 192.168.77.0/24 --disable-dhcp
Created a new subnet:
+-------------------+------------------------------------------------------+
| Field | Value |
+-------------------+------------------------------------------------------+
| allocation_pools | {"start": "192.168.77.200", "end": "192.168.77.254"} |
| cidr | 192.168.77.0/24 |
| dns_nameservers | 192.168.77.2 |
| enable_dhcp | False |
| gateway_ip | 192.168.77.2 |
| host_routes | |
| id | 98f97e64-94d8-4743-b8a1-a715f2c07e08 |
| ip_version | 4 |
| ipv6_address_mode | |
| ipv6_ra_mode | |
| name | |
| network_id | e041481d-f8b8-42a7-b87b-3d346167ef21 |
| subnetpool_id | |
| tenant_id | c0c4e7b797bb41798202b55872fba074 |
+-------------------+------------------------------------------------------+
# 将外网实例附加到路由
[root@controller0 ~(keystone)]# Ext_Net_ID=`neutron net-list | grep ext_net | awk '{ print $2 }'`
[root@controller0 ~(keystone)]# neutron router-gateway-set $Router_ID $Ext_Net_ID
Set gateway for router 8bf0184c-1cd8-4993-b3e0-7be94aaf2757
# 创建并启动虚拟机
[root@controller0 ~(keystone)]# Int_Net_ID=`neutron net-list | grep int_net | awk '{ print $2 }'`
[root@controller0 ~(keystone)]# nova image-list
+--------------------------------------+---------+--------+--------+
| ID | Name | Status | Server |
+--------------------------------------+---------+--------+--------+
| 4a663fce-97eb-42d3-93d6-415e477bc0a4 | CentOS7 | ACTIVE | |
+--------------------------------------+---------+--------+--------+
[root@controller0 ~(keystone)]# nova boot --flavor 2 --image CentOS7 --security_group default --nic net-id=$Int_Net_ID CentOS_70
[root@controller0 ~(keystone)]# nova list
+-----------+-----------+--------+------------+-------------+-----------------------+
| ID | Name | Status | Task State | Power State | Networks |
+-----------+-----------+--------+------------+-------------+-----------------------+
| 33bb9427- | CentOS_70 | ACTIVE | - | Running | int_net=192.168.100.2 |
+-----------+-----------+--------+------------+-------------+-----------------------+
# 添加浮动IP
[root@controller0 ~(keystone)]# neutron floatingip-create ext_net
Created a new floatingip:
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| fixed_ip_address | |
| floating_ip_address | 10.0.0.201 |
| floating_network_id | bd216cab-c07b-4475-90ef-e9ad402bd57b |
| id | da8eef0d-5bc8-488e-8fd4-0c6df1f5922a |
| port_id | |
| router_id | |
| status | DOWN |
| tenant_id | e8f6ac69de5f46afa189fcefd99c8a1a |
+---------------------+--------------------------------------+
[root@controller0 ~(keystone)]# Device_ID=`nova list | grep CentOS_70 | awk '{ print $2 }'`
[root@controller0 ~(keystone)]# Port_ID=`neutron port-list -- --device_id $Device_ID | grep 192.168.100.2 | awk '{ print $2 }'`
[root@controller0 ~(keystone)]# Floating_ID=`neutron floatingip-list | grep 10.0.0.201 | awk '{ print $2 }'`
[root@controller0 ~(keystone)]# neutron floatingip-associate $Floating_ID $Port_ID
Associated floating IP da8eef0d-5bc8-488e-8fd4-0c6df1f5922a
# confirm settings
[root@controller0 ~(keystone)]# neutron floatingip-show $Floating_ID
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| fixed_ip_address | 192.168.100.2 |
| floating_ip_address | 10.0.0.201 |
| floating_network_id | bd216cab-c07b-4475-90ef-e9ad402bd57b |
| id | da8eef0d-5bc8-488e-8fd4-0c6df1f5922a |
| port_id | d4f17f91-c4e9-45ec-af2d-223907e891ea |
| router_id | a0d08cb3-bf96-4872-ab95-b24a697b080a |
| status | ACTIVE |
| tenant_id | e8f6ac69de5f46afa189fcefd99c8a1a |
+---------------------+--------------------------------------+
# 添加安全组
# permit SSH
[root@controller0 ~(keystone)]# nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range | Source Group |
+-------------+-----------+---------+-----------+--------------+
| tcp | 22 | 22 | 0.0.0.0/0 | |
+-------------+-----------+---------+-----------+--------------+
# permit ICMP
[root@controller0 ~(keystone)]# nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range | Source Group |
+-------------+-----------+---------+-----------+--------------+
| icmp | -1 | -1 | 0.0.0.0/0 | |
+-------------+-----------+---------+-----------+--------------+
[root@controller0 ~(keystone)]# nova secgroup-list-rules default
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range | Source Group |
+-------------+-----------+---------+-----------+--------------+
| tcp | 22 | 22 | 0.0.0.0/0 | |
| icmp | -1 | -1 | 0.0.0.0/0 | |
+-------------+-----------+---------+-----------+--------------+
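至此,浮动IP与安全组均已就绪,可以做一个简单的连通性验证。下面是一个示意,其中<floating_ip>为上文实际分配到的浮动IP,登录用户名因镜像而异(CentOS云镜像通常为centos),仅供参考:
# 在外部网络中的任意主机上执行
# ping -c 3 <floating_ip>
# ssh centos@<floating_ip>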
配置Cinder¶
结构图如下:
+------------------+
192.168.77.60| [ cinder0 ] |
+------------------+ +-----+ Cinder-Volume |
| [ controller0 ] | | eth0| |
| Keystone |192.168.77.50 | +------------------+
| Glance |---------------+
| Nova API |eth0 | +------------------+
| Cinder API | | eth0| [ compute0 ] |
+------------------+ +-----+ Nova Compute |
192.168.77.51| |
+------------------+
控制节点初始化Cinder信息:
# 安装Cinder服务
[root@controller0 ~(keystone)]# yum install -y openstack-cinder
# 配置keystone,添加endpoint
[root@controller0 ~(keystone)]# keystone user-create --tenant service --name cinder --pass servicepassword --enabled true
+----------+----------------------------------+
| Property | Value |
+----------+----------------------------------+
| email | |
| enabled | True |
| id | 6c6438aac109473d92ba22ed64ef7f4a |
| name | cinder |
| tenantId | 9acf83020ae34047b6f1e320c352ae44 |
| username | cinder |
+----------+----------------------------------+
[root@controller0 ~(keystone)]# keystone user-role-add --user cinder --tenant service --role admin
[root@controller0 ~(keystone)]# keystone service-create --name=cinder --type=volume --description="Cinder Service"
+-------------+----------------------------------+
| Property | Value |
+-------------+----------------------------------+
| description | Cinder Service |
| enabled | True |
| id | f9745ca8657f40d188a464c706d1d923 |
| name | cinder |
| type | volume |
+-------------+----------------------------------+
[root@controller0 ~(keystone)]# keystone service-create --name=cinderv2 --type=volumev2 --description="Cinder Service"
+-------------+----------------------------------+
| Property | Value |
+-------------+----------------------------------+
| description | Cinder Service |
| enabled | True |
| id | b11416c99c274ed9872ed5eaffad83b7 |
| name | cinderv2 |
| type | volumev2 |
+-------------+----------------------------------+
[root@controller0 ~(keystone)]# export cinder_api=192.168.77.50
[root@controller0 ~(keystone)]# keystone endpoint-create --region RegionOne \
--service cinder \
--publicurl "http://$cinder_api:8776/v1/\$(tenant_id)s" \
--internalurl "http://$cinder_api:8776/v1/\$(tenant_id)s" \
--adminurl "http://$cinder_api:8776/v1/\$(tenant_id)s"
+-------------+--------------------------------------------+
| Property | Value |
+-------------+--------------------------------------------+
| adminurl | http://192.168.77.50:8776/v1/$(tenant_id)s |
| id | 073dafcb7ee049cb8bfd3ebbe149dbc0 |
| internalurl | http://192.168.77.50:8776/v1/$(tenant_id)s |
| publicurl | http://192.168.77.50:8776/v1/$(tenant_id)s |
| region | RegionOne |
| service_id | f9745ca8657f40d188a464c706d1d923 |
+-------------+--------------------------------------------+
[root@controller0 ~(keystone)]# keystone endpoint-create --region RegionOne \
--service cinderv2 \
--publicurl "http://$cinder_api:8776/v2/\$(tenant_id)s" \
--internalurl "http://$cinder_api:8776/v2/\$(tenant_id)s" \
--adminurl "http://$cinder_api:8776/v2/\$(tenant_id)s"
+-------------+--------------------------------------------+
| Property | Value |
+-------------+--------------------------------------------+
| adminurl | http://192.168.77.50:8776/v2/$(tenant_id)s |
| id | 3f00de1ec9474183971ba3c1c0d35c7d |
| internalurl | http://192.168.77.50:8776/v2/$(tenant_id)s |
| publicurl | http://192.168.77.50:8776/v2/$(tenant_id)s |
| region | RegionOne |
| service_id | b11416c99c274ed9872ed5eaffad83b7 |
+-------------+--------------------------------------------+
# 添加数据库
[root@controller0 ~(keystone)]# mysql -u root -p
Enter password:
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 16
Server version: 5.5.40-MariaDB-wsrep MariaDB Server, wsrep_25.11.r4026
Copyright (c) 2000, 2014, Oracle, Monty Program Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]> create database cinder;
Query OK, 1 row affected (0.00 sec)
MariaDB [(none)]> grant all privileges on cinder.* to cinder@'localhost' identified by 'password';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> grant all privileges on cinder.* to cinder@'%' identified by 'password';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> flush privileges;
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> exit
Bye
# 配置cinder参数
[root@controller0 ~(keystone)]# mv /etc/cinder/cinder.conf /etc/cinder/cinder.conf.org
[root@controller0 ~(keystone)]# vi /etc/cinder/cinder.conf
[DEFAULT]
state_path=/var/lib/cinder
api_paste_config=api-paste.ini
enable_v1_api=true
rootwrap_config=/etc/cinder/rootwrap.conf
auth_strategy=keystone
# specify RabbitMQ server
rabbit_host=192.168.77.50
rabbit_port=5672
# specify RabbitMQ user for auth
rabbit_userid=guest
# specify RabbitMQ user's password above
rabbit_password=password
rpc_backend=rabbit
scheduler_driver=cinder.scheduler.filter_scheduler.FilterScheduler
volume_manager=cinder.volume.manager.VolumeManager
volume_api_class=cinder.volume.api.API
volumes_dir=$state_path/volumes
# auth info for MariaDB
[database]
connection=mysql://cinder:password@192.168.77.50/cinder
# auth info for Keystone
[keystone_authtoken]
auth_host=192.168.77.50
auth_port=35357
auth_protocol=http
admin_user=cinder
admin_password=servicepassword
admin_tenant_name=service
# 启用服务
[root@controller0 ~(keystone)]# chmod 640 /etc/cinder/cinder.conf
[root@controller0 ~(keystone)]# chgrp cinder /etc/cinder/cinder.conf
[root@controller0 ~(keystone)]# cinder-manage db sync
[root@controller0 ~(keystone)]# for service in api scheduler; do
systemctl start openstack-cinder-$service
systemctl enable openstack-cinder-$service
done
[root@controller0 ~(keystone)]# cinder-manage service list
Binary Host Zone Status State Updated At
cinder-scheduler controller0 nova enabled :-) None
配置Cinder节点:
# 安装Cinder服务
[root@cinder0 ~]# yum install -y openstack-cinder
[root@cinder0 ~]# mv /etc/cinder/cinder.conf /etc/cinder/cinder.conf.org
[root@cinder0 ~]# vi /etc/cinder/cinder.conf
[DEFAULT]
state_path=/var/lib/cinder
api_paste_config=api-paste.ini
enable_v1_api=true
osapi_volume_listen=0.0.0.0
osapi_volume_listen_port=8776
rootwrap_config=/etc/cinder/rootwrap.conf
auth_strategy=keystone
# specify Glance server
glance_host=192.168.77.50
glance_port=9292
# specify RabbitMQ server
rabbit_host=192.168.77.50
rabbit_port=5672
# RabbitMQ user for auth
rabbit_userid=guest
# RabbitMQ user's password for auth
rabbit_password=password
rpc_backend=rabbit
# specify iSCSI target (it's just the own IP)
iscsi_ip_address=192.168.77.60
iscsi_port=3260
iscsi_helper=tgtadm
scheduler_driver=cinder.scheduler.filter_scheduler.FilterScheduler
volume_manager=cinder.volume.manager.VolumeManager
volume_api_class=cinder.volume.api.API
volumes_dir=$state_path/volumes
# auth info for MariaDB
[database]
connection=mysql://cinder:password@192.168.77.50/cinder
# auth info for Keystone
[keystone_authtoken]
auth_host=192.168.77.50
auth_port=35357
auth_protocol=http
admin_user=cinder
admin_password=servicepassword
admin_tenant_name=service
# 启用服务
[root@cinder0 ~]# chmod 640 /etc/cinder/cinder.conf
[root@cinder0 ~]# chgrp cinder /etc/cinder/cinder.conf
[root@cinder0 ~]# systemctl start openstack-cinder-volume
[root@cinder0 ~]# systemctl enable openstack-cinder-volume
ln -s '/usr/lib/systemd/system/openstack-cinder-volume.service' '/etc/systemd/system/multi-user.target.wants/openstack-cinder-volume.service'
[root@cinder0 ~]# cinder-manage service list
Binary Host Zone Status State Updated At
cinder-scheduler controller0 nova enabled :-) 2015-07-21 03:29:39
cinder-volume cinder0 nova enabled :-) None
配置LVM后端¶
存储节点配置
# 创建PV
[root@cinder0 ~]# pvcreate /dev/sdb
Physical volume "/dev/sdb" successfully created
[root@cinder0 ~]# pvdisplay
"/dev/sdb" is a new physical volume of "20.00 GiB"
--- NEW Physical volume ---
PV Name /dev/sdb
VG Name
PV Size 20.00 GiB
Allocatable NO
PE Size 0
Total PE 0
Free PE 0
Allocated PE 0
PV UUID lDpf6L-zPJT-6Uth-lcPA-KtAS-TYNS-B5LH4c
[root@cinder0 ~]# vgcreate -s 32M vg_volume01 /dev/sdb
Volume group "vg_volume01" successfully created
[root@cinder0 ~]# vgdisplay
--- Volume group ---
VG Name vg_volume01
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 1
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 0
Open LV 0
Max PV 0
Cur PV 1
Act PV 1
VG Size 19.97 GiB
PE Size 32.00 MiB
Total PE 639
Alloc PE / Size 0 / 0
Free PE / Size 639 / 19.97 GiB
VG UUID IYI8rR-d0u4-p58f-h1Bp-afAW-EPRK-21qSdv
# 修改cinder配置
[root@cinder0 ~]# vi /etc/cinder/cinder.conf
# 在DEFAULT段中添加
enabled_backends = lvm
# 在末尾添加
[lvm]
iscsi_helper = lioadm
# volume group name just created
volume_group = vg_volume01
# IP address of Storage Node
iscsi_ip_address = 192.168.77.60
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volumes_dir = $state_path/volumes
iscsi_protocol = iscsi
# 重启服务
[root@cinder0 ~]# systemctl restart openstack-cinder-volume
计算节点配置,在所有计算节点中都要配置
[root@controller0 ~]# vi /etc/nova/nova.conf
# add follows into [DEFAULT] section
osapi_volume_listen=0.0.0.0
volume_api_class=nova.volume.cinder.API
[root@controller0 ~]# systemctl restart openstack-nova-compute
创建测试磁盘
# 在任意装有cinder客户端并具备认证信息的节点上均可执行cinder命令创建磁盘
[root@controller0 ~]# echo "export OS_VOLUME_API_VERSION=2" >> ~/admin_keystone
[root@controller0 ~]# source admin_keystone
[root@controller0 ~(keystone)]# cinder create --display_name disk01 10
+---------------------------------------+--------------------------------------+
| Property | Value |
+---------------------------------------+--------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| consistencygroup_id | None |
| created_at | 2015-07-27T08:24:32.000000 |
| description | None |
| encrypted | False |
| id | 7a974afe-a71a-479f-b63d-b208daae1707 |
| metadata | {} |
| multiattach | False |
| name | disk01 |
| os-vol-host-attr:host | None |
| os-vol-mig-status-attr:migstat | None |
| os-vol-mig-status-attr:name_id | None |
| os-vol-tenant-attr:tenant_id | c0c4e7b797bb41798202b55872fba074 |
| os-volume-replication:driver_data | None |
| os-volume-replication:extended_status | None |
| replication_status | disabled |
| size | 10 |
| snapshot_id | None |
| source_volid | None |
| status | creating |
| user_id | cf11b4425218431991f095c2f58578a0 |
| volume_type | None |
+---------------------------------------+--------------------------------------+
[root@controller0 ~(keystone)]# cinder list
+--------------------------------------+-----------+--------+------+-------------+----------+-------------+
| ID | Status | Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------+------+-------------+----------+-------------+
| 7a974afe-a71a-479f-b63d-b208daae1707 | available | disk01 | 10 | None | false | |
+--------------------------------------+-----------+--------+------+-------------+----------+-------------+
存储节点上查看
[root@cinder0 ~]# lvdisplay
--- Logical volume ---
LV Path /dev/vg_volume01/volume-7a974afe-a71a-479f-b63d-b208daae1707
LV Name volume-7a974afe-a71a-479f-b63d-b208daae1707
VG Name vg_volume01
LV UUID Pp91xd-Kj0M-J5eI-tUXY-0iMH-MdJ6-PryIq7
LV Write Access read/write
LV Creation host, time cinder0, 2015-07-27 16:24:33 +0800
LV Status available
# open 0
LV Size 10.00 GiB
Current LE 320
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 8192
Block device 253:0
计算节点上附加磁盘到虚拟机
[root@controller0 ~(keystone)]# nova list
+----------------+----------+---------+------------+-------------+-----------------------+
| ID | Name | Status | Task State | Power State | Networks |
+----------------+----------+---------+------------+-------------+-----------------------+
| 16971b4c-c901- | CentOS_7 | SHUTOFF | - | Shutdown | int_net=192.168.100.4 |
+----------------+----------+---------+------------+-------------+-----------------------+
[root@controller0 ~(keystone)]# nova volume-attach CentOS_7 7a974afe-a71a-479f-b63d-b208daae1707 auto
+----------+--------------------------------------+
| Property | Value |
+----------+--------------------------------------+
| device | /dev/vdb |
| id | 7a974afe-a71a-479f-b63d-b208daae1707 |
| serverId | 16971b4c-c901-4e95-8334-b2ff36b99633 |
| volumeId | 7a974afe-a71a-479f-b63d-b208daae1707 |
+----------+--------------------------------------+
# the status of attached disk turns "in-use" like follows
[root@controller0 ~(keystone)]# cinder list
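磁盘状态变为in-use后,即可在虚拟机内部将其格式化并挂载使用。下面是一个示意,设备名以volume-attach输出中的device为准(此处为/dev/vdb):
# 以下命令在虚拟机内部执行
# lsblk
# mkfs.xfs /dev/vdb
# mkdir -p /data
# mount /dev/vdb /data
# df -h /data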
配置NFS、Glusterfs混合后端¶
此处使用NFS、Glusterfs混合后端,也可根据实际需求添加LVM后端或者使用三者之一。
+------------------+ +------------------+
192.168.77.60| [ cinder0 ] | 192.168.77.100| |
+------------------+ +-----+ Cinder-Volume | +----------+ GlusterFS #1 |
| [ controller0 ] | | eth0| | | eth0| gfs01.lofyer.org |
| Keystone |192.168.77.50 | +------------------+ | +------------------+
| Glance |----------------+------------------------------+
| Nova API |eth0 | +------------------+ | +------------------+
| Cinder API | | eth0| [ compute0 ] | | eth0| |
+------------------+ +-----+ Nova Compute | +----------------+ GlusterFS #2 |
192.168.77.51| | | 192.168.77.101| gfs02.lofyer.org |
+------------------+ | +------------------+
|
| +------------------+
| eth0| |
+----------+ NFS |
192.168.77.110| nfs.lofyer.org |
+------------------+
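在配置cinder之前,NFS与GlusterFS服务端需要先行就绪。下面给出一个最小化的服务端准备示意,主机名与导出路径沿用上图,brick路径等细节为假设,请按实际环境调整:
# NFS服务器(nfs.lofyer.org)上导出/storage
[root@nfs ~]# yum install -y nfs-utils
[root@nfs ~]# mkdir -p /storage
[root@nfs ~]# echo "/storage 192.168.77.0/24(rw,no_root_squash)" >> /etc/exports
[root@nfs ~]# systemctl restart nfs-server
# GlusterFS节点(gfs01、gfs02,需已安装glusterfs-server并启动glusterd)上创建复制卷vol_replica
[root@gfs01 ~]# gluster peer probe gfs02.lofyer.org
[root@gfs01 ~]# gluster volume create vol_replica replica 2 gfs01.lofyer.org:/glusterfs/brick gfs02.lofyer.org:/glusterfs/brick force
[root@gfs01 ~]# gluster volume start vol_replica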
存储节点配置
# 安装软件包
[root@cinder0 ~]# yum -y install nfs-utils glusterfs glusterfs-fuse
# 修改cinder配置
[root@cinder0 ~]# vi /etc/cinder/cinder.conf
# 于DEFAULT段中添加
enabled_backends=nfs,glusterfs
# 于末尾添加
[nfs]
volume_driver = cinder.volume.drivers.nfs.NfsDriver
volume_backend_name = NFS
nfs_shares_config = /etc/cinder/nfs_shares
nfs_mount_point_base = $state_path/nfs
[glusterfs]
volume_driver = cinder.volume.drivers.glusterfs.GlusterfsDriver
volume_backend_name = GlusterFS
glusterfs_shares_config = /etc/cinder/glusterfs_shares
glusterfs_mount_point_base = $state_path/glusterfs
# 修改存储信息
[root@cinder0 ~]# vi /etc/cinder/nfs_shares
# 指定NFS存储路径
nfs.lofyer.org:/storage
[root@cinder0 ~]# vi /etc/cinder/glusterfs_shares
# 指定Glusterfs存储路径
gfs01.lofyer.org:/vol_replica
[root@cinder0 ~]# chmod 640 /etc/cinder/nfs_shares
[root@cinder0 ~]# chgrp cinder /etc/cinder/nfs_shares
[root@cinder0 ~]# chmod 640 /etc/cinder/glusterfs_shares
[root@cinder0 ~]# chgrp cinder /etc/cinder/glusterfs_shares
[root@cinder0 ~]# systemctl restart openstack-cinder-volume
计算节点配置,所有计算节点都要配置
# 安装软件包
[root@controller0 ~]# yum --enablerepo=epel -y install nfs-utils glusterfs glusterfs-fuse
[root@controller0 ~]# vi /etc/nova/nova.conf
# 于DEFAULT段中添加
osapi_volume_listen=0.0.0.0
volume_api_class=nova.volume.cinder.API
# 重启计算服务
[root@controller0 ~]# systemctl restart openstack-nova-compute
添加卷种类
# 在任意装有cinder客户端并具备认证信息的节点上均可执行cinder命令创建磁盘
[root@controller0 ~]# echo "export OS_VOLUME_API_VERSION=2" >> ~/admin_keystone
[root@controller0 ~]# source admin_keystone
[root@controller0 ~(keystone)]# cinder type-create nfs
+--------------------------------------+------+
| ID | Name |
+--------------------------------------+------+
| 7ac3a255-cf70-498d-97d8-2a7fcdd84d2c | nfs |
+--------------------------------------+------+
[root@controller0 ~(keystone)]# cinder type-create glusterfs
+--------------------------------------+-----------+
| ID | Name |
+--------------------------------------+-----------+
| e2608bee-cc52-48e8-ba72-b94124f36a57 | glusterfs |
+--------------------------------------+-----------+
[root@controller0 ~(keystone)]# cinder type-list
+--------------------------------------+-----------+
| ID | Name |
+--------------------------------------+-----------+
| 7ac3a255-cf70-498d-97d8-2a7fcdd84d2c | nfs |
| e2608bee-cc52-48e8-ba72-b94124f36a57 | glusterfs |
+--------------------------------------+-----------+
为卷类型指定后端名称(volume_backend_name),与cinder.conf中定义的后端对应
[root@controller0 ~(keystone)]# cinder type-key nfs set volume_backend_name=NFS
[root@controller0 ~(keystone)]# cinder type-key glusterfs set volume_backend_name=GlusterFS
[root@controller0 ~(keystone)]# cinder extra-specs-list
+--------------------------------------+-----------+----------------------------------------+
| ID | Name | extra_specs |
+--------------------------------------+-----------+----------------------------------------+
| 7ac3a255-cf70-498d-97d8-2a7fcdd84d2c | nfs | {u'volume_backend_name': u'NFS'} |
| e2608bee-cc52-48e8-ba72-b94124f36a57 | glusterfs | {u'volume_backend_name': u'GlusterFS'} |
+--------------------------------------+-----------+----------------------------------------+
添加磁盘
[root@controller0 ~(keystone)]# cinder create --display_name disk_nfs --volume-type nfs 10
+---------------------------------------+--------------------------------------+
| Property | Value |
+---------------------------------------+--------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| consistencygroup_id | None |
| created_at | 2015-06-20T18:23:23.000000 |
| description | None |
| encrypted | False |
| id | 1e92b9ca-20d7-4f63-881e-dea3f8b6b523 |
| metadata | {} |
| multiattach | False |
| name | disk_nfs |
| os-vol-host-attr:host | None |
| os-vol-mig-status-attr:migstat | None |
| os-vol-mig-status-attr:name_id | None |
| os-vol-tenant-attr:tenant_id | 98ea1b896d3a48438922c0dfa9f6bc52 |
| os-volume-replication:driver_data | None |
| os-volume-replication:extended_status | None |
| replication_status | disabled |
| size | 10 |
| snapshot_id | None |
| source_volid | None |
| status | creating |
| user_id | 704a7f5cf84a479796e10f47c30bb629 |
| volume_type | nfs |
+---------------------------------------+--------------------------------------+
[root@controller0 ~(keystone)]# cinder create --display_name disk_glusterfs --volume-type glusterfs 10
+---------------------------------------+--------------------------------------+
| Property | Value |
+---------------------------------------+--------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| consistencygroup_id | None |
| created_at | 2015-06-20T18:23:49.000000 |
| description | None |
| encrypted | False |
| id | d8dbaed2-e857-4162-baab-0178fbef4593 |
| metadata | {} |
| multiattach | False |
| name | disk_glusterfs |
| os-vol-host-attr:host | None |
| os-vol-mig-status-attr:migstat | None |
| os-vol-mig-status-attr:name_id | None |
| os-vol-tenant-attr:tenant_id | 98ea1b896d3a48438922c0dfa9f6bc52 |
| os-volume-replication:driver_data | None |
| os-volume-replication:extended_status | None |
| replication_status | disabled |
| size | 10 |
| snapshot_id | None |
| source_volid | None |
| status | creating |
| user_id | 704a7f5cf84a479796e10f47c30bb629 |
| volume_type | glusterfs |
+---------------------------------------+--------------------------------------+
[root@controller0 ~(keystone)]# cinder list
+--------------------------+-----------+----------------+------+-------------+----------+-------------+
| ID | Status | Name | Size | Volume Type | Bootable | Attached to |
+--------------------------+-----------+----------------+------+-------------+----------+-------------+
| 1e92b9ca-20d7-4f63-881e- | available | disk_nfs | 10 | nfs | false | |
| d8dbaed2-e857-4162-baab- | available | disk_glusterfs | 10 | glusterfs | false | |
+--------------------------+-----------+----------------+------+-------------+----------+-------------+
附加磁盘到虚拟机
[root@controller0 ~(keystone)]# nova list
+----------------+----------+---------+------------+-------------+-----------------------------------+
| ID | Name | Status | Task State | Power State | Networks |
+----------------+----------+---------+------------+-------------+-----------------------------------+
| 2c7a1025-30d6- | CentOS_7 | SHUTOFF | - | Shutdown | int_net=192.168.100.3, 10.0.0.201 |
+----------------+----------+---------+------------+-------------+-----------------------------------+
[root@controller0 ~(keystone)]# nova volume-attach CentOS_7 1e92b9ca-20d7-4f63-881e-dea3f8b6b523 auto
+----------+--------------------------------------+
| Property | Value |
+----------+--------------------------------------+
| device | /dev/vdc |
| id | 1e92b9ca-20d7-4f63-881e-dea3f8b6b523 |
| serverId | 2c7a1025-30d6-446a-a4ff-309347b64eca |
| volumeId | 1e92b9ca-20d7-4f63-881e-dea3f8b6b523 |
+----------+--------------------------------------+
[root@controller0 ~(keystone)]# nova volume-attach CentOS_7 d8dbaed2-e857-4162-baab-0178fbef4593 auto
+----------+--------------------------------------+
| Property | Value |
+----------+--------------------------------------+
| device | /dev/vdd |
| id | d8dbaed2-e857-4162-baab-0178fbef4593 |
| serverId | 2c7a1025-30d6-446a-a4ff-309347b64eca |
| volumeId | d8dbaed2-e857-4162-baab-0178fbef4593 |
+----------+--------------------------------------+
# the status of attached disk turns "in-use" like follows
[root@controller0 ~(keystone)]# cinder list
+--------------------------+--------+----------------+------+-------------+----------+-----------------------------+
| ID | Status | Name | Size | Volume Type | Bootable | Attached to |
+--------------------------+--------+----------------+------+-------------+----------+-----------------------------+
| 1e92b9ca-20d7-4f63-881e- | in-use | disk_nfs | 10 | nfs | false | 2c7a1025-30d6-446a-a4ff-309 |
| d8dbaed2-e857-4162-baab- | in-use | disk_glusterfs | 10 | glusterfs | false | 2c7a1025-30d6-446a-a4ff-309 |
+--------------------------+--------+----------------+------+-------------+----------+-----------------------------+
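如需确认卷确实落在指定后端,可在存储节点上查看挂载情况:cinder会把NFS/GlusterFS共享挂载到cinder.conf中$state_path下的对应目录(目录名为共享路径的哈希值),卷即是其中的文件:
[root@cinder0 ~]# mount | grep -E 'nfs|glusterfs'
[root@cinder0 ~]# ls /var/lib/cinder/nfs/*/ /var/lib/cinder/glusterfs/*/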
配置Swift¶
在此我们有三个存储节点作为swift backend,swift0作为swift访问入口。
|
+------------------+ | +-----------------+
| [ controller0 ] |192.168.77.50 | 192.168.77.70| [ swift0 ] |
| Keystone |---------------+---------------| Proxy |
+------------------+ | +-----------------+
|
+---------------------------+--------------------------+
| | |
|192.168.77.71 |192.168.77.72 |192.168.77.73
+-------+----------+ +--------+---------+ +--------+---------+
| [ swift-stor0 ] | | [ swift-stor1 ] | | [ swift-stor2 ] |
| |-------| |-------| |
+------------------+ +------------------+ +------------------+
配置控制节点
# 添加swift用户
[root@controller0 ~(keystone)]# openstack user create --project service --password servicepassword swift
+------------+----------------------------------+
| Field | Value |
+------------+----------------------------------+
| email | None |
| enabled | True |
| id | 9e19bc053b0f44bdbabf751b279c9afd |
| name | swift |
| project_id | 9acf83020ae34047b6f1e320c352ae44 |
| username | swift |
+------------+----------------------------------+
# 赋予swift用户admin角色
[root@controller0 ~(keystone)]# openstack role add --project service --user swift admin
+-------+----------------------------------+
| Field | Value |
+-------+----------------------------------+
| id | 95c4b8fb8d97424eb52a4e8a00a357e7 |
| name | admin |
+-------+----------------------------------+
# 创建swift服务
[root@controller0 ~(keystone)]# openstack service create --name swift --description "OpenStack Object Storage" object-store
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Object Storage |
| enabled | True |
| id | c8d07b32376a4e2780d8ec9b2b836e41 |
| name | swift |
| type | object-store |
+-------------+----------------------------------+
[root@controller0 ~(keystone)]# export swift_proxy=192.168.77.70
# 添加endpoint
[root@controller0 ~(keystone)]# openstack endpoint create \
--publicurl http://$swift_proxy:8080/v1/AUTH_%\(tenant_id\)s \
--internalurl http://$swift_proxy:8080/v1/AUTH_%\(tenant_id\)s \
--adminurl http://$swift_proxy:8080 \
--region RegionOne \
object-store
+--------------+-------------------------------------------------+
| Field | Value |
+--------------+-------------------------------------------------+
| adminurl | http://192.168.77.70:8080 |
| id | d7e1f6a5f7064f3690181f7bd8922ac4 |
| internalurl | http://192.168.77.70:8080/v1/AUTH_%(tenant_id)s |
| publicurl | http://192.168.77.70:8080/v1/AUTH_%(tenant_id)s |
| region | RegionOne |
| service_id | c8d07b32376a4e2780d8ec9b2b836e41 |
| service_name | swift |
| service_type | object-store |
+--------------+-------------------------------------------------+
配置swift代理节点
[root@swift0 ~]# vi /etc/swift/proxy-server.conf
# 第53行
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
# 注释如下部分
#admin_tenant_name = %SERVICE_TENANT_NAME%
#admin_user = %SERVICE_USER%
#admin_password = %SERVICE_PASSWORD%
#
#identity_uri = http://localhost:35357/
#auth_uri = http://localhost:5000/
#
signing_dir = /tmp/keystone-signing-swift
# 添加认证
auth_uri = http://192.168.77.50:5000
auth_url = http://192.168.77.50:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = swift
password = servicepassword
delay_auth_decision = true
[root@swift0 ~]# vi /etc/swift/swift.conf
# 更改为任意字符串,集群内所有节点必须保持一致
[swift-hash]
swift_hash_path_suffix = swift_shared_path
# 创建swift哈希环
# swift-ring-builder builder-file create <part_power> <replicas> <min_part_hours>
# Creates <builder_file> with 2^<part_power> partitions and <replicas>.
# <min_part_hours> is number of hours to restrict moving a partition more
# than once
[root@swift0 ~]# swift-ring-builder /etc/swift/account.builder create 12 3 1
[root@swift0 ~]# swift-ring-builder /etc/swift/container.builder create 12 3 1
[root@swift0 ~]# swift-ring-builder /etc/swift/object.builder create 12 3 1
[root@swift0 ~]# swift-ring-builder /etc/swift/account.builder add r0z0-192.168.77.71:6002/device0 100
Device d0r0z0-192.168.77.71:6002R192.168.77.71:6002/device0_"" with 100.0 weight got id 0
[root@swift0 ~]# swift-ring-builder /etc/swift/account.builder add r0z0-192.168.77.72:6002/device0 100
Device d1r0z0-192.168.77.72:6002R192.168.77.72:6002/device0_"" with 100.0 weight got id 1
[root@swift0 ~]# swift-ring-builder /etc/swift/account.builder add r0z0-192.168.77.73:6002/device0 100
Device d2r0z0-192.168.77.73:6002R192.168.77.73:6002/device0_"" with 100.0 weight got id 2
[root@swift0 ~]# swift-ring-builder /etc/swift/container.builder add r0z0-192.168.77.71:6001/device0 100
Device d0r0z0-192.168.77.71:6001R192.168.77.71:6001/device0_"" with 100.0 weight got id 0
[root@swift0 ~]# swift-ring-builder /etc/swift/container.builder add r0z0-192.168.77.72:6001/device0 100
Device d1r0z0-192.168.77.72:6001R192.168.77.72:6001/device0_"" with 100.0 weight got id 1
[root@swift0 ~]# swift-ring-builder /etc/swift/container.builder add r0z0-192.168.77.73:6001/device0 100
Device d2r0z0-192.168.77.73:6001R192.168.77.73:6001/device0_"" with 100.0 weight got id 2
[root@swift0 ~]# swift-ring-builder /etc/swift/object.builder add r0z0-192.168.77.71:6000/device0 100
Device d0r0z0-192.168.77.71:6000R192.168.77.71:6000/device0_"" with 100.0 weight got id 0
[root@swift0 ~]# swift-ring-builder /etc/swift/object.builder add r0z0-192.168.77.72:6000/device0 100
Device d1r0z0-192.168.77.72:6000R192.168.77.72:6000/device0_"" with 100.0 weight got id 1
[root@swift0 ~]# swift-ring-builder /etc/swift/object.builder add r0z0-192.168.77.73:6000/device0 100
Device d2r0z0-192.168.77.73:6000R192.168.77.73:6000/device0_"" with 100.0 weight got id 2
[root@swift0 ~]# swift-ring-builder /etc/swift/account.builder rebalance
Reassigned 4096 (100.00%) partitions. Balance is now 0.00. Dispersion is now 0.00
[root@swift0 ~]# swift-ring-builder /etc/swift/container.builder rebalance
Reassigned 4096 (100.00%) partitions. Balance is now 0.00. Dispersion is now 0.00
[root@swift0 ~]# swift-ring-builder /etc/swift/object.builder rebalance
Reassigned 4096 (100.00%) partitions. Balance is now 0.00. Dispersion is now 0.00
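rebalance完成后,可直接以builder文件为参数运行swift-ring-builder,检查设备与分区的分布情况(输出略):
[root@swift0 ~]# swift-ring-builder /etc/swift/account.builder
[root@swift0 ~]# swift-ring-builder /etc/swift/container.builder
[root@swift0 ~]# swift-ring-builder /etc/swift/object.builder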
[root@swift0 ~]# chown swift. /etc/swift/*.gz
[root@swift0 ~]# systemctl start memcached openstack-swift-proxy
[root@swift0 ~]# systemctl enable memcached openstack-swift-proxy
ln -s '/usr/lib/systemd/system/memcached.service' '/etc/systemd/system/multi-user.target.wants/memcached.service'
ln -s '/usr/lib/systemd/system/openstack-swift-proxy.service' '/etc/systemd/system/multi-user.target.wants/openstack-swift-proxy.service'
配置存储节点,每个节点都要配置,注意selinux策略,否则会操作失败。
# 禁用selinux
[root@swift-stor0 ~]# setenforce permissive
[root@swift-stor0 ~]# vi /etc/selinux/config
SELINUX=permissive
# 安装所需包
[root@swift-stor0 ~]# yum install -y openstack-swift-account openstack-swift-container openstack-swift-object xfsprogs rsync openssh-clients
# 格式化存储磁盘为xfs
[root@swift-stor0 ~]# mkfs.xfs -i size=1024 -s size=4096 /dev/sdb -f
meta-data=/dev/sdb isize=1024 agcount=4, agsize=1310720 blks
= sectsz=4096 attr=2, projid32bit=1
= crc=0 finobt=0
data = bsize=4096 blocks=5242880, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=0
log =internal log bsize=4096 blocks=2560, version=2
= sectsz=4096 sunit=1 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
# 创建目录并添加fstab
[root@swift-stor0 ~]# mkdir -p /srv/node/device0
[root@swift-stor0 ~]# mount -o noatime,nodiratime,nobarrier /dev/sdb /srv/node/device0
[root@swift-stor0 ~]# chown -R swift. /srv/node
[root@swift-stor0 ~]# vi /etc/fstab
# 在末尾添加如下一行,保证重启后自动挂载
/dev/sdb /srv/node/device0 xfs noatime,nodiratime,nobarrier 0 0
# 拷贝hash环文件
[root@swift-stor0 ~]# scp root@192.168.77.70:/etc/swift/*.gz /etc/swift/
root@192.168.77.70's password:
account.ring.gz 100% 3862 3.8KB/s 00:00
container.ring.gz 100% 3861 3.8KB/s 00:00
object.ring.gz 100% 3887 3.8KB/s 00:00
[root@swift-stor0 ~]# chown swift. /etc/swift/*.gz
# 修改后缀为proxy设置值
[root@swift-stor0 ~]# vi /etc/swift/swift.conf
[swift-hash]
swift_hash_path_suffix = swift_shared_path
# 修改account、container、object服务端口
[root@swift-stor0 ~]# vi /etc/swift/account-server.conf
bind_ip = 192.168.77.71
bind_port = 6002
[root@swift-stor0 ~]# vi /etc/swift/container-server.conf
bind_ip = 192.168.77.71
bind_port = 6001
[root@swift-stor0 ~]# vi /etc/swift/object-server.conf
bind_ip = 192.168.77.71
bind_port = 6000
# 配置rsync
[root@swift-stor0 ~]# vi /etc/rsyncd.conf
# 在末尾添加
# add to the end
pid file = /var/run/rsyncd.pid
log file = /var/log/rsyncd.log
uid = swift
gid = swift
address = 192.168.77.71
[account]
path = /srv/node
read only = false
write only = no
list = yes
incoming chmod = 0644
outgoing chmod = 0644
max connections = 25
lock file = /var/lock/account.lock
[container]
path = /srv/node
read only = false
write only = no
list = yes
incoming chmod = 0644
outgoing chmod = 0644
max connections = 25
lock file = /var/lock/container.lock
[object]
path = /srv/node
read only = false
write only = no
list = yes
incoming chmod = 0644
outgoing chmod = 0644
max connections = 25
lock file = /var/lock/object.lock
[swift_server]
path = /etc/swift
read only = true
write only = no
list = yes
incoming chmod = 0644
outgoing chmod = 0644
max connections = 5
lock file = /var/lock/swift_server.lock
# 开启服务
[root@swift-stor0 ~]# systemctl start rsyncd
[root@swift-stor0 ~]# systemctl enable rsyncd
ln -s '/usr/lib/systemd/system/rsyncd.service' '/etc/systemd/system/multi-user.target.wants/rsyncd.service'
[root@swift-stor0 ~]# for ringtype in account container object; do
systemctl start openstack-swift-$ringtype
systemctl enable openstack-swift-$ringtype
for service in replicator updater auditor; do
if [ $ringtype != 'account' ] || [ $service != 'updater' ]; then
systemctl start openstack-swift-$ringtype-$service
systemctl enable openstack-swift-$ringtype-$service
fi
done
done
ln -s '/usr/lib/systemd/system/openstack-swift-account.service' '/etc/systemd/system/multi-user.target.wants/openstack-swift-account.service'
ln -s '/usr/lib/systemd/system/openstack-swift-account-replicator.service' '/etc/systemd/system/multi-user.target.wants/openstack-swift-account-replicator.service'
ln -s '/usr/lib/systemd/system/openstack-swift-account-auditor.service' '/etc/systemd/system/multi-user.target.wants/openstack-swift-account-auditor.service'
ln -s '/usr/lib/systemd/system/openstack-swift-container.service' '/etc/systemd/system/multi-user.target.wants/openstack-swift-container.service'
ln -s '/usr/lib/systemd/system/openstack-swift-container-replicator.service' '/etc/systemd/system/multi-user.target.wants/openstack-swift-container-replicator.service'
ln -s '/usr/lib/systemd/system/openstack-swift-container-updater.service' '/etc/systemd/system/multi-user.target.wants/openstack-swift-container-updater.service'
ln -s '/usr/lib/systemd/system/openstack-swift-container-auditor.service' '/etc/systemd/system/multi-user.target.wants/openstack-swift-container-auditor.service'
ln -s '/usr/lib/systemd/system/openstack-swift-object.service' '/etc/systemd/system/multi-user.target.wants/openstack-swift-object.service'
ln -s '/usr/lib/systemd/system/openstack-swift-object-replicator.service' '/etc/systemd/system/multi-user.target.wants/openstack-swift-object-replicator.service'
ln -s '/usr/lib/systemd/system/openstack-swift-object-updater.service' '/etc/systemd/system/multi-user.target.wants/openstack-swift-object-updater.service'
ln -s '/usr/lib/systemd/system/openstack-swift-object-auditor.service' '/etc/systemd/system/multi-user.target.wants/openstack-swift-object-auditor.service'
添加用户
# 添加工程
[root@controller0 ~(keystone)]# openstack project create --description "Swift Service Project" swiftservice
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Swift Service Project |
| enabled | True |
| id | 65e95f038a654db3a2ee0ae93daaf2b3 |
| name | swiftservice |
+-------------+----------------------------------+
# 创建客户端角色
[root@controller0 ~(keystone)]# openstack role create swiftoperator
+-------+----------------------------------+
| Field | Value |
+-------+----------------------------------+
| id | dbcb21e865ae435da39507bea32fe312 |
| name | swiftoperator |
+-------+----------------------------------+
# 添加工程用户
[root@controller0 ~(keystone)]# openstack user create --project swiftservice --password userpassword user01
+------------+----------------------------------+
| Field | Value |
+------------+----------------------------------+
| email | None |
| enabled | True |
| id | c1d2e56e9d72447096221a4542e17e58 |
| name | user01 |
| project_id | 65e95f038a654db3a2ee0ae93daaf2b3 |
| username | user01 |
+------------+----------------------------------+
# 赋予用户客户端角色
[root@controller0 ~(keystone)]# openstack role add --project swiftservice --user user01 swiftoperator
+-------+----------------------------------+
| Field | Value |
+-------+----------------------------------+
| id | dbcb21e865ae435da39507bea32fe312 |
| name | swiftoperator |
+-------+----------------------------------+
客户端操作
# 安装所需包
[root@controller0 ~(keystone)]# yum install -y python-keystoneclient python-swiftclient
# 添加keystone_rc文件
[root@controller0 ~(swift)]# cat swift_keystone
export OS_PROJECT_DOMAIN_ID=default
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_NAME=swiftservice
export OS_TENANT_NAME=swiftservice
export OS_USERNAME=user01
export OS_PASSWORD=userpassword
export OS_AUTH_URL=http://192.168.77.50:35357/v2.0
export PS1='[\u@\h \W(swift)]\$ '
[root@controller0 ~(keystone)]# source swift_keystone
[root@controller0 ~(swift)]# swift stat
Account: AUTH_65e95f038a654db3a2ee0ae93daaf2b3
Containers: 0
Objects: 0
Bytes: 0
X-Put-Timestamp: 1438064191.81736
Connection: keep-alive
X-Timestamp: 1438064191.81736
X-Trans-Id: txe9426af85d424b7185086-0055b71e3e
Content-Type: text/plain; charset=utf-8
# 创建测试容器
[root@controller0 ~(swift)]# swift post test_container
[root@controller0 ~(swift)]# swift list
test_container
# 上传文件
[root@controller0 ~(swift)]# swift upload test_container admin_keystone
[root@controller0 ~(swift)]# swift list test_container
admin_keystone
# 下载文件
[root@controller0 ~(swift)]# swift download test_container admin_keystone -o admin_keystone.txt
配置Heat(可选)¶
Heat服务即用于提供orchestration,在此我们使用neutron0当作heat节点。
|
+------------------+ | +-----------------------+
| [ controller0 ] | | | [ neutron0 ] |
| Keystone |192.168.77.50 | 192.168.77.30| DHCP,L3,L2 Agent |
| Glance |--------------+--------------| Metadata Agent |
| Nova API |eth0 | th0| Heat API,API-CFN |
| Neutron Server | | | Heat Engine |
+------------------+ | +-----------------------+
eth0|192.168.77.51
+--------------------+
| [ compute0 ] |
| Nova Compute |
| L2 Agent |
+--------------------+
控制节点配置
# 安装所需包
[root@controller0 ~]# yum install -y openstack-heat-common
# 添加用户,创建endpoint
[root@controller0 ~(keystone)]# openstack user create --project service --password servicepassword heat
+------------+----------------------------------+
| Field | Value |
+------------+----------------------------------+
| email | None |
| enabled | True |
| id | d18e4d820a5d4384a676bb0064448c09 |
| name | heat |
| project_id | 9acf83020ae34047b6f1e320c352ae44 |
| username | heat |
+------------+----------------------------------+
[root@controller0 ~(keystone)]# openstack role add --project service --user heat admin
+-------+----------------------------------+
| Field | Value |
+-------+----------------------------------+
| id | 95c4b8fb8d97424eb52a4e8a00a357e7 |
| name | admin |
+-------+----------------------------------+
[root@controller0 ~(keystone)]# openstack role create heat_stack_owner
+-------+----------------------------------+
| Field | Value |
+-------+----------------------------------+
| id | 91ef76ae6d6c4fc0b908cf7416055da0 |
| name | heat_stack_owner |
+-------+----------------------------------+
[root@controller0 ~(keystone)]# openstack role create heat_stack_user
+-------+----------------------------------+
| Field | Value |
+-------+----------------------------------+
| id | 0bf5c075fcb7448d844699131c8008fb |
| name | heat_stack_user |
+-------+----------------------------------+
[root@controller0 ~(keystone)]# openstack role add --project admin --user admin heat_stack_owner
+-------+----------------------------------+
| Field | Value |
+-------+----------------------------------+
| id | 91ef76ae6d6c4fc0b908cf7416055da0 |
| name | heat_stack_owner |
+-------+----------------------------------+
[root@controller0 ~(keystone)]# openstack service create --name heat --description "Openstack Orchestration" orchestration
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Openstack Orchestration |
| enabled | True |
| id | fc32f278bf3243d9b6b6c7a5cbf13135 |
| name | heat |
| type | orchestration |
+-------------+----------------------------------+
[root@controller0 ~(keystone)]# openstack service create --name heat-cfn --description "Openstack Orchestration" cloudformation
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Openstack Orchestration |
| enabled | True |
| id | f3334830bc254e6491b5f7de8f7d9a58 |
| name | heat-cfn |
| type | cloudformation |
+-------------+----------------------------------+
[root@controller0 ~(keystone)]# heat_api=192.168.77.30
[root@controller0 ~(keystone)]# openstack endpoint create \
--publicurl http://$heat_api:8004/v1/%\(tenant_id\)s \
--internalurl http://$heat_api:8004/v1/%\(tenant_id\)s \
--adminurl http://$heat_api:8004/v1/%\(tenant_id\)s \
--region RegionOne \
orchestration
+--------------+--------------------------------------------+
| Field | Value |
+--------------+--------------------------------------------+
| adminurl | http://192.168.77.30:8004/v1/%(tenant_id)s |
| id | 1a91dbfe0913466db38595c1b4f0bd59 |
| internalurl | http://192.168.77.30:8004/v1/%(tenant_id)s |
| publicurl | http://192.168.77.30:8004/v1/%(tenant_id)s |
| region | RegionOne |
| service_id | fc32f278bf3243d9b6b6c7a5cbf13135 |
| service_name | heat |
| service_type | orchestration |
+--------------+--------------------------------------------+
[root@controller0 ~(keystone)]# openstack endpoint create \
--publicurl http://$heat_api:8000/v1 \
--internalurl http://$heat_api:8000/v1 \
--adminurl http://$heat_api:8000/v1 \
--region RegionOne \
cloudformation
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| adminurl | http://192.168.77.30:8000/v1 |
| id | e40570fd8201420fbea42abf3231d8eb |
| internalurl | http://192.168.77.30:8000/v1 |
| publicurl | http://192.168.77.30:8000/v1 |
| region | RegionOne |
| service_id | f3334830bc254e6491b5f7de8f7d9a58 |
| service_name | heat-cfn |
| service_type | cloudformation |
+--------------+----------------------------------+
# 注意末尾的信息,同时手动更新/etc/heat/heat.conf文件
[root@controller0 ~(keystone)]# heat-keystone-setup-domain \
--stack-user-domain-name heat_user_domain \
--stack-domain-admin heat_domain_admin \
--stack-domain-admin-password domainpassword
Please update your heat.conf with the following in [DEFAULT]
stack_user_domain_id=55654da575f048869a9128db12d26f27
stack_domain_admin=heat_domain_admin
stack_domain_admin_password=domainpassword
[root@controller0 ~(keystone)]# vim /etc/heat/heat.conf
# 创建数据库
[root@controller0 ~]# mysql -uroot -p
Enter password:
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 11
Server version: 5.5.40-MariaDB-wsrep MariaDB Server, wsrep_25.11.r4026
Copyright (c) 2000, 2014, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]> create database heat;
Query OK, 1 row affected (0.00 sec)
MariaDB [(none)]> grant all privileges on heat.* to heat@'localhost' identified by 'password';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> grant all privileges on heat.* to heat@'%' identified by 'password';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> flush privileges;
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> exit
Bye
heat节点配置
[root@neutron0 ~]# mv /etc/heat/heat.conf /etc/heat/heat.conf.org
[root@neutron0 ~]# vi /etc/heat/heat.conf
[DEFAULT]
deferred_auth_method = trusts
trusts_delegated_roles = heat_stack_owner
# Heat installed server
heat_metadata_server_url = http://192.168.77.30:8000
heat_waitcondition_server_url = http://192.168.77.30:8000/v1/waitcondition
heat_watch_server_url = http://192.168.77.30:8003
heat_stack_user_role = heat_stack_user
stack_user_domain_id=55654da575f048869a9128db12d26f27
stack_domain_admin=heat_domain_admin
stack_domain_admin_password=domainpassword
rpc_backend = rabbit
[database]
# MariaDB connection info
connection = mysql://heat:password@192.168.77.50/heat
# RabbitMQ connection info
[oslo_messaging_rabbit]
rabbit_host = 192.168.77.50
rabbit_port = 5672
rabbit_userid = guest
rabbit_password = password
[ec2authtoken]
# Keystone server
auth_uri = http://192.168.77.50:35357/v2.0
[heat_api]
bind_host = 0.0.0.0
bind_port = 8004
[heat_api_cfn]
bind_host = 0.0.0.0
bind_port = 8000
[keystone_authtoken]
# Keystone auth info
auth_host = 192.168.77.50
auth_port = 35357
auth_protocol = http
auth_uri = http://192.168.77.50:35357/v2.0
# Heat admin user
admin_user = heat
# Heat admin user's password
admin_password = servicepassword
# Heat admin user's tenant
admin_tenant_name = service
[root@neutron0 ~]# chgrp heat /etc/heat/heat.conf
[root@neutron0 ~]# chmod 640 /etc/heat/heat.conf
[root@neutron0 ~]# heat-manage db_sync
[root@neutron0 ~]# systemctl start openstack-heat-api openstack-heat-api-cfn openstack-heat-engine
[root@neutron0 ~]# systemctl enable openstack-heat-api openstack-heat-api-cfn openstack-heat-engine
ln -s '/usr/lib/systemd/system/openstack-heat-api.service' '/etc/systemd/system/multi-user.target.wants/openstack-heat-api.service'
ln -s '/usr/lib/systemd/system/openstack-heat-api-cfn.service' '/etc/systemd/system/multi-user.target.wants/openstack-heat-api-cfn.service'
ln -s '/usr/lib/systemd/system/openstack-heat-engine.service' '/etc/systemd/system/multi-user.target.wants/openstack-heat-engine.service'
使用heat创建虚拟机
[root@controller0 ~]# vi sample-stack.yml
heat_template_version: 2014-10-16
description: Heat Sample Template
parameters:
  ImageID:
    type: string
    description: Image used to boot a server
  NetID:
    type: string
    description: Network ID for the server
resources:
  server1:
    type: OS::Nova::Server
    properties:
      name: "Heat_Deployed_Server"
      image: { get_param: ImageID }
      flavor: "m1.small"
      networks:
        - network: { get_param: NetID }
outputs:
  server1_private_ip:
    description: IP address of the server in the private network
    value: { get_attr: [ server1, first_address ] }
[root@controller0 ~]# source admin_keystone
[root@controller0 ~(keystone)]# glance image-list
+--------------------------------------+---------+-------------+------------------+-------------+--------+
| ID | Name | Disk Format | Container Format | Size | Status |
+--------------------------------------+---------+-------------+------------------+-------------+--------+
| fc9da1a8-0418-4ae8-8e67-22a7975f7a0b | centos6 | qcow2 | bare | 10739318784 | active |
+--------------------------------------+---------+-------------+------------------+-------------+--------+
[root@controller0 ~(keystone)]# Int_Net_ID=`neutron net-list | grep int_net | awk '{ print $2 }'`
[root@controller0 ~(keystone)]# heat stack-create -f sample-stack.yml -P "ImageID=centos6;NetID=$Int_Net_ID" Sample-Stack
+--------------------------------------+--------------+--------------------+----------------------+
| id | stack_name | stack_status | creation_time |
+--------------------------------------+--------------+--------------------+----------------------+
| 35b5c1e6-ec84-4bf8-9e77-1946bcf8d09b | Sample-Stack | CREATE_IN_PROGRESS | 2015-07-31T03:06:27Z |
+--------------------------------------+--------------+--------------------+----------------------+
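栈的创建是异步过程,提交后可用如下命令跟踪进度,待stack_status变为CREATE_COMPLETE即表示虚拟机部署完成(输出略):
[root@controller0 ~(keystone)]# heat stack-list
[root@controller0 ~(keystone)]# heat resource-list Sample-Stack
[root@controller0 ~(keystone)]# nova list | grep Heat_Deployed_Server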
配置Ceilometer(可选)¶
配置Sahara(可选)¶
配置Ironic(可选)¶
使用示例¶
基本操作¶
一些常用操作。
与owncloud集成¶
在swift服务中创建一个指定region的endpoint:
# source ./keystone_admin
# keystone endpoint-create --service swift --region swift_region \
  --publicurl "http://192.168.2.160:8080/v1/AUTH_7d11dd5a3f3544149e8b6a9799a2aa48/oc"
其中的publicurl可以从container的详细信息中查看。
使用owncloud的第三方app——external storage,如下进行填写
- 目录名称:显示在owncloud中的目录名称。
- user:project用户名。
- bucket:容器名。
- region:上一步指定的region。
- key:用户密码。
- tenant:project名。
- password:用户密码。
- service_name:服务名,即swift。
- url:使用keystone认证的url,即http://192.168.2.160:5000/v2.0 。
- timeout:超时时长,可不填。
oVirt使用Glance与Neutron服务¶
oVirt自3.3版本起,便可以添加外部组件,比如Foreman、OpenStack的网络或镜像服务。
在添加OpenStack相关组件之前,oVirt管理端需要配置OpenStack的KeyStone URL:
# engine-config --set KeystoneAuthUrl=http://192.168.2.160:35357/v2.0
# service ovirt-engine restart
添加OpenStack镜像服务Glance至oVirt¶
- 在OpenStack的控制台中,添加一个新镜像,比如my_test_image,格式为raw。

- 在oVirt左边栏,选择External Provider添加OpenStack Image服务。

Note
认证选项
用户名:glance
密码:存于RDO配置文件中,形如 CONFIG_GLANCE_KS_PW=bf83b75a635843b4
Tenant:services
- 然后可以在oVirt的存储域中看到刚刚添加的Glance服务。

Neutron¶

可参考 NeutronVirtualAppliance 以及 Overlay_Networks_with_Neutron_Integration ,另外提供 操作视频 。
- 配置oVirt。
# engine-config --set OnlyRequiredNetworksMandatoryForVdsSelection=true
# yum install vdsm-hook-openstacknet
# service ovirt-engine restart
- 如图添加Neutron组件。


Note
认证选项
用户名:neutron
密码:存于RDO配置文件中,形如 CONFIG_NEUTRON_KS_PW=a16c52e3ea634324
Tenant:services
agent 配置相同
OpenStack常见问题¶
Q:如何校验密码是否正确?
A:keystone --os-username=neutron --os-password=servicepassword --os-auth-url=http://localhost:35357/v2.0 token-get
Q:管理界面Swift不能删除目录?
A:使用命令 swift delete public_container aaa/ 进行删除。
Q: Neutron 网络快速开始?
A:参考https://www.ustack.com/blog/neutron_intro/
Q:OpenStack组件间的通信是靠什么?
A:AMQP,比如RabbitMQ、Apache的ActiveMQ,部署时候可以选择,如果对这种消息传输工具有兴趣可以参考 rabbitmq tutorial 以及 各种有用的插件(web监视等) 。
Q:Swift有什么好用的客户端么?
A:python-swiftclient 、 Gladient Cloud Desktop 、 Cloudberry 、 Cyberduck 、 WebDrive 、 S3 Browser 等。
附录二 公有云参考¶
虚拟机占用主机资源隔离¶
网络¶
- 使用 VLAN 、 openvswitch 进行隔离或限制。
- 使用tc(Traffic Control)命令进行速率限制,许多虚拟化平台底层用的都是它,示例见下。
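以tc的tbf队列为例,把虚拟机对应tap设备的发送速率限制在10Mbit/s左右(设备名tap0为示意,以实际虚拟网卡为准):
# tc qdisc add dev tap0 root tbf rate 10mbit burst 32kbit latency 400ms
# tc qdisc show dev tap0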
CPU¶
- 使用CPU Pinning可以将特定虚拟核固定在指定物理核(或超线程)上;若把每个硬件线程当作一个核使用,则每个虚拟机各占一个核,互不干涉,示例见下。
- 使用Linux的 Control Group 来限制虚拟机对宿主机CPU的用度。
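以libvirt环境为例,CPU绑定与cgroup限制的示意如下(域名instance-00000001与cpuset组路径均为假设):
# 将虚拟机的vCPU 0固定到物理核2上
# virsh vcpupin instance-00000001 0 2
# 或通过cpuset子系统限制虚拟机进程只能使用2、3两个核
# mkdir /sys/fs/cgroup/cpuset/limitcpu
# echo 2-3 > /sys/fs/cgroup/cpuset/limitcpu/cpuset.cpus
# echo 0 > /sys/fs/cgroup/cpuset/limitcpu/cpuset.mems
# echo $VM_PID > /sys/fs/cgroup/cpuset/limitcpu/tasks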
磁盘IO¶
更改内核磁盘IO调度方式:在noop、cfq、deadline中选择,参考 Linux Doc. 。
使用Linux的 Control Group 来限制虚拟机对宿主机设备的访问。
使能cgroup(假如尚未挂载的话)。
# mount -t tmpfs cgroup_root /sys/fs/cgroup
# mkdir /sys/fs/cgroup/blkio
# mount -t cgroup -o blkio none /sys/fs/cgroup/blkio
创建一个写入限速为1MB/s(1048576字节/秒)的IO限制组,echo内容中的X:Y为目标块设备的MAJOR:MINOR号,可由lsblk查得,此处以8:0为例。
# lsblk
# mkdir -p /sys/fs/cgroup/blkio/limit1M/
# echo "8:0 1048576" > /sys/fs/cgroup/blkio/limit1M/blkio.throttle.write_bps_device
将虚拟机进程附加到限制组。
# echo $VM_PID > /sys/fs/cgroup/blkio/limit1M/tasks
目前没有删除task功能,只能将它移到根组,或者是删除此组。
# echo $VM_PID > /sys/fs/cgroup/blkio/tasks
更改qemu drive cache模式,示例见下。
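cache模式(writethrough、writeback、none等)对IO性能与掉电时的数据安全性影响明显,下面是两种调整方式的示意(路径与参数为假设):
# 直接运行qemu时在-drive参数中指定cache模式
# qemu-kvm -drive file=/path/to/disk.qcow2,if=virtio,cache=none ...
# libvirt环境下则修改磁盘XML中driver元素的cache属性,然后重启虚拟机
# virsh edit <domain>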
设备¶
使用Linux的 Control Group 来限制虚拟机对宿主机设备的访问。
资源用度、计费¶
系统报告主要在metering(测量)、charging(计费)时使用。
在测量时可以使用平台提供的API、测量组件,综合利用nagios/Icinga,使用Django快速开发。
oVirt可以参考ovirt-reports,OpenStack参考其Ceilometer。
计费模块OpenStack可参考新浪云的 dough项目 。
DeltaCloud/Libcloud混合云¶
DeltaCloud支持:
arubacloud
azure
ec2
rackspace
terremark
openstack
fgcp
eucalyptus
digitalocean
sbc
mock
condor
rhevm
opennebula
vsphere
gogrid
rimuhosting
Libcloud支持:
Abiquo
PCextreme
Azure Virtual machines
Bluebox Blocks
Brightbox
CloudFrames
CloudSigma (API v2.0)
CloudStack
DigitalOcean
Dreamhost
Amazon EC2
Enomaly Elastic Computing Platform
ElasticHosts
Eucalyptus
Exoscale
Gandi
Google Compute Engine
GoGrid
HostVirtual
HP Public Cloud (Helion)
IBM SmartCloud Enterprise
Ikoula
Joyent
Kili Public Cloud
KTUCloud
Libvirt
Linode
NephoScale
Nimbus
Ninefold
OpenNebula (v3.8)
OpenStack
Opsource
Outscale INC
Outscale SAS
ProfitBricks
Rackspace Cloud
RimuHosting
ServerLove
skalicloud
SoftLayer
vCloud
VCL
vCloud
Voxel VoxCLOUD
vps.net
VMware vSphere
Vultr
DeltaCloud示例¶
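DeltaCloud以统一的REST API封装上述各家云平台。下面以自带的mock驱动为例作一个最小示意,端口等细节以deltacloudd --help为准:
# 以mock驱动启动deltacloud服务(默认监听3001端口)
# deltacloudd -i mock
# 通过统一的REST入口查看API信息
# curl http://localhost:3001/api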
Libcloud示例¶
SDN学习/mininet¶
SDN广泛用在内容加速、虚拟网络、监控等领域。
关于SDN有许多学习工具: mininet 、 POX 、 Network Heresy Blog 。
学习视频: Coursera SDN 。
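mininet安装后即可用一条命令拉起一个小拓扑做实验,例如创建单交换机三主机的拓扑并做全互ping测试(常见用法示意):
# mn --topo single,3 --test pingall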
附录三 PaaS/OpenShift¶
这是什么¶
设想,你有一个公网主机,上面配置了Apache/Nginx,同时上面你装的有Ruby、JBoss、Python等环境,平时你用它发布自己的应用。某一天,你的朋友说他也有一个Django应用要发布,问你要一个环境,你就在你的主机上配置了VirtualHost来解析xiaoli.myhost.com到/var/www/django/xiaoli这个目录下,然后他就请你去吃个烤羊腿了。后来,又一个朋友问你要这样的环境,但是这次是php,你就把/var/www/html/php/zhangsan这个目录给他了,这次请你吃麻辣烫。再后来,问你要环境的朋友越来越多,你就又搞了一个主机,同时配置了一个代理服务来解析不同的域名到某个主机的目录下。某天你在公交车上的时候就想了,我为什么不写一个应用让他们自己注册选择语言环境和域名呢?于是,你就开始了,花了两天时间终于搞定。用的人越来越多,你也吃得越来越胖……
这样一个应用,就是PaaS的原型。
当前的形势¶
随着国内社交APP微信的火爆,对Web服务的需求日益增长;同时,不同开发者需要的运行环境也各有差异。面对这种差异,更加灵活的平台应运而生:国内比如SinaAPP,国外比如Google App Engine、Redhat OpenShift、Amazon AWS。
OK,不多说了,下面开始试验OpenShift的服务器搭建及上线。
附录四 Appendix 4: Using Docker and Running Your Own Repo¶
Docker has become more and more popular (IaaS platforms have started to support it, and so have PaaS platforms), so it would feel wrong not to cover it.
It is a container-style virtualization technology based on LXC; in implementation it is closer to chroot. User-space state is well isolated, while networking is separated as well. The reason it has displaced raw LXC, I think, is its very rich repo ecosystem and its git-like workflow.
It also provides a Windows/Mac OS X client, boot2docker.
For a Chinese-language getting-started handbook see the Docker中文指南; there is also a web UI, shipyard.
The official repo is at https://registry.hub.docker.com/ .
Image Operations¶
Run a simple command
docker run ubuntu /bin/echo "Hello world!"
Run an interactive shell
docker run -t -i ubuntu /bin/bash
Run a Python web application in the background (detached, with ports published)
docker run -d -P training/webapp python app.py
List containers
docker ps
Inspect container details
docker inspect -f '{{ .NetworkSettings.IPAddress }}' my_container
Get container logs
docker logs my_container
commit/save/load
Note
Saving changes
Only changes followed by a docker commit are kept; something like docker run centos yum install -y nmap will not be preserved.
docker images
docker commit <container_id> myimage
docker save myimage > myimage.tar
docker load < myimage.tar
Registry Operations¶
Log in (defaults to Docker Hub)
docker login
Create a registry
See https://www.digitalocean.com/community/tutorials/how-to-set-up-a-private-docker-registry-on-ubuntu-14-04 and http://blog.docker.com/2013/07/how-to-use-your-own-registry/ .
# Get docker-registry from GitHub, or simply pip install docker-registry
# git clone https://github.com/dotcloud/docker-registry.git
# cd docker-registry
# cp config_sample.yml config.yml
# pip install -r requirements.txt
# gunicorn --access-logfile - --log-level debug --debug -b 0.0.0.0:5000 -w 1 wsgi:application
push/pull
# docker pull ubuntu
# docker tag ubuntu localhost:5000/ubuntu
# docker push localhost:5000/ubuntu
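Alternatively, the registry is itself published as an image on the official hub, so a private registry can simply run as a container; the port mapping below mirrors the default used above:
# docker run -d -p 5000:5000 --name registry registry
# docker push localhost:5000/ubuntu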
附录五 Appendix 5: Common Operations and Maintenance Tools¶
Foreman Deployment Guide¶
Katello Deployment Guide¶
Data Recovery Tools¶
extundelete
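For instance, to recover deleted files from an ext3/ext4 partition (the device and file path here are hypothetical; the filesystem should be unmounted, or mounted read-only, before recovery):
# umount /dev/sdb1
# extundelete /dev/sdb1 --restore-file home/user/report.ods
# extundelete /dev/sdb1 --restore-all
Recovered files end up in a RECOVERED_FILES directory under the current working directory.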
Common Performance Measurement and Tuning Tools¶
- Tuning

- Monitoring

- Benchmarking

All in one: pip install glances
For QEMU/libvirt-related test tooling, see virt-test; use it as a reference only.
HAProxy¶
Yes, this one deserves its own section, because you can use it as the HA front end or load balancer for almost any application. The configuration manual is at http://www.haproxy.org/download/1.4/doc/configuration.txt .
HTTP proxying:
...
backend webbackend
    balance roundrobin
    server web1 192.168.0.130:80 check
frontend http
    bind *:80
    mode http
    default_backend webbackend
listen stats :8080
    balance
    mode http
    stats enable
    stats auth me:password
TCP proxying:
listen mysql
    bind *:3306
    mode tcp
    option tcplog
    balance roundrobin
    server db1 192.168.0.1:3306 check
    server db2 192.168.0.2:3306 check
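After editing the configuration it is worth validating it before reloading; a small sketch assuming the file lives at /etc/haproxy/haproxy.cfg:
# haproxy -c -f /etc/haproxy/haproxy.cfg
# service haproxy reload
The stats section defined above should then be reachable at http://<host>:8080/haproxy?stats with the me:password credentials.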
Common Monitoring Tools¶
Monit¶
A small monitoring tool; not particularly recommended.
Munin¶
A lightweight monitoring tool.
Cacti¶
Similar to Zabbix in some respects.
Ganglia¶
A fairly professional monitoring tool, with an extension aimed specifically at virtual machines: http://blog.sflow.com/2012/01/using-ganglia-to-monitor-virtual.html
Nagios¶
With the UI plugin, Nagios monitoring status can be viewed inside the oVirt administration portal; see oVirt_Monitoring_UI_Plugin and Nagios_Intergration.
附录六 Appendix 6: Reference Resources and a Suggested Reading List¶
Server World¶
A very good site with many concrete, step-by-step service setup guides; it is especially useful as a reference during engineering work.
Django based WebAdmin¶
Reading List¶
Even though mobile devices are very convenient for reading, I still recommend physical books, especially the heavyweight ones.
That said, for the titles below I will try to provide versions suitable for mobile devices (PDF, MOBI, EPUB, TXT).
TCP/IP Vol. 1/2/3
Machine Learning in Action
Elements of Information Theory
The Design of the UNIX Operating System
Understanding the Linux Kernel
The Art of Computer Programming Vol. 1/2/3/4
Linux内核完全注释
浪潮之巅
数学之美
UNIX环境高级编程
存储技术原理分析
Hadoop权威指南
Weka应用技术与实践
Python机器学习实践
Model Thinking
Practical Common Lisp
Practice: Building an Advanced Home Cloud¶
Given Moore's law in computing power and the explosive growth of network-connected home devices, we can build small "clouds" rooted in individual homes.
Where does the "advanced" part come in?
First, our services rely mainly on virtualization, all data flows are SSL-encrypted, we interoperate with existing devices as much as possible, and the services can be exposed externally in a rate-limited way.
System Architecture¶
Architecture never exists apart from requirements, so what are ours?
- Have you ever struggled to find a place for your files: afraid of losing them to a system reinstall if they live on your PC, afraid of a broken port if they live on removable storage?
- Do you have local photos, music, and videos to share with your family?
- Do you have local photos, music, and videos to share with friends on the Internet?
- Do you have important documents that you can never find when you need them?
- Have you considered a brand-name home NAS?
- Do you distrust the smart-home devices on the market, worrying that they spy on you (backdoors, intrusions)?
- Do you think making your existing home devices smarter would be easy, but you never have the time?
- Do you trade stocks, with your information sources scattered all over the place?
- Too many machines; how do you monitor them all?
- OK, I'm rambling now...
OK, all of these can be tackled one by one; the skeleton of the whole system looks roughly like the figure below.

Building Blocks¶
Hardware: HP N54L, Raspberry Pi, Mac mini, a telephony voice card, WRT54G (optional)
Services: network authentication, XMPP instant messaging (for members of the household), cloud storage, a family knowledge base, a family photo/video library, NAS (Apple Time Machine compatible), data feeds (Weibo and the like), DNS (resolving internal servers), voice calls, voice-recognition control, stock analysis, ClamAV (antivirus), Zabbix monitoring
Software: OpenLDAP, jabber, Seafile, ownCloud, XBMC (renamed Kodi), a wiki, Asterisk, jasper, Hadoop, ClamAV, AirPlay (Linux/OS X Server)
Note
Things we do not need
Building a search engine takes only three steps: crawl the pages, build the index, rank by quality. And no, we do not need to build our own, mainly because our index would be far too small. If you are interested, have a look at http://en.wikipedia.org/wiki/List_of_search_engines, or build your own with Nutch, Lucene, or Sphinx.
OS X Server¶
Since OS X Server makes installing services very easy, its common services are described here.
- Time Machine: provides Time Machine backup for Macs, making backup and restore convenient; make sure the disks are partitioned sensibly.
- VPN: creates an L2TP- or PPTP-based VPN server.
- Messages: provides XMPP-based Jabber instant messaging.
- Wiki: hosts blog and wiki servers.
- Websites: serves PHP or Python web applications. OS X has a webpromotion command that switches the desktop configuration over to one tuned for serving web traffic.
- File Sharing: shares files or directories over Samba, AFP, or WebDAV.
- FTP: provides an FTP service.
- Contacts: serves contacts via CardDAV or LDAP, which suits most mobile devices.
- NetInstall: provides network installation of OS X, generally used for reinstalling or recovering systems.
- Open Directory: provides LDAP, including Kerberos authentication.
- DNS: serves internal DNS.
Links¶
About the Author and This Document¶
This document is essentially no longer being updated; a book is planned. See the author's blog for details.
Author: lofyer@gmail.com
Sina Weibo: lofyer
QQ: 578645806
WeChat: 578645806
Personal blog: http://blog.lofyer.org
Read online
ReadTheDocs: https://inthecloud.readthedocs.org