OpenStack Liberty with XenServer Setup Guide
1. Overview
The OpenStack Foundation has an excellent setup guide for its October 2015 release, “Liberty”, which can be found at http://docs.openstack.org/liberty/install-guide-rdo/. However, that guide only deals with the use of the “KVM” hypervisor, and does not cover the use of the “XenServer” hypervisor.
There are many circumstances in which it may be desirable to build an OpenStack Liberty XenServer environment. However, in my efforts to do so, I have found the available online documentation regarding using XenServer with OpenStack to be inadequate, outdated or just plain incorrect. Specifically, during this project I experienced issues with:
- XenServer networking configuration
- Nova and Neutron configurations for XenServer networking
- iSCSI authentication issues with Cinder volumes
- Cinder volume mapping errors with XenServer instances
- Cinder quota errors
- ISO image support for XenServer
- Horizon bug affecting XenServer images
- Image metadata for dual hypervisor-type environments
- Neutron requirements for dual-hypervisor-type environments
- Neutron bug affecting the use of Open vSwitch (required for XenServer)
- VNC console connectivity
This guide is heavily based on the OpenStack Foundation’s guide. It does not go into the same level of detail, but does highlight the differences when using XenServer instead of KVM. Their guide should be considered the superior, “master” guide, and I recommend reading it first if you have no familiarity with OpenStack at all.
Some elements of this guide are also based on the following blog post: https://www.citrix.com/blogs/2015/11/30/integrating-xenserver-rdo-and-neutron/
On each page, I have highlighted in bold any steps which differ from the original guide. These are typically XenServer-specific changes.
This guide is for a simple setup with “flat” networking. There are no provisions for private “virtual” networks, or any firewall functionality. The guide also does not yet cover “swift” object storage, although this shouldn’t differ from the OpenStack foundation’s guide. A future version of the guide may add these functions.
Later pages in this guide deal with adding a KVM hypervisor to the environment. These pages include changes which I found to be necessary in order to support a dual hypervisor-type environment (i.e. the use of XenServer and KVM in the same OpenStack).
Finally, there are pages regarding the creation of CentOS 7 images for both hypervisors. These pages highlight some differences in the image-creation process for both hypervisors, including the package and partitioning requirements to support automatic disk resizing and injection of SSH keys for the root user.
Two networks are required, a “public” network (which instances will be connected to for their day-to-day traffic), and a “management” network, which our OpenStack servers will use for their connectivity. Any servers with connections to both will have eth0 connected to the “public” network, and eth1 connected to the “management” network.
Any IP addresses in the guide should, of course, be replaced with your own. You will also need to pre-generate the following variables which will be referred to throughout the guide:
| Variable | Meaning |
|---|---|
| *MYSQL_ROOT* | Root password for MySQL. |
| *KEYSTONE_DBPASS* | Password for the keystone MySQL database. |
| *ADMIN_TOKEN* | A temporary token for initial connection to keystone. |
| *RABBIT_PASS* | Password for the openstack rabbitmq user. |
| *GLANCE_DBPASS* | Password for the glance MySQL database. |
| *GLANCE_PASS* | Password for the glance identity user. |
| *NOVA_DBPASS* | Password for the nova MySQL database. |
| *NOVA_PASS* | Password for the nova identity user. |
| *NEUTRON_DBPASS* | Password for the neutron MySQL database. |
| *NEUTRON_PASS* | Password for the neutron identity user. |
| *NEUTRON_METADATA_SECRET* | Random secret string for the metadata service. |
| *CINDER_DBPASS* | Password for the cinder MySQL database. |
| *CINDER_PASS* | Password for the cinder identity user. |
| *XENSERVER_ROOT* | Root password for XenServer. |
| *XENSERVER_IP* | IP address of XenServer. |
| *CONTROLLER_ADDRESS* | A DNS address for the controller server. |
| *ADMIN_PASS* | Password for the admin identity user. |
| *DEMO_PASS* | Password for the demo identity user. |
| *XAPI_BRIDGE* | The name of the ovs bridge to be used by instances. |
| *SERVER_IP* | The IP of the server you are currently working on. |
| *VM_IP* | The IP of the “compute” VM for that hypervisor. |
| *HOST_NAME* | The hostname of the physical hypervisor (e.g. XenServer). |
The *ADMIN_TOKEN* can be created by running:

# openssl rand -hex 10

For *XENSERVER_ROOT*, do not use a password you’re not comfortable placing in plaintext in the nova configuration.

For *CONTROLLER_ADDRESS*, ensure that this is an address which you can reach from your workstation.

For *XAPI_BRIDGE*, this won’t be determined until later in the build process. You should write it down for later use once it is defined.

Any instance of *HOST_NAME* refers to the hostname of the physical hypervisor host. For example, this would be “compute1.openstack.lab.mycompany.com”, and not “compute1-vm.openstack.lab.mycompany.com”.
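If you would rather generate all of the password-type values in one go, the same openssl approach used for the *ADMIN_TOKEN* works in a small loop. This is just a convenience sketch; the variable names simply mirror the table above, and openstack-passwords.txt is a file name of my own invention:

# for VAR in MYSQL_ROOT KEYSTONE_DBPASS RABBIT_PASS GLANCE_DBPASS GLANCE_PASS NOVA_DBPASS NOVA_PASS NEUTRON_DBPASS NEUTRON_PASS NEUTRON_METADATA_SECRET CINDER_DBPASS CINDER_PASS ADMIN_PASS DEMO_PASS; do echo "$VAR=$(openssl rand -hex 10)"; done > openstack-passwords.txt

Keep the resulting file somewhere safe; it contains every credential for the environment.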
One final note: I do disable SELinux in this guide, for simplicity. This is a personal choice,
but I know that some people do choose to run SELinux on their systems. The guide does include
the installation of SELinux support for OpenStack, so you should be able to set this back to “ENFORCING”,
even after performing the installation with it set to “PERMISSIVE”. I have not tested this.
Changelog
- Mar 17 2016:
- Add patch for neutron bug to the “install neutron on compute VM” page.
- Mar 16 2016:
- Add nova and neutron configuration fixes for whole-host migration.
- Replace unnecessary XenServer reboot with Toolstack restart.
- Mar 15 2016:
- Add cinder configuration fix to allow volume migration.
- Correct screenshot ordering on XenServer host installation page.
- Add screenshot for primary disk selection to XenServer host installation page.
- Mar 9 2016:
- Add note regarding case-sensitive udev rules file.
- Mar 4 2016:
- Add fix to prevent installation of kernels from Xen repository on Storage node.
- Feb 19 2016:
- Add fix to Horizon config for Identity v3.
- Fix changelog order.
- Feb 17 2016:
- Add steps to enable auto power-on of the “compute” VM on the XenServer host.
- Add required steps to enable migration and live migration of instances between XenServer hosts.
- Feb 12 2016:
- Create changelog.
- Various clarifications.
- Extended identity’s token expiration time.
- Correct syntax for neutron ovs configuration on controller.
- Correct syntax when populating neutron database.
- Add note regarding large storage requirements for cinder image-to-volume conversion.
About the Author
My name is Alex Oughton, and I work with OpenStack clouds, as well as dedicated hosting solutions. My work doesn’t involve the actual deployment of OpenStack, and so this guide was developed during a self-learning exercise. If you have any feedback regarding this guide, including any suggestions or fixes, please do contact me on Twitter: http://twitter.com/alexoughton.
You can also directly contribute to this guide through its github: https://github.com/alexoughton/rtd-openstack-xenserver.
2. Build Controller Host
This page is based on the following OpenStack Installation Guide pages:
http://docs.openstack.org/liberty/install-guide-rdo/environment-networking-controller.html
http://docs.openstack.org/liberty/install-guide-rdo/environment-ntp-controller.html
http://docs.openstack.org/liberty/install-guide-rdo/environment-packages.html
- In this guide, I am using a virtual machine running on a VMware hypervisor as my control node. If you are doing the same, you must ensure that the vSwitches on the hypervisor have “promiscuous mode” enabled.
- Boot the control node with the CentOS 7.2.1511 DVD.
- Set your time zone and language.
- For “Software Selection”, set this to “Infrastructure Server”.
- Keep automatic partitioning. Allow installation only on the first disk.
- Set the controller’s IPv4 address and hostname. Disable IPv6. Give the connection the name “eth1”.
Click on “Begin Installation”.
Set a good root password.
Once installation is complete, reboot the server, and remove the DVD/ISO from the server.
SSH in to server as root.
Stop and disable the firewalld service:
# systemctl disable firewalld.service
# systemctl stop firewalld.service
Disable SELINUX:
# setenforce 0
# vim /etc/sysconfig/selinux
SELINUX=permissive
Update all packages on the server:
# yum update
If running the control node on VMware, install the VM tools:
# yum install open-vm-tools
We need persistent network interface names, so we’ll configure udev to give us these. Replace 00:00:00:00:00:00 with the MAC addresses of your control node:
# vim /etc/udev/rules.d/90-persistent-net.rules
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:00:00:00:00:00", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eno*", NAME="eth0"
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:00:00:00:00:00", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eno*", NAME="eth1"
- Note: This file is case-sensitive, and the MAC addresses should be lower-case.
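If you need to look up the MAC addresses to put in this file, ip link lists one “link/ether” line per interface. The interface name and address shown below are illustrative; yours will differ:

# ip link show
2: eno16777736: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
    link/ether 00:0c:29:d9:36:46 brd ff:ff:ff:ff:ff:ff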
Rename the network interface configuration files to eth0 and eth1. Replace eno00000001 and eno00000002 with the names of your control node’s interfaces:
# cd /etc/sysconfig/network-scripts
# mv ifcfg-eno00000001 ifcfg-eth0
# mv ifcfg-eno00000002 ifcfg-eth1
Modify the interface configuration files, replacing any instances of eno00000001 and eno00000002 (or whatever your interface names are) with eth0 and eth1 respectively:
# vim ifcfg-eth0
NAME=eth0
DEVICE=eth0
# vim ifcfg-eth1
NAME=eth1
DEVICE=eth1
Reboot the control node:
# systemctl reboot
SSH back in as root after the reboot.
Check that ifconfig now shows eth0 and eth1:
# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        ether 00:0c:29:d9:36:46  txqueuelen 1000  (Ethernet)
        RX packets 172313  bytes 34438137 (32.8 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 7298  bytes 1552292 (1.4 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.16.0.192  netmask 255.255.255.0  broadcast 172.16.0.255
        inet6 fe80::20c:29ff:fed9:3650  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:d9:36:50  txqueuelen 1000  (Ethernet)
        RX packets 1487929  bytes 210511596 (200.7 MiB)
        RX errors 0  dropped 11  overruns 0  frame 0
        TX packets 781276  bytes 4320203416 (4.0 GiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 2462286  bytes 3417529317 (3.1 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2462286  bytes 3417529317 (3.1 GiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

Update the system hosts file with entries for all nodes:
# vim /etc/hosts
172.16.0.192  controller controller.openstack.lab.eco.rackspace.com
172.16.0.203  compute1 compute1.openstack.lab.eco.rackspace.com
172.16.0.204  compute1-vm compute1-vm.openstack.lab.eco.rackspace.com
172.16.0.195  compute2 compute2.openstack.lab.eco.rackspace.com
172.16.0.196  block1 block1.openstack.lab.eco.rackspace.com
172.16.0.197  object1 object1.openstack.lab.eco.rackspace.com
172.16.0.198  object2 object2.openstack.lab.eco.rackspace.com
Update the “Chrony” (NTP Server) configuration to allow connections from our other nodes:
# vim /etc/chrony.conf
allow 172.16.0.0/24
Restart the Chrony service:
# systemctl restart chronyd.service
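To confirm the restart picked up the new allow directive and that the server is still synchronized upstream, chronyc can report its status. The clients list will be empty until the other nodes are built and start syncing:

# chronyc tracking
# chronyc clients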
Enable the OpenStack-Liberty yum repository:
# yum install centos-release-openstack-liberty
Install the OpenStack client and SELINUX support:
# yum install python-openstackclient openstack-selinux
3. Install core services on controller
This page is based on the following OpenStack Installation Guide pages:
http://docs.openstack.org/liberty/install-guide-rdo/environment-sql-database.html
http://docs.openstack.org/liberty/install-guide-rdo/environment-nosql-database.html
http://docs.openstack.org/liberty/install-guide-rdo/environment-messaging.html
Install MariaDB:
# yum install mariadb mariadb-server MySQL-python
Set some needed MariaDB configuration parameters:
# vim /etc/my.cnf
bind-address = 172.16.0.192
default-storage-engine = innodb
innodb_file_per_table
collation-server = utf8_general_ci
init-connect = 'SET NAMES utf8'
character-set-server = utf8
Enable and start the MariaDB service:
# systemctl enable mariadb.service
# systemctl start mariadb.service
Initialize MariaDB security. Say ‘yes’ to all prompts, and set a good root password:
# mysql_secure_installation
Set up the MySQL client configuration. Replace *MYSQL_ROOT* with your own:
# vim /root/.my.cnf
[client]
user=root
password=*MYSQL_ROOT*
Confirm that you are able to connect to MySQL:
# mysql
> quit
Install RabbitMQ:
# yum install rabbitmq-server
Enable and start the RabbitMQ service:
# systemctl enable rabbitmq-server.service
# systemctl start rabbitmq-server.service
Create the “openstack” RabbitMQ user:
# rabbitmqctl add_user openstack *RABBIT_PASS*
# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
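To double-check that the user and its permissions were created, rabbitmqctl can list both:

# rabbitmqctl list_users
# rabbitmqctl list_permissions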
4. Install Identity (keystone) on controller
This page is based on the following OpenStack Installation Guide pages:
http://docs.openstack.org/liberty/install-guide-rdo/keystone-install.html
http://docs.openstack.org/liberty/install-guide-rdo/keystone-services.html
http://docs.openstack.org/liberty/install-guide-rdo/keystone-users.html
http://docs.openstack.org/liberty/install-guide-rdo/keystone-verify.html
http://docs.openstack.org/liberty/install-guide-rdo/keystone-openrc.html
Open the MySQL client and create the “keystone” database. Replace *KEYSTONE_DBPASS* with your own:
# mysql
> create database keystone;
> grant all privileges on keystone.* to 'keystone'@'localhost' identified by '*KEYSTONE_DBPASS*';
> grant all privileges on keystone.* to 'keystone'@'%' identified by '*KEYSTONE_DBPASS*';
> quit
Install the keystone packages:
# yum install openstack-keystone httpd mod_wsgi memcached python-memcached
Enable and start the memcached service:
# systemctl enable memcached.service
# systemctl start memcached.service
Configure keystone. Replace *ADMIN_TOKEN* and *KEYSTONE_DBPASS* with your own:
# vim /etc/keystone/keystone.conf
[DEFAULT]
admin_token = *ADMIN_TOKEN*
[database]
connection = mysql://keystone:*KEYSTONE_DBPASS*@controller/keystone
[memcache]
servers = localhost:11211
[token]
provider = uuid
driver = memcache
expiration = 86400
[revoke]
driver = sql
- Note: I have extended token expiration to 24 hours, due to issues I experienced with large images timing out during the saving process. You may wish to use a shorter expiration, depending on your security requirements.
Populate the keystone database:
# su -s /bin/sh -c "keystone-manage db_sync" keystone
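If you want to confirm that db_sync actually built the schema, a quick look at the table list is enough (this relies on the /root/.my.cnf credentials set up earlier):

# mysql keystone -e "show tables;"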
Set the Apache server name:
# vim /etc/httpd/conf/httpd.conf
ServerName controller
Configure wsgi:
# vim /etc/httpd/conf.d/wsgi-keystone.conf
Listen 5000
Listen 35357

<VirtualHost *:5000>
    WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
    WSGIProcessGroup keystone-public
    WSGIScriptAlias / /usr/bin/keystone-wsgi-public
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    <IfVersion >= 2.4>
      ErrorLogFormat "%{cu}t %M"
    </IfVersion>
    ErrorLog /var/log/httpd/keystone-error.log
    CustomLog /var/log/httpd/keystone-access.log combined

    <Directory /usr/bin>
        <IfVersion >= 2.4>
            Require all granted
        </IfVersion>
        <IfVersion < 2.4>
            Order allow,deny
            Allow from all
        </IfVersion>
    </Directory>
</VirtualHost>

<VirtualHost *:35357>
    WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
    WSGIProcessGroup keystone-admin
    WSGIScriptAlias / /usr/bin/keystone-wsgi-admin
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    <IfVersion >= 2.4>
      ErrorLogFormat "%{cu}t %M"
    </IfVersion>
    ErrorLog /var/log/httpd/keystone-error.log
    CustomLog /var/log/httpd/keystone-access.log combined

    <Directory /usr/bin>
        <IfVersion >= 2.4>
            Require all granted
        </IfVersion>
        <IfVersion < 2.4>
            Order allow,deny
            Allow from all
        </IfVersion>
    </Directory>
</VirtualHost>

Enable and start the Apache service:
# systemctl enable httpd.service
# systemctl start httpd.service
Set up temporary connection parameters. Replace *ADMIN_TOKEN* with your own:
# export OS_TOKEN=*ADMIN_TOKEN*
# export OS_URL=http://controller:35357/v3
# export OS_IDENTITY_API_VERSION=3
Create keystone service and endpoints:
# openstack service create --name keystone --description "OpenStack Identity" identity
# openstack endpoint create --region RegionOne identity public http://controller:5000/v2.0
# openstack endpoint create --region RegionOne identity internal http://controller:5000/v2.0
# openstack endpoint create --region RegionOne identity admin http://controller:35357/v2.0
Create the “admin” project, user and role. Provide your *ADMIN_PASS* twice when prompted:
# openstack project create --domain default --description "Admin Project" admin
# openstack user create --domain default --password-prompt admin
# openstack role create admin
# openstack role add --project admin --user admin admin
Create the “service” project:
# openstack project create --domain default --description "Service Project" service
Create the “demo” project, user and role. Provide your *DEMO_PASS* twice when prompted:
# openstack project create --domain default --description "Demo Project" demo
# openstack user create --domain default --password-prompt demo
# openstack role create user
# openstack role add --project demo --user demo user
Disable authentication with the admin token:
# vim /usr/share/keystone/keystone-dist-paste.ini
- Remove admin_token_auth from [pipeline:public_api], [pipeline:admin_api] and [pipeline:api_v3], as illustrated below.
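For illustration, a pipeline line that looks something like the first line below should end up like the second, with only admin_token_auth removed. The surrounding filter names vary by keystone version, so treat this as a sketch rather than the exact file contents:

pipeline = sizelimit url_normalize request_id build_auth_context token_auth admin_token_auth json_body ec2_extension public_service
pipeline = sizelimit url_normalize request_id build_auth_context token_auth json_body ec2_extension public_service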
Disable the temporary connection parameters:
# unset OS_TOKEN OS_URL
Test authentication for the “admin” user. Provide *ADMIN_PASS* when prompted:
# openstack --os-auth-url http://controller:35357/v3 --os-project-domain-id default --os-user-domain-id default --os-project-name admin --os-username admin --os-auth-type password token issue
If this is working, various values will be returned (yours will be different):
+------------+----------------------------------+
| Field      | Value                            |
+------------+----------------------------------+
| expires    | 2016-02-05T22:55:18.580385Z      |
| id         | 9bd8b09e4fdd43cea1f32ca6d62c946b |
| project_id | 76f8c8fd7b1e407d97c4604eb2a408b3 |
| user_id    | 31766cbe74d541088c6ba2fd24654034 |
+------------+----------------------------------+
Test authentication for the “demo” user. Provide *DEMO_PASS* when prompted:
# openstack --os-auth-url http://controller:5000/v3 --os-project-domain-id default --os-user-domain-id default --os-project-name demo --os-username demo --os-auth-type password token issue
- Again, if this is working, various values will be returned.
Create permanent client authentication file for the “admin” user. Replace *ADMIN_PASS* with your own:
# vim /root/admin-openrc.sh
export OS_PROJECT_DOMAIN_ID=default
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_NAME=admin
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=*ADMIN_PASS*
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
Create permanent client authentication file for the “demo” user. Replace *DEMO_PASS* with your own:
# vim /root/demo-openrc.sh
export OS_PROJECT_DOMAIN_ID=default
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_NAME=demo
export OS_TENANT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=*DEMO_PASS*
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
Test authentication with the permanent settings:
# source admin-openrc.sh
# openstack token issue
- Once more, if this works, various values will be returned.
5. Install Images (glance) on controller
This page is based on the following OpenStack Installation Guide pages:
http://docs.openstack.org/liberty/install-guide-rdo/glance-install.html
http://docs.openstack.org/liberty/install-guide-rdo/glance-verify.html
Step 9 has specific changes for the use of XenServer.
Open the MySQL client and create the “glance” database. Replace *GLANCE_DBPASS* with your own:
# mysql
> create database glance;
> grant all privileges on glance.* to 'glance'@'localhost' identified by '*GLANCE_DBPASS*';
> grant all privileges on glance.* to 'glance'@'%' identified by '*GLANCE_DBPASS*';
> quit
Create the “glance” user, role, service and endpoints. Provide *GLANCE_PASS* when prompted:
# source admin-openrc.sh
# openstack user create --domain default --password-prompt glance
# openstack role add --project service --user glance admin
# openstack service create --name glance --description "OpenStack Image service" image
# openstack endpoint create --region RegionOne image public http://controller:9292
# openstack endpoint create --region RegionOne image internal http://controller:9292
# openstack endpoint create --region RegionOne image admin http://controller:9292
Install glance packages:
# yum install openstack-glance python-glance python-glanceclient
Configure glance-api. Replace *GLANCE_DBPASS* and *GLANCE_PASS* with your own:
# vim /etc/glance/glance-api.conf
[database]
connection = mysql://glance:*GLANCE_DBPASS*@controller/glance
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = glance
password = *GLANCE_PASS*
[paste_deploy]
flavor = keystone
[glance_store]
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
[DEFAULT]
notification_driver = noop
Configure glance-registry. Replace *GLANCE_DBPASS* and *GLANCE_PASS* with your own:
# vim /etc/glance/glance-registry.conf
[database]
connection = mysql://glance:*GLANCE_DBPASS*@controller/glance
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = glance
password = *GLANCE_PASS*
[paste_deploy]
flavor = keystone
[DEFAULT]
notification_driver = noop
Populate the glance database:
# su -s /bin/sh -c "glance-manage db_sync" glance
- Note: “No handlers could be found for logger” warnings are normal, and can be ignored.
Enable and start the glance service:
# systemctl enable openstack-glance-api.service openstack-glance-registry.service
# systemctl start openstack-glance-api.service openstack-glance-registry.service
Add glance API version settings to the client authentication files:
# echo "export OS_IMAGE_API_VERSION=2" | tee -a admin-openrc.sh demo-openrc.sh
Upload a sample image to the glance service:
# source admin-openrc.sh
# wget http://ca.downloads.xensource.com/OpenStack/cirros-0.3.4-x86_64-disk.vhd.tgz
# glance image-create --name "cirros-xen" --container-format ovf --disk-format vhd --property vm_mode=xen --visibility public --file cirros-0.3.4-x86_64-disk.vhd.tgz
Confirm that the image has been uploaded:
# glance image-list
+--------------------------------------+----------------+
| ID                                   | Name           |
+--------------------------------------+----------------+
| 1e710e0c-0fb6-4425-b196-4b66bfac495e | cirros-xen     |
+--------------------------------------+----------------+
6. Install Compute (nova) on controller
This page is based on the following OpenStack Installation Guide page:
http://docs.openstack.org/liberty/install-guide-rdo/nova-controller-install.html
Open the MySQL client and create the “nova” database. Replace *NOVA_DBPASS* with your own:
# mysql
> create database nova;
> grant all privileges on nova.* to 'nova'@'localhost' identified by '*NOVA_DBPASS*';
> grant all privileges on nova.* to 'nova'@'%' identified by '*NOVA_DBPASS*';
> quit
Create the “nova” user, role, service and endpoints. Provide *NOVA_PASS* when prompted:
# source admin-openrc.sh
# openstack user create --domain default --password-prompt nova
# openstack role add --project service --user nova admin
# openstack service create --name nova --description "OpenStack Compute" compute
# openstack endpoint create --region RegionOne compute public http://controller:8774/v2/%\(tenant_id\)s
# openstack endpoint create --region RegionOne compute internal http://controller:8774/v2/%\(tenant_id\)s
# openstack endpoint create --region RegionOne compute admin http://controller:8774/v2/%\(tenant_id\)s
Install nova packages:
# yum install openstack-nova-api openstack-nova-cert openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler python-novaclient
Configure nova. Replace *NOVA_DBPASS*, *NOVA_PASS*, *SERVER_IP* and *RABBIT_PASS* with your own:
# vim /etc/nova/nova.conf
[database]
connection = mysql://nova:*NOVA_DBPASS*@controller/nova
[DEFAULT]
rpc_backend = rabbit
auth_strategy = keystone
my_ip = *SERVER_IP*
network_api_class = nova.network.neutronv2.api.API
security_group_api = neutron
linuxnet_interface_driver = nova.network.linux_net.NeutronLinuxBridgeInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver
enabled_apis = osapi_compute,metadata
[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = *RABBIT_PASS*
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = nova
password = *NOVA_PASS*
[vnc]
vncserver_listen = $my_ip
vncserver_proxyclient_address = $my_ip
[glance]
host = controller
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
Populate the nova database:
# su -s /bin/sh -c "nova-manage db sync" nova
Enable and start the nova service:
# systemctl enable openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
# systemctl start openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
7. Build XenServer Host
This page is not based on the OpenStack Installation Guide.
- In this guide I am using a server with a small RAID-1 for the OS, and a large RAID-10 for the VMs.
- Boot with XenServer 6.5 DVD.
- Set keyboard, agree to terms, etc.
- Set the installation destination to sda.
- Set VM storage to only sdb, and enable thin provisioning:
- Select local media as the installation source.
- Do not install any supplemental packs.
- Skip verification of the installation media.
- Set a good *XENSERVER_ROOT* password. Use a password which you don’t mind being plain-text readable to anyone who has root access to this system.
- Set the management network interface to use eth1 and configure the IPv4 addresses:
- Set an appropriate timezone.
- Configure the server to use NTP, and set the server address as the controller’s IP:
- Start the installation.
- Reboot the server to start XenServer. The first boot will take a very long time. It will appear to hang a couple of times, but wait for it to reach the user interface.
- On a Windows workstation, go to http://xenserver.org/open-source-virtualization-download.html
- Download XenCenter Windows Management Console, and install it.
- Download XenServer 6.5 SP1 (under Service Packs), and keep it safe in a directory.
- Download all of the public hotfixes for XenServer 6.5 SP1, and also keep them safe in a directory.
- Launch XenCenter, and click add new server:
- Enter the address and credentials of the XenServer:
- Enable the option to remember the connection, and click OK.
- Open up the SP1 zip file you downloaded, and double-click the XenServer Update File inside:
- This will open the Install Update wizard. Click Next:
- Select our one server, and click next:
- XenCenter will upload the update to the server. Click next when done:
- XenCenter will run some checks. Click next when done:
- Select “Allow XenCenter to carry out the post-update tasks”, and then click on “Install Update”:
- XenCenter will perform the installation, and reboot the server. This will take a while to complete. Click Finish when done:
- Repeat steps 22-27 for all of the hotfixes you downloaded, except that in step 26 you should select “I will carry out the post-update checks myself” for ALL of the hotfixes:
- Reboot the XenServer by right-clicking it in XenCenter, and clicking on “Reboot”:
- Once the server is back online, right-click it and select “New SR…”
- Create an ISO library somewhere where you will have read/write access. In my case I am using a Windows share, but you can use NFS:
SSH to the XenServer as root.
Create the OpenStack Integration Bridge network:
# xe network-create name-label=openstack-int-network
Obtain the bridge name of the new network. Write this down as *XAPI_BRIDGE*, as this will be needed later:
# xe network-list name-label=openstack-int-network params=bridge
bridge ( RO)    : xapi0
Find the UUID of the ISO library created earlier:
# xe sr-list
uuid ( RO)                : ef0adc0a-3b56-5e9d-4824-0821f4be7ed4
          name-label ( RW): Removable storage
    name-description ( RW):
                host ( RO): compute1.openstack.lab.eco.rackspace.com
                type ( RO): udev
        content-type ( RO): disk

uuid ( RO)                : 6658e157-a534-a450-c4db-2ca6dd6296cf
          name-label ( RW): Local storage
    name-description ( RW):
                host ( RO): compute1.openstack.lab.eco.rackspace.com
                type ( RO): ext
        content-type ( RO): user

uuid ( RO)                : f04950c1-ee7b-2ccb-e3e2-127a5bffc5a6
          name-label ( RW): CIFS ISO library
    name-description ( RW): CIFS ISO Library [\\windows.lab.eco.rackspace.com\ISOs]
                host ( RO): compute1.openstack.lab.eco.rackspace.com
                type ( RO): iso
        content-type ( RO): iso

uuid ( RO)                : 7a549ca7-d1af-cf72-fd7e-2f48448354e8
          name-label ( RW): DVD drives
    name-description ( RW): Physical DVD drives
                host ( RO): compute1.openstack.lab.eco.rackspace.com
                type ( RO): udev
        content-type ( RO): iso

uuid ( RO)                : 9a4f8404-7745-b582-484f-108917bf1488
          name-label ( RW): XenServer Tools
    name-description ( RW): XenServer Tools ISOs
                host ( RO): compute1.openstack.lab.eco.rackspace.com
                type ( RO): iso
        content-type ( RO): iso
- In my example, the UUID is f04950c1-ee7b-2ccb-e3e2-127a5bffc5a6.
Set a parameter on the ISO library. Replace *UUID* with the UUID found above:
# xe sr-param-set uuid=*UUID* other-config:i18n-key=local-storage-iso
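To confirm the parameter was stored, you can read it back with the matching param-get command; it should print local-storage-iso:

# xe sr-param-get uuid=*UUID* param-name=other-config param-key=i18n-key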
Update the system hosts file with entries for all nodes:
# vi /etc/hosts
172.16.0.192  controller controller.openstack.lab.eco.rackspace.com
172.16.0.203  compute1 compute1.openstack.lab.eco.rackspace.com
172.16.0.204  compute1-vm compute1-vm.openstack.lab.eco.rackspace.com
172.16.0.195  compute2 compute2.openstack.lab.eco.rackspace.com
172.16.0.196  block1 block1.openstack.lab.eco.rackspace.com
172.16.0.197  object1 object1.openstack.lab.eco.rackspace.com
172.16.0.198  object2 object2.openstack.lab.eco.rackspace.com
Relax XSM SR checks. Needed for migration of instances with Cinder volumes:
# vi /etc/xapi.conf
relax-xsm-sr-check = true
Symlink a directory of the SR to /images. Needed for instance migration:
# LOCAL_SR=$(xe sr-list name-label="Local storage" --minimal)
# IMG_DIR="/var/run/sr-mount/$LOCAL_SR/images"
# mkdir -p "$IMG_DIR"
# ln -s "$IMG_DIR" /images
Set up SSH key authentication for the root user. Needed for instance migration. Press ENTER to give default response to all prompts:
# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
# cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys
- Note: If you are building an additional XenServer host, you will instead copy the contents of /root/.ssh from your first XenServer host to your additional hosts.
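For that copy, something like the following works, run from the first XenServer host. *NEW_XENSERVER_IP* is a placeholder of mine for the additional host’s address:

# scp -r /root/.ssh root@*NEW_XENSERVER_IP*:/root/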
Restart the XenServer Toolstack:
# xe-toolstack-restart
8. Build XenServer Compute VM
This page is based on the following OpenStack Installation Guide pages:
http://docs.openstack.org/liberty/install-guide-rdo/environment-networking-compute.html
http://docs.openstack.org/liberty/install-guide-rdo/environment-ntp-other.html
http://docs.openstack.org/liberty/install-guide-rdo/environment-packages.html
There are many additional steps here specific to XenServer.
- In XenCenter, create a new VM:
- Select the CentOS 7 template:
- Name the VM “compute”:
- Choose the CentOS 7 ISO (which you should have previously uploaded to the ISO library):
- Place the VM on the only server available:
- Give it one CPU and 2GB of RAM:
- Change the disk to 20GB by clicking on properties:
- Give the VM connections to your management and public networks:
- Complete the wizard, which will start the VM.
- Go to the “compute” VM’s console, which should be displaying the CentOS installer’s boot screen:
- Highlight “Install CentOS 7”, and press Enter:
- If the console appears to “hang”, with only a cursor showing (and no other activity), then quit XenCenter, relaunch it, and go back to the console. This should show the graphical installer is now running:
- Set language and timezone.
- Click on “Network & Hostname”. Click on the “eth1” interface, and click on “configure”.
- Set the IPv4 address as appropriate:
- Disable IPv6, and click on “save”:
- Set an appropriate hostname, and then enable the “eth1” interface by setting the switch to “on”:
- If using the NetInstall image, click on “Installation source”. Set the source to network, and then define a known-good mirror. You can use http://mirror.rackspace.com/CentOS/7.2.1511/os/x86_64/.
- Click on “Installation Destination”. Select “I will configure partitioning” and click on “Done”:
- Under “New mount points will use the following partition scheme”, select “Standard Partition”.
- Click on the + button. Set the mount point to / and click “Add mount point”:
- Set “File System” to “ext4”, and then click “Done”.
- A yellow warning bar will appear. Click “Done” again, and then click on “Accept Changes”.
- Click on “Software Selection”. Select “Infrastructure Server”, and click “Done”.
- Click “Begin Installation”. Click on “Root Password” and set a good password.
- Once installation is complete, click “Reboot”.
- SSH as root to the new VM.
- In XenCenter, change the DVD drive to xs-tools.iso:
Mount the tools ISO and install the tools:
# mkdir /mnt/cdrom
# mount /dev/cdrom /mnt/cdrom
# cd /mnt/cdrom/Linux
# rpm -Uvh xe-guest-utilities-6.5.0-1427.x86_64.rpm xe-guest-utilities-xenstore-6.5.0-1427.x86_64.rpm
# cd ~
# umount /mnt/cdrom
In XenCenter, eject the DVD drive:
Stop and disable the firewalld service:
# systemctl disable firewalld.service
# systemctl stop firewalld.service
Disable SELINUX:
# setenforce 0
# vim /etc/sysconfig/selinux
SELINUX=permissive
Update all packages on the VM:
# yum update
Reboot the VM:
# systemctl reboot
Wait for the VM to complete the reboot, and SSH back in as root.
Update the system hosts file with entries for all nodes:
# vim /etc/hosts
172.16.0.192  controller controller.openstack.lab.eco.rackspace.com
172.16.0.203  compute1 compute1.openstack.lab.eco.rackspace.com
172.16.0.204  compute1-vm compute1-vm.openstack.lab.eco.rackspace.com
172.16.0.195  compute2 compute2.openstack.lab.eco.rackspace.com
172.16.0.196  block1 block1.openstack.lab.eco.rackspace.com
172.16.0.197  object1 object1.openstack.lab.eco.rackspace.com
172.16.0.198  object2 object2.openstack.lab.eco.rackspace.com
Update the chrony configuration to use the controller as a time source:
# vim /etc/chrony.conf
server controller iburst
- Remove any other servers listed, leaving only “controller”.
Restart the chrony service, and confirm that “controller” is listed as a source:
# systemctl restart chronyd.service
# chronyc sources
210 Number of sources = 1
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^* controller                    3   6    17     6   -3374ns[+2000ns] +/- 6895us
Enable the OpenStack-Liberty yum repository:
# yum install centos-release-openstack-liberty
Install the OpenStack client and SELINUX support:
# yum install python-openstackclient openstack-selinux
SSH to the XenServer as root.
Obtain the UUID of the XenServer pool:
# xe pool-list
uuid ( RO)                : f824b628-1696-9ebe-5a5a-d1f9cf117158
          name-label ( RW):
    name-description ( RW):
              master ( RO): b11f5aaf-d1a5-42fb-8335-3a6451cec4c7
          default-SR ( RW): 271e0f43-8b03-50c5-a08a-9c7312741378
- Note: In my case, the UUID is f824b628-1696-9ebe-5a5a-d1f9cf117158.

Enable auto power-on for the XenServer pool. Replace *POOL_UUID* with your own:
# xe pool-param-set uuid=*POOL_UUID* other-config:auto_poweron=true
Obtain the UUID of the “compute VM”:
# xe vm-list name-label='compute'
uuid ( RO)           : 706ba8eb-fe5f-8da2-9090-3a5b009ce1c4
     name-label ( RW): compute
    power-state ( RO): running
- Note: In my case, the UUID is 706ba8eb-fe5f-8da2-9090-3a5b009ce1c4.

Enable auto power-on for the “compute” VM. Replace *VM_UUID* with your own:
# xe vm-param-set uuid=*VM_UUID* other-config:auto_poweron=true
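To verify both auto power-on flags, read them back with the matching param-get commands; each should print true:

# xe pool-param-get uuid=*POOL_UUID* param-name=other-config param-key=auto_poweron
# xe vm-param-get uuid=*VM_UUID* param-name=other-config param-key=auto_poweron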
9. Install Compute (nova) on XenServer compute VM
This page is based on the following OpenStack Installation Guide pages:
http://docs.openstack.org/liberty/install-guide-rdo/nova-compute-install.html
http://docs.openstack.org/liberty/install-guide-rdo/nova-verify.html
http://docs.openstack.org/liberty/install-guide-rdo/cinder-storage-install.html
It is also based on some steps from the following guide:
https://www.citrix.com/blogs/2015/11/30/integrating-xenserver-rdo-and-neutron/
All steps have modifications for XenServer.
Download and install pip, and xenapi:
# wget https://bootstrap.pypa.io/get-pip.py
# python get-pip.py
# pip install xenapi
Install nova packages:
# yum install openstack-nova-compute sysfsutils
Configure nova. Replace *HOST_NAME*, *XENSERVER_ROOT*, *CONTROLLER_ADDRESS*, *XAPI_BRIDGE*, *VM_IP*, *NOVA_PASS*, *XENSERVER_IP* and *RABBIT_PASS* with your own:
# vim /etc/nova/nova.conf
[DEFAULT]
rpc_backend = rabbit
auth_strategy = keystone
my_ip = *VM_IP*
network_api_class = nova.network.neutronv2.api.API
security_group_api = neutron
linuxnet_interface_driver = nova.network.linux_net.NeutronLinuxBridgeInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver
compute_driver = xenapi.XenAPIDriver
host = *HOST_NAME*
live_migration_retry_count=600
[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = *RABBIT_PASS*
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = nova
password = *NOVA_PASS*
[vnc]
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = *XENSERVER_IP*
novncproxy_base_url = http://*CONTROLLER_ADDRESS*:6080/vnc_auto.html
[glance]
host = controller
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[xenserver]
connection_url=http://compute1
connection_username=root
connection_password=*XENSERVER_ROOT*
vif_driver=nova.virt.xenapi.vif.XenAPIOpenVswitchDriver
ovs_int_bridge=*XAPI_BRIDGE*
ovs_integration_bridge=*XAPI_BRIDGE*
Download and modify a helper script for installing the dom0 plugins:
# wget --no-check-certificate https://raw.githubusercontent.com/Annie-XIE/summary-os/master/rdo_xenserver_helper.sh
# sed -i 's/dom0_ip=169.254.0.1/dom0_ip=compute1/g' rdo_xenserver_helper.sh
Use the script to install the dom0 nova plugins:
# source rdo_xenserver_helper.sh
# install_dom0_plugins
- Answer yes to the RSA key prompt
- Enter the XenServer root password when prompted (twice)
- Ignore the errors related to the neutron plugins
Update the LVM configuration to prevent scanning of instances’ contents:
# vim /etc/lvm/lvm.conf
devices {
    ...
    filter = ["r/.*/"]
- Note: Do not replace the entire “devices” section, only the “filter” line.
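Because this filter rejects every device, a quick sanity check is that LVM scanning commands now print nothing on the VM:

# pvs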
Enable and start the nova services:
# systemctl enable openstack-nova-compute.service
# systemctl start openstack-nova-compute.service
Log on to the controller node as root.
Load the “admin” credential file:
# source admin-openrc.sh
Check the nova service list:
# nova service-list
+----+------------------+---------------------------------------------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary           | Host                                        | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+----+------------------+---------------------------------------------+----------+---------+-------+----------------------------+-----------------+
| 1  | nova-consoleauth | controller.openstack.lab.eco.rackspace.com  | internal | enabled | up    | 2016-02-08T16:53:19.000000 | -               |
| 2  | nova-scheduler   | controller.openstack.lab.eco.rackspace.com  | internal | enabled | up    | 2016-02-08T16:53:19.000000 | -               |
| 3  | nova-conductor   | controller.openstack.lab.eco.rackspace.com  | internal | enabled | up    | 2016-02-08T16:53:22.000000 | -               |
| 4  | nova-cert        | controller.openstack.lab.eco.rackspace.com  | internal | enabled | up    | 2016-02-08T16:53:27.000000 | -               |
| 5  | nova-compute     | compute1-vm.openstack.lab.eco.rackspace.com | nova     | enabled | up    | 2016-02-08T16:53:19.000000 | -               |
+----+------------------+---------------------------------------------+----------+---------+-------+----------------------------+-----------------+
- The list should include compute1-vm running nova-compute.
Check the nova endpoints list:
# nova endpoints
WARNING: nova has no endpoint in ! Available endpoints for this service:
+-----------+------------------------------------------------------------+
| nova      | Value                                                      |
+-----------+------------------------------------------------------------+
| id        | 1c07bba299254336abd0cbe27c64be83                           |
| interface | internal                                                   |
| region    | RegionOne                                                  |
| region_id | RegionOne                                                  |
| url       | http://controller:8774/v2/76f8c8fd7b1e407d97c4604eb2a408b3 |
+-----------+------------------------------------------------------------+
+-----------+------------------------------------------------------------+
| nova      | Value                                                      |
+-----------+------------------------------------------------------------+
| id        | 221f3238f2da46fb8fc6897e6c2c4de1                           |
| interface | public                                                     |
| region    | RegionOne                                                  |
| region_id | RegionOne                                                  |
| url       | http://controller:8774/v2/76f8c8fd7b1e407d97c4604eb2a408b3 |
+-----------+------------------------------------------------------------+
+-----------+------------------------------------------------------------+
| nova      | Value                                                      |
+-----------+------------------------------------------------------------+
| id        | fdbd2fe1dda5460aaa486b5d142f99aa                           |
| interface | admin                                                      |
| region    | RegionOne                                                  |
| region_id | RegionOne                                                  |
| url       | http://controller:8774/v2/76f8c8fd7b1e407d97c4604eb2a408b3 |
+-----------+------------------------------------------------------------+
WARNING: keystone has no endpoint in ! Available endpoints for this service:
+-----------+----------------------------------+
| keystone  | Value                            |
+-----------+----------------------------------+
| id        | 33c74602793e454ea1d9ae9ab6ca5dcc |
| interface | public                           |
| region    | RegionOne                        |
| region_id | RegionOne                        |
| url       | http://controller:5000/v2.0      |
+-----------+----------------------------------+
+-----------+----------------------------------+
| keystone  | Value                            |
+-----------+----------------------------------+
| id        | 688939b258ea4f1d956cb85dfc75e0c0 |
| interface | internal                         |
| region    | RegionOne                        |
| region_id | RegionOne                        |
| url       | http://controller:5000/v2.0      |
+-----------+----------------------------------+
+-----------+----------------------------------+
| keystone  | Value                            |
+-----------+----------------------------------+
| id        | 7c7652f07b2f4a2c8bf805ff49b6a4eb |
| interface | admin                            |
| region    | RegionOne                        |
| region_id | RegionOne                        |
| url       | http://controller:35357/v2.0     |
+-----------+----------------------------------+
WARNING: glance has no endpoint in ! Available endpoints for this service:
+-----------+----------------------------------+
| glance    | Value                            |
+-----------+----------------------------------+
| id        | 0d49d35fc21d4faa8c72ff3578198513 |
| interface | internal                         |
| region    | RegionOne                        |
| region_id | RegionOne                        |
| url       | http://controller:9292           |
+-----------+----------------------------------+
+-----------+----------------------------------+
| glance    | Value                            |
+-----------+----------------------------------+
| id        | 54f519365b8e4f7f81b750fdbf55be2f |
| interface | public                           |
| region    | RegionOne                        |
| region_id | RegionOne                        |
| url       | http://controller:9292           |
+-----------+----------------------------------+
+-----------+----------------------------------+
| glance    | Value                            |
+-----------+----------------------------------+
| id        | d5e7d60a0eba46b9ac7b992214809fe0 |
| interface | admin                            |
| region    | RegionOne                        |
| region_id | RegionOne                        |
| url       | http://controller:9292           |
+-----------+----------------------------------+
- The list should include endpoints for nova, keystone, and glance. Ignore any warnings.
Check the nova image list:
# nova image-list
+--------------------------------------+----------------+--------+--------+
| ID                                   | Name           | Status | Server |
+--------------------------------------+----------------+--------+--------+
| 1e710e0c-0fb6-4425-b196-4b66bfac495e | cirros-xen     | ACTIVE |        |
+--------------------------------------+----------------+--------+--------+
- The list should include the cirros-xen image previously uploaded.
10. Install Networking (neutron) on controller
This page is based on the following OpenStack Installation Guide page:
http://docs.openstack.org/liberty/install-guide-rdo/neutron-controller-install.html
Steps 3, 5, 6, 7, 9, 12, 13 and 15 have specific changes for the use of XenServer.
Open the MySQL client and create the “neutron” database. Replace *NEUTRON_DBPASS* with your own:
# mysql
> create database neutron;
> grant all privileges on neutron.* to 'neutron'@'localhost' identified by '*NEUTRON_DBPASS*';
> grant all privileges on neutron.* to 'neutron'@'%' identified by '*NEUTRON_DBPASS*';
> quit
Create the “neutron” user, role, service and endpoints. Provide *NEUTRON_PASS* when prompted:
# source admin-openrc.sh
# openstack user create --domain default --password-prompt neutron
# openstack role add --project service --user neutron admin
# openstack service create --name neutron --description "OpenStack Networking" network
# openstack endpoint create --region RegionOne network public http://controller:9696
# openstack endpoint create --region RegionOne network internal http://controller:9696
# openstack endpoint create --region RegionOne network admin http://controller:9696
Install the neutron and ovs packages:
# yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-openvswitch python-neutronclient ebtables ipset
Configure neutron. Note that the default file already has lines for keystone_authtoken. These must be deleted. Replace *NEUTRON_DBPASS*, *NEUTRON_PASS*, *RABBIT_PASS* and *NOVA_PASS* with your own:
# vim /etc/neutron/neutron.conf
[database]
connection = mysql://neutron:*NEUTRON_DBPASS*@controller/neutron
[DEFAULT]
rpc_backend = rabbit
core_plugin = ml2
service_plugins =
auth_strategy = keystone
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
nova_url = http://controller:8774/v2
[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = *RABBIT_PASS*
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = neutron
password = *NEUTRON_PASS*
[nova]
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
region_name = RegionOne
project_name = service
username = nova
password = *NOVA_PASS*
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
- Note: The service_plugins value is intentionally left blank, and is used to disable these plugins.
Configure the ml2 plugin:
# vim /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = flat,vlan
tenant_network_types =
mechanism_drivers = openvswitch
extension_drivers = port_security
[ml2_type_flat]
flat_networks = public
[securitygroup]
enable_ipset = True
- Note: The tenant_network_types value is also intentionally left blank.
Configure ml2’s ovs plugin. Replace *XAPI_BRIDGE* with your own:
# vim /etc/neutron/plugins/ml2/openvswitch_agent.ini
[ovs]
integration_bridge = *XAPI_BRIDGE*
bridge_mappings = public:br-eth0
[securitygroup]
firewall_driver = neutron.agent.firewall.NoopFirewallDriver
Configure the DHCP agent. Replace *XAPI_BRIDGE* with your own:
# vim /etc/neutron/dhcp_agent.ini
[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
ovs_integration_bridge = *XAPI_BRIDGE*
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = True
Configure the metadata agent. Note that the default file already has some lines in [DEFAULT]. These need to be commented-out or deleted. Replace *NEUTRON_PASS* and *NEUTRON_METADATA_SECRET* with your own:
# vim /etc/neutron/metadata_agent.ini
[DEFAULT]
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_region = RegionOne
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = neutron
password = *NEUTRON_PASS*
nova_metadata_ip = controller
metadata_proxy_shared_secret = *NEUTRON_METADATA_SECRET*
Reconfigure nova to use neutron. Replace *NEUTRON_PASS*, *NEUTRON_METADATA_SECRET* and *XAPI_BRIDGE* with your own:
# vim /etc/nova/nova.conf
[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
region_name = RegionOne
project_name = service
username = neutron
password = *NEUTRON_PASS*
service_metadata_proxy = True
metadata_proxy_shared_secret = *NEUTRON_METADATA_SECRET*
ovs_bridge = *XAPI_BRIDGE*
Symlink the ml2 configuration file to neutron’s plugin.ini file:
# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
Populate the neutron database:
# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
Enable and start the ovs service:
# systemctl enable openvswitch.service
# systemctl start openvswitch.service
Set up the ovs bridge to the public network:
# ovs-vsctl add-br br-eth0
# ovs-vsctl add-port br-eth0 eth0
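You can confirm the result with ovs-vsctl; the output should show br-eth0 with eth0 attached, alongside the *XAPI_BRIDGE* integration bridge:

# ovs-vsctl show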
Restart the nova service:
# systemctl restart openstack-nova-api.service
Enable and start the neutron services:
# systemctl enable neutron-server.service neutron-openvswitch-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-ovs-cleanup.service
# systemctl start neutron-server.service neutron-openvswitch-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-ovs-cleanup.service
11. Install Networking (neutron) on compute VM
This page is based on the following OpenStack Installation Guide pages:
http://docs.openstack.org/liberty/install-guide-rdo/neutron-compute-install.html
http://docs.openstack.org/liberty/install-guide-rdo/launch-instance.html
http://docs.openstack.org/liberty/install-guide-rdo/launch-instance-networks-public.html
It is also based on some steps from the following guide:
https://www.citrix.com/blogs/2015/11/30/integrating-xenserver-rdo-and-neutron/
Steps 1, 3, 4, 6, 8, 11, 14 and 15 have specific changes for the use of XenServer.
Install the neutron and ovs packages:
# yum install openstack-neutron openstack-neutron-openvswitch ebtables ipset openvswitch
Configure neutron. Replace *HOST_NAME*, *RABBIT_PASS* and *NEUTRON_PASS* with your own:
# vim /etc/neutron/neutron.conf
[DEFAULT]
rpc_backend = rabbit
auth_strategy = keystone
host = *HOST_NAME*
[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = *RABBIT_PASS*
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = neutron
password = *NEUTRON_PASS*
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
- Make sure that any connection options under [database] are deleted or commented-out.
- Delete or comment-out any pre-existing lines in the [keystone_authtoken] section.
Configure the neutron ovs agent. Replace *XAPI_BRIDGE* with your own:
# vim /etc/neutron/plugins/ml2/openvswitch_agent.ini
[ovs]
integration_bridge = *XAPI_BRIDGE*
bridge_mappings = public:xenbr0
[agent]
root_helper = neutron-rootwrap-xen-dom0 /etc/neutron/rootwrap.conf
root_helper_daemon =
minimize_polling = False
[securitygroup]
firewall_driver = neutron.agent.firewall.NoopFirewallDriver
Configure neutron rootwrap to connect to XenServer. Replace *XENSERVER_ROOT* with your own:
# vim /etc/neutron/rootwrap.conf
[xenapi]
xenapi_connection_url=http://compute1
xenapi_connection_username=root
xenapi_connection_password=*XENSERVER_ROOT*
- There are other lines already present in this file. These should be left as-is.
Reconfigure nova to use neutron. Replace *NEUTRON_PASS* with your own:
# vim /etc/nova/nova.conf
[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
region_name = RegionOne
project_name = service
username = neutron
password = *NEUTRON_PASS*
Use the helper script to install the dom0 neutron plugins:
# source rdo_xenserver_helper.sh
# install_dom0_plugins
- Enter the XenServer root password when prompted (twice).
- If you are prompted whether or not to overwrite a file under /tmp, answer y.
Restart the nova service:
# systemctl restart openstack-nova-compute.service
Enable and start the neutron service:
# systemctl enable neutron-openvswitch-agent.service
# systemctl start neutron-openvswitch-agent.service
Log on to the controller node as root.
Load the “admin” credential file:
# source admin-openrc.sh
Check the neutron agent list:
# neutron agent-list
+--------------------------------------+--------------------+---------------------------------------------+-------+----------------+---------------------------+
| id                                   | agent_type         | host                                        | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+---------------------------------------------+-------+----------------+---------------------------+
| 57c49643-3e48-4252-9665-2f22e3b93b0e | Open vSwitch agent | compute1-vm.openstack.lab.eco.rackspace.com | :-)   | True           | neutron-openvswitch-agent |
| 977ff9ae-96e5-4ef9-93d5-65a8541d7d25 | Metadata agent     | controller.openstack.lab.eco.rackspace.com  | :-)   | True           | neutron-metadata-agent    |
| ca0fb18a-b3aa-4cd1-bc5f-ba4700b4d9ce | Open vSwitch agent | controller.openstack.lab.eco.rackspace.com  | :-)   | True           | neutron-openvswitch-agent |
| d42db23f-3738-48b3-8f83-279ee29e84ef | DHCP agent         | controller.openstack.lab.eco.rackspace.com  | :-)   | True           | neutron-dhcp-agent        |
+--------------------------------------+--------------------+---------------------------------------------+-------+----------------+---------------------------+
- The list should include the ovs agent running on controller and compute1-vm.
Add rules to the default security group, allowing ICMP and all TCP ports from anywhere:
# nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
# nova secgroup-add-rule default tcp 1 65535 0.0.0.0/0
Create the public network. Replace *PUBLIC_NETWORK_CIDR*, *START_IP_ADDRESS*, *END_IP_ADDRESS*, *DNS_RESOLVER* and *PUBLIC_NETWORK_GATEWAY* with your own:
# neutron net-create public --shared --provider:physical_network public --provider:network_type flat
# neutron subnet-create public *PUBLIC_NETWORK_CIDR* --name public --allocation-pool start=*START_IP_ADDRESS*,end=*END_IP_ADDRESS* --dns-nameserver *DNS_RESOLVER* --gateway *PUBLIC_NETWORK_GATEWAY*
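As a concrete illustration only (every value here is made up; substitute your own network details), the subnet-create call might look like:

# neutron subnet-create public 203.0.113.0/24 --name public --allocation-pool start=203.0.113.101,end=203.0.113.200 --dns-nameserver 8.8.8.8 --gateway 203.0.113.1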
There is a bug regarding the network’s segmentation ID which needs to be fixed. This should be resolved in openstack-neutron-7.0.1, but if you are running an older version:
Update the segmentation_id field in the neutron database:
# mysql neutron
> update ml2_network_segments set segmentation_id=0;
> quit
Update the segmentation_id for the DHCP agent’s ovs port:
# ovs-vsctl set Port $(ovs-vsctl show | grep Port | grep tap | awk -F \" ' { print $2 } ') other_config:segmentation_id=0
15. There is a bug in Neutron which is causing available XenAPI sessions to be exhausted. I have reported this and submitted a patch in https://bugs.launchpad.net/neutron/+bug/1558721. Until the bug is fixed upstream, here is the manual patch to fix the problem:
Open the neutron-rootwrap-xen-dom0 file:
# vim /usr/bin/neutron-rootwrap-xen-dom0Locate the following lines (should start at line 117):
result = session.xenapi.host.call_plugin( host, 'netwrap', 'run_command', {'cmd': json.dumps(user_args), 'cmd_input': json.dumps(cmd_input)}) return json.loads(result)Add the following before the ‘return’ line. It should have the same indentation as the ‘return’ line:
session.xenapi.session.logout()
The whole section should now read:
result = session.xenapi.host.call_plugin(
    host, 'netwrap', 'run_command',
    {'cmd': json.dumps(user_args), 'cmd_input': json.dumps(cmd_input)})
session.xenapi.session.logout()
return json.loads(result)
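After saving the file, a quick grep should confirm the patch is in place (optional check):
# grep -n "session.logout" /usr/bin/neutron-rootwrap-xen-dom0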
12. Install Dashboard (horizon) on controller¶
This page is based on the following OpenStack Installation Guide pages:
http://docs.openstack.org/liberty/install-guide-rdo/horizon-install.html
http://docs.openstack.org/liberty/install-guide-rdo/horizon-verify.html
Step 3 has specific changes for the use of XenServer.
Install horizon packages:
# yum install openstack-dashboard
Configure horizon. Replace *TIME_ZONE* with your own (for example “America/Chicago”):
# vim /etc/openstack-dashboard/local_settings
OPENSTACK_CONTROLLER = "controller"
ALLOWED_HOSTS = ['*', ]
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': '127.0.0.1:11211',
    }
}
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
OPENSTACK_NEUTRON_NETWORK = {
    'enable_router': False,
    'enable_quotas': False,
    'enable_distributed_router': False,
    'enable_ha_router': False,
    'enable_lb': False,
    'enable_firewall': False,
    'enable_vpn': False,
    'enable_fip_topology_check': False,
}
TIME_ZONE = "*TIME_ZONE*"
OPENSTACK_API_VERSIONS = {
    "data-processing": 1.1,
    "identity": 3,
    "volume": 2,
}
- Note 1: There are many options already present in the file. These should be left as-is.
- Note 2: For the OPENSTACK_NEUTRON_NETWORK block, modify the settings listed above, rather than replacing the entire block.
There is a bug in Horizon which breaks image metadata when editing XenServer images. This has been reported at https://bugs.launchpad.net/horizon/+bug/1539722. Until the bug is fixed, here is a quick and dirty patch to avoid the problem:
Open the forms.py file:
# vim /usr/share/openstack-dashboard/openstack_dashboard/dashboards/project/images/images/forms.py
Locate the following lines (should be lines 60 and 61):
else:
    container_format = 'bare'
Add the following two lines above those lines:
elif disk_format == 'vhd':
    container_format = 'ovf'
The whole section should now read:
elif disk_format == 'vhd':
    container_format = 'ovf'
else:
    container_format = 'bare'
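As with the neutron patch above, a grep should confirm the change was saved (optional check):
# grep -n "container_format = 'ovf'" /usr/share/openstack-dashboard/openstack_dashboard/dashboards/project/images/images/forms.py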
Enable and restart the Apache and memcached services:
# systemctl enable httpd.service memcached.service
# systemctl restart httpd.service memcached.service
From a web browser, access http://*CONTROLLER_ADDRESS*/dashboard:
- Log in using the admin credentials.
- In the left-hand menu, under “Admin” and then “System”, click on “System Information”. This will display a list of compute services and network agents:
13. Build block1 storage node OS¶
This page is based on the following OpenStack Installation Guide pages:
http://docs.openstack.org/liberty/install-guide-rdo/environment-networking-storage-cinder.html
http://docs.openstack.org/liberty/install-guide-rdo/environment-ntp-other.html
http://docs.openstack.org/liberty/install-guide-rdo/environment-packages.html
- The block1 node will need to have a large second disk on which to store the cinder volumes. You may also wish to give it a large amount of storage at /var/lib/cinder/conversion (or /) if you will be writing large images to cinder volumes. It will only need a connection to the Management Network.
- Boot the block1 node with the CentOS 7.2.1511 DVD.
- Set your time zone and language.
- For “Software Selection”, set this to “Infrastructure Server”.
- Keep automatic partitioning. Allow installation only on the first disk.
- Set the block1 node's IPv4 address and hostname. Disable IPv6. Give the connection the name “eth0”.
Click on “Begin Installation”.
Set a good root password.
Once installation is complete, reboot the server, and remove the DVD/ISO from the server.
SSH in to the server as root.
Stop and disable the firewalld service:
# systemctl disable firewalld.service
# systemctl stop firewalld.service
Set SELinux to permissive mode:
# setenforce 0
# vim /etc/sysconfig/selinux
SELINUX=permissive
Update all packages on the server:
# yum update
If running the block1 node on VMware, install the VM tools:
# yum install open-vm-tools
We need persistent network interface names, so we'll configure udev to give us these. Replace 00:00:00:00:00:00 with the MAC address of your block1 node:
# vim /etc/udev/rules.d/90-persistent-net.rules
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:00:00:00:00:00", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eno*", NAME="eth0"
- Note: This file is case-sensitive, and the MAC addresses should be lower-case.
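If you need to look up the MAC address, reading it from sysfs is one simple way (eno00000001 below is this page's placeholder interface name; substitute your own):
# cat /sys/class/net/eno00000001/address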
Rename the network interface configuration file to eth0. Replace eno00000001 with the name of your block1 node's interface:
# cd /etc/sysconfig/network-scripts
# mv ifcfg-eno00000001 ifcfg-eth0
Modify the interface configuration file, replacing any instances of eno00000001 (or whatever your interface name is) with eth0:
# vim ifcfg-eth0
NAME=eth0
DEVICE=eth0
Reboot the block1 node:
# systemctl reboot
SSH back in as root after the reboot.
Check that ifconfig now shows eth0:
# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.16.0.196  netmask 255.255.255.0  broadcast 172.16.0.255
        inet6 fe80::20c:29ff:fefa:bbdc  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:fa:bb:dc  txqueuelen 1000  (Ethernet)
        RX packets 322224  bytes 137862468 (131.4 MiB)
        RX errors 0  dropped 35  overruns 0  frame 0
        TX packets 408936  bytes 108141349 (103.1 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 6  bytes 564 (564.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 6  bytes 564 (564.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

Update the system hosts file with entries for all nodes:
# vim /etc/hosts
172.16.0.192 controller controller.openstack.lab.eco.rackspace.com
172.16.0.203 compute1 compute1.openstack.lab.eco.rackspace.com
172.16.0.204 compute1-vm compute1-vm.openstack.lab.eco.rackspace.com
172.16.0.195 compute2 compute2.openstack.lab.eco.rackspace.com
172.16.0.196 block1 block1.openstack.lab.eco.rackspace.com
172.16.0.197 object1 object1.openstack.lab.eco.rackspace.com
172.16.0.198 object2 object2.openstack.lab.eco.rackspace.com
Update the chrony configuration to use the controller as a time source:
# vim /etc/chrony.conf
server controller iburst
- Remove any other servers listed, leaving only “controller”.
Restart the chrony service, and confirm that “controller” is listed as a source:
# systemctl restart chronyd.service
# chronyc sources
210 Number of sources = 1
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^* controller                    3   6    17     6  -3374ns[+2000ns] +/- 6895us
Enable the OpenStack-Liberty yum repository:
# yum install centos-release-openstack-liberty
Install the OpenStack client and SELINUX support:
# yum install python-openstackclient openstack-selinux
14. Install Block Storage (cinder) on controller¶
This page is based on the following OpenStack Installation Guide page:
http://docs.openstack.org/liberty/install-guide-rdo/cinder-controller-install.html
Open the MySQL client and create the “cinder” database. Replace *CINDER_DBPASS* with your own:
# mysql
> create database cinder;
> grant all privileges on cinder.* to 'cinder'@'localhost' identified by '*CINDER_DBPASS*';
> grant all privileges on cinder.* to 'cinder'@'%' identified by '*CINDER_DBPASS*';
> quit
Create the “cinder” user, role, services and endpoints. Provide *CINDER_PASS* when prompted:
# source admin-openrc.sh
# openstack user create --domain default --password-prompt cinder
# openstack role add --project service --user cinder admin
# openstack service create --name cinder --description "OpenStack Block Storage" volume
# openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
# openstack endpoint create --region RegionOne volume public http://controller:8776/v1/%\(tenant_id\)s
# openstack endpoint create --region RegionOne volume internal http://controller:8776/v1/%\(tenant_id\)s
# openstack endpoint create --region RegionOne volume admin http://controller:8776/v1/%\(tenant_id\)s
# openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(tenant_id\)s
# openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(tenant_id\)s
# openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(tenant_id\)s
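If you want to confirm everything registered correctly, the service and endpoint listings should now include the volume and volumev2 entries (optional check):
# openstack service list
# openstack endpoint list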
Install the cinder packages:
# yum install openstack-cinder python-cinderclient
Configure cinder. Replace *SERVER_IP*, *CINDER_DBPASS*, *CINDER_PASS* and *RABBIT_PASS* with your own:
# vim /etc/cinder/cinder.conf
[database]
connection = mysql://cinder:*CINDER_DBPASS*@controller/cinder

[DEFAULT]
rpc_backend = rabbit
auth_strategy = keystone
my_ip = *SERVER_IP*
nova_catalog_info = compute:nova:publicURL
nova_catalog_admin_info = compute:nova:adminURL

[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = *RABBIT_PASS*

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = cinder
password = *CINDER_PASS*

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
Populate the cinder database:
# su -s /bin/sh -c "cinder-manage db sync" cinder
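A quick way to confirm the schema was created is to list the new tables (optional sanity check):
# mysql cinder -e "show tables;"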
Reconfigure nova for cinder:
# vim /etc/nova/nova.conf
[cinder]
os_region_name = RegionOne
Restart the nova service:
# systemctl restart openstack-nova-api.service
Enable and start the cinder services:
# systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
# systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service
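At this point the scheduler should report as “up” (optional check; the volume service will only appear once the storage node is built on the next page):
# cinder service-list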
15. Install Block Storage (cinder) on storage node¶
This page is based on the following OpenStack Installation Guide page:
http://docs.openstack.org/liberty/install-guide-rdo/cinder-storage-install.html
Steps 3, 4, 5, 6, 8, 9 and 10 have specific changes for the use of XenServer.
Create the LVM volume group on the second disk:
# pvcreate /dev/sdb
# vgcreate cinder-volumes /dev/sdb
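You can confirm the volume group exists before moving on (optional check):
# vgs cinder-volumes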
Update the LVM configuration to prevent scanning of cinder volumes’ contents:
# vim /etc/lvm/lvm.conf
devices {
...
filter = [ "a/sda/", "a/sdb/", "r/.*/"]
- Note: Do not replace the entire “devices” section, only the “filter” line.
Enable the centos-virt-xen and epel-release repositories:
# yum install centos-release-xen epel-release
Disable kernel updates from the centos-virt-xen repository:
# vim /etc/yum.repos.d/CentOS-Xen.repo
[centos-virt-xen]
exclude=kernel*
Install special packages needed from outside of the openstack-liberty repositories:
# yum install scsi-target-utils xen-runtime
Remove the epel-release repository again:
# yum remove epel-release
Install the cinder packages:
# yum install openstack-cinder python-oslo-policy
Configure cinder. Replace *CINDER_DBPASS*, *SERVER_IP*, *RABBIT_PASS* and *CINDER_PASS* with your own:
# vim /etc/cinder/cinder.conf
[database]
connection = mysql://cinder:*CINDER_DBPASS*@controller/cinder

[DEFAULT]
rpc_backend = rabbit
auth_strategy = keystone
my_ip = *SERVER_IP*
enabled_backends = lvm
glance_host = controller

[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = *RABBIT_PASS*

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = cinder
password = *CINDER_PASS*

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = tgtadm

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
Update the tgtd.conf configuration. There are other lines in this file; don't change those, just add this one:
# vim /etc/tgt/tgtd.conf
include /var/lib/cinder/volumes/*
Enable and start the tgtd and cinder services:
# systemctl enable tgtd.service openstack-cinder-volume.service
# systemctl start tgtd.service openstack-cinder-volume.service
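To confirm tgtd is running and responding (optional check; no targets will be listed until a volume is actually attached to an instance):
# systemctl status tgtd.service
# tgtadm --lld iscsi --mode target --op show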
16. Fix cinder quotas for the demo project¶
This page is not based on the OpenStack Installation Guide. I found that a bug causes nova to believe that the demo project has a 0 quota for cinder volumes, even though cinder states that the quota is 10. Re-saving the value populates it properly in nova.
- From a web browser, access http://*CONTROLLER_ADDRESS*/dashboard.
- Log in using the admin credentials.
- In the left-hand menu, under “Identity”, click on “Projects”:
- In the “Actions” drop-down for the “demo” project, select “Modify Quotas”:
- Don’t make any changes. Just click “Save”.
17. Launch a test Boot-From-Volume instance from Horizon¶
This page is not based on the OpenStack Installation Guide.
- From a web browser, access http://*CONTROLLER_ADDRESS*/dashboard.
- Log in using the demo credentials.
- In the left-hand menu, under “Project”, and then “Compute”, click on “Instances”. Click on Launch instance:
- Give the instance the name “test bfv”, and select “Boot from image (creates a new volume)” and the “cirros-xen” image. Launch the instance:
- Once the instance enters “Active” status, click on its name:
- Click on the “Console” tab, and you should see the instance booting. Wait for the login prompt:
Once the login prompt has appeared, check that you can ping and SSH to the instance. The credentials are:
- Username: cirros
- Password: cubswin:)
In the left-hand menu, click on “Instances” again, select the “test bfv” instance in the list and click on “Terminate Instances”:
18. Build KVM Host¶
This page is based on the following OpenStack Installation Guide pages:
http://docs.openstack.org/liberty/install-guide-rdo/environment-networking-compute.html
http://docs.openstack.org/liberty/install-guide-rdo/environment-ntp-other.html
http://docs.openstack.org/liberty/install-guide-rdo/environment-packages.html
- In this guide I am using a server with a small RAID-1 for the OS, and a large RAID-10 for the VMs. There are four network interfaces, although only the first two are in use.
- Boot the KVM host with the CentOS 7.2.1511 DVD.
- Set your time zone and language.
- For “Software Selection”, set this to “Infrastructure Server”.
- Keep automatic partitioning. Allow installation only on the first disk.
- Set the node’s IPv4 address on the management network interface and disable IPv6. Give the connection the name “eth1”. Set the node’s hostname:
Click on “Begin Installation”.
Set a good root password.
Once installation is complete, reboot the server, and remove the DVD/ISO from the server.
SSH in to the server as root.
Stop and disable the firewalld service:
# systemctl disable firewalld.service
# systemctl stop firewalld.service
Set SELinux to permissive mode:
# setenforce 0
# vim /etc/sysconfig/selinux
SELINUX=permissive
Update all packages on the server:
# yum update
We need persistent network interface names, so we'll configure udev to give us these. Replace 00:00:00:00:00:00 with the MAC addresses of your KVM node:
# vim /etc/udev/rules.d/90-persistent-net.rules
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:00:00:00:00:00", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="em*", NAME="eth0"
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:00:00:00:00:00", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="em*", NAME="eth1"
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:00:00:00:00:00", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="em*", NAME="eth2"
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:00:00:00:00:00", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="em*", NAME="eth3"
- Note: This file is case-sensitive, and the MAC addresses should be lower-case.
Rename the network interface configuration files to eth0 through eth3. Replace em1, em2, em3 and em4 with the names of your KVM node's interfaces:
# cd /etc/sysconfig/network-scripts
# mv ifcfg-em1 ifcfg-eth0
# mv ifcfg-em2 ifcfg-eth1
# mv ifcfg-em3 ifcfg-eth2
# mv ifcfg-em4 ifcfg-eth3
Modify the interface configuration files, replacing any instances of em1, em2, em3 and em4 (or whatever your interface names are) with eth0, eth1, eth2 and eth3 respectively:
# vim ifcfg-eth0
NAME=eth0
DEVICE=eth0
# vim ifcfg-eth1
NAME=eth1
DEVICE=eth1
# vim ifcfg-eth2
NAME=eth2
DEVICE=eth2
# vim ifcfg-eth3
NAME=eth3
DEVICE=eth3
Reboot the KVM node:
# systemctl reboot
SSH back in as root after the reboot.
Check that ifconfig now shows eth0, eth1, eth2 and eth3:
# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        ether 14:fe:b5:ca:c5:a0  txqueuelen 1000  (Ethernet)
        RX packets 1195904  bytes 1012346616 (965.4 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 366843  bytes 28571196 (27.2 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.16.0.195  netmask 255.255.255.0  broadcast 172.16.0.255
        inet6 fe80::16fe:b5ff:feca:c5a2  prefixlen 64  scopeid 0x20<link>
        ether 14:fe:b5:ca:c5:a2  txqueuelen 1000  (Ethernet)
        RX packets 12004890  bytes 15236092868 (14.1 GiB)
        RX errors 0  dropped 156  overruns 0  frame 0
        TX packets 12647929  bytes 15934829339 (14.8 GiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

eth2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        ether 14:fe:b5:ca:c5:a4  txqueuelen 1000  (Ethernet)
        RX packets 1985034  bytes 180158767 (171.8 MiB)
        RX errors 0  dropped 252  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

eth3: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        ether 14:fe:b5:ca:c5:a6  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 9855259  bytes 517557258 (493.5 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 9855259  bytes 517557258 (493.5 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

Update the system hosts file with entries for all nodes:
# vim /etc/hosts
172.16.0.192 controller controller.openstack.lab.eco.rackspace.com
172.16.0.203 compute1 compute1.openstack.lab.eco.rackspace.com
172.16.0.204 compute1-vm compute1-vm.openstack.lab.eco.rackspace.com
172.16.0.195 compute2 compute2.openstack.lab.eco.rackspace.com
172.16.0.196 block1 block1.openstack.lab.eco.rackspace.com
172.16.0.197 object1 object1.openstack.lab.eco.rackspace.com
172.16.0.198 object2 object2.openstack.lab.eco.rackspace.com
Update the chrony configuration to use the controller as a time source:
# vim /etc/chrony.conf
server controller iburst
- Remove any other servers listed, leaving only “controller”.
Restart the chrony service, and confirm that “controller” is listed as a source:
# systemctl restart chronyd.service
# chronyc sources
210 Number of sources = 1
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^* controller                    3   6    17     6  -3374ns[+2000ns] +/- 6895us
Enable the OpenStack-Liberty yum repository:
# yum install centos-release-openstack-liberty
Install the OpenStack client and SELINUX support:
# yum install python-openstackclient openstack-selinux
19. Install Compute (nova) on KVM Host¶
This page is based on the following OpenStack Installation Guide pages:
http://docs.openstack.org/liberty/install-guide-rdo/nova-compute-install.html
http://docs.openstack.org/liberty/install-guide-rdo/cinder-storage-install.html
http://docs.openstack.org/liberty/install-guide-rdo/nova-verify.html
Install nova packages:
# yum install openstack-nova-compute sysfsutils
Format and mount the second array for instance storage:
# parted -s -- /dev/sdb mklabel gpt
# parted -s -a optimal -- /dev/sdb mkpart primary 2048s -1
# parted -s -- /dev/sdb align-check optimal 1
# parted /dev/sdb set 1 lvm on
# parted /dev/sdb unit s print
# mkfs.xfs /dev/sdb1
# mount /dev/sdb1 /var/lib/nova/instances
# tail -1 /etc/mtab >> /etc/fstab
# chown nova:nova /var/lib/nova/instances
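You can confirm the new filesystem is mounted where nova expects it (optional check):
# df -h /var/lib/nova/instances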
Update the LVM configuration to prevent scanning of instances’ contents:
# vim /etc/lvm/lvm.conf
devices {
...
filter = [ "a/sda/", "a/sdb/", "r/.*/"]
- Note: Do not replace the entire “devices” section, only the “filter” line.
Configure nova. Replace *SERVER_IP*, *RABBIT_PASS*, *NOVA_PASS* and *CONTROLLER_ADDRESS* with your own:
# vim /etc/nova/nova.conf
[DEFAULT]
rpc_backend = rabbit
auth_strategy = keystone
my_ip = *SERVER_IP*
network_api_class = nova.network.neutronv2.api.API
security_group_api = neutron
linuxnet_interface_driver = nova.network.linux_net.NeutronLinuxBridgeInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver

[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = *RABBIT_PASS*

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = nova
password = *NOVA_PASS*

[vnc]
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = $my_ip
novncproxy_base_url = http://*CONTROLLER_ADDRESS*:6080/vnc_auto.html

[glance]
host = controller

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[libvirt]
virt_type = kvm
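Before starting the services, it is worth confirming that the host supports hardware virtualization. If the command below returns 0, virt_type = kvm will not work and virt_type = qemu would be needed instead:
# egrep -c '(vmx|svm)' /proc/cpuinfo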
Enable and start the nova and libvirt services:
# systemctl enable libvirtd.service openstack-nova-compute.service
# systemctl start libvirtd.service openstack-nova-compute.service
Log on to the control node as root.
Load the “admin” credential file:
# source admin-openrc.sh
Check the nova service list:
# nova service-list
+----+------------------+---------------------------------------------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary           | Host                                        | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+----+------------------+---------------------------------------------+----------+---------+-------+----------------------------+-----------------+
| 1  | nova-consoleauth | controller.openstack.lab.eco.rackspace.com  | internal | enabled | up    | 2016-02-09T17:19:38.000000 | -               |
| 2  | nova-scheduler   | controller.openstack.lab.eco.rackspace.com  | internal | enabled | up    | 2016-02-09T17:19:41.000000 | -               |
| 3  | nova-conductor   | controller.openstack.lab.eco.rackspace.com  | internal | enabled | up    | 2016-02-09T17:19:41.000000 | -               |
| 4  | nova-cert        | controller.openstack.lab.eco.rackspace.com  | internal | enabled | up    | 2016-02-09T17:19:38.000000 | -               |
| 5  | nova-compute     | compute1-vm.openstack.lab.eco.rackspace.com | nova     | enabled | up    | 2016-02-09T17:19:39.000000 | -               |
| 6  | nova-compute     | compute2.openstack.lab.eco.rackspace.com    | nova     | enabled | up    | 2016-02-09T17:19:36.000000 | -               |
+----+------------------+---------------------------------------------+----------+---------+-------+----------------------------+-----------------+
- The list should include compute1-vm and compute2 running nova-compute.
20. Install Networking (neutron) on KVM Host¶
This page is based on the following OpenStack Installation Guide pages:
http://docs.openstack.org/liberty/install-guide-rdo/neutron-compute-install.html
All steps except 2 have modifications for XenServer.
Install the neutron and ovs packages:
# yum install openstack-neutron openstack-neutron-openvswitch ebtables ipset openvswitch
Configure neutron. Replace *RABBIT_PASS* and *NEUTRON_PASS* with your own:
# vim /etc/neutron/neutron.conf
[DEFAULT]
rpc_backend = rabbit
auth_strategy = keystone

[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = *RABBIT_PASS*

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = neutron
password = *NEUTRON_PASS*

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
- Make sure that any connection options under [database] are deleted or commented-out.
- Delete or comment-out any pre-existing lines in the [keystone_authtoken] section.
Configure the neutron ovs agent. Replace *XAPI_BRIDGE* with your own:
# vim /etc/neutron/plugins/ml2/openvswitch_agent.ini
[ovs]
integration_bridge = *XAPI_BRIDGE*
bridge_mappings = public:br-eth0

[securitygroup]
firewall_driver = neutron.agent.firewall.NoopFirewallDriver
Reconfigure nova to use neutron. Replace *NEUTRON_PASS* and *XAPI_BRIDGE* with your own:
# vim /etc/nova/nova.conf
[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
region_name = RegionOne
project_name = service
username = neutron
password = *NEUTRON_PASS*
ovs_bridge = *XAPI_BRIDGE*

[DEFAULT]
linuxnet_ovs_integration_bridge = *XAPI_BRIDGE*
Enable and start the ovs service:
# systemctl enable openvswitch.service
# systemctl start openvswitch.service
Set up the ovs bridge to the public network:
# ovs-vsctl add-br br-eth0
# ovs-vsctl add-port br-eth0 eth0
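You can verify the bridge and its port (optional check; the output should show br-eth0 with eth0 attached):
# ovs-vsctl show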
Enable and start the neutron service:
# systemctl enable neutron-openvswitch-agent.service
# systemctl start neutron-openvswitch-agent.service
21. Update images for dual-hypervisor environment¶
This page is not based on the OpenStack Installation Guide.
Log on to the controller node as root.
Download the cirros image for KVM hypervisors:
# wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
Upload the image to glance:
# source admin-openrc.sh
# glance image-create --name "cirros-kvm" --file cirros-0.3.4-x86_64-disk.img --disk-format qcow2 --container-format bare --visibility public --progress
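To confirm the upload, list the images; both “cirros-kvm” and the earlier “cirros-xen” should show as active (optional check):
# glance image-list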
From a web browser, access http://*CONTROLLER_ADDRESS*/dashboard. Log in using the admin credentials.
In the left-hand menu, under “Admin”, and then “System”, click on “Images”. Click on the “cirros-kvm” image:
- In the top-right drop-down, click on “Update Metadata”:
- On the left-hand side, in the “custom” box, enter “hypervisor_type”, and then click on the + button:
- Now, on the right-hand side, in the “hypervisor_type” box, enter “kvm” and click “Save”:
- In the left-hand menu, under “Admin”, and then “System”, again click on “Images”. This time click on the “cirros-xen” image.
- Again click on “Update Metadata” in the drop-down. Follow the same steps, but set “hypervisor_type” to “xen”:
22. Create Xen CentOS 7 Image¶
This page is not based on the OpenStack Installation Guide.
Log on to the control node as root.
Download the CentOS 7 ISO, and upload it to glance:
# wget http://mirror.rackspace.com/CentOS/7.2.1511/isos/x86_64/CentOS-7-x86_64-NetInstall-1511.iso
# source admin-openrc.sh
# glance image-create --name "CentOS 7 ISO" --file CentOS-7-x86_64-NetInstall-1511.iso --disk-format iso --container-format bare --visibility public --progress
From a web browser, access http://*CONTROLLER_ADDRESS*/dashboard. Log in using the admin credentials.
In the left-hand menu, under “Admin”, and then “System”, click on “Hypervisors”:
- Click on the “Compute Host” tab:
- Next to “compute2”, click on “Disable Service”.
- Enter a reason of “Building Xen image”, and click “Disable Service”:
- In the left-hand menu, under “Project”, and then “Compute”, click on “Instances”. Click on “Launch Instance”.
- Give the instance the name “centos7-xen-build”, use the flavor m1.small (for a 20GB disk), and select “Boot from image” and the “CentOS 7 ISO” image. Launch the instance:
- Wait for the instance to enter “Active” state. Then click on the instance. Click on the “Console” tab, and then click on the grey “Connected (unencrypted) to: QEMU” bar so that keyboard input will be directed to the console:
- Highlight “Install CentOS 7”, and press Enter. Wait for the installer to start:
- Set language and timezone.
- Click on “Network & Hostname”. Enable the network interface by setting the switch to “On”:
- Click on “Installation Source”. Set the source to network, and then define a known-good mirror. You can use http://mirror.rackspace.com/CentOS/7.2.1511/os/x86_64/.
- Click on “Installation Destination”. Select “I will configure partitioning” and click on “Done”:
- Click the arrow next to the word “Unknown” to expand that section and display the partition. Select “Reformat”, set the file system to “ext4”, and set the mount point to “/”. Click Done:
- A yellow warning bar will appear. Click “Done” again, and then click on “Accept Changes”.
- Click on “Software Selection”. Select “Infrastructure Server”, and click “Done”.
- Click “Begin Installation”. Click on “Root Password” and set a good password.
- Once installation is complete, click “Reboot”.
- When reboot completes, your connection to the console will likely die. Refresh the page, click on the “Console” tab again, and then click on the grey banner again.
- The server will be attempting to boot from the ISO once more. Press any key to stop the countdown.
- In the top-right of the page, click the “Create Snapshot” button:
- Call the image “centos7-xen-initialkick” and click on “Create Snapshot”:
- Horizon will show the “Images” page. Wait until “centos7-xen-initialkick” reaches “Active” status, and then click on the image.
- In the top-right drop-down, click on “Update Metadata”.
- On the left-hand side, in the “custom” box, enter “vm_mode” and click on the + button.
- On the right-hand side, in the “vm_mode” box, enter “hvm”.
- On the left-hand side, in the “custom” box, enter “hypervisor_type” and click on the + button.
- On the right-hand side, in the “hypervisor_type” box, enter “xen”, and click on the “Save” button:
- In the left-hand menu, under “Project”, and then “Compute”, click on “Instances”.
- Highlight the “centos7-xen-build” instance, and click on “Terminate Instances”.
- Click “Terminate Instance” again to confirm:
- Click on “Launch Instance”. Give the instance the name “centos7-xen-build”, use the flavor m1.small (for a 20GB disk), and select “Boot from image” and the “centos7-xen-initialkick” image. Launch the instance:
Wait for the instance to enter “Active” state. SSH to the new instance as “root”, using the root password used during setup.
Delete the static hostname file:
# rm /etc/hostname
Stop and disable the firewalld service:
# systemctl disable firewalld.service
# systemctl stop firewalld.service
Set SELinux to permissive mode:
# setenforce 0
# vim /etc/sysconfig/selinux
SELINUX=permissive
Update all packages on the server:
# yum update
Download and install the XenServer tools:
# wget http://boot.rackspace.com/files/xentools/xs-tools-6.5.0-20200.iso
# mkdir /mnt/cdrom
# mount -o loop xs-tools-6.5.0-20200.iso /mnt/cdrom
# cd /mnt/cdrom/Linux
# rpm -Uvh xe-guest-utilities-xenstore-6.5.0-1427.x86_64.rpm xe-guest-utilities-6.5.0-1427.x86_64.rpm
# cd ~
# umount /mnt/cdrom
# rm xs-tools-6.5.0-20200.iso
Reboot the instance:
# systemctl reboot
Wait for the server to reboot, and then log back in as root.
Install the nova-agent:
# rpm -Uvh https://github.com/rackerlabs/openstack-guest-agents-unix/releases/download/1.39.1/nova-agent-1.39-1.x86_64.rpm
Create a CentOS 7.2-compatible systemd unit file for the nova-agent service:
# vim /usr/lib/systemd/system/nova-agent.service
[Unit]
Description=nova-agent service
After=xe-linux-distribution.service

[Service]
EnvironmentFile=/etc/nova-agent.env
ExecStart=/usr/sbin/nova-agent -n -l info /usr/share/nova-agent/nova-agent.py

[Install]
WantedBy=multi-user.target
Create a python environment file for the nova-agent service:
# vim /etc/nova-agent.env
LD_LIBRARY_PATH="${LD_LIBRARY_PATH}:/usr/share/nova-agent/1.39.1/lib"
PYTHONPATH="${PYTHONPATH}:/usr/share/nova-agent/1.39.1/lib/python2.6/site-packages:/usr/share/nova-agent/1.39.1/lib/python2.6/"
Reload systemd to import the new unit file:
# systemctl daemon-reload
Enable and start the nova-agent service:
# systemctl enable nova-agent.service
# systemctl start nova-agent.service
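You can confirm the agent came up without errors (optional check):
# systemctl status nova-agent.service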
Remove the static network configuration file:
# rm /etc/sysconfig/network-scripts/ifcfg-eth0
Clear the root bash history:
# rm /root/.bash_history; history -c
In horizon, click the “Create Snapshot” button next to the instance. Name the image “CentOS 7 (Xen)”:
- Wait for the image to go to “Active” state and then, from the drop-down box next to the image, click on “Update Metadata”.
- On the left-hand side, in the “Custom” box, enter “xenapi_use_agent”, and then click the + button.
- On the right-hand side, in the “xenapi_use_agent” box, enter “true” and then click the Save button:
- In the drop-down box next to the image, click on “Edit Image”.
- Check the “public” and “protected” boxes, and click on “Update Image”:
- Select the “centos7-xen-initialkick” image, and click on “Delete Images”. Click “Delete Images” to confirm:
- In the left-hand menu, under “Project” and then “Compute”, click on “Instances”.
- Highlight the “centos7-xen-build” instance, and click on “Terminate Instances”. Click “Terminate Instances” to confirm:
- In the left-hand menu, under “Admin” and then “System” click on “Hypervisors”. Next to “compute2”, click on “Enable Service”.
23. Launch test Xen CentOS 7 Instance¶
This page is not based on the OpenStack Installation Guide.
- From a web browser, access http://*CONTROLLER_ADDRESS*/dashboard.
- Log in using the demo credentials.
- In the left-hand menu, under “Project”, and then “Compute”, click on “Access & Security”. Click on the “Key Pairs” tab:
- If you have an SSH keypair already available which you would like to use, click on “Import Key Pair”. Give the key a name and then paste in your public key:
- Alternatively, if you would like to create a new pair, click on “Create Key Pair”. Give the key a name and click on “Create Key Pair”. Download the key for use in your SSH client:
- In the left-hand menu, under “Project”, and then “Compute”, click on “Instances”.
- Click on “Launch Instance”. Name the instance “centos7-test”, select the “m1.small” flavor, and “boot from image”. Choose the “CentOS 7 (Xen)” image. Before clicking on “Launch”, click on the “Access & Security” tab:
- Ensure that the key pair you just created or imported is selected, and then click on Launch:
- Wait for the instance to go to “Active” state, and then SSH to the server as “root”, using the key pair you just created or imported.
- When you are satisfied that the test instance is working, select it and then click on “Terminate Instances”. Click on “Terminate Instances” to confirm.
24. Create KVM CentOS 7 Image¶
This page is not based on the OpenStack Installation Guide.
- From a web browser, access http://*CONTROLLER_ADDRESS*/dashboard.
- Log in using the admin credentials.
- In the left-hand menu, under “Admin”, and then “System”, click on “Hypervisors”:
- Click on the “Compute Host” tab:
- Next to “compute1-vm”, click on “Disable Service”.
- Enter a reason of “Building KVM image”, and click “Disable Service”:
- In the left-hand menu, under “Project”, and then “Compute”, click on “Instances”. Click on “Launch Instance”.
- Give the instance the name “centos7-kvm-build”, use the flavor m1.small (for a 20GB disk), and select “Boot from image” and the “CentOS 7 ISO” image. Launch the instance:
- Wait for the instance to enter “Active” state. Then, in the left-hand menu, under “Project”, and then “Compute”, click on “Volumes”. Click on “Create Volume”.
- Name the volume “centos7-kvm-build”, and set the size to 20 GB. Click “Create Volume”:
- Once the volume enters “Available” status, click the “Actions” drop-down next to the volume, and select “Manage Attachments”.
- Under “Attach to instance”, select “centos7-kvm-build”, and click “Attach Volume”:
- In the left-hand menu, under “Project”, and then “Compute”, click on “Instances”. Under the “Actions” drop-down for the “centos7-kvm-build” instance, click on “Hard Reboot Instance”. Click on “Hard Reboot Instance” to confirm:
- Wait for the instance to go back to “Active” state, and then click on the instance. Click on the “Console” tab, and then click on the grey “Connected (unencrypted) to: QEMU” bar so that keyboard input will be directed to the console:
- Highlight “Install CentOS 7”, and press Enter.
- Wait for the installer to boot:
- Select language and set the timezone.
- Click on “Network & Hostname” and activate the network interface by setting the switch to “On”:
- Click on “Installation Source”. Set the source to network, and then define a known-good mirror. You can use http://mirror.rackspace.com/CentOS/7.2.1511/os/x86_64/.
- Click on “Installation Destination”. Select “I will configure partitioning” and click on “Done”:
- Under “New mount points will use the following partition scheme”, select “Standard Partition”.
- Click on the + button. Set the mount point to / and click “Add mount point”:
- Set “File System” to “ext4”, and then click “Done”:
- A yellow warning bar will appear. Click “Done” again, and then click on “Accept Changes”:
- Click “Begin installation”. Click on “Root Password” and set a good password.
- Once installation is complete, click “Reboot”.
- The server will be attempting to boot from the ISO once more. Press any key to stop the countdown.
- In the left-hand menu, under “Project” and then “Compute”, click on “Instances”. Select the “centos7-kvm-build” instance, and then click on “Terminate Instances”. Click “Terminate Instances” to confirm:
- In the left-hand menu, under “Project” and then “Compute”, click on Volumes.
- Click on the “Actions” drop-down next to “centos7-kvm-build”, and click on “Upload to Image”. Name the image “centos7-kvm-initialkick”, and set the “Disk Format” to “QCOW2”. Upload the image:
- The volume will go to “Uploading” state. Wait for this to return to “Available” state.
- In the left-hand menu, under “Project” and then “Compute”, click on “Images”. Click on the “centos7-kvm-initialkick” image, which should be in “Active” state.
- In the top-right drop-down, click on “Update Metadata”.
- On the left-hand side, in the “custom” box, enter “hypervisor_type” and click on the + button.
- On the right-hand side, in the “hypervisor_type” box, enter “kvm”.
- On the left-hand side, in the “custom” box, enter “auto_disk_config”, and click on the + button.
- On the right-hand side, in the “auto_disk_config” box, enter “true”.
- On the left-hand side, in the “custom” box, enter “hw_qemu_guest_agent” and click on the + button.
- On the right-hand side, in the “hw_qemu_guest_agent” box, enter “true”, and click on the “Save” button:
- In the left-hand menu, under “Project”, and then “Compute”, click on “Volumes”. Highlight the “centos7-kvm-build” volume, and click on “Delete Volumes”. Click “Delete Volumes” to confirm:
- In the left-hand menu, under “Project” and then “Compute”, click on “Instances”.
- Click on “Launch Instance”. Give the instance the name “centos7-kvm-build”, use the flavor m1.small (for a 20GB disk), and select “Boot from image” and the “centos7-kvm-initialkick” image. Launch the instance:
Wait for the instance to enter “Active” state. SSH to the new instance as “root”, using the root password used during setup.
Delete the static hostname file:
# rm /etc/hostname
Stop and disable the firewalld service:
# systemctl disable firewalld.service
# systemctl stop firewalld.service
Set SELinux to permissive mode:
# setenforce 0
# vim /etc/sysconfig/selinux
SELINUX=permissive
Update all packages on the instance:
# yum update
Install the qemu guest agent, cloud-init and cloud-utils:
# yum install qemu-guest-agent cloud-init cloud-utils
Enable and start the qemu-guest-agent service:
# systemctl enable qemu-guest-agent.service
# systemctl start qemu-guest-agent.service
Enable kernel console logging:
# vim /etc/sysconfig/grub
Append “console=ttyS0 console=tty0” to the end of the GRUB_CMDLINE_LINUX setting. For example:
GRUB_CMDLINE_LINUX="crashkernel=auto rhgb quiet console=ttyS0 console=tty0"
Rebuild the grub config file:
# grub2-mkconfig -o /boot/grub2/grub.cfg
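A grep of the generated config should confirm the console arguments were picked up (optional check):
# grep "console=ttyS0" /boot/grub2/grub.cfg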
Disable user creation at instance creation time:
# vim /etc/cloud/cloud.cfg
disable_root: 0
- Also delete the “default_user:” section under “system_info”.
Delete the static network configuration file:
# rm /etc/sysconfig/network-scripts/ifcfg-eth0
Clear the root bash history:
# rm /root/.bash_history; history -c
In horizon, click the “Create Snapshot” button next to the instance. Name the image “CentOS 7 (KVM)”:
- Wait for the image to go to “Active” state and then, in the drop-down box next to the image, click on “Edit Image”.
- Check the “public” and “protected” boxes, and click on “Update Image”:
- Select the “centos7-kvm-initialkick” image, and click on “Delete Images”. Click “Delete Images” to confirm:
- In the left-hand menu, under “Project” and then “Compute”, click on “Instances”.
- Highlight the “centos7-kvm-build” instance, and click on “Terminate Instances”. Click “Terminate Instances” to confirm:
- In the left-hand menu, under “Admin” and then “System” click on “Hypervisors”. Next to “compute1-vm”, click on “Enable Service”.
25. Create test KVM CentOS 7 Instance¶
This page is not based on the OpenStack Installation Guide.
- From a web browser, access http://*CONTROLLER_ADDRESS*/dashboard.
- Log in using the demo credentials.
- In the left-hand menu, under “Project”, and then “Compute”, click on “Instances”.
- Click on “Launch Instance”. Name the instance “centos7-test”, select the “m1.small” flavor, and “boot from image”. Choose the “CentOS 7 (KVM)” image. Before clicking on “Launch”, click on the “Access & Security” tab:
- Ensure that the key pair you just created or imported on page 23 is selected, and then click on Launch:
- Wait for the instance to go to “Active” state, and then SSH to the server as “root”, using the key pair you previously created or imported.
- When you are satisfied that the test instance is working, select it and then click on “Terminate Instances”. Click on “Terminate Instances” to confirm: