Welcome to the CloudStack Administration Documentation

_images/acslogo.png

Warning

We are in the process of changing documentation format as well as hosting mechanism. Please be patient with us as we migrate our entire documentation to this new setup.

This guide is aimed at Administrators of a CloudStack based Cloud. For Release Notes, Installation, and a general introduction to CloudStack, see the following guides:

User Interface

Log In to the UI

CloudStack provides a web-based UI that can be used by both administrators and end users. The appropriate version of the UI is displayed depending on the credentials used to log in. The UI is available in popular browsers including IE7, IE8, IE9, Firefox 3.5+, Firefox 4, Safari 4, and Safari 5. The URL is: (substitute your own management server IP address)

http://<management-server-ip-address>:8080/client

On a fresh Management Server installation, a guided tour splash screen appears. On later visits, you'll see a login screen where you specify a username and password to proceed to your Dashboard.

Username -> The user ID of your account. The default username is admin.

Password -> The password associated with the user ID. The password for the default username is password.

Domain -> If you are a root user, leave this field blank.

If you are a user in a sub-domain, enter the full path to the domain, excluding the root domain.

For example, suppose multiple levels are created under the root domain, such as Comp1/hr. Users in the Comp1 domain should enter Comp1 in the Domain field, whereas users in the Comp1/sales domain should enter Comp1/sales.

For more guidance about the choices that appear when you log in to this UI, see Logging In as the Root Administrator.

End User's UI Overview

The CloudStack UI helps users of cloud infrastructure view and use their cloud resources, including virtual machines, templates and ISOs, data volumes and snapshots, guest networks, and IP addresses. If the user is a member or administrator of one or more CloudStack projects, the UI can provide a project-oriented view.

Root Administrator's UI Overview

The CloudStack UI helps the CloudStack administrator provision, view, and manage the cloud infrastructure, domains, user accounts, projects, and configuration settings. The first time you start the UI after a fresh Management Server installation, you can choose to follow a guided tour to provision your cloud infrastructure. On subsequent logins, the dashboard of the logged-in user appears. The various links in this screen and the navigation bar on the left provide access to a variety of administrative functions. The root administrator can also use the UI to perform all the same tasks that are present in the end user's UI.

Logging In as the Root Administrator

After the Management Server software is installed and running, you can run the CloudStack user interface. This UI is there to help you provision, view, and manage your cloud infrastructure.

  1. Open your favorite Web browser and go to this URL. Substitute the IP address of your own Management Server:

    http://<management-server-ip-address>:8080/client
    

    On a fresh Management Server installation, a guided tour splash screen appears. On later visits, you'll be taken directly into the Dashboard.

  2. If you see the first-time splash screen, choose one of the following.

    • **Continue with basic setup.** Choose this if you're just trying CloudStack, and you want a guided walkthrough of the simplest possible configuration so that you can get started right away. We'll help you set up a cloud with the following features: a single machine that runs CloudStack software and uses NFS to provide storage; a single machine running VMs under the XenServer or KVM hypervisor; and a shared public network.

      The prompts in this guided tour should give you all the information you need, but if you want just a bit more detail, you can follow along in the Trial Installation Guide.

    • I have used CloudStack before. Choose this if you have already gone through a design phase and planned a more sophisticated deployment, or you are ready to start scaling up a trial cloud that you set up earlier with the basic setup screens. In the Administrator UI, you can use the more powerful features of CloudStack, such as advanced VLAN networking, high availability, additional network elements such as load balancers and firewalls, and support for multiple hypervisors including Citrix XenServer, KVM, and VMware vSphere.

      The root administrator Dashboard appears.

  3. You should set a new root administrator password. If you chose basic setup, you'll be prompted to create a new password right away. If you chose experienced user, use the steps in :ref:`changing-root-password`.

Warning

You are logging in as the root administrator. This account manages the CloudStack deployment, including physical infrastructure. The root administrator can modify configuration settings to change basic functionality, create or delete user accounts, and take many actions that should be performed only by an authorized person. Please change the default password to a new, unique password.

Changing the Root Password

During installation and ongoing cloud administration, you will need to log in to the UI as the root administrator. The root administrator account manages the CloudStack deployment, including the physical infrastructure. The root administrator can modify configuration settings to change basic functionality, create or delete user accounts, and take many actions that should be performed only by an authorized person. When first installing CloudStack, be sure to change the default password to a new, unique value.

  1. Open your favorite Web browser and go to this URL. Substitute the IP address of your own Management Server:

    http://<management-server-ip-address>:8080/client
    
  2. Log in to the UI using the current root user ID and password. The default is admin, password.

  3. Click Accounts.

  4. Click the admin account name.

  5. Click View Users.

  6. Click the admin user name.

  7. Click the Change Password button.

  8. Type the new password, and click OK.

Managing Accounts, Users and Domains

Accounts, Users, and Domains

Accounts

An account typically represents a customer of the service provider or a department in a large organization. Multiple users can exist in an account.

Accounts are grouped by domains. Domains usually contain multiple accounts that have some logical relationship to each other, along with a set of delegated administrators with some authority over the domain and its subdomains. For example, a service provider with several resellers could create a domain for each reseller.

For each account created, the CloudStack installation supports three different types of user accounts: root administrator, domain administrator, and user.

Users

Users are like aliases in the account. Users in the same account are not isolated from each other, but they are isolated from users in other accounts. Most installations need not surface the notion of users; they just have one user per account. The same user cannot belong to multiple accounts.

A username is unique in a domain across all accounts in that domain. The same username can exist in other domains, including sub-domains. A domain name can repeat only if the full pathname from root is unique. For example, you can create root/d1, as well as root/foo/d1 and root/sales/d1.
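The two uniqueness rules above can be sketched as simple checks; a minimal illustration in Python, where the account/domain names are purely made up for the example:

```python
# Sketch of the naming rules described above: usernames must be unique across
# accounts within one domain (but may repeat in other domains), while domain
# names may repeat as long as the full path from root is unique.

def username_allowed(existing, domain, username):
    """True if `username` is not already taken anywhere in `domain`."""
    return (domain, username) not in existing

def domain_path_allowed(existing_paths, new_path):
    """Domain names may repeat; only the full path from root must be unique."""
    return new_path not in existing_paths

existing_users = {("root/foo", "alice"), ("root/bar", "alice")}
print(username_allowed(existing_users, "root/foo", "alice"))  # False: taken in this domain
print(username_allowed(existing_users, "root/baz", "alice"))  # True: other domains may reuse it

paths = {"root/d1", "root/foo/d1"}
print(domain_path_allowed(paths, "root/sales/d1"))  # True: same leaf name, unique full path
```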

Administrators are accounts with special privileges in the system. There may be multiple administrators in the system. Administrators can create or delete other administrators, and change the password for any user in the system.

Domain Administrators

Domain administrators can perform administrative operations for users who belong to that domain. Domain administrators do not have visibility into physical servers or other domains.

Root Administrator

Root administrators have complete access to the system, including managing templates, service offerings, customer care administrators, and domains.

Resource Ownership

Resources belong to the account, not to individual users in that account. For example, billing, resource limits, and so on are maintained by the account, not the users. A user can operate on any resource in the account, provided the user has privileges for that operation. The privileges are determined by the role. A root administrator can change the ownership of any virtual machine from one account to any other account by using the assignVirtualMachine API. A domain or sub-domain administrator can do the same for VMs within the domain, including its sub-domains.
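The ownership transfer mentioned above goes through the assignVirtualMachine API command. As a hedged sketch, the query portion of such a call might be composed as below; the VM/account/domain IDs are placeholders, and a real request must additionally be signed with the caller's API key and secret:

```python
from urllib.parse import urlencode

# Illustrative sketch only: building the (unsigned) query string for an
# assignVirtualMachine call. IDs are placeholders; real CloudStack requests
# also require apiKey and signature parameters.

def assign_vm_url(base, vm_id, account, domain_id):
    params = {
        "command": "assignVirtualMachine",
        "virtualmachineid": vm_id,
        "account": account,
        "domainid": domain_id,
        "response": "json",
    }
    return base + "?" + urlencode(sorted(params.items()))

url = assign_vm_url("http://<management-server-ip-address>:8080/client/api",
                    "vm-1234", "acme-sales", "dom-42")
print(url)
```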

Dedicating Resources to Accounts and Domains

The root administrator can dedicate resources to a specific domain or account that needs private infrastructure for additional security or performance guarantees. A zone, pod, cluster, or host can be reserved by the root administrator for a specific domain or account. Only users in that domain or its subdomains may use the infrastructure. For example, only users in a given domain can create guests in a zone dedicated to that domain.

There are several types of dedication available:

  • Explicit dedication. A zone, pod, cluster, or host is dedicated to an account or domain by the root administrator during initial deployment and configuration.

  • Strict implicit dedication. A host will not be shared across multiple accounts. For example, strict implicit dedication is useful for deploying certain types of applications, such as desktops, where no host can be shared between different accounts without violating the desktop software's terms of license.

  • Preferred implicit dedication. The VM will be deployed in dedicated infrastructure if possible. Otherwise, the VM can be deployed in shared infrastructure.

How to Dedicate a Zone, Cluster, Pod, or Host to an Account or Domain

For explicit dedication: When deploying a new zone, pod, cluster, or host, the root administrator can click the Dedicated checkbox, then choose a domain or account to own the resource.

To explicitly dedicate an existing zone, pod, cluster, or host: log in as the root admin, find the resource in the UI, and click the Dedicate button.

For implicit dedication: The administrator creates a compute service offering and, in the Deployment Planner field, chooses ImplicitDedicationPlanner. Then, in Planner Mode, the administrator specifies either Strict or Preferred, depending on whether it is permissible to allow some use of shared resources when dedicated resources are not available. Whenever a user creates a VM based on this service offering, it is allocated on one of the dedicated hosts.

How to Use Dedicated Hosts

To use an explicitly dedicated host, use the explicit-dedicated type of affinity group (see "Affinity Groups"). For example, when creating a new VM, an end user can choose to place it on dedicated infrastructure. This operation will succeed only if some infrastructure has already been dedicated to the user's account or domain.

Behavior of Dedicated Hosts, Clusters, Pods, and Zones

The administrator can live migrate VMs away from dedicated hosts if desired, whether the destination is a host reserved for a different account/domain or a host that is shared (not dedicated to any particular account or domain). CloudStack will generate an alert, but the operation is allowed.

Dedicated hosts can be used in conjunction with host tags. If both a host tag and dedication are requested, the VM will be placed only on a host that meets both requirements. If there is no dedicated resource available to that user that also has the requested host tag, then the VM will not deploy.

If you delete an account or domain, any hosts, clusters, pods, and zones that were dedicated to it are freed up. They will now be available to be shared by any account or domain, or the administrator may choose to re-dedicate them to a different account or domain.

System VMs and virtual routers affect the behavior of host dedication. System VMs and virtual routers are owned by the CloudStack system account, and they can be deployed on any host. They do not adhere to explicit dedication. The presence of system VMs and virtual routers on a host makes it unsuitable for strict implicit dedication: the host cannot be used for strict implicit dedication because it already has VMs of a specific account (the default system account). However, a host with system VMs or virtual routers can be used for preferred implicit dedication.

Using an LDAP Server for User Authentication

You can use an external LDAP server such as Microsoft Active Directory or ApacheDS to authenticate CloudStack end-users. CloudStack will search the external LDAP directory tree starting at a specified base directory and get user info such as first name, last name, email, and username.

To authenticate, the username and password entered by the user are used. CloudStack searches for a user with the given username. If it exists, it performs a bind request with the DN and password.

To set up LDAP authentication in CloudStack, call the CloudStack API command addLdapConfiguration and provide the hostname or IP address and listening port of the LDAP server. You can configure multiple servers as well. These are expected to be replicas. If one fails, the next one is used.

The following global configurations should also be configured (the default values are for OpenLDAP):

  • ldap.basedn: Sets the basedn for LDAP. Ex: OU=APAC,DC=company,DC=com
  • ldap.bind.principal, ldap.bind.password: DN and password for a user who can list all the users in the above basedn. Ex: CN=Administrator, OU=APAC, DC=company, DC=com
  • ldap.user.object: object type of users within LDAP. Default value is user for AD and inetOrgPerson for OpenLDAP.
  • ldap.email.attribute: email attribute within LDAP for a user. Default value for AD and OpenLDAP is mail.
  • ldap.firstname.attribute: firstname attribute within LDAP for a user. Default value for AD and OpenLDAP is givenname.
  • ldap.lastname.attribute: lastname attribute within LDAP for a user. Default value for AD and OpenLDAP is sn.
  • ldap.username.attribute: username attribute for a user within LDAP. Default value is SAMAccountName for AD and uid for OpenLDAP.
Restricting LDAP users to a group:
  • ldap.search.group.principle: this is optional; if set, only users from this group are listed.
LDAP SSL:
If the LDAP server requires SSL, you need to enable the configurations below.

Before enabling SSL for LDAP, you need to get the certificate which the LDAP server is using and add it to a trusted keystore. You will need to know the path to the keystore and the password.

  • ldap.truststore: truststore path
  • ldap.truststore.password: truststore password
LDAP groups:
  • ldap.group.object: object type of groups within LDAP. Default value is group for AD and groupOfUniqueNames for OpenLDAP.
  • ldap.group.user.uniquemember: attribute for unique members within a group. Default value is member for AD and uniquemember for OpenLDAP.

Once configured, on the Add Account page you will see an "Add LDAP Account" button which opens a dialog from which the selected users can be imported.

_images/CloudStack-ldap-screen1.png

You can also use the API commands: listLdapUsers, ldapCreateAccount, and importLdapUsers.

Once LDAP is enabled, users will not be allowed to change their passwords directly in CloudStack.
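The replica-failover behavior described above (configured servers are tried in order until one responds) can be sketched as follows; `probe` stands in for a real LDAP bind, and the hostnames are illustrative:

```python
# Illustrative sketch of LDAP server failover: servers configured via
# addLdapConfiguration are treated as replicas and tried in order until
# one is reachable. `probe` is a stand-in for an actual LDAP bind attempt.

def first_reachable(servers, probe):
    for host, port in servers:
        if probe(host, port):
            return (host, port)
    return None  # no replica responded

servers = [("ldap1.company.com", 389), ("ldap2.company.com", 389)]
up = {("ldap2.company.com", 389)}  # pretend ldap1 is down
print(first_reachable(servers, lambda h, p: (h, p) in up))
# ('ldap2.company.com', 389)
```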

Using Projects to Organize Users and Resources

Overview of Projects

Projects are used to organize people and resources. CloudStack users within a single domain can group themselves into project teams so they can collaborate and share virtual resources such as VMs, snapshots, templates, data disks, and IP addresses. CloudStack tracks resource usage per project as well as per user, so the usage can be billed to either a user account or a project. For example, a private cloud within a software company might have all members of the QA department assigned to one project, so the company can track the resources used in testing while the project members can more easily isolate their efforts from other users of the same cloud.

You can configure CloudStack to allow any user to create a new project, or you can restrict that ability to just CloudStack administrators. Once you have created a project, you become that project's administrator, and you can add other users within your domain to the project. CloudStack can be set up either so that you can add people directly to a project, or so that you have to send an invitation which the recipient must accept. Project members can view and manage all virtual resources created by anyone in the project (for example, share VMs). A user can be a member of any number of projects and can switch views in the CloudStack UI to show only project-related information, such as project VMs, fellow project members, project-related alerts, and so on.

The project's administrator can pass on the role to another project member. The project administrator can also add more members, remove members from the project, set new resource limits (as long as they are below the global defaults set by the CloudStack administrator), and delete the project. When the administrator removes a member from the project, resources created by that user, such as VM instances, remain with the project. This brings us to the subject of resource ownership and which resources can be used by a project.

Resources created within a project are owned by the project, not by any particular CloudStack account, and they can be used only within the project. A user who belongs to one or more projects can still create resources outside of those projects; those resources belong to the user's account and do not count against the project's usage or resource limits. You can create project-level networks to isolate traffic within the project and provide network services such as port forwarding, load balancing, VPN, and static NAT. A project can also make use of certain types of resources from outside the project, if those resources are shared. For example, shared networks and public templates in the domain are available to any project. A project can get access to private templates if the template's owner grants permission. A project can use any service offering or disk offering available in its domain; however, you cannot create private service and disk offerings at the project level.

Configuring Projects

Before CloudStack users start using projects, the CloudStack administrator must set up various systems to support them, including membership invitations, limits on project resources, and controls on who can create projects.

Setting Up Invitations

CloudStack can be set up either so that project administrators can add people directly to a project, or so that it is necessary to send an invitation which the recipient must accept. The invitation can be sent by email or through the user's CloudStack account. If you want administrators to use invitations to add members to projects, turn on and set up the invitations feature in CloudStack.

  1. Log in as administrator to the CloudStack UI.

  2. In the left navigation, click Global Settings.

  3. In the search box, type project and click the search button. |搜索项目|

  4. In the search results, you will see a few parameters you need to set to control how invitations behave. The table below shows the global configuration parameters related to project invitations. Click the edit button to set each parameter.

    Configuration Parameter

    Description

    project.invite.required

    Set to true to turn on the invitations feature.

    project.email.sender

    The email address to show in the From field of invitation emails.

    project.invite.timeout

    Amount of time a new member is allowed to respond to the invitation.

    project.smtp.host

    Name of the host that acts as the email server to handle invitations.

    project.smtp.password

    (Optional) Password required by the SMTP server. You must also set project.smtp.username and set project.smtp.useAuth to true.

    project.smtp.port

    SMTP server's listening port.

    project.smtp.useAuth

    Set to true if the SMTP server requires a username and password.

    project.smtp.username

    (Optional) Username required by the SMTP server for authentication. You must also set project.smtp.password and set project.smtp.useAuth to true.

  5. Restart the Management Server:

    service cloudstack-management restart
    
    
Setting Resource Limits for Projects

The CloudStack administrator can set global default limits to control the amount of resources that can be owned by each project in the cloud. This serves to prevent uncontrolled usage of resources such as snapshots, IP addresses, and virtual machine instances. Domain administrators can override these resource limits for individual projects within their domains, as long as the new limits are below the global defaults set by the CloudStack root administrator. The root administrator can also set lower resource limits for any project in the cloud.

Setting Per-Project Resource Limits

The CloudStack root administrator or the domain administrator of the domain where the project resides can set new resource limits for an individual project. The project owner can set resource limits only if the owner is also a domain or root administrator.

The new limits must be below the global default limits set by the CloudStack administrator (as described in `"Setting Resource Limits for Projects" <#setting-resource-limits-for-projects>`_). If the project already owns more of a given type of resource than the new maximum, the existing resources are not affected. However, the project cannot add any new resources of that type until the total drops below the new limit.

  1. Log in as administrator to the CloudStack UI.

  2. In the left navigation, click Projects.

  3. In Select View, choose Projects.

  4. Click the name of the project you want to work with.

  5. Click the Resources tab. This tab lists the current maximum amount that the project is allowed to own for each type of resource.

  6. Type new values for one or more resources.

  7. Click Apply.

Setting the Global Project Resource Limits
  1. Log in as administrator to the CloudStack UI.

  2. In the left navigation, click Global Settings.

  3. In the search box, type max.projects and click the search button.

  4. In the search results, you will see the parameters you can use to set per-project maximum resource amounts that apply to all projects in the cloud. No project can have more resources, but an individual project can have lower limits. Click the edit button to set each parameter. |编辑参数|

    max.project.public.ips

    Maximum number of public IP addresses that can be owned by any project in the cloud. See About Public IP Addresses.

    max.project.snapshots

    Maximum number of snapshots that can be owned by any project in the cloud. See Working with Snapshots.

    max.project.templates

    Maximum number of templates that can be owned by any project in the cloud. See Working with Templates.

    max.project.uservms

    Maximum number of guest virtual machines that can be owned by any project in the cloud. See Working With Virtual Machines.

    max.project.volumes

    Maximum number of data volumes that can be owned by any project in the cloud. See Working with Volumes.

  5. Restart the Management Server.

    # service cloudstack-management restart
    
    
Setting Project Creator Permissions

You can configure CloudStack to allow any user to create a new project, or you can restrict that ability to just CloudStack administrators.

  1. Log in as administrator to the CloudStack UI.

  2. In the left navigation, click Global Settings.

  3. In the search box, type allow.user.create.projects.

  4. Click the edit button to set the parameter. |编辑参数|

    allow.user.create.projects

    Set to true to allow end users to create projects. Set to false if you want only the CloudStack root administrator and domain administrators to create projects.

  5. Restart the Management Server.

    # service cloudstack-management restart
    
    

Creating a New Project

CloudStack administrators and domain administrators can create projects. If the global configuration parameter allow.user.create.projects is set to true, end users can also create projects.

  1. Log in as administrator to the CloudStack UI.

  2. In the left navigation, click Projects.

  3. In Select view, click Projects.

  4. Click New Project.

  5. Give the project a name and description, then click Create Project.

  6. A screen appears where you can immediately add more members to the project. This is optional. Click Next when you are ready to move on.

  7. Click Save.

Adding Members to a Project

New members can be added to a project by the project's administrator, the domain administrator of the domain where the project resides or any parent domain, or the CloudStack root administrator. There are two ways to add members in CloudStack, but only one way is enabled at a time:

  • If invitations have been enabled, you can send invitations to new members.

  • If invitations are not enabled, you can add members directly through the UI.

Sending Project Membership Invitations

Use these steps to add a new member to a project if the invitations feature is enabled in the cloud as described in `"Setting Up Invitations" <#setting-up-invitations>`_. If the invitations feature is not turned on, use the procedure in Adding Project Members From the UI.

  1. Log in to the CloudStack UI.

  2. In the left navigation, click Projects.

  3. In Select View, choose Projects.

  4. Click the name of the project you want to work with.

  5. Click the Invitations tab.

  6. In Add by, select one of the following:

    1. Account – The invitation will appear in the user's Invitations tab in the Project View. See Using the Project View.

    2. Email – The invitation will be sent to the user's email address. Each emailed invitation includes a unique code called a token, which the recipient will provide back to CloudStack when accepting the invitation. Email invitations will work only if the global parameters related to the SMTP server have been set up. See `"Setting Up Invitations" <#setting-up-invitations>`_.

  7. Type the username or email address of the new member you want to add, and click Invite. Type the CloudStack username if you chose Account in the previous step. If you chose Email, type the email address. You can invite only people who have an account in this cloud within the same domain as the project. However, you can send the invitation to any email address.

  8. To view and manage the invitations you have sent, return to this tab. When an invitation is accepted, the new member will appear in the project's Accounts tab.

Adding Project Members From the UI

The steps below tell how to add a new member to a project if the invitations feature is not enabled in the cloud. If the invitations feature is enabled in the cloud as described in `"Setting Up Invitations" <#setting-up-invitations>`_, use the procedure in `"Sending Project Membership Invitations" <#sending-project-membership-invitations>`_.

  1. Log in to the CloudStack UI.

  2. In the left navigation, click Projects.

  3. In Select View, choose Projects.

  4. Click the name of the project you want to work with.

  5. Click the Accounts tab. The current members of the project are listed.

  6. Type the account name of the new member you want to add, and click Add Account. You can add only people who have an account in this cloud and within the same domain as the project.

Accepting a Membership Invitation

If you have received an invitation to join a CloudStack project, and you want to accept the invitation, follow these steps:

  1. Log in to the CloudStack UI.

  2. In the left navigation, click Projects.

  3. In Select View, choose Invitations.

  4. If you see the invitation listed onscreen, click the Accept button.

    Invitations listed on screen were sent to you using your CloudStack account name.

  5. If you received an email invitation, click the Enter Token button, and provide the project ID and unique ID code (token) from the email.

Suspending or Deleting a Project

When a project is suspended, it retains the resources it owns, but they can no longer be used. No new resources or members can be added to a suspended project.

When a project is deleted, its resources are destroyed, and member accounts are removed from the project. The project's status is shown as Disabled pending final deletion.

A project can be suspended or deleted by the project administrator, the domain administrator of the domain the project belongs to or of its parent domain, or the CloudStack root administrator.

  1. Log in to the CloudStack UI.

  2. In the left navigation, click Projects.

  3. In Select View, choose Projects.

  4. Click the name of the project.

  5. Click one of the buttons:

    To delete, use |移除项目|

    To suspend, use |挂起项目|

Using the Project View

If you are a member of a project, you can use CloudStack's project view to see project members, resources consumed, and more. The project view shows only information related to one project. It is a useful way to filter out other information so you can concentrate on a project's status and resources.

  1. Log in to the CloudStack UI.

  2. Click Project View.

  3. The project dashboard appears, showing the project's VMs, volumes, users, events, network settings, and more. From the dashboard, you can:

    • Click the Accounts tab to view and manage project members. If you are the project administrator, you can add new members or change a member's role from user to admin. Only one member at a time can have the admin role, so if you set another user's role to admin, your role will change to regular user.

    • (If invitations are enabled) Click the Invitations tab to view and manage invitations that have been sent to new project members but not yet accepted. Pending invitations will remain in this list until the new member accepts, the invitation timeout is reached, or you cancel the invitation.

Service Offerings

In addition to the physical and logical infrastructure of your cloud and the CloudStack software and servers, you also need a layer of user services so that people can actually make use of the cloud. This means not just a user UI, but a set of options and resources that users can choose from, such as templates for creating virtual machines, disk storage, and more. If you are running a commercial service, you will be keeping track of what services and resources users are consuming and charging for them. Even if you do not charge anything for people to use your cloud – say, if the users are your own internal organization, or just friends sharing your cloud – you can still keep track of what services they use and how much of them.

Service Offerings, Disk Offerings, Network Offerings, and Templates

A user creating a new instance can make a variety of choices about its characteristics and capabilities. CloudStack provides several ways to present users with choices when creating a new instance:

  • Service Offerings, defined by the CloudStack administrator, provide a choice of CPU speed, number of CPUs, RAM size, tags on the root disk, and other choices. See Creating a New Compute Offering.

  • Disk Offerings, defined by the CloudStack administrator, provide a choice of disk size and IOPS (Quality of Service) for primary data storage. See Creating a New Disk Offering.

  • Network Offerings, defined by the CloudStack administrator, describe the feature set that is available to end users from the virtual router or external networking devices on a given guest network.

  • Templates, defined by the CloudStack administrator or by any CloudStack user, are the base OS images that the user can choose from when creating a new instance. For example, CloudStack includes CentOS templates. See Working with Templates.

In addition to these choices that are provided for users, there is another type of service offering which is available only to the CloudStack root administrator and is used for configuring virtual infrastructure resources. For more information, see Upgrading a Virtual Router with System Service Offerings.

Compute and Disk Service Offerings

A service offering is a set of virtual hardware features such as CPU core count and speed, memory, and disk size. The CloudStack administrator can set up various offerings, and then end users choose from the available offerings when they create a new VM. Based on the user's selected offering, CloudStack emits usage records that can be integrated with billing systems.

The CloudStack administrator can define a service offering with some characteristics set and others left undefined, to be specified by the end user. This helps reduce the number of offerings the administrator has to define: instead of defining a compute offering for every imaginable combination of values that a user might want, the administrator can define offerings that provide some flexibility to the users and can serve as the basis for several different VM configurations.

A service offering includes the following elements:

  • Guaranteed CPU, memory, and network resources

  • How resources are metered

  • How the resource usage is charged

  • How often the charges are generated

For example, one service offering might allow users to create a virtual machine instance that is equivalent to a 1 GHz Intel® Core™ 2 CPU, with 1 GB memory at $0.20/hour, with network traffic metered at $0.10 per GB.
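As a worked example of the charging model just described (a flat hourly rate plus a per-gigabyte network rate, using the figures quoted above; the billing integration itself is out of scope):

```python
# Worked example of the usage-based charge above: hourly instance rate plus
# per-GB network traffic rate. Rates are the ones quoted in the text.

def usage_charge(hours, gb_transferred, hourly_rate=0.20, per_gb=0.10):
    return round(hours * hourly_rate + gb_transferred * per_gb, 2)

print(usage_charge(24, 5))   # one day of runtime + 5 GB of traffic -> 5.3
```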

CloudStack separates service offerings into compute offerings and disk offerings. The compute service offering specifies:

  • Guest CPU (optional). If not defined by the CloudStack administrator, users can pick the CPU attributes.

  • Guest RAM (optional). If not defined by the CloudStack administrator, users can pick the RAM.

  • Guest networking type (virtual or direct)

  • Tags on the root disk

The disk offering specifies:

  • Disk size (optional). If not defined by the CloudStack administrator, users can pick the disk size.

  • Tags on the data disk

Custom Compute Offering

CloudStack provides you the flexibility to specify the desired values for the number of CPUs, CPU speed, and memory while deploying a VM. As an admin, you create a Compute Offering by marking it as custom, and users will be able to customize this dynamic Compute Offering by specifying the memory and CPU at the time of VM creation or upgrade. A custom Compute Offering is the same as a normal Compute Offering except that the values of the dynamic parameters will be set to zeros in the given set of templates. Use this offering to deploy a VM by specifying custom values for the dynamic parameters. Memory, CPU, and number of CPUs are considered dynamic parameters.

Dynamic Compute Offerings can be used in the following cases: deploying a VM, changing the compute offering of a stopped VM, and changing the compute offering of a running VM (scale up). To support this feature, a new field, Custom, has been added to the Create Compute Offering page. If the Custom field is checked, the user will be able to create a custom Compute Offering by filling in the desired values for the number of CPUs, CPU speed, and memory.

Recording Usage Events for Dynamically Assigned Resources.

To support this feature, usage events have been enhanced to register events for dynamically assigned resources. Usage events are registered when a VM is created from a custom compute offering, and upon changing the compute offering of a stopped or running VM. The values of parameters such as CPU, speed, and RAM are recorded.

Creating a New Compute Offering

To create a new compute offering:

  1. Log in with admin privileges to the CloudStack UI.

  2. In the left navigation bar, click Service Offerings.

  3. In Select Offering, choose Compute Offering.

  4. Click Add Compute Offering.

  5. In the dialog, make the following choices:

    • **Name**: Any desired name for the service offering.

    • **Description**: A short description of the offering that can be displayed to users.

    • **Storage type**: The type of disk that should be allocated. Local allocates from storage attached directly to the host where the VM is running. Shared allocates from storage accessible via NFS.

    • **Custom**: Custom compute offerings are used in the following cases: deploying a VM, changing the compute offering of a stopped VM, and changing the compute offering of a running VM (scale up).

      If the Custom field is checked, the end user must fill in the desired values for the number of CPUs, CPU speed, and RAM when using the custom compute offering. When you check this box, those three input fields are hidden in the dialog box.

    • # of CPU cores: The number of cores which should be allocated to a VM with this offering. If Custom is checked, this field does not appear.

    • **CPU (in MHz)**: The CPU speed of the cores that the VM is allocated. For example, "2000" would provide for a 2 GHz clock. If Custom is checked, this field does not appear.

    • **Memory (in MB)**: The amount of memory in megabytes that the VM should be allocated. For example, "2048" would provide for a 2 GB RAM allocation. If Custom is checked, this field does not appear.

    • Network Rate: Allowed data transfer rate in MB per second.

    • **Disk Read Rate**: Allowed disk read rate in bits per second.

    • **Disk Write Rate**: Allowed disk write rate in bits per second.

    • **Disk Read Rate (IOPS)**: Allowed disk read rate in IOPS (input/output operations per second).

    • **Disk Write Rate (IOPS)**: Allowed disk write rate in IOPS (input/output operations per second).

    • **Offer HA**: If yes, the administrator can choose to have the VM be monitored and kept as highly available as possible.

    • **Storage Tags**: The tags that should be associated with the primary storage used by the VM.

    • **Host Tags**: (Optional) Any tags that you use to organize your hosts.

    • **CPU cap**: Whether to limit the level of CPU usage even if spare capacity is available.

    • **Public**: Indicate whether the service offering should be available to all domains or only some domains. Choose Yes to make it available to all domains. Choose No to limit the scope to a subdomain; CloudStack will then prompt for the subdomain's name.

    • **isVolatile**: If checked, VMs created from this service offering will have their root disks reset upon reboot. This is useful for secure environments that need a fresh start on every boot, and for desktops that should not retain state.

    • **Deployment Planner**: Choose the technique that you would like CloudStack to use when deploying VMs based on this service offering.

      First Fit places new VMs on the first host that is found to have sufficient capacity to support the VM's requirements.

      User Dispersing makes the best effort to evenly distribute VMs belonging to the same account on different clusters or pods.

      User Concentrated prefers to deploy VMs belonging to the same account within a single pod.

      Implicit Dedication will deploy VMs on private infrastructure that is dedicated to a specific domain or account. If you choose this planner, you must also pick a value for Planner Mode. See "Dedicating Resources to Accounts and Domains".

      Bare Metal is used with bare metal hosts. See Bare Metal Installation in the Installation Guide.

    • **Planner Mode**: Used when ImplicitDedicationPlanner is selected in the previous field. The planner mode determines how VMs will be deployed on private infrastructure that is dedicated to a single domain or account.

      Strict: A host will not be shared across multiple accounts. For example, strict implicit dedication is useful for deploying certain types of applications, such as desktops, where no host can be shared between different accounts without violating the desktop software's terms of license.

      Preferred: The VM will be deployed in dedicated infrastructure if possible. Otherwise, the VM can be deployed in shared infrastructure.

  6. Click Add.

Creating a New Disk Offering

To create a new disk offering:

  1. Log in with admin privileges to the CloudStack UI.

  2. In the left navigation bar, click Service Offerings.

  3. In Select Offering, choose Disk Offering.

  4. Click Add Disk Offering.

  5. In the dialog, make the following choices:

    • Name: Any desired name for the disk offering.

    • Description: A short description of the offering that can be displayed to users.

    • Custom Disk Size: If checked, the user can set their own disk size. If not checked, the root administrator must define a value in Disk Size.

    • Disk Size: Appears only if Custom Disk Size is not selected. Define the volume size in GB.

    • QoS Type: Three options: Empty (no Quality of Service), hypervisor (rate limiting enforced on the hypervisor side), and storage (guaranteed minimum and maximum IOPS enforced on the storage side). If leveraging QoS, make sure that the hypervisor or storage system supports this feature.

    • Custom IOPS: If checked, the user can set their own IOPS. If not checked, the root administrator can define the values. If the root admin does not set values when using storage QoS, default values are used (the defaults can be overridden if the proper parameters are passed into CloudStack when creating the primary storage in question).

    • Min IOPS: Appears only if storage QoS is to be used. Set a guaranteed minimum number of IOPS to be enforced on the storage side.

    • Max IOPS: Appears only if storage QoS is to be used. Set a maximum number of IOPS to be enforced on the storage side (the system may exceed this limit for short intervals).

    • (Optional) Storage Tags: The tags that should be associated with the primary storage for this disk. Tags are a comma-separated list of attributes of the storage. For example "ssd,blue". Tags are also added on Primary Storage. CloudStack matches tags on a disk offering to tags on the storage. If a tag (or tags) is present on a disk offering, that tag must also be present on the Primary Storage where the volume will be allocated. If no such primary storage exists, allocation from the disk offering will fail.

    • **Public**: Indicate whether the offering should be available to all domains or only some domains. Choose Yes to make it available to all domains. Choose No to limit the scope to a subdomain; CloudStack will then prompt for the subdomain's name.

  6. Click Add.
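The storage-tag matching rule in the steps above (every tag on the disk offering must also be present on the primary storage) can be sketched as a subset check; the tag values below are purely illustrative:

```python
# Sketch of the tag-matching rule described above: a volume from a disk
# offering can only be allocated on primary storage that carries every tag
# the offering lists.

def storage_matches(offering_tags, storage_tags):
    return set(offering_tags) <= set(storage_tags)

print(storage_matches(["ssd", "blue"], ["ssd", "blue", "fast"]))  # True
print(storage_matches(["ssd", "blue"], ["ssd"]))                  # False: allocation would fail
```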

Modifying or Deleting a Service Offering

Service offerings cannot be changed once created. This applies to both compute offerings and disk offerings.

A service offering can be deleted. If it is no longer in use, it is deleted immediately and permanently. If the service offering is still in use, it will remain in the database until all the virtual machines referencing it have been deleted. After deletion by the administrator, the service offering will not be available to end users that are creating new instances.

System Service Offerings

System service offerings provide a choice of CPU speed, number of CPUs, tags, and RAM size, just as other service offerings do. But rather than being used for virtual machine instances and exposed to users, system service offerings are used to change the default properties of virtual routers, console proxies, and other system VMs. System service offerings are visible only to the CloudStack root administrator. CloudStack provides default system service offerings. The CloudStack root administrator can create additional custom system service offerings.

When CloudStack creates a virtual router for a guest network, it uses default settings which are defined in the system service offering associated with the network offering. You can upgrade the capabilities of the virtual router by applying a new network offering that contains a different system service offering. All virtual routers in that network will begin using the settings from the new service offering.

Creating a New System Service Offering

To create a system service offering:

  1. Log in with admin privileges to the CloudStack UI.

  2. In the left navigation bar, click Service Offerings.

  3. In Select Offering, choose System Offering.

  4. Click Add System Service Offering.

  5. In the dialog, make the following choices:

    • Name. Any desired name for the system offering.

    • Description. A short description of the offering that can be displayed to users.

    • System VM Type. Select the type of system virtual machine that this offering is intended to support.

    • Storage type. The type of disk that should be allocated. Local allocates from storage attached directly to the host where the system VM is running. Shared allocates from storage accessible via NFS.

    • # of CPU cores. The number of cores which should be allocated to a system VM with this offering.

    • CPU (in MHz). The CPU speed of the cores that the system VM is allocated. For example, "2000" would provide for a 2 GHz clock.

    • Memory (in MB). The amount of memory in megabytes that the system VM should be allocated. For example, "2048" would provide for a 2 GB RAM allocation.

    • Network Rate. Allowed data transfer rate in MB per second.

    • Offer HA. If yes, the administrator can choose to have the system VM be monitored and kept as highly available as possible.

    • Storage Tags. The tags that should be associated with the primary storage used by the system VM.

    • Host Tags. (Optional) Any tags that you use to organize your hosts.

    • CPU cap. Whether to limit the level of CPU usage even if spare capacity is available.

    • **Public**: Indicate whether the offering should be available to all domains or only some domains. Choose Yes to make it available to all domains. Choose No to limit the scope to a subdomain; CloudStack will then prompt for the subdomain's name.

  6. Click Add.

Network Throttling

Network throttling is the process of controlling network access and bandwidth usage based on certain rules. CloudStack controls this behaviour of the guest networks in the cloud by using the network rate parameter. This parameter is defined as the default data transfer rate in Mbps (megabits per second) allowed in a guest network. It defines the upper limit for network utilization. If the current utilization is below the allowed upper limit, access is granted; otherwise it is revoked.

You can throttle the network bandwidth either to control the usage above a certain limit for some accounts, or to control network congestion in a large cloud environment. The network rate for your cloud can be configured on the following:

  • Network Offering

  • Service Offering

  • Global parameter

If the network rate is set to NULL in the service offering, the value provided in the vm.network.throttling.rate global parameter is applied. If the value is set to NULL for the network offering, the value provided in the network.throttling.rate global parameter is considered.
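The fallback order just described can be sketched as follows; the global parameter values are illustrative examples, not the shipped defaults:

```python
# Sketch of the fallback rule above: an offering's network rate of NULL
# (None here) falls back to the corresponding global parameter.

GLOBALS = {"vm.network.throttling.rate": 200, "network.throttling.rate": 200}  # example values

def effective_rate(offering_rate, global_key, globals_=GLOBALS):
    return globals_[global_key] if offering_rate is None else offering_rate

print(effective_rate(None, "vm.network.throttling.rate"))  # global default applies
print(effective_rate(10, "network.throttling.rate"))       # offering value wins
```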

For the default public, storage, and management networks, the network rate is set to 0. This implies that the public, storage, and management networks have unlimited bandwidth by default. For default guest networks, the network rate is set to NULL. In this case, the network rate defaults to the global parameter value.

The following table gives an overview of where the network rate is taken from for each type of network in CloudStack:

Network                                       Network Rate Is Taken From
Guest network of Virtual Router               Guest Network Offering
Public network of Virtual Router              Guest Network Offering
Storage network of Secondary Storage VM       System Network Offering
Management network of Secondary Storage VM    System Network Offering
Storage network of Console Proxy VM           System Network Offering
Management network of Console Proxy VM        System Network Offering
Storage network of Virtual Router             System Network Offering
Management network of Virtual Router          System Network Offering
Public network of Secondary Storage VM        System Network Offering
Public network of Console Proxy VM            System Network Offering
Default network of a guest VM                 Compute Offering
Additional networks of a guest VM             Corresponding Network Offerings

A guest VM must have a default network, and can also have additional networks. Depending on various parameters, such as the host and virtual switch used, you can observe differences in the network rate in your cloud. For example, on a VMware host the actual network rate varies based on where the rate is configured (compute offering, network offering, or both), the network type (shared or isolated), and the traffic direction (ingress or egress).

The network rate set for the network offering used by a particular network in CloudStack is used for the traffic shaping policy of a port group (say, port group A) for that network: a particular subnet or VLAN on the actual network. The virtual routers for that network connect to port group A, and by default instances in that network connect to this port group. However, if an instance is deployed with a compute offering that has a network rate configured, and that rate is used for the traffic shaping policy of another port group for the network (say, port group B), then instances using this compute offering are connected to port group B instead of port group A.

The traffic shaping policy on standard port groups in VMware applies only to egress traffic, and the net effect depends on the type of network used in CloudStack. In shared networks, ingress traffic is unlimited for CloudStack, and egress traffic is limited to the rate that applies to the port group used by the instance, if any. If the compute offering has a network rate configured, this rate applies to the egress traffic; otherwise the network rate set for the network offering applies. For isolated networks, the network rate set for the network offering, if any, effectively applies to ingress traffic. This is mainly because the network rate set for the network offering applies to the egress traffic from the virtual router to the instance. The instance's egress traffic is limited by the rate that applies to the port group used by the instance, if any, similar to shared networks.

For example:

Network rate of network offering = 10 Mbps

Network rate of compute offering = 200 Mbps

In shared networks, ingress traffic will not be limited for CloudStack, while egress traffic will be limited to 200 Mbps. In an isolated network, ingress traffic will be limited to 10 Mbps and egress traffic to 200 Mbps.

Changing the Default System Offering for System VMs

You can manually change the system offering for a particular system VM. Additionally, as a CloudStack administrator, you can also change the default system offering used for system VMs.

  1. Create a new system offering.

    For more information, see Creating a New System Service Offering.

  2. Back up the database:

    mysqldump -u root -p cloud | bzip2 > cloud_backup.sql.bz2
    
  3. Open a MySQL prompt:

    mysql -u cloud -p cloud
    
  4. Run the following queries on the cloud database.

    1. In the disk_offering table, identify the original default offering and the new offering you want to use by default.

      Take note of the ID of the new offering.

      select id,name,unique_name,type from disk_offering;
      
    2. For the original default offering, set the value of unique_name to NULL.

      # update disk_offering set unique_name = NULL where id = 10;
      

      Ensure that you use the correct value for the ID.

    3. For the new offering that you want to use by default, set the value of unique_name as follows:

      For the default Console Proxy VM (CPVM) offering, set unique_name to 'Cloud.com-ConsoleProxy'. For the default Secondary Storage VM (SSVM) offering, set unique_name to 'Cloud.com-SecondaryStorage'. For example:

      update disk_offering set unique_name = 'Cloud.com-ConsoleProxy' where id = 16;
      
  5. Restart the CloudStack Management Server. Restarting is required because the default offerings are loaded into memory at startup.

    service cloudstack-management restart
    
  6. Destroy the existing CPVM or SSVM offerings and wait for them to be recreated. The new CPVM or SSVM are configured with the new offering.

Setting Up Networking for Users

Overview of Setting Up Networking for Users

People using cloud infrastructure have a variety of needs and preferences when it comes to the networking services provided by the cloud. As a CloudStack administrator, you can do the following things to set up networking for your users:

  • Set up physical networks in zones

  • Set up several different providers for the same service on a single physical network (for example, both Cisco and Juniper firewalls)

  • Bundle different types of network services into network offerings, so users can choose the desired network services for any given virtual machine

  • Add new network offerings as time goes on so end users can upgrade to a better class of service on their network

  • Provide more ways for a network to be accessed by a user, such as through a project of which the user is a member

About Virtual Networks

A virtual network is a logical construct that enables multi-tenancy on a single physical network. In CloudStack, a virtual network can be shared or isolated.

Isolated Networks

An isolated network can be accessed only by virtual machines of a single account. Isolated networks have the following properties.

  • Resources such as VLANs are allocated and garbage collected dynamically

  • There is one network offering for the entire network

  • The network offering can be upgraded or downgraded, but it applies to the entire network

For more information, see `"Configure Guest Traffic in an Advanced Zone" <networking2.html#configure-guest-traffic-in-an-advanced-zone>`_.

Shared Networks

A shared network can be accessed by virtual machines that belong to many different accounts. Network isolation on shared networks is accomplished by using security groups (supported only in Basic zones in CloudStack 3.0.3 and later versions).

  • Shared networks are created by the administrator

  • Shared networks can be designated to a certain domain

  • Shared network resources, such as the VLAN and physical network that it maps to, are designated by the administrator

  • Shared networks can be isolated by security groups

  • The public network is a shared network that is not shown to end users

  • Source NAT per zone is not supported in a shared network when the service provider is the virtual router. However, Source NAT per account is supported. For more information, see `"Configuring a Shared Guest Network" <networking2.html#configuring-a-shared-guest-network>`_.

Runtime Allocation of Virtual Network Resources

When you define a new virtual network, all your settings for that network are stored in CloudStack. The actual network resources are activated only when the first virtual machine starts in the network. When all virtual machines have left the virtual network, the network resources are garbage collected so they can be allocated again. This helps to conserve network resources.

Network Service Providers

Note

For the most up-to-date list of supported network service providers, see the CloudStack UI or call listNetworkServiceProviders.

A service provider (also called a network element) is hardware or a virtual appliance that makes a network service possible. For example, a firewall appliance can be installed in the cloud to provide firewall service. On a single network, multiple providers can provide the same network service. For example, a firewall service may be provided by Cisco or Juniper devices in the same physical network.

You can have multiple instances of the same service provider in a network (say, more than one Juniper SRX device).

If different providers are set up to provide the same service on the network, the administrator can create network offerings so users can specify which network service provider they prefer (along with the other choices offered in network offerings). Otherwise, CloudStack will choose which provider to use whenever the service is called for.

Supported Network Service Providers

CloudStack ships with an internal list of the supported service providers, and you can choose from this list when creating a network offering.

 

Service              Virtual Router   Citrix NetScaler   Juniper SRX   F5 BigIP   Host based (KVM/Xen)
Remote Access VPN    Yes              No                 No            No         No
DNS/DHCP/User Data   Yes              No                 No            No         No
Firewall             Yes              No                 Yes           No         No
Load Balancing       Yes              Yes                No            Yes        No
Elastic IP           No               Yes                No            No         No
Elastic LB           No               Yes                No            No         No
Source NAT           Yes              No                 Yes           No         No
Static NAT           Yes              Yes                Yes           No         No
Port Forwarding      Yes              No                 Yes           No         No

Network Offerings

Note

For the most up-to-date list of supported network services, see the CloudStack UI or call the API command listNetworkServices.

A network offering is a named set of network services, such as:

  • DHCP
  • DNS
  • Source NAT
  • Static NAT

  • Port Forwarding

  • Load Balancing

  • Firewall

  • VPN
  • (Optional) Name one of several available providers to use for a given service, such as Juniper for the firewall

  • (Optional) Network tag to specify which physical network to use

When creating a new VM, the user chooses one of the available network offerings, and that determines which network services the VM can use.

The CloudStack administrator can create any number of custom network offerings, in addition to the default network offerings provided by CloudStack. By creating multiple custom network offerings, you can set up your cloud to offer different classes of service on a single multi-tenant physical network. For example, while the underlying physical wiring may be the same for two tenants, tenant A may only need simple firewall protection for their website, while tenant B may be running a web server farm and require a scalable firewall solution, a load balancing solution, and an alternate network for accessing the database backend.

Note

If you create load balancing rules while using a network service offering that includes an external load balancer device such as NetScaler, and later change the network service offering to one that uses the CloudStack virtual router, you must create a firewall rule on the virtual router for each of your existing load balancing rules so that they continue to function.

When creating a new virtual network, the CloudStack administrator chooses which network offering to enable for that network. Each virtual network is associated with one network offering. A virtual network can be upgraded or downgraded by changing its associated network offering. If you do this, be sure to reprogram the physical network to match.

CloudStack also has internal network offerings for use by CloudStack system VMs. These network offerings are not visible to end users but can be modified by administrators.

Creating a New Network Offering

To create a network offering:

  1. Log in with admin privileges to the CloudStack UI.

  2. In the left navigation bar, click Service Offerings.

  3. In Select Offering, choose Network Offering.

  4. Click Add Network Offering.

  5. In the dialog, make the following choices:

    • Name: Any desired name for the network offering.

    • Description: A short description of the offering that can be displayed to users.

    • Network Rate: Allowed data transfer rate in MB per second.

    • Guest Type: Choose whether the guest network is isolated or shared.

      For a description of these terms, see "About Virtual Networks".

    • Persistent: Indicate whether the guest network is persistent or not. A network that you can provision without having to deploy any VMs on it is termed a persistent network. For more information, see "Persistent Networks".

    • Specify VLAN: (Isolated guest networks only) Indicate whether a VLAN can be specified when this offering is used. If you check this option and later use this network offering while creating a VPC tier or an isolated network, you will be able to specify a VLAN ID for the network you create.

    • VPC: This option indicates whether the guest network is Virtual Private Cloud-enabled. A Virtual Private Cloud (VPC) in CloudStack is private and isolated. A VPC can have a virtual network topology that resembles a traditional physical network. For more information on VPCs, see "About Virtual Private Clouds".

    • Supported Services: Select one or more possible network services. For some services, you must also choose the service provider; for example, if you select Load Balancer, you can choose the CloudStack virtual router or any other load balancers that have been configured in the cloud. Depending on which services you choose, additional fields may appear in the rest of the dialog box.

      Based on the guest network type selected, you can see the following supported services:

      DHCP — Isolated: Supported; Shared: Supported. For more information, see "DNS and DHCP".

      DNS — Isolated: Supported; Shared: Supported. For more information, see "DNS and DHCP".

      Load Balancer — Isolated: Supported; Shared: Supported. If you select Load Balancer, you can choose the CloudStack virtual router or any other load balancers configured in the cloud.

      Firewall — Isolated: Supported; Shared: Supported. For more information, see the Administration Guide.

      Source NAT — Isolated: Supported; Shared: Supported. If you select Source NAT, you can choose the CloudStack virtual router or any other Source NAT providers configured in the cloud.

      Static NAT — Isolated: Supported; Shared: Supported. If you select Static NAT, you can choose the CloudStack virtual router or any other Static NAT providers configured in the cloud.

      Port Forwarding — Isolated: Supported; Shared: Not Supported. If you select Port Forwarding, you can choose the CloudStack virtual router or any other Port Forwarding providers configured in the cloud.

      VPN — Isolated: Supported; Shared: Not Supported. For more information, see `"Remote Access VPN" <networking2.html#remote-access-vpn>`_.

      User Data — Isolated: Not Supported; Shared: Supported. For more information, see "User Data and Meta Data".

      Network ACL — Isolated: Supported; Shared: Not Supported. For more information, see "Configuring Network Access Control Lists".

      Security Groups — Isolated: Not Supported; Shared: Supported. For more information, see "Adding a Security Group".

    • System Offering: If the service provider for any of the selected services is a virtual router, the System Offering field appears. Choose the system service offering that you want the virtual routers to use in this network. For example, if you selected Load Balancer and chose a virtual router to provide load balancing, the System Offering field appears so you can choose between the CloudStack default system service offering and any custom system service offerings that have been defined by the CloudStack root administrator.

      For more information, see "System Service Offerings".

    • LB Isolation: Specify what type of load balancer isolation you want for the network: Shared or Dedicated.

      Dedicated: If you select dedicated LB isolation, a dedicated load balancer device is assigned for the network from the pool of dedicated load balancer devices provisioned in the zone. If no sufficient dedicated load balancer devices are available, network creation fails. A dedicated device is a good choice for high-traffic networks that make full use of the device's resources.

      Shared: If you select shared LB isolation, a shared load balancer device is assigned for the network from the pool of shared load balancer devices provisioned in the zone. CloudStack allows a shared device to be used by a certain number of accounts; once the device reaches its maximum capacity, it will not be allocated to new accounts.

    • Mode: You can select either Inline mode or Side by Side mode:

      Inline mode: Supported only for Juniper SRX firewall and BigF5 load balancer devices. In Inline mode, a firewall device is placed in front of the load balancing device. The firewall acts as the gateway for all incoming traffic and redirects the load balancing traffic to the load balancer behind it. The load balancer in this case does not have direct access to the public network.

      Side by Side: In Side by Side mode, the firewall device is deployed in parallel with the load balancer device, so all traffic goes directly to the load balancer without passing through the firewall. Therefore, the load balancer is exposed directly to the public network.

    • Associate Public IP: Select this option if you want to assign a public IP address directly to the VMs deployed in the guest network. This option is available only if all of the following are true:

      • The guest network is shared

      • StaticNAT is enabled

      • Elastic IP is enabled

      For more information, see `"About Elastic IP" <networking2.html#about-elastic-ip>`_.

    • Redundant router capability: Available only when Virtual Router is selected as the Source NAT provider. Select this option if you want to provide uninterrupted connectivity: the system will deploy two virtual routers, one master and one backup. The master virtual router receives requests from and sends responses to the user's VMs. If the master router goes down, the backup router is promoted to master and takes over. CloudStack deploys the two routers on different hosts to ensure availability.

    • Conserve mode: Indicate whether to use conserve mode. In this mode, network resources are allocated only when the first virtual machine starts in the network. When conserve mode is off, a public IP can be used for only one service. For example, a public IP used for a port forwarding rule cannot be used for other services, such as StaticNAT or load balancing. When conserve mode is on, you can define more than one service on the same public IP.

      Note

      If StaticNAT is enabled, irrespective of the status of conserve mode, no port forwarding or load balancing rule can be created for that IP. However, you can add firewall rules by using the createFirewallRule command.

    • Tags: Network tag to specify which physical network to use.

    • Default egress policy: Configure the default policy for firewall egress rules. The options are Deny and Allow. If no policy is specified, all egress traffic is allowed, which means that guest networks created from this offering allow all egress traffic.

      To block the egress traffic for a guest network, select Deny. In this case, when you configure egress rules for an isolated guest network, specific rules are added to allow the desired egress traffic.

  6. Click Add.

Working with Virtual Machines

About Working with Virtual Machines

CloudStack provides administrators with complete control over the lifecycle of all guest VMs executing in the cloud. CloudStack provides several guest management operations for end users and administrators. VMs may be stopped, started, rebooted, and destroyed.

Guest VMs have a name and group. VM names and groups are opaque to CloudStack and are available for end users to organize their VMs. Each VM can have three names for use in different contexts. Only two of these names can be controlled by the user:

  • Instance name – a unique, immutable ID that is generated by CloudStack and cannot be modified by the user. This name conforms to the requirements in IETF RFC 1123.

  • Display name – the name displayed in the CloudStack UI. Can be set by the user. Defaults to the instance name.

  • Name – the host name that the DHCP server assigns to the VM. Can be set by the user. Defaults to the instance name.

Note

You can append the display name of a guest VM to its internal name. For more information, see "Appending a Display Name to the Guest VM's Internal Name".

Guest VMs can be configured to be Highly Available (HA). An HA-enabled VM is monitored by the system. If the system detects that the VM is down, it will attempt to restart the VM, possibly on a different host. For more information, see HA-Enabled Virtual Machines.

Each new VM is allocated one public IP address. When the VM is started, CloudStack automatically creates a static NAT between this public IP address and the private IP address of the VM.

If elastic IP is in use (with the NetScaler load balancer), the IP address initially allocated to the new VM is not marked as elastic. The user must replace the automatically configured IP with a specifically acquired elastic IP, and set up the static NAT mapping between this new IP and the guest VM's private IP. The VM's original IP address is then released and returned to the pool of available public IPs. Optionally, you can also decide not to allocate a public IP to a VM in an EIP-enabled Basic zone. For more information on Elastic IP, see `"About Elastic IP" <networking2.html#about-elastic-ip>`_.

CloudStack cannot distinguish between a guest VM that was shut down by the user (such as with the "shutdown" command in Linux) and one that shut down unexpectedly. If an HA-enabled VM is shut down from inside the VM, CloudStack will restart it. To shut down an HA-enabled VM, you must go through the CloudStack UI or API.

Best Practices for Virtual Machines

For VMs to work as expected and provide excellent service, follow these guidelines.

Monitor VMs for Max Capacity

The administrator should monitor the total number of VM instances in each cluster, and disable allocation to the cluster if the total is approaching the maximum that the hypervisor can handle. Be sure to leave a safety margin to allow for the possibility of one or more hosts failing, which would increase the VM load on the other hosts as the VMs are redeployed. Consult the documentation for your chosen hypervisor to find the maximum permitted number of VMs per host, then use CloudStack global configuration settings to set this as the default limit. Monitor the VM activity in each cluster and keep the total number of VMs below a safe level that allows for the occasional host failure. For example, if there are N hosts in the cluster and you want to allow for any one host in the cluster to be down at any given time, the total number of VM instances you can permit in the cluster is at most (N-1) * (per-host-limit). Once a cluster reaches this number of VMs, use the CloudStack UI to disable allocation of more VMs to the cluster.

Install Required Tools and Drivers

Be sure the following are installed on each VM:

  • For XenServer, install PV drivers and Xen tools on each VM. This will enable live migration and clean guest shutdown. Xen tools are also required in order for dynamic CPU and RAM scaling to work.

  • For vSphere, install VMware Tools on each VM. This will enable console view to work properly. VMware Tools are also required in order for dynamic CPU and RAM scaling to work.

To be sure that Xen tools or VMware Tools are installed, use one of the following techniques:

  • Create each VM from a template that already has the tools installed; or,

  • When registering a new template, the administrator or user can indicate whether tools are installed on the template. This can be done through the UI or by using the updateTemplate API; or,

  • If a user deploys a virtual machine with a template that does not have Xen tools or VMware Tools, and later installs the tools on the VM, the user can inform CloudStack using the updateVirtualMachine API. After installing the tools and updating the virtual machine, restart the VM.

VM Lifecycle

Virtual machines can be in the following states:

(Figure: virtual machine lifecycle states)

Once a virtual machine is destroyed, it cannot be recovered. All the resources used by the virtual machine will be reclaimed by the system. This includes the virtual machine's IP address.

A stop will attempt to gracefully shut down the operating system, which typically involves terminating all the running applications. If the operating system cannot be stopped, it will be forcefully terminated. This has the same effect as pulling the power cord on a physical machine.

A reboot is a stop followed by a start.

CloudStack preserves the state of the virtual machine hard disk until the machine is destroyed.

A running virtual machine may fail because of hardware or network issues. A failed virtual machine is in the down state.

The system places the virtual machine into the down state if it does not receive the hypervisor heartbeat for three minutes.

The user can manually restart the virtual machine from the down state.

The system will automatically restart the virtual machine from the down state if the virtual machine is marked as HA-enabled.
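The state rules above (a missed heartbeat for three minutes marks the VM down; HA-enabled VMs in that state are restarted automatically) can be sketched as follows; timings and names are illustrative, not CloudStack internals:

```python
# Illustrative sketch of the VM state rules described above: a VM whose
# hypervisor heartbeat is older than three minutes is marked "down", and
# HA-enabled VMs in that state are restarted automatically.

HEARTBEAT_TIMEOUT = 3 * 60  # seconds

def next_action(seconds_since_heartbeat, ha_enabled):
    if seconds_since_heartbeat <= HEARTBEAT_TIMEOUT:
        return "running"
    return "restart" if ha_enabled else "down"

print(next_action(30, ha_enabled=False))   # running
print(next_action(400, ha_enabled=True))   # restart
print(next_action(400, ha_enabled=False))  # down: awaits a manual restart
```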

Creating VMs

Virtual machines are usually created from a template, but users can also create blank virtual machines. A blank VM is a virtual machine without an OS template; users can attach an ISO file via the CD/DVD-ROM and install the operating system from it.

Note

You can create a VM without starting it. You can determine whether the VM needs to be started as part of the VM deployment. A request parameter, startVM, in the deployVm API provides this feature. For more information, see the Developer's Guide.
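As a hedged sketch, the query portion of a deployVirtualMachine request that creates a VM without starting it might look like the following; the offering/template/zone IDs are placeholders, and a real request must also be signed with the caller's API keys:

```python
from urllib.parse import urlencode

# Illustrative sketch only: composing the (unsigned) parameters of a
# deployVirtualMachine call with startvm=false, so the VM is created
# but not started. IDs are placeholders.

def deploy_vm_query(service_offering_id, template_id, zone_id, start=False):
    params = {
        "command": "deployVirtualMachine",
        "serviceofferingid": service_offering_id,
        "templateid": template_id,
        "zoneid": zone_id,
        "startvm": "true" if start else "false",
        "response": "json",
    }
    return urlencode(sorted(params.items()))

q = deploy_vm_query("so-1", "tmpl-7", "zone-1")
print(q)
```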

To create a VM from a template:

  1. Log in to the CloudStack UI as an administrator or user.

  2. In the left navigation bar, click Instances.

  3. Click Add Instance.

  4. Select a zone.

  5. Select a template, then follow the steps in the wizard. For more information about how templates come to be in this list, see `*Working with Templates* <templates.html>`_.

  6. Be sure that the hardware you have allows starting the selected service offering.

  7. Click Submit and your VM will be created and started.

    Note

    For security reasons, the internal name of the VM is visible only to the root admin.

To create a VM from an ISO:

Note

(XenServer) Windows VMs running on XenServer require PV drivers, which may be provided in the template or added after the VM is created. The PV drivers are necessary for essential management functions such as mounting additional volumes and ISO images, live migration, and graceful shutdown.

  1. Log in to the CloudStack UI as an administrator or user.

  2. In the left navigation bar, click Instances.

  3. Click Add Instance.

  4. Select a zone.

  5. Select ISO Boot, and follow the steps in the wizard.

  6. Click Submit and your VM will be created and started.

Accessing VMs

Any user can access their own virtual machines. The administrator can access all VMs running in the cloud.

To access a VM through the CloudStack UI:

  1. Log in to the CloudStack UI as a user or admin.

  2. Click Instances, then click the name of a running VM.

  3. Click the View Console button.

To access a VM directly over the network:

  1. The VM must have some port open to incoming traffic. For example, in a basic zone, a new VM might be assigned to a security group which allows incoming traffic. This depends on what security group you picked when creating the VM. In other cases, you can open a port by setting up a port forwarding policy. See "IP Forwarding and Firewalling".

  2. If a port is open but you cannot access the VM using SSH, it's possible that SSH is not enabled on the VM. This depends on whether SSH is enabled in the template you picked when creating the VM. Access the VM through the CloudStack UI and enable SSH on the machine using the commands for the VM's operating system.

  3. If the network has an external firewall device, you will need to create a firewall rule to allow access. See `"IP Forwarding and Firewalling" <networking2.html#ip-forwarding-and-firewalling>`_.

Stopping and Starting VMs

Once a VM instance is created, you can stop, restart, or delete it as needed. In the CloudStack UI, click Instances, select the VM, and use the Stop, Start, Reboot, and Destroy buttons.

Assigning VMs to Hosts

At any point in time, each virtual machine instance is running on a single host. How does CloudStack determine which host to place a VM on? There are several ways:

  • Automatic default host allocation. CloudStack can automatically pick the most appropriate host to run each virtual machine.

  • Instance type preferences. CloudStack administrators can specify that certain hosts should have a preference for particular types of guest instances. For example, an administrator could state that a host should have a preference to run Windows guests. The default host allocator will attempt to place guests of that OS type on such hosts first. If no such host is available, the allocator will place the instance wherever there is sufficient physical capacity.

  • Vertical and horizontal allocation. Vertical allocation consumes all the resources of a given host before allocating any guests on a second host. This reduces power consumption in the cloud. Horizontal allocation places guests on each host in a round-robin fashion. This may yield better performance for the guests in some cases.

  • End user preferences. Users cannot control exactly which host will run a given VM instance, but they can specify a zone for the VM. CloudStack is then restricted to allocating the VM only to one of the hosts in that zone.

  • Host tags. The administrator can assign tags to hosts. These tags can be used to specify which host a VM should use. The CloudStack administrator decides whether to define host tags, then creates a service offering using those tags and offers it to the user.

  • Affinity groups. By defining affinity groups and assigning VMs to them, the user or administrator can influence (but not dictate) which VMs should run on separate hosts. This feature lets users specify that certain VMs should not run on the same host.

  • CloudStack also provides a pluggable interface for adding new allocators. These custom allocators can provide any policy the administrator desires.

关联性组

依靠定义关联性组并把VMs分配给它们,用户或管理员可以影响(但不是强制指定)哪些VMs应该运行在不同的主机上。这个功能是让用户可以规定"host anti-affinity"类型组中的某些VMs不会运行在同一台主机上。这增强了服务的容错能力:如果一个主机出现问题,提供同样服务的另一个VM(比如,运行用户的网站)依然在别的主机上运行。

一个关联性组的作用域是针对每个用户账号的。

创建一个新的关联性组

要添加一个关联性组:

  1. 用管理员或用户账号登录CloudStack UI。

  2. 在左侧的导航条,点击关联性组。

  3. 点击添加关联性组。在对话框中的下列区域中输入:

    • 名称。组的名称。

    • 描述。给出更多关于这个组的说明。

    • 类型。CloudStack自带的唯一受支持的类型是host anti-affinity(主机反关联)。它表示此组中的VMs应避免被放置在同一台主机上。如果你在列表中看到其他类型,说明你的CloudStack安装添加了自定义的关联性组插件。

将一个新的VM分配给一个关联性组

要将一个新的VM分配给一个关联性组:

  • 按照 `“创建VMs” <virtual_machines.html#creating-vms>`_中描述的方法创建VM。在添加实例向导中,有一个新的关联性组标签,你可以在其中选择关联性组。
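除了向导,也可以通过API创建关联性组并在部署VM时直接指定。下面是一个示意性的shell片段(假设管理服务器开放了8096集成端口;组名web-servers和各ID均为示例占位符):

```shell
BASE="http://localhost:8096/?command"

# 创建一个host anti-affinity类型的关联性组(type中的空格需URL编码)
GROUP_URL="${BASE}=createAffinityGroup&name=web-servers&type=host%20anti-affinity"
echo "$GROUP_URL"

# 部署VM时通过affinitygroupnames参数将其加入该组(其余参数为占位符)
DEPLOY_URL="${BASE}=deployVirtualMachine&zoneId=1&serviceOfferingId=<offering-id>&templateId=<template-id>&affinitygroupnames=web-servers"
echo "$DEPLOY_URL"
# 实际执行时: curl --globoff "<URL>"
```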

给已有的VM更改关联性组

要将已有的VM添加到关联性组:

  1. 用管理员或用户账号登录CloudStack UI。

  2. 在左侧导航栏,点击实例

  3. 点击你想更改的VM的名称。

  4. 点击停止按钮,将此VM关机。

  5. 点击变更关联性组按钮。

查看关联性组的成员

要查看当前哪些VMs被分配到指定的关联性组:

  1. 在左侧的导航条,点击关联性组。

  2. 点击要查看的组的名称。

  3. 点击查看实例。组的成员会列在此处。

    在这里,你可以点击列表中任何VM的名称来访问它所有的详细信息并控制它。

删除一个关联性组

要删除关联性组:

  1. 在左侧的导航条,点击关联性组。

  2. 点击要删除的组的名称。

  3. 点击删除。

    任何属于此关联性组的VM都会先被移出该组。之前组里的成员会继续在当前主机上正常运行,但是如果VM重启,将不再遵循之前关联性组的主机分配规则。

虚拟机快照

(支持XenServer、KVM和VMware)

利用CloudStack的VM快照功能,你可以对VM的数据卷做快照,还可以选择同时保存它的CPU/内存状态。这对快速还原一个VM来说是非常有用的。比如,你可以对一个VM做快照,然后进行一些像软件升级这样的操作。如果期间出现问题,使用之前保存的VM快照就可以将VM恢复到之前的状态。

快照的创建使用的是hypervisor本地快照工具。VM快照不但包括数据卷,还可以选择性地包括VM运行或关机时的状态(CPU状态)和内存内容。快照保存在CloudStack的主存储里。

VM快照之间存在父/子关系。同一个VM的每次快照都是上一次快照的子级。每次你对同一VM追加快照时,它仅仅保存两次快照之间系统状态的差异:之前的快照变成父级,新的快照变成子级。这些父/子快照可以形成一条很长的链,它实际上是一条从当前VM状态向前回溯的"还原点"记录。

如果你需要了解更多关于VMware的VM快照,请访问VMware文档中心和VMware知识库,参阅 `了解虚拟机的快照 <http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2075429>`_。

VM快照的限制
  • 如果一个VM存储了一些快照,你就不能给他附加新卷或删除存在的卷。如果你更改了VM的卷,他将不能通过之前卷结构下所做快照来进行恢复。如果你想给这样一个VM附加卷,请先删除快照。

  • 如果你更改了VM的服务方案,那么包含了数据卷和内存的VM快照就不能保留了,任何已有的此类型的VM快照都将被丢弃。

  • 你不能同时对一个VM进行卷快照和VM快照。

  • 你只能使用CloudStack来创建其管理的主机上的VM快照。你在hypervisor上直接创建的任何快照都不能被CloudStack识别。

配置VM快照

云管理员可以使用全局配置变量来控制VM快照的行为。要设置这些变量,移步至CloudStack UI中的全局设置。

  • vmsnapshots.max:每个虚拟机能够保存快照的最大数量。(VM数量) * vmsnapshots.max 是云中VM快照总数的上限。如果某个VM的快照数达到了最大值,快照删除任务会把最老的快照删掉。

  • vmsnapshot.create.wait:快照任务在判定失败并报错之前,等待快照创建完成的秒数。

使用VM的快照

使用CloudStack UI创建一个VM快照:

  1. 使用用户或者管理员账号登录CloudStack。

  2. 点击实例。

  3. 点击你想做快照的VM名称。

  4. 点击抓取VM快照按钮。

    注解

    如果一个快照处理过程正在进行,那么点击这个按钮是没有反应的。

  5. 提供一个名称和描述。这些会显示在VM快照列表中。

  6. (仅限运行中的VMs)如果你想在快照中包含VM的内存状态,请勾选内存。这可以保存虚拟机的CPU和内存状态。如果你不勾选这个选项,那么只有VM目前磁盘状态会被保存。勾选了这个选项会让快照过程变长。

  7. 静止VM:如果你想在做快照之前让VM的文件系统处于静止状态,请勾选此选项。当XenServer使用CloudStack提供的主存储时,不支持此选项。

    当这个选项在CloudStack提供的主存储上使用时,静止操作由底层的hypervisor完成(VMware支持)。当使用其他主存储提供商的插件时,静止操作由该提供商的软件完成。

  8. 点击确定。
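上面的UI操作对应createVMSnapshot API命令。作为示意(假设使用8096集成端口,VM的ID为占位符):

```shell
VM_ID="<vm-id>"   # 占位符,替换为真实的VM ID
BASE="http://localhost:8096/?command"

# snapshotmemory=true 表示同时保存内存/CPU状态(仅适用于运行中的VM)
SNAP_URL="${BASE}=createVMSnapshot&virtualmachineid=${VM_ID}&name=before-upgrade&snapshotmemory=true"
echo "$SNAP_URL"
# 实际执行时: curl --globoff "$SNAP_URL"
```

对应的还原与删除操作分别是revertToVMSnapshot和deleteVMSnapshot命令,参数为vmsnapshotid。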

要删除一个快照或者还原VM的状态到指定的一个快照:

  1. 通过之前描述的步骤定位至VM。

  2. 点击查看VM快照。

  3. 在快照列表中,点击你要操作的快照名字。

  4. 取决于你想做什么:

    要删除快照,点击删除按钮。

    要还原至此快照,点击还原按钮。

注解

当VM被销毁了,那么它的快照也会被自动的删除。这种情况下,你不用手动的去删除快照。

更改VM名字,OS或组

在VM被创建之后,你可以修改显示名,操作系统和它所属的组。

通过CloudStack UI访问VM:

  1. 使用用户或管理员登录到CloudStack用户界面。

  2. 在左侧的导航菜单中,点击实例。

  3. 选择你想修改的VM。

  4. 点击停止按钮来关闭虚机。

  5. 点击编辑。

  6. 更改以下几项为所需要的:

  7. 显示名:如果你想更改VM的名字,那么输入一个新的显示名。

  8. OS类型:选择所需的操作系统。

  9. 组:输入VM的组名。

  10. 点击应用

给来宾VM的内部名称附加显示名

每个来宾VM都有一个内部名称。主机使用内部名称来识别来宾VMs。CloudStack为来宾VM提供了一个关于显示名的选项。你可以设置显示名作为内部名称以便vCenter可以使用它来识别来宾VM。vm.instancename.flag作为一个新的全局参数,已被添加用来实现此功能。

默认的内部名称的格式是 i-<user_id>-<vm_id>-<instance.name>,这里instance.name是全局参数。但是,如果vm.instancename.flag被设置为true,并且来宾VM在创建的过程中提供了显示名,那么显示名就会被附加至该主机的来宾VM的内部名称上。这样就使得内部名称的格式类似于 i-<user_id>-<vm_id>-<displayName>。vm.instancename.flag的默认值被设置为false。这个功能可以比较容易的在大型数据中心的部署中表示实例名和内部名之间的相互关系。
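作为示意,可以通过updateConfiguration API来设置该全局参数(假设使用8096集成端口;部分全局设置修改后需要重启管理服务器才能生效):

```shell
BASE="http://localhost:8096/?command"

# 将vm.instancename.flag设置为true,使显示名被附加到内部名称
CFG_URL="${BASE}=updateConfiguration&name=vm.instancename.flag&value=true"
echo "$CFG_URL"
# 实际执行时: curl --globoff "$CFG_URL"
```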

下面的表格解释了不同场景下VM的名称是如何显示的。

用户提供的显示名 | vm.instancename.flag | VM的主机名 | vCenter中的名称                     | 内部名称
-----------------|----------------------|------------|-------------------------------------|------------------------------------
支持             | True                 | 显示名     | i-<user_id>-<vm_id>-displayName     | i-<user_id>-<vm_id>-displayName
不支持           | True                 | UUID       | i-<user_id>-<vm_id>-<instance.name> | i-<user_id>-<vm_id>-<instance.name>
支持             | False                | 显示名     | i-<user_id>-<vm_id>-<instance.name> | i-<user_id>-<vm_id>-<instance.name>
不支持           | False                | UUID       | i-<user_id>-<vm_id>-<instance.name> | i-<user_id>-<vm_id>-<instance.name>

为VM变更服务方案

要给虚拟机提升或降低可用计算资源的级别,你可以更改VM的计算方案。

  1. 使用用户或管理员登录到CloudStack用户界面。

  2. 在左侧的导航菜单中,点击实例。

  3. 选择你要处理的VM。

  4. (如果你启用了动态VM伸缩,请跳过此步;参见 :ref:`cpu-and-memory-scaling`。)

    点击停止按钮来关闭虚机。

  5. 点击更改服务按钮。

    显示更改服务对话框。

  6. 选择你想应用到选择的VM的方案。

  7. 点击确定。

运行中VMs的CPU和内存的扩展和缩减

(支持XenServer、KVM和VMware)

通常当你第一次部署VM的时候,不太可能精确地预计CPU和RAM需求,你可能需要在VM生命周期内的任何时间增加这些资源。你可以在不中断运行中VM的情况下,动态地修改CPU和RAM级别来提升这些资源。

动态CPU和RAM扩展和缩减可用于以下情况:

  • 运行VMware和XenServer的主机上的用户VMs。

  • VMware上的系统VMs。

  • 虚拟机上必须安装VMware Tools或者XenServer Tools。

  • 对于新的CPU和RAM大小必须在hypervisor和VM操作系统要求之内。

  • 在CloudStack 4.2安装完以后创建的新VMs都可以使用动态扩展和缩减功能。如果你的CloudStack是从旧版本升级而来的,其上已有的VMs不会支持动态扩展和缩减功能,除非你按照下面的过程来升级这些VM。

升级已有VMs

如果你正在升级旧版本的CloudStack,并且你还想让你的旧版本VMs拥有动态扩展和缩减能力,请使用以下步骤来升级VMs:

  1. 确保区域级别设置enable.dynamic.scale.vm被设置为true。在CloudStack UI的左侧导航条,点击基础架构,然后点击区域,点击你要操作的区域,然后点击Settings标签。

  2. 在每台VM上安装Xen tools(适用于XenServer主机)或者VMware Tools(适用于VMware主机)。

  3. 停止VM。

  4. 点击编辑按钮。

  5. 点击动态伸缩选框。

  6. 点击应用

  7. 重启VM。

配置动态CPU和RAM伸缩

要配置此功能,请使用下面的全局配置变量:

  • enable.dynamic.scale.vm:设置为True以启用此功能。默认情况下,此功能是被关闭的。

  • scale.retry:伸缩操作的重试次数。默认为2。
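作为示意,这两个全局配置变量同样可以通过updateConfiguration API设置(假设使用8096集成端口;全局设置修改后通常需要重启管理服务器才能生效):

```shell
BASE="http://localhost:8096/?command"

# 启用动态伸缩功能,并保持默认的重试次数
URL1="${BASE}=updateConfiguration&name=enable.dynamic.scale.vm&value=true"
URL2="${BASE}=updateConfiguration&name=scale.retry&value=2"
echo "$URL1"
echo "$URL2"
# 实际执行时对每个URL运行: curl --globoff "<URL>"
```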

如何动态伸缩CPU和RAM

要修改虚拟机的CPU和/或RAM,你必须更改VM的计算方案为你想要的。你可以按照上文中所述的同样的步骤 “Changing the Service Offering for a VM”,但是要跳过停止虚拟机的步骤。当然,你可能必须先创建一个新的计算方案。

当你提交一个动态伸缩请求时,如果当前主机有足够的资源,资源会在当前主机上扩展。如果主机没有足够的资源,VM会被在线迁移至同一群集中的其他主机。如果群集中没有能满足所需CPU和RAM条件的主机,那么扩展操作会失败,但VM仍会像往常一样继续运行。
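动态伸缩也可以通过scaleVirtualMachine API命令完成,把VM切换到目标计算方案。作为示意(假设使用8096集成端口,各ID为占位符):

```shell
VM_ID="<vm-id>"                   # 占位符:要伸缩的VM的ID
NEW_OFFERING_ID="<offering-id>"   # 占位符:目标计算方案的ID

SCALE_URL="http://localhost:8096/?command=scaleVirtualMachine&id=${VM_ID}&serviceofferingid=${NEW_OFFERING_ID}"
echo "$SCALE_URL"
# 实际执行时: curl --globoff "$SCALE_URL"
```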

局限性
  • 你不能为XenServer上运行的系统VMs进行动态伸缩操作。

  • CloudStack不会检查新的CPU和RAM级别是不是符合VM操作系统的要求。

  • 当为VMware上运行的Linux VM扩展内存或者CPU的时候,你可能需要运行额外的脚本。更多信息,请参阅VMware知识库中的 在Linux中热添加内存 (1012764)

  • (VMware)如果当前主机上没有可用资源,因为一个已知的问题,VMware上的虚机扩展会失败,这是因为CloudStack和vCenter计算可用容量方法的不同。更多信息,请参阅`https://issues.apache.org/jira/browse/CLOUDSTACK-1809 <https://issues.apache.org/jira/browse/CLOUDSTACK-1809>`_。

  • 对于运行Linux 64位和Windows 7 32位操作系统的VMs,如果VM初始分配的RAM小于3GB,那么它最大能动态的扩展到3GB。这是因为这些操作系统已知的问题,如果尝试从小于3G动态扩展至超过3G的话,操作系统会宕机。

在重启时重置虚拟机的Root卷

为了增强安全性并确保VM在重启后得到一个干净的状态,你可以在重启时重置root磁盘。更多信息,请参阅 "在重启时给VM重置一个新的Root磁盘"。

在主机之间移动VMs(手动在线迁移)

CloudStack管理员可以将运行中的VM从一台主机移动到另一台主机,而无需中断用户服务或使主机进入维护模式。这个过程叫手动在线迁移,需要满足以下条件:

  • root管理员已登录。域管理员和用户不能执行手动在线迁移操作。

  • VM正在运行。停止的VMs不能进行在线迁移。

  • 目标主机必须有足够的可用容量。如果没有,VM会一直停留在”迁移中”状态,直到有多余内存可用。

  • (KVM)VM必须不能使用本地磁盘存储。(在XenServer和VMware中,CloudStack依靠XenMotion和vMotion允许使用了本地磁盘的VM能够在线迁移。)

  • (KVM)目标主机必须跟源主机在同一个群集。(在XenServer和VMware上,CloudStack依靠XenMotion和vMotion,允许VM从一个群集在线迁移到另一个群集)

手动在线迁移一个虚拟机

  1. 使用用户或管理员登录到CloudStack用户界面。

  2. 在左侧的导航菜单中,点击实例。

  3. 选择你想迁移的VM。

  4. 点击迁移实例按钮。

  5. 从可用主机列表中,选择一个目标主机。

    注解

    如果VM的存储与VM必须一起被迁移,这点会在主机列表中标注。CloudStack会为你自动的进行存储迁移。

  6. 点击确定。
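上述操作对应migrateVirtualMachine API命令。作为示意(假设使用8096集成端口,各ID为占位符;目标主机可通过listHosts查询):

```shell
VM_ID="<vm-id>"       # 占位符:要迁移的VM的ID
HOST_ID="<host-id>"   # 占位符:目标主机的ID

MIGRATE_URL="http://localhost:8096/?command=migrateVirtualMachine&virtualmachineid=${VM_ID}&hostid=${HOST_ID}"
echo "$MIGRATE_URL"
# 实际执行时: curl --globoff "$MIGRATE_URL"
```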

删除VMs

用户可以删除他们拥有的虚拟机。在删除运行中的虚拟机之前,虚拟机会被强制停止。管理员可以删除任何虚拟机。

要删除虚拟机:

  1. 使用用户或管理员登录到CloudStack用户界面。

  2. 在左侧的导航菜单中,点击实例。

  3. 选择你想删除的VM。

  4. 点击销毁实例按钮。

使用ISOs

CloudStack支持ISO及将其挂载到来宾VMs。ISO是一种ISO/CD-ROM格式的只读文件系统。用户可以上传他们自己的ISO并将其挂载至他们的来宾VMs。

ISO文件通过URL上传,支持HTTP协议。一旦通过HTTP从指定的URL(比如 http://my.web.server/filename.iso)上传成功,该文件就可以使用了。

ISO可以是私有的也可以是公共的,就像模板一样。ISO并不针对某种特定的hypervisor。也就是说,运行在vSphere上的来宾虚机和运行在KVM上的虚机可以挂载同一个ISO。

ISO镜像可能存储在系统中并且隐私级别与模板相似。ISO镜像分为可引导或不可引导的。可引导的ISO镜像是包含了OS镜像的。CloudStack允许用户通过ISO镜像来启动来宾虚机。用户同样可以将ISO镜像挂载到来宾VMs。比如,给Windows安装PV驱动。ISO镜像不指定hypervisor。

添加ISO

为了添加额外的操作系统或者给来宾VMs使用其它软件,你可以添加ISO。操作系统镜像被认为是最典型的ISO,但是你也能添加软件类型的ISOs,例如你想把安装的桌面应用作为模板的一部分。

  1. 使用管理员或者终端用户账号登录CloudStack UI。

  2. 在左侧的导航栏,点击模板。

  3. 在选择视图中,选择ISOs。

  4. 点击添加ISO。

  5. 在添加ISO界面中,提供下列信息:

    • 名称: ISO 镜像的简称。例如,CentOS6.2 64-bit。

    • 描述: 对于ISO镜像的描述。例如,CentOS 6.2 64-bit.

    • URL: ISO镜像主机的URL。管理服务器必须能够通过HTTP访问这个地址。如果有需要你可以直接将ISO放置到管理服务器中。

    • 区域: 选择你希望该ISO在哪个区域可用,或者选择所有区域使该ISO在CloudStack的全部区域中可用。

    • 可启动: 来宾是否可以通过该ISO镜像启动。例如,一个CentOS ISO 是可启动的,一个Microsoft Office ISO 是不可启动的。

    • 操作系统类型: 这有助于CloudStack和hypervisor执行某些操作,并可能提高来宾虚拟机的性能。选择下列之一:

      • 如果你需要的ISO镜像对应的操作系统在列表中,请选择它。

      • 如果ISO镜像的操作系统类型没有被列出或者ISO是不可引导的,选择Other。

      • (仅针对XenServer) 如果你想使用这个ISO在PV 模式中引导,选择 Other PV (32-bit) 或 Other PV(64-bit)

      • (仅针对KVM) 如果你选择的操作系统是PV-enabled,通过这个ISO创建的虚拟机会有一个SCSI(virtio)根磁盘。如果这个操作系统不是PV-enabled,虚拟机将有一个IDE根磁盘。PV-enabled的类型有:

        • Fedora 13
        • Fedora 12
        • Fedora 11
        • Fedora 10
        • Fedora 9
        • Other PV
        • Debian GNU/Linux
        • CentOS 5.3
        • CentOS 5.4
        • CentOS 5.5
        • Red Hat Enterprise Linux 5.3
        • Red Hat Enterprise Linux 5.4
        • Red Hat Enterprise Linux 5.5
        • Red Hat Enterprise Linux 6

      注解

      不建议选择比镜像实际版本老的操作系统版本。例如,选择CentOS 5.4来支持一个CentOS 6.2的镜像,通常是不能工作的。在这种情况下,请选择Other。

    • 可提取: 如果该ISO可以被提取出来,则选择Yes。

    • 公共: 如果该ISO可以被所有用户使用,则选择Yes。

    • 精选: 如果你想让该ISO在用户选择时更加突出,则选择Yes。该ISO将出现在精选ISO列表中。只有管理员可以将ISO设置为精选。

  6. 点击确定。

    管理服务器将下载该ISO。根据ISO镜像的大小,下载过程可能会花很长时间。一旦该镜像被成功下载到辅助存储,ISO状态栏将会显示Ready。点击刷新来更新下载百分比。

  7. 重要: 请等待ISO下载完成。如果你跳到下一个任务并尝试立即使用该ISO,将会失败。该ISO必须完整下载后CloudStack才能使用它。
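上述UI步骤对应registerIso API命令。作为示意(假设使用8096集成端口;区域ID和操作系统类型ID为占位符,后者可通过listOsTypes查询;URL中的特殊字符需要URL编码):

```shell
ZONE_ID="<zone-id>"       # 占位符
OSTYPE_ID="<ostype-id>"   # 占位符:可通过listOsTypes查询

# 注册一个可启动的CentOS ISO(displaytext和url已做URL编码)
ISO_URL="http://localhost:8096/?command=registerIso&name=CentOS6.2-64bit&displaytext=CentOS%206.2%2064-bit&url=http%3A%2F%2Fmy.web.server%2Ffilename.iso&zoneid=${ZONE_ID}&bootable=true&ostypeid=${OSTYPE_ID}"
echo "$ISO_URL"
# 实际执行时: curl --globoff "$ISO_URL"
```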

附加ISO到虚拟机
  1. 在左侧的导航菜单中,点击实例。

  2. 选择要使用的虚拟机。

  3. 点击附加ISO按钮。

  4. 在附加ISO对话框中,选择所需的ISO。

  5. 点击确定。

改变VM的基础镜像

每个VM都是通过基础镜像创建的,可以是创建并存储在CloudStack中的一个模版或者一个ISO镜像。云管理员和终端用户都可以创建和修改模版,ISO和VM。

在CloudStack中,你可以改变虚拟机的基础磁盘从一个模版换成其他模版,或从一个ISO换成其他ISO。(你不能把ISO变成模版或模版变成ISO。)

例如,假设有一个模板基于一个特定的操作系统,并且操作系统厂商发布了一个补丁。管理员和用户理所当然的想要将该补丁应用到已经存在的虚拟机中,并确保虚拟机开始使用它。无论是否涉及软件更新,也有可能只是将虚拟机从当前的模板切换至其他所需的模板。

要改变虚机的基础镜像,使用虚拟机ID和新的模板ID调用restoreVirtualMachine API命令。模板ID参数可以指向模板或ISO,取决于虚拟机正在使用的基础镜像类型(它必须与先前的镜像类型相匹配)。当这个调用生效时,虚拟机的根磁盘首先被销毁,然后依据模板ID参数指定的镜像创建新的根磁盘。新的根磁盘会附加到虚拟机中,此时虚拟机就基于新的模板了。

你同样可以在调用restoreVirtualMachine时省略模板ID参数。在这种情况下,虚拟机的根磁盘会被销毁,并使用该虚拟机原来所用的模板或ISO重新创建。
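作为示意,一次restoreVirtualMachine调用大致如下(假设使用8096集成端口,各ID为占位符):

```shell
VM_ID="<vm-id>"                   # 占位符:要恢复的VM的ID
NEW_TEMPLATE_ID="<template-id>"   # 占位符:新的模板或ISO的ID;省略templateid则用原镜像重建根磁盘

RESTORE_URL="http://localhost:8096/?command=restoreVirtualMachine&virtualmachineid=${VM_ID}&templateid=${NEW_TEMPLATE_ID}"
echo "$RESTORE_URL"
# 实际执行时: curl --globoff "$RESTORE_URL"
```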

Using SSH Keys for Authentication

In addition to the username and password authentication, CloudStack supports using SSH keys to log in to the cloud infrastructure for additional security. You can use the createSSHKeyPair API to generate the SSH keys.

Because each cloud user has their own SSH key, one cloud user cannot log in to another cloud user’s instances unless they share their SSH key files. Using a single SSH key pair, you can manage multiple instances.

Creating an Instance Template that Supports SSH Keys

Create an instance template that supports SSH Keys.

  1. Create a new instance by using the template provided by cloudstack.

    For more information on creating a new instance, see

  2. Download the cloudstack script from The SSH Key Gen Script to the instance you have created.

    wget http://downloads.sourceforge.net/project/cloudstack/SSH%20Key%20Gen%20Script/cloud-set-guest-sshkey.in?r=http%3A%2F%2Fsourceforge.net%2Fprojects%2Fcloudstack%2Ffiles%2FSSH%2520Key%2520Gen%2520Script%2F&ts=1331225219&use_mirror=iweb
    
  3. Copy the file to /etc/init.d.

    cp cloud-set-guest-sshkey.in /etc/init.d/
    
  4. Give the necessary permissions on the script:

    chmod +x /etc/init.d/cloud-set-guest-sshkey.in
    
  5. Run the script while starting up the operating system:

    chkconfig --add cloud-set-guest-sshkey.in
    
  6. Stop the instance.

Creating the SSH Keypair

You must make a call to the createSSHKeyPair api method. You can either use the CloudStack Python API library or the curl commands to make the call to the cloudstack api.

For example, make a call from the cloudstack server to create a SSH keypair called “keypair-doc” for the admin account in the root domain:

注解

Ensure that you adjust these values to meet your needs. If you are making the API call from a different server, your URL/PORT will be different, and you will need to use the API keys.

  1. Run the following curl command:

    curl --globoff "http://localhost:8096/?command=createSSHKeyPair&name=keypair-doc&account=admin&domainid=5163440e-c44b-42b5-9109-ad75cae8e8a2"
    

    The output is something similar to what is given below:

    <?xml version="1.0" encoding="ISO-8859-1"?><createsshkeypairresponse cloud-stack-version="3.0.0.20120228045507"><keypair><name>keypair-doc</name><fingerprint>f6:77:39:d5:5e:77:02:22:6a:d8:7f:ce:ab:cd:b3:56</fingerprint><privatekey>-----BEGIN RSA PRIVATE KEY-----
    MIICXQIBAAKBgQCSydmnQ67jP6lNoXdX3noZjQdrMAWNQZ7y5SrEu4wDxplvhYci
    dXYBeZVwakDVsU2MLGl/K+wefwefwefwefwefJyKJaogMKn7BperPD6n1wIDAQAB
    AoGAdXaJ7uyZKeRDoy6wA0UmF0kSPbMZCR+UTIHNkS/E0/4U+6lhMokmFSHtu
    mfDZ1kGGDYhMsdytjDBztljawfawfeawefawfawfawQQDCjEsoRdgkduTy
    QpbSGDIa11Jsc+XNDx2fgRinDsxXI/zJYXTKRhSl/LIPHBw/brW8vzxhOlSOrwm7
    VvemkkgpAkEAwSeEw394LYZiEVv395ar9MLRVTVLwpo54jC4tsOxQCBlloocK
    lYaocpk0yBqqOUSBawfIiDCuLXSdvBo1Xz5ICTM19vgvEp/+kMuECQBzm
    nVo8b2Gvyagqt/KEQo8wzH2THghZ1qQ1QRhIeJG2aissEacF6bGB2oZ7Igim5L14
    4KR7OeEToyCLC2k+02UCQQCrniSnWKtDVoVqeK/zbB32JhW3Wullv5p5zUEcd
    KfEEuzcCUIxtJYTahJ1pvlFkQ8anpuxjSEDp8x/18bq3
    -----END RSA PRIVATE KEY-----
    </privatekey></keypair></createsshkeypairresponse>
    
  2. Copy the key data into a file. The file looks like this:

    -----BEGIN RSA PRIVATE KEY-----
    MIICXQIBAAKBgQCSydmnQ67jP6lNoXdX3noZjQdrMAWNQZ7y5SrEu4wDxplvhYci
    dXYBeZVwakDVsU2MLGl/K+wefwefwefwefwefJyKJaogMKn7BperPD6n1wIDAQAB
    AoGAdXaJ7uyZKeRDoy6wA0UmF0kSPbMZCR+UTIHNkS/E0/4U+6lhMokmFSHtu
    mfDZ1kGGDYhMsdytjDBztljawfawfeawefawfawfawQQDCjEsoRdgkduTy
    QpbSGDIa11Jsc+XNDx2fgRinDsxXI/zJYXTKRhSl/LIPHBw/brW8vzxhOlSOrwm7
    VvemkkgpAkEAwSeEw394LYZiEVv395ar9MLRVTVLwpo54jC4tsOxQCBlloocK
    lYaocpk0yBqqOUSBawfIiDCuLXSdvBo1Xz5ICTM19vgvEp/+kMuECQBzm
    nVo8b2Gvyagqt/KEQo8wzH2THghZ1qQ1QRhIeJG2aissEacF6bGB2oZ7Igim5L14
    4KR7OeEToyCLC2k+02UCQQCrniSnWKtDVoVqeK/zbB32JhW3Wullv5p5zUEcd
    KfEEuzcCUIxtJYTahJ1pvlFkQ8anpuxjSEDp8x/18bq3
    -----END RSA PRIVATE KEY-----
    
  3. Save the file.

Creating an Instance

After you save the SSH keypair file, you must create an instance by using the template that you created at Section 5.2.1, “ Creating an Instance Template that Supports SSH Keys”. Ensure that you use the same SSH key name that you created at Section 5.2.2, “Creating the SSH Keypair”.

注解

You cannot create the instance by using the GUI at this time and associate the instance with the newly created SSH keypair.

A sample curl command to create a new instance is:

curl --globoff http://localhost:<port number>/?command=deployVirtualMachine\&zoneId=1\&serviceOfferingId=18727021-7556-4110-9322-d625b52e0813\&templateId=e899c18a-ce13-4bbf-98a9-625c5026e0b5\&securitygroupids=ff03f02f-9e3b-48f8-834d-91b822da40c5\&account=admin\&domainid=1\&keypair=keypair-doc

Substitute the template, service offering and security group IDs (if you are using the security group feature) that are in your cloud environment.

Logging In Using the SSH Keypair

To test your SSH key generation is successful, check whether you can log in to the cloud setup.

For example, from a Linux OS, run:

ssh -i ~/.ssh/keypair-doc <ip address>

The -i parameter tells the ssh client to use a ssh key found at ~/.ssh/keypair-doc.

Resetting SSH Keys

With the API command resetSSHKeyForVirtualMachine, a user can set or reset the SSH keypair assigned to a virtual machine. A lost or compromised SSH keypair can be changed, and the user can access the VM by using the new keypair. Just create or register a new keypair, then call resetSSHKeyForVirtualMachine.
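As a sketch, a reset request against the unauthenticated integration port (8096, as in the createSSHKeyPair example above) might look like this; the VM ID is a placeholder, and the keypair name matches the "keypair-doc" example created earlier:

```shell
VM_ID="<vm-id>"   # placeholder: the virtual machine's ID

# Assign the previously created keypair "keypair-doc" to the VM
RESET_URL="http://localhost:8096/?command=resetSSHKeyForVirtualMachine&id=${VM_ID}&keypair=keypair-doc"
echo "$RESET_URL"
# To actually invoke it: curl --globoff "$RESET_URL"
```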

使用模板

使用模板

模板是虚拟机的一种可重用配置。用户创建虚拟机时,可以从CloudStack的模板列表中选择一个。

具体来说,模板是一个安装了操作系统的虚拟磁盘镜像,还可以选择性地安装其他软件(比如office应用),并设置访问控制来决定谁能使用这个模板。每个模板都对应一种特定的hypervisor类型,该类型在将模板添加到CloudStack时指定。

CloudStack附带一个默认模板。为了向用户呈现出更多选择,CloudStack的管理员和用户能创建模板并添加到CloudStack中。

创建模板概览

CloudStack默认已经有了一个带CentOS系统的默认模板。有许多添加更多模板的方法,管理员和普通用户均能添加。一般是这样的顺序:

  1. 运行一个带有你需要的操作系统的虚拟机实例,并进行一些你期望的设置。

  2. 停止VM。

  3. 将卷转换为模板。

还有其他方法向CloudStack中添加模板。比如你可以对虚机磁盘卷做个快照然后通过这个快照创建模板,或者从另一个系统导入一个VHD到CloudStack。

接下来的几节中将继续讲述各种创建模板的技术。

模板的需求

  • 对于 XenServer, 在每一个你创建的模板上安装 PV 驱动 / Xen tools。 这将使动态迁移和干净的宾客关机成为可能。

  • 对于 vSphere, 在每一个你创建的模板上安装VMware 工具。这将使控制台视图能够正常工作。

模板最佳实践

如果你计划使用大的模板(100 GB或更大),请确保你有10G网络以支持大模板。当使用大模板时,较慢的网络可能导致超时及其它错误。

默认模版

CloudStack包含一个CentOS 模版。当主存储和二级存储配置完成后,这个模版会由二级存储虚拟机下载。可以在生产部署中使用这个模版,也可以删除掉它,使用自定义的模版。

默认模版的root用户密码是“password”。

为XenServer,KVM和vSphere各提供了一个默认模板。下载的模板取决于你的云中使用的hypervisor类型。每个模板大概占用2.5GB的存储空间。

默认模版包括标准的iptables 规则,会阻止除了ssh以外的其他访问。

# iptables --list
Chain INPUT (policy ACCEPT)
target     prot opt source               destination
RH-Firewall-1-INPUT  all  --  anywhere             anywhere

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination
RH-Firewall-1-INPUT  all  --  anywhere             anywhere

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination

Chain RH-Firewall-1-INPUT (2 references)
target     prot opt source               destination
ACCEPT     all  --  anywhere             anywhere
ACCEPT     icmp --  anywhere        anywhere       icmp any
ACCEPT     esp  --  anywhere        anywhere
ACCEPT     ah   --  anywhere        anywhere
ACCEPT     udp  --  anywhere        224.0.0.251    udp dpt:mdns
ACCEPT     udp  --  anywhere        anywhere       udp dpt:ipp
ACCEPT     tcp  --  anywhere        anywhere       tcp dpt:ipp
ACCEPT     all  --  anywhere        anywhere       state RELATED,ESTABLISHED
ACCEPT     tcp  --  anywhere        anywhere       state NEW tcp dpt:ssh
REJECT     all  --  anywhere        anywhere       reject-with icmp-host-

私有模板和公共模板

用户创建模板时可选择模板为公有还是私有。

私有模板只对创建者可用。默认上传的模板都是私有的。

当用户将模板标识为“公有”,该模板不但能让该用户域中所有账户中的所有用户使用,还可以让能访问存储该模板的区域的其他域中用户使用。这取决于zone是设置成公用还是私有。私有区域被分配到一个单一的域,而公共区域能被任何域访问。

通过已存在的虚拟机创建模板

当你已经有了一台按你的想法已经配置好的虚拟机,你就能以他为原型创建别的虚拟机。

  1. 使用 `“创建VMs” <virtual_machines.html#creating-vms>`_给出的方法创建并且开启一个虚拟机。

  2. 在虚拟机中做好需要的配置,然后点击按钮关闭该虚拟机。

  3. 等待虚拟机关闭。当虚拟机状态显示为“已停止”,执行下一步。

  4. 点击创建模板并填写如下内容:

    • 名称和显示文本。这些会在UI中显示,所以建议写一些描述信息。

    • 操作系统类型:这有助于CloudStack和hypervisor执行某些操作,并可能提高来宾虚拟机的性能。选择下列之一:

      • 如果已停止虚拟机的系统在列表中,选择它。

      • 如果已停止虚拟机系统类型不在列表中就选择其他。

      • 如果你打算以PV模式启动该模板,请选择其他PV(32位)或其他PV(64位)。这个选项只对XenServer有效。

        注解

        通常你不能选择比镜像版本老的OS版本。比如,选择CentOS 5.4来支持CentOS 6.2镜像通常来说是不工作的。在这种情况下,你应该选择其他。

    • 公共。选择是来让CloudStack里面的所有用户都能访问这个模板。模板将会出现在社区模板列表中。请参阅 “私有和公共模板”

    • 启用密码。如果你的模板中安装了CloudStack密码修改脚本,选择是。请参阅 给你的模板添加密码管理。

  5. 点击 添加

当模版创建过程完成后,新模版会出现在模版页面。在创建虚机时就可以使用新模版了。

从一个快照创建一个模板

如果你不想为了使用创建模板菜单项而停止虚拟机(如在`“从已有的虚机创建模板” <#creating-a-template-from-an-existing-virtual-machine>`_中描述的), 你可以通过CloudStack UI从任何快照直接创建模板。

上传模板

vSphere模板和ISOs

如果你通过vSphere Client上传模板,请确认OVA文件不包含ISO。如果是的话,从模板部署虚拟机将失败。

模板是使用HTTP协议通过URL来上传的。模板通常都很大。你可以使用gzip压缩它们以缩短上传时间。

要上传模板:

  1. 在左边的导航栏,点击模板。

  2. 点击注册模板。

  3. 填写以下内容:

    • 名称和显示文本。这些会在UI中显示,所以建议写一些描述信息。

    • URL。管理服务器会从指定的URL下载模板,就像 http://my.web.server/filename.vhd.gz

    • 区域:选择你希望该模板在哪个区域可用,或者选择所有区域使该模板在CloudStack的全部区域中可用。

    • 操作系统类型:这有助于CloudStack和hypervisor执行某些操作,并可能提高来宾虚拟机的性能。选择下列之一:

      • 如果已停止虚拟机的系统在列表中,选择它。

      • 如果已停止虚拟机系统类型不在列表中就选择其他。

        注解

        你不能选择比镜像版本老的OS版本。比如,选择CentOS 5.4来支持CentOS 6.2镜像通常来说是不工作的。在这种情况下,你应该选择其他。

    • Hypervisor:列表中显示支持的hypervisors。选择想要的一个。

    • 格式。上传的模板文件的格式,如VHD或OVA。

    • 启用密码。如果你的模板中安装了CloudStack密码修改脚本,选择是。请参阅 给你的模板添加密码管理。

    • 可提取。如果模板可以被提取,请选择是。如果选择了此选项,终端用户可以下载此模板的完整镜像。

    • 公共。选择是来让CloudStack里面的所有用户都能访问这个模板。模板将会出现在社区模板列表中。请参阅 “私有和公共模板”

    • 精选:如果你想让该模板在用户选择时更加突出,则选择Yes。该模板将出现在精选模板列表中。只有管理员可以将模板设置为精选。

导出模板

最终用户和管理员可以从CloudStack导出模板。导航到用户界面中的模板并选择动作菜单中的下载功能。

创建Linux模板

为了准备使用模板部署你的Linux VMs,可以使用此文档来准备Linux模板。本文档中,你要配置的模板被称为"主模板"。本指导目前只涵盖传统的安装方式,不涉及用户数据和cloud-init,并假设在安装过程中安装了openssh-server服务。

过程概述如下:

  1. 上传你的Linux ISO。

    更多信息,请参阅 “添加ISO”

  2. 使用这个ISO创建VM实例。

    更多信息,请参阅 “创建VMs”

  3. 准备Linux VM

  4. 从VM创建模板。

    更多信息,请参阅 “从已有的虚拟机创建模板”

Linux的系统准备工作

下列步骤将会为模板准备一个基本的Linux安装。

  1. 安装

通常在安装过程中给VM命名是一个好的做法,这么做能确保诸如LVM之类的组件不会表现为某一台机器独有。推荐在安装过程中将主机名命名为"localhost"。

    警告

    对于CentOS,必须要修改网络接口的配置文件,在这里我们编辑/etc/sysconfig/network-scripts/ifcfg-eth0文件,更改下面的内容。

    DEVICE=eth0
    TYPE=Ethernet
    BOOTPROTO=dhcp
    ONBOOT=yes
    

    下一步更新主模板中的包。

    • Ubuntu

      sudo -i
      apt-get update
      apt-get upgrade -y
      apt-get install -y acpid ntp
      reboot
      
    • CentOS

      ifup eth0
      yum update -y
      reboot
      
  2. 密码管理

    注解

如果需要,应该移除多余的用户账户(如在Ubuntu的安装过程中创建的用户)。首先确认root用户账户是启用的并且设置了密码,然后使用root登录。

    sudo passwd root
    logout
    

    使用root,移除任何在安装过程中创建的自定义用户账户。

    deluser myuser --remove-home
    

    关于设置密码管理脚本的相关说明,请参阅 给你的模板添加密码管理 ,这样能允许CloudStack通过web界面更改root密码。

  3. 主机名管理

    默认情况下CentOS在启动的时候配置主机名。但是,Ubuntu却没有此功能,对于Ubuntu,安装时使用下面步骤。

    • Ubuntu

      一个模板化的VM使用 /etc/dhcp/dhclient-exit-hooks.d 中的一个自定义脚本来设置主机名。这个脚本首先检查当前的主机名是否是localhost,如果是,它将从DHCP租约文件获取host-name、domain-name和fixed-address,并且使用这些值来设置主机名,将其追加到 /etc/hosts 文件以用于本地主机名解析。一旦这个脚本或者用户更改了主机名,它就不再根据新的主机名调整系统文件。此脚本同样会重建openssh-server keys,这些keys在做模板(如下所示)之前已被删除。保存下面的脚本到 /etc/dhcp/dhclient-exit-hooks.d/sethostname,并且调整权限。

      #!/bin/sh
      # dhclient change hostname script for Ubuntu
      oldhostname=$(hostname -s)
      if [ $oldhostname = 'localhost' ]
      then
          sleep 10 # Wait for configuration to be written to disk
          hostname=$(cat /var/lib/dhcp/dhclient.eth0.leases  |  awk ' /host-name/ { host = $3 }  END { printf host } ' | sed     's/[";]//g' )
          fqdn="$hostname.$(cat /var/lib/dhcp/dhclient.eth0.leases  |  awk ' /domain-name/ { domain = $3 }  END { printf     domain } ' | sed 's/[";]//g')"
          ip=$(cat /var/lib/dhcp/dhclient.eth0.leases  |  awk ' /fixed-address/ { lease = $2 }  END { printf lease } ' | sed     's/[";]//g')
          echo "cloudstack-hostname: Hostname _localhost_ detected. Changing hostname and adding hosts."
          echo " Hostname: $hostname \n FQDN: $fqdn \n IP: $ip"
          # Update /etc/hosts
          awk -v i="$ip" -v f="$fqdn" -v h="$hostname" "/^127/{x=1} !/^127/ && x { x=0; print i,f,h; } { print $0; }" /etc/  hosts > /etc/hosts.dhcp.tmp
          mv /etc/hosts /etc/hosts.dhcp.bak
          mv /etc/hosts.dhcp.tmp /etc/hosts
          # Rename Host
          echo $hostname > /etc/hostname
          hostname $hostname
          # Recreate SSH2
          export DEBIAN_FRONTEND=noninteractive
          dpkg-reconfigure openssh-server
      fi
      ### End of Script ###
      
      chmod 774  /etc/dhcp/dhclient-exit-hooks.d/sethostname
      

    警告

    当你准备好做你的主模板的时候请运行下列步骤。如果主模板在这些步骤期间重启了,那么你要重新运行所有的步骤。在这个过程的最后,主模板应该关机并且将其创建为模板,然后再部署。

  4. 移除udev持久设备规则

    这一步会移除你的主模板的特殊信息,如网络MAC地址,租约信息和CD块设备,这个文件会在下次启动时自动生成。

    • Ubuntu

      rm -f /etc/udev/rules.d/70*
      rm -f /var/lib/dhcp/dhclient.*
      
    • CentOS

      rm -f /etc/udev/rules.d/70*
      rm -f /var/lib/dhclient/*
      
  5. 移除SSH Keys

    这步是为了确认所有要作为模板的VMs的SSH Keys都不相同,否则这样会降低虚拟机的安全性。

    rm -f /etc/ssh/*key*
    
  6. 清除日志文件

    从主模板移除旧的日志文件是一个好习惯。

    cat /dev/null > /var/log/audit/audit.log 2>/dev/null
    cat /dev/null > /var/log/wtmp 2>/dev/null
    logrotate -f /etc/logrotate.conf 2>/dev/null
    rm -f /var/log/*-* /var/log/*.gz 2>/dev/null
    
  7. 设置主机名

    为了使Ubuntu的DHCP脚本和CentOS的dhclient能够设置VM主机名,需要将主模板的主机名设置为"localhost"。运行下面的命令来更改主机名。

    hostname localhost
    echo "localhost" > /etc/hostname
    
  8. 设置用户密码期限

    这步是要在模板部署之后强制用户更改VM的密码。

    passwd --expire root
    
  9. 清除用户历史

    下一步来清除你曾经运行过的bash命令。

    history -c
    unset HISTFILE
    
  10. 关闭VM

    现在你可以关闭你的主模板并且创建模板了!

    halt -p
    
  11. 创建模板!

    现在你可以创建模板了,更多信息请参阅 “从已存在的虚拟机创建模板”

注解

通过Ubuntu和CentOS的模板分发的虚机可能需要重启才让主机名生效。

创建Windows模板

Windows模板在用于分发多个虚拟机之前,必须使用Sysprep进行初始化。Sysprep允许你创建一个通用的Windows模板,并避免任何可能的SID冲突。

注解

(XenServer)在XenServer上运行的Windows VMs需要安装PV驱动,驱动可以包含在模板中,也可以在创建完VM后添加。PV驱动对于基本的管理功能是必要的,比如挂载额外的卷和ISO镜像、在线迁移和正常关机。

过程概述如下:

  1. 上传你的Windows ISO。

    更多信息,请参阅 “添加ISO”

  2. 使用这个ISO创建VM实例。

    更多信息,请参阅 “创建VMs”

  3. 按照你所使用的WIndows Server版本进行Windows Server 2008 R2(下面的)或者Windows Server 2003 R2中Sysprep的操作步骤。

  4. 准备工作完成了。现在你可以按照创建Windows模板中描述的来创建模板。

为Windows Server 2008 R2进行系统准备

对于Windows 2008 R2,你运行Windows系统镜像管理来创建一个自定义的sysprep应答XML文件。Windows系统镜像管理作为Windows Automated Installation Kit (AIK)的一部分安装在系统中。Windows AIK可以从 `微软下载中心 <http://www.microsoft.com/en-us/download/details.aspx?id=9085>`_下载到。

按照以下步骤运行Windows 2008 R2的sysprep:

注解

这些步骤的概述来源于Charity Shelbourne一个非常棒的指导,发布在 Windows Server 2008 Sysprep Mini-Setup.

  1. 下载和安装Windows AIK

    注解

    刚刚创建的Windows 2008 R2上面并没有安装Windows AIK。Windows AIK不是你创建的模板中的一部分。它仅仅用于创建sysprep应答文件。

  2. 将Windows 2008 R2安装DVD中\sources目录下的install.wim文件复制到本地硬盘。这是一个非常大的文件,可能会复制较长时间。Windows AIK要求WIM文件是可写的。

  3. 打开Windows系统镜像管理器。

  4. 在Windows镜像面板,右击选择一个Windows镜像或编录文件选项来读取你刚刚复制的install.wim文件。

  5. 选择Windows 2008 R2版本。

    你可能会收到一个警告提示说不能打开编录文件。点击是来创建一个新的编录文件。

  6. 在应答文件面板,右击来创建一个新的应答文件。

  7. 使用以下步骤从Windows系统镜像管理器生成应答文件:

    1. 第一个要自动化的页面是语言和国家或地区选择页面。为此,在Windows镜像面板中展开组件,右击Microsoft-Windows-International-Core,选择将设置添加到Pass 7 oobeSystem。在应答文件面板中,为语言和国家或地区配置适当的InputLocale、SystemLocale、UILanguage和UserLocale。如果你对这些设置有疑问,可以右击某个设置并选择帮助,这将打开对应的CHM帮助文件,其中包括与该设置相关的一些示例。


    2. 你需要将软件授权条款选择页配置为自动进行,也就是众所周知的EULA页面。为此,展开Microsoft-Windows-Shell-Setup组件,选中OOBE设置,将此设置加入到Pass 7 oobeSystem中。在设置中,将HideEULAPage设置为true。


    3. 确保恰当的设置了序列号。如果使用MAK的话,可以在Windows2008R2的虚拟机上输入MAK。并不需要将MAK输入到Windows映像管理器中。如果你使用KMS主机进行激活,则不需要输入产品序列号。Windows卷激活的详细信息可以在http://technet.microsoft.com/en-us/library/bb892849.aspx 上查看。

    4. 类似地,还需要自动化更改管理员密码页。展开Microsoft-Windows-Shell-Setup组件(如果没有展开的话),展开用户账户,右键点击管理员密码,添加设置到Pass 7 oobeSystem配置,在设置中指定一个密码。


      你可以阅读AIK文档并设置更多适合你部署的选项。以上步骤是使Windows无人值守安装正常工作所需的最少设置。

  8. 将答案文件保存为unattend.xml,可以忽略验证窗口中的警告信息。

  9. 将unattend.xml文件拷贝到Windows 2008 R2虚拟机的c:\windows\system32\sysprep文件夹下。

  10. 一旦将unattend.xml文件放到 c:\windows\system32\sysprep文件夹下,则按以下步骤运行sysprep工具:

    cd c:\Windows\System32\sysprep
    sysprep.exe /oobe /generalize /shutdown
    

    Windows 2008 R2虚拟机在sysprep完成后,会自动关闭。

针对Windows Server 2003R2的系统准备

早期的windows版本有个不同的sysprep的工具,按照这些步骤准备Windows Server 2003 R2。

  1. 从Windows 安装CD中提取\support\tools\deploy.cab到WIndows 2003 R2 虚拟机中的此目录 c:\sysprep。

  2. 运行 c:\sysprep\setupmgr.exe 来创建syprep.inf文件。

    1. 选择创建新的来创建一个新的应答文件。

    2. 安装的类型选择 “Sysprep 安装”

    3. 选择合适的OS版本

    4. 在许可协议界面,选择”是,完全自动安装”

    5. 提供你的名称和组织。

    6. 保留显示设置为默认。

    7. 设置合适的时区。

    8. 提供你的产品key。

    9. 给你的部署选择一个合适的许可模式。

    10. 选择”自动生成计算机名”。

    11. 输入一个默认的管理员密码。如果你启用了密码重置功能,用户实际上将不会使用这个密码。在来宾虚机启动之后实例管理器会重置密码。

    12. 网络组件使用”典型设置”。

    13. 选择“WORKGROUP”选项。

    14. 电话服务使用默认。

    15. 选择合适的区域设置。

    16. 选择合适的语言选项。

    17. 不要安装打印机。

    18. 不要指定”运行一次”。

    19. 你不必指定标示字符串。

    20. 将应答文件保存到c:\sysprep\sysprep.inf。

  3. 运行以下命令行来sysprep镜像:

    c:\sysprep\sysprep.exe -reseal -mini -activated
    

    在这个步骤之后,虚拟机会自动关机。

导入Amazon Machine Images

以下过程描述了当使用XenServer hypervisor时,如何导入一个AMI到Cloudstack中。

假定你有一个叫做CentOS_6.2_x64的AMI文件,并且你将在一台CentOS主机上进行操作。如果AMI是一个Fedora镜像,你需要先在Fedora主机上操作。

一旦镜像文件在CentOS/Fedora主机上自定义完毕,你需要一台使用基于文件的存储库(本地ext3 SR或者NFS SR)的XenServer主机,将其转换为VHD。

注解

当你在拷贝和粘贴这个命令时,请确保所有的命令都在同一行里。有的文档拷贝工具会将这个命令分割为多行。

导入一个AMI:

  1. 在镜像文件上建立loopback挂载:

    # mkdir -p /mnt/loop/centos62
    # mount -o loop  CentOS_6.2_x64 /mnt/loop/centos62
    
  2. 安装kernel-xen包到镜像中。这会将PV内核和ramdisk安装到镜像中。

    # yum -c /mnt/loop/centos62/etc/yum.conf --installroot=/mnt/loop/centos62/ -y install kernel-xen
    
  3. 在 /boot/grub/grub.conf中创建一个引导。

    # mkdir -p /mnt/loop/centos62/boot/grub
    # touch /mnt/loop/centos62/boot/grub/grub.conf
    # echo "" > /mnt/loop/centos62/boot/grub/grub.conf
    
  4. 确定已安装到镜像文件中的PV内核的名称:

    # cd /mnt/loop/centos62
    # ls lib/modules/
    2.6.16.33-xenU  2.6.16-xenU  2.6.18-164.15.1.el5xen  2.6.18-164.6.1.el5.centos.plus  2.6.18-xenU-ec2-v1.0  2.6.21.7-2.fc8xen  2.6.31-302-ec2
    # ls boot/initrd*
    boot/initrd-2.6.18-164.6.1.el5.centos.plus.img boot/initrd-2.6.18-164.15.1.el5xen.img
    # ls boot/vmlinuz*
    boot/vmlinuz-2.6.18-164.15.1.el5xen  boot/vmlinuz-2.6.18-164.6.1.el5.centos.plus  boot/vmlinuz-2.6.18-xenU-ec2-v1.0  boot/vmlinuz-2.6.21-2952.fc8xen
    

    Xen的kernels/ramdisk通常以"xen"结尾。对于你选择的内核版本,lib/modules中必须有对应的条目,并且必须有与之对应的initrd和vmlinuz。综上,唯一满足条件的内核版本是2.6.18-164.15.1.el5xen。

  5. 根据上面确定的内核版本,在grub.conf文件中创建一个条目。以下为条目示例:

    default=0
    timeout=5
    hiddenmenu
    title CentOS (2.6.18-164.15.1.el5xen)
       root (hd0,0)
       kernel /boot/vmlinuz-2.6.18-164.15.1.el5xen ro root=/dev/xvda
       initrd /boot/initrd-2.6.18-164.15.1.el5xen.img
    
  6. 编辑 etc/fstab,将"sda1"改为"xvda",并将"sdb"改为"xvdb"。

    # cat etc/fstab
    /dev/xvda  /         ext3    defaults        1 1
    /dev/xvdb  /mnt      ext3    defaults        0 0
    none       /dev/pts  devpts  gid=5,mode=620  0 0
    none       /proc     proc    defaults        0 0
    none       /sys      sysfs   defaults        0 0
    
  7. 启用通过控制台登录。XenServer系统上默认的控制台设备是xvc0。确保etc/inittab和etc/securetty中分别有以下行:

    # grep xvc0 etc/inittab
    co:2345:respawn:/sbin/agetty xvc0 9600 vt100-nav
    # grep xvc0 etc/securetty
    xvc0
    
  8. 确保ramdisk支持PV磁盘和PV网络。根据你之前选择的内核版本自定义这些。

    # chroot /mnt/loop/centos62
    # cd /boot/
    # mv initrd-2.6.18-164.15.1.el5xen.img initrd-2.6.18-164.15.1.el5xen.img.bak
    # mkinitrd -f /boot/initrd-2.6.18-164.15.1.el5xen.img --with=xennet --preload=xenblk --omit-scsi-modules 2.6.18-164.15.1.el5xen
    
  9. 修改密码

    # passwd
    Changing password for user root.
    New UNIX password:
    Retype new UNIX password:
    passwd: all authentication tokens updated successfully.
    
  10. 退出chroot

    # exit
    
  11. 检查 `etc/ssh/sshd_config`中关于允许在ssh登录时使用密码的相关行。

    # egrep "PermitRootLogin|PasswordAuthentication" /mnt/loop/centos62/etc/ssh/sshd_config
    PermitRootLogin yes
    PasswordAuthentication yes
    
  12. 如果你需要通过CloudStack UI或API启用重置模板密码的功能,请在镜像中安装密码更改脚本。相关内容请参考 给你的模板添加密码管理。

  13. 卸载并删除loopback挂载。

    # umount /mnt/loop/centos62
    # losetup -d /dev/loop0
    
  14. 复制镜像文件到XenServer主机的文件存储库。在下面的例子中,XenServer是”xenhost”。这个XenServer有一个UUID为a9c5b8c8-536b-a193-a6dc-51af3e5ff799的NFS库。

    # scp CentOS_6.2_x64 xenhost:/var/run/sr-mount/a9c5b8c8-536b-a193-a6dc-51af3e5ff799/
    
  15. 登录到XenServer然后创建一个与镜像同样大小的VDI。

    [root@xenhost ~]# cd /var/run/sr-mount/a9c5b8c8-536b-a193-a6dc-51af3e5ff799
    [root@xenhost a9c5b8c8-536b-a193-a6dc-51af3e5ff799]#  ls -lh CentOS_6.2_x64
    -rw-r--r-- 1 root root 10G Mar 16 16:49 CentOS_6.2_x64
    [root@xenhost a9c5b8c8-536b-a193-a6dc-51af3e5ff799]# xe vdi-create virtual-size=10GiB sr-uuid=a9c5b8c8-536b-a193-a6dc-51af3e5ff799 type=user name-label="Centos 6.2 x86_64"
    cad7317c-258b-4ef7-b207-cdf0283a7923
    
  16. 将镜像导入到VDI中。这可能会花费10-20分钟。

    [root@xenhost a9c5b8c8-536b-a193-a6dc-51af3e5ff799]# xe vdi-import filename=CentOS_6.2_x64 uuid=cad7317c-258b-4ef7-b207-cdf0283a7923
    
  17. 找到这个VHD文件。它的名字是以VDI的UUID命名的。压缩并上传至你的web服务器。

    [root@xenhost a9c5b8c8-536b-a193-a6dc-51af3e5ff799]# bzip2 -c cad7317c-258b-4ef7-b207-cdf0283a7923.vhd > CentOS_6.2_x64.vhd.bz2
    [root@xenhost a9c5b8c8-536b-a193-a6dc-51af3e5ff799]# scp CentOS_6.2_x64.vhd.bz2 webserver:/var/www/html/templates/
    

将Hyper-V的VM转换为模板

要将Hyper-V的VM转换为兼容XenServer的CloudStack模板,你需要一台独立的、添加了NFS VHD SR的XenServer主机。XenServer可以使用你在CloudStack中所用的任意版本,但XenCenter必须是5.6 FP1或SP2(它向下兼容5.6)。另外,添加一个NFS ISO SR会有帮助。

对于Linux VMs,在尝试让VM在XenServer中工作之前你可能必须在Hyper-V做一些准备工作。如果你仍然想要在Hyper-V中使用这个VM的话,克隆这个VM然后在克隆的虚机上操作。卸载Hyper-V集成组件然后检查任何/etc/fstab中涉及的相关设备名称:

  1. 从 linux_ic/drivers/dist 目录中,运行make uninstall(“linux_ic” 是复制的Hyper-V集成组件的路径)。

  2. 从备份/boot/ 恢复原始的文件系统(备份名称为 *.backup0)。

  3. 从 /boot/grub/menu.lst移除 “hdX=noprobe” 。

  4. 通过名称从 /etc/fstab中检查任何挂载的分区。将这些条目(如果有)改成使用LABEL或者UUID挂载。你能通过 blkid命令获得这些信息。

下一步请确保Hyper-V中的这个VM没有运行,然后把VHD送至XenServer。有两个选择。

选项一:

  1. 使用XenCenter导入VHD。在XenCenter中,找到Tools>Virtual Appliance Tools>Disk Image Import。

  2. 选择VHD,然后点击下一步。

  3. 给VM起个名字,在Storage下选择 NFS VHD SR,启用”Run Operating System Fixups” 然后选择NFS ISO SR。

  4. 点击下一步,然后点击完成。虚拟机随即创建完成。

选项二:

  1. 运行 XenConvert ,选择 VHD,选择 XenServer,点击下一步。

  2. 选择VHD,然后点击下一步。

  3. 输入XenServer主机信息,点击下一步。

  4. 输入VM名称,点击“下一步”,点击“转换”。VM应该就创建了。

一旦你通过Hyper-V的VHD创建好了虚拟机,请按以下步骤进行准备:

  1. 启动虚拟机,卸载Hyper-V集成服务,并重新启动。

  2. 安装XenServer Tools,然后重新启动。

  3. 按需要准备VM。例如在Windows VM上执行sysprep。请参阅 “创建Windows模板”

以上任一选项将在HVM模式下创建一个VM。对于Windows虚拟机来说这是很好的,但Linux的虚拟机可能无法达到最佳性能。要转换Linux虚拟机到PV模式,对于不同的发行版本将需要额外的步骤。

  1. 关闭虚拟机,从NFS存储拷贝VHD到一个web服务器;比如,在web服务器上挂载NFS共享然后拷贝它,或者是在XenServer主机上用sftp或scp将VHD上传到web服务器。

  2. 在 CloudStack中,使用以下值创建一个新的模板:

    • URL。给VHD指定URL。

    • OS类型。使用适当的OS。对于CentOS的PV模式来说,选择其他PV (32位)或其他PV(64位)。这个选项仅适用于XenServer。

    • Hypervisor。XenServer

    • 格式。VHD

模板就创建好了,然后你可以通过它创建实例。

给你的模板添加密码管理

CloudStack提供了可选的密码重置功能,该功能允许用户在CloudStack UI上设置临时的admin或root密码,也可以重置现有的admin或root密码。

为启用密码重置功能,您需要下载额外的脚本到模版上。当您之后在CloudStack中添加模版时,您可以指定该模版是否启用重置admin或root密码的功能。

密码管理功能总是在虚机启动时重置账号的密码。该脚本通过对虚拟路由器的HTTP调用,获取需要重置的账号密码。启动时,只要虚拟路由器可以访问,虚机就可以获得应该设置的账号密码。当用户请求密码重置时,管理服务器会生成新密码,并发送到虚拟路由器。因而,虚机需要重启新密码才能生效。

在虚机重启时,如果脚本不能连接到虚拟路由器,则密码不会被重置,但启动过程还会继续正常执行。

Linux系统安装

使用以下步骤开始Linux系统的安装:

  1. 下载cloud-set-guest-password脚本文件:

  2. 拷贝本文件到 /etc/init.d 。

    在某些linux发行版拷贝此文件到 /etc/rc.d/init.d

  3. 执行以下命令使脚本可执行:

    chmod +x /etc/init.d/cloud-set-guest-password
    
  4. 根据不同的Linux发行版,选择适当的步骤继续。

    在Fedora,CentOS/RHEL和Debian上运行:

    chkconfig --add cloud-set-guest-password
    
Windows OS 安装

从 `下载页 <http://sourceforge.net/projects/cloudstack/files/Password%20Management%20Scripts/CloudInstanceManager.msi/download>`_ 下载安装程序CloudInstanceManager.msi,并在新创建的Windows虚拟机中运行安装程序。

删除模板

模板可以被删除。在一般情况下,当一个模板跨越多个区域,只有被选中的副本才会被删除,在其他区域相同的模板将不会被删除。CentOS的模板是一个例外。如果所提供的CentOS的模板被删除,它从所有区域都将被删除。

当删除模板时,从它们中产生的虚拟机实例将继续运行。然而,新的虚拟机不能在被删除模板的基础上创建。

使用主机

使用主机

添加主机

添加主机能为来宾VMs提供更多的容量。更多需求与说明请参阅 “添加主机”

主机的维护计划与维护模式

你可以使一台主机进入维护模式。当激活维护模式时,这台主机将不会接纳新的来宾VMs,同时上面的VMs会无缝地迁移到其他非维护模式的主机上。这个迁移使用在线迁移技术并且不会中断用户的操作。

vCenter与维护模式

要使vCenter主机进入维护模式,vCenter和CloudStack上都必须进行此操作。CloudStack和vCenter有各自的维护模式,他们需要紧密合作。

  1. 在CloudStack中,将主机进入”维护计划”模式。这个操作不会调用vCenter的维护模式,但是会将VMs迁离该主机。

    当CloudStack维护模式启用后,主机首先会进入准备维护状态。在这个阶段它不能运行新的来宾VMs。然后所有的VMs将会被迁离该主机。主机使用在线迁移来迁移VMs。这种方式能够使来宾VMs在迁移到其他主机的过程中不会中断用户的操作。

  2. 等”准备好维护”指示灯出现在UI中。

  3. 现在使用vCenter通过必要的步骤将主机进入维护模式。在此期间,主机不会运行新的VM。

  4. 当维护任务完成之后,按以下操作将主机退出维护模式:

    1. 首先通过vCenter退出vCenter维护模式。

      这么做是为了主机能够准备好以便CloudStack恢复它。

    2. 然后通过CloudStack的管理员UI来取消CloudStack维护模式

      当主机恢复正常,被迁移走的VMs可能需要手工迁移回来并且也能在主机上添加新的VMs了。

XenServer和维护模式

对于XenServer,你能够通过使用XenCenter中的维护模式功能将一台服务器临时的离线。当你使一台服务器进入维护模式,所有运行的VMs都会自动的迁移到同一个池中的其他主机上。如果此服务器是池master主机,那么此池会选举一个新的master主机。当一台服务器在维护模式下时,你不能在上面创建或启动任何VMs。

使一台服务器进入维护模式:

  1. 在资源面板,选择服务器,然后按下列步骤进行操作:

    • 右击,然后在弹出的快捷菜单中点击进入维护模式。

    • 在服务器菜单,点击进入维护模式。

  2. 点击进入维护模式。

当所有运行当中的VMs成功的迁离该主机后,在资源面板中会显示服务器的状态。

使服务器退出维护模式:

  1. 在资源面板,选择服务器,然后按下列步骤进行操作:

    • 右击,然后在弹出的快捷菜单中点击退出维护模式。

    • 在服务器菜单,点击退出维护模式。

  2. 点击退出维护模式。

禁用和启用Zones,Pods和Clusters

你可以启用或禁用一个zone,pod或者cluster而不用永久的从云中移除他们。这对于维护或者当云中一部分架构的可靠性有问题的时候很有用。禁用状态下的zone,pod或cluster不会接受新的分配,除非其状态变为启用。当一个zone,pod或cluster是初次添加到云中的,默认的情况下它是禁用的。

要禁用和启用一个zone,pod或者cluster:

  1. 使用管理员权限的账号登录到CloudStack

  2. 在左侧导航栏中,点击基础架构

  3. 点击区域中的查看更多。

  4. 如果你要禁用或启用一个zone,请在列表里找到zone的名称,然后点击启用/禁用按钮。

  5. 如果你要禁用或启用一个pod或者cluster,点击包含该pod或cluster的zone名称。

  6. 点击计算这个标签。

  7. 在示意图中的Pods或者Clusters节点,点击查看所有。

  8. 点击列表中的pod或者cluster名称。

  9. 点击启用/禁用按钮。

移除主机

主机在需要的时候可以被移除。这个过程取决于主机所使用的hypervisor类型。

移除XenServer和KVM主机

cluster中的主机只有进入维护模式才能被移除。这么做是为了确保其上所有的VMs被迁移至其他主机。要从云中移除一个主机:

  1. 将主机进入维护模式。

    请参考`“主机的维护计划与维护模式” <#scheduled-maintenance-and-maintenance-mode-for-hosts>`_.

  2. 对于KVM,停止cloud-agent服务。

  3. 使用UI选项来移除主机。

    然后你可以关掉主机,重用它的IP地址,重新安装系统,等等。

移除vSphere主机

要移除此类型的主机,首先按照 `“主机的维护计划与维护模式” <#scheduled-maintenance-and-maintenance-mode-for-hosts>`_中的描述将其进入维护模式。然后使用CloudStack来移除主机。使用CloudStack移除主机时,CloudStack不会直接操作主机。但是,主机可能仍然存留在vCenter群集中。

重新安装主机

你可以在将主机进入维护模式并移除它之后重新安装系统。如果主机是宕机状态而不能进入维护模式,在重装它之前仍然能被移除。

在主机上维护Hypervisors

当主机上运行了hypervisor软件,请确保安装了供应商提供的所有修补程序。请通过供应商跟踪虚拟化平台的补丁发布情况,一旦发布,请尽快打上补丁。CloudStack不会跟踪或通知您所需要的虚拟化平台补丁。您的主机及时打上最新虚拟化平台补丁是非常重要的。虚拟化平台厂商很可能会拒绝支持未打最新补丁的系统。

注解

没有最新的修补程序可能会导致数据出错或虚拟机丢失。

(XenServer)更多信息,请参考`CloudStack知识库中高度推荐的XenServer修补程序 <http://docs.cloudstack.org/Knowledge_Base/Possible_VM_corruption_if_XenServer_Hotfix_is_not_Applied/Highly_Recommended_Hotfixes_for_XenServer_5.6_SP2>`_.

更改主机密码

数据库中的XenServer主机、KVM主机或者vSphere主机的密码可以被更改。注意群集中的所有节点密码必须一致。

要更改一台主机的密码:

  1. 让群集中所有的主机状态保持一致。

  2. 更改群集中所有主机的密码。此刻主机上的密码与CloudStack已知的密码不一致。两个密码不一致的话会导致群集上的操作失败。

  3. 获取群集中你更改了密码的主机ID列表。你必须访问数据库来获取这些ID。为每个你要更改密码的主机名(或者vSphere群集) “h”,执行:

    mysql> select id from cloud.host where name like '%h%';
    
  4. 这条命令将会返回一个ID。记录这些主机的ID。

  5. 在数据库中为这些主机更新密码。在这个例子中,我们将主机ID为5,10和12的密码更改为”password”。

    mysql> update cloud.host set password='password' where id=5 or id=10 or id=12;
    

超配和服务方案限制

(支持XenServer、KVM和VMware)

CPU和内存(RAM)超配直接影响每个群集中主机上可以运行的VMs数量。这样可以帮助优化资源的使用。提高超配比率,能使资源被更充分地利用。如果比率设为1,那么表示没有使用超配。

管理员也可以在cpu.overprovisioning.factor和mem.overprovisioning.factor这两个全局配置变量中设置全局默认超配比率。默认的值是1:默认情况下超配是关闭的。

超配比率是由CloudStack的容量计算器动态调整的。比如:

容量=2GB 超配系数=2 超配后容量=4GB

按照这个配置,假设你部署了3个VMs,每个VM 1GB:

已使用=3GB 空闲=1GB
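
上面的容量算式可以用一小段Python来示意(仅作说明,并非CloudStack内部实现):

```python
# 超配容量计算示意(假设场景,非CloudStack内部实现)
def overprovisioned_capacity(physical_gb, factor):
    """返回按超配系数放大后的可分配容量(GB)。"""
    return physical_gb * factor

capacity = overprovisioned_capacity(2, 2)  # 容量2GB,系数2 -> 超配后4GB
used = 3 * 1                               # 部署3个VM,每个1GB
free = capacity - used                     # 空闲1GB
```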

管理员可以在每个群集中指定一个内存超配比率,也可以同时指定CPU和内存超配比率。

在任何已有的云中,hypervisor、存储和硬件配置影响每个主机上VMs最佳数量。同一个云中的每个群集的这些配置可能都不同。单一的全局超配设置不能为云中所有的群集提供最佳效果。它只能作为一个基线。无论CloudStack使用哪种算法来放置一个VM,每个群集都提供了细颗粒度的设置以提供更好的资源效果。

超配设置和专用资源(对一个账号分配一个特定的群集)一起使用时能对不同用户有效的提供的不同服务级别。比如,一个账号购买了比较高级别的服务,那么分配给他一个超配比率为1的专用群集,购买低级别服务的账号分配一个比率为2的群集。

当一个新主机被添加到群集中,CloudStack假设配置了超配的群集中的主机能够提供CPU和RAM的超配能力。这个要靠管理员来决定主机是否真的能够提供设置的超配级别。

XenServer和KVM中超配的局限性
  • 在XenServer中,由于这个hypervisor的限制,超配系数不能超过4。

  • KVM hypervisor不能动态的管理分配给VMs的内存。CloudStack设置VM能够使用的内存最小和最大量。Hypervisor基于存储器争用技术在设定范围内调整内存。

存储超配的要求

为了让超配能够正常工作需要几个前提条件。此特性取决于OS类型,hypervisor功能和特定的脚本。管理员负责确认这些条件都符合。

Balloon驱动

所有VMs中都安装了balloon驱动。Hypervisor靠balloon驱动与VM通讯以释放内存和让内存变得可用。

XenServer

Balloon驱动是Xen pv或者PVHVM驱动的一部分。Linux kernels 2.6.36和以上版本中包含了Xen pvhvm驱动。

VMware

Balloon驱动是VMware tools的一部分。在一台超配群集中部署的所有的VMs都应该安装VMware tools。

KVM

所有VMs都需要支持virtio驱动。Linux kernel versions 2.6.25和更高版本中已经安装了这些驱动。管理员必须在virtio的配置文件中配置CONFIG_VIRTIO_BALLOON=y。

Hypervisor功能

Hypervisor必须能够使用内存ballooning。

XenServer

Hypervisor必须启用了DMC(动态内存控制)功能。只有XenServer高级版以及更高版本拥有这个功能。

VMware、KVM

默认支持内存ballooning。

设置存储超配系数

管理员有两种方法来设置CPU和RAM超配系数。第一,当新的群集被创建完成的时候全局配置中的cpu.overprovisioning.factor和mem.overprovisioning.factor将生效。第二,对于已存在的群集可以直接修改系数。

只有在变更之后部署VMs,设置才会生效。如果想让变更之前部署的VMs也能继承新的超配比率,你必须重启VMs。当此操作完成之后,CloudStack会重新计算或者调整已使用的资源,并且基于新的超配比率预留出容量,以保证CloudStack正确的掌握了剩余容量的情况。

注解

在重新计算容量的过程中,最好不要部署新的VMs,以防新的可用容量不足以满足这些VMs的需求。等新的已用/可用容量数值就绪后,再确认空间对于你想创建的VMs是否足够。

在已存在的群集中更改超配系数:

  1. 使用管理员登录到CloudStack管理界面。

  2. 在左侧导航栏中,点击基础架构

  3. 在群集页面,点击查看所有。

  4. 选择你要操作的群集,点击编辑按钮。

  5. 在CPU overcommit ratio和RAM overcommit ratio区域里填入你希望的超配系数。这里的初始值是从全局配置设置里继承而来的。

    注解

    在XenServer中,由于这个hypervisor的限制,超配系数不能超过4。

服务方案限制和超配

服务方案限制(比如1GHz,1 core)是受到core数严格限制的。比如,一个使用1 core服务方案的用户只能用 1core,无论这个主机多空闲。

GHz的服务方案限制只存在于CPU资源的争用中。比如,假设用户在一个有2GHz core的主机上创建了一个1 GHz的服务方案,并且该用户是这个主机上唯一一个用户。那么该用户有2 GHz可用性能。当多个用户尝试使用CPU,则由权重系数来调度CPU资源。这个权重基于服务方案中的时钟速度。用户分配到的CPU GHz与服务方案中一致。比如,用户从一个2GHz服务方案中创建的VM分配到的CPU是从1 GHz方案中分配到的2倍。CloudStack不能提供内存的超配。
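
上述按服务方案时钟速度加权的争用分配可以粗略示意如下(简化模型,假设性示例,并非hypervisor调度器的实现):

```python
# 按服务方案时钟速度加权分配CPU份额的示意(简化模型,非hypervisor调度器实现)
def cpu_shares(offerings_mhz):
    """offerings_mhz: {vm名: 服务方案时钟速度(MHz)};返回各VM在争用时的CPU份额。"""
    total = sum(offerings_mhz.values())
    return {vm: mhz / total for vm, mhz in offerings_mhz.items()}

# 2GHz方案的VM在争用时分到的份额是1GHz方案VM的2倍
shares = cpu_shares({"vm_2ghz": 2000, "vm_1ghz": 1000})
```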

VLAN供应

CloudStack能在主机上自动创建和销毁桥接至VLAN的网络接口。一般来说,管理员不需要介入此处理过程。

CloudStack根据hypervisor类型的不同,管理VLANs的方式也不同。对于XenServer或者KVM来说,只在使用VLANs的主机上创建VLANs,并且当所有使用VLANs的来宾虚机被销毁或者移动至其他主机的时候,这些VLANs就会被销毁。

vSphere上的VLANs是在群集中的所有主机上配置的,不管主机上有没有使用VLAN的来宾虚机在运行。这样允许管理员在vCenter中不需在目标主机上创建VLAN就可以执行在线迁移和其他功能。此外,当主机不再需要的时候VLANs也不会被移除。

只要每个物理网络都拥有自己独立的二层网络基础设施(比如交换机),你就能够在不同的物理网络上使用相同的VLANs。比如,在高级zone中部署物理网络A和B的时候,你可以为两者都指定VLAN范围500-1000。如果你的VLANs用尽了,这个功能允许你在不同的物理网卡上追加一个二层物理网络并且使用同样的VLANs设置。另一个优点是你可以在不同的物理网卡上为不同的客户使用同样的IPs设置,每个客户都有自己的路由器和来宾网络。

VLAN分配示例

公共和来宾流量需要VLAN,下面是一个VLAN分配的示例:

VLAN IDs      流量类型            范围

小于500       管理流量。          出于管理目的而预留。CloudStack、hypervisors和系统虚机能访问它。

500-599       承载公共流量。      CloudStack账户。

600-799       承载来宾流量。      CloudStack账户。账户专用的VLAN从此池中选取。

800-899       承载来宾流量。      CloudStack账户。CloudStack管理员为账户指定特定的VLAN。

900-999       承载来宾流量。      CloudStack账户。可作为项目、域或所有账户的作用域。

大于1000      保留为将来使用。

添加不连续的VLAN范围

CloudStack能让你灵活地给你的网络添加不连续的VLAN范围。在创建一个zone的时候,管理员要么更新一个已存在的VLAN范围,要么添加多个不连续的VLAN范围。你同样可以使用updatePhysicalNetwork API来扩展VLAN范围。

  1. 使用管理员或者终端用户账号登录CloudStack UI。

  2. 确保VLAN范围没有被使用。

  3. 在左边的导航,选择基础架构。

  4. 在Zones上,点击查看更多,然后点击你要进行操作的zone。

  5. 点击物理网络。

  6. 在图中的来宾节点上,点击配置

  7. 点击编辑按钮。

    现在VLAN范围区域是可编辑的了。

  8. 指定VLAN范围的起始值和结束值,多个范围之间用逗号隔开。

    指定所有你想使用的VLANs,如果你添加新的范围到已有列表里,那么没有指定的VLANs将被移除。

  9. 点击应用
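
步骤8中输入的不连续VLAN范围字符串,其解析方式可以示意如下(简化示例,并非CloudStack源码):

```python
# 解析形如 "500-550,600-650" 的不连续VLAN范围字符串的示意(简化;非CloudStack源码)
def parse_vlan_ranges(spec):
    """返回 [(起始, 结束), ...] 的列表;单个VLAN(如 "700")也可接受。"""
    ranges = []
    for part in spec.split(","):
        start, _, end = part.strip().partition("-")
        ranges.append((int(start), int(end or start)))
    return ranges
```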

给隔离的网络指定VLAN

CloudStack能够让你控制VLAN到隔离网络的分配。作为Root管理员,你可以在创建网络的时候为其分配一个VLAN ID,就像共享网络那样。

动态VLAN分配同样被支持:当网络转换为运行状态时,从物理网络的VNET范围中随机分配一个VLAN给该网络。当网络作为网络垃圾回收过程的一部分而关闭时,VLAN会被回收到VNET池。当网络再次启用的时候,这个VLAN可以被它重用,也可以被其他网络使用。每个新启用的网络都会分配一个新的VLAN。

只有Root管理员能够分配VLANs,因为常规的用户和域管理员并不清楚物理网络拓扑。他们也不能查看哪个VLAN被分配给网络。

要把VLANs分配给隔离的网络,

  1. 使用下列指定的步骤创建一个网络方案:

    • 来宾网络类型:选择隔离的。

    • 指定VLAN:选择一个选项。

    更多信息,请参考CloudStack安装指导。

  2. 使用这个网络方案,创建一个网络。

    你可以创建一个VPC层或者一个隔离网络。

  3. 当你创建网络的时候指定VLAN。

    当VLAN被指定后,CIDR和网关就被分配给这个网络了,并且它的状态也变成Setup了。在这个状态下,网络不会被回收。

注解

一旦VLAN被分配给一个网络的话,你就不能更改它。VLAN将伴随着网络的整个生命周期。

使用存储

使用存储

存储概述

CloudStack定义了两种存储:主存储和辅助存储。主存储可以使用iSCSI或NFS协议。另外,直接附加存储可被用于主存储。辅助存储通常使用NFS协议。

CloudStack不支持临时存储。所有节点上的所有卷都是持久存储。

主存储

本章节讲述的是关于CloudStack的主存储概念和技术细节。更多关于如何通过CloudStack UI安装和配置主存储的信息,请参阅安装向导。

“关于主存储”

主存储的最佳实践
  • 主存储的速度会直接影响来宾虚机的性能。如果可能,为主存储选择容量小、转速高的硬盘或SSDs。

  • CloudStack用两种方式使用主存储:

    静态:CloudStack管理存储的传统方式。在这个模式下,要给CloudStack预先分配几个存储(比如一个SAN上的卷)。然后CloudStack在上面创建若干个卷(可以是root和/或者数据盘)。如果使用这种技术,确保存储上没有数据。给CloudStack添加存储会销毁已存在的所有数据。

    动态:这是一个比较新的CloudStack管理存储的方式。在这个模式中,给CloudStack使用的是一个存储系统(但不是预分配的存储)。CloudStack配合存储一起工作,动态的在存储系统上创建卷并且存储系统上的每个卷都映射到一个CloudStack卷。这样做非常有利于存储的QoS。目前数据磁盘(磁盘方案)支持这个特性。

主存储的运行时行为

当创建虚拟机的时候,root卷也会自动的创建。在VM被销毁的时候root卷也会被删除。数据卷可以被创建并动态的挂载到VMs上。VMs销毁时并不会删除数据卷。

管理员可以监控主存储设备的容量,并在需要时添加其他的主存储。请参阅高级安装指导。

管理员通过CloudStack创建存储池来给系统添加主存储。每个存储池对应一个群集或者区域。

对于数据磁盘,当一个用户执行一个磁盘方案来创建数据磁盘的时候,初始化信息就被写到了CloudStack的数据库中。根据第一次给VM附加数据磁盘的请求,CloudStack决定这个卷的位置和空间占用在哪个存储(预分配存储和存储系统(比如SAN)中的任意一种,这取决于CloudStack使用的哪种主存储)。

Hypervisor支持的主存储

下面的表格展示了不同hypervisors所支持的存储类型。

存储媒介 \ hypervisor    VMware vSphere    Citrix XenServer    KVM                         Hyper-V

磁盘、模板和快照的格式   VMDK              VHD                 QCOW2                       VHD(不支持VHD快照)

支持iSCSI                VMFS              集群化的LVM         是,通过Shared Mountpoint   不支持

支持FC                   VMFS              是,通过已有的SR    是,通过Shared Mountpoint   不支持

支持NFS                  支持              支持                支持                        不支持

支持本地存储             支持              支持                支持                        支持

存储超配                 NFS和iSCSI        NFS                 NFS                         不支持

SMB/CIFS                 不支持            不支持              不支持                      支持

XenServer通过在iSCSI和FC卷上使用集群化的LVM系统来存储VM镜像,并且不支持存储超配。不过,如果存储本身支持自动精简配置,CloudStack仍然支持在这种自动精简配置的存储卷上使用存储超配。

KVM支持 “Shared Mountpoint”存储。Shared Mountpoint是群集中每个服务器本地文件系统中的一个路径。群集中所有主机上的这个路径必须一致,比如/mnt/primary1。并且假设Shared Mountpoint是一个集群文件系统,如OCFS2。在这种情况下,CloudStack不会把它当做NFS存储去尝试挂载或卸载。CloudStack需要管理员保证存储是可用的。

在NFS存储中,由CloudStack负责超额配置。这种情况下,由全局配置参数storage.overprovisioning.factor控制超配的程度,这与hypervisor类型无关。

在vSphere, XenServer和KVM中,本地存储是一个可选项。当选择了使用本地存储,所有主机上会自动创建本地存储资源池。要让System Virtual Machines (如the Virtual Router)使用本地存储,请设置全局配置中的system.vm.use.local.storage为true.

CloudStack支持在一个群集内有多个主存储池。比如,有2个NFS服务器提供主存储。或原来有1个iSCSI LUN后来又添加了第二个iSCSI LUN。

存储标签

存储是可以被”标签”的。标签是与主存储、磁盘方案或服务方案关联的字符串属性。标签允许管理员给存储添加额外的信息,比如”SSD”或者”慢速”。CloudStack不负责解释标签,它只会匹配服务和磁盘方案上的标签。CloudStack要求在主存储上分配root或数据磁盘之前,服务和磁盘方案上的标签都已存在对应的存储标签。服务和磁盘方案的标签被用于识别方案对存储的要求。比如,高端服务方案可能需要它的root磁盘卷是”快速的”。

标签,分配,跨集群或机架的卷复制之间的关系是很复杂的。简单的环境就是在一个机架内所有集群的主存储使用相同的标签。即使用这些标签表示不同设备,展现出来的标签组仍可以是一样的。

主存储的维护模式

主存储可以被设置成维护模式。这很有用,例如,替换存储设备中坏的RAM。存储设备进入维护模式后,首先会阻止在其上预配新的来宾虚机,然后停止所有使用其上数据卷的来宾虚机。当所有来宾虚机被停止的时候,这个存储设备就进入维护模式了并且可以关机。当存储设备再次上线的时候,你可以对这个设备取消维护模式。CloudStack将使其返回在线状态,并且试着启动所有曾在这个设备进入维护模式前运行的来宾虚机。

辅助存储

本章节讲述的是关于CloudStack的辅助存储概念和技术细节。更多关于如何通过CloudStack UI安装和配置辅助存储的信息,请参阅高级安装向导。

“关于辅助存储”

使用磁盘卷

卷为来宾虚机提供存储。卷可以作为root分区或附加数据磁盘。CloudStack支持为来宾虚机添加卷。

不同的hypervisor创建的磁盘卷有所不同。当磁盘卷被附加到一种hypervisor的虚拟机(如:xenserver),就不能再被附加到其他类型的hypervisor,如:vmware、kvm的虚拟机中。因为它们所用的磁盘分区模式不同。

CloudStack定义一个卷作为来宾虚机的一个有效的存储单元。卷可能是root磁盘或者数据磁盘。root磁盘包含文件系统的 “/” 并且通常是启动设备。数据磁盘提供额外的存储,比如:”/opt”或者”D:”。每个来宾VM都有一个root磁盘,VMs可能还有数据磁盘。终端用户可以给来宾VMs挂载多个数据磁盘。用户通过管理员创建的磁盘方案来选择数据磁盘。用户同样可以在卷上创建模板;这是标准私有模板的创建流程。针对不同的hypervisor卷也不同:一个hypervisor类型上的卷不能用于其它的hypervisor类型上的来宾虚机。

注解

CloudStack支持给XenServer 6.0和以上版本的VM最多附加13个数据磁盘。其它hypervisor类型上的VMs,最多附加6个数据磁盘。

创建新卷

你可以在符合你存储能力的情况下随时向来宾虚拟机添加多个数据卷。CloudStack的管理员和普通用户都可以向虚拟机实例中添加卷。当你创建了一个新卷,它以一个实体的形式存在于CloudStack中,但是在你将其附加到实例中之前它并不会被分配实际的物理空间。这个优化允许CloudStack把卷安置在最接近来宾虚机的位置,并在第一次附加至虚机的时候才真正使用它。

使用本地存储作为数据卷

您可以将数据盘创建在本地存储上(XenServer、KVM和VMware支持)。数据盘会存放在和所挂载的虚机相同的主机上。这些本地数据盘可以象其它类型的数据盘一样挂载到虚机、卸载、再挂载和删除。

在不需要持久化数据卷和HA的情况下,本地存储是个理想的选择。其优点包括降低磁盘I/O延迟、使用廉价的本地磁盘来降低费用等。

为了能使用本地磁盘,区域中必须启用该功能。

您可以为本地存储创建一个数据盘方案。当创建新虚机时,用户就能够选择该磁盘方案使数据盘存放到本地存储上。

你不能将使用了本地存储作为磁盘的虚机迁移到别的主机,也不能迁移磁盘本身到别的主机。若要将主机置于维护模式,您必须先将该主机上所有拥有本地数据卷的虚机关机。

创建新卷
  1. 使用用户或管理员登录到CloudStack用户界面。

  2. 在左侧导航栏点击存储。

  3. 在选择视图中选择卷。

  4. 点击添加卷来创建一个新卷,填写以下信息后点击确定。

    • 名字。给卷取个唯一的名字以便于你以后找到它。

    • 可用的资源域。你想让这个存储在哪个资源域可用?这个位置应该接近要使用这个卷的VM。如果你只想在单个资源域内使用这个存储,就选择那个资源域;如果此存储要在多个资源域内共享,就选择所有资源域。

    • 磁盘方案。选择存储特性。

    新建的存储会在卷列表中显示为“已分配”状态。卷数据已经存储到CloudStack了,但是该卷还不能被使用。

  5. 通过附加卷来开始使用这个卷。

上传一个已存在的卷给虚拟机

已存在的数据现在可以被虚拟机存取。这个被称为上传一个卷到VM。例如,这对于从本地数据系统上传数据并将数据附加到VM是非常有用的。Root管理员、域管理员和终端用户都可以给VMs上传已存在的卷。

使用HTTP上传。上传的卷被存储在区域中的辅助存储中。

如果预配置的卷已经达到了上限的话,那么你就不能上传卷了。默认的限制在全局配置参数max.account.volumes中设置,但是管理员同样可以为每个用户域设置不同于全局默认的上限值。请参阅设置使用限制。

要上传一个卷:

  1. (可选项)为将要上传的磁盘镜像文件创建一个MD5哈希(校验值)。在上传数据磁盘之后,CloudStack将使用这个校验值来检查这个磁盘文件在上传过程中没有出错。

  2. 用管理员或用户账号登录CloudStack UI

  3. 在左侧导航栏点击存储。

  4. 点击上传卷。

  5. 填写以下内容:

    • 名称和描述。你想要的任何名称和一个简洁的描述,这些都会显示在UI中。

    • 可用的区域:选择你想存储卷的区域。运行在该区域中的主机上的VMs都可以附加这个卷。

    • 格式。在下面所指出的卷的磁盘镜像格式中选择一种。

      Hypervisor    磁盘镜像格式

      XenServer     VHD
      VMware        OVA
      KVM           QCOW2
    • URL。CloudStack用来访问你的磁盘的安全HTTP或HTTPS URL。URL对应的文件种类必须符合在格式中选择的。例如,格式为VHD,则URL必须像下面的:

      http://yourFileServerIP/userdata/myDataDisk.vhd

    • MD5校验。(可选项)使用在步骤1中创建的哈希。

  6. 等到卷的上传显示完成。点击实例-卷,找到你在步骤5中指定的名称,然后确保状态是已上传。
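
步骤1中提到的MD5校验值可以用Python标准库hashlib这样计算(示意脚本,仅作说明):

```python
import hashlib

# 为将要上传的磁盘镜像文件计算MD5校验值的示意:
# CloudStack用该校验值验证文件在上传过程中没有出错
def md5sum(path, chunk_size=1 << 20):
    """分块读取文件并返回其MD5十六进制摘要,避免一次性读入大镜像。"""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk_size), b""):
            digest.update(block)
    return digest.hexdigest()
```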

附加一个卷

你可以通过附加一个卷来为虚拟机提供额外的磁盘存储。当你第一次创建新卷,或者将已存在的卷移动到另一台虚拟机,或者从另一个存储池迁移过来一个卷的时候,你都可以附加卷。

  1. 使用用户或管理员登录到CloudStack用户界面。

  2. 在左侧导航栏点击存储。

  3. 在选择视图中选择卷。

  4. 在卷列表中点击卷的名称,然后点击附加磁盘按钮。

  5. 在弹出的实例界面,选择你打算附加卷的那台虚拟机。你只能看到允许你附加卷的实例;比如,普通用户只能看到他自己创建的实例,而管理员将会有更多的选择。

  6. 当卷被附加之后,你通过点击实例看到实例名和该实例所附加的卷。

卸载和移动卷

注解

这个过程不同于从一个存储池移动卷到其他的池。这些内容在 `“VM存储迁移” <#vm-storage-migration>`_中有描述。

卷可以从来宾虚机上卸载再附加到其他来宾虚机上。CloudStack管理员和用户都能从VMs上卸载卷再给其他VMs附加上。

如果两个VMs存在于不同的群集中,并且卷很大,那么卷移动至新的VM上可能要耗费比较长的时间。

  1. 使用用户或管理员登录到CloudStack用户界面。

  2. 在左侧的导航栏,点击存储,在选择视图中选择卷。或者,如果你知道卷要附加给哪个VM的话,你可以点击实例,再点击VM名称,然后点击查看卷。

  3. 点击你想卸载的卷的名字,然后点击卸载磁盘按钮。

  4. 要移动卷至其他VM,按照`“附加卷” <#attaching-a-volume>`_中的步骤。

VM存储迁移

支持XenServer、KVM和VMware。

注解

这个过程不同于从一个虚拟机移动磁盘卷到另外的虚拟机。这些内容在 `“卸载和移动卷” <#detaching-and-moving-volumes>`_ 中有描述。

你可以从同一区域中的一个存储池迁移虚机的root磁盘卷或任何其他的数据磁盘卷到其他的池

你可以使用存储迁移功能完成一些常用的管理目标。如将它们从有问题的存储池中迁移出去以平衡存储池的负载和增加虚拟机的可靠性。

在XenServer和VMware上,由于CloudStack支持XenMotion和vMotion,VM存储的在线迁移是可用的。在线存储迁移允许不在共享存储上的VMs从一台主机迁移到另一台主机。它提供了让VM的磁盘与VM本身一起在线迁移的选项。它使在XenServer资源池之间/VMware群集之间迁移VM、迁移使用本地存储运行的VM,甚至在存储库之间迁移VM的磁盘成为可能,而且迁移的同时VM仍在运行。

注解

由于VMware中的限制,仅当源和目标存储池都能被源主机访问时才允许VM存储的在线迁移;也就是说,当需要在线迁移操作时,源主机是运行VM的主机。

将数据卷迁移到新的存储池

当你想迁移磁盘的时候可能有两种情况:

  • 将磁盘移动到新的存储,但是还将其附加在原来正在运行的VM上。

  • 从当前VM上卸载磁盘,然后将其移动至新的存储,再将其附加至新的VM。

为正在运行的VM迁移存储

(支持XenServer和VMware)

  1. 使用用户或管理员登录到CloudStack用户界面。

  2. 在左侧的导航栏,点击实例,再点击VM名,接着点击查看卷。

  3. 点击你想迁移的卷。

  4. 从VM卸载磁盘。请参阅 “卸载和移动卷” 但是跳过最后的”重新附加”步骤。你会在迁移过后在新的存储上做这一步。

  5. 点击迁移卷按钮,然后从下拉列表里面选择目标位置。

  6. 这期间,卷的状态会变成正在迁移,然后又变回已就绪。

迁移存储和附加到不同的VM
  1. 使用用户或管理员登录到CloudStack用户界面。

  2. 从VM卸载磁盘。请参阅 “卸载和移动卷” 但是跳过最后的”重新附加”步骤。你会在迁移过后在新的存储上做这一步。

  3. 点击迁移卷按钮,然后从下拉列表里面选择目标位置。

  4. 观察卷的状态会变成正在迁移,然后又变回已就绪。你可以点击左侧导航条中的存储找到卷。在选择查看的下拉列表中,确保卷显示在窗口的顶部。

  5. 在新的存储服务器中给运行在同一群集中的任何想要的VM附加卷。请参阅 “附加卷”

迁移VM的Root卷到新的存储池

(XenServer、VMware)你可以在不停止VM的情况下,使用在线迁移将VM的root磁盘从一个存储池移动到另外一个。

(KVM)迁移root磁盘卷的时候,VM必须先关机,此时用户不能访问VM。迁移完成之后,VM就能重启了。

  1. 使用用户或管理员登录到CloudStack用户界面。

  2. 在左侧的导航栏里,点击实例,然后点击VM名。

  3. (仅限于KVM)停止VM。

  4. 点击迁移按钮,然后从下拉列表中选择目标位置。

    注解

    如果VM的存储与VM必须一起被迁移,这点会在主机列表中标注。CloudStack会为你自动的进行存储迁移。

  5. 观察卷的状态会变成迁移中,然后变回运行中(或者停止,在KVM中)。这过程会持续一段时间。

  6. (仅限于KVM)重启VM。

重新规划卷

CloudStack提供了调整数据盘大小的功能;CloudStack借助磁盘方案控制卷大小。这使得CloudStack管理员可以灵活地选择他们想要给终端用户多少可用空间。使用相同存储标签的磁盘方案之间可以调整卷的大小。比如,如果你只想提供10、50和100GB的方案,调整大小时允许的值就不会超出这些。也就是说,如果你定义了10GB、50GB和100GB的磁盘方案,用户可以从10GB升级到50GB,或者从50GB升级到100GB。如果你创建了自定义大小的磁盘方案,那么你可以把卷的大小调整为更大的值。

另外,使用 resizeVolume API,数据卷可以从一个静态磁盘方案移动到指定大小的自定义磁盘方案。此功能允对特定容量或磁盘方案进行收费,同时可以灵活地更改磁盘大小。

KVM, XenServer和VMware主机支持这个功能。但是VMware主机不支持卷的收缩。

在你试图重新规划卷大小之前,请考虑以下几点:

  • 与卷关联的VMs是停止状态。

  • 与卷关联的数据磁盘已经移除了。

  • 当卷缩小的时候,上面的磁盘会被截断,这么做的话可能会丢失数据。因此,在缩小数据磁盘之前,重新规划任何分区或文件系统以便数据迁移出这个磁盘。

要重新规划卷容量:

  1. 使用用户或管理员登录到CloudStack用户界面。

  2. 在左侧导航栏点击存储。

  3. 在选择视图中选择卷。

  4. 在卷列表中选择卷名称,然后点击调整卷大小按钮。

  5. 在弹出的调整卷大小窗口中,为存储选择想要的方案。


    1. 如果你选择自定义磁盘,请指定一个自定义大小。

    2. 点击是否确实要缩小卷大小来确认你要缩小的容量。

      此参数避免了不小心的失误造成数据的丢失。你必须知道你在做什么。

  6. 点击确定。

在VM重启时重设VM的root盘

你可以指定在某个VM每次重启时丢弃它的root磁盘并创建一个新的root磁盘。对于每次启动都需要全新环境的安全场景,以及不应保留状态的桌面来说,这非常有用。VM的IP地址在这个操作期间不会改变。

要启用在VM重启时重置root磁盘:

当创建一个新的服务方案时,设置isVolatile这个参数为True。从这个服务方案创建的VMs一旦重启,它们的磁盘就会重置。请参阅 “创建新的计算方案”

卷的删除和回收

删除卷不会删除曾经对卷做的快照

当一个VM被销毁时,附加到该VM的数据磁盘卷不会被删除。

使用回收程序后,卷就永久的被销毁了。全局配置变量expunge.delay和expunge.interval决定了何时物理删除卷。

  • expunge.delay:决定在卷被销毁之前卷存在多长时间,以秒计算。

  • expunge.interval:决定回收检查运行频率。

管理员可以根据站点数据保留策略来调整这些值。
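
expunge.delay与expunge.interval的配合可以用下面的简化模型示意(估算最坏情况下卷被物理删除的时间,并非CloudStack的调度实现):

```python
# expunge.delay 与 expunge.interval 如何共同决定卷被物理删除的最晚时间(简化模型)
def latest_expunge_time(destroy_t, expunge_delay, expunge_interval):
    """卷在 destroy_t + expunge_delay 秒后才有资格被回收;
    回收检查每 expunge_interval 秒运行一次,
    最坏情况下在有资格之后还要再等一个完整的检查周期。"""
    eligible = destroy_t + expunge_delay
    return eligible + expunge_interval
```

例如,销毁于t=0的卷,在delay和interval都是86400秒(1天)时,最晚约2天后被物理删除。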

使用卷快照

(支持以下hypervisors:XenServer、VMware vSphere和KVM)

CloudStack支持磁盘卷的快照。快照是虚拟机磁盘在某一时间点的抓取。内存和CPU状态不会被抓取。如果你使用Oracle VM hypervisor,那么你不能做快照,因为OVM不支持。

卷,包括root和数据磁盘(使用Oracle VM hypervisor除外,因为OVM不支持快照)都可以做快照。管理员可以限制每个用户的快照数量。用户可以通过快照创建新的卷,用来恢复特定的文件,还可以通过快照创建模板来启动恢复的磁盘。

用户可以手动创建快照,也可以设置自动的循环快照策略。用户还可以从快照创建新的磁盘卷,并像其他磁盘卷一样附加到虚机上。root和数据磁盘都支持快照。但是,CloudStack目前不支持通过从快照恢复的root盘启动VM。从快照恢复的root盘会被认为是数据盘;恢复的磁盘可以附加到VM上以访问上面的数据。

完整快照会从主存储拷贝到辅助存储,并会一直保存在那里,直到被删除或被新的快照覆盖。

如何给卷做快照
  1. 使用用户或者管理员登录CloudStack。

  2. 在左侧导航栏点击存储。

  3. 在选择视图,确认选择的是卷。

  4. 点击你要做快照的卷的名称。

  5. 点击快照按钮。 Snapshot Button.

创建和保留自动快照

(支持以下hypervisors:XenServer、VMware vSphere和KVM)

用户可以设置循环快照策略来自动的为磁盘定期地创建多个快照。快照可以按小时,天,周或者月为周期。每个磁盘卷都可以设置快照策略。比如,用户可以设置每天的02:30做快照。

依靠每个快照计划,用户还可以指定计划快照的保留数量。超出保留期限的老快照会被自动的删除。用户定义的限制必须等于或小于CloudStack管理员设置的全局限制。请参阅 “全局配置的限制”.。限制只能应用给作为自动循环快照策略的一部分的快照。额外的手动快照能被创建和保留。
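
上述保留策略的清理逻辑可以示意如下(简化模型,并非CloudStack源码):

```python
# 循环快照策略按保留数量清理旧快照的示意(简化;非CloudStack源码)
def prune_snapshots(snapshots, keep):
    """snapshots 按创建时间升序排列;返回 (保留列表, 被删除列表)。
    超出保留数量时,最老的快照被自动删除。"""
    if keep <= 0:
        return [], list(snapshots)
    return list(snapshots[-keep:]), list(snapshots[:-keep])
```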

增量快照和备份

创建的快照保存在磁盘所在的主存储。在创建快照之后,它会立即被备份到辅助存储并在主存储上删除以节省主存储的空间。

CloudStack为一些hypervisors做增量备份。当使用增量备份时,每N个备份中会有一个是完全备份。

                   VMware vSphere    Citrix XenServer    KVM

支持增量备份       不支持            支持                不支持
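
“每N个备份中有一个完全备份”的序列可以示意如下(简化模型,N的取值由具体hypervisor的实现决定):

```python
# "每N个备份做一次完全备份"的增量备份序列示意(简化模型)
def backup_kind(index, n):
    """第 index 个备份(从0计数):每 n 个为一个完全备份,其余为增量备份。"""
    return "full" if index % n == 0 else "incremental"
```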

卷状态

当快照操作是由一个循环快照策略所引发的时候,如果从其上次创建快照后,卷一直处于非活跃状态,快照被跳过。如果卷被分离或附加的虚拟机没有运行,那么它就被认为是非活跃的。CloudStack会确保从卷上一次变得不活跃后,至少创建了一个快照。

当手动创建快照时,不管该卷是不是活跃的,快照都会被创建。

快照恢复

有两种方式恢复快照。用户能够从快照中创建一个卷。卷可以随后被挂载到虚拟机上并且文件根据需要被复原。另一种方式是,模板可以从一个root 盘的快照创建。用户能够从这个模板启动虚拟机从而实际上复原root盘。

快照工作调节

当虚拟机需要快照时,VM所在的主机上就会运行快照工作,或者在VM最后运行的主机上。如果在一台主机上的VMs需要很多快照,那么这会导致太多的快照工作进而占用过多的主机资源。

针对这种情况,云端的root管理员可以利用全局配置设置中的concurrent.snapshots.threshold.perhost调节有多少快照工作同时在主机上运行。借助这个设置,当太多快照请求发生时,管理员更好的确认快照工作不会超时并且hypervisor主机不会有性能问题。

结合目前主机资源和在主机上运行的VMs数量,给concurrent.snapshots.threshold.perhost设置一个最佳值,这个值代表了在同一时刻允许有多少快照工作在hypervisor主机上执行。如果一个主机有比较多的快照请求,额外的请求就会被放在等待队列里。在当前执行的快照工作数量下降至限制值之内后,新的快照工作才会开始。

管理员也可以通过job.expire.minutes给快照请求等待队列的长度设置一个最大值。如果达到了这个限制,那么快照请求会失败并且返回一个错误消息。
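
上述限流行为可以用一个简化模型示意(阈值对应全局参数concurrent.snapshots.threshold.perhost;仅为说明,并非CloudStack源码):

```python
from collections import deque

# 每主机并发快照作业限流的示意(简化模型)
class SnapshotScheduler:
    def __init__(self, threshold):
        self.threshold = threshold   # 同一时刻允许执行的快照作业数上限
        self.running = set()
        self.waiting = deque()

    def submit(self, job):
        """未达阈值则立即执行,否则放入等待队列。"""
        if len(self.running) < self.threshold:
            self.running.add(job)
        else:
            self.waiting.append(job)

    def finish(self, job):
        """作业完成后,从等待队列补位开始新的快照作业。"""
        self.running.discard(job)
        if self.waiting:
            self.running.add(self.waiting.popleft())
```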

VMware卷快照性能

当你为VMware中的数据卷或root卷做快照时,CloudStack使用一种高效率的存储技术来提高性能。

快照不会立即从vCenter导出OVA格式文件到挂载的NFS共享中。这个操作会消耗时间和资源。相反的,由vCenter提供的原始文件格式(比如VMDK)被保留。OVA文件只有在需要的时候被创建。CloudStack使用与原始快照数据存储在一起的属性文件(*.ova.meta)中的信息来生成OVA。

注解

对于旧版本升级的客户:这个过程只适用于在升级到CloudStack 4.2之后新创建的快照。已经做过快照并且使用OVA格式存储的将继续使用已有的格式,并且也能正常工作。

使用系统虚拟机

使用系统虚拟机

CloudStack使用几类系统虚拟机来完成云中的任务。总的来说,CloudStack管理这些系统虚拟机,并根据规模和即时需要创建、启动和停止它们。然而,管理员需要意识到它们在调试中的作用。

系统VM模板

系统VM来自于一个单独的模板,系统VM具有以下特性:

  • Debian 6.0(“Squeeze”),2.6.32内核具有最新的来自Debian安全APT存储库的安全补丁

  • 具有一系列最小化安装的包,可以降低安全攻击风险。

  • 基于 Xen/VMWare 的32位增强性能

  • 包含Xen PV 驱动,KVM virtio 驱动和VMware tools的pvops 内核可以使所有hypervisor得到最佳性能。

  • Xen tools 包含性能监控

  • 来自debian库的最新版本的HAProxy,iptables,IPsec和Apache,保证了安全性和速度的提高。

  • 从 Sun/Oracle 安装最新版本的JRE可以保证安全性与速度的提高

改变默认系统VM模板

CloudStack允许你将默认的32位系统模板变为64位,使用64位模板,可以升级虚拟路由器,使得网络支撑更大的连接数。

  1. 基于你所使用的hypervisor,从以下地址下载64位模板:

    Hypervisor

    下载地址

    XenServer http://download.cloud.com/templates/4.2/64bit/systemvmtemplate64-2013-07-15-master-xen.vhd.bz2
    KVM http://download.cloud.com/templates/4.2/64bit/systemvmtemplate64-2013-07-15-master-kvm.qcow2.bz2
  2. 使用管理员登录到CloudStack管理界面。

  3. 注册64位的模板。

    例如:KVM64bit 模板

  4. 当注册模板时,选择路由(routing)。

  5. 导航至 基础结构 > 地域 > 设置

  6. 在全局参数``router.template.kvm``中设置64位模板的名称,即KVM64bitTemplate。

    如果你使用的是XenServer 64位模板,则将名字设置在全局参数``router.template.xen``中。

    任何在此地域中创建的新虚拟路由器均使用这个模板。

  7. 重启管理服务器。

支持VMware的多种系统虚拟机

每个CloudStack地域都有一个单独的系统VM用于模板处理任务,如下载模板、上传模板、上传ISO。在使用VMware的地域中,有另外的系统VM用来处理VMware专有的任务,如制作快照、创建私有模板。当VMware专有任务的负载增加时,CloudStack管理服务器会启动额外的系统VM。管理服务器监控并平衡发送到这些系统VM的命令,实行动态负载均衡,并按需增加更多的系统VM。

控制台代理

控制台代理是一种系统VM,可以通过网页用户界面为用户呈现一个控制台视图。它将用户的浏览器与hypervisor为来宾提供控制台的vnc端口相连。管理员和终端用户都能通过网页用户界面获得一个控制台连接。

点击控制台图标会弹出一个新窗口。根据控制台代理的公共IP ,AJAX代码会下载到这个新窗口。每个控制台代理都会分配一个公共IP。AJAX程序会连接到这个IP。控制台代理会将连接代理到正在运行所请求虚拟机的宿主机的vnc端口。

注解

hypervisors可能会分配很多端口到VNC上,因此可能同时并发多个VNC会话。

没有任何流量会直接到达来宾虚机的虚拟IP,因此不需要在来宾虚机上开放vnc端口。

控制台虚拟机会定时的向管理服务器汇报当前活动的会话数。默认报告间隔是五秒钟。可以通过管理服务器的配置参数 consoleproxy.loadscan.interval.更改。

如果来宾虚拟机之前已有分配的控制台代理会话,控制台代理的分配会由第一次分配的控制台代理决定:不论该控制台代理目前负载如何,管理服务器都会将该来宾虚拟机分配到同一个控制台代理虚拟机。如果失败,则会将来宾虚拟机分配到第一个拥有足够资源处理新会话的控制台代理上。

管理员能重启控制台代理,但此操作会中断用户与控制台会话。

对控制台代理使用SSL证书。

注解

过去CloudStack使用realhostip.com动态DNS解析服务。由于该服务于2014年6月30日关闭,CloudStack自4.3版本起不再使用该服务。

默认情况下,代理视图功能使用HTTP协议,在任何生产环境下,代理服务连接至少要通过SSL进行加密。

CloudStack管理员有2种方式来保证SSL加密控制代理连接的安全:

  • 建立一个SSL通配证书以及域名解析。

  • 为指定的FQDN建立一个SSL证书并配置负载均衡

更改控制代理SSL证书及域

管理员可以通过选择一个域并上传一个新的SSL证书和密钥来配置SSL加密。这个域必须运行一个DNS服务器,该服务器能将格式为aaa-bbb-ccc-ddd.your.domain的名称解析为格式为aaa.bbb.ccc.ddd的IPv4地址,例如202.8.44.1。要更改控制台代理的域、SSL证书和私钥:

  1. 在你已有的DNS服务器上,为你的公共IP范围内所有可能的DNS名称建立动态解析,或者逐条填充记录,格式为aaa-bbb-ccc-ddd.consoleproxy.company.com -> aaa.bbb.ccc.ddd。

    注解

    在上述步骤中你会注意到使用了 *consoleproxy.company.com*。出于安全最佳实践考虑,我们推荐为独立的子域创建新的通配符SSL证书,这样即使证书泄露,恶意用户也无法冒充company.com域。

  2. 生成私钥和证书签名请求(CSR)。当你使用openssl产生私钥/公钥对和CSR时,要复制到CloudStack中的私钥必须转换为PKCS#8格式。

    1. 产生一个新的2048位的私钥

      openssl genrsa -des3 -out yourprivate.key 2048
      
    2. 生成新的证书CSR。确保创建的是通配符证书,例如 *.consoleproxy.company.com

      openssl req -new -key yourprivate.key -out yourcertificate.csr
      
    3. 向你信任的证书颁发机构购买一个SSL证书,并用CSR完成签发流程。你将收到一个有效的证书。

    4. 转化你的私钥格式成PKCS#8加密格式。

      openssl pkcs8 -topk8 -in yourprivate.key -out yourprivate.pkcs8.encrypted.key
      
    5. 将PKCS#8加密格式的私钥转换为CloudStack可以使用的无密码PKCS#8格式。

      openssl pkcs8 -in yourprivate.pkcs8.encrypted.key -out yourprivate.pkcs8.key
      
  3. 在CloudStack用户界面的修改SSL证书,复制以下内容:

    • 刚刚生成的证书。

    • 刚刚生成的私钥。

    • 所需的域名,以 *. 为前缀;例如 *.consoleproxy.company.com


  4. 此操作会停止所有正在运行的控制台代理VM,然后以新的证书和密钥重启它们。用户可能会注意到控制台访问有短暂的中断。

在这个改变之后,管理服务器会生成格式如 “aaa-bbb-ccc-ddd.consoleproxy.company.com” 的URLs。新的控制台请求将使用新的DNS域名、证书和密钥来提供服务。
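
上文所述的DNS名称与IPv4地址之间的映射规则可以示意如下(域名consoleproxy.company.com仅为示例):

```python
# 控制台代理DNS名称与IPv4地址映射规则的示意
def console_proxy_name(ip, domain="consoleproxy.company.com"):
    """202.8.44.1 -> 202-8-44-1.consoleproxy.company.com"""
    return ip.replace(".", "-") + "." + domain

def name_to_ip(name):
    """逆向映射:取主机名部分并把 '-' 还原为 '.'。"""
    return name.split(".", 1)[0].replace("-", ".")
```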

负载均衡终端代理

作为前一节中所述的使用动态DNS或创建一段DNS记录的替代方案,你可以为一个特定的域名创建SSL证书,配置CloudStack使用这个特定的FQDN,然后配置一个负载均衡器,把该FQDN指向控制台代理的IP地址。关于此功能的更多详情,参见 http://cwiki.apache.org/confluence/display/CLOUDSTACK/Realhost+IP+changes

虚拟路由

虚拟路由器是一种系统虚拟机,在CloudStack的网络服务方案中被频繁使用;终端用户不能直接访问虚拟路由器。用户可以ping它并对其施加影响(比如设置端口转发),但是不能通过ssh访问它。

这里没有一种机制使得管理员可以登录虚拟路由器。管理员可以重启虚拟路由器,但是会中断终端用户网络访问和服务。在一个基本的网络故障排错中,尝试在一个虚拟机上ping虚拟路由器。虚拟路由器的一些功能特性是通过系统服务方案配置的。

配置虚拟路由

你可以设置以下内容:

  • IP地址范围

  • 支持的网络服务

  • 由虚拟路由网络服务提供的默认域名

  • 网关IP地址

  • CloudStack多久从虚拟路由器获取一次网络使用数据。如果你想收集虚拟路由器的流量计量数据,设置全局变量router.stats.interval。如果你不使用虚拟路由器收集网络使用数据,设置该值为0。

使用系统计算服务升级虚拟路由器

当CloudStack创建一个虚拟路由器时,它使用默认的系统计算服务方案进行设置。参见“系统服务方案”。所有在单独客户网络中的虚拟路由器都使用相同的系统计算服务方案。可以通过新建和使用自定义的系统计算服务方案来提高虚拟路由器的性能。

  1. 定义自定义的系统计算服务方案,参见“创建系统服务方案”。在系统虚拟机类型中,选择域路由器。

  2. 使用网络服务方案配合系统计算服务方案,参见 “创建一个新网络方案”.

  3. 将网络服务方案应用到使用新系统计算服务方案的虚拟路由器的网络上。如果这是一个新的网络,请根据“添加额外的来宾网络”中的步骤操作。想要改变已生成的虚拟路由器的计算服务方案,请参考 “在客户端网络改变网络方案”.

虚拟路由器的最佳实践
  • 警告:从hypervisor控制台重新启动一台虚拟路由器,将删除所有iptables规则。要解决这个问题,请从CloudStack用户界面停止并重新启动虚拟路由器。

  • 警告

    在网络中只有一个路由器可用时,不要使用destroyRouter API,因为restartNetwork API 带cleanup=false参数不能随后重新创建它。如果你想销毁并重新创建网络中的单一路由器,使用restartNetwork API 带cleanup=true参数。

虚拟路由的服务监视工具

运行在CloudStack虚拟路由器上的各种服务都可以使用服务监视工具进行监视。除非服务是通过CloudStack被有意停止的,否则该工具会确保服务处于运行状态。如果一个服务停止了,工具会自动重启它;如果重启仍不能恢复该服务,则会产生一个警报事件来通知此故障。新增的全局参数network.router.enableservicemonitoring控制这个特性,默认值是false,也就是说监控默认是关闭的。当你启用它之后,请确保重启管理服务器和虚拟路由器。

监视工具可以重启因意外原因而崩溃的VR服务。例如:

  • 由代码缺陷引起的服务崩溃。

  • 当服务的内存或CPU不足时,被OS终止的服务。

注解

只有这些服务的守护进程会被监视。如果服务是因为其配置文件中的错误而失败,监视工具无法将其重启。VPC网络不被支持。

在VR中监视下列服务:

  • DNS
  • HA代理

  • SSH
  • Apache网络服务器

支持以下网络:

  • 独立的网络

  • 在高级和基础域中分享网络

    注解

    VPC网络不被支持

在下列hypervisor上支持此特性:XenServer,VMware和KVM

增强的网络路由升级

VR的升级是很灵活的:CloudStack管理员能够控制VR的升级顺序。升级顺序可以基于基础架构层级,例如Cluster、Pod或Zone;也可以基于管理层级,例如租户(Tenant)或域(Domain)。作为管理员,你还可以指定某个客户服务(例如VR)在升级期间允许中断的时长上限。升级操作允许多个升级并行执行,以加快整体升级速度。

在一个完整的持续的升级过程中,用户不能启动新服务或者改变已经存在的服务。

另外,在同一环境中同时运行多个版本的VR也是支持的。对于具体的VR,你可以查看其版本以及是否需要升级。在管理服务器升级后,CloudStack会在对VR执行操作之前检查VR是否为最新版本。为支持此特性,新增了一个全局参数``router.version.check``,默认设置为true,表示在执行操作前进行最低版本检查;如果VR不满足所需的版本,就不执行任何操作。旧版本的VR仍然是有效的,但必须升级后才能在其上进行更多操作。在升级之前,它将处于待升级状态。这保证了VR的服务和状态不受管理服务器升级的影响。

无论VR是否已升级,以下服务都将继续有效。但是,在VR升级之前,这些服务的任何变更都不会被下发到VR:

  • 安全组

  • 用户数据

  • DHCP
  • DNS
  • LB
  • 端口转发

  • VPN
  • 静态 NAT

  • Source NAT
  • 防火墙

  • 网关

  • 网络ACL

支持虚拟路由
  • VR
  • VPC VR
  • 冗余VR

升级中的虚拟路由
  1. 下载最新的系统VM模板。

  2. 下载最新的系统VM到所有主存储池。

  3. 升级管理服务器

  4. 从用户界面或者使用以下脚本升级CPVM和SSVM:

    # cloudstack-sysvmadm -d <IP address> -u cloud -p -s
    

    即使VR仍然是老版本,已有的服务会继续对VM有效。但除非VR升级,否则管理服务器不会在VR上执行任何操作。

  5. 选择性的升级VR:

    1. 用系统管理员登陆到CloudStack UI界面。

    2. 在左边的导航,选择基础架构。

    3. 在虚拟路由上,单击更多视图。

      所有的VR都在虚拟路由页中列出。

    4. 在选择视图的下拉列表中,选择所需的群组

      你可以设置以下内容:

      • 按域分组

      • 按提供点分组

      • 按群集分组

      • 按账户分组

    5. 单击要升级的VR组。

      例如,你可以按域分组,选择希望的域名。

    6. 单击升级按钮以升级组内所有的VR。

    7. 点击确定。

辅助存储VM

除了主机之外,CloudStack的辅助存储VM也会挂载辅助存储并向其写入内容。

提交到辅助存储的内容都通过辅助存储VM进行。辅助存储VM会使用多种协议,通过URL来获取模板和ISO镜像文件。

辅助存储VM提供后台任务来负责各种辅助存储的活动:将新模板下载到资源域中、在多个资源域之间复制模板,以及快照备份。

如果有需要,管理员可以登录到辅助存储VM上。

使用服务

使用服务

使用服务器(Usage Server)是CloudStack的一个可选的、单独安装的组件,它提供汇总的使用记录,你可以用它来创建计费集成。使用服务器从事件日志中提取数据并创建汇总的使用记录,你可以通过listUsageRecords API调用来访问这些记录。

使用记录显示来宾实例所消耗的资源数量,比如虚拟机运行时间或模板占用的存储空间。

使用服务器至少每天运行一次,也可以配置为每天运行多次。

配置使用服务器

配置使用服务器

  1. 确定使用服务器已经被安装。它要求安装额外的CloudStack软件步骤。参见高级安装手册中的使用服务器(可选)。

  2. 作为管理员登录到CloudStack用户界面。

  3. 单击全局设置

  4. 在搜索栏输入 usage。找到需要更改的配置参数。下表是这些参数的详细描述。

  5. 在操作栏点击编辑图标。

  6. 输入数值点击保存图标。

  7. 重新启动管理服务器(通常在改变了全局配置之后都要进行这步)并重启使用服务器。

    # service cloudstack-management restart
    # service cloudstack-usage restart
    

下表列出了全局配置中控制使用服务器的配置项。

参数名                  描述

enable.usage.server     是否开启使用服务器。

usage.aggregation.timezone

记录使用信息所用的时区。当汇总使用记录的时区与使用任务执行的时区不同时,设置此项。例如,进行下列设置后,使用任务将在PST 00:15运行,并生成覆盖GMT 00:00:00到23:59:59这24小时的使用记录:

usage.stats.job.exec.time = 00:15
usage.execution.timezone = PST
usage.aggregation.timezone = GMT

时区的有效值参见 Appendix A, *Time Zones*。

默认:GMT

usage.execution.timezone

usage.stats.job.exec.time所使用的时区。时区的有效值参见 `Appendix A, Time Zones <http://docs.cloudstack.apache.org/en/latest/dev.html?highlight=time%20zones#time-zones>`_

默认时区是管理服务器的时区。

usage.sanity.check.interval

完整性检查的时间间隔。设置此值后,系统会定期在生成用户账单之前检查出错误的数据。比如,它能检查虚拟机被销毁之后的使用记录,以及模板、卷等的类似记录,还会检查超过聚合范围的使用时间。如果发现了错误,就会发送ALERT_TYPE_USAGE_SANITY_RESULT = 21 警告。

usage.stats.job.aggregation.range

使用服务器执行任务时间间隔(分钟为单位)。比如,如果你将此值设为1440,使用服务器将每天执行一次。如果你将此值设为600,则会10小时执行一次。一般情况下使用服务器执行任务时会继续在上次的使用统计基础上处理所有事件。

当值为1440(一天一次)时有点特殊。在该情况下,使用服务器并不处理上次运行之后的所有事件。CloudStack假定您要每天处理一次前一天的记录,生成每日记录。例如,如果今天是10月7号,会假定您要处理6号的记录,从0点到24点。CloudStack假定的0点到24点采用的时区为usage.execution.timezone的值。

默认:1440

usage.stats.job.exec.time

使用服务器处理任务的启动时间。采用24小时格式 (HH:MM),时区为服务器的时区,应该为GMT。比如要在GMT时区10:30 启动用量任务,请输入“10:30”

如果同时设置了usage.stats.job.aggregation.range参数,并且该参数值不是1440,那么该值会被累加到usage.stats.job.exec.time上,以确定下一次运行使用任务的时间。如此重复,直到24小时过去;第二天再从usage.stats.job.exec.time重新开始。

默认:00:15。
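
usage.stats.job.exec.time与usage.stats.job.aggregation.range共同决定的一天内运行时刻,可以这样推算(简化示意,不含时区处理):

```python
from datetime import datetime, timedelta

# 由起始时刻和聚合范围(分钟)推算一天内使用任务全部启动时刻的示意
def run_times(exec_time, aggregation_range_minutes):
    """返回一天内使用任务的全部启动时刻(HH:MM 字符串列表)。"""
    t = datetime.strptime(exec_time, "%H:%M")
    end = t + timedelta(hours=24)
    times = []
    while t < end:
        times.append(t.strftime("%H:%M"))
        t += timedelta(minutes=aggregation_range_minutes)
    return times
```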

比如,假设你服务器时区是GMT,你的用户主要在美国东海岸,而你有打算在当地时间(EST)每天凌晨两点执行使用记录统计。选择这些选项:

  • enable.usage.server = true
  • usage.execution.timezone = America/New_York
  • usage.stats.job.exec.time = 07:00。这将在东部时间凌晨2:00执行使用任务。注意美国东海岸进入和退出夏令时会使这个时间发生偏移。

  • usage.stats.job.aggregation.range = 1440

在这种配置下,使用任务 会在东部时间每天2 AM执行,同时会如定义的一样以东部时间(美国纽约时间)统计前一天的“午夜到午夜使用记录。”

注解

由于在usage.stats.job.aggregation.range中使用特殊值1440,用量服务器将忽略午夜到凌晨2:00之间的数据。这些数据将会包含在第二天的统计中。

设置使用限制

CloudStack提供多个管理员控制点以便控制用户资源使用。其中一些限制是全局配置参数。另一些限制应用在ROOT域上,并且可以以账户为单位进行覆盖。

全局配置限制

在一个域中,客户虚拟网络默认使用24位的CIDR。它将客户网络中可运行实例的上限限制为254个。这个CIDR可以根据需求调节,但必须在域中创建实例之前进行。例如,10.1.1.0/22 将提供约1000个地址。
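
CIDR大小与可用地址数的关系可以用Python标准库ipaddress验证:

```python
import ipaddress

# 客户网络CIDR大小与可分配实例地址数关系的示意
def usable_hosts(cidr):
    """可分配的主机地址数(去掉网络地址和广播地址)。
    strict=False 允许像文中 10.1.1.0/22 这样主机位非零的写法。"""
    return ipaddress.ip_network(cidr, strict=False).num_addresses - 2
```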

下表列出了设置全局配置的限制:

参数名称

定义

max.account.public.ips

账户可拥有的公用IP地址个数

max.account.snapshots

一个账户中可以存在的快照数量

max.account.templates

一个账户中可以存在的模板数量

max.account.user.vms

一个账户中可以存在的虚拟机实例数量

max.account.volumes

一个账户中可以存在的磁盘卷数量

max.template.iso.size

可下载的模板或ISO的最大大小,以GB为单位

max.volume.size.gb

卷的最大大小,以GB为单位

network.throttling.rate

默认允许每个用户的数据传输速率,以Mb/秒为单位(支持XenServer)

snapshot.max.hourly

一个卷可以保留的循环小时快照的最大数量。达到此上限后,较早的快照会被删除,为新快照腾出空间。此限制不作用于手动快照。若设为0,则不能调度循环小时快照。

snapshot.max.daily

一个卷可以保留的循环每日快照的最大数量。达到此上限后,较早的快照会被删除,为新快照腾出空间。此限制不作用于手动快照。若设为0,则不能调度循环每日快照。

snapshot.max.weekly

一个卷可以保留的循环每周快照的最大数量。达到此上限后,较早的快照会被删除,为新快照腾出空间。此限制不作用于手动快照。若设为0,则不能调度循环每周快照。

snapshot.max.monthly

一个卷可以保留的循环每月快照的最大数量。达到此上限后,较早的快照会被删除,为新快照腾出空间。此限制不作用于手动快照。若设为0,则不能调度循环每月快照。

使用CloudStack中的用户界面中的全局配置界面可以修改全局配置参数。

限制资源使用

CloudStack允许根据资源类型控制资源使用,例如CPU、RAM、主存储和辅助存储。在已有资源类型的基础上,新增了一组资源类型,以支持新的按使用需求计量的模型,例如大型VM或小型VM。新的资源类型大体分为CPU、RAM、主存储和辅助存储几类。Root管理员能够对域、项目或账户设置下列资源的使用限制:

  • CPUs
  • Memory (RAM)
  • 主存储(卷)

  • 辅助存储(快照,模板,ISOs)

为了控制该特征的行为,以下配置参数已经添加:

参数名称

描述

max.account.cpus

一个账户可以使用的CPU核心的最大数量。默认是40。

max.account.ram (MB)

一个账户可以使用的RAM的最大数量。默认值是40960。

max.account.primary.storage (GB)

一个账户可以使用的主存储空间的最大值。默认是200。

max.account.secondary.storage (GB)

一个账户可以使用的辅助存储空间的最大值。默认是400。

max.project.cpus

一个项目可以使用的CPU核心的最大数量。默认是40。

max.project.ram (MB)

一个项目可以使用的RAM的最大数量。默认值是40960。

max.project.primary.storage (GB)

一个项目可以使用的主存储空间的最大值。默认是200。

max.project.secondary.storage (GB)

一个项目可以使用的辅助存储空间的最大值。默认是400。

用户许可

Root管理员、域管理员和终端用户都能够列出资源限制。请确保相应的日志保存在vmops.log和api.log文件中。

  • Root管理员拥有列出和更改任何资源限制的特权。

  • 域管理员仅允许列出和更改其所在域及其子域中的账户和子域的资源限制。

  • 终端用户仅拥有列出自己的资源限制的特权,使用listResourceLimits API。

限制使用注意事项:
  • 主存储和辅助存储空间指的是卷的标称大小而不是物理使用量,实际使用的物理空间可能小于所分配的空间。

  • 如果管理员减少账户的资源限制,将其设置为小于当前已使用的资源数量,已经存在的VM/模板/卷并不会被销毁。限制仅在账户中的用户试图使用这些资源执行新的操作时才生效。例如,对已有VM的下列操作会受到影响:

    • 迁移虚拟机:账户中的用户只能将正在运行的VM迁移到不会引起超限问题的主机上。

    • 恢复虚拟机:已销毁的VM不能被恢复。

  • 对于各种资源类型,如果一个域的限制是X,这个域的子域或账户也可以有它们自身的限制。尽管如此,在任何时间点,分配给其子域和域中账户的资源总和都不能超过限制X。

    例如,当一个域有CPU的限制数量为40,其子域D1和账户A1可以有每个30的限制,但是任何时候分配给D1和A1的资源都不能超过限制40.

  • 如果某个操作需要通过两个或更多的资源限制检查,则以其中较严格的限制为准。例如:如果一个账户的VM限制是10个,CPU限制是20核,而该账户中的用户申请5个各带4核CPU的VM,那么用户可以部署这5个VM,因为没有超过10个VM的限制;尽管如此,用户不能再部署更多实例,因为CPU的限制已经用尽。
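
跨多个资源限制的部署检查可以示意如下(简化模型,并非CloudStack源码):

```python
# 跨多个资源限制的部署检查示意:任一限制耗尽则拒绝(简化模型)
def can_deploy(limits, used, request):
    """limits/used/request 均为 {资源: 数量} 字典;
    所有被请求的资源都有余量时才允许部署。"""
    return all(used.get(r, 0) + n <= limits[r] for r, n in request.items())
```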

在一个域中限制资源使用

CloudStack允许以域为单位配置限制。设置了域限制之后,所有用户仍然受各自账户限制的约束;此外,作为一个整体,他们还不能超过设置在其所在域上的资源限制。域限制汇总该域中所有账户以及该域所有子域中账户的使用量。设置在根域层级的限制应用于根域之下所有域和子域中账户的资源使用总和。

To set a domain limit:

  1. Log in to the CloudStack UI.

  2. In the left navigation tree, click Domains.

  3. Select the domain you want to modify. The current domain limits are displayed.

    A value of -1 shows that there is no limit in place.

  4. Click the Edit button.

  5. Edit the following as needed:

    • Instance Limits

      The number of instances that can be used in a domain.

    • Public IP Limits

      The number of public IP addresses that can be used in a domain.

    • Volume Limits

      The number of disk volumes that can be created in a domain.

    • Snapshot Limits

      The number of snapshots that can be created in a domain.

    • Template Limits

      The number of templates that can be registered in a domain.

    • VPC Limits

      The number of VPCs that can be created in a domain.

    • CPU Limits

      The number of CPU cores that can be used in a domain.

    • Memory Limits (MB)

      The amount of RAM that can be used in a domain.

    • Primary Storage Limits (GB)

      The amount of primary storage space that can be used in a domain.

    • Secondary Storage Limits (GB)

      The amount of secondary storage space that can be used in a domain.

  6. Click Apply.

Default Account Resource Limits

You can limit resource use by accounts. The default limits are set through global configuration parameters, and they affect all accounts within a cloud. The relevant parameter names begin with max.account, for example: max.account.snapshots.

To override a default limit for a particular account, set a per-account resource limit:

  1. Log in to the CloudStack UI.

  2. In the left navigation tree, click Accounts.

  3. Select the account you want to modify. The current limits are displayed.

    A value of -1 shows that there is no limit in place.

  4. Click the Edit button.

  5. Edit the following as needed:

    • Instance Limits

      The number of instances that can be used in an account.

      The default is 20.

    • Public IP Limits

      The number of public IP addresses that can be used in an account.

      The default is 20.

    • Volume Limits

      The number of disk volumes that can be created in an account.

      The default is 20.

    • Snapshot Limits

      The number of snapshots that can be created in an account.

      The default is 20.

    • Template Limits

      The number of templates that can be registered in an account.

      The default is 20.

    • VPC Limits

      The number of VPCs that can be created in an account.

      The default is 20.

    • CPU Limits

      The number of CPU cores that can be used in an account.

      The default is 40.

    • Memory Limits (MB)

      The amount of RAM that can be used in an account.

      The default is 40960.

    • Primary Storage Limits (GB)

      The amount of primary storage space that can be used in an account.

      The default is 200.

    • Secondary Storage Limits (GB)

      The amount of secondary storage space that can be used in an account.

      The default is 400.

  6. Click Apply.
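Per-account limits can also be set programmatically through the updateResourceLimit API. The sketch below only builds the request's query parameters; the endpoint, credentials, and request signing are omitted, and you should verify the resource-type codes against your CloudStack version's API reference.

```python
from urllib.parse import urlencode

# Resource-type codes used by updateResourceLimit / listResourceLimits
# (verify against your CloudStack version's API reference).
RESOURCE_TYPES = {
    "instance": 0, "public_ip": 1, "volume": 2, "snapshot": 3,
    "template": 4, "network": 6, "vpc": 7, "cpu": 8,
    "memory": 9, "primary_storage": 10, "secondary_storage": 11,
}

def update_limit_query(account, domainid, resource, max_value):
    """Build the query string for an updateResourceLimit call.
    A max of -1 means 'no limit'. The request must still be signed
    with your API key before being sent to the management server."""
    params = {
        "command": "updateResourceLimit",
        "account": account,
        "domainid": domainid,
        "resourcetype": RESOURCE_TYPES[resource],
        "max": max_value,
        "response": "json",
    }
    return urlencode(params)

# Cap the CPU cores for account "acct1" in domain 1 at 16.
print(update_limit_query("acct1", 1, "cpu", 16))
```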

Usage Record Format

Virtual Machine Usage Record Format

For running and allocated virtual machine usage, the following fields exist in a usage record:

  • account – The name of the account

  • accountid – The ID of the account

  • domainid – The ID of the domain in which this account resides

  • zoneid – The ID of the zone where the usage occurred

  • description – A string describing what the usage record is tracking

  • usage – A string representation of the usage, including the units of usage (e.g. 'Hrs' for VM running time)

  • usagetype – A number representing the usage type (see Usage Types)

  • rawusage – A number representing the actual usage in hours

  • virtualMachineId – The ID of the virtual machine

  • name – The name of the virtual machine

  • offeringid – The ID of the service offering

  • templateid – The ID of the template or the ID of the parent template. The parent template value is present when the current template was created from a volume.

  • usageid – The ID of the virtual machine

  • type – The hypervisor

  • startdate, enddate – The range of time for which the usage is aggregated; see Dates in the Usage Record

Network Usage Record Format

For network usage (bytes sent/received), the following fields exist in a usage record:

  • account – The name of the account

  • accountid – The ID of the account

  • domainid – The ID of the domain in which this account resides

  • zoneid – The ID of the zone where the usage occurred

  • description – A string describing what the usage record is tracking

  • usagetype – A number representing the usage type (see Usage Types)

  • rawusage – A number representing the actual usage in hours

  • usageid – The device ID (virtual router ID or external device ID)

  • type – The device type (virtual router, external load balancer, etc.)

  • startdate, enddate – The range of time for which the usage is aggregated; see Dates in the Usage Record

IP Address Usage Record Format

For IP address usage, the following fields exist in a usage record:

  • account – The name of the account

  • accountid – The ID of the account

  • domainid – The ID of the domain in which this account resides

  • zoneid – The ID of the zone where the usage occurred

  • description – A string describing what the usage record is tracking

  • usage – A string representation of the usage, including the units of usage

  • usagetype – A number representing the usage type (see Usage Types)

  • rawusage – A number representing the actual usage in hours

  • usageid – The ID of the IP address

  • startdate, enddate – The range of time for which the usage is aggregated; see Dates in the Usage Record

  • issourcenat – Whether source NAT is enabled for the IP address

  • iselastic – True if the IP address is elastic

Disk Volume Usage Record Format

For disk volumes, the following fields exist in a usage record:

  • account – The name of the account

  • accountid – The ID of the account

  • domainid – The ID of the domain in which this account resides

  • zoneid – The ID of the zone where the usage occurred

  • description – A string describing what the usage record is tracking

  • usage – A string representation of the usage, including the units of usage (e.g. 'Hrs' for hours)

  • usagetype – A number representing the usage type (see Usage Types)

  • rawusage – A number representing the actual usage in hours

  • usageid – The ID of the disk volume

  • offeringid – The ID of the disk offering

  • type – The hypervisor

  • templateid – The ROOT template ID

  • size – The amount of storage allocated

  • startdate, enddate – The range of time for which the usage is aggregated; see Dates in the Usage Record

Template, ISO, and Snapshot Usage Record Format
  • account – The name of the account

  • accountid – The ID of the account

  • domainid – The ID of the domain in which this account resides

  • zoneid – The ID of the zone where the usage occurred

  • description – A string describing what the usage record is tracking

  • usage – A string representation of the usage, including the units of usage (e.g. 'Hrs' for hours)

  • usagetype – A number representing the usage type (see Usage Types)

  • rawusage – A number representing the actual usage in hours

  • usageid – The ID of the template, ISO, or snapshot

  • offeringid – The ID of the disk offering

  • templateid – Included only for templates (usage type 7). The source template ID.

  • size – The size of the template, ISO, or snapshot

  • startdate, enddate – The range of time for which the usage is aggregated; see Dates in the Usage Record

Load Balancer Policy or Port Forwarding Rule Usage Record Format
  • account – The name of the account

  • accountid – The ID of the account

  • domainid – The ID of the domain in which this account resides

  • zoneid – The ID of the zone where the usage occurred

  • description – A string describing what the usage record is tracking

  • usage – A string representation of the usage, including the units of usage (e.g. 'Hrs' for hours)

  • usagetype – A number representing the usage type (see Usage Types)

  • rawusage – A number representing the actual usage in hours

  • usageid – The ID of the load balancer policy or port forwarding rule

  • startdate, enddate – The range of time for which the usage is aggregated; see Dates in the Usage Record

Network Offering Usage Record Format
  • account – The name of the account

  • accountid – The ID of the account

  • domainid – The ID of the domain in which this account resides

  • zoneid – The ID of the zone where the usage occurred

  • description – A string describing what the usage record is tracking

  • usage – A string representation of the usage, including the units of usage (e.g. 'Hrs' for hours)

  • usagetype – A number representing the usage type (see Usage Types)

  • rawusage – A number representing the actual usage in hours

  • usageid – The ID of the network offering

  • offeringid – The ID of the network offering

  • virtualMachineId – The ID of the virtual machine

  • startdate, enddate – The range of time for which the usage is aggregated; see Dates in the Usage Record

VPN User Usage Record Format
  • account – The name of the account

  • accountid – The ID of the account

  • domainid – The ID of the domain in which this account resides

  • zoneid – The ID of the zone where the usage occurred

  • description – A string describing what the usage record is tracking

  • usage – A string representation of the usage, including the units of usage (e.g. 'Hrs' for hours)

  • usagetype – A number representing the usage type (see Usage Types)

  • rawusage – A number representing the actual usage in hours

  • usageid – The ID of the VPN user

  • startdate, enddate – The range of time for which the usage is aggregated; see Dates in the Usage Record

Usage Types

The following table shows all usage types.

Type ID

Type Name

Description

1 RUNNING_VM

Tracks the total running time of a VM per usage record period. If the VM is upgraded during the usage period, you will get a separate usage record for the new upgraded VM.

2 ALLOCATED_VM

Tracks the total time from when a VM is created until it is destroyed. This usage type is also useful in determining usage for specific templates, such as Windows-based templates.

3 IP_ADDRESS

Tracks the public IP addresses owned by the account.

4 NETWORK_BYTES_SENT

Tracks the total number of bytes sent by all the VMs for an account. CloudStack does not currently track network traffic per VM.

5 NETWORK_BYTES_RECEIVED

Tracks the total number of bytes received by all the VMs for an account. CloudStack does not currently track network traffic per VM.

6 VOLUME

Tracks the total time from when a disk volume is created until it is destroyed.

7 TEMPLATE

Tracks the total time from when a template (either created from a snapshot or uploaded to the cloud) is created until it is destroyed. The size of the template is also returned.

8 ISO

Tracks the total time from when an ISO is uploaded until it is removed from the cloud. The size of the ISO is also returned.

9 SNAPSHOT

Tracks the total time from when a snapshot is created until it is destroyed.

11 LOAD_BALANCER_POLICY

Tracks the total time from when a load balancer policy is created until it is removed. CloudStack does not track whether a VM has been assigned to the policy.

12 PORT_FORWARDING_RULE

Tracks the time from when a port forwarding rule is created until it is removed.

13 NETWORK_OFFERING

The time from when a network offering is assigned to a VM until it is removed.

14 VPN_USERS

The time from when a VPN user is created until the user is removed.

An example response from the listUsageRecords command:

<listusagerecordsresponse>
   <count>1816</count>
   <usagerecord>
      <account>user5</account>
      <accountid>10004</accountid>
      <domainid>1</domainid>
      <zoneid>1</zoneid>
      <description>i-3-4-WC running time (ServiceOffering: 1) (Template: 3)</description>
      <usage>2.95288 Hrs</usage>
      <usagetype>1</usagetype>
      <rawusage>2.95288</rawusage>
      <virtualmachineid>4</virtualmachineid>
      <name>i-3-4-WC</name>
      <offeringid>1</offeringid>
      <templateid>3</templateid>
      <usageid>245554</usageid>
      <type>XenServer</type>
      <startdate>2009-09-15T00:00:00-0700</startdate>
      <enddate>2009-09-18T16:14:26-0700</enddate>
   </usagerecord>

   … (1,815 more usage records)
</listusagerecordsresponse>
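A response like the one above can be processed programmatically; for example, the following sketch uses Python's standard xml.etree to sum the running-VM hours in a (shortened) sample response:

```python
import xml.etree.ElementTree as ET

# A trimmed-down listUsageRecords response for illustration.
SAMPLE = """
<listusagerecordsresponse>
   <count>1</count>
   <usagerecord>
      <account>user5</account>
      <usagetype>1</usagetype>
      <rawusage>2.95288</rawusage>
      <usage>2.95288 Hrs</usage>
      <virtualmachineid>4</virtualmachineid>
   </usagerecord>
</listusagerecordsresponse>
"""

def running_vm_hours(xml_text):
    """Sum rawusage across all RUNNING_VM (usagetype 1) records."""
    root = ET.fromstring(xml_text)
    total = 0.0
    for rec in root.findall("usagerecord"):
        if rec.findtext("usagetype") == "1":
            total += float(rec.findtext("rawusage"))
    return total

print(running_vm_hours(SAMPLE))  # 2.95288
```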

Dates in the Usage Record

Usage records include a start date and an end date. These dates define the period of time for which the raw usage number was calculated. If daily aggregation is used, the start date is midnight on the day in question and the end date is 23:59:59 on that day (with one exception; see below). A virtual machine could have been deployed at noon on that day, stopped at 6pm, then started up again at 11pm. When usage is calculated for that day, there will be 7 hours of running VM usage (usage type 1) and 12 hours of allocated VM usage (usage type 2). If the same virtual machine runs for the entire next day, there will be 24 hours of both running VM usage (type 1) and allocated VM usage (type 2).

Note: The start date is not the time a virtual machine was started, and the end date is not the time when a virtual machine was stopped. The start and end dates give the time range within which usage was calculated.

For network usage, the start date and end date again define the range in which the number of bytes transferred was calculated. If a user downloads 10 MB and uploads 1 MB in one day, there will be two records, one showing the 10 megabytes received and one showing the 1 megabyte sent.

There is one case where the start date and end date do not correspond to midnight and 11:59:59pm when daily aggregation is used. This occurs only for network usage records. When the usage server has more than one day's worth of unprocessed data, the old data will be included in the aggregation period. The start date in the usage record will show the date and time of the earliest event. For other types of usage, such as IP addresses and VMs, the old unprocessed data is not included in daily aggregation.
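As a worked check of daily aggregation: a VM deployed at noon, stopped at 6pm, and restarted at 11pm accrues 7 hours of running usage and 12 hours of allocated usage for that day. The arithmetic can be sketched as (times in hours since midnight):

```python
def daily_usage(run_intervals, allocated_from, day_end=24):
    """Given the intervals a VM was running during one aggregation day
    and the hour it was first allocated, return (running, allocated)
    hours for that day."""
    running = sum(end - start for start, end in run_intervals)
    allocated = day_end - allocated_from
    return running, allocated

# Deployed at noon (12), stopped at 6pm (18), started again at 11pm (23):
running, allocated = daily_usage([(12, 18), (23, 24)], allocated_from=12)
print(running, allocated)  # 7 12
```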

Managing Networks and Traffic

In a CloudStack cloud, guest VMs can communicate with each other using shared infrastructure with the security and user perception that the guests have a private LAN. The CloudStack virtual router is the main component providing networking features for guest traffic.

Guest Traffic

A network can carry guest traffic only between VMs within one zone. Virtual machines in different zones cannot communicate with each other using their IP addresses; they must communicate with each other by routing through a public IP address.

See a typical guest traffic setup given below:

Depicts a guest traffic setup

Typically, the Management Server automatically creates a virtual router for each network. A virtual router is a special virtual machine that runs on the hosts. Each virtual router in an isolated network has three network interfaces. Its eth0 interface serves as the gateway for the guest traffic and has the IP address of 10.1.1.1. Its eth1 interface is used by the system to configure the virtual router. Its eth2 interface is assigned a public IP address for public traffic. If multiple public VLANs are used, the router will have multiple public interfaces.

The virtual router provides DHCP and will automatically assign an IP address for each guest VM within the IP range assigned for the network. The user can manually reconfigure guest VMs to assume different IP addresses.

Source NAT is automatically configured in the virtual router to forward outbound traffic for all guest VMs.

Networking in a Pod

The figure below illustrates network setup within a single pod. The hosts are connected to a pod-level switch. At a minimum, the hosts should have one physical uplink to each switch. Bonded NICs are supported as well. The pod-level switch is a pair of redundant gigabit switches with 10 G uplinks.

diagram showing logical view of network in a pod.

Servers are connected as follows:

  • Storage devices are connected to only the network that carries management traffic.
  • Hosts are connected to networks for both management traffic and public traffic.
  • Hosts are also connected to one or more networks carrying guest traffic.

We recommend the use of multiple physical Ethernet cards to implement each network interface as well as redundant switch fabric in order to maximize throughput and improve reliability.

Networking in a Zone

The following figure illustrates the network setup within a single zone.

Depicts network setup in a single zone.

A firewall for management traffic operates in the NAT mode. The network typically is assigned IP addresses in the 192.168.0.0/16 Class B private address space. Each pod is assigned IP addresses in the 192.168.*.0/24 Class C private address space.

Each zone has its own set of public IP addresses. Public IP addresses from different zones do not overlap.

Basic Zone Physical Network Configuration

In a basic network, configuring the physical network is fairly straightforward. You only need to configure one guest network to carry traffic that is generated by guest VMs. When you first add a zone to CloudStack, you set up the guest network through the Add Zone screens.

Advanced Zone Physical Network Configuration

Within a zone that uses advanced networking, you need to tell the Management Server how the physical network is set up to carry different kinds of traffic in isolation.

Configure Guest Traffic in an Advanced Zone

These steps assume you have already logged in to the CloudStack UI. To configure the base guest network:

  1. In the left navigation, choose Infrastructure. On Zones, click View More, then click the zone to which you want to add a network.

  2. Click the Network tab.

  3. Click Add guest network.

    The Add guest network window is displayed:

    Add Guest network setup in a single zone.

  4. Provide the following information:

    • Name: The name of the network. This will be user-visible
    • Display Text: The description of the network. This will be user-visible
    • Zone: The zone in which you are configuring the guest network.
    • Network offering: If the administrator has configured multiple network offerings, select the one you want to use for this network
    • Guest Gateway: The gateway that the guests should use
    • Guest Netmask: The netmask in use on the subnet the guests will use
  5. Click OK.

Configure Public Traffic in an Advanced Zone

In a zone that uses advanced networking, you need to configure at least one range of IP addresses for Internet traffic.

Configuring a Shared Guest Network
  1. Log in to the CloudStack UI as administrator.

  2. In the left navigation, choose Infrastructure.

  3. On Zones, click View More.

  4. Click the zone to which you want to add a guest network.

  5. Click the Physical Network tab.

  6. Click the physical network you want to work with.

  7. On the Guest node of the diagram, click Configure.

  8. Click the Network tab.

  9. Click Add guest network.

    The Add guest network window is displayed.

  10. Specify the following:

    • Name: The name of the network. This will be visible to the user.

    • Description: The short description of the network that can be displayed to users.

    • VLAN ID: The unique ID of the VLAN.

    • Isolated VLAN ID: The unique ID of the Secondary Isolated VLAN.

    • Scope: The available scopes are Domain, Account, Project, and All.

      • Domain: Selecting Domain limits the scope of this guest network to the domain you specify. The network will not be available for other domains. If you select Subdomain Access, the guest network is available to all the sub domains within the selected domain.
      • Account: The account for which the guest network is being created. You must specify the domain the account belongs to.
      • Project: The project for which the guest network is being created. You must specify the domain the project belongs to.
      • All: The guest network is available to all the domains, accounts, and projects within the selected zone.
    • Network Offering: If the administrator has configured multiple network offerings, select the one you want to use for this network.

    • Gateway: The gateway that the guests should use.

    • Netmask: The netmask in use on the subnet the guests will use.

    • IP Range: A range of IP addresses that are accessible from the Internet and are assigned to the guest VMs.

      If one NIC is used, these IPs should be in the same CIDR in the case of IPv6.

    • IPv6 CIDR: The network prefix that defines the guest network subnet. This is the CIDR that describes the IPv6 addresses in use in the guest networks in this zone. To allot IP addresses from within a particular address block, enter a CIDR.

    • Network Domain: A custom DNS suffix at the level of a network. If you want to assign a special domain name to the guest VM network, specify a DNS suffix.

  11. Click OK to confirm.

Using Multiple Guest Networks

In zones that use advanced networking, additional networks for guest traffic may be added at any time after the initial installation. You can also customize the domain name associated with the network by specifying a DNS suffix for each network.

A VM’s networks are defined at VM creation time. A VM cannot add or remove networks after it has been created, although the user can go into the guest and remove the IP address from the NIC on a particular network.

Each VM has just one default network. The virtual router’s DHCP reply will set the guest’s default gateway as that for the default network. Multiple non-default networks may be added to a guest in addition to the single, required default network. The administrator can control which networks are available as the default network.

Additional networks can either be available to all accounts or be assigned to a specific account. Networks that are available to all accounts are zone-wide. Any user with access to the zone can create a VM with access to that network. These zone-wide networks provide little or no isolation between guests. Networks that are assigned to a specific account provide strong isolation.

Adding an Additional Guest Network
  1. Log in to the CloudStack UI as an administrator or end user.
  2. In the left navigation, choose Network.
  3. Click Add guest network. Provide the following information:
    • Name: The name of the network. This will be user-visible.
    • Display Text: The description of the network. This will be user-visible.
    • Zone. The name of the zone this network applies to. Each zone is a broadcast domain, and therefore each zone has a different IP range for the guest network. The administrator must configure the IP range for each zone.
    • Network offering: If the administrator has configured multiple network offerings, select the one you want to use for this network.
    • Guest Gateway: The gateway that the guests should use.
    • Guest Netmask: The netmask in use on the subnet the guests will use.
  4. Click Create.
Reconfiguring Networks in VMs

CloudStack provides you the ability to move VMs between networks and reconfigure a VM’s network. You can remove a VM from a network and add to a new network. You can also change the default network of a virtual machine. With this functionality, hybrid or traditional server loads can be accommodated with ease.

This feature is supported on XenServer, VMware, and KVM hypervisors.

Prerequisites

Ensure that vm-tools is running on guest VMs so that adding or removing networks works on the VMware hypervisor.

Adding a Network
  1. Log in to the CloudStack UI as an administrator or end user.

  2. In the left navigation, click Instances.

  3. Choose the VM that you want to work with.

  4. Click the NICs tab.

  5. Click Add network to VM.

    The Add network to VM dialog is displayed.

  6. In the drop-down list, select the network that you would like to add this VM to.

    A new NIC is added for this network. You can view the following details in the NICs page:

    • ID
    • Network Name
    • Type
    • IP Address
    • Gateway
    • Netmask
    • Is default
    • CIDR (for IPv6)
Removing a Network
  1. Log in to the CloudStack UI as an administrator or end user.
  2. In the left navigation, click Instances.
  3. Choose the VM that you want to work with.
  4. Click the NICs tab.
  5. Locate the NIC you want to remove.
  6. Click the Remove NIC button.
  7. Click Yes to confirm.
Selecting the Default Network
  1. Log in to the CloudStack UI as an administrator or end user.
  2. In the left navigation, click Instances.
  3. Choose the VM that you want to work with.
  4. Click the NICs tab.
  5. Locate the NIC you want to work with.
  6. Click the Set default NIC button.
  7. Click Yes to confirm.
Changing the Network Offering on a Guest Network

A user or administrator can change the network offering that is associated with an existing guest network.

  1. Log in to the CloudStack UI as an administrator or end user.

  2. If you are changing from a network offering that uses the CloudStack virtual router to one that uses external devices as network service providers, you must first stop all the VMs on the network.

  3. In the left navigation, choose Network.

  4. Click the name of the network you want to modify.

  5. In the Details tab, click the Edit button.

  6. In Network Offering, choose the new network offering, then click Apply.

    A prompt is displayed asking whether you want to keep the existing CIDR. This is to let you know that if you change the network offering, the CIDR will be affected.

    If you upgrade between the virtual router as a provider and an external network device as a provider, acknowledge the change of CIDR to continue by choosing Yes.

  7. Wait for the update to complete. Don’t try to restart VMs until the network change is complete.

  8. If you stopped any VMs, restart them.

IP Reservation in Isolated Guest Networks

In isolated guest networks, a part of the guest IP address space can be reserved for non-CloudStack VMs or physical servers. To do so, you configure a range of Reserved IP addresses by specifying the CIDR when a guest network is in Implemented state. If your customers wish to have non-CloudStack controlled VMs or physical servers on the same network, they can share a part of the IP address space that is primarily provided to the guest network.

In an Advanced zone, an IP address range or a CIDR is assigned to a network when the network is defined. The CloudStack virtual router acts as the DHCP server and uses CIDR for assigning IP addresses to the guest VMs. If you decide to reserve CIDR for non-CloudStack purposes, you can specify a part of the IP address range or the CIDR that should only be allocated by the DHCP service of the virtual router to the guest VMs created in CloudStack. The remaining IPs in that network are called Reserved IP Range. When IP reservation is configured, the administrator can add additional VMs or physical servers that are not part of CloudStack to the same network and assign them the Reserved IP addresses. CloudStack guest VMs cannot acquire IPs from the Reserved IP Range.

IP Reservation Considerations

Consider the following before you reserve an IP range for non-CloudStack machines:

  • IP Reservation is supported only in Isolated networks.

  • IP Reservation can be applied only when the network is in Implemented state.

  • No IP Reservation is done by default.

  • Guest VM CIDR you specify must be a subset of the network CIDR.

  • Specify a valid Guest VM CIDR. IP Reservation is applied only if no active IPs exist outside the Guest VM CIDR.

    You cannot apply IP Reservation if any VM is allotted an IP address that is outside the Guest VM CIDR.

  • To reset an existing IP Reservation, apply IP reservation by specifying the value of network CIDR in the CIDR field.

    For example, the following table describes three scenarios of guest network creation:

    Case | CIDR        | Network CIDR | Reserved IP Range for Non-CloudStack VMs | Description
    1    | 10.1.1.0/24 | None         | None                                     | No IP Reservation.
    2    | 10.1.1.0/26 | 10.1.1.0/24  | 10.1.1.64 to 10.1.1.254                  | IP Reservation configured by the UpdateNetwork API with guestvmcidr=10.1.1.0/26, or by entering 10.1.1.0/26 in the CIDR field in the UI.
    3    | 10.1.1.0/24 | None         | None                                     | Removing IP Reservation by the UpdateNetwork API with guestvmcidr=10.1.1.0/24, or by entering 10.1.1.0/24 in the CIDR field in the UI.
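The reserved-range arithmetic in case 2 can be verified with Python's ipaddress module: the Guest VM CIDR must be a subset of the network CIDR, and the portion of the network CIDR outside it becomes the Reserved IP Range (a sketch of the arithmetic only, not a CloudStack call):

```python
import ipaddress

def reserved_range(network_cidr, guest_vm_cidr):
    """Return the (first, last) reserved IPs: the host addresses of the
    network CIDR that lie outside the Guest VM CIDR."""
    net = ipaddress.ip_network(network_cidr)
    guest = ipaddress.ip_network(guest_vm_cidr)
    if not guest.subnet_of(net):
        raise ValueError("Guest VM CIDR must be a subset of the network CIDR")
    # hosts() already excludes the network and broadcast addresses.
    leftover = [ip for ip in net.hosts() if ip not in guest]
    return str(leftover[0]), str(leftover[-1])

print(reserved_range("10.1.1.0/24", "10.1.1.0/26"))  # ('10.1.1.64', '10.1.1.254')
```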
Limitations
  • IP Reservation is not supported if active IPs are found outside the Guest VM CIDR.
  • If upgrading the network offering causes a change in CIDR (such as upgrading from an offering with no external devices to one with external devices), any existing IP Reservation becomes void. Reconfigure IP Reservation in the newly re-implemented network.
Best Practices

Apply IP Reservation to the guest network as soon as the network state changes to Implemented. If you apply reservation soon after the first guest VM is deployed, fewer conflicts occur while applying the reservation.

Reserving an IP Range
  1. Log in to the CloudStack UI as an administrator or end user.

  2. In the left navigation, choose Network.

  3. Click the name of the network you want to modify.

  4. In the Details tab, click the Edit button.

    The CIDR field becomes editable.

  5. In CIDR, specify the Guest VM CIDR.

  6. Click Apply.

    Wait for the update to complete. The Network CIDR and the Reserved IP Range are displayed on the Details page.

Reserving Public IP Addresses and VLANs for Accounts

CloudStack provides you the ability to reserve a set of public IP addresses and VLANs exclusively for an account. During zone creation, you can continue defining a set of VLANs and multiple public IP ranges. This feature extends the functionality to enable you to dedicate a fixed set of VLANs and guest IP addresses for a tenant.

Note that if an account has consumed all the VLANs and IPs dedicated to it, the account can acquire two more resources from the system. CloudStack provides the root admin with two configuration parameters to modify this default behavior: use.system.public.ips and use.system.guest.vlans. These global parameters enable the root admin to disallow an account from acquiring public IPs and guest VLANs from the system if the account has dedicated resources and these dedicated resources have all been consumed. Both these configurations are configurable at the account level.

This feature provides you the following capabilities:

  • Reserve a VLAN range and public IP address range from an Advanced zone and assign it to an account

  • Disassociate a VLAN and public IP address range from an account

  • View the number of public IP addresses allocated to an account

  • Check whether the required range is available and conforms to account limits.

    The maximum IPs per account limit cannot be superseded.

Dedicating IP Address Ranges to an Account
  1. Log in to the CloudStack UI as administrator.

  2. In the left navigation bar, click Infrastructure.

  3. In Zones, click View All.

  4. Choose the zone you want to work with.

  5. Click the Physical Network tab.

  6. In the Public node of the diagram, click Configure.

  7. Click the IP Ranges tab.

    You can either assign an existing IP range to an account, or create a new IP range and assign to an account.

  8. To assign an existing IP range to an account, perform the following:

    1. Locate the IP range you want to work with.

    2. Click the Add Account button.

      The Add Account dialog is displayed.

    3. Specify the following:

      • Account: The account to which you want to assign the IP address range.
      • Domain: The domain associated with the account.

      To create a new IP range and assign an account, perform the following:

      1. Specify the following:

        • Gateway

        • Netmask

        • VLAN

        • Start IP

        • End IP

        • Account: Perform the following:

          1. Click Account.

            The Add Account page is displayed.

          2. Specify the following:

            • Account: The account to which you want to assign an IP address range.
            • Domain: The domain associated with the account.
          3. Click OK.

      2. Click Add.

Dedicating VLAN Ranges to an Account
  1. After the CloudStack Management Server is installed, log in to the CloudStack UI as administrator.

  2. In the left navigation bar, click Infrastructure.

  3. In Zones, click View All.

  4. Choose the zone you want to work with.

  5. Click the Physical Network tab.

  6. In the Guest node of the diagram, click Configure.

  7. Select the Dedicated VLAN Ranges tab.

  8. Click Dedicate VLAN Range.

    The Dedicate VLAN Range dialog is displayed.

  9. Specify the following:

    • VLAN Range: The VLAN range that you want to assign to an account.
    • Account: The account to which you want to assign the selected VLAN range.
    • Domain: The domain associated with the account.

Configuring Multiple IP Addresses on a Single NIC

CloudStack provides you the ability to associate multiple private IP addresses per guest VM NIC. In addition to the primary IP, you can assign additional IPs to the guest VM NIC. This feature is supported on all the network configurations: Basic, Advanced, and VPC. Security Groups, Static NAT and Port forwarding services are supported on these additional IPs.

As always, you can specify an IP from the guest subnet; if not specified, an IP is automatically picked up from the guest VM subnet. You can view the IPs associated with each guest VM NIC on the UI. You can apply NAT on these additional guest IPs by using the network configuration option in the CloudStack UI. You must specify the NIC to which the IP should be associated.

This feature is supported on XenServer, KVM, and VMware hypervisors. Note that Basic zone security groups are not supported on VMware.

Use Cases

Some of the use cases are described below:

  • Network devices, such as firewalls and load balancers, generally work best when they have access to multiple IP addresses on the network interface.
  • Moving private IP addresses between interfaces or instances. Applications that are bound to specific IP addresses can be moved between instances.
  • Hosting multiple SSL Websites on a single instance. You can install multiple SSL certificates on a single instance, each associated with a distinct IP address.
Guidelines

To prevent IP conflict, configure different subnets when multiple networks are connected to the same VM.

Assigning Additional IPs to a VM
  1. Log in to the CloudStack UI.

  2. In the left navigation bar, click Instances.

  3. Click the name of the instance you want to work with.

  4. In the Details tab, click NICs.

  5. Click View Secondary IPs.

  6. Click Acquire New Secondary IP, and click Yes in the confirmation dialog.

    You need to configure the IP on the guest VM NIC manually. CloudStack will not automatically configure the acquired IP address on the VM. Ensure that the IP address configuration persists across VM reboots.

    Within a few moments, the new IP address should appear with the state Allocated. You can now use the IP address in Port Forwarding or StaticNAT rules.

Port Forwarding and StaticNAT Services Changes

Because multiple IPs can be associated per NIC, you are allowed to select a desired IP for the Port Forwarding and StaticNAT services. The default is the primary IP. To enable this functionality, an extra optional parameter 'vmguestip' is added to the Port Forwarding and StaticNAT APIs (enableStaticNat, createIpForwardingRule) to indicate on which IP address NAT needs to be configured. If vmguestip is passed, NAT is configured on the specified private IP of the VM. If it is not passed, NAT is configured on the primary IP of the VM.
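As an illustration, a call enabling static NAT on a specific secondary IP could be assembled as follows (a sketch that only builds the query parameters; the IDs are placeholders, and the request must still be signed and sent to your management server):

```python
from urllib.parse import urlencode

def enable_static_nat_query(ipaddress_id, vm_id, vmguestip=None):
    """Build the query parameters for an enableStaticNat call.
    When vmguestip is given, NAT is configured on that secondary IP of
    the VM's NIC; otherwise the VM's primary IP is used."""
    params = {
        "command": "enableStaticNat",
        "ipaddressid": ipaddress_id,
        "virtualmachineid": vm_id,
        "response": "json",
    }
    if vmguestip is not None:
        params["vmguestip"] = vmguestip
    return urlencode(params)

# Placeholder IDs; NAT is configured on the secondary IP 10.1.1.7.
print(enable_static_nat_query("200", "15", vmguestip="10.1.1.7"))
```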

About Multiple IP Ranges

Note

The feature can only be implemented on IPv4 addresses.

CloudStack provides you with the flexibility to add guest IP ranges from different subnets in Basic zones and security groups-enabled Advanced zones. For security groups-enabled Advanced zones, this implies that multiple subnets can be added to the same VLAN. With this feature, you are able to add IP address ranges from the same subnet or from a different one when IP addresses are exhausted. This in turn allows you to employ a larger number of subnets and thus reduce the address management overhead. To support this feature, the createVlanIpRange API is extended to add IP ranges also from a different subnet.

Ensure that you manually configure the gateway of the new subnet before adding the IP range. Note that CloudStack supports only one gateway for a subnet; overlapping subnets are not currently supported.

Use the deleteVlanIpRange API to delete IP ranges. This operation fails if an IP from the removed range is in use. If the removed range contains the IP address on which the DHCP server is running, CloudStack acquires a new IP from the same subnet. If no IP is available in the subnet, the remove operation fails.

This feature is supported on KVM, XenServer, and VMware hypervisors.

About Elastic IPs

Elastic IP (EIP) addresses are the IP addresses that are associated with an account, and act as static IP addresses. The account owner has the complete control over the Elastic IP addresses that belong to the account. As an account owner, you can allocate an Elastic IP to a VM of your choice from the EIP pool of your account. Later if required you can reassign the IP address to a different VM. This feature is extremely helpful during VM failure. Instead of replacing the VM which is down, the IP address can be reassigned to a new VM in your account.

Similar to the public IP address, Elastic IP addresses are mapped to their associated private IP addresses by using StaticNAT. The EIP service is equipped with StaticNAT (1:1) service in an EIP-enabled basic zone. The default network offering, DefaultSharedNetscalerEIPandELBNetworkOffering, provides your network with EIP and ELB network services if a NetScaler device is deployed in your zone. Consider the following illustration for more details.

Elastic IP in a NetScaler-enabled Basic Zone.

In the illustration, a NetScaler appliance is the default entry or exit point for the CloudStack instances, and firewall is the default entry or exit point for the rest of the data center. Netscaler provides LB services and staticNAT service to the guest networks. The guest traffic in the pods and the Management Server are on different subnets / VLANs. The policy-based routing in the data center core switch sends the public traffic through the NetScaler, whereas the rest of the data center goes through the firewall.

The EIP work flow is as follows:

  • When a user VM is deployed, a public IP is automatically acquired from the pool of public IPs configured in the zone. This IP is owned by the VM’s account.

  • Each VM will have its own private IP. When the user VM starts, Static NAT is provisioned on the NetScaler device by using the Inbound Network Address Translation (INAT) and Reverse NAT (RNAT) rules between the public IP and the private IP.

    Note

    Inbound NAT (INAT) is a type of NAT supported by NetScaler, in which the destination IP address is replaced in the packets from the public network, such as the Internet, with the private IP address of a VM in the private network. Reverse NAT (RNAT) is a type of NAT supported by NetScaler, in which the source IP address is replaced in the packets generated by a VM in the private network with the public IP address.

  • This default public IP will be released in two cases:

    • When the VM is stopped. When the VM starts, it again receives a new public IP, not necessarily the same one allocated initially, from the pool of public IPs.
    • When the user acquires a public IP (Elastic IP). This public IP is associated with the account, but will not be mapped to any private IP. However, the user can enable Static NAT to associate this IP with the private IP of a VM in the account. The Static NAT rule for the public IP can be disabled at any time. When Static NAT is disabled, a new public IP is allocated from the pool, which is not necessarily the same one allocated initially.

For the deployments where public IPs are limited resources, you have the flexibility to choose not to allocate a public IP by default. You can use the Associate Public IP option to turn on or off the automatic public IP assignment in the EIP-enabled Basic zones. If you turn off the automatic public IP assignment while creating a network offering, only a private IP is assigned to a VM when the VM is deployed with that network offering. Later, the user can acquire an IP for the VM and enable static NAT.

For more information on the Associate Public IP option, see “Creating a New Network Offering”.

Note

The Associate Public IP feature is designed only for use with user VMs. The System VMs continue to get both a public IP and a private IP by default, irrespective of the network offering configuration.

New deployments which use the default shared network offering with EIP and ELB services to create a shared network in the Basic zone will continue allocating public IPs to each user VM.
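The acquire-and-static-NAT flow described above maps onto two API calls. Below is a minimal sketch that builds the requests against the unauthenticated integration port used elsewhere in this guide; the commands are the standard associateIpAddress and enableStaticNat APIs, and every UUID shown is a placeholder:

```python
from urllib.parse import urlencode

# Base URL of the unauthenticated integration API, as used elsewhere in
# this guide. All IDs below are placeholder UUIDs, not real values.
BASE = "http://localhost:8096/client/api"

def api_url(command, **params):
    """Build an integration-API request URL for the given command."""
    query = urlencode({"command": command, "response": "json", **params})
    return f"{BASE}?{query}"

# Step 1: acquire a public (elastic) IP for the account in the zone.
acquire = api_url("associateIpAddress", zoneid="<zone-uuid>")

# Step 2: point that IP at a VM's private IP via static NAT.
enable = api_url("enableStaticNat",
                 ipaddressid="<ip-uuid>",
                 virtualmachineid="<vm-uuid>")

print(acquire)
print(enable)
```

In production deployments the integration port is normally closed; the same commands go through the authenticated endpoint on port 8080 with a request signature.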

Portable IPs

About Portable IP

Portable IPs in CloudStack are a region-level pool of IPs, elastic in nature, that can be transferred across geographically separated zones. As an administrator, you can provision a pool of portable public IPs at the region level and make them available for user consumption. Users can acquire portable IPs if the admin has provisioned portable IPs at the region level they are part of. These IPs can be used for any service within an advanced zone. You can also use portable IPs for EIP services in basic zones.

The salient features of Portable IP are as follows:

  • IP is statically allocated
  • IP need not be associated with a network
  • IP association is transferable across networks
  • IP is transferable across both Basic and Advanced zones
  • IP is transferable across VPC, non-VPC isolated and shared networks
  • Portable IP transfer is available only for static NAT.

Guidelines

Before transferring to another network, ensure that no network rules (Firewall, Static NAT, Port Forwarding, and so on) exist on that portable IP.

Configuring Portable IPs
  1. Log in to the CloudStack UI as an administrator or end user.

  2. In the left navigation, click Regions.

  3. Choose the region that you want to work with.

  4. Click View Portable IP.

  5. Click Portable IP Range.

    The Add Portable IP Range window is displayed.

  6. Specify the following:

    • Start IP/ End IP: A range of IP addresses that are accessible from the Internet and will be allocated to guest VMs. Enter the first and last IP addresses that define a range that CloudStack can assign to guest VMs.
    • Gateway: The gateway in use for the Portable IP addresses you are configuring.
    • Netmask: The netmask associated with the Portable IP range.
    • VLAN: The VLAN that will be used for public traffic.
  7. Click OK.
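The same range can be added programmatically. Below is a sketch assuming the createPortableIpRange API call, with example values mirroring the fields in the Add Portable IP Range window (the region ID and VLAN here are placeholders):

```python
from urllib.parse import urlencode

# Sketch: add a portable IP range via the API rather than the UI.
# The values mirror the Add Portable IP Range dialog fields; the
# region id and VLAN are placeholders for your own environment.
params = {
    "command": "createPortableIpRange",
    "response": "json",
    "regionid": "1",
    "startip": "10.1.1.10",
    "endip": "10.1.1.20",
    "gateway": "10.1.1.1",
    "netmask": "255.255.255.0",
    "vlan": "100",
}
url = "http://localhost:8096/client/api?" + urlencode(params)
print(url)
```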

Acquiring a Portable IP
  1. Log in to the CloudStack UI as an administrator or end user.

  2. In the left navigation, choose Network.

  3. Click the name of the network that you want to work with.

  4. Click View IP Addresses.

  5. Click Acquire New IP.

    The Acquire New IP window is displayed.

  6. Specify whether you want a cross-zone IP or not.

  7. Click Yes in the confirmation dialog.

    Within a few moments, the new IP address should appear with the state Allocated. You can now use the IP address in port forwarding or static NAT rules.

Transferring Portable IP

An IP can be transferred from one network to another only if Static NAT is enabled. However, when a portable IP is associated with a network, you can use it for any service in the network.

To transfer a portable IP across the networks, execute the following API:

http://localhost:8096/client/api?command=enableStaticNat&response=json&ipaddressid=a4bc37b2-4b4e-461d-9a62-b66414618e36&virtualmachineid=a242c476-ef37-441e-9c7b-b303e2a9cb4f&networkid=6e7cd8d1-d1ba-4c35-bdaf-333354cbd49810

Replace each UUID with the appropriate value. For example, to transfer a portable IP to network X and VM Y, execute the following:

http://localhost:8096/client/api?command=enableStaticNat&response=json&ipaddressid=a4bc37b2-4b4e-461d-9a62-b66414618e36&virtualmachineid=Y&networkid=X
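The URLs above target the unauthenticated integration port (8096). Against the regular API endpoint on port 8080, every request must instead carry a signature: sort the parameters by name, URL-encode the values, lower-case the whole query string, and HMAC-SHA1 it with the user's secret key. A sketch of that scheme follows; the API key and secret key are placeholders:

```python
import base64
import hashlib
import hmac
from urllib.parse import quote, urlencode

def sign_request(params, api_key, secret_key):
    """Return a signed CloudStack API query string.

    The signature is the Base64-encoded HMAC-SHA1 of the query string
    with parameters sorted by name and lower-cased, per the CloudStack
    API signing scheme. Keys here are placeholders.
    """
    params = dict(params, apiKey=api_key)
    # Sort by parameter name, URL-encode values, lower-case the result.
    to_sign = "&".join(
        f"{k}={quote(str(v), safe='')}"
        for k, v in sorted(params.items(), key=lambda kv: kv[0].lower())
    ).lower()
    digest = hmac.new(secret_key.encode(), to_sign.encode(), hashlib.sha1).digest()
    params["signature"] = base64.b64encode(digest).decode()
    return urlencode(params)

query = sign_request(
    {"command": "enableStaticNat", "response": "json",
     "ipaddressid": "<ip-uuid>", "virtualmachineid": "<vm-uuid>",
     "networkid": "<network-uuid>"},
    api_key="<api-key>", secret_key="<secret-key>",
)
print("http://<management-server>:8080/client/api?" + query)
```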

Multiple Subnets in Shared Network

CloudStack provides you with the flexibility to add guest IP ranges from different subnets in Basic zones and security group-enabled Advanced zones. For security group-enabled Advanced zones, this implies that multiple subnets can be added to the same VLAN. With the addition of this feature, you are able to add IP address ranges from the same subnet or from a different one when IP addresses are exhausted. This in turn allows you to employ a higher number of subnets and thus reduce the address management overhead. You can delete the IP ranges you have added.

Prerequisites and Guidelines
  • This feature can only be implemented:
    • on IPv4 addresses
    • if virtual router is the DHCP provider
    • on KVM, XenServer, and VMware hypervisors
  • Manually configure the gateway of the new subnet before adding the IP range.
  • CloudStack supports only one gateway per subnet; overlapping subnets are not currently supported.

Adding Multiple Subnets to a Shared Network
  1. Log in to the CloudStack UI as an administrator or end user.

  2. In the left navigation, choose Infrastructure.

  3. On Zones, click View More, then click the zone you want to work with.

  4. Click Physical Network.

  5. In the Guest node of the diagram, click Configure.

  6. Click Networks.

  7. Select the networks you want to work with.

  8. Click View IP Ranges.

  9. Click Add IP Range.

    The Add IP Range dialog is displayed, as follows:

    adding an IP range to a network.

  10. Specify the following:

    All the fields are mandatory.

    • Gateway: The gateway for the tier you create. Ensure that the gateway is within the Super CIDR range that you specified while creating the VPC, and is not overlapped with the CIDR of any existing tier within the VPC.

    • Netmask: The netmask for the tier you create.

      For example, if the VPC CIDR is 10.0.0.0/16 and the network tier CIDR is 10.0.1.0/24, the gateway of the tier is 10.0.1.1, and the netmask of the tier is 255.255.255.0.

    • Start IP/ End IP: A range of IP addresses that are accessible from the Internet and will be allocated to guest VMs. Enter the first and last IP addresses that define a range that CloudStack can assign to guest VMs.

  11. Click OK.
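Before typing values into the Add IP Range dialog, the gateway/netmask arithmetic from the 10.0.1.0/24 example above can be sanity-checked with Python's standard ipaddress module (the start/end addresses below are example values, not from this guide):

```python
import ipaddress

# Verify that a planned gateway and guest IP range fall inside the
# intended subnet (subnet values from the 10.0.1.0/24 example above;
# the start/end addresses are illustrative).
subnet = ipaddress.ip_network("10.0.1.0/24")
gateway = ipaddress.ip_address("10.0.1.1")
start = ipaddress.ip_address("10.0.1.10")
end = ipaddress.ip_address("10.0.1.200")

print(subnet.netmask)             # 255.255.255.0
print(gateway in subnet.hosts())  # hosts() excludes network/broadcast addresses
print(start in subnet and end in subnet)
```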

Isolation in Advanced Zone Using Private VLAN

Isolation of guest traffic in shared networks can be achieved by using Private VLANs (PVLAN). PVLANs provide Layer 2 isolation between ports within the same VLAN. In a PVLAN-enabled shared network, a user VM cannot reach other user VMs, though it can reach the DHCP server and gateway. This in turn allows users to control traffic within a network, deploy multiple applications without communication between them, and prevent communication with other users' VMs.

  • Isolate VMs in a shared network by using Private VLANs.
  • Supported on KVM, XenServer, and VMware hypervisors.
  • A PVLAN-enabled shared network can be one of the multiple networks of a guest VM.

About Private VLAN

In an Ethernet switch, a VLAN is a broadcast domain where hosts can establish direct communication with one another at Layer 2. Private VLAN is designed as an extension of the VLAN standard to add further segmentation of the logical broadcast domain. A regular VLAN is a single broadcast domain, whereas a private VLAN partitions a larger VLAN broadcast domain into smaller sub-domains. A sub-domain is represented by a pair of VLANs: a Primary VLAN and a Secondary VLAN. The original VLAN that is being divided into smaller groups is called Primary, which implies that all VLAN pairs in a private VLAN share the same Primary VLAN. All the secondary VLANs exist only inside the Primary. Each Secondary VLAN has a specific VLAN ID associated to it, which differentiates one sub-domain from another.

Three types of ports exist in a private VLAN domain, which essentially determine the behaviour of the participating hosts. Each port has its own unique set of rules, which regulate a connected host's ability to communicate with other connected hosts within the same private VLAN domain. Each host that is part of a PVLAN pair can be configured by using one of these three port designations:

  • Promiscuous: A promiscuous port can communicate with all the interfaces, including the community and isolated host ports that belong to the secondary VLANs. In Promiscuous mode, hosts are connected to promiscuous ports and are able to communicate directly with resources on both primary and secondary VLAN. Routers, DHCP servers, and other trusted devices are typically attached to promiscuous ports.
  • Isolated VLANs: The ports within an isolated VLAN cannot communicate with each other at the layer-2 level. The hosts that are connected to Isolated ports can directly communicate only with the Promiscuous resources. If your customer device needs to have access only to a gateway router, attach it to an isolated port.
  • Community VLANs: The ports within a community VLAN can communicate with each other and with the promiscuous ports, but they cannot communicate with the ports in other communities at the layer-2 level. In a Community mode, direct communication is permitted only with the hosts in the same community and those that are connected to the Primary PVLAN in promiscuous mode. If your customer has two devices that need to be isolated from other customers’ devices, but to be able to communicate among themselves, deploy them in community ports.

Prerequisites
  • Use a PVLAN supported switch.

    See Private VLAN Catalyst Switch Support Matrix for more information.

  • All the layer 2 switches, which are PVLAN-aware, are connected to each other, and one of them is connected to a router. All the ports connected to the host would be configured in trunk mode. Open Management VLAN, Primary VLAN (public) and Secondary Isolated VLAN ports. Configure the switch port connected to the router in PVLAN promiscuous trunk mode, which would translate an isolated VLAN to primary VLAN for the PVLAN-unaware router.

    Note that only the Cisco Catalyst 4500 has the PVLAN promiscuous trunk mode to connect both a normal VLAN and a PVLAN to a PVLAN-unaware switch. For other Catalyst switches that support PVLAN, connect the switch to the upper switch by using cables, one for each PVLAN pair.

  • Configure private VLAN on your physical switches out-of-band.

  • Before you use PVLAN on XenServer and KVM, enable Open vSwitch (OVS).

    Note

    OVS on XenServer and KVM does not support PVLAN natively. Therefore, CloudStack simulates PVLAN on OVS for XenServer and KVM by modifying the flow table.

Creating a PVLAN-Enabled Guest Network
  1. Log in to the CloudStack UI as administrator.

  2. In the left navigation, choose Infrastructure.

  3. On Zones, click View More.

  4. Click the zone to which you want to add a guest network.

  5. Click the Physical Network tab.

  6. Click the physical network you want to work with.

  7. On the Guest node of the diagram, click Configure.

  8. Click the Network tab.

  9. Click Add guest network.

    The Add guest network window is displayed.

  10. Specify the following:

    • Name: The name of the network. This will be visible to the user.

    • Description: The short description of the network that can be displayed to users.

    • VLAN ID: The unique ID of the VLAN.

    • Secondary Isolated VLAN ID: The unique ID of the Secondary Isolated VLAN.

      For the description of Secondary Isolated VLAN, see “About Private VLAN”.

    • Scope: The available scopes are Domain, Account, Project, and All.

      • Domain: Selecting Domain limits the scope of this guest network to the domain you specify. The network will not be available for other domains. If you select Subdomain Access, the guest network is available to all the sub domains within the selected domain.
      • Account: The account for which the guest network is being created. You must specify the domain the account belongs to.
      • Project: The project for which the guest network is being created. You must specify the domain the project belongs to.
      • All: The guest network is available to all the domains, accounts, and projects within the selected zone.
    • Network Offering: If the administrator has configured multiple network offerings, select the one you want to use for this network.

    • Gateway: The gateway that the guests should use.

    • Netmask: The netmask in use on the subnet the guests will use.

    • IP Range: A range of IP addresses that are accessible from the Internet and are assigned to the guest VMs.

    • Network Domain: A custom DNS suffix at the level of a network. If you want to assign a special domain name to the guest VM network, specify a DNS suffix.

  11. Click OK to confirm.

Security Groups

About Security Groups

Security groups provide a way to isolate traffic to VMs. A security group is a group of VMs that filter their incoming and outgoing traffic according to a set of rules, called ingress and egress rules. These rules filter network traffic according to the IP address that is attempting to communicate with the VM. Security groups are particularly useful in zones that use basic networking, because there is a single guest network for all guest VMs. In advanced zones, security groups are supported only on the KVM hypervisor.

Note

In a zone that uses advanced networking, you can instead define multiple guest networks to isolate traffic to VMs.

Each CloudStack account comes with a default security group that denies all inbound traffic and allows all outbound traffic. The default security group can be modified so that all new VMs inherit some other desired set of rules.

Any CloudStack user can set up any number of additional security groups. When a new VM is launched, it is assigned to the default security group unless another user-defined security group is specified. A VM can be a member of any number of security groups. Once a VM is assigned to a security group, it remains in that group for its entire lifetime; you cannot move a running VM from one security group to another.

You can modify a security group by deleting or adding any number of ingress and egress rules. When you do, the new rules apply to all VMs in the group, whether running or stopped.

If no ingress rules are specified, then no traffic will be allowed in, except for responses to any traffic that has been allowed out through an egress rule.

Adding a Security Group

A user or administrator can define a new security group.

  1. Log in to the CloudStack UI as an administrator or end user.

  2. In the left navigation, choose Network.

  3. In Select view, choose Security Groups.

  4. Click Add Security Group.

  5. Provide a name and description.

  6. Click OK.

    The new security group appears in the Security Groups Details tab.

  7. To make the security group useful, continue to Adding Ingress and Egress Rules to a Security Group.
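The same group can be created through the API. Below is a sketch using the standard createSecurityGroup command; the name and description are example values, and the endpoint is the unauthenticated integration port used elsewhere in this guide:

```python
from urllib.parse import urlencode

# Sketch: create a security group via the API instead of the UI.
# The name and description mirror step 5 and are example values.
params = {
    "command": "createSecurityGroup",
    "response": "json",
    "name": "web-servers",
    "description": "HTTP front ends",
}
url = "http://localhost:8096/client/api?" + urlencode(params)
print(url)
```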

Security Groups in Advanced Zones (KVM Only)

CloudStack provides the ability to use security groups to provide isolation between guests on a single shared, zone-wide network in an advanced zone where KVM is the hypervisor. Using security groups in advanced zones rather than multiple VLANs allows a greater range of options for setting up guest isolation in a cloud.

Limitations

The following are not supported for this feature:

  • Two IP ranges with the same VLAN and different gateway or netmask in security group-enabled shared network.
  • Two IP ranges with the same VLAN and different gateway or netmask in account-specific shared networks.
  • Multiple VLAN ranges in security group-enabled shared network.
  • Multiple VLAN ranges in account-specific shared networks.

Security groups must be enabled in the zone in order for this feature to be used.

Enabling Security Groups

In order for security groups to function in a zone, the security groups feature must first be enabled for the zone. The administrator can do this when creating a new zone, by selecting a network offering that includes security groups. The procedure is described in Basic Zone Configuration in the Advanced Installation Guide. The administrator cannot enable security groups for an existing zone; security groups can be enabled only when creating a new zone.

Adding Ingress and Egress Rules to a Security Group
  1. Log in to the CloudStack UI as an administrator or end user.

  2. In the left navigation, choose Network.

  3. In Select view, choose Security Groups, then click the security group you want.

  4. To add an ingress rule, click the Ingress Rules tab and fill out the following fields to specify what network traffic is allowed into VM instances in this security group. If no ingress rules are specified, then no traffic will be allowed in, except for responses to any traffic that has been allowed out through an egress rule.

    • Add by CIDR/Account. Indicate whether the source of the traffic will be defined by IP address (CIDR) or an existing security group in a CloudStack account (Account). Choose Account if you want to allow incoming traffic from all VMs in another security group.
    • Protocol. The networking protocol that sources will use to send traffic to the security group. TCP and UDP are typically used for data exchange and end-user communications. ICMP is typically used to send error messages or network monitoring data.
    • Start Port, End Port. (TCP, UDP only) A range of listening ports that are the destination for the incoming traffic. If you are opening a single port, use the same number in both fields.
    • ICMP Type, ICMP Code. (ICMP only) The type of message and error code that will be accepted.
    • CIDR. (Add by CIDR only) To accept only traffic from IP addresses within a particular address block, enter a CIDR or a comma-separated list of CIDRs. The CIDR is the base IP address of the incoming traffic. For example, 192.168.0.0/22. To allow all CIDRs, set to 0.0.0.0/0.
    • Account, Security Group. (Add by Account only) To accept only traffic from another security group, enter the CloudStack account and name of a security group that has already been defined in that account. To allow traffic between VMs within the security group you are editing now, enter the same name you used in step 7.

    The following example allows inbound HTTP access from anywhere:

    allows inbound HTTP access from anywhere.

  5. To add an egress rule, click the Egress Rules tab and fill out the following fields to specify what type of traffic is allowed to be sent out of VM instances in this security group. If no egress rules are specified, then all traffic will be allowed out. Once egress rules are specified, the following types of traffic are allowed out: traffic specified in egress rules; queries to DNS and DHCP servers; and responses to any traffic that has been allowed in through an ingress rule.

    • Add by CIDR/Account. Indicate whether the destination of the traffic will be defined by IP address (CIDR) or an existing security group in a CloudStack account (Account). Choose Account if you want to allow outgoing traffic to all VMs in another security group.
    • Protocol. The networking protocol that VMs will use to send outgoing traffic. TCP and UDP are typically used for data exchange and end-user communications. ICMP is typically used to send error messages or network monitoring data.
    • Start Port, End Port. (TCP, UDP only) A range of listening ports that are the destination for the outgoing traffic. If you are opening a single port, use the same number in both fields.
    • ICMP Type, ICMP Code. (ICMP only) The type of message and error code that will be sent.
    • CIDR. (Add by CIDR only) To send traffic only to IP addresses within a particular address block, enter a CIDR or a comma-separated list of CIDRs. The CIDR is the base IP address of the destination. For example, 192.168.0.0/22. To allow all CIDRs, set to 0.0.0.0/0.
    • Account, Security Group. (Add by Account only) To allow traffic to be sent to another security group, enter the CloudStack account and name of a security group that has already been defined in that account. To allow traffic between VMs within the security group you are editing now, enter its name.
  6. Click Add.
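The "inbound HTTP access from anywhere" example above corresponds to an authorizeSecurityGroupIngress API call along these lines (the group name is a placeholder, and the endpoint is the unauthenticated integration port used elsewhere in this guide):

```python
from urllib.parse import urlencode

# Sketch: the inbound-HTTP-from-anywhere ingress rule from the text,
# expressed as an authorizeSecurityGroupIngress call. The security
# group name is a placeholder.
params = {
    "command": "authorizeSecurityGroupIngress",
    "response": "json",
    "securitygroupname": "web-servers",
    "protocol": "TCP",
    "startport": 80,
    "endport": 80,
    "cidrlist": "0.0.0.0/0",   # allow all source CIDRs
}
url = "http://localhost:8096/client/api?" + urlencode(params)
print(url)
```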

External Firewalls and Load Balancers

CloudStack is capable of replacing its Virtual Router with an external Juniper SRX device and an optional external NetScaler or F5 load balancer for gateway and load balancing services. In this case, the VMs use the SRX as their gateway.

About Using a NetScaler Load Balancer

Citrix NetScaler is supported as an external network element for load balancing in zones that use isolated networking in advanced zones. Set up an external load balancer when you want to provide load balancing through means other than CloudStack’s provided virtual router.

Note

In a Basic zone, load balancing service is supported only if Elastic IP or Elastic LB services are enabled.

When a NetScaler load balancer is used to provide EIP or ELB services in a Basic zone, ensure that all guest VM traffic enters and exits through the NetScaler device. When inbound traffic goes through the NetScaler device, traffic is routed by using NAT, depending on the EIP/ELB configured on the public IP, to the private IP. The traffic that originates from the guest VMs usually goes through the layer 3 router. To ensure that outbound traffic goes through the NetScaler device providing EIP/ELB, the layer 3 router must have policy-based routing. A policy-based route must be set up so that all traffic originating from the guest VMs is directed to the NetScaler device. This is required to ensure that the outbound traffic from the guest VMs is routed to a public IP by using NAT. For more information on Elastic IP, see “About Elastic IP”.

The NetScaler can be set up in direct (outside the firewall) mode. It must be added before any load balancing rules are deployed on guest VMs in the zone.

The functional behavior of the NetScaler with CloudStack is the same as described in the CloudStack documentation for using an F5 external load balancer. The only exception is that the F5 supports routing domains, and NetScaler does not. NetScaler can not yet be used as a firewall.

To install and enable an external load balancer for CloudStack management, see External Guest Load Balancer Integration in the Installation Guide.

The Citrix NetScaler comes in three varieties. The following summarizes how these variants are treated in CloudStack.

MPX

  • Physical appliance. Capable of deep packet inspection. Can act as application firewall and load balancer
  • In advanced zones, load balancer functionality fully supported without limitation. In basic zones, static NAT, elastic IP (EIP), and elastic load balancing (ELB) are also provided.

VPX

  • Virtual appliance. Can run as VM on XenServer, ESXi, and Hyper-V hypervisors. Same functionality as MPX
  • Supported on ESXi and XenServer. Same functional support as for MPX. CloudStack will treat VPX and MPX as the same device type.

SDX

  • Physical appliance. Can create multiple fully isolated VPX instances on a single appliance to support multi-tenant usage
  • CloudStack will dynamically provision, configure, and manage the life cycle of VPX instances on the SDX. Provisioned instances are added into CloudStack automatically - no manual configuration by the administrator is required. Once a VPX instance is added into CloudStack, it is treated the same as a VPX on an ESXi host.

Configuring SNMP Community String on a RHEL Server

The SNMP community string is similar to a user id or password that provides access to a network device, such as a router. This string is sent along with all SNMP requests. If the community string is correct, the device responds with the requested information. If the community string is incorrect, the device discards the request and does not respond.

The NetScaler device uses SNMP to communicate with the VMs. You must install SNMP and configure the SNMP community string for secure communication between the NetScaler device and the RHEL machine.

  1. Ensure that SNMP is installed on the RHEL server. If not, run the following command:

    yum install net-snmp-utils
    
  2. Edit the /etc/snmp/snmpd.conf file to allow the SNMP polling from the NetScaler device.

    1. Map the community name into a security name (local and mynetwork, depending on where the request is coming from):

      Note

      Use a strong password instead of public when you edit the following table.

      #         sec.name   source        community
      com2sec   local      localhost     public
      com2sec   mynetwork  0.0.0.0       public
      

      Note

      Setting to 0.0.0.0 allows all IPs to poll the NetScaler server.

    2. Map the security names into group names:

      #       group.name    sec.model  sec.name
      group   MyRWGroup     v1         local
      group   MyRWGroup     v2c        local
      group   MyROGroup     v1         mynetwork
      group   MyROGroup     v2c        mynetwork
      
    3. Create a view that the groups can be granted access to:

      #      name  incl/excl  subtree
      view   all   included   .1
      
    4. Grant the two groups access to the view you created, with different write permissions:

      # context     sec.model     sec.level      prefix     read     write    notif
        access      MyROGroup ""  any noauth     exact      all      none     none
        access      MyRWGroup ""  any noauth     exact      all      all      all
      
  3. Unblock SNMP in iptables.

    iptables -A INPUT -p udp --dport 161 -j ACCEPT
    
  4. Start the SNMP service:

    service snmpd start
    
  5. Ensure that the SNMP service is started automatically during the system startup:

    chkconfig snmpd on
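
As the notes above advise, replace public with a strong community string. The helper below is purely illustrative: it renders the com2sec mapping from step 2 with a community string of your own choosing.

```python
# Illustrative only: render the com2sec stanza from the steps above
# with a non-default community string, per the "strong password" note.
def com2sec_stanza(community):
    rows = [("local", "localhost"), ("mynetwork", "0.0.0.0")]
    lines = ["#         sec.name   source        community"]
    for name, source in rows:
        lines.append(f"com2sec   {name:<10} {source:<13} {community}")
    return "\n".join(lines)

print(com2sec_stanza("s3cr3t-string"))
```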
    
Initial Setup of External Firewalls and Load Balancers

When the first VM is created for a new account, CloudStack programs the external firewall and load balancer to work with the VM. The following objects are created on the firewall:

  • A new logical interface to connect to the account’s private VLAN. The interface IP is always the first IP of the account’s private subnet (e.g. 10.1.1.1).
  • A source NAT rule that forwards all outgoing traffic from the account’s private VLAN to the public Internet, using the account’s public IP address as the source address
  • A firewall filter counter that measures the number of bytes of outgoing traffic for the account

The following objects are created on the load balancer:

  • A new VLAN that matches the account’s provisioned Zone VLAN
  • A self IP for the VLAN. This is always the second IP of the account’s private subnet (e.g. 10.1.1.2).

Ongoing Configuration of External Firewalls and Load Balancers

Additional user actions (e.g. setting a port forward) will cause further programming of the firewall and load balancer. A user may request additional public IP addresses and forward traffic received at these IPs to specific VMs. This is accomplished by enabling static NAT for a public IP address, assigning the IP to a VM, and specifying a set of protocols and port ranges to open. When a static NAT rule is created, CloudStack programs the zone’s external firewall with the following objects:

  • A static NAT rule that maps the public IP address to the private IP address of a VM.
  • A security policy that allows traffic within the set of protocols and port ranges that are specified.
  • A firewall filter counter that measures the number of bytes of incoming traffic to the public IP.

The number of incoming and outgoing bytes through source NAT, static NAT, and load balancing rules is measured and saved on each external element. This data is collected on a regular basis and stored in the CloudStack database.

Load Balancer Rules

A CloudStack user or administrator may create load balancing rules that balance traffic received at a public IP to one or more VMs. A user creates a rule, specifies an algorithm, and assigns the rule to a set of VMs.

Note

If you create load balancing rules while using a network service offering that includes an external load balancer device such as NetScaler, and later change the network service offering to one that uses the CloudStack virtual router, you must create a firewall rule on the virtual router for each of your existing load balancing rules so that they continue to function.

Adding a Load Balancer Rule
  1. Log in to the CloudStack UI as an administrator or end user.

  2. In the left navigation, choose Network.

  3. Click the name of the network where you want to load balance the traffic.

  4. Click View IP Addresses.

  5. Click the IP address for which you want to create the rule, then click the Configuration tab.

  6. In the Load Balancing node of the diagram, click View All.

    In a Basic zone, you can also create a load balancing rule without acquiring or selecting an IP address. CloudStack internally assigns an IP when you create the load balancing rule, and the IP is listed in the IP Addresses page when the rule is created.

    To do that, select the name of the network, then click the Add Load Balancer tab. Continue with step 7.

  7. Fill in the following:

    • Name: A name for the load balancer rule.
    • Public Port: The port receiving incoming traffic to be balanced.
    • Private Port: The port that the VMs will use to receive the traffic.
    • Algorithm: Choose the load balancing algorithm you want CloudStack to use. CloudStack supports a variety of well-known algorithms. If you are not familiar with these choices, you will find plenty of information about them on the Internet.
    • Stickiness: (Optional) Click Configure and choose the algorithm for the stickiness policy. See Sticky Session Policies for Load Balancer Rules.
    • AutoScale: Click Configure and complete the AutoScale configuration as explained in Configuring AutoScale.
    • Health Check: (Optional; NetScaler load balancers only) Click Configure and fill in the characteristics of the health check policy. See Health Checks for Load Balancer Rules.
      • Ping path (Optional): Sequence of destinations to which to send health check queries. Default: / (all).
      • Response time (Optional): How long to wait for a response from the health check (2 - 60 seconds). Default: 5 seconds.
      • Interval time (Optional): Amount of time between health checks (1 second - 5 minutes). Default value is set in the global configuration parameter lbrule_health check_time_interval.
      • Healthy threshold (Optional): Number of consecutive health check successes that are required before declaring an instance healthy. Default: 2.
      • Unhealthy threshold (Optional): Number of consecutive health check failures that are required before declaring an instance unhealthy. Default: 10.
  8. Click Add VMs, then select two or more VMs that will divide the load of incoming traffic, and click Apply.

    The new load balancer rule appears in the list. You can repeat these steps to add more load balancer rules for this IP address.
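The UI steps above map onto the CloudStack API: createLoadBalancerRule creates the rule and assignToLoadBalancerRule attaches VMs to it. The following is a minimal sketch of assembling the request parameters; all IDs and values are hypothetical, and signing and sending the request are omitted. Check parameter names against the API reference for your release.

```python
# All IDs and values below are hypothetical; see the CloudStack API
# reference for the authoritative parameter list.
def lb_rule_params(name, public_port, private_port, algorithm, public_ip_id):
    """Assemble parameters for the createLoadBalancerRule API call
    that backs the Add Load Balancer dialog."""
    return {
        "command": "createLoadBalancerRule",
        "name": name,
        "publicport": public_port,    # port receiving incoming traffic
        "privateport": private_port,  # port the VMs listen on
        "algorithm": algorithm,       # e.g. roundrobin, leastconn, source
        "publicipid": public_ip_id,   # the acquired public IP
        "response": "json",
    }

params = lb_rule_params("web-lb", 80, 8080, "roundrobin", "ipid-1234")
# Step 8 (Add VMs) corresponds to assignToLoadBalancerRule with the
# rule id returned above and a comma-separated virtualmachineids list.
```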

Sticky Session Policies for Load Balancer Rules

Sticky sessions are used in Web-based applications to ensure continued availability of information across the multiple requests in a user’s session. For example, if a shopper is filling a cart, you need to remember what has been purchased so far. The concept of “stickiness” is also referred to as persistence or maintaining state.

Any load balancer rule defined in CloudStack can have a stickiness policy. The policy consists of a name, stickiness method, and parameters. The parameters are name-value pairs or flags, which are defined by the load balancer vendor. The stickiness method could be load balancer-generated cookie, application-generated cookie, or source-based. In the source-based method, the source IP address is used to identify the user and locate the user’s stored data. In the other methods, cookies are used. The cookie generated by the load balancer or application is included in request and response URLs to create persistence. The cookie name can be specified by the administrator or automatically generated. A variety of options are provided to control the exact behavior of cookies, such as how they are generated and whether they are cached.

For the most up to date list of available stickiness methods, see the CloudStack UI or call listNetworks and check the SupportedStickinessMethods capability.
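As a sketch of checking the SupportedStickinessMethods capability programmatically, the snippet below parses an illustrative fragment of a listNetworks response. The response shape shown here is an assumption for the example; inspect a real response from your Management Server before relying on it.

```python
import json

# Illustrative fragment of a listNetworks response; the exact shape of
# your response may differ, so treat this as a sketch.
sample_network = {
    "service": [
        {
            "name": "Lb",
            "capability": [
                {
                    "name": "SupportedStickinessMethods",
                    "value": json.dumps([
                        {"methodname": "LbCookie"},
                        {"methodname": "AppCookie"},
                        {"methodname": "SourceBased"},
                    ]),
                }
            ],
        }
    ]
}

def stickiness_methods(network):
    """Extract the stickiness method names advertised by the Lb service."""
    for service in network.get("service", []):
        for capability in service.get("capability", []):
            if capability["name"] == "SupportedStickinessMethods":
                return [m["methodname"] for m in json.loads(capability["value"])]
    return []
```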

Health Checks for Load Balancer Rules

(NetScaler load balancer only; requires NetScaler version 10.0)

Health checks are used in load-balanced applications to ensure that requests are forwarded only to running, available services. When creating a load balancer rule, you can specify a health check policy. This is in addition to specifying the stickiness policy, algorithm, and other load balancer rule options. You can configure one health check policy per load balancer rule.

Any load balancer rule defined on a NetScaler load balancer in CloudStack can have a health check policy. The policy consists of a ping path, thresholds to define “healthy” and “unhealthy” states, health check frequency, and timeout wait interval.

When a health check policy is in effect, the load balancer will stop forwarding requests to any resources that are found to be unhealthy. If the resource later becomes available again, the periodic health check will discover it, and the resource will once again be added to the pool of resources that can receive requests from the load balancer. At any given time, the most recent result of the health check is displayed in the UI. For any VM that is attached to a load balancer rule with a health check configured, the state will be shown as UP or DOWN in the UI depending on the result of the most recent health check.

You can delete or modify existing health check policies.

To configure how often the health check is performed by default, use the global configuration setting healthcheck.update.interval (default value is 600 seconds). You can override this value for an individual health check policy.

For details on how to set a health check policy using the UI, see Adding a Load Balancer Rule.
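The health check dialog corresponds to the createLBHealthCheckPolicy API call. A sketch of assembling its parameters, using the defaults listed above; the rule id is hypothetical and the parameter names should be verified against the API reference:

```python
# Defaults mirror the dialog described above; rule_id is hypothetical
# and parameter names should be checked against the API reference.
def health_check_params(rule_id, ping_path="/", response_timeout=5,
                        healthy_threshold=2, unhealthy_threshold=10):
    """Assemble parameters for createLBHealthCheckPolicy, the API call
    behind the Health Check dialog."""
    if not 2 <= response_timeout <= 60:
        raise ValueError("response time must be 2-60 seconds")
    return {
        "command": "createLBHealthCheckPolicy",
        "lbruleid": rule_id,
        "pingpath": ping_path,
        "responsetimeout": response_timeout,
        "healthythreshold": healthy_threshold,
        "unhealthythreshold": unhealthy_threshold,
    }
```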

Configuring AutoScale

AutoScaling allows you to scale your back-end services or application VMs up or down seamlessly and automatically according to the conditions you define. With AutoScaling enabled, you can ensure that the number of VMs you are using scales up seamlessly when demand increases and decreases automatically when demand subsides. Thus it helps you save compute costs by terminating underused VMs automatically and launching new VMs when you need them, without the need for manual intervention.

NetScaler AutoScaling is designed to seamlessly launch or terminate VMs based on user-defined conditions. Conditions for triggering a scaleup or scaledown action can vary from a simple use case like monitoring the CPU usage of a server to a complex use case of monitoring a combination of server’s responsiveness and its CPU usage. For example, you can configure AutoScaling to launch an additional VM whenever CPU usage exceeds 80 percent for 15 minutes, or to remove a VM whenever CPU usage is less than 20 percent for 30 minutes.

CloudStack uses the NetScaler load balancer to monitor all aspects of a system’s health; the NetScaler works in unison with CloudStack to initiate scale-up or scale-down actions.

Note

AutoScale is supported on NetScaler Release 10 Build 74.4006.e and beyond.

Prerequisites

Before you configure an AutoScale rule, consider the following:

  • Ensure that the necessary template is prepared before configuring AutoScale. When a VM is deployed by using a template and when it comes up, the application should be up and running.

    Note

    If the application is not running, the NetScaler device considers the VM as ineffective and continues provisioning the VMs unconditionally until the resource limit is exhausted.

  • Deploy the templates you prepared. Ensure that the applications come up on the first boot and are ready to take traffic. Observe the time required to deploy the template, and consider this time when you specify the quiet time while configuring AutoScale.

  • The AutoScale feature supports SNMP counters that can be used to define conditions for taking scaleup or scaledown actions. To monitor an SNMP-based counter, ensure that the SNMP agent is installed in the template used for creating the AutoScale VMs, and that the SNMP operations work with the configured SNMP community and port by using standard SNMP managers. For example, see “Configuring SNMP Community String on a RHEL Server” to configure SNMP on a RHEL machine.

  • Ensure that the endpointe.url parameter present in the Global Settings is set to the Management Server API URL. For example, http://10.102.102.22:8080/client/api. In a multi-node Management Server deployment, use the virtual IP address configured in the load balancer for the management server’s cluster. Additionally, ensure that the NetScaler device has access to this IP address to provide AutoScale support.

    If you update the endpointe.url, disable the AutoScale functionality of the load balancer rules in the system, then enable them back to reflect the changes. For more information see Updating an AutoScale Configuration.

  • If the API Key and Secret Key are regenerated for an AutoScale user, ensure that the AutoScale functionality of the load balancers that the user participates in are disabled and then enabled to reflect the configuration changes in the NetScaler.

  • In an advanced Zone, ensure that at least one VM is present before configuring a load balancer rule with AutoScale. Having one VM in the network ensures that the network is in the Implemented state for configuring AutoScale.
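Requests sent to the Management Server API URL referenced above must be signed with the caller's API key and secret key. The following is a sketch of the standard CloudStack request-signing scheme, using hypothetical keys and omitting the HTTP call itself: sort the parameters, lowercase the URL-encoded query string, HMAC-SHA1 it with the secret key, and base64-encode the digest.

```python
import base64
import hashlib
import hmac
import urllib.parse

def sign_request(params, secret_key):
    """Build the CloudStack API signature for a parameter dict."""
    # 1. Sort parameters by name and URL-encode each value.
    query = "&".join(
        f"{key}={urllib.parse.quote(str(value), safe='')}"
        for key, value in sorted(params.items())
    )
    # 2. Lowercase the whole query string, HMAC-SHA1 it with the
    #    secret key, and base64-encode the digest.
    digest = hmac.new(secret_key.encode("utf-8"),
                      query.lower().encode("utf-8"),
                      hashlib.sha1).digest()
    return base64.b64encode(digest).decode("utf-8")

# Hypothetical keys; a real request appends signature=<url-encoded result>.
params = {"command": "listNetworks", "apikey": "HYPOTHETICAL-KEY", "response": "json"}
signature = sign_request(params, "HYPOTHETICAL-SECRET")
```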

Configuration

Specify the following:


  • Template: A template consists of a base OS image and application. A template is used to provision the new instance of an application on a scaleup action. When a VM is deployed from a template, the VM can start taking the traffic from the load balancer without any admin intervention. For example, if the VM is deployed for a Web service, it should have the Web server running, the database connected, and so on.

  • Compute offering: A predefined set of virtual hardware attributes, including CPU speed, number of CPUs, and RAM size, that the user can select when creating a new virtual machine instance. Choose one of the compute offerings to be used while provisioning a VM instance as part of scaleup action.

  • Min Instance: The minimum number of active VM instances that is assigned to a load balancing rule. The active VM instances are the application instances that are up, serving the traffic, and being load balanced. This parameter ensures that a load balancing rule has at least the configured number of active VM instances available to serve the traffic.

    Note

    If an application, such as SAP, running on a VM instance is down for some reason, the VM is not counted as part of the Min Instance parameter, and the AutoScale feature initiates a scaleup action if the number of active VM instances falls below the configured value. Similarly, when an application instance comes up from its earlier down state, this application instance is counted as part of the active instance count, and the AutoScale process initiates a scaledown action when the active instance count breaches the Max Instance value.

  • Max Instance: Maximum number of active VM instances that should be assigned to a load balancing rule. This parameter defines the upper limit of active VM instances that can be assigned to a load balancing rule.

    Specifying a large value for the maximum instance parameter might result in provisioning a large number of VM instances, which in turn can lead to a single load balancing rule exhausting the VM instance limit specified at the account or domain level.

    Note

    If an application, such as SAP, running on a VM instance is down for some reason, the VM is not counted as part of the Max Instance parameter. So there may be scenarios where the number of VMs provisioned for a scaleup action is more than the configured Max Instance value. Once the application instances in the VMs are up from an earlier down state, the AutoScale feature starts aligning to the configured Max Instance value.

Specify the following scale-up and scale-down policies:

  • Duration: The duration, in seconds, for which the conditions you specify must be true to trigger a scaleup action. The conditions defined should hold true for the entire duration you specify for an AutoScale action to be invoked.
  • Counter: The performance counters expose the state of the monitored instances. By default, CloudStack offers four performance counters: Three SNMP counters and one NetScaler counter. The SNMP counters are Linux User CPU, Linux System CPU, and Linux CPU Idle. The NetScaler counter is ResponseTime. The root administrator can add additional counters into CloudStack by using the CloudStack API.
  • Operator: The following five relational operators are supported in AutoScale feature: Greater than, Less than, Less than or equal to, Greater than or equal to, and Equal to.
  • Threshold: Threshold value to be used for the counter. Once the counter defined above breaches the threshold value, the AutoScale feature initiates a scaleup or scaledown action.
  • Add: Click Add to add the condition.

Additionally, if you want to configure the advanced settings, click Show advanced settings, and specify the following:

  • Polling interval: The frequency at which the conditions (a combination of counter, operator, and threshold) are evaluated before taking a scaleup or scaledown action. The default polling interval is 30 seconds.
  • Quiet Time: This is the cool down period after an AutoScale action is initiated. The time includes the time taken to complete provisioning a VM instance from its template and the time taken by an application to be ready to serve traffic. This quiet time allows the fleet to come up to a stable state before any action can take place. The default is 300 seconds.
  • Destroy VM Grace Period: The duration in seconds, after a scaledown action is initiated, to wait before the VM is destroyed as part of scaledown action. This is to ensure graceful close of any pending sessions or transactions being served by the VM marked for destroy. The default is 120 seconds.
  • Security Groups: Security groups provide a way to isolate traffic to the VM instances. A security group is a group of VMs that filter their incoming and outgoing traffic according to a set of rules, called ingress and egress rules. These rules filter network traffic according to the IP address that is attempting to communicate with the VM.
  • Disk Offerings: A predefined set of disk size for primary data storage.
  • SNMP Community: The SNMP community string to be used by the NetScaler device to query the configured counter value from the provisioned VM instances. Default is public.
  • SNMP Port: The port number on which the SNMP agent that runs on the provisioned VMs is listening. Default port is 161.
  • User: This is the user that the NetScaler device uses to invoke scaleup and scaledown API calls to the cloud. If no option is specified, the user who configures AutoScaling is used. Specify another user name to override.
  • Apply: Click Apply to create the AutoScale configuration.
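The interaction of Duration, Counter, Operator, and Threshold can be illustrated with a toy evaluator. This is not CloudStack's internal code; it only demonstrates the semantics described above: the condition must hold for every sample across the configured duration before a scaleup or scaledown action is triggered.

```python
# A toy evaluator, not CloudStack's internal code: it only illustrates
# that a condition must hold for the entire duration to trigger an action.
OPERATORS = {
    "GT": lambda value, threshold: value > threshold,
    "LT": lambda value, threshold: value < threshold,
    "GE": lambda value, threshold: value >= threshold,
    "LE": lambda value, threshold: value <= threshold,
    "EQ": lambda value, threshold: value == threshold,
}

def condition_holds(samples, operator, threshold):
    """True only if every polled sample breaches the threshold, i.e. the
    condition held across the whole configured duration."""
    return all(OPERATORS[operator](sample, threshold) for sample in samples)

# CPU-usage samples polled every 30 seconds over a 2-minute duration.
scale_up = condition_holds([85, 91, 88, 84], "GT", 80)
```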
Disabling and Enabling an AutoScale Configuration

If you want to perform any maintenance operation on the AutoScale VM instances, disable the AutoScale configuration. When the AutoScale configuration is disabled, no scaleup or scaledown action is performed. You can use this downtime for the maintenance activities. To disable the AutoScale configuration, click the Disable AutoScale button.

The button toggles between enable and disable, depending on whether AutoScale is currently enabled. After the maintenance operations are done, you can enable the AutoScale configuration again. To enable it, open the AutoScale configuration page again, then click the Enable AutoScale button.

Updating an AutoScale Configuration

You can update the various parameters and add or delete the conditions in a scaleup or scaledown rule. Before you update an AutoScale configuration, ensure that you disable the AutoScale load balancer rule by clicking the Disable AutoScale button.

After you modify the required AutoScale parameters, click Apply. To apply the new AutoScale policies, open the AutoScale configuration page again, then click the Enable AutoScale button.

Runtime Considerations
  • An administrator should not assign a VM to a load balancing rule which is configured for AutoScale.
  • If the NetScaler is shut down or restarted before VM provisioning completes, the provisioned VM cannot become part of the load balancing rule even though the intent was to assign it to one. As a workaround, rename the AutoScale provisioned VMs based on the rule name or ID so that at any point in time the VMs can be reconciled to their load balancing rule.
  • Making API calls outside the context of AutoScale, such as destroyVM, on an autoscaled VM leaves the load balancing configuration in an inconsistent state. Though the VM is destroyed from the load balancer rule, NetScaler continues to show the VM as a service assigned to the rule.

Global Server Load Balancing Support

CloudStack supports Global Server Load Balancing (GSLB) functionality to provide business continuity and enable seamless resource movement within a CloudStack environment. CloudStack achieves this by extending its integration with the NetScaler Application Delivery Controller (ADC), which also provides various GSLB capabilities, such as disaster recovery and load balancing. The DNS redirection technique is used to achieve GSLB in CloudStack.

To support this functionality, region-level services and service providers are introduced. A new service, ‘GSLB’, is introduced as a region-level service, along with a GSLB service provider that provides the GSLB service. Currently, NetScaler is the supported GSLB provider in CloudStack. GSLB functionality works in an Active-Active data center environment.

About Global Server Load Balancing

Global Server Load Balancing (GSLB) is an extension of load balancing functionality, which is highly efficient in avoiding downtime. Based on the nature of deployment, GSLB represents a set of technologies that is used for various purposes, such as load sharing, disaster recovery, performance, and legal obligations. With GSLB, workloads can be distributed across multiple data centers situated at geographically separated locations. GSLB can also provide an alternate location for accessing a resource in the event of a failure, or to provide a means of shifting traffic easily to simplify maintenance, or both.

Components of GSLB

A typical GSLB environment is comprised of the following components:

  • GSLB Site: In CloudStack terminology, GSLB sites are represented by zones that are mapped to data centers, each of which has various network appliances. Each GSLB site is managed by a NetScaler appliance that is local to that site. Each of these appliances treats its own site as the local site and all other sites, managed by other appliances, as remote sites. It is the central entity in a GSLB deployment, and is represented by a name and an IP address.
  • GSLB Services: A GSLB service is typically represented by a load balancing or content switching virtual server. In a GSLB environment, you can have a local as well as remote GSLB services. A local GSLB service represents a local load balancing or content switching virtual server. A remote GSLB service is the one configured at one of the other sites in the GSLB setup. At each site in the GSLB setup, you can create one local GSLB service and any number of remote GSLB services.
  • GSLB Virtual Servers: A GSLB virtual server refers to one or more GSLB services and balances traffic across the VMs in multiple zones by using the CloudStack functionality. It evaluates the configured GSLB methods or algorithms to select a GSLB service to which to send the client requests. One or more virtual servers from different zones are bound to the GSLB virtual server. A GSLB virtual server does not have a public IP associated with it; instead, it has an FQDN DNS name.
  • Load Balancing or Content Switching Virtual Servers: According to Citrix NetScaler terminology, a load balancing or content switching virtual server represents one or many servers on the local network. Clients send their requests to the load balancing or content switching virtual server’s virtual IP (VIP) address, and the virtual server balances the load across the local servers. After a GSLB virtual server selects a GSLB service representing either a local or a remote load balancing or content switching virtual server, the client sends the request to that virtual server’s VIP address.
  • DNS VIPs: DNS virtual IP represents a load balancing DNS virtual server on the GSLB service provider. The DNS requests for domains for which the GSLB service provider is authoritative can be sent to a DNS VIP.
  • Authoritative DNS: ADNS (Authoritative Domain Name Server) is a service that provides actual answer to DNS queries, such as web site IP address. In a GSLB environment, an ADNS service responds only to DNS requests for domains for which the GSLB service provider is authoritative. When an ADNS service is configured, the service provider owns that IP address and advertises it. When you create an ADNS service, the NetScaler responds to DNS queries on the configured ADNS service IP and port.
How Does GSLB Work in CloudStack?

Global server load balancing is used to manage the traffic flow to a web site hosted on two separate zones that ideally are in different geographic locations. The following is an illustration of how GLSB functionality is provided in CloudStack: An organization, xyztelco, has set up a public cloud that spans two zones, Zone-1 and Zone-2, across geographically separated data centers that are managed by CloudStack. Tenant-A of the cloud launches a highly available solution by using xyztelco cloud. For that purpose, they launch two instances each in both the zones: VM1 and VM2 in Zone-1 and VM5 and VM6 in Zone-2. Tenant-A acquires a public IP, IP-1 in Zone-1, and configures a load balancer rule to load balance the traffic between VM1 and VM2 instances. CloudStack orchestrates setting up a virtual server on the LB service provider in Zone-1. Virtual server 1 that is set up on the LB service provider in Zone-1 represents a publicly accessible virtual server that client reaches at IP-1. The client traffic to virtual server 1 at IP-1 will be load balanced across VM1 and VM2 instances.

Tenant-A acquires another public IP, IP-2 in Zone-2 and sets up a load balancer rule to load balance the traffic between VM5 and VM6 instances. Similarly in Zone-2, CloudStack orchestrates setting up a virtual server on the LB service provider. Virtual server 2 that is setup on the LB service provider in Zone-2 represents a publicly accessible virtual server that client reaches at IP-2. The client traffic that reaches virtual server 2 at IP-2 is load balanced across VM5 and VM6 instances. At this point Tenant-A has the service enabled in both the zones, but has no means to set up a disaster recovery plan if one of the zone fails. Additionally, there is no way for Tenant-A to load balance the traffic intelligently to one of the zones based on load, proximity and so on. The cloud administrator of xyztelco provisions a GSLB service provider to both the zones. A GSLB provider is typically an ADC that has the ability to act as an ADNS (Authoritative Domain Name Server) and has the mechanism to monitor health of virtual servers both at local and remote sites. The cloud admin enables GSLB as a service to the tenants that use zones 1 and 2.

GSLB architecture

Tenant-A wishes to leverage the GSLB service provided by the xyztelco cloud. Tenant-A configures a GSLB rule to load balance traffic across virtual server 1 at Zone-1 and virtual server 2 at Zone-2. The domain name is provided as A.xyztelco.com. CloudStack orchestrates setting up GSLB virtual server 1 on the GSLB service provider at Zone-1. CloudStack binds virtual server 1 of Zone-1 and virtual server 2 of Zone-2 to GSLB virtual server 1. GSLB virtual server 1 is configured to start monitoring the health of virtual servers 1 and 2 in Zone-1. CloudStack also orchestrates setting up GSLB virtual server 2 on the GSLB service provider at Zone-2, and binds virtual server 1 of Zone-1 and virtual server 2 of Zone-2 to GSLB virtual server 2. GSLB virtual server 2 is configured to start monitoring the health of virtual servers 1 and 2. CloudStack binds the domain A.xyztelco.com to both GSLB virtual servers 1 and 2. At this point, Tenant-A’s service is globally reachable at A.xyztelco.com. The private DNS server for the domain xyztelco.com is configured by the admin out-of-band to resolve the domain A.xyztelco.com to the GSLB providers at both zones, which are configured as ADNS for the domain A.xyztelco.com. When a client sends a DNS request to resolve A.xyztelco.com, it eventually gets a DNS delegation to the addresses of the GSLB providers at zones 1 and 2. The client’s DNS request is received by the GSLB provider. The GSLB provider, depending on the domain it needs to resolve, picks the GSLB virtual server associated with that domain. Depending on the health of the virtual servers being load balanced, the DNS request for the domain is resolved to the public IP associated with the selected virtual server.

Configuring GSLB

To configure a GSLB deployment, you must first configure a standard load balancing setup for each zone. This enables you to balance load across the different servers in each zone in the region. Then on the NetScaler side, configure both NetScaler appliances that you plan to add to each zone as authoritative DNS (ADNS) servers. Next, create a GSLB site for each zone, configure GSLB virtual servers for each site, create GSLB services, and bind the GSLB services to the GSLB virtual servers. Finally, bind the domain to the GSLB virtual servers. The GSLB configurations on the two appliances at the two different zones are identical, although each site’s load-balancing configuration is specific to that site.

Perform the following as a cloud administrator. As per the example given above, the administrator of xyztelco is the one who sets up GSLB:

  1. In the cloud.dns.name global parameter, specify the DNS name of your tenants’ cloud that makes use of the GSLB service.

  2. On the NetScaler side, configure GSLB as given in Configuring Global Server Load Balancing (GSLB):

    1. Configure a standard load balancing setup.

    2. Configure Authoritative DNS, as explained in Configuring an Authoritative DNS Service.

    3. Configure a GSLB site with the site name formed from the domain name details.

      As per the example given above, the site names are A.xyztelco.com and B.xyztelco.com.

      For more information, see Configuring a Basic GSLB Site.

    4. Configure a GSLB virtual server.

      For more information, see Configuring a GSLB Virtual Server.

    5. Configure a GSLB service for each virtual server.

      For more information, see Configuring a GSLB Service.

    6. Bind the GSLB services to the GSLB virtual server.

      For more information, see Binding GSLB Services to a GSLB Virtual Server.

    7. Bind the domain name to the GSLB virtual server. The domain name is obtained from the domain details.

      For more information, see Binding a Domain to a GSLB Virtual Server.

  3. In each zone that participates in GSLB, add a GSLB-enabled NetScaler device.

    For more information, see Enabling GSLB in NetScaler.

As a domain administrator or user, perform the following:

  1. Add a GSLB rule on both the sites.

    See “Adding a GSLB Rule”.

  2. Assign load balancer rules.

    See “Assigning Load Balancing Rules to GSLB”.

Prerequisites and Guidelines
  • The GSLB functionality is supported in both Basic and Advanced zones.

  • GSLB is added as a new network service.

  • GSLB service provider can be added to a physical network in a zone.

  • The admin is allowed to enable or disable GSLB functionality at region level.

  • The admin is allowed to configure a zone as GSLB capable or enabled.

    A zone is considered GSLB capable only if a GSLB service provider is provisioned in the zone.

  • When users have VMs deployed in multiple availability zones which are GSLB enabled, they can use the GSLB functionality to load balance traffic across the VMs in multiple zones.

  • The users can use GSLB to load balance across the VMs across zones in a region only if the admin has enabled GSLB in that region.

  • The users can load balance traffic across the availability zones in the same region or different regions.

  • The admin can configure DNS name for the entire cloud.

  • The users can specify a unique name across the cloud for a globally load balanced service. The provided name is used as the domain name under the DNS name associated with the cloud.

    The user-provided name along with the admin-provided DNS name is used to produce a globally resolvable FQDN for the globally load balanced service of the user. For example, if the admin has configured xyztelco.com as the DNS name for the cloud, and user specifies ‘foo’ for the GSLB virtual service, then the FQDN name of the GSLB virtual service is foo.xyztelco.com.

  • While setting up GSLB, users can select a load balancing method, such as round robin, for using across the zones that are part of GSLB.

  • The user can set a weight on a zone-level virtual server. The weight is considered by the load balancing method when distributing the traffic.

  • The GSLB functionality supports session persistence, where a series of client requests for a particular domain name is sent to a virtual server in the same zone.

    Statistics are collected from each GSLB virtual server.

Enabling GSLB in NetScaler

In each zone, add a GSLB-enabled NetScaler device for load balancing.

  1. Log in as administrator to the CloudStack UI.

  2. In the left navigation bar, click Infrastructure.

  3. In Zones, click View More.

  4. Choose the zone you want to work with.

  5. Click the Physical Network tab, then click the name of the physical network.

  6. In the Network Service Providers node of the diagram, click Configure.

    You might have to scroll down to see this.

  7. Click NetScaler.

  8. Click Add NetScaler device and provide the following:

    For NetScaler:

    • IP Address: The IP address of the SDX.
    • Username/Password: The authentication credentials to access the device. CloudStack uses these credentials to access the device.
    • Type: The type of device that is being added. It could be F5 Big Ip Load Balancer, NetScaler VPX, NetScaler MPX, or NetScaler SDX. For a comparison of the NetScaler types, see the CloudStack Administration Guide.
    • Public interface: Interface of device that is configured to be part of the public network.
    • Private interface: Interface of device that is configured to be part of the private network.
    • GSLB service: Select this option.
    • GSLB service Public IP: The public IP address of the NAT translator for a GSLB service that is on a private network.
    • GSLB service Private IP: The private IP of the GSLB service.
    • Number of Retries. Number of times to attempt a command on the device before considering the operation failed. Default is 2.
    • Capacity: The number of networks the device can handle.
    • Dedicated: When marked as dedicated, this device will be dedicated to a single account. When Dedicated is checked, the value in the Capacity field has no significance; implicitly, its value is 1.
  9. Click OK.

Adding a GSLB Rule
  1. Log in to the CloudStack UI as a domain administrator or user.

  2. In the left navigation pane, click Region.

  3. Select the region for which you want to create a GSLB rule.

  4. In the Details tab, click View GSLB.

  5. Click Add GSLB.

    The Add GSLB page is displayed.

  6. Specify the following:

    • Name: Name for the GSLB rule.
    • Description: (Optional) A short description of the GSLB rule that can be displayed to users.
    • GSLB Domain Name: A preferred domain name for the service.
    • Algorithm: (Optional) The algorithm to use to load balance the traffic across the zones. The options are Round Robin, Least Connection, and Proximity.
    • Service Type: The transport protocol to use for GSLB. The options are TCP and UDP.
    • Domain: (Optional) The domain for which you want to create the GSLB rule.
    • Account: (Optional) The account on which you want to apply the GSLB rule.
  7. Click OK to confirm.

Assigning Load Balancing Rules to GSLB
  1. Log in to the CloudStack UI as a domain administrator or user.
  2. In the left navigation pane, click Region.
  3. Select the region for which you want to create a GSLB rule.
  4. In the Details tab, click View GSLB.
  5. Select the desired GSLB.
  6. Click view assigned load balancing.
  7. Click assign more load balancing.
  8. Select the load balancing rule you have created for the zone.
  9. Click OK to confirm.
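The Add GSLB dialog corresponds to the createGlobalLoadBalancerRule API call. A sketch of assembling its parameters, reusing the A.xyztelco.com example from above; the region id is hypothetical and parameter names should be verified against the API reference:

```python
# Values are hypothetical; check parameter names against the API reference.
def gslb_rule_params(name, domain_name, region_id,
                     method="roundrobin", service_type="tcp"):
    """Assemble parameters for createGlobalLoadBalancerRule, the API
    call behind the Add GSLB dialog."""
    return {
        "command": "createGlobalLoadBalancerRule",
        "name": name,
        "gslbdomainname": domain_name,   # preferred domain name for the service
        "regionid": region_id,
        "gslblbmethod": method,          # roundrobin, leastconn, or proximity
        "gslbservicetype": service_type, # tcp or udp
    }

rule = gslb_rule_params("tenant-a-web", "A.xyztelco.com", "region-1")
# Assigning zone-level rules corresponds to assignToGlobalLoadBalancerRule
# with the GSLB rule id and the zone-level load balancer rule ids.
```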
Known Limitation

Currently, CloudStack does not support orchestration of services across the zones. The notion of services and service providers at the region level is yet to be introduced.

Guest IP Ranges

The IP ranges for guest network traffic are set on a per-account basis by the user. This allows users to configure their network in a fashion that enables VPN linking between their guest network and their clients.

In shared networks in a Basic zone and in Security Group-enabled Advanced zone networks, you have the flexibility to add multiple guest IP ranges from different subnets. You can add or remove one IP range at a time. For more information, see “About Multiple IP Ranges”.

Acquiring a New IP Address

  1. Log in to the CloudStack UI as an administrator or end user.

  2. In the left navigation, choose Network.

  3. Click the name of the network you want to work with.

  4. Click View IP Addresses.

  5. Click Acquire New IP.

    The Acquire New IP window is displayed.

  6. Specify whether you want a cross-zone IP or not.

    If you want a Portable IP, click Yes in the confirmation dialog. If you want a normal Public IP, click No.

    For more information on Portable IP, see “Portable IPs”.

    Within a few moments, the new IP address should appear with the state Allocated. You can now use the IP address in port forwarding or static NAT rules.
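
Acquiring an IP can also be scripted against the CloudStack API. The sketch below signs a request using the standard CloudStack signing scheme (add the API key, sort the parameters, lowercase the query string, HMAC-SHA1 with the secret key, Base64-encode); the network id and keys are placeholders.

```python
import base64
import hashlib
import hmac
import urllib.parse

def sign_request(params, api_key, secret_key):
    """Sign a CloudStack API request: add the apikey, sort the parameters,
    lowercase the resulting query string, and HMAC-SHA1 it with the secret
    key (the standard CloudStack signing scheme)."""
    params = dict(params, apikey=api_key)
    query = "&".join(
        "%s=%s" % (k, urllib.parse.quote(str(v), safe=""))
        for k, v in sorted(params.items())
    )
    digest = hmac.new(secret_key.encode(), query.lower().encode(),
                      hashlib.sha1).digest()
    # The Base64 signature is URL-encoded by the HTTP client when sent.
    params["signature"] = base64.b64encode(digest).decode()
    return params

# Acquire a new IP for a network; isportable=true would request a
# cross-zone (portable) IP instead of a normal public IP.
signed = sign_request(
    {"command": "associateIpAddress", "networkid": "NETWORK-UUID",
     "isportable": "false", "response": "json"},
    api_key="API-KEY", secret_key="SECRET-KEY")
```

Releasing the address later maps to the `disassociateIpAddress` command with the IP's id, signed the same way.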

Releasing an IP Address

When the last rule for an IP address is removed, you can release that IP address. The IP address still belongs to the VPC; however, it can be picked up for any guest network again.

  1. Log in to the CloudStack UI as an administrator or end user.
  2. In the left navigation, choose Network.
  3. Click the name of the network you want to work with.
  4. Click View IP Addresses.
  5. Click the IP address you want to release.
  6. Click the Release IP button.

Static NAT

A static NAT rule maps a public IP address to the private IP address of a VM in order to allow Internet traffic into the VM. The public IP address always remains the same, which is why it is called static NAT. This section tells how to enable or disable static NAT for a particular IP address.

Enabling or Disabling Static NAT

If port forwarding rules are already in effect for an IP address, you cannot enable static NAT to that IP.

If a guest VM is part of more than one network, static NAT rules will function only if they are defined on the default network.

  1. Log in to the CloudStack UI as an administrator or end user.

  2. In the left navigation, choose Network.

  3. Click the name of the network you want to work with.

  4. Click View IP Addresses.

  5. Click the IP address you want to work with.

  6. Click the Static NAT button.

    The button toggles between Enable and Disable, depending on whether static NAT is currently enabled for the IP address.

  7. If you are enabling static NAT, a dialog appears where you can choose the destination VM and click Apply.
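
The toggle above corresponds to two API commands, `enableStaticNat` and `disableStaticNat`. A minimal sketch of the choice, with placeholder ids (the command and parameter names are from the CloudStack API):

```python
# Sketch of the Static NAT toggle: which command to call depends on
# whether static NAT is currently enabled for the IP address. Enabling
# requires choosing a destination VM, as in the UI dialog.
def static_nat_command(ip_address_id, currently_enabled, vm_id=None):
    if currently_enabled:
        return {"command": "disableStaticNat", "ipaddressid": ip_address_id}
    if vm_id is None:
        raise ValueError("a destination VM is required to enable static NAT")
    return {"command": "enableStaticNat",
            "ipaddressid": ip_address_id,
            "virtualmachineid": vm_id}
```
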

IP Forwarding and Firewalling

By default, all incoming traffic to the public IP address is rejected. All outgoing traffic from the guests is also blocked by default.

To allow outgoing traffic, follow the procedure in Egress Firewall Rules in an Advanced Zone.

To allow incoming traffic, users may set up firewall rules and/or port forwarding rules. For example, you can use a firewall rule to open a range of ports on the public IP address, such as 33 through 44. Then use port forwarding rules to direct traffic from individual ports within that range to specific ports on user VMs. For example, one port forwarding rule could route incoming traffic on the public IP’s port 33 to port 100 on one user VM’s private IP.

Firewall Rules

By default, all incoming traffic to the public IP address is rejected by the firewall. To allow external traffic, you can open firewall ports by specifying firewall rules. You can optionally specify one or more CIDRs to filter the source IPs. This is useful when you want to allow only incoming requests from certain IP addresses.

You cannot use firewall rules to open ports for an elastic IP address. When elastic IP is used, outside access is instead controlled through the use of security groups. See “Adding a Security Group”.

In an advanced zone, you can also create egress firewall rules by using the virtual router. For more information, see “Egress Firewall Rules in an Advanced Zone”.

Firewall rules can be created using the Firewall tab in the Management Server UI. This tab is not displayed by default when CloudStack is installed. To display the Firewall tab, the CloudStack administrator must set the global configuration parameter firewall.rule.ui.enabled to “true.”

To create a firewall rule:

  1. Log in to the CloudStack UI as an administrator or end user.
  2. In the left navigation, choose Network.
  3. Click the name of the network you want to work with.
  4. Click View IP Addresses.
  5. Click the IP address you want to work with.
  6. Click the Configuration tab and fill in the following values.
    • Source CIDR: (Optional) To accept only traffic from IP addresses within a particular address block, enter a CIDR or a comma-separated list of CIDRs. Example: 192.168.0.0/22. Leave empty to allow all CIDRs.
    • Protocol: The communication protocol in use on the opened port(s).
    • Start Port and End Port: The port(s) you want to open on the firewall. If you are opening a single port, use the same number in both fields.
    • ICMP Type and ICMP Code: Used only if Protocol is set to ICMP. Provide the type and code required by the ICMP protocol to fill out the ICMP header. Refer to ICMP documentation for more details if you are not sure what to enter.
  7. Click Add.
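
The form fields above map onto the `createFirewallRule` API command. The sketch below validates the field combinations described (port range for TCP/UDP, type and code for ICMP, optional CIDR list) before building the parameters; the helper itself and its error messages are illustrative, and the ids are placeholders.

```python
# Sketch: build createFirewallRule parameters from the form fields above.
# An empty CIDR list means all CIDRs are allowed, matching the UI.
def firewall_rule_params(ip_address_id, protocol, start_port=None,
                         end_port=None, icmp_type=None, icmp_code=None,
                         cidr_list=None):
    params = {"command": "createFirewallRule",
              "ipaddressid": ip_address_id, "protocol": protocol}
    if protocol == "icmp":
        if icmp_type is None or icmp_code is None:
            raise ValueError("ICMP rules need both icmptype and icmpcode")
        params.update(icmptype=icmp_type, icmpcode=icmp_code)
    else:
        if end_port is None:
            end_port = start_port  # single port: same number in both fields
        if start_port is None or start_port > end_port:
            raise ValueError("invalid port range")
        params.update(startport=start_port, endport=end_port)
    if cidr_list:
        params["cidrlist"] = ",".join(cidr_list)
    return params
```
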
Egress Firewall Rules in an Advanced Zone

Egress traffic originates in a private network and is destined for a public network, such as the Internet. By default, egress traffic is blocked in default network offerings, so no outgoing traffic is allowed from a guest network to the Internet. However, you can control egress traffic in an Advanced zone by creating egress firewall rules. When an egress firewall rule is applied, the traffic matching the rule is allowed and the remaining traffic is blocked. When all the firewall rules are removed, the default policy, Block, is applied.

Prerequisites and Guidelines

Consider the following scenarios to apply egress firewall rules:

  • Egress firewall rules are supported on the Juniper SRX and the virtual router.
  • Egress firewall rules are not supported on shared networks.
  • Allow egress traffic from a specified source CIDR. The source CIDR must be part of the guest network CIDR.
  • Allow egress traffic with protocol TCP, UDP, ICMP, or ALL.
  • Allow egress traffic with protocol and destination port range. The port range is specified for TCP and UDP, or as type and code for ICMP.
  • The default policy is Allow for new network offerings, whereas on upgrade, existing network offerings with firewall service providers will have the default egress policy Deny.
Configuring an Egress Firewall Rule
  1. Log in to the CloudStack UI as an administrator or end user.

  2. In the left navigation, choose Network.

  3. In Select view, choose Guest networks, then click the Guest network you want.

  4. To add an egress rule, click the Egress rules tab and fill out the following fields to specify what type of traffic is allowed to be sent out of VM instances in this guest network:

    • CIDR: (Add by CIDR only) To send traffic only to the IP addresses within a particular address block, enter a CIDR or a comma-separated list of CIDRs. The CIDR is the base IP address of the destination. For example, 192.168.0.0/22. To allow all CIDRs, set to 0.0.0.0/0.
    • Protocol: The networking protocol that VMs use to send outgoing traffic. The TCP and UDP protocols are typically used for data exchange and end-user communications. The ICMP protocol is typically used to send error messages or network monitoring data.
    • Start Port, End Port: (TCP, UDP only) A range of listening ports that are the destination for the outgoing traffic. If you are opening a single port, use the same number in both fields.
    • ICMP Type, ICMP Code: (ICMP only) The type of message and error code that are sent.
  5. Click Add.
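
The guidelines above state that an egress rule's source CIDR must be part of the guest network CIDR. A quick, illustrative check using only the Python standard library (not CloudStack code):

```python
import ipaddress

# Check that an egress rule's source CIDR falls inside the guest network
# CIDR, as required by the egress firewall guidelines.
def source_cidr_valid(source_cidr, guest_cidr):
    src = ipaddress.ip_network(source_cidr)
    guest = ipaddress.ip_network(guest_cidr)
    return src.subnet_of(guest)
```
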

Configuring the Default Egress Policy

The default egress policy for an Isolated guest network is configured via the network offering. Use the create network offering option to determine whether the default policy should block or allow all traffic to the public network from a guest network, then use that network offering to create the network. If no policy is specified, all traffic from a guest network created with this network offering is allowed by default.

You have two options: Allow and Deny.

Allow

If you select Allow for a network offering, egress traffic is allowed by default. However, when an egress rule is configured for a guest network, rules are applied to block the specified traffic and the rest is allowed. If no egress rules are configured for the network, egress traffic is accepted.

Deny

If you select Deny for a network offering, egress traffic for the guest network is blocked by default. However, when an egress rule is configured for a guest network, rules are applied to allow the specified traffic. While implementing a guest network, CloudStack adds a firewall egress rule specific to the default egress policy for the guest network.

This feature is supported only on virtual router and Juniper SRX.

  1. Create a network offering with your desired default egress policy:

    1. Log in with admin privileges to the CloudStack UI.
    2. In the left navigation bar, click Service Offerings.
    3. In Select Offering, choose Network Offering.
    4. Click Add Network Offering.
    5. In the dialog, make necessary choices, including firewall provider.
    6. In the Default egress policy field, specify the behaviour.
    7. Click OK.
  2. Create an isolated network by using this network offering.

    Based on your selection, the network will have the egress public traffic blocked or allowed.

Port Forwarding

A port forward service is a set of port forwarding rules that define a policy. A port forward service is then applied to one or more guest VMs. The guest VM then has its inbound network access managed according to the policy defined by the port forwarding service. You can optionally specify one or more CIDRs to filter the source IPs. This is useful when you want to allow only incoming requests from certain IP addresses to be forwarded.

A guest VM can be in any number of port forward services. Port forward services can be defined but have no members. If a guest VM is part of more than one network, port forwarding rules will function only if they are defined on the default network.

You cannot use port forwarding to open ports for an elastic IP address. When elastic IP is used, outside access is instead controlled through the use of security groups. See Security Groups.

To set up port forwarding:

  1. Log in to the CloudStack UI as an administrator or end user.
  2. If you have not already done so, add a public IP address range to a zone in CloudStack. See Adding a Zone and Pod in the Installation Guide.
  3. Add one or more VM instances to CloudStack.
  4. In the left navigation bar, click Network.
  5. Click the name of the guest network where the VMs are running.
  6. Choose an existing IP address or acquire a new IP address. See “Acquiring a New IP Address”. Click the name of the IP address in the list.
  7. Click the Configuration tab.
  8. In the Port Forwarding node of the diagram, click View All.
  9. Fill in the following:
    • Public Port: The port to which public traffic will be addressed on the IP address you acquired in the previous step.
    • Private Port: The port on which the instance is listening for forwarded public traffic.
    • Protocol: The communication protocol in use between the two ports.
  10. Click Add.
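
As a concrete instance of the earlier example (public port 33 forwarded to private port 100 on one VM), the sketch below builds the parameters for the `createPortForwardingRule` API command; the ids are placeholders.

```python
# Sketch: parameters for a port forwarding rule. The command and
# parameter names are from the CloudStack API; ids are placeholders.
def port_forwarding_params(ip_address_id, vm_id, public_port, private_port,
                           protocol="tcp"):
    return {"command": "createPortForwardingRule",
            "ipaddressid": ip_address_id,
            "virtualmachineid": vm_id,
            "publicport": public_port,    # port on the acquired public IP
            "privateport": private_port,  # port the instance listens on
            "protocol": protocol}

# Public port 33 on the acquired IP forwarded to private port 100 on a VM.
rule = port_forwarding_params("IP-UUID", "VM-UUID", 33, 100)
```
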

IP Load Balancing

The user may choose to associate the same public IP for multiple guests. CloudStack implements a TCP-level load balancer with the following policies.

  • Round-robin
  • Least connection
  • Source IP

This is similar to port forwarding but the destination may be multiple IP addresses.
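
To illustrate the three policies, here is a toy balancer over a pool of guest private IPs. This is not CloudStack code, only the selection logic each policy implies; the MD5 hash used for source-IP stickiness is an assumption for illustration.

```python
import hashlib
from itertools import cycle

# Toy illustration of the three load balancing policies listed above.
class Balancer:
    def __init__(self, backends):
        self.backends = list(backends)
        self._rr = cycle(self.backends)
        self.active = {b: 0 for b in self.backends}  # open connection counts

    def round_robin(self):
        # Each call hands out the next backend in turn.
        return next(self._rr)

    def least_connection(self):
        # Pick the backend with the fewest active connections.
        return min(self.backends, key=self.active.get)

    def source_ip(self, client_ip):
        # Same client IP always lands on the same backend.
        h = int(hashlib.md5(client_ip.encode()).hexdigest(), 16)
        return self.backends[h % len(self.backends)]
```
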

DNS and DHCP

The Virtual Router provides DNS and DHCP services to the guests. It proxies DNS requests to the DNS server configured on the Availability Zone.

Remote Access VPN

CloudStack account owners can create virtual private networks (VPN) to access their virtual machines. If the guest network is instantiated from a network offering that offers the Remote Access VPN service, the virtual router (based on the System VM) is used to provide the service. CloudStack provides a L2TP-over-IPsec-based remote access VPN service to guest virtual networks. Since each network gets its own virtual router, VPNs are not shared across the networks. VPN clients native to Windows, Mac OS X and iOS can be used to connect to the guest networks. The account owner can create and manage users for their VPN. CloudStack does not use its account database for this purpose but uses a separate table. The VPN user database is shared across all the VPNs created by the account owner. All VPN users get access to all VPNs created by the account owner.

Note

Make sure that not all traffic goes through the VPN. That is, the route installed by the VPN should be only for the guest network and not for all traffic.

  • Road Warrior / Remote Access. Users want to be able to connect securely from a home or office to a private network in the cloud. Typically, the IP address of the connecting client is dynamic and cannot be preconfigured on the VPN server.
  • Site to Site. In this scenario, two private subnets are connected over the public Internet with a secure VPN tunnel. The cloud user’s subnet (for example, an office network) is connected through a gateway to the network in the cloud. The address of the user’s gateway must be preconfigured on the VPN server in the cloud. Note that although L2TP-over-IPsec can be used to set up Site-to-Site VPNs, this is not the primary intent of this feature. For more information, see “Setting Up a Site-to-Site VPN Connection”.
Configuring Remote Access VPN

To set up VPN for the cloud:

  1. Log in to the CloudStack UI as an administrator or end user.
  2. In the left navigation, click Global Settings.
  3. Set the following global configuration parameters.
    • remote.access.vpn.client.ip.range - The range of IP addresses to be allocated to remote access VPN clients. The first IP in the range is used by the VPN server.
    • remote.access.vpn.psk.length - Length of the IPSec key.
    • remote.access.vpn.user.limit - Maximum number of VPN users per account.
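
Since the first IP in `remote.access.vpn.client.ip.range` is used by the VPN server, the number of addresses left for clients is one less than the range size. An illustrative stdlib helper (the range format shown is an assumption of the form "first-last"):

```python
import ipaddress

# Count client addresses available in a remote.access.vpn.client.ip.range
# value such as "10.1.2.1-10.1.2.8"; the first IP is reserved for the
# VPN server itself.
def vpn_client_capacity(ip_range):
    first, last = (ipaddress.ip_address(p) for p in ip_range.split("-"))
    total = int(last) - int(first) + 1
    return total - 1  # first IP is used by the VPN server

# vpn_client_capacity("10.1.2.1-10.1.2.8") -> 7
```
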

To enable VPN for a particular network:

  1. Log in as a user or administrator to the CloudStack UI.

  2. In the left navigation, click Network.

  3. Click the name of the network you want to work with.

  4. Click View IP Addresses.

  5. Click one of the displayed IP address names.

  6. Click the Enable VPN button.

    The IPsec key is displayed in a popup window.

Configuring Remote Access VPN in VPC

On enabling Remote Access VPN on a VPC, any VPN client present outside the VPC can access VMs present in the VPC by using the Remote VPN connection. The VPN client can be present anywhere except inside the VPC on which the user enabled the Remote Access VPN service.

To enable VPN for a VPC:

  1. Log in as a user or administrator to the CloudStack UI.

  2. In the left navigation, click Network.

  3. In the Select view, select VPC.

    All the VPCs that you have created for the account are listed on the page.

  4. Click the Configure button of the VPC.

    For each tier, the following options are displayed:

    • Internal LB
    • Public LB IP
    • Static NAT
    • Virtual Machines
    • CIDR

    The following router information is displayed:

    • Private Gateways
    • Public IP Addresses
    • Site-to-Site VPNs
    • Network ACL Lists
  5. In the Router node, select Public IP Addresses.

    The IP Addresses page is displayed.

  6. Click Source NAT IP address.

  7. Click the Enable VPN button.

    Click OK to confirm. The IPsec key is displayed in a pop-up window.

Now, you need to add the VPN users.

  1. Click the Source NAT IP.
  2. Select the VPN tab.
  3. Enter the username and the corresponding password of the user you want to add.
  4. Click Add.
  5. Repeat these steps to add more VPN users.
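
Since the VPN user database is shared across all the account's VPNs, adding users is a simple loop over the `addVpnUser` API command (a real CloudStack command; the credentials below are placeholders):

```python
# Sketch: one addVpnUser call per user. The account's VPN user table is
# shared across all VPNs the account owner creates, so these users can
# access every such VPN.
def add_vpn_user_calls(users):
    return [{"command": "addVpnUser", "username": u, "password": p}
            for u, p in users]

calls = add_vpn_user_calls([("alice", "s3cret"), ("bob", "pa55word")])
```
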
Using Remote Access VPN with Windows

The procedure to use VPN varies by Windows version. Generally, the user must edit the VPN properties and make sure that the default route is not the VPN. The following steps are for the Windows L2TP client on Windows Vista; the steps should be similar for other Windows versions.

  1. Log in to the CloudStack UI and click on the source NAT IP for the account. The VPN tab should display the IPsec preshared key. Make a note of this and the source NAT IP. The UI also lists one or more users and their passwords. Choose one of these users, or, if none exists, add a user and password.
  2. On the Windows box, go to Control Panel, then select Network and Sharing center. Click Setup a connection or network.
  3. In the next dialog, select No, create a new connection.
  4. In the next dialog, select Use my Internet Connection (VPN).
  5. In the next dialog, enter the source NAT IP from step #1 and give the connection a name. Check Don’t connect now.
  6. In the next dialog, enter the user name and password selected in step #1.
  7. Click Create.
  8. Go back to the Control Panel and click Network Connections to see the new connection. The connection is not active yet.
  9. Right-click the new connection and select Properties. In the Properties dialog, select the Networking tab.
  10. In Type of VPN, choose L2TP IPsec VPN, then click IPsec settings. Select Use preshared key. Enter the preshared key from step #1.
  11. The connection is ready for activation. Go back to Control Panel -> Network Connections and double-click the created connection.
  12. Enter the user name and password from step #1.
Using Remote Access VPN with Mac OS X

First, be sure you’ve configured the VPN settings in your CloudStack install. This section is only concerned with connecting via Mac OS X to your VPN.

Note, these instructions were written on Mac OS X 10.7.5. They may differ slightly in older or newer releases of Mac OS X.

  1. On your Mac, open System Preferences and click Network.
  2. Make sure Send all traffic over VPN connection is not checked.
  3. If your preferences are locked, you’ll need to click the lock in the bottom left-hand corner to make any changes and provide your administrator credentials.
  4. You will need to create a new network entry. Click the plus icon on the bottom left-hand side and you’ll see a dialog that says “Select the interface and enter a name for the new service.” Select VPN from the Interface drop-down menu, and “L2TP over IPSec” for the VPN Type. Enter whatever you like within the “Service Name” field.
  5. You’ll now have a new network interface with the name of whatever you put in the “Service Name” field. For the purposes of this example, we’ll assume you’ve named it “CloudStack.” Click on that interface and provide the IP address of the interface for your VPN under the Server Address field, and the user name for your VPN under Account Name.
  6. Click Authentication Settings, and add the user’s password under User Authentication and enter the pre-shared IPSec key in the Shared Secret field under Machine Authentication. Click OK.
  7. You may also want to click the “Show VPN status in menu bar” but that’s entirely optional.
  8. Now click “Connect” and you will be connected to the CloudStack VPN.
Setting Up a Site-to-Site VPN Connection

A Site-to-Site VPN connection helps you establish a secure connection from an enterprise datacenter to the cloud infrastructure. This allows users to access the guest VMs by establishing a VPN connection to the virtual router of the account from a device in the datacenter of the enterprise. You can also establish a secure connection between two VPC setups or high availability zones in your environment. Having this facility eliminates the need to establish VPN connections to individual VMs.

The difference from Remote Access VPN is that Site-to-Site VPNs connect entire networks to each other, for example, connecting a branch office network to a company headquarters network. In a site-to-site VPN, hosts do not have VPN client software; they send and receive normal TCP/IP traffic through a VPN gateway.

The supported endpoints on the remote datacenters are:

  • Cisco ISR with IOS 12.4 or later
  • Juniper J-Series routers with JunOS 9.5 or later
  • CloudStack virtual routers

Note

In addition to the specific Cisco and Juniper devices listed above, any Cisco or Juniper device running one of the supported operating system versions is expected to be able to establish VPN connections.

To set up a Site-to-Site VPN connection, perform the following:

  1. Create a Virtual Private Cloud (VPC).

    See “Configuring a Virtual Private Cloud”.

  2. Create a VPN Customer Gateway.

  3. Create a VPN gateway for the VPC that you created.

  4. Create VPN connection from the VPC VPN gateway to the customer VPN gateway.

Creating and Updating a VPN Customer Gateway

Note

A VPN customer gateway can be connected to only one VPN gateway at a time.

To add a VPN Customer Gateway:

  1. Log in to the CloudStack UI as an administrator or end user.

  2. In the left navigation, choose Network.

  3. In the Select view, select VPN Customer Gateway.

  4. Click Add VPN Customer Gateway.

    Provide the following information:

    • Name: A unique name for the VPN customer gateway you create.

    • Gateway: The IP address for the remote gateway.

    • CIDR list: The guest CIDR list of the remote subnets. Enter a CIDR or a comma-separated list of CIDRs. Ensure that the guest CIDR list does not overlap with the VPC’s CIDR or another guest CIDR. The CIDRs must be RFC 1918-compliant.

    • IPsec Preshared Key: Preshared keying is a method where the endpoints of the VPN share a secret key. This key value is used to authenticate the customer gateway and the VPC VPN gateway to each other.

      Note

      The IKE peers (VPN end points) authenticate each other by computing and sending a keyed hash of data that includes the Preshared key. If the receiving peer is able to create the same hash independently by using its Preshared key, it knows that both peers must share the same secret, thus authenticating the customer gateway.

    • IKE Encryption: The Internet Key Exchange (IKE) policy for phase-1. The supported encryption algorithms are AES128, AES192, AES256, and 3DES. Authentication is accomplished through the Preshared Keys.

      Note

      Phase-1 is the first phase in the IKE process. In this initial negotiation phase, the two VPN endpoints agree on the methods to be used to provide security for the underlying IP traffic. Phase-1 authenticates the two VPN gateways to each other by confirming that the remote gateway has a matching Preshared Key.

    • IKE Hash: The IKE hash for phase-1. The supported hash algorithms are SHA1 and MD5.

    • IKE DH: A public-key cryptography protocol which allows two parties to establish a shared secret over an insecure communications channel. The 1536-bit Diffie-Hellman group is used within IKE to establish session keys. The supported options are None, Group-5 (1536-bit) and Group-2 (1024-bit).

    • ESP Encryption: Encapsulating Security Payload (ESP) algorithm within phase-2. The supported encryption algorithms are AES128, AES192, AES256, and 3DES.

      Note

      Phase-2 is the second phase in the IKE process. The purpose of IKE phase-2 is to negotiate IPsec security associations (SAs) to set up the IPsec tunnel. In phase-2, new keying material is extracted from the Diffie-Hellman key exchange in phase-1 to provide session keys for protecting the VPN data flow.

    • ESP Hash: Encapsulating Security Payload (ESP) hash for phase-2. Supported hash algorithms are SHA1 and MD5.

    • Perfect Forward Secrecy: Perfect Forward Secrecy (PFS) is the property that a session key derived from a set of long-term public and private keys will not be compromised even if one of those long-term keys is later compromised. This property enforces a new Diffie-Hellman key exchange, providing fresh keying material and thereby greater resistance to cryptographic attacks. The available options are None, Group-5 (1536-bit) and Group-2 (1024-bit). The security of the key exchange increases as the DH group grows larger, as does the time taken for the exchange.

      Note

      When PFS is turned on, the two gateways must generate a new set of phase-1 keys for every negotiation of a new phase-2 SA. This adds an extra layer of protection: even after a phase-2 SA expires, the keys used for its replacement are not derived from the current phase-1 keying material.

    • IKE Lifetime (seconds): The phase-1 lifetime of the security association in seconds. Default is 86400 seconds (1 day). Whenever the time expires, a new phase-1 exchange is performed.

    • ESP Lifetime (seconds): The phase-2 lifetime of the security association in seconds. Default is 3600 seconds (1 hour). Whenever the value is exceeded, a re-key is initiated to provide new IPsec encryption and authentication session keys.

    • Dead Peer Detection: A method to detect an unavailable Internet Key Exchange (IKE) peer. Select this option if you want the virtual router to query the liveness of its IKE peer at regular intervals. It is recommended to use the same DPD configuration on both sides of the VPN connection.

  5. Click OK.
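
The CIDR list rules above (RFC 1918-compliant, no overlap with the VPC's CIDR or with each other) can be pre-checked with the Python standard library. This is an illustrative sketch: `is_private` only approximates RFC 1918 (it also covers loopback and link-local ranges), and the error messages are invented.

```python
import ipaddress

# Validate a customer gateway CIDR list against the rules described in
# the field description above: private (approximately RFC 1918) and
# non-overlapping with the VPC CIDR or each other.
def validate_customer_gateway_cidrs(cidr_list, vpc_cidr):
    vpc = ipaddress.ip_network(vpc_cidr)
    nets = [ipaddress.ip_network(c) for c in cidr_list]
    for i, net in enumerate(nets):
        if not net.is_private:
            raise ValueError("%s is not RFC 1918-compliant" % net)
        if net.overlaps(vpc):
            raise ValueError("%s overlaps the VPC CIDR %s" % (net, vpc))
        for other in nets[i + 1:]:
            if net.overlaps(other):
                raise ValueError("%s overlaps %s" % (net, other))
    return True
```
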

Updating and Removing a VPN Customer Gateway

You can update a customer gateway only when it has no VPN connection, or when its associated VPN connection is in the Error state.

  1. Log in to the CloudStack UI as an administrator or end user.
  2. In the left navigation, choose Network.
  3. In the Select view, select VPN Customer Gateway.
  4. Select the VPN customer gateway you want to work with.
  5. To modify the required parameters, click the Edit VPN Customer Gateway button.
  6. To remove the VPN customer gateway, click the Delete VPN Customer Gateway button.
  7. Click OK.
Creating a VPN gateway for the VPC
  1. Log in to the CloudStack UI as an administrator or end user.

  2. In the left navigation, choose Network.

  3. In the Select view, select VPC.

    All the VPCs that you have created for the account are listed on the page.

  4. Click the Configure button of the VPC to which you want to deploy the VMs.

    The VPC page is displayed where all the tiers you created are listed in a diagram.

    For each tier, the following options are displayed:

    • Internal LB
    • Public LB IP
    • Static NAT
    • Virtual Machines
    • CIDR

    The following router information is displayed:

    • Private Gateways
    • Public IP Addresses
    • Site-to-Site VPNs
    • Network ACL Lists
  5. Select Site-to-Site VPN.

    If you are creating the VPN gateway for the first time, selecting Site-to-Site VPN prompts you to create a VPN gateway.

  6. In the confirmation dialog, click Yes to confirm.

    Within a few moments, the VPN gateway is created. You will be prompted to view the details of the VPN gateway you have created. Click Yes to confirm.

    The following details are displayed in the VPN Gateway page:

    • IP Address
    • Account
    • Domain
Creating a VPN Connection

Note

CloudStack supports creating up to 8 VPN connections.

  1. Log in to the CloudStack UI as an administrator or end user.

  2. In the left navigation, choose Network.

  3. In the Select view, select VPC.

    All the VPCs that you have created for the account are listed on the page.

  4. Click the Configure button of the VPC to which you want to deploy the VMs.

    The VPC page is displayed where all the tiers you created are listed in a diagram.

  5. Click the Settings icon.

    For each tier, the following options are displayed:

    • Internal LB
    • Public LB IP
    • Static NAT
    • Virtual Machines
    • CIDR

    The following router information is displayed:

    • Private Gateways
    • Public IP Addresses
    • Site-to-Site VPNs
    • Network ACL Lists
  6. Select Site-to-Site VPN.

    The Site-to-Site VPN page is displayed.

  7. From the Select View drop-down, ensure that VPN Connection is selected.

  8. Click Create VPN Connection.

    The Create VPN Connection dialog is displayed.

  9. Select the desired customer gateway.

  10. Select Passive if you want to establish a connection between two VPC virtual routers.

    If you want to establish a connection between two VPC virtual routers, select Passive only on one of the VPC virtual routers, which waits for the other VPC virtual router to initiate the connection. Do not select Passive on the VPC virtual router that initiates the connection.

  11. Click OK to confirm.

    Within a few moments, the VPN Connection is displayed.

    The following information on the VPN connection is displayed:

    • IP Address
    • Gateway
    • State
    • IPSec Preshared Key
    • IKE Policy
    • ESP Policy
Site-to-Site VPN Connection Between VPC Networks

CloudStack provides you with the ability to establish a site-to-site VPN connection between CloudStack virtual routers. To achieve that, add a passive mode Site-to-Site VPN. With this functionality, users can deploy applications in multiple Availability Zones or VPCs, which can communicate with each other by using a secure Site-to-Site VPN Tunnel.

This feature is supported on all the hypervisors.

  1. Create two VPCs. For example, VPC A and VPC B.

    For more information, see “Configuring a Virtual Private Cloud”.

  2. Create VPN gateways on both the VPCs you created.

    For more information, see “Creating a VPN gateway for the VPC”.

  3. Create VPN customer gateway for both the VPCs.

    For more information, see “Creating and Updating a VPN Customer Gateway”.

  4. Enable a VPN connection on VPC A in passive mode.

    For more information, see “Creating a VPN Connection”.

    Ensure that the customer gateway is pointed to VPC B. The VPN connection is shown in the Disconnected state.

  5. Enable a VPN connection on VPC B.

    Ensure that the customer gateway is pointed to VPC A. Because the virtual router of VPC A is in passive mode and is waiting for the virtual router of VPC B to initiate the connection, the VPC B virtual router should not be in passive mode.

    The VPN connection is shown in the Disconnected state.

    Creating the VPN connection on both VPCs initiates the connection. Wait a few seconds; by default it takes up to 30 seconds for both VPN connections to show the Connected state.

Restarting and Removing a VPN Connection
  1. Log in to the CloudStack UI as an administrator or end user.

  2. In the left navigation, choose Network.

  3. In the Select view, select VPC.

    All the VPCs that you have created for the account are listed on the page.

  4. Click the Configure button of the VPC to which you want to deploy the VMs.

    The VPC page is displayed where all the tiers you created are listed in a diagram.

  5. Click the Settings icon.

    For each tier, the following options are displayed:

    • Internal LB
    • Public LB IP
    • Static NAT
    • Virtual Machines
    • CIDR

    The following router information is displayed:

    • Private Gateways
    • Public IP Addresses
    • Site-to-Site VPNs
    • Network ACL Lists
  6. Select Site-to-Site VPN.

    The Site-to-Site VPN page is displayed.

  7. From the Select View drop-down, ensure that VPN Connection is selected.

    All the VPN connections you created are displayed.

  8. Select the VPN connection you want to work with.

    The Details tab is displayed.

  9. To remove a VPN connection, click the Delete VPN connection button.

    To restart a VPN connection, click the Reset VPN connection button present in the Details tab.

About Inter-VLAN Routing (nTier Apps)

Inter-VLAN Routing (nTier Apps) is the capability to route network traffic between VLANs. This feature enables you to build Virtual Private Clouds (VPC), an isolated segment of your cloud, that can hold multi-tier applications. These tiers are deployed on different VLANs that can communicate with each other. You provision VLANs for the tiers you create, and VMs can be deployed on different tiers. The VLANs are connected to a virtual router, which facilitates communication between the VMs. In effect, you can segment VMs by means of VLANs into different networks that can host multi-tier applications, such as Web, Application, or Database. Such segmentation by means of VLANs logically separates application VMs for higher security and reduced broadcast traffic, while they remain physically connected to the same device.

This feature is supported on XenServer, KVM, and VMware hypervisors.

The major advantages are:

  • The administrator can deploy a set of VLANs and allow users to deploy VMs on these VLANs. A guest VLAN is randomly allotted to an account from a pre-specified set of guest VLANs. All the VMs of a certain tier of an account reside on the guest VLAN allotted to that account.

    Note

    A VLAN allocated for an account cannot be shared between multiple accounts.

  • The administrator can allow users to create their own VPCs and deploy applications. In this scenario, the VMs that belong to the account are deployed on the VLANs allotted to that account.

  • Both administrators and users can create multiple VPCs. The guest network NIC is plugged into the VPC virtual router when the first VM is deployed in a tier.

  • The administrator can create the following gateways to send to or receive traffic from the VMs:

    • VPN Gateway: For more information, see “Creating a VPN gateway for the VPC”.
    • Public Gateway: The public gateway for a VPC is added to the virtual router when the virtual router is created for VPC. The public gateway is not exposed to the end users. You are not allowed to list it, nor allowed to create any static routes.
    • Private Gateway: For more information, see “Adding a Private Gateway to a VPC”.
  • Both administrators and users can create various possible destination-gateway combinations. However, only one gateway of each type can be used in a deployment.

    For example:

    • VLANs and Public Gateway: For example, an application is deployed in the cloud, and the Web application VMs communicate with the Internet.
    • VLANs, VPN Gateway, and Public Gateway: For example, an application is deployed in the cloud; the Web application VMs communicate with the Internet; and the database VMs communicate with the on-premise devices.
  • The administrator can define Network Access Control List (ACL) on the virtual router to filter the traffic among the VLANs or between the Internet and a VLAN. You can define ACL based on CIDR, port range, protocol, type code (if ICMP protocol is selected) and Ingress/Egress type.

The following figure shows the possible deployment scenarios of an Inter-VLAN setup:

a multi-tier setup.

To set up a multi-tier Inter-VLAN deployment, see “Configuring a Virtual Private Cloud”.

Configuring a Virtual Private Cloud

About Virtual Private Clouds

CloudStack Virtual Private Cloud is a private, isolated part of CloudStack. A VPC can have its own virtual network topology that resembles a traditional physical network. You can launch VMs in the virtual network that can have private addresses in the range of your choice, for example: 10.0.0.0/16. You can define network tiers within your VPC network range, which in turn enables you to group similar kinds of instances based on IP address range.

For example, if a VPC has the private range 10.0.0.0/16, its guest networks can have the network ranges 10.0.1.0/24, 10.0.2.0/24, 10.0.3.0/24, and so on.
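As a purely illustrative sketch (CloudStack performs this validation server-side), the example above can be reproduced with Python's standard ipaddress module:

```python
import ipaddress

# The VPC's private range from the example above.
vpc = ipaddress.ip_network("10.0.0.0/16")

# Carve the /16 super CIDR into /24 guest-network ranges;
# skip 10.0.0.0/24 to match the example's 10.0.1.0/24 onward.
tiers = list(vpc.subnets(new_prefix=24))[1:4]
print([str(t) for t in tiers])  # ['10.0.1.0/24', '10.0.2.0/24', '10.0.3.0/24']
```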

Major Components of a VPC

A VPC comprises the following network components:

  • VPC: A VPC acts as a container for multiple isolated networks that can communicate with each other via its virtual router.
  • Network Tiers: Each tier acts as an isolated network with its own VLANs and CIDR list, where you can place groups of resources, such as VMs. The tiers are segmented by means of VLANs. The NIC of each tier acts as its gateway.
  • Virtual Router: A virtual router is automatically created and started when you create a VPC. The virtual router connects the tiers and directs traffic among the public gateway, the VPN gateways, and the NAT instances. For each tier, a corresponding NIC and IP exist in the virtual router. The virtual router provides DNS and DHCP services through its IP.
  • Public Gateway: The traffic to and from the Internet is routed to the VPC through the public gateway. In a VPC, the public gateway is not exposed to the end user; therefore, static routes are not supported for the public gateway.
  • Private Gateway: All the traffic to and from a private network is routed to the VPC through the private gateway. For more information, see “Adding a Private Gateway to a VPC”.
  • VPN Gateway: The VPC side of a VPN connection.
  • Site-to-Site VPN Connection: A hardware-based VPN connection between your VPC and your datacenter, home network, or co-location facility. For more information, see “Setting Up a Site-to-Site VPN Connection”.
  • Customer Gateway: The customer side of a VPN Connection. For more information, see “Creating and Updating a VPN Customer Gateway”.
  • NAT Instance: An instance that provides Port Address Translation for instances to access the Internet via the public gateway. For more information, see “Enabling or Disabling Static NAT on a VPC”.
  • Network ACL: Network ACL is a group of Network ACL items. Network ACL items are nothing but numbered rules that are evaluated in order, starting with the lowest numbered rule. These rules determine whether traffic is allowed in or out of any tier associated with the network ACL. For more information, see “Configuring Network Access Control List”.
Network Architecture in a VPC

In a VPC, the following four basic network architecture options are available:

  • VPC with a public gateway only
  • VPC with public and private gateways
  • VPC with public and private gateways and site-to-site VPN access
  • VPC with a private gateway only and site-to-site VPN access
Connectivity Options for a VPC

You can connect your VPC to:

  • The Internet through the public gateway.
  • The corporate datacenter by using a site-to-site VPN connection through the VPN gateway.
  • Both the Internet and your corporate datacenter by using both the public gateway and a VPN gateway.
VPC Network Considerations

Consider the following before you create a VPC:

  • A VPC, by default, is created in the enabled state.
  • A VPC can be created in an Advanced zone only, and cannot belong to more than one zone at a time.
  • The default number of VPCs an account can create is 20. You can change this limit by using the max.account.vpcs global parameter.
  • The default number of tiers an account can create within a VPC is 3. You can configure this number by using the vpc.max.networks parameter.
  • Each tier should have a unique CIDR within the VPC. Ensure that the tier’s CIDR falls within the VPC CIDR range.
  • A tier belongs to only one VPC.
  • All network tiers inside the VPC should belong to the same account.
  • When a VPC is created, by default, a SourceNAT IP is allocated to it. The Source NAT IP is released only when the VPC is removed.
  • A public IP can be used for only one purpose at a time. If the IP is a sourceNAT, it cannot be used for StaticNAT or port forwarding.
  • The instances can only have a private IP address that you provision. To communicate with the Internet, enable NAT for an instance that you launch in your VPC.
  • Only new networks can be added to a VPC. The maximum number of networks per VPC is limited by the value you specify in the vpc.max.networks parameter. The default value is three.
  • The load balancing service can be supported by only one tier inside the VPC.
  • If an IP address is assigned to a tier:
    • That IP can’t be used by more than one tier at a time in the VPC. For example, if you have tiers A and B, and a public IP1, you can create a port forwarding rule by using the IP either for A or B, but not for both.
    • That IP can’t be used for StaticNAT, load balancing, or port forwarding rules for another guest network inside the VPC.
  • Remote access VPN is not supported in VPC networks.
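
Several of the CIDR rules above (unique, non-overlapping tier CIDRs that sit inside the VPC range) can be sketched with the standard ipaddress module. The helper name below is hypothetical; the real checks are performed by the management server:

```python
import ipaddress

def validate_tier_cidrs(vpc_cidr, tier_cidrs):
    """Check that every tier CIDR is inside the VPC CIDR and that
    no two tier CIDRs overlap. Raises ValueError on a violation."""
    vpc = ipaddress.ip_network(vpc_cidr)
    accepted = []
    for cidr in tier_cidrs:
        tier = ipaddress.ip_network(cidr)
        if not tier.subnet_of(vpc):
            raise ValueError(f"{cidr} is outside the VPC range {vpc_cidr}")
        for other in accepted:
            if tier.overlaps(other):
                raise ValueError(f"{cidr} overlaps existing tier {other}")
        accepted.append(tier)
    return True

assert validate_tier_cidrs("10.0.0.0/16", ["10.0.1.0/24", "10.0.2.0/24"])
```
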
Adding a Virtual Private Cloud

When creating the VPC, you simply provide the zone and a set of IP addresses for the VPC network address space. You specify this set of addresses in the form of a Classless Inter-Domain Routing (CIDR) block.

  1. Log in to the CloudStack UI as an administrator or end user.

  2. In the left navigation, choose Network.

  3. In the Select view, select VPC.

  4. Click Add VPC. The Add VPC page is displayed as follows:

    adding a vpc.

    Provide the following information:

    • Name: A short name for the VPC that you are creating.
    • Description: A brief description of the VPC.
    • Zone: Choose the zone where you want the VPC to be available.
    • Super CIDR for Guest Networks: Defines the CIDR range for all the tiers (guest networks) within a VPC. When you create a tier, ensure that its CIDR is within the Super CIDR value you enter. The CIDR must be RFC1918 compliant.
    • DNS domain for Guest Networks: If you want to assign a special domain name, specify the DNS suffix. This parameter is applied to all the tiers within the VPC, which implies that all the tiers you create in the VPC belong to the same DNS domain. If the parameter is not specified, a DNS domain name is generated automatically.
    • Public Load Balancer Provider: You have two options: VPC Virtual Router and Netscaler.
  5. Click OK.
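
The RFC 1918 requirement on the Super CIDR can be illustrated with a small check (a sketch only; the helper name is hypothetical and the actual validation is done by the management server):

```python
import ipaddress

# RFC 1918 private address blocks.
RFC1918 = [ipaddress.ip_network(n)
           for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def is_valid_super_cidr(cidr):
    """Return True if the proposed Super CIDR lies inside an RFC 1918 block."""
    net = ipaddress.ip_network(cidr)
    return any(net.subnet_of(block) for block in RFC1918)

assert is_valid_super_cidr("10.0.0.0/16")
assert not is_valid_super_cidr("8.8.0.0/16")  # public range, rejected
```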

Adding Tiers

Tiers are distinct locations within a VPC that act as isolated networks, which do not have access to other tiers by default. Tiers are set up on different VLANs that can communicate with each other by using a virtual router. Tiers provide inexpensive, low-latency network connectivity to other tiers within the VPC.

  1. Log in to the CloudStack UI as an administrator or end user.

  2. In the left navigation, choose Network.

  3. In the Select view, select VPC.

    All the VPCs that you have created for the account are listed on the page.

    Note

    End users can see their own VPCs, while root and domain admins can see any VPC they are authorized to see.

  4. Click the Configure button of the VPC for which you want to set up tiers.

  5. Click Create network.

    The Add new tier dialog is displayed, as follows:

    adding a tier to a vpc.

    If you have already created tiers, the VPC diagram is displayed. Click Create Tier to add a new tier.

  6. Specify the following:

    All the fields are mandatory.

    • Name: A unique name for the tier you create.

    • Network Offering: The following default network offerings are listed: Internal LB, DefaultIsolatedNetworkOfferingForVpcNetworksNoLB, DefaultIsolatedNetworkOfferingForVpcNetworks

      In a VPC, only one tier can be created using an LB-enabled network offering.

    • Gateway: The gateway for the tier you create. Ensure that the gateway is within the Super CIDR range that you specified while creating the VPC, and does not overlap with the CIDR of any existing tier within the VPC.

    • VLAN: The VLAN ID for the tier that the root admin creates.

      This option is only visible if the network offering you selected is VLAN-enabled.

      For more information, see “Assigning VLANs to Isolated Networks”.

    • Netmask: The netmask for the tier you create.

      For example, if the VPC CIDR is 10.0.0.0/16 and the network tier CIDR is 10.0.1.0/24, the gateway of the tier is 10.0.1.1, and the netmask of the tier is 255.255.255.0.

  7. Click OK.

  8. Continue with configuring access control list for the tier.
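The gateway/netmask relationship in the example above (tier CIDR 10.0.1.0/24, gateway 10.0.1.1, netmask 255.255.255.0) can be sketched as follows. The helper is hypothetical; in practice you enter these values by hand in the Add new tier dialog:

```python
import ipaddress

def tier_gateway_and_netmask(cidr):
    """Derive the conventional gateway (first usable address) and the
    netmask for a tier CIDR, mirroring the worked example in the text."""
    net = ipaddress.ip_network(cidr)
    gateway = str(net.network_address + 1)
    return gateway, str(net.netmask)

assert tier_gateway_and_netmask("10.0.1.0/24") == ("10.0.1.1", "255.255.255.0")
```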

Configuring Network Access Control List

Define a Network Access Control List (ACL) on the VPC virtual router to control incoming (ingress) and outgoing (egress) traffic between the VPC tiers, and between the tiers and the Internet. By default, all incoming traffic to the guest networks is blocked and all outgoing traffic from the guest networks is allowed. Once you add an ACL rule for outgoing traffic, only the outgoing traffic specified in that rule is allowed, and the rest is blocked. To open the ports, you must create a new network ACL. Network ACLs can be created for the tiers only if the NetworkACL service is supported.

About Network ACL Lists

In CloudStack terminology, a Network ACL is a group of Network ACL items: numbered rules that are evaluated in order, starting with the lowest numbered rule. These rules determine whether traffic is allowed in or out of any tier associated with the network ACL. You need to add the Network ACL items to the Network ACL, then associate the Network ACL with a tier. A Network ACL is associated with a VPC and can be assigned to multiple tiers within that VPC. A tier is associated with a Network ACL at all times, and each tier can be associated with only one ACL.

The default Network ACL is used when no ACL is associated. The default behavior is that all incoming traffic to the tiers is blocked and all outgoing traffic is allowed. The default Network ACL cannot be removed or modified. The contents of the default Network ACL are:

Rule   Protocol   Traffic type   Action   CIDR
1      All        Ingress        Deny     0.0.0.0/0
2      All        Egress         Deny     0.0.0.0/0
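
The evaluation order described above (numbered rules, lowest first, default deny) can be sketched as follows. The rule fields mirror the table; the logic is illustrative, not CloudStack's actual implementation:

```python
import ipaddress

# Each rule: (number, protocol, traffic_type, action, cidr)
DEFAULT_DENY = [
    (1, "all", "ingress", "deny", "0.0.0.0/0"),
    (2, "all", "egress",  "deny", "0.0.0.0/0"),
]

def evaluate(acl, traffic_type, protocol, source_ip):
    """Walk the ACL in rule-number order and return the first match."""
    addr = ipaddress.ip_address(source_ip)
    for number, proto, direction, action, cidr in sorted(acl):
        if direction != traffic_type:
            continue
        if proto not in ("all", protocol):
            continue
        if addr in ipaddress.ip_network(cidr):
            return action
    return "deny"  # nothing matched: block by default

# With the default ACL, all ingress traffic is denied.
assert evaluate(DEFAULT_DENY, "ingress", "tcp", "192.168.0.5") == "deny"

# A lower-numbered allow rule is evaluated before a higher-numbered deny.
custom = [(10, "tcp", "ingress", "allow", "192.168.0.0/22"),
          (100, "all", "ingress", "deny", "0.0.0.0/0")]
assert evaluate(custom, "ingress", "tcp", "192.168.0.5") == "allow"
```
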
Creating ACL Lists
  1. Log in to the CloudStack UI as an administrator or end user.

  2. In the left navigation, choose Network.

  3. In the Select view, select VPC.

    All the VPCs that you have created for the account are listed on the page.

  4. Click the Configure button of the VPC.

    For each tier, the following options are displayed:

    • Internal LB
    • Public LB IP
    • Static NAT
    • Virtual Machines
    • CIDR

    The following router information is displayed:

    • Private Gateways
    • Public IP Addresses
    • Site-to-Site VPNs
    • Network ACL Lists
  5. Select Network ACL Lists.

    The following default rules are displayed in the Network ACLs page: default_allow, default_deny.

  6. Click Add ACL Lists, and specify the following:

    • ACL List Name: A name for the ACL list.
    • Description: A short description of the ACL list that can be displayed to users.
Creating an ACL Rule
  1. Log in to the CloudStack UI as an administrator or end user.

  2. In the left navigation, choose Network.

  3. In the Select view, select VPC.

    All the VPCs that you have created for the account are listed on the page.

  4. Click the Configure button of the VPC.

  5. Select Network ACL Lists.

    In addition to the custom ACL lists you have created, the following default rules are displayed in the Network ACLs page: default_allow, default_deny.

  6. Select the desired ACL list.

  7. Select the ACL List Rules tab.

    To add an ACL rule, fill in the following fields to specify what kind of network traffic is allowed in the VPC.

    • Rule Number: The order in which the rules are evaluated.
    • CIDR: The CIDR acts as the Source CIDR for the Ingress rules, and Destination CIDR for the Egress rules. To accept traffic only from or to the IP addresses within a particular address block, enter a CIDR or a comma-separated list of CIDRs. The CIDR is the base IP address of the incoming traffic. For example, 192.168.0.0/22. To allow all CIDRs, set to 0.0.0.0/0.
    • Action: The action to take: allow or block the traffic.
    • Protocol: The networking protocol that sources use to send traffic to the tier. The TCP and UDP protocols are typically used for data exchange and end-user communications. The ICMP protocol is typically used to send error messages or network monitoring data. All matches all traffic. The other option is Protocol Number.
    • Start Port, End Port (TCP, UDP only): A range of listening ports that are the destination for the incoming traffic. If you are opening a single port, use the same number in both fields.
    • Protocol Number: The protocol number associated with IPv4 or IPv6. For more information, see Protocol Numbers.
    • ICMP Type, ICMP Code (ICMP only): The type of message and error code that will be sent.
    • Traffic Type: The type of traffic: Incoming or outgoing.
  8. Click Add. The ACL rule is added.

    You can edit the tags assigned to the ACL rules and delete the ACL rules you have created. Click the appropriate button in the Details tab.

Creating a Tier with Custom ACL List
  1. Create a VPC.

  2. Create a custom ACL list.

  3. Add ACL rules to the ACL list.

  4. Create a tier in the VPC.

    Select the desired ACL list while creating a tier.

  5. Click OK.

Assigning a Custom ACL List to a Tier
  1. Create a VPC.

  2. Create a tier in the VPC.

  3. Associate the tier with the default ACL rule.

  4. Create a custom ACL list.

  5. Add ACL rules to the ACL list.

  6. Select the tier for which you want to assign the custom ACL.

  7. Click the Replace ACL List icon.

    The Replace ACL List dialog is displayed.

  8. Select the desired ACL list.

  9. Click OK.

Adding a Private Gateway to a VPC

A private gateway can be added by the root admin only. The VPC private network has a 1:1 relationship with the NIC of the physical network. You can configure multiple private gateways to a single VPC. No gateways with duplicated VLAN and IP are allowed in the same data center.

  1. Log in to the CloudStack UI as an administrator or end user.

  2. In the left navigation, choose Network.

  3. In the Select view, select VPC.

    All the VPCs that you have created for the account are listed on the page.

  4. Click the Configure button of the VPC to which you want to add the private gateway.

    The VPC page is displayed where all the tiers you created are listed in a diagram.

  5. Click the Settings icon.

    The following options are displayed.

    • Internal LB
    • Public LB IP
    • Static NAT
    • Virtual Machines
    • CIDR

    The following router information is displayed:

    • Private Gateways
    • Public IP Addresses
    • Site-to-Site VPNs
    • Network ACL Lists
  6. Select Private Gateways.

    The Gateways page is displayed.

  7. Click Add new gateway:

    adding a private gateway for the VPC.

  8. Specify the following:

    • Physical Network: The physical network you have created in the zone.

    • IP Address: The IP address associated with the VPC gateway.

    • Gateway: The gateway through which the traffic is routed to and from the VPC.

    • Netmask: The netmask associated with the VPC gateway.

    • VLAN: The VLAN associated with the VPC gateway.

    • Source NAT: Select this option to enable the source NAT service on the VPC private gateway.

      See “Source NAT on Private Gateway”.

    • ACL: Controls both ingress and egress traffic on a VPC private gateway. By default, all the traffic is blocked.

      See “ACL on Private Gateway”.

    The new gateway appears in the list. You can repeat these steps to add more gateways for this VPC.

Source NAT on Private Gateway

You might want to deploy multiple VPCs with the same super CIDR and guest tier CIDRs. Therefore, guest VMs from different VPCs can have the same IPs and reach an enterprise data center through the private gateway. In such cases, a NAT service needs to be configured on the private gateway to avoid IP conflicts. If Source NAT is enabled, the guest VMs in the VPC reach the enterprise network via the private gateway IP address by using the NAT service.

The Source NAT service on a private gateway can be enabled while adding the private gateway. On deletion of a private gateway, source NAT rules specific to the private gateway are deleted.

To enable source NAT on existing private gateways, delete them and recreate them with source NAT enabled.

ACL on Private Gateway

The traffic on the VPC private gateway is controlled by creating both ingress and egress network ACL rules. The ACLs contain both allow and deny rules. By default, all ingress traffic to the private gateway interface and all egress traffic out of the private gateway interface are blocked.

You can change this default behaviour while creating a private gateway. Alternatively, you can do the following:

  1. In a VPC, identify the Private Gateway you want to work with.

  2. In the Private Gateway page, do either of the following:

    • Use the Quickview. See step 3.
    • Use the Details tab. See steps 4 through 6.
  3. In the Quickview of the selected Private Gateway, click Replace ACL, select the ACL rule, then click OK.

  4. Click the IP address of the Private Gateway you want to work with.

  5. In the Details tab, click the Replace ACL button.

    The Replace ACL dialog is displayed.

  6. Select the ACL rule, then click OK.

    Wait a few seconds. The new ACL rule is displayed on the Details page.

Creating a Static Route

CloudStack enables you to specify routing for the VPN connection you create. You can enter one or more CIDR addresses to indicate which traffic is to be routed back to the gateway.

  1. In a VPC, identify the Private Gateway you want to work with.

  2. In the Private Gateway page, click the IP address of the Private Gateway you want to work with.

  3. Select the Static Routes tab.

  4. Specify the CIDR of destination network.

  5. Click Add.

    Wait a few seconds until the new route is created.

Blacklisting Routes

CloudStack enables you to block a list of routes so that they are not assigned to any of the VPC private gateways. Specify the list of routes that you want to blacklist in the blacklisted.routes global parameter. Note that the parameter update affects only new static route creations. If you block an existing static route, it remains intact and continues functioning. You cannot add a static route if the route is blacklisted for the zone.
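
As a sketch of the blacklist check (the parameter name blacklisted.routes is CloudStack's; the helper below is hypothetical), a proposed static route can be tested for overlap with the blacklisted ranges:

```python
import ipaddress

def route_allowed(route_cidr, blacklisted_routes):
    """Return False if the proposed static route overlaps any
    blacklisted range, mirroring the behavior described above."""
    route = ipaddress.ip_network(route_cidr)
    return not any(route.overlaps(ipaddress.ip_network(b))
                   for b in blacklisted_routes)

blacklist = ["192.168.100.0/24"]
assert not route_allowed("192.168.100.0/24", blacklist)
assert route_allowed("192.168.200.0/24", blacklist)
```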

Deploying VMs to the Tier
  1. Log in to the CloudStack UI as an administrator or end user.

  2. In the left navigation, choose Network.

  3. In the Select view, select VPC.

    All the VPCs that you have created for the account are listed on the page.

  4. Click the Configure button of the VPC to which you want to deploy the VMs.

    The VPC page is displayed where all the tiers you have created are listed.

  5. Click Virtual Machines tab of the tier to which you want to add a VM.

    adding a VM to a vpc.

    The Add Instance page is displayed.

    Follow the on-screen instruction to add an instance. For information on adding an instance, see the Installation Guide.

Deploying VMs to VPC Tier and Shared Networks

CloudStack allows you to deploy VMs on a VPC tier and one or more shared networks. With this feature, VMs deployed in a multi-tier application can receive monitoring services via a shared network provided by a service provider.

  1. Log in to the CloudStack UI as an administrator.

  2. In the left navigation, choose Instances.

  3. Click Add Instance.

  4. Select a zone.

  5. Select a template or ISO, then follow the steps in the wizard.

  6. Ensure that your hardware is capable of running the selected service offering.

  7. Under Networks, select the desired networks for the VM you are launching.

    You can deploy a VM to a VPC tier and multiple shared networks.

    adding a VM to a VPC tier and shared network.

  8. Click Next, review the configuration and click Launch.

    Your VM will be deployed to the selected VPC tier and shared network.

Acquiring a New IP Address for a VPC

When you acquire an IP address, it is allocated to the VPC, not to the guest networks within the VPC. The IP is associated with a guest network only when the first port forwarding, load balancing, or static NAT rule is created for the IP or the network. An IP cannot be associated with more than one network at a time.

  1. Log in to the CloudStack UI as an administrator or end user.

  2. In the left navigation, choose Network.

  3. In the Select view, select VPC.

    All the VPCs that you have created for the account are listed on the page.

  4. Click the Configure button of the VPC to which you want to deploy the VMs.

    The VPC page is displayed where all the tiers you created are listed in a diagram.

    The following options are displayed.

    • Internal LB
    • Public LB IP
    • Static NAT
    • Virtual Machines
    • CIDR

    The following router information is displayed:

    • Private Gateways
    • Public IP Addresses
    • Site-to-Site VPNs
    • Network ACL Lists
  5. Select IP Addresses.

    The Public IP Addresses page is displayed.

  6. Click Acquire New IP, and click Yes in the confirmation dialog.

    You are prompted for confirmation because, typically, IP addresses are a limited resource. Within a few moments, the new IP address should appear with the state Allocated. You can now use the IP address in port forwarding, load balancing, and static NAT rules.

Releasing an IP Address Allotted to a VPC

The IP address is a limited resource. If you no longer need a particular IP, you can disassociate it from its VPC and return it to the pool of available addresses. An IP address can be released from its tier only when all the networking rules (port forwarding, load balancing, or static NAT) are removed for this IP address. The released IP address still belongs to the same VPC.

  1. Log in to the CloudStack UI as an administrator or end user.

  2. In the left navigation, choose Network.

  3. In the Select view, select VPC.

    All the VPCs that you have created for the account are listed on the page.

  4. Click the Configure button of the VPC whose IP you want to release.

    The VPC page is displayed where all the tiers you created are listed in a diagram.

    The following options are displayed.

    • Internal LB
    • Public LB IP
    • Static NAT
    • Virtual Machines
    • CIDR

    The following router information is displayed:

    • Private Gateways
    • Public IP Addresses
    • Site-to-Site VPNs
    • Network ACL Lists
  5. Select Public IP Addresses.

    The IP Addresses page is displayed.

  6. Click the IP you want to release.

  7. In the Details tab, click the Release IP button.

Enabling or Disabling Static NAT on a VPC

A static NAT rule maps a public IP address to the private IP address of a VM in a VPC to allow Internet traffic to it. This section tells how to enable or disable static NAT for a particular IP address in a VPC.

If port forwarding rules are already in effect for an IP address, you cannot enable static NAT to that IP.

If a guest VM is part of more than one network, static NAT rules will function only if they are defined on the default network.
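
The two restrictions above can be sketched as a simple guard (illustrative only; the function and rule names are hypothetical, not part of CloudStack):

```python
def can_enable_static_nat(active_rules, is_default_network=True):
    """Refuse static NAT when port forwarding is already in effect on
    the IP, and flag the multi-network case where static NAT rules
    function only on the VM's default network."""
    if "port_forwarding" in active_rules:
        return False
    return is_default_network

assert can_enable_static_nat(set())
assert not can_enable_static_nat({"port_forwarding"})
```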

  1. Log in to the CloudStack UI as an administrator or end user.

  2. In the left navigation, choose Network.

  3. In the Select view, select VPC.

    All the VPCs that you have created for the account are listed on the page.

  4. Click the Configure button of the VPC to which you want to deploy the VMs.

    The VPC page is displayed where all the tiers you created are listed in a diagram.

    For each tier, the following options are displayed.

    • Internal LB
    • Public LB IP
    • Static NAT
    • Virtual Machines
    • CIDR

    The following router information is displayed:

    • Private Gateways
    • Public IP Addresses
    • Site-to-Site VPNs
    • Network ACL Lists
  5. In the Router node, select Public IP Addresses.

    The IP Addresses page is displayed.

  6. Click the IP you want to work with.

  7. In the Details tab, click the Static NAT button. The button toggles between Enable and Disable, depending on whether static NAT is currently enabled for the IP address.

  8. If you are enabling static NAT, a dialog appears as follows:

    selecting a tier to apply staticNAT.

  9. Select the tier and the destination VM, then click Apply.

Adding Load Balancing Rules on a VPC

In a VPC, you can configure two types of load balancing: external LB and internal LB. External LB is an LB rule created to redirect traffic received at a public IP of the VPC virtual router. The traffic is load balanced within a tier based on your configuration. Citrix NetScaler and the VPC virtual router are supported for external LB. With the internal LB service, traffic received at a tier is load balanced across different VMs within that tier. For example, traffic that reaches the Web tier is redirected to another VM in that tier. External load balancing devices are not supported for internal LB. The service is provided by an internal LB VM configured on the target tier.

Load Balancing Within a Tier (External LB)

A CloudStack user or administrator may create load balancing rules that balance traffic received at a public IP to one or more VMs that belong to a network tier that provides load balancing service in a VPC. A user creates a rule, specifies an algorithm, and assigns the rule to a set of VMs within a tier.

Enabling NetScaler as the LB Provider on a VPC Tier
  1. Add and enable Netscaler VPX in dedicated mode.

    Netscaler can be used in a VPC environment only if it is in dedicated mode.

  2. Create a network offering, as given in “Creating a Network Offering for External LB”.

  3. Create a VPC with Netscaler as the Public LB provider.

    For more information, see “Adding a Virtual Private Cloud”.

  4. For the VPC, acquire an IP.

  5. Create an external load balancing rule and apply, as given in Creating an External LB Rule.

Creating a Network Offering for External LB

To have external LB support on VPC, create a network offering as follows:

  1. Log in to the CloudStack UI as a user or admin.
  2. From the Select Offering drop-down, choose Network Offering.
  3. Click Add Network Offering.
  4. In the dialog, make the following choices:
    • Name: Any desired name for the network offering.
    • Description: A short description of the offering that can be displayed to users.
    • Network Rate: Allowed data transfer rate in MB per second.
    • Traffic Type: The type of network traffic that will be carried on the network.
    • Guest Type: Choose whether the guest network is isolated or shared.
    • Persistent: Indicate whether the guest network is persistent. A network that can be provisioned without deploying a VM on it is termed a persistent network.
    • VPC: This option indicates whether the guest network is Virtual Private Cloud-enabled. A Virtual Private Cloud (VPC) is a private, isolated part of CloudStack. A VPC can have its own virtual network topology that resembles a traditional physical network. For more information on VPCs, see “About Virtual Private Clouds”.
    • Specify VLAN: (Isolated guest networks only) Indicate whether a VLAN should be specified when this offering is used.
    • Supported Services: Select Load Balancer. Use Netscaler or VpcVirtualRouter.
    • Load Balancer Type: Select Public LB from the drop-down.
    • LB Isolation: Select Dedicated if Netscaler is used as the external LB provider.
    • System Offering: Choose the system service offering that you want virtual routers to use in this network.
    • Conserve mode: Indicate whether to use conserve mode. In this mode, network resources are allocated only when the first virtual machine starts in the network.
  5. Click OK and the network offering is created.
Creating an External LB Rule
  1. Log in to the CloudStack UI as an administrator or end user.

  2. In the left navigation, choose Network.

  3. In the Select view, select VPC.

    All the VPCs that you have created for the account are listed on the page.

  4. Click the Configure button of the VPC for which you want to configure load balancing rules.

    The VPC page is displayed where all the tiers you created are listed in a diagram.

    For each tier, the following options are displayed:

    • Internal LB
    • Public LB IP
    • Static NAT
    • Virtual Machines
    • CIDR

    The following router information is displayed:

    • Private Gateways
    • Public IP Addresses
    • Site-to-Site VPNs
    • Network ACL Lists
  5. In the Router node, select Public IP Addresses.

    The IP Addresses page is displayed.

  6. Click the IP address for which you want to create the rule, then click the Configuration tab.

  7. In the Load Balancing node of the diagram, click View All.

  8. Select the tier to which you want to apply the rule.

  9. Specify the following:

    • Name: A name for the load balancer rule.
    • Public Port: The port that receives the incoming traffic to be balanced.
    • Private Port: The port that the VMs will use to receive the traffic.
    • Algorithm. Choose the load balancing algorithm you want CloudStack to use. CloudStack supports the following well-known algorithms:
      • Round-robin
      • Least connections
      • Source
    • Stickiness. (Optional) Click Configure and choose the algorithm for the stickiness policy. See Sticky Session Policies for Load Balancer Rules.
    • Add VMs: Click Add VMs, then select two or more VMs that will divide the load of incoming traffic, and click Apply.

The new load balancing rule appears in the list. You can repeat these steps to add more load balancing rules for this IP address.
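As a rough illustration of how the three algorithms differ (a sketch for intuition only, not CloudStack's own code — the actual balancing is performed by the VPC virtual router or NetScaler), the selection logic can be modeled like this:

```python
import hashlib
import itertools

class RoundRobin:
    """Hand out backends in a fixed rotation."""
    def __init__(self, backends):
        self._cycle = itertools.cycle(backends)

    def pick(self, client_ip=None):
        return next(self._cycle)

class LeastConnections:
    """Pick the backend currently serving the fewest connections."""
    def __init__(self, backends):
        self.active = {b: 0 for b in backends}

    def pick(self, client_ip=None):
        backend = min(self.active, key=self.active.get)
        self.active[backend] += 1
        return backend

    def release(self, backend):
        # Call when a connection to `backend` closes.
        self.active[backend] -= 1

class SourceHash:
    """Map the same client IP to the same backend every time."""
    def __init__(self, backends):
        self.backends = list(backends)

    def pick(self, client_ip):
        digest = hashlib.md5(client_ip.encode()).digest()
        return self.backends[digest[0] % len(self.backends)]
```

For example, `RoundRobin(["web01", "web02"])` alternates between the two VMs, while `SourceHash` keeps a given client pinned to one VM without any shared connection state.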

Load Balancing Across Tiers

CloudStack supports sharing workload across different tiers within your VPC. Assume that multiple tiers are set up in your environment, such as Web tier and Application tier. Traffic to each tier is balanced on the VPC virtual router on the public side, as explained in “Adding Load Balancing Rules on a VPC”. If you want the traffic coming from the Web tier to the Application tier to be balanced, use the internal load balancing feature offered by CloudStack.

How Does Internal LB Work in VPC?

In this figure, a public LB rule is created for the public IP 72.52.125.10 with public port 80 and private port 81. The LB rule, created on the VPC virtual router, is applied on the traffic coming from the Internet to the VMs on the Web tier. On the Application tier two internal load balancing rules are created. An internal LB rule for the guest IP 10.10.10.4 with load balancer port 23 and instance port 25 is configured on the VM, InternalLBVM1. Another internal LB rule for the guest IP 10.10.10.4 with load balancer port 45 and instance port 46 is configured on the VM, InternalLBVM1. Another internal LB rule for the guest IP 10.10.10.6, with load balancer port 23 and instance port 25 is configured on the VM, InternalLBVM2.

Configuring internal LB for VPC

Guidelines
  • Internal LB and Public LB are mutually exclusive on a tier. If the tier has LB on the public side, then it can’t have the Internal LB.
  • Internal LB is supported only on VPC networks in the CloudStack 4.2 release.
  • Only the Internal LB VM can act as the Internal LB provider in the CloudStack 4.2 release.
  • Network upgrade is not supported from the network offering with Internal LB to the network offering with Public LB.
  • Multiple tiers can have internal LB support in a VPC.
  • Only one tier can have Public LB support in a VPC.
Enabling Internal LB on a VPC Tier
  1. Create a network offering, as given in Creating a Network Offering for Internal LB.
  2. Create an internal load balancing rule and apply, as given in Creating an Internal LB Rule.
Creating a Network Offering for Internal LB

To have internal LB support on VPC, either use the default offering, DefaultIsolatedNetworkOfferingForVpcNetworksWithInternalLB, or create a network offering as follows:

  1. Log in to the CloudStack UI as a user or admin.
  2. From the Select Offering drop-down, choose Network Offering.
  3. Click Add Network Offering.
  4. In the dialog, make the following choices:
    • Name: Any desired name for the network offering.
    • Description: A short description of the offering that can be displayed to users.
    • Network Rate: Allowed data transfer rate in MB per second.
    • Traffic Type: The type of network traffic that will be carried on the network.
    • Guest Type: Choose whether the guest network is isolated or shared.
    • Persistent: Indicate whether the guest network is persistent or not. A network that you can provision without having to deploy a VM on it is termed a persistent network.
    • VPC: This option indicates whether the guest network is Virtual Private Cloud-enabled. A Virtual Private Cloud (VPC) is a private, isolated part of CloudStack. A VPC can have its own virtual network topology that resembles a traditional physical network. For more information on VPCs, see “About Virtual Private Clouds”.
    • Specify VLAN: (Isolated guest networks only) Indicate whether a VLAN should be specified when this offering is used.
    • Supported Services: Select Load Balancer. Select InternalLbVM from the provider list.
    • Load Balancer Type: Select Internal LB from the drop-down.
    • System Offering: Choose the system service offering that you want virtual routers to use in this network.
    • Conserve mode: Indicate whether to use conserve mode. In this mode, network resources are allocated only when the first virtual machine starts in the network.
  5. Click OK and the network offering is created.
Creating an Internal LB Rule

When you create an Internal LB rule and apply it to a VM, an Internal LB VM, which is responsible for load balancing, is created.

You can view the created Internal LB VM in the Instances page if you navigate to Infrastructure > Zones > <zone_name> > <physical_network_name> > Network Service Providers > Internal LB VM. You can manage the Internal LB VMs from this location as needed.

  1. Log in to the CloudStack UI as an administrator or end user.

  2. In the left navigation, choose Network.

  3. In the Select view, select VPC.

    All the VPCs that you have created for the account are listed on the page.

  4. Locate the VPC for which you want to configure internal LB, then click Configure.

    The VPC page is displayed, where all the tiers you created are listed in a diagram.

  5. Locate the Tier for which you want to configure an internal LB rule, click Internal LB.

    In the Internal LB page, click Add Internal LB.

  6. In the dialog, specify the following:

    • Name: A name for the load balancer rule.

    • Description: A short description of the rule that can be displayed to users.

    • Source IP Address: (Optional) The source IP from which traffic originates. The IP is acquired from the CIDR of that particular tier on which you want to create the Internal LB rule. If not specified, the IP address is automatically allocated from the network CIDR.

      For every Source IP, a new Internal LB VM is created for load balancing.

    • Source Port: The port associated with the source IP. Traffic on this port is load balanced.

    • Instance Port: The port of the internal LB VM.

    • Algorithm. Choose the load balancing algorithm you want CloudStack to use. CloudStack supports the following well-known algorithms:

      • Round-robin
      • Least connections
      • Source
Adding a Port Forwarding Rule on a VPC
  1. Log in to the CloudStack UI as an administrator or end user.

  2. In the left navigation, choose Network.

  3. In the Select view, select VPC.

    All the VPCs that you have created for the account are listed on the page.

  4. Click the Configure button of the VPC to which you want to deploy the VMs.

    The VPC page is displayed where all the tiers you created are listed in a diagram.

    For each tier, the following options are displayed:

    • Internal LB
    • Public LB IP
    • Static NAT
    • Virtual Machines
    • CIDR

    The following router information is displayed:

    • Private Gateways
    • Public IP Addresses
    • Site-to-Site VPNs
    • Network ACL Lists
  5. In the Router node, select Public IP Addresses.

    The IP Addresses page is displayed.

  6. Click the IP address for which you want to create the rule, then click the Configuration tab.

  7. In the Port Forwarding node of the diagram, click View All.

  8. Select the tier to which you want to apply the rule.

  9. Specify the following:

    • Public Port: The port to which public traffic will be addressed on the IP address you acquired in the previous step.

    • Private Port: The port on which the instance is listening for forwarded public traffic.

    • Protocol: The communication protocol in use between the two ports.

      • TCP
      • UDP
    • Add VM: Click Add VM. Select the name of the instance to which this rule applies, and click Apply.

      You can test the rule by opening an SSH session to the instance.

Removing Tiers

You can remove a tier from a VPC. A removed tier cannot be revoked. When a tier is removed, only the resources of the tier are expunged. All the network rules (port forwarding, load balancing, and static NAT) and the IP addresses associated with the tier are removed. The IP addresses still belong to the same VPC.

  1. Log in to the CloudStack UI as an administrator or end user.

  2. In the left navigation, choose Network.

  3. In the Select view, select VPC.

    All the VPCs that you have created for the account are listed on the page.

  4. Click the Configure button of the VPC for which you want to set up tiers.

    The Configure VPC page is displayed. Locate the tier you want to work with.

  5. Select the tier you want to remove.

  6. In the Network Details tab, click the Delete Network button to remove the tier.

    Click Yes to confirm. Wait for some time for the tier to be removed.

Editing, Restarting, and Removing a Virtual Private Cloud

Note

Ensure that all the tiers are removed before you remove a VPC.

  1. Log in to the CloudStack UI as an administrator or end user.

  2. In the left navigation, choose Network.

  3. In the Select view, select VPC.

    All the VPCs that you have created for the account are listed on the page.

  4. Select the VPC you want to work with.

  5. In the Details tab, click the Remove VPC button.

    You can also remove the VPC by using the remove button in the Quick View.

    You can edit the name and description of a VPC. To do that, select the VPC, then click the Edit button.

    To restart a VPC, select the VPC, then click the Restart button.

Persistent Networks

The network that you can provision without having to deploy any VMs on it is called a persistent network. A persistent network can be part of a VPC or a non-VPC environment.

When you create other types of network, a network is only a database entry until the first VM is created on that network. When the first VM is created, a VLAN ID is assigned and the network is provisioned. Also, when the last VM is destroyed, the VLAN ID is released and the network is no longer available. With the addition of persistent networks, you have the ability to create a network in CloudStack on which physical devices can be deployed without having to run any VMs.

One of the advantages of having a persistent network is that you can create a VPC with a tier consisting of only physical devices. For example, you might create a VPC for a three-tier application, deploy VMs for Web and Application tier, and use physical machines for the Database tier. Another use case is that if you are providing services by using physical hardware, you can define the network as persistent and therefore even if all its VMs are destroyed the services will not be discontinued.

Persistent Network Considerations
  • Persistent network is designed for isolated networks.
  • All default network offerings are non-persistent.
  • A network offering cannot be edited, because changing it would affect the behavior of the existing networks that were created using it.
  • When you create a guest network, the network offering that you select defines the network persistence. This in turn depends on whether persistent network is enabled in the selected network offering.
  • An existing network can be made persistent by changing its network offering to an offering that has the Persistent option enabled. While setting this property, even if the network has no running VMs, the network is provisioned.
  • An existing network can be made non-persistent by changing its network offering to an offering that has the Persistent option disabled. If the network has no running VMs, during the next network garbage collection run the network is shut down.
  • When the last VM on a network is destroyed, the network garbage collector checks if the network offering associated with the network is persistent, and shuts down the network only if it is non-persistent.
Creating a Persistent Guest Network

To create a persistent network, perform the following:

  1. Create a network offering with the Persistent option enabled.

    See “Creating a New Network Offering”.

  2. Select Network from the left navigation pane.

  3. Select the guest network that you want to offer this network service to.

  4. Click the Edit button.

  5. From the Network Offering drop-down, select the persistent network offering you have just created.

  6. Click OK.

Setup a Palo Alto Networks Firewall

Functionality Provided

This implementation enables the orchestration of a Palo Alto Networks Firewall from within CloudStack UI and API.

The following features are supported:

  • List/Add/Delete Palo Alto Networks service provider
  • List/Add/Delete Palo Alto Networks network service offering
  • List/Add/Delete Palo Alto Networks network using the above service offering
  • Add an instance to a Palo Alto Networks network
  • Source NAT management on network create and delete
  • List/Add/Delete Ingress Firewall rule
  • List/Add/Delete Egress Firewall rule (both ‘Allow’ and ‘Deny’ default rules supported)
  • List/Add/Delete Port Forwarding rule
  • List/Add/Delete Static NAT rule
  • Apply a Threat Profile to all firewall rules (more details in the Additional Features section)
  • Apply a Log Forwarding profile to all firewall rules (more details in the Additional Features section)
Initial Palo Alto Networks Firewall Configuration
Anatomy of the Palo Alto Networks Firewall
  • In ‘Network > Interfaces’ there is a list of physical interfaces as well as aggregated physical interfaces which are used for managing traffic in and out of the Palo Alto Networks Firewall device.
  • In ‘Network > Zones’ there is a list of the different configuration zones. This implementation will use two zones; a public (defaults to ‘untrust’) and private (defaults to ‘trust’) zone.
  • In ‘Network > Virtual Routers’ there is a list of VRs which handle traffic routing for the Palo Alto Firewall. We only use a single Virtual Router on the firewall and it is used to handle all the routing to the next network hop.
  • In ‘Objects > Security Profile Groups’ there is a list of profiles which can be applied to firewall rules. These profiles are used to better understand the types of traffic that are flowing through your network. They are configured when you add the firewall provider to CloudStack.
  • In ‘Objects > Log Forwarding’ there is a list of profiles which can be applied to firewall rules. These profiles are used to better track the logs generated by the firewall. They are configured when you add the firewall provider to CloudStack.
  • In ‘Policies > Security’ there is a list of firewall rules that are currently configured. You will not need to modify this section because it will be completely automated by CloudStack, but you can review the firewall rules which have been created here.
  • In ‘Policies > NAT’ there is a list of the different NAT rules. You will not need to modify this section because it will be completely automated by CloudStack, but you can review the different NAT rules that have been created here. Source NAT, Static NAT and Destination NAT (Port Forwarding) rules will show up in this list.
Configure the Public / Private Zones on the firewall

No manual configuration is required to setup these zones because CloudStack will configure them automatically when you add the Palo Alto Networks firewall device to CloudStack as a service provider. This implementation depends on two zones, one for the public side and one for the private side of the firewall.

  • The public zone (defaults to ‘untrust’) will contain all of the public interfaces and public IPs.
  • The private zone (defaults to ‘trust’) will contain all of the private interfaces and guest network gateways.

The NAT and firewall rules will be configured between these zones.

Configure the Public / Private Interfaces on the firewall

This implementation supports standard physical interfaces as well as grouped physical interfaces called aggregated interfaces. Both standard interfaces and aggregated interfaces are treated the same, so they can be used interchangeably. For this document, we will assume that we are using ‘ethernet1/1’ as the public interface and ‘ethernet1/2’ as the private interface. If aggregated interfaces were used, you would use something like ‘ae1’ and ‘ae2’ as the interfaces.

This implementation requires that the ‘Interface Type’ be set to ‘Layer3’ for both the public and private interfaces. If you want to be able to use the ‘Untagged’ VLAN tag for public traffic in CloudStack, you will need to enable support for it in the public ‘ethernet1/1’ interface (details below).

Steps to configure the Public Interface:

  1. Log into Palo Alto Networks Firewall
  2. Navigate to ‘Network > Interfaces’
  3. Click on ‘ethernet1/1’ (for aggregated ethernet, it will probably be called ‘ae1’)
  4. Select ‘Layer3’ from the ‘Interface Type’ list
  5. Click ‘Advanced’
  6. Check the ‘Untagged Subinterface’ check-box
  7. Click ‘OK’

Steps to configure the Private Interface:

  1. Click on ‘ethernet1/2’ (for aggregated ethernet, it will probably be called ‘ae2’)
  2. Select ‘Layer3’ from the ‘Interface Type’ list
  3. Click ‘OK’
Configure a Virtual Router on the firewall

The Virtual Router on the Palo Alto Networks Firewall is not to be confused with the Virtual Routers that CloudStack provisions. For this implementation, the Virtual Router on the Palo Alto Networks Firewall will ONLY handle the upstream routing from the Firewall to the next hop.

Steps to configure the Virtual Router:

  1. Log into Palo Alto Networks Firewall
  2. Navigate to ‘Network > Virtual Routers’
  3. Select the ‘default’ Virtual Router or Add a new Virtual Router if there are none in the list
    • If you added a new Virtual Router, you will need to give it a ‘Name’
  4. Navigate to ‘Static Routes > IPv4’
  5. ‘Add’ a new static route
    • Name: next_hop (you can name it anything you want)
    • Destination: 0.0.0.0/0 (send all traffic to this route)
    • Interface: ethernet1/1 (or whatever you set your public interface as)
    • Next Hop: (specify the gateway IP for the next hop in your network)
    • Click ‘OK’
  6. Click ‘OK’
Configure the default Public Subinterface

The current implementation of the Palo Alto Networks firewall integration uses CIDRs in the form of ‘w.x.y.z/32’ for the public IP addresses that CloudStack provisions. Because no broadcast or gateway IPs are in this single IP range, there is no way for the firewall to route the traffic for these IPs. To route the traffic for these IPs, we create a single subinterface on the public interface with an IP and a CIDR which encapsulates the CloudStack public IP range. This IP will need to be inside the subnet defined by the CloudStack public range netmask, but outside the CloudStack public IP range. The CIDR should reflect the same subnet defined by the CloudStack public range netmask. The name of the subinterface is determined by the VLAN configured for the public range in CloudStack.

To clarify this concept, we will use the following example.

Example CloudStack Public Range Configuration:

  • Gateway: 172.30.0.1
  • Netmask: 255.255.255.0
  • IP Range: 172.30.0.100 - 172.30.0.199
  • VLAN: Untagged

Configure the Public Subinterface:

  1. Log into Palo Alto Networks Firewall
  2. Navigate to ‘Network > Interfaces’
  3. Select the ‘ethernet1/1’ line (not clicking on the name)
  4. Click ‘Add Subinterface’ at the bottom of the window
  5. Enter ‘Interface Name’: ‘ethernet1/1’ . ‘9999’ (the full subinterface name becomes ‘ethernet1/1.9999’)
    • 9999 is used if the CloudStack public range VLAN is ‘Untagged’
    • If the CloudStack public range VLAN is tagged (eg: 333), then the name will reflect that tag
  6. The ‘Tag’ is the VLAN tag that the traffic is sent to the next hop with, so set it accordingly. If you are passing ‘Untagged’ traffic from CloudStack to your next hop, leave it blank. If you want to pass tagged traffic from CloudStack, specify the tag.
  7. Select ‘default’ from the ‘Config > Virtual Router’ drop-down (assuming that is what your virtual router is called)
  8. Click the ‘IPv4’ tab
  9. Select ‘Static’ from the ‘Type’ radio options
  10. Click ‘Add’ in the ‘IP’ section
  11. Enter ‘172.30.0.254/24’ in the new line
    • The IP can be any IP outside the CloudStack public IP range, but inside the CloudStack public range netmask (it can NOT be the gateway IP)
    • The subnet defined by the CIDR should match the CloudStack public range netmask
  12. Click ‘OK’
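The constraints on the subinterface IP (inside the subnet defined by the public range netmask, outside the CloudStack public IP range, and not the gateway) can be checked mechanically. A small sketch using Python's standard ipaddress module, run against the example values above:

```python
import ipaddress

def valid_subinterface_ip(ip, gateway, netmask, range_start, range_end):
    """Return True if `ip` can serve as the public subinterface IP:
    inside the subnet defined by the public range netmask, outside the
    CloudStack public IP range, and not the gateway itself."""
    network = ipaddress.ip_network(f"{gateway}/{netmask}", strict=False)
    addr = ipaddress.ip_address(ip)
    in_subnet = addr in network
    in_public_range = (ipaddress.ip_address(range_start) <= addr
                       <= ipaddress.ip_address(range_end))
    return in_subnet and not in_public_range and addr != ipaddress.ip_address(gateway)

# The example public range configuration from above:
ok = valid_subinterface_ip("172.30.0.254", "172.30.0.1",
                           "255.255.255.0", "172.30.0.100", "172.30.0.199")
```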
Commit configuration on the Palo Alto Networks Firewall

In order for all the changes we just made to take effect, we need to commit the changes.

  1. Click the ‘Commit’ link in the top right corner of the window
  2. Click ‘OK’ in the commit window overlay
  3. Click ‘Close’ to the resulting commit status window after the commit finishes
Setup the Palo Alto Networks Firewall in CloudStack
Add the Palo Alto Networks Firewall as a Service Provider
  1. Navigate to ‘Infrastructure > Zones > ZONE_NAME > Physical Network > NETWORK_NAME (guest) > Configure; Network Service Providers’
  2. Click on ‘Palo Alto’ in the list
  3. Click ‘View Devices’
  4. Click ‘Add Palo Alto Device’
  5. Enter your configuration in the overlay. This example will reflect the details previously used in this guide.
    • IP Address: (the IP of the Palo Alto Networks Firewall)
    • Username: (the admin username for the firewall)
    • Password: (the admin password for the firewall)
    • Type: Palo Alto Firewall
    • Public Interface: ethernet1/1 (use what you setup earlier as the public interface if it is different from my examples)
    • Private Interface: ethernet1/2 (use what you setup earlier as the private interface if it is different from my examples)
    • Number of Retries: 2 (the default is fine)
    • Timeout: 300 (the default is fine)
    • Public Network: untrust (this is the public zone on the firewall and did not need to be configured)
    • Private Network: trust (this is the private zone on the firewall and did not need to be configured)
    • Virtual Router: default (this is the name of the Virtual Router we setup on the firewall)
    • Palo Alto Threat Profile: (not required. name of the ‘Security Profile Groups’ to apply. more details in the ‘Additional Features’ section)
    • Palo Alto Log Profile: (not required. name of the ‘Log Forwarding’ profile to apply. more details in the ‘Additional Features’ section)
    • Capacity: (not required)
    • Dedicated: (not required)
  6. Click ‘OK’
  7. Click on ‘Palo Alto’ in the breadcrumbs to go back one screen.
  8. Click the ‘Enable Provider’ button to enable the provider.
Add a Network Service Offering to use the new Provider

There are 6 ‘Supported Services’ that need to be configured in the network service offering for this functionality. They are DHCP, DNS, Firewall, Source NAT, Static NAT and Port Forwarding. For the other settings, there are probably additional configurations which will work, but I will just document a common case.

  1. Navigate to ‘Service Offerings’
  2. In the drop-down at the top, select ‘Network Offerings’
  3. Click ‘Add Network Offering’
    • Name: (name it whatever you want)
    • Description: (again, can be whatever you want)
    • Guest Type: Isolated
    • Supported Services:
      • DHCP: Provided by ‘VirtualRouter’
      • DNS: Provided by ‘VirtualRouter’
      • Firewall: Provided by ‘PaloAlto’
      • Source NAT: Provided by ‘PaloAlto’
      • Static NAT: Provided by ‘PaloAlto’
      • Port Forwarding: Provided by ‘PaloAlto’
    • System Offering for Router: System Offering For Software Router
    • Supported Source NAT Type: Per account (this is the only supported option)
    • Default egress policy: (both ‘Allow’ and ‘Deny’ are supported)
  4. Click ‘OK’
  5. Click on the newly created service offering
  6. Click the ‘Enable network offering’ button to enable the offering.

When adding networks in CloudStack, select this network offering to use the Palo Alto Networks firewall.

Additional Features

In addition to the standard functionality exposed by CloudStack, we have added a couple of additional features to this implementation. We did not add any new screens to CloudStack, but we have added a couple of fields to the ‘Add Palo Alto Service Provider’ screen which add functionality globally for the device.

Palo Alto Networks Threat Profile

This feature allows you to specify a ‘Security Profile Group’ to be applied to all of the firewall rules which are created on the Palo Alto Networks firewall device.

To create a ‘Security Profile Group’ on the Palo Alto Networks firewall, do the following:

  1. Log into the Palo Alto Networks firewall
  2. Navigate to ‘Objects > Security Profile Groups’
  3. Click ‘Add’ at the bottom of the page to add a new group
  4. Give the group a Name and specify the profiles you would like to include in the group
  5. Click ‘OK’
  6. Click the ‘Commit’ link in the top right of the screen and follow the on screen instructions

Once you have created a profile, you can reference it by Name in the ‘Palo Alto Threat Profile’ field in the ‘Add the Palo Alto Networks Firewall as a Service Provider’ step.

Palo Alto Networks Log Forwarding Profile

This feature allows you to specify a ‘Log Forwarding’ profile to better manage where the firewall logs are sent to. This is helpful for keeping track of issues that can arise on the firewall.

To create a ‘Log Forwarding’ profile on the Palo Alto Networks Firewall, do the following:

  1. Log into the Palo Alto Networks firewall
  2. Navigate to ‘Objects > Log Forwarding’
  3. Click ‘Add’ at the bottom of the page to add a new profile
  4. Give the profile a Name and specify the details you want for the traffic and threat settings
  5. Click ‘OK’
  6. Click the ‘Commit’ link in the top right of the screen and follow the on screen instructions

Once you have created a profile, you can reference it by Name in the ‘Palo Alto Log Profile’ field in the ‘Add the Palo Alto Networks Firewall as a Service Provider’ step.

Limitations
  • The implementation currently only supports a single public IP range in CloudStack
  • Usage tracking is not yet implemented

Managing the Cloud

Managing the Cloud

Using Tags to Organize Resources in the Cloud

A tag is a key-value pair that stores metadata about a resource in the cloud. Tags are useful for categorizing resources. For example, you can tag a user VM with a value that indicates the user's city of residence. In this case, the key would be "city" and the value might be "Toronto" or "Tokyo". You can then request CloudStack to find all resources that have a given tag; for example, VMs for users in a given city.

You can tag a user virtual machine, volume, snapshot, guest network, template, ISO, firewall rule, port forwarding rule, public IP address, security group, load balancer rule, project, VPC, network ACL, or static route. You can not tag a remote access VPN.

You can create, remove, and list tags through the CloudStack UI or the API. You can define multiple tags for each resource; there is no limit on the number of tags, and each tag can be up to 255 characters long. Users can define tags on the resources they own, and administrators can define tags on any resources in the cloud.

An optional input parameter, "tags", exists on many of the list* API commands. The following example shows how to use this new parameter to find all the volumes having tag region=canada or tag city=Toronto:

command=listVolumes
   &listAll=true
   &tags[0].key=region
   &tags[0].value=canada
   &tags[1].key=city
   &tags[1].value=Toronto
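When calling the API directly (outside the UI), the query string above must also carry a signature. The sketch below builds a signed URL following CloudStack's usual signing scheme (sort the parameters, URL-encode the values, lowercase the string, HMAC-SHA1 with the secret key, base64-encode the digest); the endpoint host and the two keys here are placeholders:

```python
import base64
import hashlib
import hmac
from urllib.parse import quote

def build_signed_url(endpoint, api_key, secret_key, command, **params):
    """Build a signed CloudStack API URL.  The signature is the
    base64-encoded HMAC-SHA1 of the lowercased, sorted query string."""
    params = dict(params, command=command, apiKey=api_key, response="json")
    query = "&".join(f"{k}={quote(str(v), safe='')}"
                     for k, v in sorted(params.items(),
                                        key=lambda kv: kv[0].lower()))
    digest = hmac.new(secret_key.encode(), query.lower().encode(),
                      hashlib.sha1).digest()
    signature = quote(base64.b64encode(digest).decode(), safe="")
    return f"{endpoint}?{query}&signature={signature}"

# Hypothetical endpoint and keys; the tag filters mirror the query above.
url = build_signed_url(
    "http://localhost:8080/client/api", "my-api-key", "my-secret-key",
    "listVolumes", listAll="true",
    **{"tags[0].key": "region", "tags[0].value": "canada",
       "tags[1].key": "city", "tags[1].value": "Toronto"})
```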

The following API commands have the "tags" input parameter:

  • listVirtualMachines
  • listVolumes
  • listSnapshots
  • listNetworks
  • listTemplates
  • listIsos
  • listFirewallRules
  • listPortForwardingRules
  • listPublicIpAddresses
  • listSecurityGroups
  • listLoadBalancerRules
  • listProjects
  • listVPCs
  • listNetworkACLs
  • listStaticRoutes

Reporting CPU Sockets

CloudStack manages different types of hosts that contain one or more physical CPU sockets. A CPU socket is considered a unit of measure used for licensing and billing cloud infrastructure. CloudStack provides both UI and API support to collect CPU socket statistics for billing purposes. The Infrastructure tab has a new CPU Sockets entry, where you can view the statistics for the CPU sockets managed by CloudStack; this in turn reflects the size of the cloud. The CPU Socket page shows the number of hosts and sockets used for each host type.

  1. Log in to the CloudStack UI.

  2. In the left navigation bar, click Infrastructure.

  3. On CPU Sockets, click View All.

    The CPU Socket page is displayed. The page shows the number of hosts and CPU sockets based on hypervisor type.

Changing the Database Configuration

The CloudStack Management Server stores database configuration information (such as the hostname, port, and credentials) in the file /etc/cloudstack/management/db.properties. To make a change take effect, edit this file on each Management Server, then restart the Management Server.

Changing the Database Password

You may need to change the password for the MySQL account used by CloudStack. If so, you will need to change the password in MySQL, and then add the encrypted password to /etc/cloudstack/management/db.properties.

  1. Before changing the password, you need to stop the CloudStack Management Server and, if you have deployed it, the usage engine as well.

    # service cloudstack-management stop
    # service cloudstack-usage stop
    
  2. Next, update the password for the CloudStack user on the MySQL server.

    # mysql -u root -p
    

    At the MySQL shell, change the password and flush privileges:

    update mysql.user set password=PASSWORD("newpassword123") where User='cloud';
    flush privileges;
    quit;
    
  3. The next step is to encrypt the password and copy the encrypted password to CloudStack's database configuration (/etc/cloudstack/management/db.properties).

    # java -classpath /usr/share/cloudstack-common/lib/jasypt-1.9.0.jar \
    org.jasypt.intf.cli.JasyptPBEStringEncryptionCLI encrypt.sh \
    input="newpassword123" password="`cat /etc/cloudstack/management/key`" \
    verbose=false
    

File encryption type

Note that this procedure is for the file encryption type. If you use the web encryption type, use password="management_server_secret_key" instead.

  1. Now update the new ciphertext in /etc/cloudstack/management/db.properties. Open ``/etc/cloudstack/management/db.properties`` in a text editor, and update these parameters:

    db.cloud.password=ENC(encrypted_password_from_above)
    db.usage.password=ENC(encrypted_password_from_above)
    
  2. After copying the new password over, you can start CloudStack (and the usage engine, if necessary).

    # service cloudstack-management start
    # service cloudstack-usage start
    

Administrator Alerts

The system provides alerts and events to help with the management of the cloud. Alerts are notices to an administrator, generally delivered by e-mail, notifying the administrator that an error has occurred in the cloud. Alert behavior is configurable.

Events track all of the user and administrator actions in the cloud. For example, every guest VM start creates an associated event. Events are stored in the Management Server's database.

Emails will be sent to administrators under the following circumstances:

  • The Management Server cluster runs low on CPU, memory, or storage resources.

  • The Management Server loses heartbeat from a Host for more than 3 minutes.

  • The Host cluster runs low on CPU, memory, or storage resources.

Sending Alerts to External SNMP and Syslog Managers

In addition to showing administrator alerts on the Dashboard in the CloudStack UI and sending them in email, CloudStack can also send the same alerts to external SNMP or Syslog management software. This is useful if you would prefer to use an SNMP or Syslog manager to monitor your cloud.

The following is the list of alert types that can be sent:

MEMORY = 0 // Available Memory below configured threshold
CPU = 1 // Unallocated CPU below configured threshold
STORAGE =2 // Available Storage below configured threshold
STORAGE_ALLOCATED = 3 // Remaining unallocated Storage is below configured threshold
PUBLIC_IP = 4 // Number of unallocated virtual network public IPs is below configured threshold
PRIVATE_IP = 5 // Number of unallocated private IPs is below configured threshold
SECONDARY_STORAGE = 6 //  Available Secondary Storage in availability zone is below configured threshold
HOST = 7 // Host related alerts like host disconnected
USERVM = 8 // User VM stopped unexpectedly
DOMAIN_ROUTER = 9 // Domain Router VM stopped unexpectedly
CONSOLE_PROXY = 10 // Console Proxy VM stopped unexpectedly
ROUTING = 11 // Lost connection to default route (to the gateway)
STORAGE_MISC = 12 // Storage issue in system VMs
USAGE_SERVER = 13 // No usage server process running
MANAGMENT_NODE = 14 // Management network CIDR is not configured originally
DOMAIN_ROUTER_MIGRATE = 15 // Domain Router VM Migration was unsuccessful
CONSOLE_PROXY_MIGRATE = 16 // Console Proxy VM Migration was unsuccessful
USERVM_MIGRATE = 17 // User VM Migration was unsuccessful
VLAN = 18 // Number of unallocated VLANs is below configured threshold in availability zone
SSVM = 19 // SSVM stopped unexpectedly
USAGE_SERVER_RESULT = 20 // Usage job failed
STORAGE_DELETE = 21 // Failed to delete storage pool
UPDATE_RESOURCE_COUNT = 22 // Failed to update the resource count
USAGE_SANITY_RESULT = 23 // Usage Sanity Check failed
DIRECT_ATTACHED_PUBLIC_IP = 24 // Number of unallocated shared network IPs is low in availability zone
LOCAL_STORAGE = 25 // Remaining unallocated Local Storage is below configured threshold
RESOURCE_LIMIT_EXCEEDED = 26 //Generated when the resource limit exceeds the limit. Currently used for recurring snapshots only

You can also display the most up-to-date list by calling the API command listAlerts.
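API responses report these alerts by numeric type, so a lookup table makes them readable in scripts. The mapping below simply transcribes the list above (a convenience sketch, not part of CloudStack itself):

```python
# Alert type codes as listed above (the MANAGMENT_NODE spelling is
# CloudStack's own).
ALERT_TYPES = {
    0: "MEMORY", 1: "CPU", 2: "STORAGE", 3: "STORAGE_ALLOCATED",
    4: "PUBLIC_IP", 5: "PRIVATE_IP", 6: "SECONDARY_STORAGE",
    7: "HOST", 8: "USERVM", 9: "DOMAIN_ROUTER", 10: "CONSOLE_PROXY",
    11: "ROUTING", 12: "STORAGE_MISC", 13: "USAGE_SERVER",
    14: "MANAGMENT_NODE", 15: "DOMAIN_ROUTER_MIGRATE",
    16: "CONSOLE_PROXY_MIGRATE", 17: "USERVM_MIGRATE", 18: "VLAN",
    19: "SSVM", 20: "USAGE_SERVER_RESULT", 21: "STORAGE_DELETE",
    22: "UPDATE_RESOURCE_COUNT", 23: "USAGE_SANITY_RESULT",
    24: "DIRECT_ATTACHED_PUBLIC_IP", 25: "LOCAL_STORAGE",
    26: "RESOURCE_LIMIT_EXCEEDED",
}

def alert_name(code):
    """Translate a numeric alert type from listAlerts into its name."""
    return ALERT_TYPES.get(code, f"UNKNOWN({code})")
```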

SNMP Alert Details

The supported protocol is SNMP version 2.

Each SNMP trap contains the following information: message, podId, dataCenterId, clusterId, and generationTime.

Syslog Alert Details

CloudStack generates a syslog message for every alert. Each syslog message includes the fields alertType, message, podId, dataCenterId, and clusterId, in the format shown below. If any field does not have a valid value, it will not be included.

Date severity_level Management_Server_IP_Address/Name  alertType:: value dataCenterId:: value  podId:: value  clusterId:: value  message:: value

For example:

Mar  4 10:13:47    WARN    localhost    alertType:: managementNode message:: Management server node 127.0.0.1 is up
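A log collector on the receiving side can recover the fields by splitting on the `key:: value` markers described above. A minimal parsing sketch, exercised on the example line:

```python
import re

def parse_alert(line):
    """Split a CloudStack syslog alert line into its `key:: value`
    fields; everything before the first key (date, severity, server)
    is kept under 'prefix'."""
    matches = list(re.finditer(r"(\w+):: ", line))
    if not matches:
        return {"prefix": line.strip()}
    fields = {"prefix": line[:matches[0].start()].strip()}
    for cur, nxt in zip(matches, matches[1:] + [None]):
        end = nxt.start() if nxt else len(line)
        fields[cur.group(1)] = line[cur.end():end].strip()
    return fields

example = ("Mar  4 10:13:47    WARN    localhost    "
           "alertType:: managementNode message:: "
           "Management server node 127.0.0.1 is up")
fields = parse_alert(example)
```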
Configuring SNMP and Syslog Managers

To configure one or more SNMP managers or Syslog managers to receive alerts from CloudStack:

  1. For an SNMP manager, install the CloudStack MIB file on your SNMP manager system. This maps the SNMP OIDs to trap types that can be more easily read by users. The file must be publicly available. For more information on how to install this file, consult the documentation provided with your SNMP manager.

  2. Edit the file /etc/cloudstack/management/log4j-cloud.xml.

    # vi /etc/cloudstack/management/log4j-cloud.xml
    
  3. Add an entry using the syntax shown below. Follow the appropriate example depending on whether you are adding an SNMP manager or a Syslog manager. To specify multiple external managers, separate the IP addresses and other configuration values with commas (,).

    Note

    The recommended maximum number of SNMP or Syslog managers is 20 for each.

    The following example shows how to configure two SNMP managers at IP addresses 10.1.1.1 and 10.1.1.2. Substitute your own IP addresses, ports, and communities. Do not change the other values (name, threshold, class, and layout values).

    <appender name="SNMP" class="org.apache.cloudstack.alert.snmp.SnmpTrapAppender">
      <param name="Threshold" value="WARN"/>  <!-- Do not edit. The alert feature assumes WARN. -->
      <param name="SnmpManagerIpAddresses" value="10.1.1.1,10.1.1.2"/>
      <param name="SnmpManagerPorts" value="162,162"/>
      <param name="SnmpManagerCommunities" value="public,public"/>
      <layout class="org.apache.cloudstack.alert.snmp.SnmpEnhancedPatternLayout"> <!-- Do not edit -->
        <param name="PairDelimeter" value="//"/>
        <param name="KeyValueDelimeter" value="::"/>
      </layout>
    </appender>
    

    The following example shows how to configure two Syslog managers at IP addresses 10.1.1.1 and 10.1.1.2. Substitute your own IP addresses. You can set Facility to any syslog-defined value, such as LOCAL0 - LOCAL7. Do not change the other values.

    <appender name="ALERTSYSLOG">
      <param name="Threshold" value="WARN"/>
      <param name="SyslogHosts" value="10.1.1.1,10.1.1.2"/>
      <param name="Facility" value="LOCAL6"/>
      <layout>
        <param name="ConversionPattern" value=""/>
      </layout>
    </appender>
    
  4. If your cloud has multiple Management Server nodes, repeat these steps to edit log4j-cloud.xml on every instance.

  5. If you have made these changes while the Management Server is running, wait a few minutes for the changes to take effect.

**Troubleshooting:** If no alerts appear at the configured SNMP or Syslog manager after a reasonable amount of time, it is likely that there is an error in the syntax of the <appender> entry in log4j-cloud.xml. Check to be sure that the format and settings are correct.

Deleting an SNMP or Syslog Manager

To remove an external SNMP manager or Syslog manager so that it no longer receives alerts from CloudStack, remove the corresponding entry from the file ``/etc/cloudstack/management/log4j-cloud.xml``.

Customizing the Network Domain Name

The root administrator can optionally assign a custom DNS suffix at the level of a network, account, domain, zone, or the entire CloudStack installation, and a domain administrator can do so within their own domain. To specify a custom domain name and put it into effect, follow these steps.

  1. Set the DNS suffix at the desired scope.

    • At the network level, the DNS suffix can be assigned through the UI when creating a new network, as described in "Adding an Additional Guest Network", or with the updateNetwork command in the CloudStack API.

    • At the account, domain, or zone level, the DNS suffix can be assigned with the appropriate CloudStack API commands: createAccount, editAccount, createDomain, editDomain, createZone, or editZone.

    • At the global level, use the configuration parameter guest.domain.suffix. You can also use the CloudStack API command updateConfiguration. After modifying this global configuration, restart the Management Server to put the new setting into effect.

  2. To make the new DNS suffix take effect for an existing network, call the CloudStack API command updateNetwork. This step is not necessary when the DNS suffix was specified while creating a new network.

The source of the network domain that is used depends on the following rules.

  • For all networks, if a network domain is specified as part of a network's own configuration, that value is used.

  • For an account-specific network, the network domain specified for the account is used. If none is specified, the system looks for a value in the domain, zone, and global configuration, in that order.

  • For a domain-specific network, the network domain specified for the domain is used. If none is specified, the system looks for a value in the zone and global configuration, in that order.

  • For a zone-specific network, the network domain specified for the zone is used. If none is specified, the system looks for a value in the global configuration.
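The lookup order described in the rules above amounts to a simple fallback chain. The sketch below models it with an ordinary dictionary; the scope names and helper are illustrative only, not a CloudStack API:

```python
def resolve_network_domain(network_type, config):
    """Return the first DNS suffix found, following the documented
    fallback order.  config maps scope name -> suffix (or None);
    network_type is 'account', 'domain', or 'zone'."""
    order = {
        "account": ["network", "account", "domain", "zone", "global"],
        "domain":  ["network", "domain", "zone", "global"],
        "zone":    ["network", "zone", "global"],
    }
    for scope in order[network_type]:
        if config.get(scope):
            return config[scope]
    return None

# An account-specific network with no network- or account-level suffix
# falls through to the domain-level value.
print(resolve_network_domain(
    "account",
    {"network": None, "account": None, "domain": "comp1.example.com"}))
```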

Stopping and Restarting the Management Server

The root administrator will need to stop and restart the Management Server from time to time.

For example, after changing a global configuration parameter, a restart is required. If you have multiple Management Server nodes, restart all of them to put the new parameter value into effect consistently throughout the cloud.

To stop the Management Server, run the following command at the operating system prompt on the Management Server node:

# service cloudstack-management stop

To start the Management Server:

# service cloudstack-management start

System Reliability and Availability

System Reliability and High Availability

HA for Management Server

The CloudStack Management Server can be deployed in a multi-node configuration so that it is not susceptible to the failure of a single server. The Management Server itself (as distinct from the MySQL database) is stateless and may be placed behind a load balancer.

Normal operation of hosts is not impacted by an outage of all Management Servers. All guest VMs will continue to work.

When the Management Server is down, no new VMs can be created, and the end user and admin UI, API, dynamic load distribution, and HA will cease to work.

Management Server Load Balancing

CloudStack can use a load balancer to provide a virtual IP for multiple Management Servers. The administrator is responsible for creating the load balancer rules for the Management Servers. The application requires persistence or stickiness across multiple sessions. The following table lists the ports that require load balancing and whether they have a persistence requirement.

Even if persistence is not required, enabling it is permitted.

Source Port          Destination Port            Protocol         Persistence Required?
80 or 443            8080 (or 20400 with AJP)    HTTP (or AJP)    Yes
8250                 8250                        TCP              Yes
8096                 8096                        HTTP             No

In addition to the above settings, the administrator is responsible for setting the 'host' global configuration value from the Management Server IP address to the load balancer virtual IP address. If the 'host' value is not set to the VIP for the 8250 port and one of your Management Servers crashes, the UI is still available but the system VMs will not be able to contact the Management Server.

HA-Enabled Virtual Machines

The user can specify a virtual machine as HA-enabled. By default, all virtual router VMs and Elastic Load Balancing VMs are automatically configured as HA-enabled. When an HA-enabled VM crashes, CloudStack detects the crash and restarts the VM automatically within the same Availability Zone. HA is never performed across different Availability Zones. CloudStack has a conservative policy towards restarting VMs and ensures that there will never be two instances of the same VM running at the same time. The Management Server attempts to start the VM on another host in the same cluster.

HA features work with iSCSI or NFS primary storage. HA with local storage is not supported.

HA for Hosts

The user can specify a virtual machine as HA-enabled. By default, all virtual router VMs and Elastic Load Balancing VMs are automatically configured as HA-enabled. When an HA-enabled VM crashes, CloudStack detects the crash and restarts the VM automatically within the same Availability Zone. HA is never performed across different Availability Zones. CloudStack has a conservative policy towards restarting VMs and ensures that there will never be two instances of the same VM running at the same time. The Management Server attempts to start the VM on another host in the same cluster.

HA features work with iSCSI or NFS primary storage. HA with local storage is not supported.

Dedicated HA Hosts

One or more hosts can be designated for use only by HA-enabled VMs that are restarted due to a host failure. Setting up a pool of such dedicated HA hosts as the recovery destination for all HA-enabled VMs is useful to:

  • Make it easier to determine which VMs have been restarted as part of the CloudStack high-availability function. If a VM is running on a dedicated HA host, then it must be an HA-enabled VM whose original host failed. (With one exception: it is possible for an administrator to manually migrate any VM to a dedicated HA host.)

  • Keep HA-enabled VMs from restarting on hosts which may be reserved for other purposes.

The dedicated HA option is set through a special host tag when the host is created. To allow the administrator to dedicate hosts to only HA-enabled VMs, set the global configuration variable ha.tag to the desired tag (for example, "ha_host"), and restart the Management Server. Enter the value in the Host Tags field when adding the host(s) that you want to dedicate to HA-enabled VMs.

Note

If you set ha.tag, be sure to actually use that tag on at least one host in your cloud. If the tag specified in ha.tag is not set for any host in the cloud, the HA-enabled VMs will fail to restart after a crash.

Primary Storage Outage and Data Loss

When a primary storage outage occurs, the hypervisor immediately stops all VMs stored on that storage device. Guests that are marked for HA will be restarted as soon as practical when the primary storage comes back online. With NFS, the hypervisor may allow the virtual machines to continue running depending on the nature of the issue. For example, an NFS hang will cause the guest VMs to be suspended until storage connectivity is restored. Primary storage is not designed to be backed up. Individual volumes in primary storage can be backed up using snapshots.

Secondary Storage Outage and Data Loss

For a Zone that has only one secondary storage server, a secondary storage outage will have feature-level impact to the system but will not impact running guest VMs. It may become impossible for users to select a template for a VM. Users may also not be able to save snapshots or examine/restore saved snapshots. These features will automatically be available when the secondary storage comes back online.

Secondary storage data loss will impact recently added user data, including templates, snapshots, and ISO images. Secondary storage should be backed up periodically. Multiple secondary storage servers can be provisioned within each zone to increase the scalability of the system.

Database High Availability

To help ensure high availability of the databases that store the internal data for CloudStack, you can set up database replication. This covers both the main CloudStack database and the Usage database. Replication is achieved using the MySQL connector parameters and two-way replication. Tested with MySQL 5.1 and 5.5.

How to Set Up Database Replication

Database replication in CloudStack is provided using the MySQL replication capabilities. The steps to set up replication can be found in the MySQL documentation (links are provided below). It is suggested that you set up two-way replication, which involves two database nodes. In this case, for example, you might have node1 and node2.

You can also set up chain replication, which involves more than two nodes. In this case, you would first set up two-way replication with node1 and node2. Next, set up one-way replication from node2 to node3. Then set up one-way replication from node3 to node4, and so on for all the additional nodes.
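The chain topology described above (two-way replication between the first pair, then one-way links down the chain) can be written out as an edge list to sanity-check a planned layout. The node names and helper below are placeholders for illustration:

```python
def replication_links(nodes):
    """Return (source, target) replication links: two-way between the first
    two nodes, then a one-way link from each subsequent node to the next."""
    links = [(nodes[0], nodes[1]), (nodes[1], nodes[0])]  # two-way pair
    for a, b in zip(nodes[1:], nodes[2:]):                # one-way chain
        links.append((a, b))
    return links

print(replication_links(["node1", "node2", "node3", "node4"]))
# [('node1', 'node2'), ('node2', 'node1'), ('node2', 'node3'), ('node3', 'node4')]
```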

References:

Configuring Database High Availability

To control the database high availability feature, use the following configuration settings in the file /etc/cloudstack/management/db.properties.

Required Settings

Be sure you have set the following in db.properties:

  • db.ha.enabled: set to true if you want to use the replication feature.

    Example: db.ha.enabled=true

  • db.cloud.slaves: set to a comma-delimited set of slave hosts for the cloud database. This is the list of nodes set up with replication. The master node is not in the list, since it is already mentioned elsewhere in the properties file.

    Example: db.cloud.slaves=node2,node3,node4

  • db.usage.slaves: set to a comma-delimited set of slave hosts for the usage database. This is the list of nodes set up with replication. The master node is not in the list, since it is already mentioned elsewhere in the properties file.

    Example: db.usage.slaves=node2,node3,node4

Optional Settings

The following settings must be present in db.properties, but you are not required to change the default values unless you wish to do so for tuning purposes:

  • db.cloud.secondsBeforeRetryMaster: the number of seconds the MySQL connector should wait before trying again to connect to the master after the master went down. Default is 1 hour. The retry might happen sooner if db.cloud.queriesBeforeRetryMaster is reached first.

    Example: db.cloud.secondsBeforeRetryMaster=3600

  • db.cloud.queriesBeforeRetryMaster: the minimum number of queries to be sent to the database before trying again to connect to the master after the master went down. Default is 5000. The retry might happen sooner if db.cloud.secondsBeforeRetryMaster is reached first.

    Example: db.cloud.queriesBeforeRetryMaster=5000

  • db.cloud.initialTimeout: initial time the MySQL connector should wait before trying again to connect to the master. Default is 3600 seconds.

    Example: db.cloud.initialTimeout=3600
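The interplay of the two retry thresholds, where whichever limit is reached first triggers the reconnect attempt, can be sketched as a simple predicate. The function below is illustrative logic only, not CloudStack code; its defaults mirror the db.cloud.* defaults listed above:

```python
def should_retry_master(seconds_down, queries_since_down,
                        seconds_before_retry=3600, queries_before_retry=5000):
    """Retry the master as soon as EITHER threshold is reached,
    mirroring db.cloud.secondsBeforeRetryMaster and
    db.cloud.queriesBeforeRetryMaster (illustrative logic only)."""
    return (seconds_down >= seconds_before_retry or
            queries_since_down >= queries_before_retry)

assert should_retry_master(10, 5000)    # query limit reached first
assert should_retry_master(3600, 12)    # time limit reached first
assert not should_retry_master(10, 12)  # neither threshold reached yet
```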

Limitations on Database High Availability

The following limitations exist in the current implementation of this feature.

  • Slave hosts cannot be monitored through CloudStack. You will need to have a separate means of monitoring.

  • Events from the database side are not integrated with the CloudStack Management Server events system.

  • You must periodically perform manual clean-up of bin log files generated by replication on the database nodes. If you do not clean up the log files, the disk can become full.

Tuning

Tuning

This section provides tips on how to improve the performance of your cloud.

Performance Monitoring

Host and guest performance monitoring is available to end users and administrators. This allows the user to monitor their utilization of resources and determine when it is appropriate to choose a more powerful service offering or larger disk.

Increase Management Server Maximum Memory

If the Management Server is subject to high demand, the default maximum JVM memory allocation can be insufficient. To increase the memory:

  1. Edit the Tomcat configuration file:

    /etc/cloudstack/management/tomcat6.conf
    
  2. Change the command-line parameter -XmxNNNm to a higher value of N.

    For example, if the current value is -Xmx128m, change it to -Xmx1024m or higher.

  3. To put the new setting into effect, restart the Management Server.

    # service cloudstack-management restart
    

For more information about memory issues, see "FAQ: Memory" at the Tomcat Wiki.

Set Database Buffer Pool Size

It is important to provide enough memory space for the MySQL database to cache data and indexes:

  1. Edit the MySQL configuration file:

    /etc/my.cnf
    
  2. Insert the following line in the [mysqld] section, below the datadir line. Use a value that is appropriate for your situation. We recommend setting the buffer pool to 40% of RAM if MySQL is on the same server as the Management Server, or 70% of RAM if MySQL has a dedicated server. The following example assumes a dedicated server with 1024M of RAM.

    innodb_buffer_pool_size=700M
    
  3. Restart the MySQL service.

    # service mysqld restart
    

For more information about the buffer pool, see "The InnoDB Buffer Pool" in the `MySQL Reference Manual <http://dev.mysql.com/doc/refman/5.5/en/innodb-buffer-pool.html>`_.
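The 40%/70% guideline above reduces to a one-line calculation. The helper below is illustrative only; it reproduces roughly the 700M figure used in the example for a dedicated 1024M server:

```python
def buffer_pool_size_mb(ram_mb, dedicated_db_server):
    """Recommended innodb_buffer_pool_size: 70% of RAM on a dedicated
    MySQL server, 40% when MySQL shares the Management Server host."""
    fraction = 0.7 if dedicated_db_server else 0.4
    return int(ram_mb * fraction)

print(buffer_pool_size_mb(1024, dedicated_db_server=True))   # 716
print(buffer_pool_size_mb(1024, dedicated_db_server=False))  # 409
```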

Set and Monitor Total VM Limits per Host

The administrator should monitor the total number of VM instances in each cluster, and disable allocation to the cluster if the total is approaching the maximum that the hypervisor can handle. Be sure to leave a safety margin to allow for the possibility of one or more hosts failing, which would increase the VM load on the other hosts as the VMs are redeployed. Consult the documentation for your chosen hypervisor to find the maximum permitted number of VMs per host, then use CloudStack global configuration settings to set this as the default limit. Monitor the VM activity in each cluster and keep the total number of VMs below a safe level that allows for the occasional host failure. For example, if there are N hosts in the cluster, and you want to allow for one host in the cluster to be down at any given time, the total number of VM instances you can permit in the cluster is at most (N-1) * (per-host-limit). Once a cluster reaches this number of VMs, use the CloudStack UI to disable allocation of more VMs to the cluster.
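The (N-1) * (per-host-limit) rule of thumb is simple enough to check in code; the helper and figures below are illustrative:

```python
def max_cluster_vms(hosts, per_host_limit, tolerated_host_failures=1):
    """Maximum VMs to allow in a cluster while still being able to absorb
    the given number of simultaneous host failures."""
    usable_hosts = hosts - tolerated_host_failures
    return max(usable_hosts, 0) * per_host_limit

# 8 hosts, 50 VMs each, allowing one host to be down: cap the cluster at 350.
print(max_cluster_vms(8, 50))  # 350
```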

Configure XenServer dom0 Memory

Configure the XenServer dom0 settings to allocate more memory to dom0. This can enable XenServer to handle larger numbers of virtual machines. We recommend 2940 MB of RAM for XenServer dom0. For instructions on how to do this, see the Citrix Knowledgebase Article. The article refers to XenServer 5.6, but the same information applies to XenServer 6.0.

Events and Troubleshooting

Event Notification

An event is essentially a significant or meaningful change in the state of both virtual and physical resources associated with a cloud environment. Events are used by monitoring systems, usage and billing systems, or any other event-driven workflow systems to discern a pattern and make the right business decision. In CloudStack, an event could be a state change of virtual or physical resources, an action performed by a user (action events), or policy-based events (alerts).

Event Logs

There are two types of events logged in the CloudStack Event Log. Standard events log the success or failure of an event and can be used to identify jobs or processes that have failed. There are also long-running job events. Events for asynchronous jobs log when a job is scheduled, when it starts, and when it completes. Long-running synchronous and asynchronous event logs can be used to gain more information on the status of a pending job or can be used to identify a job that is hanging or has not started. The following sections provide more information on these events.

Notification

The event notification framework provides a means for the Management Server components to publish and subscribe to CloudStack events. Event notification is achieved by implementing the concept of an event bus abstraction in the Management Server. The event bus allows CloudStack components and extension plug-ins to subscribe to events by using the Advanced Message Queuing Protocol (AMQP) client. In CloudStack, a default implementation of the event bus is provided as a plug-in that uses the RabbitMQ AMQP client. The AMQP client pushes the published events to a compatible AMQP server. Therefore, all the CloudStack events are published to an exchange in the AMQP server.

Additionally, a new event for the state change of resources is introduced as part of the event notification framework. Every resource, such as user VM, volume, NIC, network, public IP, snapshot, and template, is associated with a state machine and generates events as part of its state change. That implies that a change in the state of a resource results in a state change event, and the event is published on the corresponding state change topic. All the CloudStack events (alerts, action events, usage events) and the additional category of resource state change events are published on to the event bus.

Use Cases

The following are some of the use cases:

  • Usage or Billing Engines: A third-party cloud usage solution can implement a plug-in that connects to CloudStack, subscribes to CloudStack events, and generates usage data. The usage data is then consumed by their own usage software.

  • AMQP plug-ins can place all the events on a message queue, and then an AMQP message broker can provide topic-based notification to the subscribers.

  • A publish-and-subscribe notification service can be implemented as a pluggable service in CloudStack that can provide a rich set of APIs for event notification, such as topic-based subscription and notification. Additionally, the pluggable service can deal with multi-tenancy, authentication, and authorization issues.

Configuration

As a CloudStack administrator, perform the following one-time configuration to enable the event notification framework. At run time, no changes can control the behavior.

  1. Create the folder /etc/cloudstack/management/META-INF/cloudstack/core

  2. Inside that folder, open spring-event-bus-context.xml.

  3. Define a bean named eventNotificationBus as follows:

    • name: specify a name for the bean.

    • server: the name or the IP address of the RabbitMQ AMQP server.

    • port: the port on which the RabbitMQ server is running.

    • username: the username associated with the account to access the RabbitMQ server.

    • password: the password associated with the username of the account to access the RabbitMQ server.

    • exchange: the exchange name on the RabbitMQ server where CloudStack events are published.

      A sample bean is given below:

      <beans xmlns="http://www.springframework.org/schema/beans"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xmlns:context="http://www.springframework.org/schema/context"
      xmlns:aop="http://www.springframework.org/schema/aop"
      xsi:schemaLocation="http://www.springframework.org/schema/beans
      http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
      http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop-3.0.xsd
      http://www.springframework.org/schema/context
      http://www.springframework.org/schema/context/spring-context-3.0.xsd">
         <bean id="eventNotificationBus" class="org.apache.cloudstack.mom.rabbitmq.RabbitMQEventBus">
            <property name="name" value="eventNotificationBus"/>
            <property name="server" value="127.0.0.1"/>
            <property name="port" value="5672"/>
            <property name="username" value="guest"/>
            <property name="password" value="guest"/>
            <property name="exchange" value="cloudstack-events"/>
         </bean>
      </beans>
      

      The eventNotificationBus bean represents the org.apache.cloudstack.mom.rabbitmq.RabbitMQEventBus class.

      If you want to use encrypted values for the username and password, you have to include a bean to pass those as variables from a credentials file.

      A sample is given below

      <beans xmlns="http://www.springframework.org/schema/beans"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xmlns:context="http://www.springframework.org/schema/context"
             xmlns:aop="http://www.springframework.org/schema/aop"
             xsi:schemaLocation="http://www.springframework.org/schema/beans
              http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
              http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop-3.0.xsd
              http://www.springframework.org/schema/context
              http://www.springframework.org/schema/context/spring-context-3.0.xsd"
      >
      
         <bean id="eventNotificationBus" class="org.apache.cloudstack.mom.rabbitmq.RabbitMQEventBus">
            <property name="name" value="eventNotificationBus"/>
            <property name="server" value="127.0.0.1"/>
            <property name="port" value="5672"/>
            <property name="username" value="${username}"/>
            <property name="password" value="${password}"/>
            <property name="exchange" value="cloudstack-events"/>
         </bean>
      
         <bean id="environmentVariablesConfiguration" class="org.jasypt.encryption.pbe.config.EnvironmentStringPBEConfig">
            <property name="algorithm" value="PBEWithMD5AndDES" />
            <property name="passwordEnvName" value="APP_ENCRYPTION_PASSWORD" />
         </bean>
      
         <bean id="configurationEncryptor" class="org.jasypt.encryption.pbe.StandardPBEStringEncryptor">
            <property name="config" ref="environmentVariablesConfiguration" />
         </bean>
      
         <bean id="propertyConfigurer" class="org.jasypt.spring3.properties.EncryptablePropertyPlaceholderConfigurer">
            <constructor-arg ref="configurationEncryptor" />
            <property name="location" value="classpath:/cred.properties" />
         </bean>
      </beans>
      

      Create a new file in the same folder called cred.properties and specify the values for username and password as jasypt-encrypted strings.

      Sample, with guest as the value for both fields:

      username=nh2XrM7jWHMG4VQK18iiBQ==
      password=nh2XrM7jWHMG4VQK18iiBQ==
      
  4. Restart the Management Server.

Standard Events

The events log records three types of standard events.

  • INFO: This event is generated when an operation has been successfully performed.

  • WARN: This event is generated in the following circumstances:

    • When a network is disconnected while monitoring a template download.

    • When a template download is abandoned.

    • When an issue on the storage server causes the volumes to fail over to the mirror storage server.

  • ERROR: This event is generated when an operation has not been successfully performed.

Long Running Job Events

The events log records three types of standard events.

  • INFO: This event is generated when an operation has been successfully performed.

  • WARN: This event is generated in the following circumstances:

    • When a network is disconnected while monitoring a template download.

    • When a template download is abandoned.

    • When an issue on the storage server causes the volumes to fail over to the mirror storage server.

  • ERROR: This event is generated when an operation has not been successfully performed.

Event Log Queries

Database logs can be queried from the user interface. The list of events captured by the system includes:

  • Virtual machine creation, deletion, and on-going management operations

  • Virtual router creation, deletion, and on-going management operations

  • Template creation and deletion

  • Network/load balancer rules creation and deletion

  • Storage volume creation and deletion

  • User login and logout

Deleting and Archiving Events and Alerts

CloudStack provides you the ability to delete or archive existing alerts and events that you no longer want to keep. You can regularly delete or archive any alerts or events that you cannot, or do not want to, resolve from the database.

You can delete or archive individual alerts or events either directly by using the Quickview or by using the Details page. If you want to delete multiple alerts or events at the same time, you can use the respective context menu. You can delete alerts or events by category for a time period. For example, you can select categories such as **USER.LOGOUT**, **VM.DESTROY**, **VM.AG.UPDATE**, **CONFIGURATION.VALUE.EDIT**, and so on. You can also view the number of events or alerts archived or deleted.

In order to support the delete or archive of alerts, the following global parameters have been added:

  • alert.purge.delay: The alerts older than the specified number of days are purged. Set the value to 0 to never purge alerts automatically.
  • alert.purge.interval: The interval in seconds to wait before running the alert purge thread. The default is 86400 seconds (one day).
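The effect of alert.purge.delay can be expressed as a cutoff date: on each run of the purge thread, alerts older than the cutoff are removed. The helper below is an illustrative sketch, not CloudStack code:

```python
from datetime import datetime, timedelta

def purge_cutoff(now, purge_delay_days):
    """Alerts older than this cutoff are purged; a delay of 0 disables
    automatic purging (returns None), matching alert.purge.delay above."""
    if purge_delay_days == 0:
        return None
    return now - timedelta(days=purge_delay_days)

now = datetime(2014, 3, 4, 10, 0)
print(purge_cutoff(now, 30))  # 2014-02-02 10:00:00
print(purge_cutoff(now, 0))   # None
```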

Note

Archived alerts or events cannot be viewed in the UI or by using the API. They are maintained in the database for auditing or compliance purposes.

Permissions

Consider the following:

  • The root administrator can delete or archive one or multiple alerts or events.

  • The domain administrators or end users can delete or archive one or multiple events.

Procedure
  1. Log in as administrator to the CloudStack UI.

  2. In the left navigation, click Events.

  3. Perform either of the following:

    • To archive events, click Archive Events, and specify event type and date.

    • To delete events, click Delete Events, and specify event type and date.

  4. Click OK.

Troubleshooting

Working with Server Logs

The CloudStack Management Server logs all web site, middle tier, and database activities for diagnostics purposes in /var/log/cloudstack/management/. The CloudStack logs a variety of error messages. We recommend this command to find the problematic output in the Management Server log:

Note

When copying and pasting a command, be sure the command has pasted as a single line before executing. Some document viewers may introduce unwanted line breaks in copied text.

grep -i -E 'exception|unable|fail|invalid|leak|warn|error' /var/log/cloudstack/management/management-server.log

The CloudStack processes requests with a Job ID. If you find an error in the logs and you want to trace back the problem, you can grep for this job ID in the Management Server log. For example, suppose that you find the following ERROR message:

2010-10-04 13:49:32,595 ERROR [cloud.vm.UserVmManagerImpl] (Job-Executor-11:job-1076) Unable to find any host for [User|i-8-42-VM-untagged]

Note that the job ID is 1076. You can track back the events relating to job 1076 with the following grep:

grep "job-1076)" management-server.log
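The same job-ID extraction can be done programmatically when post-processing logs; the regex below matches the `(Job-Executor-11:job-1076)` thread tag shown in the sample line, and the helper name is my own:

```python
import re

def job_id(log_line):
    """Extract the async job ID from a Management Server log line, or None."""
    m = re.search(r":job-(\d+)\)", log_line)
    return m.group(1) if m else None

line = ("2010-10-04 13:49:32,595 ERROR [cloud.vm.UserVmManagerImpl] "
        "(Job-Executor-11:job-1076) Unable to find any host for "
        "[User|i-8-42-VM-untagged]")
print(job_id(line))  # 1076
```

Collecting the IDs of all failed jobs this way makes it easy to grep each job's full history afterwards.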

The CloudStack Agent Server logs its activities in `/var/log/cloudstack/agent/`.

Data Loss on Exported Primary Storage

Symptom

Loss of existing data on primary storage which has been exposed as a Linux NFS server export on an iSCSI volume.

Cause

It is possible that a client from outside the intended pool has mounted the storage. When this occurs, the LVM is wiped and all data in the volume is lost.

Solution

When setting up LUN exports, restrict the range of IP addresses that are allowed access by specifying a subnet mask. For example:

echo "/export 192.168.1.0/24(rw,async,no_root_squash,no_subtree_check)" > /etc/exports

Adjust the above command to suit your deployment needs.
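The effect of the /24 restriction can be verified with the standard-library `ipaddress` module; the helper below is illustrative and simply checks whether a would-be NFS client falls inside the exported subnet:

```python
import ipaddress

def client_allowed(client_ip, export_subnet="192.168.1.0/24"):
    """True if the NFS client falls inside the exported subnet, so a
    stray client outside the pool would be refused the mount."""
    return ipaddress.ip_address(client_ip) in ipaddress.ip_network(export_subnet)

print(client_allowed("192.168.1.42"))  # True  -- inside the pool
print(client_allowed("10.0.0.5"))      # False -- outside client is rejected
```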

More Information

See the export procedure in the "Secondary Storage" section of the CloudStack Installation Guide.

Recovering a Lost Virtual Router

Symptom

A virtual router is running, but the host is disconnected. A virtual router no longer functions as expected.

Cause

The virtual router is lost or down.

Solution

If you are sure that a virtual router is down forever, or no longer functions, destroy it. You must create one afresh while keeping the backup router up and running (it is assumed this is in a redundant router setup):

  • Force stop the router. Use the stopRouter API with the forced=true parameter to do so.

  • Before you continue with destroying this router, ensure that the backup router is running. Otherwise the user network connection will be lost.

  • Destroy the router by using the destroyRouter API.

Recreate the missing router by using the restartNetwork API with the cleanup=false parameter. For more information about redundant router setup, see "Creating a New Network Offering".

For more information about the API syntax, see the API Reference at `http://cloudstack.apache.org/docs/api/ <http://cloudstack.apache.org/docs/api/>`_.

Maintenance Mode Not Working on vCenter

Symptom

A host was placed in maintenance mode, but still appears active in vCenter.

Cause

The CloudStack administrator UI was used to place the host in scheduled maintenance mode. This mode is separate from vCenter's maintenance mode.

Solution

Use vCenter to place the host in maintenance mode.

Unable to Deploy VMs from Uploaded vSphere Template

Symptom

When attempting to create a VM, the VM fails to deploy.

Cause

If the template was created by uploading an OVA file that was created using vSphere Client, it is possible the OVA contained an ISO image. If it does, the deployment of VMs from the template will fail.

Solution

Remove the ISO and re-upload the template.

Unable to Power On Virtual Machine on VMware

Symptom

The virtual machine does not power on. You might see errors like:

  • Unable to open Swap File

  • Unable to access a file since it is locked

  • Unable to access the virtual machine configuration

Cause

This is a known issue on VMware machines. ESX hosts lock certain critical virtual machine files and file systems to prevent concurrent changes. Sometimes the files are not unlocked when the virtual machine is powered off. When a virtual machine attempts to power on, it cannot access these critical files, and the virtual machine is unable to power on.

Solution

See the following:

VMware Knowledge Base Article

Load Balancer Rules Fail After Changing Network Offering

Symptom

After changing the network offering on a network, load balancer rules stop working.

Cause

Load balancing rules were created while using a network service offering that includes an external load balancer device such as NetScaler, and later the network service offering was changed to one that uses the CloudStack virtual router.

Solution

Create a firewall rule on the virtual router for each of your existing load balancing rules so that they continue to function.

Troubleshooting Internet Traffic

Below are a few troubleshooting steps to check what is going wrong with your network...

Trouble Shooting Steps

  1. The switches have to be configured correctly to pass VLAN traffic. You can verify whether VLAN traffic is working by bringing up tagged interfaces on the hosts and pinging between them, as below.

    On *host1 (kvm1)*

    kvm1 ~$ vconfig add eth0 64
    kvm1 ~$ ifconfig eth0.64 1.2.3.4 netmask 255.255.255.0 up
    kvm1 ~$ ping 1.2.3.5
    

    On *host2 (kvm2)*

    kvm2 ~$ vconfig add eth0 64
    kvm2 ~$ ifconfig eth0.64 1.2.3.5 netmask 255.255.255.0 up
    kvm2 ~$ ping 1.2.3.4
    

    If the pings do not work, run *tcpdump(8)* all over the place to check who is gobbling up the packets. Ultimately, if the switches are not configured correctly, CloudStack networking will not work, so fix the physical networking issues before you proceed to the next steps.

  2. Ensure that Traffic Labels are set for the Zone.

    Traffic labels need to be set for all hypervisor types, including XenServer, KVM, and VMware. You can configure traffic labels when you create a new zone from the *Add Zone Wizard*.

    _images/networking-zone-traffic-labels.png

    On an existing zone, you can modify the traffic labels by going to Infrastructure, Zones, Physical Network tab.

    _images/networking-infra-traffic-labels.png

    List the traffic labels in use with *CloudMonkey*:

    acs-manager ~$ cloudmonkey list traffictypes physicalnetworkid=41cb7ff6-8eb2-4630-b577-1da25e0e1145
    count = 4
    traffictype:
    id = cd0915fe-a660-4a82-9df7-34aebf90003e
    kvmnetworklabel = cloudbr0
    physicalnetworkid = 41cb7ff6-8eb2-4630-b577-1da25e0e1145
    traffictype = Guest
    xennetworklabel = MGMT
    ========================================================
    id = f5524b8f-6605-41e4-a982-81a356b2a196
    kvmnetworklabel = cloudbr0
    physicalnetworkid = 41cb7ff6-8eb2-4630-b577-1da25e0e1145
    traffictype = Management
    xennetworklabel = MGMT
    ========================================================
    id = 266bad0e-7b68-4242-b3ad-f59739346cfd
    kvmnetworklabel = cloudbr0
    physicalnetworkid = 41cb7ff6-8eb2-4630-b577-1da25e0e1145
    traffictype = Public
    xennetworklabel = MGMT
    ========================================================
    id = a2baad4f-7ce7-45a8-9caf-a0b9240adf04
    kvmnetworklabel = cloudbr0
    physicalnetworkid = 41cb7ff6-8eb2-4630-b577-1da25e0e1145
    traffictype = Storage
    xennetworklabel = MGMT
    =========================================================
    
  3. KVM traffic labels are required to be named *"cloudbr0"*, *"cloudbr2"*, *"cloudbrN"* etc, and the corresponding bridges must exist on the KVM hosts. If you create labels/bridges with any other name, CloudStack (at least earlier versions did) will ignore them. CloudStack does not create the physical bridges on the KVM hosts; you need to create them **before** adding the host to CloudStack.

    kvm1 ~$ ifconfig cloudbr0
    cloudbr0  Link encap:Ethernet  HWaddr 00:0C:29:EF:7D:78
       inet addr:192.168.44.22  Bcast:192.168.44.255  Mask:255.255.255.0
       inet6 addr: fe80::20c:29ff:feef:7d78/64 Scope:Link
       UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
       RX packets:92435 errors:0 dropped:0 overruns:0 frame:0
       TX packets:50596 errors:0 dropped:0 overruns:0 carrier:0
       collisions:0 txqueuelen:0
       RX bytes:94985932 (90.5 MiB)  TX bytes:61635793 (58.7 MiB)
    
  4. The Virtual Router, SSVM, and CPVM *public* interfaces are bridged to a physical interface on the host. In the example below, *cloudbr0* is the public interface, and CloudStack has created the virtual interfaces bridge. This virtual-interface-to-physical-interface mapping is done automatically by CloudStack using the traffic label settings for the Zone. If you have provided the correct settings and still cannot get the network to work, check the switching layer before debugging any further. You can verify traffic using tcpdump on the virtual, physical, and bridge interfaces.

    kvm-host1 ~$ brctl show
    bridge name  bridge id           STP enabled interfaces
    breth0-64    8000.000c29ef7d78   no          eth0.64
                                                 vnet2
    cloud0       8000.fe00a9fe0219   no          vnet0
    cloudbr0     8000.000c29ef7d78   no          eth0
                                                 vnet1
                                                 vnet3
    virbr0       8000.5254008e321a   yes         virbr0-nic
    
    xenserver1 ~$ brctl show
    bridge name  bridge id           STP enabled interfaces
    xapi0    0000.e2b76d0a1149       no          vif1.0
    xenbr0   0000.000c299b54dc       no          eth0
                                                xapi1
                                                vif1.1
                                                vif1.2
    
  5. Pre-create labels on the XenServer hosts. Similar to the KVM bridge setup, traffic labels must also be pre-created on XenServer hosts before they are added to CloudStack.

    xenserver1 ~$ xe network-list
    uuid ( RO)                : aaa-bbb-ccc-ddd
              name-label ( RW): MGMT
        name-description ( RW):
                  bridge ( RO): xenbr0
    
  6. The SSVM and CPVM instances get their default route on the public network, and their public IPs should be pingable directly from the network. Please note that these tests only work if your switches and traffic labels are configured correctly for your environment. If your SSVM/CPVM cannot reach the Internet, it is very unlikely that the Virtual Router (VR) can connect to the Internet either, suggesting a switching problem or a wrongly assigned traffic label. Fix the SSVM/CPVM issues before debugging VR issues.

    root@s-1-VM:~# ping -c 3 google.com
    PING google.com (74.125.236.164): 56 data bytes
    64 bytes from 74.125.236.164: icmp_seq=0 ttl=55 time=26.932 ms
    64 bytes from 74.125.236.164: icmp_seq=1 ttl=55 time=29.156 ms
    64 bytes from 74.125.236.164: icmp_seq=2 ttl=55 time=25.000 ms
    --- google.com ping statistics ---
    3 packets transmitted, 3 packets received, 0% packet loss
    round-trip min/avg/max/stddev = 25.000/27.029/29.156/1.698 ms
    
    root@v-2-VM:~# ping -c 3 google.com
    PING google.com (74.125.236.164): 56 data bytes
    64 bytes from 74.125.236.164: icmp_seq=0 ttl=55 time=32.125 ms
    64 bytes from 74.125.236.164: icmp_seq=1 ttl=55 time=26.324 ms
    64 bytes from 74.125.236.164: icmp_seq=2 ttl=55 time=37.001 ms
    --- google.com ping statistics ---
    3 packets transmitted, 3 packets received, 0% packet loss
    round-trip min/avg/max/stddev = 26.324/31.817/37.001/4.364 ms
    
  7. The Virtual Router (VR) should also be able to reach the Internet without any Egress rules being in place. Egress rules only control forwarded traffic, not traffic that originates on the VR itself.

    root@r-4-VM:~# ping -c 3 google.com
    PING google.com (74.125.236.164): 56 data bytes
    64 bytes from 74.125.236.164: icmp_seq=0 ttl=55 time=28.098 ms
    64 bytes from 74.125.236.164: icmp_seq=1 ttl=55 time=34.785 ms
    64 bytes from 74.125.236.164: icmp_seq=2 ttl=55 time=69.179 ms
    --- google.com ping statistics ---
    3 packets transmitted, 3 packets received, 0% packet loss
    round-trip min/avg/max/stddev = 28.098/44.021/69.179/17.998 ms
    
  8. However, the Virtual Router's (VR) Source NAT Public IP address **WON'T** be reachable until appropriate Ingress rules are in place. You can add Ingress rules under the Network, Guest Network, IP Address, Firewall setting page.

    _images/networking-ingress-rule.png
  9. VM instances by default are not able to access the Internet. Add Egress rules to permit the traffic.

    _images/networking-egress-rule.png
  10. Some users have reported that flushing the IPTables rules (or changing the routes) on the SSVM, CPVM, or the Virtual Router makes the Internet work. This is not expected behaviour and suggests that your networking setup is wrong. No IPtables/route changes are required on the SSVM, CPVM, or the VR. Go back and double-check all your settings.

In a vast majority of cases, the problem has turned out to be at the switching layer where the L3 switches were configured incorrectly.

This section was contributed by Shanker Balan and was originally published on `Shapeblue's blog <http://shankerbalan.net/blog/internet-not-working-on-cloudstack-vms/>`_.