Claw - Cloudify Almighty Wrapper¶
claw can help you take the pain out of (at least) some bootstrapping woes. Along the way, claw can help you with other Cloudify development-related aspects.
claw uses concepts derived from the system tests framework, mainly handler configurations, which are used to describe the different environments you may bootstrap on. It tries to do this in a way that allows you to have many different configurations with as little repetition as possible.
claw was initially conceived to ease the day-to-day Cloudify development pains of a single developer. It hopes it can do the same for more than one developer.
Using claw, bootstrapping a Cloudify manager looks like this:
$ claw bootstrap openstack_datacentred1
To get started, follow the Installation page. Next, don’t miss out on
Bash Completion. Without it, working with claw
becomes somewhat
inconvenient. Next, read Bootstrap and Teardown.
The other sections are great too, but you can certainly get going without them in the beginning.
The code lives here.
Installation¶
Prerequisites¶
cloudify-system-tests should be installed in editable mode in the relevant virtualenv.
Installing The Code¶
$ pip install https://github.com/dankilman/claw/archive/master.tar.gz
Note
claw
is updated quite frequently at the moment, so you may want to
consider cloning the claw
repo, and installing it in editable mode.
This way, updates will be as easy as git pull
in the repo local
directory.
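A minimal sketch of this alternative, assuming the repository at https://github.com/dankilman/claw:
$ git clone https://github.com/dankilman/claw.git
$ cd claw
$ pip install -e .
After that, updating is just a matter of running git pull inside the cloned directory.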
Setting Up The Environment¶
Choose a location that will serve as the base directory for all claw related configuration and generated files. For example:
$ export CLAW_HOME=$HOME/claw
$ mkdir -p $CLAW_HOME
Initialize claw in the base directory. While we run init in a specific directory, note that initialization is only performed once, i.e. the init configuration will be stored in ~/.claw and subsequent claw commands can be executed from any directory, not specifically the directory in which init was performed.
$ cd $CLAW_HOME
$ claw init
The init command created two files: suites.yaml and blueprints.yaml, which are covered in their own sections. It also created a directory named configurations, which is where generated manager blueprint configurations will be placed, and a directory named scripts, prepopulated with an example script.
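After a successful init, the base directory should therefore contain roughly the following (listing shown for illustration):
$ ls $CLAW_HOME
blueprints.yaml  configurations  scripts  suites.yaml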
It makes sense to have the base directory managed by some version control system (e.g. git, privately, as these configuration files will probably contain credentials, etc.).
The next sections go into detail, showing how claw may be useful in simplifying your day-to-day interactions with Cloudify.
Bash Completion¶
Working with claw can get quite verbose at times, due to long descriptive configuration and blueprint names and the many options that may be passed as arguments.
This is why properly configuring your bash environment to autocomplete claw commands is highly recommended. Really, highly recommended; don’t skip this part even if you’re feeling lazy.
Virtualenvwrapper¶
If you use virtualenvwrapper, a clean way to have autocompletion available only when running inside the Cloudify related virtualenv is to add it to the virtualenv postactivate script, like this:
$ workon VIRTUALENV_NAME
$ cdvirtualenv
$ ${EDITOR} bin/postactivate
Next, add the following to the postactivate
script:
eval "$(register-python-argcomplete claw)"
Plain Bash¶
If you don’t use virtualenvwrapper
, consider using it. It’s
great.
If you’re still not persuaded, put something like this in your ~/.bashrc
:
if command -v register-python-argcomplete > /dev/null 2>&1; then
eval "$(register-python-argcomplete claw)"
fi
Verify It Works¶
Open a new shell, activate or workon your virtualenv, type claw, hit tab twice (three times if you typed claw with no trailing space), and you should see something like this:
$ claw <TAB> <TAB>
--help cdconfiguration deploy generate-script status
-h cleanup generate init teardown
bootstrap cleanup-deployments generate-blueprint script undeploy
Bootstrap and Teardown¶
The main reason claw was written in the first place was to simplify the process of bootstrapping during development.
You may have many environments that you bootstrap on: aws based environments, openstack based environments, etc. Each environment has its own set of inputs, such as different credentials, different resource names, etc. At the same time, some properties may be shared between environments, which means duplication. You get the drift.
Similar modifications may be required in the different manager blueprints, which suffer from the same problems.
At the same time, during development, you generally want to use the tip of the master or build branch of the cloudify-manager-blueprints repository.
All these different constraints will likely cause you many headaches, sporadic failures due to some manual typing gone wrong, and similar mishaps.
claw can help you keep your sanity.
Configurations¶
Before we delve into how you would actually bootstrap using claw, we need to discuss the concept of configurations.
When CLAW_HOME was initialized during claw init, a file named suites.yaml was generated in it. The name suites.yaml may be familiar to you from cloudify-system-tests; this is not coincidental.
claw leverages the concept of handler_configurations used by the system tests framework to configure different environments. If you are not familiar with the system tests suites.yaml, that’s OK; this guide will try not to assume familiarity.
The sections in suites.yaml
are:
variables
manager_blueprint_override_templates
inputs_override_templates
handler_configuration_templates
handler_configurations
For now, we’ll focus on the handler_configurations section and ignore the others.
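For orientation, a minimal skeleton showing just these top-level sections (illustrative, with empty placeholder values) could look like:
variables: {}
manager_blueprint_override_templates: {}
inputs_override_templates: {}
handler_configuration_templates: []
handler_configurations: {}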
Bootstrap and Handler Configurations¶
Simplest Example¶
For this section we’ll use the following basic suites.yaml
:
handler_configurations:
some_openstack_env:
manager_blueprint: /path/to/my-manager-blueprint.yaml
inputs: /path/to/my-manager-blueprint-inputs.yaml
With this configuration in place, you can run (from any directory):
$ claw bootstrap some_openstack_env
to bootstrap a manager.
The command above created a directory at $CLAW_HOME/configurations/some_openstack_env. This directory contains:
- A copy of the supplied inputs.yaml.
- A directory named manager-blueprint, which is a copy of the original manager blueprint directory (with the exception that the blueprint file was renamed to manager-blueprint.yaml).
- A handler-configuration.yaml file that can be used to run system tests on the manager that was just bootstrapped (with manager_ip properly configured).
In addition, cfy init and cfy bootstrap were executed in this directory by claw, and .cloudify/config.yaml was configured so that you can see colors when running bootstrap/teardown and other workflows, which is nice.
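For illustration, the generated configuration directory should look roughly like this (the .cloudify directory comes from the cfy init mentioned above):
$ ls -A $CLAW_HOME/configurations/some_openstack_env
.cloudify  handler-configuration.yaml  inputs.yaml  manager-blueprint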
Of course, this is not very useful and can easily be achieved directly with cfy:
$ cfy init
$ cfy bootstrap -p /path/to/my-manager-blueprint.yaml -i /path/to/my-manager-blueprint-inputs.yaml
And while this does not do all the things that claw did previously, in most cases this may be enough. So let’s take it up a notch and start using more advanced handler_configuration features.
Inputs and Manager Blueprint Override¶
Now, we’ll build upon the previous example, making use of inputs_override
and manager_blueprint_override
:
handler_configurations:
some_openstack_env:
manager_blueprint: /path/to/my-needs-a-patch-manager-blueprint.yaml
inputs: /path/to/my-partially-filled-manager-blueprint-inputs.yaml
inputs_override:
keystone_username: MY_USERNAME
keystone_password: MY_PASSWORD
keystone_tenant_name: MY_TENANT_NAME
manager_blueprint_override:
node_templates.management_subnet.properties.subnet.dns_nameservers: [8.8.4.4, 8.8.8.8]
Suppose that the above handler configuration uses a manager blueprint that needs a
fix to the management network subnet dns configuration.
Its inputs file lacks a username, a password, and a tenant name. claw
enables us to override or even add properties to both the manager blueprint and the inputs file. This can be done by configuring, in addition to manager_blueprint
and inputs
properties, the manager_blueprint_override
and inputs_override
ones.
Similar to the previous section, running:
$ claw bootstrap some_openstack_env
will bootstrap the manager.
The new thing here is that the generated inputs.yaml file is not just a copy of the original inputs file, but rather a merge of its content, overridden by items specified in inputs_override. Similarly, the copy of the manager blueprint was modified so that the management_subnet node template has the required dns_nameservers property in place.
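For illustration, if the original inputs file contained only, say, a keystone_url and an external_network_name (hypothetical contents), the generated inputs.yaml would end up looking roughly like this:
keystone_url: SOME_KEYSTONE_URL            # from the original inputs file
external_network_name: SOME_NETWORK_NAME   # from the original inputs file
keystone_username: MY_USERNAME             # added by inputs_override
keystone_password: MY_PASSWORD             # added by inputs_override
keystone_tenant_name: MY_TENANT_NAME       # added by inputs_override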
Variables¶
Variables let you keep values in one place and reference them in inputs and manager blueprint overrides.
We’ll modify the previous example and extend it to use variables:
variables:
username: MY_USERNAME
password: MY_PASSWORD
tenant: MY_TENANT_NAME
handler_configurations:
some_openstack_env:
manager_blueprint: /path/to/my-manager-blueprint.yaml
inputs: /path/to/my-partially-filled-manager-blueprint-inputs.yaml
inputs_override:
keystone_username: '{{username}}'
keystone_password: '{{password}}'
keystone_tenant_name: '{{tenant}}'
As you can see, variables are pretty straightforward to use. Inside a string, use {{VARIABLE_NAME}} to reference a variable. Variables can also be used as part of a larger string. For example, if we have a variable named my_var, we can use it inside a string like this: some_value_{{my_var}}
In addition to defining your own variables and using them in handler
configurations, you can reference variables that are defined in the
suites.yaml
file that is located in the cloudify-system-tests
repository. For example, if the system tests suites.yaml
contains a
variable named datacentred_openstack_centos_7_image_id
, you can reference
it just the same, without it being defined in your suites.yaml
file:
handler_configurations:
some_openstack_env:
manager_blueprint: /path/to/my-manager-blueprint.yaml
inputs: /path/to/my-partially-filled-manager-blueprint-inputs.yaml
inputs_override:
image_id: '{{datacentred_openstack_centos_7_image_id}}'
System Tests Fields¶
As mentioned previously, when claw bootstrap
is called, it will generate
a file named handler-configuration.yaml
under
$CLAW_HOME/configurations/{CONFIGURATION_NAME}
that is suitable for use
when running system tests locally on the bootstrapped manager.
For this file to be fully suitable, it is up to you to add the relevant fields to the handler configuration. These fields are properties and handler.
The properties
field should be a name that is specified under the
handler_properties
section of the system tests suites.yaml
.
The handler
field should be a handler that matches the environment on
which the bootstrap is performed.
For example, a handler configuration for running tests on datacentred openstack might look like this:
handler_configurations:
some_openstack_env:
manager_blueprint: /path/to/my-manager-blueprint.yaml
inputs: /path/to/my-manager-blueprint-inputs.yaml
handler: openstack_handler
properties: datacentred_openstack_properties
YAML Anchors (&), Aliases (*) and Merges (<<)¶
While not specific to handler configurations, usage of YAML anchors, aliases and merges can greatly reduce repetition of complex configurations and improve reusability of different components in the handler configurations.
In the following example, we’ll see how YAML anchors, aliases and merges can be used in handler configurations.
We’ll start by giving an example of a somewhat complex, annotated suites.yaml, and explain what’s going on afterwards.
# Under this section, we put templates that will be used in manager
# blueprint override sections
manager_blueprint_override_templates:
# For now ignore the key 'openstack_dns' and notice the
# anchor (&) 'openstack_dns_servers_blueprint_override'
openstack_dns: &openstack_dns_servers_blueprint_override
node_templates.management_subnet.properties.subnet.dns_nameservers: [8.8.4.4, 8.8.8.8]
# For now ignore the key 'openstack_influx_port' and notice the
# anchor (&) 'openstack_openinflux_port_blueprint_override'
openstack_influx_port: &openstack_openinflux_port_blueprint_override
# The [append] means that this dict (that contains port and
# remote_ip_prefix) will be added to the rules list in the overridden
# manager blueprint
node_templates.management_security_group.properties.rules[append]:
port: 8086
remote_ip_prefix: 0.0.0.0/0
# Under this section, we put templates that will be used in inputs
# override sections
inputs_override_templates:
# For now ignore the key 'datacentred_openstack_env' and notice the
# anchor (&) 'datacentred_openstack_env_inputs'
datacentred_openstack_env: &datacentred_openstack_env_inputs
keystone_username: MY_USERNAME
keystone_password: MY_PASSWORD
keystone_tenant_name: MY_TENANT_NAME
keystone_url: MY_KEYSTONE_URL
external_network_name: MY_EXTERNAL_NETWORK_NAME
image_id: MY_IMAGE_ID
flavor_id: MY_FLAVOR_ID
region: MY_REGION
# Under this section, we put templates that will be used in handler
# configurations
handler_configuration_templates:
# Notice the anchor (&) 'openstack_handler_configuration'
# also notice that in this section, templates are specified as list
# instead of a dict like the previous template sections.
# It is not required that this section will be a list (i.e. it can be a
# dict as well), but it is required that the previous sections remain
# dicts
- &openstack_handler_configuration
handler: openstack_handler
inputs: ~/dev/cloudify/cloudify-manager-blueprints/openstack-manager-blueprint-inputs.yaml
manager_blueprint: ~/dev/cloudify/cloudify-manager-blueprints/openstack-manager-blueprint.yaml
handler_configurations:
# Notice the anchor (&) 'datacentred_handler_configuration'
datacentred_openstack_env_plain: &datacentred_handler_configuration
# This is the first place aliases (*) and merges (<<) are used in this
# file. We merge into the 'datacentred_openstack_env_plain'
# handler configuration, the content of the handler configuration
# template whose anchor (&) is 'openstack_handler_configuration'.
# Note, that while this is the first place aliases are used here, this
# is simply how the example is built. There is nothing stopping you
# from using them in the templates sections to reference previously
# defined templates.
<<: *openstack_handler_configuration
# we continue populating the handler configuration with regular values
properties: datacentred_openstack_properties
# here we use an alias (*) directly to set the value of
# 'inputs_override' to be the dict specified by the
# 'datacentred_openstack_env_inputs' anchor (&)
inputs_override: *datacentred_openstack_env_inputs
# Defining a modified datacentred handler configuration
datacentred_openstack_env_with_modified_dns:
# Notice that we merge (<<) the previously defined handler
# configuration anchored (&) by 'datacentred_handler_configuration'
<<: *datacentred_handler_configuration
# the only modification we make in this handler configuration is
# setting 'manager_blueprint_override' to have the value of the
# manager blueprint template anchored (&) with
# 'openstack_dns_servers_blueprint_override'
manager_blueprint_override: *openstack_dns_servers_blueprint_override
# Defining another modified datacentred handler configuration
datacentred_openstack_env_with_modified_dns_and_openinflux:
# Notice that we merge (<<) the previously defined handler
# configuration anchored (&) by 'datacentred_handler_configuration'
<<: *datacentred_handler_configuration
manager_blueprint_override:
# In this handler configuration, we merge (<<) both templates
# that were defined in the manager_blueprint_override_templates section
<<: *openstack_dns_servers_blueprint_override
<<: *openstack_openinflux_port_blueprint_override
Most of what is going on in the previous example is explained inline within the YAML as comments, so make sure you read through them to understand how it works.
One thing to mention is that even though it may look verbose, we now have 3 slightly different configurations, all located close to each other, with very little duplication. This lets us bootstrap different (but similar) configurations as easily as:
$ claw bootstrap datacentred_openstack_env_plain
$ claw bootstrap datacentred_openstack_env_with_modified_dns
$ claw bootstrap datacentred_openstack_env_with_modified_dns_and_openinflux
(Probably not in parallel though, as they all share the same tenant and are likely to interfere with each other)
Inputs and Manager Blueprint Override as Command Line Arguments¶
We can now go back to the previous example, where we (not so smoothly) ignored
the keys in the inputs_override_templates
and
manager_blueprint_override_templates
.
What if we had many small override snippets in these sections? Obviously, we can’t create a configuration for each combination, as there would soon be too many of them and the suites.yaml file would become a mess to maintain.
For that, claw
accepts --inputs-override (-i)
and
--manager-blueprint-override (-b)
as flags to the claw bootstrap
command, where several overrides can be passed in a single claw bootstrap
invocation. The values are the key names in the inputs_override_templates
and manager_blueprint_override_templates
sections.
Building on our previous example, if we only had the
datacentred_openstack_env_plain
handler configuration, we could do:
$ claw bootstrap datacentred_openstack_env_plain -b openstack_dns -b openstack_influx_port
to override the manager blueprint with the overrides from the openstack_dns and openstack_influx_port manager blueprint override templates.
Similarly, if we had an inputs override template named my_dev_branches
and
we wanted to bootstrap with our dev branches override we could do something
like:
$ claw bootstrap datacentred_openstack_env_plain -i my_dev_branches
without having to add a new configuration only for the sake of overriding some branches.
Overrides Syntax¶
Internally, claw
uses and extends the
cosmo_tester.framework.utils:YamlPatcher
to implement the overriding logic.
First we’ll go over features that are provided by the original YamlPatcher
.
Next, we’ll show an override feature that only exists in claw
(for now).
For the following examples we’ll focus on manager blueprint overrides because they tend to get nested and require more advanced overrides, but there is nothing stopping you from applying the same methods to inputs override if your heart desires.
Path Based Overrides¶
Overrides are based on the path to the key/value.
Manager blueprint snippet:
node_templates:
management_vm: ...
management_subnet: ...
webui: ...
If we wanted to add a full node template to the previous example we’d have an override like this:
manager_blueprint_override_templates:
new_node_in_blueprint: &new_node
# You would usually have a single override under an override template,
# but there is nothing stopping you from having multiple overrides
# under the same template if this is what you need.
node_templates.my_new_node:
type: cloudify.nodes.Root
...
The resulting YAML will look something like:
node_templates:
management_vm: ...
management_subnet: ...
webui: ...
my_new_node:
type: cloudify.nodes.Root
...
(after applying the override using one of the methods described in this page)
Note
Overriding (or adding) a value that is not nested is still path based; the path to the overridden key is simply the property name. This usually applies to inputs overrides, as they are mostly not nested. (You can find examples of such overrides in previous sections of this page.)
Note
Overriding a nested path that doesn’t exist will simply create this path for you.
For example, based on this simple YAML:
node_templates:
empty_node: {}
An override like node_templates.empty_node.some.nested.path: value will result in a YAML similar to this:
node_templates:
empty_node:
some:
nested:
path: value
Note
If an element in a path contains a dot (.
), you can escape the dot
using backslash (\
).
For example, if we wanted to add a configure
override to some node
template lifecycle operation:
node_templates:
some_node:
interfaces:
cloudify.interfaces.lifecycle:
create: ...
We’d have something like:
manager_blueprint_override_templates:
configure_lifecycle_operation: &lifecycle_operation
node_templates.some_node.interfaces.cloudify\.interfaces\.lifecycle.configure:
implementation: ...
inputs: ...
And the resulting YAML will look something like:
node_templates:
some_node:
interfaces:
cloudify.interfaces.lifecycle:
create: ...
configure:
implementation: ...
inputs: ...
(after applying the override using one of the methods described in this page)
Overriding Values in Lists¶
To override the value of some list item, you can use the [SOME_INDEX] directive.
For example, if we had this in a manager blueprint:
node_templates:
  some_node:
    relationships:
      - type: ...
        target: ...
      - type: some.relationship.type
        target: ...
And we wanted to change the type of the second relationship, we’d have an override similar to this:
manager_blueprint_override_templates:
change_rel_type: &rel_type
# note that indexing is zero-based (i.e. the second element is
# referenced by index 1)
node_templates.some_node.relationships[1].type: some.other.relationship.type
The resulting YAML will look something like this:
node_templates:
some_node:
relationships:
- type: ...
target: ...
- type: some.other.relationship.type
target: ...
(after applying the override using one of the methods described in this page)
If, on the other hand, we wanted to add a new relationship, we’d use the
[append]
directive:
manager_blueprint_override_templates:
append_rel: &append_rel
node_templates.some_node.relationships[append]:
type: ...
target: some_new_target_node
The resulting YAML will look something like this:
node_templates:
some_node:
relationships:
- type: ...
target: ...
- type: ...
target: ...
- type: ...
target: some_new_target_node
(after applying the override using one of the methods described in this page)
Function Based Overrides (Claw Feature Only)¶
There may be times when you need to do some advanced override that is not catered by the existing mechanism.
To enable this, claw
extends the system tests YamlPatcher
with an
ability to specify a function that will accept the current overridden value
as its first argument (or None
if no current value exists) and additional
optional arguments and keyword arguments.
We’ll implement a simple override function that appends exclamation marks to the current value (we will also make it configurable):
# lives in some.example.module
def add_excitement(current_value,
excitement_count=3,
excitement_char='!'):
assert isinstance(current_value, basestring)
return '{0}{1}'.format(current_value,
excitement_char * excitement_count)
Example YAML:
node_templates:
management_vm:
properties:
property1: value1
property2: value2
property3: value3
To use the function we just created we’ll define an override that has this structure:
func: path.to.func.module:function_name
# the following two are optional
args: [1,2,3]
kwargs: {some_kwarg: value, some_kwarg2: 2}
Let’s apply this structure to override values in our example:
manager_blueprint_override_templates:
change_props: &change_props_anchor
node_templates.management_vm.properties.property1:
func: some.example.module:add_excitement
# using the args syntax
node_templates.management_vm.properties.property2:
func: some.example.module:add_excitement
args: [5]
# using the kwargs syntax
node_templates.management_vm.properties.property3:
func: some.example.module:add_excitement
kwargs: {excitement_count: 2, excitement_char: '?'}
The resulting YAML will look something like this:
node_templates:
management_vm:
properties:
property1: value1!!!
property2: value2!!!!!
property3: value3??
(after applying the override using one of the methods described in this page)
Note
claw
comes with 2 built-in override functions to filter values from
lists and dictionaries. They can be found at claw.patcher:filter_list
and claw.patcher:filter_dict
. It also comes with an override function
that reads values from environment variables. It can be found at
claw.patcher:env
.
Reset Configuration¶
claw bootstrap accepts a --reset flag that will remove the current configuration directory. Use with care.
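For example, to discard the previously generated some_openstack_env configuration directory and bootstrap from scratch:
$ claw bootstrap some_openstack_env --reset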
Teardown¶
There is not much to say about tearing down an environment bootstrapped by
claw
.
If we bootstrapped an environment based on my_handler_configuration
, we can
perform teardown like this:
$ claw teardown my_handler_configuration
Caution
Internally, claw teardown
will pass --force
and
--ignore-deployments
to the underlying cfy teardown
command to save
you some typing. You should be aware of this to avoid unfortunate
accidental teardowns.
Deploy and Undeploy¶
The main use case for using claw in the first place is to bootstrap Cloudify management environments as painlessly as possible. But once we have that in place, why not leverage all the context we have?
One aspect in which this takes place is allowing fast, repeatable deployments, which means blueprint upload, deployment creation and install workflow execution; and undeployment, which means uninstall workflow execution, deployment deletion and blueprint deletion.
Similarly to the bootstrap process, on first look, it would appear doing this
doesn’t require any additional tool. cfy
already has everything in place:
$ cfy blueprints upload -p /path/to/blueprint.yaml -b my_blueprint
$ cfy deployments create -b my_blueprint -i /path/to/inputs.yaml
$ cfy executions start -w install
At the time of writing, cfy install is not yet integrated into cfy, but with it, this process should be even simpler.
So, how can claw
simplify this process?
The answer lies in a configuration mechanism that is very similar in nature to the handler configuration mechanism that has been described in Bootstrap and Teardown.
Configurations¶
During claw init
, in addition to the generated suites.yaml
file,
a file named blueprints.yaml
is also generated. The structure of this file
is similar to that of suites.yaml
.
It has two sections: variables, which should be familiar to you from suites.yaml, and blueprints, which logically serves the same purpose as handler_configurations in suites.yaml.
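Conceptually, the file is therefore shaped like this (illustrative skeleton, not the exact generated content):
variables: {}
blueprints: {}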
Deploy and Blueprint Configurations¶
In the following examples, we shall assume that a Cloudify management
environment was bootstrapped based on a configuration named
datacentred_openstack
.
Simplest Example¶
For this section we’ll use the following basic blueprints.yaml
:
blueprints:
openstack_nodecellar:
blueprint: ~/dev/cloudify/cloudify-nodecellar-example/openstack-blueprint.yaml
inputs: /path/to/nodecellar/openstack/inputs.yaml
With this blueprint configuration in place, you can run (from any directory):
$ claw deploy datacentred_openstack openstack_nodecellar
to deploy nodecellar on the datacentred_openstack environment (upload the blueprint, create a deployment and execute the install workflow).
The command above created a directory at $CLAW_HOME/configurations/datacentred_openstack/blueprints/openstack_nodecellar. This directory contains:
- A copy of the supplied inputs.yaml.
- A directory named blueprint, which is a copy of the original blueprint directory (with the exception that the blueprint file was renamed to blueprint.yaml, if it was named differently).
- A blueprint-configuration.yaml file that, at the moment, is not very useful and simply holds the supplied inputs and blueprint fields.
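For illustration, the generated directory should look roughly like this:
$ ls $CLAW_HOME/configurations/datacentred_openstack/blueprints/openstack_nodecellar
blueprint  blueprint-configuration.yaml  inputs.yaml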
Inputs and Blueprint Override¶
Next, we’ll build upon the previous example, making use of inputs_override
:
openstack_nodecellar:
blueprint: ~/dev/cloudify/cloudify-nodecellar-example/openstack-blueprint.yaml
inputs: ~/dev/cloudify/cloudify-nodecellar-example/inputs/openstack.yaml.template
inputs_override:
image: MY_IMAGE_ID
flavor: MY_FLAVOR
agent_user: AGENT_USERNAME
The previous blueprint configuration uses the default openstack nodecellar
blueprint and the inputs template file that comes with it. In addition, it uses
inputs_override
to override the image
, flavor
and agent_user
inputs.
Similar to the previous section, running:
$ claw deploy datacentred_openstack openstack_nodecellar
will deploy nodecellar on the datacentred_openstack
environment.
Note that the generated inputs.yaml
file is not just a
copy of the original inputs file, but rather a merge of its content, overridden
by items specified in inputs_override
.
Note
blueprint_override
was not used in the previous example, but has the
same semantics as those described for manager_blueprint_override
in
Bootstrap and Teardown.
Variables¶
Variables behave in a similar manner to how they behave in suites.yaml
as described in Bootstrap and Teardown.
There are two things to note, though.
First, just as handler configurations in the user suites.yaml can reference variables defined in the system tests suites.yaml, blueprint configurations can use variables defined in the system tests suites.yaml, in the user defined suites.yaml, and directly in blueprints.yaml.
In addition, the handler configuration properties are exposed as variables in blueprint configurations. For example, building upon the previous section:
variables:
agent_user: ubuntu
blueprints:
openstack_nodecellar:
blueprint: ~/dev/cloudify/cloudify-nodecellar-example/openstack-blueprint.yaml
inputs: ~/dev/cloudify/cloudify-nodecellar-example/inputs/openstack.yaml.template
inputs_override:
image: '{{properties.ubuntu_trusty_image_id}}'
flavor: '{{properties.small_flavor_id}}'
agent_user: '{{agent_user}}'
The openstack_nodecellar blueprint configuration uses the agent_user variable defined in the same file, as well as properties.ubuntu_trusty_image_id and properties.small_flavor_id, which come from the properties defined in the handler configuration. These are the same properties used by system tests when they use self.env.ubuntu_trusty_image_id, for example.
The nice thing about using properties is that they will contain correct values when switching between different environments, as opposed to hard coded values or plain variable references.
Reset Configuration¶
claw deploy accepts a --reset flag that will remove the current configuration directory. Use with care.
Undeploy¶
To undeploy (execute the uninstall workflow, delete the deployment and delete the blueprint), assuming the blueprint configuration is named openstack_nodecellar, run:
$ claw undeploy datacentred_openstack openstack_nodecellar
To cancel currently running executions before starting the undeploy process,
pass the --cancel-executions
flag to the claw undeploy
command.
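For example, building on the blueprint configuration used above:
$ claw undeploy datacentred_openstack openstack_nodecellar --cancel-executions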
Caution
Internally, claw undeploy
will pass --ignore-live-nodes
to the
underlying cfy deployments delete
command to save
you some typing. You should be aware of this when using this command.
Cleanup¶
claw exposes two different cleanup mechanisms. One mechanism is used to undeploy all deployments on a Cloudify manager; the other is used to remove all IaaS resources in an environment.
Deployment Cleanup¶
To undeploy all deployments on a claw
based Cloudify manager
environment, run:
$ claw cleanup-deployments CONFIGURATION_NAME
This command will iterate over all deployments on the Cloudify manager; for each deployment, the uninstall workflow will be executed, the deployment will be deleted and its blueprint will be deleted.
To cancel currently running executions of deployments before the undeployment process, pass the --cancel-executions flag to the claw cleanup-deployments command.
Caution
Internally, claw cleanup-deployments
will pass --ignore-live-nodes
to the underlying cfy deployments delete
commands.
You should be aware of this when using this command.
IaaS Cleanup¶
To remove all IaaS resources on the IaaS used by a certain configuration, run:
$ claw cleanup CONFIGURATION_NAME
Caution
This command removes all IaaS resources it finds (with a few exceptions).
Don’t run it on a shared tenant/account.
Note
At the moment, the claw cleanup
command is only implemented for
openstack based environments.
Note
On openstack, claw cleanup
will not delete key pairs by default.
To make the claw cleanup
command delete key pairs, add
delete_keypairs: true
to the relevant handler configuration.
For example:
handler_configurations:
some_openstack_env:
...
delete_keypairs: true
Note
If a certain configuration is passed to claw cleanup
and it doesn’t
exist, it will be generated temporarily for cleanup purposes. As such
you can use the --inputs-override
and --manager-blueprint-override
flags as described in Bootstrap and Teardown.
Generate Files¶
There may be times when you don’t want claw
to do the actual bootstrap or
deployment process for you, but you do want it to generate the initial files
for you.
Maybe because the system test you are working on performs the bootstrap. Maybe you need to make some complex modifications that are not catered for by the override mechanism. Maybe you just like having more control over the process.
Whatever the reason may be, claw
comes with two commands that will
generate manager and regular blueprints for you. These commands are
claw generate
and claw generate-blueprint
.
Note
Under the hood, when you run claw bootstrap
and claw deploy
,
claw
uses the same generate
, and generate-blueprint
commands.
Generate Manager Blueprints¶
To generate a manager blueprint based on a handler configuration, run:
$ claw generate CONFIGURATION_NAME
The previous command will generate all the files in a directory located at
$CLAW_HOME/configurations/CONFIGURATION_NAME
as described in
Bootstrap and Teardown.
claw generate
accepts the same flags as claw bootstrap
. These are
described in Bootstrap and Teardown.
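For example, reusing names from earlier examples, generating a configuration with a couple of overrides (without bootstrapping) might look like this:
$ claw generate datacentred_openstack_env_plain -b openstack_dns -i my_dev_branches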
Generate Blueprints¶
To generate a blueprint based on a blueprint configuration, within a handler configuration based environment, run:
$ claw generate-blueprint CONFIGURATION_NAME BLUEPRINT_NAME
The previous command will generate all the files in a directory located at
$CLAW_HOME/configurations/CONFIGURATION_NAME/blueprints/BLUEPRINT_NAME
as
described in Deploy and Undeploy.
Reset Configuration¶
All commands accept a --reset
flag that will remove the current
configuration directory. Use with care.
Scripts¶
The script mechanism allows you to execute Python scripts through claw; such scripts can import a context object named cosmo.
Executing Scripts¶
There are several ways to execute scripts:
Explicit path to the script:
$ claw script {CONFIGURATION_NAME} {PATH_TO_SCRIPT}
The above will execute the script located at {PATH_TO_SCRIPT} with the configuration CONFIGURATION_NAME as its cosmo (context).
If a script is located under $CLAW_HOME/scripts (or any other scripts dir that is explicitly added to the scripts setting in ~/.claw), it can be executed less verbosely by specifying the filename only, without the full path. For example, if there is a script named my_script.py under $CLAW_HOME/scripts, it can be executed by running:
$ claw script {CONFIGURATION_NAME} my_script.py
If the script name ends with .py, you can omit the .py extension:
$ claw script {CONFIGURATION_NAME} my_script
If the script name ends with .py, you can also run it as if it were a built-in claw command (underscores _ are replaced with dashes -):
$ claw my-script {CONFIGURATION_NAME}
To enable running scripts directly, claw will execute a script if its path is supplied as the first argument, e.g.:
$ claw {PATH_TO_SCRIPT}
If called like this, the active configuration represented by the cosmo context will be that of the configuration that was claw generate-ed or claw bootstrap-ed most recently.
Tip
By adding #! /usr/bin/env claw as the first line of an executable script, the script can be executed by calling it directly:
$ {PATH_TO_SCRIPT}
Script Functions¶
When a script is executed, if no function name is supplied as an additional
argument, a function named script
is searched for, and executed if found.
If a function name is supplied, e.g.
$ claw script {CONFIGURATION_NAME} {SCRIPT_PATH} my_function
it will be executed instead (my_function
in the example above).
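For example, a script along these lines (illustrative) exposes both the default script entry point and an additional my_function entry point:
#! /usr/bin/env claw

def script():
    print('this runs when no function name is supplied')

def my_function():
    print('this runs when my_function is passed explicitly')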
Script Function Arguments¶
Functions are executed by leveraging the argh
library. This library makes it
easy to pass additional configuration to the function with very little effort
in terms of argument parsing.
For example, consider the following script:
#! /usr/bin/env claw
def script(first_name, last_name, age=35):
pass
Running the script with no arguments:
$ {PATH_TO_SCRIPT}
usage: claw [-h] [-a AGE] first-name last-name
claw: error: too few arguments
You can also run help:
$ {PATH_TO_SCRIPT} --help
usage: claw [-h] [-a AGE] first-name last-name
positional arguments:
first-name -
last-name -
optional arguments:
-h, --help show this help message and exit
-a AGE, --age AGE 35
As can be seen in the previous snippets, the argh
library will analyze the
function signature and determine that it expects two positional arguments and
one optional argument named age
.
If we wanted, we could add help descriptions to all the arguments:
#! /usr/bin/env claw
import argh
@argh.arg('first-name', help='The first name')
@argh.arg('last-name', help='The last name')
@argh.arg('-a', '--age', help='The age')
def script(first_name, last_name, age=35):
pass
Which will then produce
$ {PATH_TO_SCRIPT} --help
usage: claw [-h] [-a AGE] first-name last-name
positional arguments:
first-name The first name
last-name The last name
optional arguments:
-h, --help show this help message and exit
-a AGE, --age AGE The age (default: 35)
Finally, to run this function:
$ {PATH_TO_SCRIPT} John Doe 72
All of the features presented above are exposed by the argh
library, but
it was worth mentioning them here because they could be quite useful.
You can read more about argh at http://argh.readthedocs.org.
Cosmo¶
Until now, all we showed was how to run scripts through claw. This ability on its own is not very useful, as one could always run scripts directly through the python interpreter.
This is where the cosmo object comes in. The cosmo object serves as your entry point to... well, the cosmo. It encapsulates different aspects and utilities of a Cloudify manager environment, specified by CONFIGURATION_NAME.
To use the cosmo
object, add the following to the script imports:
from claw import cosmo
Some useful things that the cosmo object holds:
- cosmo.client will return a configured Cloudify REST client.
- cosmo.ssh will configure a fabric env to connect to the Cloudify manager. Usage example:
  with cosmo.ssh() as ssh:
      ssh.run('echo $HOME')
- cosmo.inputs will return the inputs used for bootstrapping.
- cosmo.handler_configuration is the generated handler configuration used when running system tests locally.
To see other things exposed by cosmo, take a look at the claw.configuration:Configuration class code.
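Tying these together, a small illustrative script (assuming a bootstrapped configuration, and that cosmo.client behaves like the standard Cloudify REST client) could look like this:
#! /usr/bin/env claw
from claw import cosmo

def script():
    # list the blueprints on the manager using the configured REST client
    for blueprint in cosmo.client.blueprints.list():
        print(blueprint.id)
    # run a command on the manager over ssh
    with cosmo.ssh() as ssh:
        ssh.run('echo $HOME')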
Script Generation¶
To generate a stub script suitable for execution by claw
, run the following:
$ claw generate-script {PATH_TO_GENERATED_SCRIPT}
The above will create a template script with a script
function and a
cosmo
import already in place.
Note
claw init
generates a script named example-script.py
under
$CLAW_HOME/scripts
.
Contributed Scripts¶
The github repo https://github.com/dankilman/claw-scripts contains additional scripts that can simplify certain day to day Cloudify development related tasks.
To use these scripts, clone this repository somewhere and add a
PATH_TO_CLAW_SCRIPTS_REPO_DIR/scripts
entry to the scripts
list in
~/.claw
:
claw_home: ...
main_suites_yaml: ...
scripts:
- ...
- /path/to/claw-scripts/scripts
You are encouraged to open pull requests with your scripts if you find them useful enough for a wider audience.
Tips¶
Running System Tests¶
claw generate
and claw bootstrap
create a symlink to the last generated
configuration directory in $CLAW_HOME/configurations/_
.
With this in place, instead of exporting the HANDLER_CONFIGURATION environment variable to a different handler-configuration.yaml file every time you run a system test, you can export this hard coded environment variable once:
export HANDLER_CONFIGURATION=PATH_TO_CLAW_HOME/configurations/_/handler-configuration.yaml
Remember that claw bootstrap
will update handler-configuration.yaml
with the manager_ip
of the newly bootstrapped manager.
Current Configuration¶
As mentioned in the previous section, the last generated/bootstrapped
configuration will always be symlinked at $CLAW_HOME/configurations/_
.
As such, you can save some more typing by using _
instead of the
configuration name, in all commands that take the configuration name as their
first argument.
For example, if we recently bootstrapped an environment based on a handler
configuration named datacentred_openstack
, and we wish to run some script
with the datacentred_openstack
environment as the script’s cosmo
,
instead of writing:
$ claw script datacentred_openstack my_script
we can instead write:
$ claw script _ my_script
to get the same result. Much shorter.