Welcome to timmy’s documentation!¶
Contents:
Specification¶
OpenStack Ansible-like tool for parallel node operations: two-way data transfer, log collection, remote command execution
- The tool is based on https://etherpad.openstack.org/p/openstack-diagnostics
- Should work fine in environments deployed by Fuel versions: 4.x, 5.x, 6.x, 7.0, 8.0, 9.0, 9.1
- Operates non-destructively.
- Can be launched on any host within the admin network, provided the Fuel node IP is specified and Fuel and the other nodes are accessible via ssh from the local system.
- Parallel launch, but only on nodes that are ‘online’. Node filters are also available.
- Commands (from the ./cmds directory) are separated according to roles (detected automatically) via symlinks, so the command list may depend on release, roles and OS. In addition, some commands run everywhere, and some are executed on only one node of a given role - the first such node encountered.
- Modular: possible to create a special package that contains only certain required commands.
- Collects log files from the nodes using filters
- Some archives are created - general.tar.bz2 and logs-*
- Checks are implemented to prevent filesystem overfilling due to log collection; an appropriate error is shown.
- Can be imported into other python scripts (ex. https://github.com/f3flight/timmy-customtest) and used as a transport and structure to access node parameters known to Fuel, run commands on nodes, collect outputs, etc. with ease.
General configuration¶
All default configuration values are defined in timmy/conf.py. Timmy works with these values if no configuration file is provided.
If a configuration file is provided via the -c | --config option, it overlays the default configuration.
An example of a configuration file is config.yaml.
Some of the parameters available in the configuration file:
- ssh_opts - parameters to pass directly to the ssh command (recommended to leave at default), such as connection timeout, etc. See timmy/conf.py to review defaults.
- env_vars - environment variables to pass to the commands and scripts - you can use these to expand variables in commands or scripts
- fuel_ip - the IP address of the master node in the environment
- fuel_user - username to use for accessing Nailgun API
- fuel_pass - password to access Nailgun API
- fuel_tenant - Fuel Keystone tenant to use when accessing Nailgun API
- fuel_port - port to use when connecting to Fuel Nailgun API
- fuel_keystone_port - port to use when getting a Keystone token to access Nailgun API
- fuelclient - True/False - whether to use the fuelclient library to access Nailgun API
- fuel_skip_proxy - True/False - ignore http(s)_proxy environment variables when connecting to Nailgun API
- rqdir - the path to the directory containing rqfiles, scripts to execute, and filelists to pass to rsync
- rqfile - a list of dicts:
  - file - path to an rqfile containing actions and/or other configuration parameters
  - default - should always be False, except for the included default.yaml. This option is used to make logs_no_default work.
- logs_days - how many past days of logs to collect. This option sets the start parameter for each logs action if it is not defined there.
- logs_speed_limit - True/False - enable speed limiting of log transfers (total transfer speed limit, not per-node)
- logs_speed_default - Mbit/s - used when autodetection fails
- logs_speed - Mbit/s - manually specify max bandwidth
- logs_size_coefficient - a float value used to check local free space; free space must exceed ‘logs size * coefficient’; values lower than 0.3 are not recommended and will likely cause local disk fillup during log collection
- do_print_results - print outputs of commands and scripts to stdout
- clean - True/False - erase previous results in outdir and archive_dir, if any
- outdir - directory to store output data. WARNING: this directory is WIPED by default at the beginning of data collection. Be careful with what you define here.
- archive_dir - directory to put the resulting archives into
- timeout - timeout for SSH commands and scripts in seconds
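To illustrate, a minimal configuration file using a few of these parameters might look like the sketch below (all values are hypothetical placeholders, not recommendations):
fuel_ip: '10.20.0.2'               # IP of the Fuel master node
fuel_user: 'admin'                 # Nailgun API username
fuel_pass: 'admin'                 # Nailgun API password
logs_days: 3                       # collect only the last 3 days of logs
logs_size_coefficient: 1.0         # free space must exceed estimated logs size
outdir: '/tmp/timmy/info'          # WARNING: wiped at the start of collection
archive_dir: '/tmp/timmy/archives' # resulting archives are placed here
timeout: 15                        # SSH command/script timeout in seconds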
Configuring actions¶
Actions can be configured in a separate yaml file (by default rq.yaml is used) and/or defined in the main config file, or passed via the command line options -P, -C, -S, -G.
The following actions are available for definition:
- put - a list of tuples / 2-element lists: [source, destination]. Passed to scp like so: scp source <node-ip>:destination. Wildcards are supported for source.
- cmds - a list of dicts: {‘command-name’: ‘command-string’}. Example: {‘command-1’: ‘uptime’}. The command string is a Bash string. Commands are executed in sorted order of their names.
- scripts - a list of elements, each of which can be a string or a dict:
  - string - represents a script filename located on the local system. If the filename does not contain a path separator, the script is expected to be located inside rqdir/scripts. Otherwise the provided path is used to access the script. Example: './my-test-script.sh'
  - dict - use this option if you need to pass variables to your script. Script parameters are not supported, but you can use env variables instead. The dict should contain only one key, which is the script filename (see above), and the value is a Bash space-separated variable assignment string. Example: './my-test-script.sh': 'var1=123 var2="HELLO WORLD"'
  - LIMITATION: if you use a script with the same name more than once for a given node, the collected output will only contain the result of the last execution.
  - INFO: Scripts are not copied to the destination system - script code is passed as stdin to bash -s executed via ssh or locally. Therefore passing parameters to scripts is not supported (unlike cmds, where you can write any Bash string); use variables in your scripts instead. Scripts are executed in the following order: all scripts without variables, sorted by their full filename, then all scripts with variables, also sorted by full filename. Therefore, if order matters, it is best to put all scripts into the same folder and name them according to the order in which you want them executed on a node. Mind that scripts with variables are executed after all scripts without variables. If you need to mix scripts with and without variables and maintain order, use the dict structure for all scripts and set null as the value for those which do not need variables.
- files - a list of filenames to collect. Passed to scp. Supports wildcards.
- filelists - a list of filelist filenames located on the local system. A filelist is a text file containing files and directories to collect, which is passed to rsync. Does not support wildcards. If the filename does not contain a path separator, the filelist is expected to be located inside rqdir/filelists. Otherwise the provided path is used to read the filelist.
- logs:
  - path - base path to scan for logs
  - include - regexp string to match log files against for inclusion (if not set, all files are included)
  - exclude - regexp string to match log files against; matched files are excluded from collection
  - start - date or datetime string to collect only files modified on or after the specified time. Format: YYYY-MM-DD or YYYY-MM-DD HH:MM:SS or N, where N is an integer number of days (meaning the last N days). An actions sketch combining these is shown below.
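Putting these actions together, a hypothetical actions section of a config file could look like this (the script, filelist and path names are placeholders):
put:
  - ['deb-package.deb', '/tmp/deb-package.deb'] # copy a local file to each node
cmds:
  01-uptime: 'uptime'
  02-memory: 'free -m'
scripts:
  - 'my-diag.sh'        # hypothetical script located in rqdir/scripts
files:
  - '/etc/nova/*.conf'  # wildcards are supported
filelists:
  - 'my-filelist'       # hypothetical filelist located in rqdir/filelists
logs:
  path: '/var/log'
  include: 'nova|neutron'
  start: 3              # only files modified within the last 3 days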
Filtering nodes¶
- soft_filter - use to skip any operations on non-matching nodes
- hard_filter - same as above but also removes non-matching nodes from NodeManager.nodes dict - useful when using timmy as a module
- Nodes can be filtered by the following parameters defined inside soft_filter and/or hard_filter:
- roles - the list of roles, ex. [‘controller’, ‘compute’]
- online - enabled by default to skip non-accessible nodes
- status - the list of statuses. Default: [‘ready’, ‘discover’]
- ids - the list of ids, ex. [0,5,6]
- any other attribute of Node object which is a simple type (int, float, str, etc.) or a list containing simple types
Lists match any element, meaning that if any element of the filter list matches the node value (if the value is a list - any element of it), the node passes.
Negative filters are possible by prefixing the filter parameter with no_; for example, no_id = [0] will filter out Fuel.
Negative lists also match any element - if any match / collision is found, the node is skipped.
You can combine any number of positive and negative filters as long as their names differ (since this is a dict).
You can use both positive and negative parameters to match the same node parameter (though it does not make much sense): roles = [‘controller’, ‘compute’] combined with no_roles = [‘compute’] will skip computes and run only on controllers, as sketched below. As already said, it does not make much sense :)
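For reference, that combination would be written in a config file like this (a sketch; the net effect is controllers only):
soft_filter:
  roles: ['controller', 'compute'] # positive filter - controllers and computes pass
  no_roles: ['compute']            # negative filter - computes are then skipped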
Parameter-based configuration¶
It is possible to define special by_<parameter-name> dicts in the config to (re)define node parameters based on other parameters. For example:
by_roles:
  controller:
    cmds: {'check-uptime': 'uptime'}
In this example, for any controller node, the cmds setting will be reset to the value above. For nodes without the controller role, default (none) values will be used.
Negative matches are possible via the no_ prefix:
by_roles:
  no_fuel:
    cmds: {'check-uptime': 'uptime'}
In this example the uptime command will be executed on all nodes except the Fuel server.
It is also possible to define a special once_by_<parameter-name> dict, which works similarly but will only result in attributes being assigned to a single (first in the list) matching node. Example:
once_by_roles:
  controller:
    cmds: {'check-uptime': 'uptime'}
Such a configuration will result in uptime being executed on only one node with the controller role, not on every controller.
rqfile format¶
The rqfile format is a bit different from the config format. The basic difference:
config:
  scripts: [a, b, c]
  by_roles:
    compute:
      scripts: [d, e, f]
rqfile:
  scripts:
    __default: [a, b, c]
    by_roles:
      compute: [d, e, f]
The config and rqfile definitions presented above are equivalent. It is possible to define configuration in the config file using the config format, or in an rqfile using the rqfile format (linking to the rqfile from the config via the rqfile setting). It is also possible to define part here and part there. Mixing identical parameters in both places is not recommended - the results may be unpredictable (such a scenario has not been thoroughly tested). In general, an rqfile is good for fewer settings with more parameter-based variations (by_), and the main config for more different settings with fewer such variations.
Configuration application order¶
Configuration is assembled and applied in a specific order:
- the default configuration is initialized. See timmy/conf.py for details.
- command line parameters, if defined, are used to modify the configuration.
- the rqfile, if defined (default - rq.yaml), is converted and injected into the configuration. At this stage the configuration is in its final form.
- for every node, the configuration is applied, except once_by_ directives:
  - first the top-level attributes are set
  - then by_<attribute-name> parameters are iterated to override settings and append (accumulate) actions
- finally, once_by_<attribute-name> parameters are applied - only to one matching node for any set of matching values. This is useful, for example, if you want a specific file or command from only a single node matching a specific role, like running nova list on only one controller.
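As a hypothetical illustration of the per-node steps (names and values are placeholders): a by_roles match overrides plain settings such as timeout, while actions such as scripts are appended on top of what is already defined:
cmds:
  01-basic: 'uptime'    # action defined for all nodes
timeout: 15             # top-level setting
by_roles:
  controller:
    scripts: ['controller-check.sh'] # hypothetical action, appended for controllers
    timeout: 30                      # setting overridden for controllers (30 instead of 15)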
Once you are done with the configuration, you might want to familiarize yourself with Usage.
Usage¶
NOTICE: Even though Timmy uses nice and ionice to limit its impact on the cloud, you should still expect one core to be utilized, both locally (where Timmy is launched) and on each node where commands are executed or logs are collected. Additionally, if logs are collected, the local disk (log destination directory) may see significant utilization.
The easiest way to launch Timmy is to run the timmy.py script.
However, you need to configure it first.
Basically, timmy.py is a simple wrapper that launches cli.py.
Full reference for command line interface
Basic parameters:
- --only-logs - only collect logs (skip files, filelists, commands and scripts)
- -l, --logs - also collect logs (logs are not collected by default due to their size)
- -e, --env - filter by environment ID
- -R, --role - filter by role
- -c, --config - use a custom configuration file to overwrite defaults. See config.yaml as an example
- -j, --nodes-json - use a json file instead of polling Fuel (to generate the json file, use fuel node --json) - speeds up initialization
- -o, --dest-file - the name/path for the output archive; the default is general.tar.gz, placed in /tmp/timmy/archives
- -v, --verbose - verbose (INFO) logging
- -d, --debug - debug (DEBUG) logging
Shell Mode - a mode of execution which makes the following changes:
- rqfile (rq.yaml by default) is skipped
- the Fuel node is skipped
- outputs of commands (specified with -C options) and scripts (specified with -S) are printed on screen
- any actions (cmds, scripts, files, filelists, put - except logs) and parameter-based configuration defined in config are ignored
The following parameters (“actions”) are available; the usage of any of them enables Shell Mode:
- -C <command> - Bash command (string) to execute on nodes. Using multiple -C statements will produce the same result as using one with several commands separated by ; (traditional shell syntax), but for each -C statement a new SSH connection is established
- -S <script> - name of the Bash script file to execute on nodes (if the filename does not contain a path separator, the file must be located in the scripts folder inside the path specified by the rqdir config parameter, which defaults to rq; if a path separator is present, the given filename is used directly as provided)
- -P <file/path> <dest> - upload local data to nodes (wildcards supported). You must specify 2 values for each -P switch.
- -G <file/path> - download (collect) data from nodes
Examples¶
- timmy - run according to the default configuration and default actions. Default actions are defined in rq.yaml (/usr/share/timmy/rq.yaml). Logs are not collected.
- timmy -l - run the default actions and also collect logs (the default log setup is applied - defaults are hardcoded in timmy/conf.py). Such an execution is similar to Fuel’s “diagnostic snapshot” action, but finishes faster and collects fewer logs.
- timmy --only-logs - only collect logs, no actions performed (default log setup, as above)
- timmy -C 'uptime; free -m' - check uptime and memory on all nodes
- timmy -G /etc/nova/nova.conf - get nova.conf from all nodes
- timmy -R controller -P package.deb '' -C 'dpkg -i package.deb' -C 'rm package.deb' -C 'dpkg -l | grep [p]ackage' - push a package to all nodes, install it, remove the file and check that it is installed
- timmy -c myconf.yaml - use a custom config file and run the program according to it. A custom config can specify any actions, log setup, and other settings. See the configuration doc for more details.
Using a custom configuration file¶
If you want to perform a set of actions on the nodes without writing a long command line (or if you want to use options only available in config), you may want to set up a config file instead. An example config structure would be:
rqdir: './pacemaker-debug' # a folder which should contain 'filelists' and/or 'scripts' subfolders if any filelists or scripts are defined later
rqfile: null # explicitly undefine rqfile to skip default filelists and scripts
hard_filter:
  roles: # only execute on Fuel and controllers
    - fuel
    - controller
cmds: # some commands to run on all nodes (after filtering). cmds syntax is {name: value, ...}. cmds are executed in alphabetical order.
  01-my-first-command: 'uptime'
  02-disk-check: 'df -h'
  and-also-ram: 'free -m'
logs:
  exclude: '.*' # exclude all logs by default
by_roles:
  controller:
    scripts: # a script is used here so as not to overwrite the cmds already defined for all nodes
      - pacemaker-debug.sh # the name of the file inside the 'scripts' folder inside the 'rqdir' path, which will be executed on all matching nodes
    files:
      - '/etc/coros*' # get all files matching the /etc/coros* wildcard path
  fuel:
    logs:
      include: 'crmd|lrmd|corosync|pacemaker' # only get logs whose names match this regexp (re.search is used)
Then you would run timmy -l -c my-config.yaml to execute Timmy with this config.
Instead of putting the whole structure in a config file, you can move the actions (cmds, files, filelists, scripts, logs) to an rqfile and specify the rqfile path in the config (although in this example the config-only way is more compact). The rqfile structure is a bit different:
cmds: # top-level elements are node parameters, __default will be assigned to all nodes
  __default:
    - 01-my-first-command: 'uptime'
    - 02-disk-check: 'df -h'
    - and-also-ram: 'free -m'
scripts:
  by_roles: # all non-"__default" keys should match "by_<parameter>"
    controller:
      - pacemaker-debug.sh
files:
  by_roles:
    controller:
      - '/etc/coros*'
logs:
  by_roles:
    fuel:
      include: 'crmd|lrmd|corosync|pacemaker'
  __default:
    exclude: '.*'
Then the config should look like this:
rqdir: './pacemaker-debug'
rqfile: './pacemaker-rq.yaml'
hard_filter:
  roles:
    - fuel
    - controller
And you run timmy -l -c my-config.yaml.
Exit Codes¶
- 2 - SIGINT (Keyboard Interrupt) caught.
- 100 - not enough free space for logs. Decrease the logs size coefficient via CLI or config, or free up disk space.
- 101 - rqdir configuration parameter points to a non-existing directory.
- 102 - could not load YAML file - I/O Error.
- 103 - could not load YAML file - Value Error, see log for details.
- 104 - could not load YAML file - Parser Error - incorrectly formatted YAML.
- 105 - could not retrieve information about nodes by any available means.
- 106 - fuel_ip configuration parameter not defined.
- 107 - could not load JSON file - I/O Error.
- 108 - could not load JSON file - Value Error, see log for details.
- 109 - subprocess (one of the node execution processes) exited with a Python exception.
- 110 - unable to create a directory.
- 111 - IP address must be defined for a Node instance.
- 112 - one of the two parameters fuel_user or fuel_pass specified without the other.
- 113 - unhandled Python exception occurred in the main process.