Welcome to Benchmarking Suite’s documentation!¶
The Benchmarking Suite is an all-in-one solution for benchmarking cloud services: it simulates typical application behaviours and compares the results across different cloud providers. It wraps a set of representative, de-facto standard and widely used third-party benchmarking tools and relies on them for workload simulation and performance measurement.
The Benchmarking Suite automates the benchmarking process by managing the allocation and de-allocation of the necessary resources, the installation and execution of the benchmarking tools, and the storage of data.
It has been designed to be extensible, allowing easy integration of new third-party benchmarking tools and cloud services. Data collected and stored during test executions is homogenized and aggregated into higher-level metrics (e.g. the average value), allowing performance comparisons among different providers and/or different dates.
The Benchmarking Suite development has been funded by two European research and innovation projects: ARTIST [1] and CloudPerfect [2].
License¶
The Benchmarking Suite is an open source product released under the Apache License v2.0 [3].
Topics¶
Quick Start¶
Install¶
The Benchmarking Suite is packaged and distributed through PyPI [1].
Important
The Benchmarking Suite requires Python 3.5+. If it is not the default version on your system, it is recommended to create a virtualenv:
virtualenv -p /usr/bin/python3.5 benchsuite
source benchsuite/bin/activate
Let’s start by installing the command line tool and the standard library:
$ pip install benchsuite.stdlib benchsuite.cli
This will make the benchsuite bash command available and will copy the standard benchmark test configurations into the default configuration location (under ~/.config/benchmarking-suite/benchmarks).
Configure¶
Before executing a benchmark, we have to configure at least one Service Provider. The benchsuite.stdlib package provides some templates (located under ~/.config/benchmarking-suite/providers).
For instance, for Amazon EC2 we can start from the template and complete it:
cp ~/.config/benchmarking-suite/providers/amazon.conf.example my-amazon.conf
Open and edit my-amazon.conf:
[provider]
class = benchsuite.provider.libcloud.LibcloudComputeProvider
type = ec2
access_id = <your access_id>
secret_key = <your secret_key>
region = us-west-1
ex_security_group_ids = <id of the security group>
ex_subnet = <id of the subnet>
[ubuntu_micro]
image = ami-73f7da13
size = t2.micro
key_name = <your keypair name>
key_path = <path to your private key file>
vm_user = ubuntu
platform = ubuntu_16
In this case we will pass this file directly to the command line tool, but we can also set up our own configuration directory, put all of our service provider and benchmark test configurations there, and refer to them by name (see the XXX section).
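Provider files use standard INI syntax, so they can be sanity-checked with Python's configparser before running any test. The following is an illustrative sketch only (the helper function and the dummy credential values are not part of the Benchmarking Suite):

```python
import configparser

# Dummy provider configuration mirroring the my-amazon.conf template
# shown above. The credential values here are placeholders, not real ones.
PROVIDER_CONF = """
[provider]
class = benchsuite.provider.libcloud.LibcloudComputeProvider
type = ec2
access_id = DUMMY_ACCESS_ID
secret_key = DUMMY_SECRET_KEY
region = us-west-1

[ubuntu_micro]
image = ami-73f7da13
size = t2.micro
vm_user = ubuntu
platform = ubuntu_16
"""

def load_provider_conf(text):
    """Parse a provider configuration; return (provider options, service types).

    Every section other than [provider] defines a service type
    (e.g. ubuntu_micro) that can be passed to the --service option.
    """
    cfg = configparser.ConfigParser()
    cfg.read_string(text)
    if 'provider' not in cfg:
        raise ValueError('missing [provider] section')
    provider = dict(cfg['provider'])
    services = {s: dict(cfg[s]) for s in cfg.sections() if s != 'provider'}
    return provider, services

provider, services = load_provider_conf(PROVIDER_CONF)
print(provider['type'])    # ec2
print(sorted(services))    # ['ubuntu_micro']
```

This only checks the file's structure; the actual option validation is performed by the Benchmarking Suite itself when the provider is used.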
Run!¶
Now you can execute your first benchmark test:
python -m benchsuite.cli exec --provider my-amazon.conf --service ubuntu_micro --tool ycsb-mongodb --workload WorkloadA
Go REST¶
Enabling the REST server is very simple:
pip install benchsuite.rest
benchsuite-rest start
tail -f benchsuite-rest.log
References¶
[1] https://python.org/pypi/benchsuite.core/
Architecture¶
Software Modules¶
The Benchmarking Suite is distributed in four modules:
Module | Description
---|---
benchsuite.core | the core library, with the definition of types and the fundamental framework for the extension of the Benchmarking Suite (all other modules depend on it)
benchsuite.stdlib | a collection of benchmark test configuration files and support for some cloud providers
benchsuite.cli | a bash command line tool to manage tests and results
benchsuite.rest | an HTTP server and a REST API to interact with the Benchmarking Suite
Model¶
The Benchmarking Suite is designed to be very generic and extensible. The main concept is the BenchmarkExecution, that is, the execution of a Benchmark in one ExecutionEnvironment provided by a ServiceProvider.
With these concepts it is easy to model, for instance, the execution of the YCSB benchmark on a virtual machine provided by Amazon AWS.
Since it is frequent to execute multiple tests against the same service provider, the Benchmarking Suite also has the concept of a BenchmarkingSession, which can include one or more executions on the same provider.
![Class diagram of the Benchmarking Suite model](https://yuml.me/diagram/scruffy;dir:TB/class/[BenchmarkingSession]-1..*[BenchmarkingExecution],[BenchmarkingSession]-1[ServiceProvider],[BenchmarkingExecution]-[ExecutionEnvironment],[ExecutionEnvironment]-1[ServiceProvider].png)
By default, all the executions of the same session share the same ExecutionEnvironment.
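The relationships above can be sketched with plain Python classes. This is an illustrative rendering of the model only, not the real benchsuite types:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ServiceProvider:
    name: str

@dataclass
class ExecutionEnvironment:
    # each execution environment is provided by exactly one provider
    provider: ServiceProvider

@dataclass
class Benchmark:
    tool: str
    workload: str

@dataclass
class BenchmarkExecution:
    # the execution of a Benchmark in one ExecutionEnvironment
    benchmark: Benchmark
    environment: ExecutionEnvironment

@dataclass
class BenchmarkingSession:
    # a session groups one or more executions against the same provider
    provider: ServiceProvider
    executions: List[BenchmarkExecution] = field(default_factory=list)

    def new_execution(self, benchmark, environment):
        # by default, all executions of a session share the same environment
        e = BenchmarkExecution(benchmark, environment)
        self.executions.append(e)
        return e

aws = ServiceProvider('amazon-ec2')
env = ExecutionEnvironment(aws)
session = BenchmarkingSession(aws)
session.new_execution(Benchmark('ycsb-mongodb', 'WorkloadA'), env)
print(len(session.executions))  # 1
```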
Configuration¶
Benchsuite StdLib¶
Tool | Version | Workload | Centos | Ubuntu |
---|---|---|---|---|
CFD | ||||
Idle | – | all | x | x |
DaCapo | 9.12 | all | x | x |
DaCapo (Fixed Time) | 9.12 | all | x | x |
Filebench | 1.4.9.1 | webserver | x | x |
webproxy | x | x | ||
videoserver | x | x | ||
varmail | x | x | ||
fileserver | x | x | ||
YCSB-Mysql | 0.12.0 | all | x | x |
YCSB-MongoDB | 0.11.0 | all | x | x |
Executions¶
Single Step Execution¶
The single step execution executes one or more benchmark tests with a single command: the session creation, the preparation of the environment, the run and the collection of results are all performed automatically.
Step-by-Step Execution¶
The Step-by-Step execution allows to control each phase of the benchmarking process (session creation, execution creation, preparation, run and results collection) with separate commands.
This is of particular interest if, during the execution of the benchmarks, it is needed to run other tools (e.g. profilers) alongside the benchmark.
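A hypothetical step-by-step session with the command line tool, using the subcommands listed in the Command line tool section (the provider file name and the <session-id>/<exec-id> placeholders are illustrative; the real ids are printed by the commands themselves):

```
$ benchsuite new-session -p my-amazon.conf -s ubuntu_micro
$ benchsuite new-exec <session-id> ycsb-mongodb WorkloadA
$ benchsuite prepare-exec <exec-id>
$ benchsuite run-exec <exec-id>
$ benchsuite collect-exec <exec-id>
$ benchsuite destroy-session <session-id>
```

The single step alternative is the multiexec subcommand, which performs all of these phases in one shot.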
Command line tool¶
Install¶
The Benchmarking Suite Command line tool can be installed with:
pip install benchsuite.cli
If the installation was successful, the benchsuite command should be in your path.
Usage¶
This documentation is autogenerated from the Python argparse options.
usage: benchsuite [-h] [--verbose] [--quiet] [--config CONFIG]
{new-session,list-sessions,list-providers,list-benchmarks,destroy-session,new-exec,prepare-exec,run-exec,list-execs,collect-exec,multiexec}
...
Named Arguments¶
--verbose, -v | print more information (3 levels) |
--quiet, -q | suppress normal output Default: False |
--config, -c | path of an alternative configuration directory |
Sub-commands:¶
new-session¶
Creates a new benchmarking session on a service provider.
benchsuite new-session [-h] --provider PROVIDER --service-type SERVICE_TYPE
--provider, -p | The name of the service provider configuration or the filepath of the provider configuration file |
--service-type, -s | The name of one of the service types defined in the provider configuration |
new-exec¶
Creates a new benchmark execution within an existing session.
benchsuite new-exec [-h] session tool workload
session | the id of the benchmarking session |
tool | the name of the benchmark tool to execute |
workload | the workload of the tool to execute |
run-exec¶
Runs a benchmark execution.
benchsuite run-exec [-h] [--async] id
id | the execution id |
--async | run the execution asynchronously and return immediately Default: False |
collect-exec¶
collects the outputs of an execution
benchsuite collect-exec [-h] id
id | the execution id |
multiexec¶
Execute multiple tests in a single benchmarking session
benchsuite multiexec [-h] --provider PROVIDER [--service-type SERVICE_TYPE]
tests [tests ...]
tests | one or more tests in the format <tool>[:<workload>]. If workload is omitted, all workloads defined for that tool will be executed |
--provider, -p | The name of the service provider configuration or the filepath of the provider configuration file |
--service-type, -s | The name of one of the service types defined in the provider configuration. If not specified, all service types will be used |
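The <tool>[:<workload>] test specification accepted by multiexec can be parsed as follows. This helper is purely illustrative and is not part of the benchsuite code:

```python
def parse_test_spec(spec):
    """Parse a multiexec test spec of the form '<tool>[:<workload>]'.

    Returns (tool, workload); workload is None when omitted, meaning
    that all workloads defined for that tool should be executed.
    """
    tool, sep, workload = spec.partition(':')
    return tool, (workload if sep else None)

print(parse_test_spec('ycsb-mongodb:WorkloadA'))  # ('ycsb-mongodb', 'WorkloadA')
print(parse_test_spec('filebench'))               # ('filebench', None)
```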
REST Server¶
This documentation is autogenerated from the Swagger API Specification using sphinx-swaggerdoc.
Better documentation for the REST API is available directly from the REST server:
- Launch the server
- Open http://localhost:5000/api/v1/
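Once the server is started, the API can also be exercised with any HTTP client; a hypothetical session querying two of the endpoints documented below (the default port 5000 is assumed from the URL above):

```
$ curl http://localhost:5000/api/v1/benchmarks/
$ curl http://localhost:5000/api/v1/sessions/
```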
default¶
benchmarks¶
GET /benchmarks/
Parameters
Name | Position | Description | Type
---|---|---|---
X-Fields | header | An optional fields mask | string
providers¶
GET /providers/
Parameters
Name | Position | Description | Type
---|---|---|---
X-Fields | header | An optional fields mask | string
sessions¶
GET /sessions/{session_id}/executions/
Parameters
Name | Position | Description | Type
---|---|---|---
X-Fields | header | An optional fields mask | string
session_id | path | | string
POST /sessions/{session_id}/executions/
Parameters
Name | Position | Description | Type
---|---|---|---
payload | body | | |
X-Fields | header | An optional fields mask | string
session_id | path | | string
GET /sessions/
Parameters
Name | Position | Description | Type
---|---|---|---
X-Fields | header | An optional fields mask | string
POST /sessions/
Parameters
Name | Position | Description | Type
---|---|---|---
payload | body | | |
X-Fields | header | An optional fields mask | string
GET /sessions/{session_id}
Parameters
Name | Position | Description | Type
---|---|---|---
X-Fields | header | An optional fields mask | string
session_id | path | The id of the session | string
DELETE /sessions/{session_id}
Parameters
Name | Position | Description | Type
---|---|---|---
session_id | path | The id of the session | string
executions¶
POST /executions/{exec_id}/run
Parameters
Name | Position | Description | Type
---|---|---|---
X-Fields | header | An optional fields mask | string
exec_id | path | | string
GET /executions/
Parameters
Name | Position | Description | Type
---|---|---|---
X-Fields | header | An optional fields mask | string
GET /executions/{exec_id}
Parameters
Name | Position | Description | Type
---|---|---|---
X-Fields | header | An optional fields mask | string
exec_id | path | | string
POST /executions/{exec_id}/prepare
Parameters
Name | Position | Description | Type
---|---|---|---
X-Fields | header | An optional fields mask | string
exec_id | path | | string
Docker¶
The Benchmarking Suite is also distributed as two different Docker images. They are available at https://hub.docker.com/r/benchsuite/
API Reference¶
class benchsuite.core.controller.BenchmarkingController(config_folder=None, storage_dir=None)¶
The main class to control the benchmarking.

list_sessions()¶
Lists the sessions.
Return type: Dict[str, BenchmarkingSession]
Returns: the list of sessions

class benchsuite.core.model.benchmark.Benchmark(name, workload)¶
A Benchmark.
Changelog¶
Benchmarking Suite v2.0.0¶
This is a major release version of the Benchmarking Suite that introduces several changes and improvements with respect to the Benchmarking Suite 1.x versions.
In the Core library:
- a complete refactoring of the code to improve the parameterization and modularization
- introduction of benchmarking sessions
In the StdLib library:
- for Benchmarks:
- NEW CFD Benchmark
- Updated Filebench and YCSB tools versions
- for Cloud Providers:
- NEW FIWARE FILAB connector
- Updated Amazon EC2 to work with VPCs
The Cli and REST modules are completely new and the previous implementations have been abandoned.
Development¶
This section explains the development, integration and distribution process of the Benchmarking Suite. Intended readers are developers.
Continuous Integration¶
TBD
Release Steps¶
Checklist to release the Benchmarking Suite
For each module to release:
1. increase the version number in the __init__.py file
2. create the source distribution package and upload it to PyPI (remove the -r pypitest flag to upload to the official PyPI):
python setup.py sdist upload -r pypitest
3. commit and push everything to GitHub
4. create a tag on GitHub
Docker¶
Docker containers are built automatically from Dockerfiles in the benchsuite-docker repository.
To create a new tag of the Docker images, create a tag in the Git repository that starts with "v" (e.g. "v2.0", "v1.2.3", "v1.2.3-beta1", ...).
FAQs¶
How to clear all stored benchmarking sessions?¶
Sessions are stored in the file ~/.local/share/benchmarking-suite/sessions.dat. Deleting that file removes all sessions. However, this should be an extreme solution: the correct way to delete a session is the destroy-session command.
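The "extreme" cleanup can be sketched in Python. This is an illustrative helper only, and the demo below deliberately operates on a temporary directory rather than the real data directory:

```python
import tempfile
from pathlib import Path

def clear_sessions(data_dir):
    """Delete sessions.dat under data_dir, if present.

    Returns True if a file was deleted, False otherwise.
    """
    f = Path(data_dir).expanduser() / 'sessions.dat'
    if f.exists():
        f.unlink()
        return True
    return False

# Demo on a throwaway directory (NOT ~/.local/share/benchmarking-suite)
with tempfile.TemporaryDirectory() as d:
    (Path(d) / 'sessions.dat').write_text('dummy session data')
    print(clear_sessions(d))  # True  (file existed and was removed)
    print(clear_sessions(d))  # False (nothing left to remove)
```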
How to clear all stored executions?¶
Executions are stored along with the sessions. See previous question: How to clear all stored benchmarking sessions?.
Contacts¶
Main contact person for the Benchmarking Suite is:
Person: Gabriele Giammatteo
Company: Research and Development Laboratory, Engineering Ingegneria Informatica S.p.A.
Address: via Riccardo Morandi, 32, 00148 Rome, Italy
e-mail: gabriele.giammatteo@eng.it
For bugs, feature requests and other code-related requests, use the issue tracker at: https://github.com/benchmarking-suite/benchsuite-core/issues