_images/cloud-inquisitor_logo.png

Contents

This directory has several resources that will help you implement Cloud Inquisitor and contribute to the project.

Getting started

Quick Start Guide

This tutorial will walk you through installing and configuring Cloud Inquisitor. The tool currently runs on Amazon Web Services (AWS) but it has been designed to be platform-independent.

This tutorial assumes you are familiar with AWS and that you have an AWS account. You’ll need to retrieve your Access Key ID and Secret Access Key from the web-based console. You can also use AWS STS tokens.
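If you do use STS tokens, remember that temporary credentials come as a triple: the session token must be exported alongside the key pair. A sketch for a Unix shell (the values are placeholders):

```shell
# Temporary STS credentials require all three variables; AKIA-prefixed keys
# are long-lived, while ASIA-prefixed keys are temporary and need a session token.
export AWS_ACCESS_KEY_ID=ASIAxxxxxxxxxxxxxxxx
export AWS_SECRET_ACCESS_KEY=xxxxxxxxxxxxxxxxxxxxxx
export AWS_SESSION_TOKEN=xxxxxxxxxxxxxxxxxxxxxxxx
```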

It’s highly recommended that you first use this quickstart to build Cloud Inquisitor. If you want to explore further configuration, see the additional options documentation.

Installing Cloud Inquisitor

Getting Cloud Inquisitor (cinq) up and running involves the following steps:

  1. Configure AWS credentials and variables for cinq.
  2. Use Packer to build your cinq AMI.
  3. Launch your AMI, login, and add your accounts!

Build Requirements

  • Packer > v1.1
  • AWS Credentials - API Keys or an AWS instance role with appropriate permissions.
  • An existing VPC and subnet for the packer build instance and, eventually, the cinq instance to live in (they can be the same if you wish). TL;DR: cinq will not create the VPCs/subnets for you
1. Setting Up
  • Export your AWS key credentials into the local terminal that you intend to execute packer from.

Unix-based Systems

export AWS_ACCESS_KEY_ID=AKIAxxxxxxxxxxxxxxxxx
export AWS_SECRET_ACCESS_KEY=xxxxxxxxxxxxxxxxxxxxxx

Windows

set AWS_ACCESS_KEY_ID=AKIAxxxxxxxxxxxxxxx
set AWS_SECRET_ACCESS_KEY=xxxxxxxxxxxxxxxxxx
  • Clone the Cloud Inquisitor (cinq) repo:

    git clone https://github.com/RiotGames/cloud-inquisitor
    
  • In the packer directory, copy variables/variables.json.sample to your own variables file:

    cd packer && cp variables/variables.json.sample variables/mycinqenv.json
    
  • Edit your json file and provide your parameters as per the code block below.

The example file variables/variables.json.sample provides sample variables needed to configure cinq. A full list of parameters is available in the build.json file. For detailed build information please see additional options.

NOTE: You will need to change some of these items as they are environment-specific.

  • The easiest way to get cinq up and running is to ensure you’ve properly configured all of the values in the sample file and, most importantly, that app_db_setup_local is set to True. This will install a local mysql-server on the instance itself and set up the database for you.

    {
        "ec2_vpc_id":                   "vpc-xxxxxxxx",       (This is the VPC for the packer BUILD instance)
        "ec2_subnet_id":                "subnet-xxxxxxxx",    (This is the subnet for the packer BUILD instance)
        "ec2_source_ami":               "ami-0a00ce72",       (This is an Ubuntu 16 AMI but you can use your own custom AMI ID)
        "ec2_instance_type":            "m4.large",
        "ec2_region":                   "us-west-2",
        "ec2_ssh_username":             "ubuntu",
        "ec2_security_groups":          "sg-xxxxxxxx",        (Ensure that you have SSH open from your workstation or packer build will fail)
        "ec2_public_ip_enable":         "False",              (If you don't have VPN or DirectConnect to your VPC, set this to True)
        "app_kms_account_name":         "my_account_name",    (Optional: for using KMS encrypted userdata for your DB URI)
        "app_use_user_data":            "False",              (Set to True if you want to use KMS encrypted userdata for your DB URI)
        "app_apt_upgrade":              "True",
        "app_log_level":                "INFO",
        "app_db_uri":                   "mysql://cinq:changeme@localhost:3306/cinq",  (This points to your database (See Notes))
        "app_db_user":                  "cinq",
        "app_db_pw":                    "changeme",
        "app_db_setup_local":           "True",               (Easiest way to get cinq running, set to False if you want to use external DB)
        "git_branch":                   "master"
    }
    
  • Save this file!
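One caveat before building: the parenthetical annotations in the sample above are documentation only and must not appear in your real variables file, which has to be plain JSON. A quick well-formedness check (assuming python3 is on the build machine; jq works equally well):

```shell
# Verify the packer variables file parses as JSON before starting a build.
VARS=variables/mycinqenv.json
if [ -f "$VARS" ]; then
    python3 -m json.tool "$VARS" > /dev/null && echo "variables file is valid JSON"
else
    echo "create $VARS first (see the steps above)"
fi
```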

For more advanced and secure options, please see the additional options documentation. For example:

  • If you want to use a remote database as opposed to the local database provided by cinq, check out the databases section
  • If you don’t wish to keep database credentials in flat configuration files on the instance, you may use KMS to encrypt these variables and pass them to the cinq instance via AWS userdata; see the databases section
2. Building an Image

All the files required to build the image are in the packer subdirectory. Remember to check that your AWS Credentials have been properly exported or the next command will likely hang and time out.
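A minimal pre-flight check (our own sketch, not part of cinq): warn if either credential variable is missing from the current shell before invoking packer.

```shell
# Warn early if the credentials packer needs are not exported in this shell;
# a missing key otherwise surfaces only as a hang followed by a timeout.
[ -n "$AWS_ACCESS_KEY_ID" ]     || echo "WARNING: AWS_ACCESS_KEY_ID is not set"
[ -n "$AWS_SECRET_ACCESS_KEY" ] || echo "WARNING: AWS_SECRET_ACCESS_KEY is not set"
```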

  • Execute the following command from the packer directory in the cinq repo to have packer build your custom AMI:

    packer build -only ami -var-file variables/mycinqenv.json build.json
    

Assuming your variables are correct and you have the proper AWS permissions, packer should create an AMI. If steps fail, try executing packer with the -debug flag and step through the build process to identify where it is breaking. If it succeeds, you should see output like the following. Note the AMI ID that was created, as you’ll use it in the next step.

==> ami: Waiting for the instance to stop...
==> ami: Creating the AMI: cloud-inquisitor @master 2017-11-22 18-22-37
    ami: AMI: ami-xxxxxxxx
==> ami: Waiting for AMI to become ready...
==> ami: Adding tags to AMI (ami-xxxxxxxx)...
==> ami: Tagging snapshot: snap-xxxxxxxxxx
==> ami: Creating AMI tags
    ami: Adding tag: "Accounting": "yourteam.accounting.tag"
    ami: Adding tag: "Name": "Cloud Inquisitor System Image"
    ami: Adding tag: "SourceAmi": "ami-0a00ce72"
    ami: Adding tag: "GitBranch": "master"
    ami: Adding tag: "Owner": "teamname@yourcompany.com"
==> ami: Creating snapshot tags
==> ami: Terminating the source AWS instance...
==> ami: Cleaning up any extra volumes...
==> ami: No volumes to clean up, skipping
==> ami: Deleting temporary keypair...
Build 'ami' finished.
3. Launching your AMI

Cloud Inquisitor is designed to run from a security/audit AWS account and to be able to operate on multiple AWS accounts, using STS AssumeRole. See the following diagram to understand how cinq operates and where the various IAM elements need to be configured:

_images/cinq_operation.png

To ensure this is possible, you will need to create an Instance Profile so cinq can use AssumeRole in the target accounts it is auditing. Below is a sample of the policy for the instance profile you should create:

  • Create an IAM policy (within the AWS Console; see https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html) as follows:

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "CinqInstancePolicy",
                "Effect": "Allow",
                "Action": [
                    "ses:SendEmail",
                    "ses:SendRawEmail",
                    "sts:AssumeRole",
                    "sqs:SendMessage*",
                    "sqs:DeleteMessage*",
                    "sqs:GetQueue*",
                    "sqs:ListQueues",
                    "sqs:PurgeQueue",
                    "sqs:ReceiveMessage",
                    "cloudwatch:PutMetricData",
                    "cloudwatch:GetMetricStatistics",
                    "cloudwatch:ListMetrics",
                    "ec2:DescribeTags"
                    ],
                "Resource": [
                    "*"
                ]
            }
        ]
    }
    

Sample screenshot of what you should see when creating the policy:

_images/cinq_iam_policy_create.png
  • Create an IAM Role (within the AWS Console) and bind the above policy (that you have just created) to it
  • (Optional) For each account cinq is auditing, you will need to set up a trust role in EACH target account (including the one you are running cinq from):

On the target account, create an IAM role called cinq_role and attach the AWS managed ReadOnly policy along with the following custom policy:

{
    "Statement": [
        {
            "Sid": "",
            "Effect": "Allow",
            "Resource": [
                "*"
            ],
            "Action": [
                "cloudtrail:*",
                "ec2:CreateTags",
                "ec2:CreateFlowLogs",
                "ec2:DeleteTags",
                "ec2:DeleteVolume",
                "ec2:StopInstances",
                "ec2:TerminateInstances",
                "iam:AttachRolePolicy",
                "iam:CreatePolicy*",
                "iam:CreateRole",
                "iam:DeletePolicy*",
                "iam:DeleteRolePolicy",
                "iam:DetachRolePolicy",
                "iam:PassRole",
                "iam:PutRolePolicy",
                "iam:SetDefaultPolicyVersion",
                "iam:UpdateAssumeRolePolicy",
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:DescribeLogGroups",
                "logs:DescribeLogStreams",
                "logs:PutLogEvents",
                "s3:CreateBucket",
                "s3:PutBucketPolicy",
                "sns:CreateTopic",
                "sns:SetTopicAttributes",
                "sns:Subscribe",
                "sqs:Get*",
                "sqs:List*",
                "sqs:SetQueueAttributes"
            ]
        }
      ],
    "Version": "2012-10-17"
}

Sample screenshot of what you should see when creating the role (be sure to select EC2 for the type):

_images/cinq_iam_role_create.png

Trust Policy:

Note: Ensure you have the correct source AWS Account ID (that is running CINQ) and the Instance Profile Name (not the Role name) populated here.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "",
            "Effect": "Allow",
            "Principal": {
                "AWS": [
                    "arn:aws:iam::<accountid-running-cinq>:role/<instanceprofilename>"
                ],
                "Service": "ec2.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}

You can now launch this AMI within the EC2 (Launch Instance) section of your AWS Console. When launching your AMI ensure the following:

  1. Ensure you use the Instance Profile to launch your cinq instance
  2. Your Security Groups should allow inbound 22/443 so that you can connect to both the Cloud Inquisitor UI and the instance itself for troubleshooting.
  3. ssh into the instance and grab the password from /var/log/supervisor/cinq-api-stdout---supervisor-*****.log
  4. Connect to https://<yourinstanceip> and login (username :: admin)!
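Step 3 can be wrapped in a small shell helper once you are on the instance. The function name below is ours and purely illustrative; the log path comes from the step above:

```shell
# Illustrative helper: extract the generated admin password from the
# supervisor logs. On a cinq instance, call it as:
#   cinq_admin_password /var/log/supervisor
cinq_admin_password() {
    # -h suppresses filenames so only the matching log lines are printed
    grep -h "password" "$1"/cinq-api-stdout*.log
}
```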

You can now add new accounts under the Accounts tab in the Cloud Inquisitor UI. Please check out the user guide for further details on how to use the UI, review results and update configuration.

Local Development

Cloud Inquisitor can be developed locally using Packer, Docker (https://www.docker.com), or a standalone local development environment setup script provided by us.

Local Development in Packer

Starting cinq-backend

nginx should already be configured to serve the front-end and forward backend requests to flask.

You can run cloud-inquisitor to see a list of project tasks (e.g., runserver, db reload, auth config).

1. Start the Cloud Inquisitor runserver target for development mode (run_api_server is the production target).

cloud-inquisitor runserver

2. Start the scheduler to fetch AWS data and run other scheduled tasks.

cloud-inquisitor run_scheduler

3. (Alternative) Run using supervisor.

# service supervisor start
# supervisorctl stop cinq cinq-scheduler
# supervisorctl start cinq cinq-scheduler

4. Get generated local admin password.

grep password: /opt/cinq-backend/logs/apiserver.log*
5. Browse to https://localhost.

If using a virtual machine, you may need to map 127.0.0.1:443 to the guest’s port 443. You may also need to manually browse to https://localhost/login.

SSO and AWS Key Management Service

Local development configuration defaults to built-in authentication. Further customization is required to work with features such as Single Sign-On (SSO) and the AWS Key Management Service.

Testing the build

Once you have successfully built an AMI, you should launch a new EC2 Instance from it to ensure that everything was installed correctly.

  • Launch an EC2 instance from within your AWS Console, then select the AMI you have just created.
  • Create from AMI
  • As soon as the instance is up, connect to it via ssh.
  • Check the status of the Cloud Inquisitor processes in supervisor by running sudo supervisorctl, which should return:
# supervisorctl status
cinq                   RUNNING    pid 4393, uptime 18 days, 18:55:44
cinq-scheduler         RUNNING    pid 22707, uptime 13 days, 0:23:28

If either process is not in the RUNNING state, review the following:

  • the /var/log/supervisor/cloudinquisitor* file for errors. There is most likely an issue in your JSON variables file, so you should do a direct comparison with the production /opt/cinq-backend/settings/production.py file.
  • the packer build output for errors (the variables json file can sometimes be the issue due to incorrect database details/credentials).

Once you have verified that both processes are running, you should terminate the scheduler since having multiple schedulers running at the same time can cause various issues. To do this from the shell:

  • supervisorctl stop cinq-scheduler

or from within the supervisorctl prompt

  • supervisor> stop cinq-scheduler

which results in

supervisor> status
cinq                   RUNNING    pid 1168, uptime 0:04:52
cinq-scheduler         STOPPED    Oct 13 05:32 PM

Once you have verified that everything is running as expected you can terminate the EC2 Instance.

AWS Deployment - AutoScalingGroup launch configurations

Once you have tested that the image is good, update the Launch Configuration for the ASG following the steps below.

Creating Launch Configuration
  1. Log into the AWS Console and go to the EC2 section.
  2. Click Launch Configurations in the Auto Scaling section.
  3. Locate the currently active Launch Configuration, right click it and choose Copy launch configuration. To identify the currently active Launch Configuration you can look at the details for the Auto Scaling Group itself.
  4. On the first screen, click Edit AMI and paste the AMI ID you got from the packer build (or search by ami name).
  5. Once you select the new AMI, the console will ask you to confirm that you want to proceed with the new AMI, select Yes, I want to continue with this AMI and click Next.
  6. On the instance type page, simply click Next: Configure details without modifying anything. The correct instance type will be pre-selected.
  7. On the Details page you want to modify the Name attribute of the launch configuration. The name should follow the standard cloud-inquisitor-<year>-<month>-<day>_<index>, with index being an increasing number that resets each day, so the first launch configuration for a specific day is _1. Ideally you shouldn’t have to make multiple revisions in a single day, but this lets us easily revert to a previous version if needed. Ensure that the IAM role is correctly set to cloud-inquisitorInstanceProfile.
  8. After changing the launch configuration name, click the Next buttons until you reach the Review page. Make sure all the changes you made are reflected on the Review page, then hit Create launch configuration. Once you click create, it will ask you to select the key-pair; select an appropriate key-pair and click the Create button. Our base AMI has the InfraSec SSH keys baked into it, so you should not need to worry too much about the key-pair, but it’s still a good idea to use a key-pair the entire team has access to, just in case.
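The naming convention from step 7 can be generated mechanically, for example:

```shell
# Build a launch-configuration name of the form
# cloud-inquisitor-<year>-<month>-<day>_<index>; bump INDEX for each
# revision made on the same day (it resets to 1 the next day).
INDEX=1
LC_NAME="cloud-inquisitor-$(date +%Y-%m-%d)_${INDEX}"
echo "$LC_NAME"
```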
Updating AutoScalingGroup
  1. Click on Auto Scaling Groups in the EC2 Dashboard.
  2. Locate the ASG you want to update, right click it and select Edit.
  3. From the Launch Configuration drop down box, locate the configuration you created in the previous step.
  4. Click Save.
  5. With the ASG selected, click on the Instances tab in the details pane.
  6. Click on the instance ID to be taken to the details page for the EC2 instance.
  7. Right click the EC2 Instance and select Terminate. This will trigger the ASG to launch a new instance from the updated launch configuration on the new AMI. This process takes 3-5 minutes, during which time Cloud Inquisitor will be unavailable.
  8. Go back to the ASG details page for the Cloud Inquisitor ASG, and by clicking the Refresh icon monitor that a new instance is being launched and goes into InService status. Once the new instance is in service, verify that you are able to log into the UI at https://cloudinquisitor.<your_domain>/ or whatever the relevant URL is.
Connect to new instance and upgrade DB
ssh -i <ssh key> ubuntu@<instance ip>
sudo supervisorctl stop all
cd /opt/cinq-backend/
export CINQ_SETTINGS=/opt/cinq-backend/settings/production.py
sudo -u www-data -E cloud-inquisitor db upgrade
sudo -u www-data -E cloud-inquisitor setup --headless
sudo supervisorctl start all
# You can review the logs in /var/log/inquisitor-backend/logs
# Browse to the Cloud Inquisitor UI and update the config to enable new features.

Local Development in Docker

Note: The instructions here are NOT for production. They are strictly for local development.

  • All docker-compose commands should be run from inside /path/to/cloud-inquisitor/docker.
  • After following the Initial Setup, you can bring the whole system up with docker-compose up -d.
Requirements
  • docker >= 17.12.0
  • docker-compose >= 1.18.0
  • Internet Access
  • A smile
Containers
  • base: Used as the base for the API and Schedulers. It never actually runs a service.
  • db: MySQL database mounted on host locally 127.0.0.1:3306:3306.
  • api: The API mounted on host locally 127.0.0.1:5000:5000.
  • frontend: The frontend mounted on host locally 127.0.0.1:3000:3000
  • scheduler: The standalone scheduler. No ports exposed.
  • nginx: Acts as a proxy into the system mounted on host locally 127.0.0.1:8080:80 and 127.0.0.1:8443:443
Requirements
  • Your AWS user has trust permission for the Cloud Inquisitor InstanceProfile. Instructions for setting this up are in the Quick Start guide.
  • Copy /path/to/cloud-inquisitor/docker/files/dev-backend-config.json.sample to /path/to/cloud-inquisitor/docker/files/dev-backend-config.json. All fully-uppercase placeholder values need to be changed.
AWS API Key Configuration

There are two ways to get AWS API keys into the container.

  1. You can set the access_key and secret_key in dev-backend-config.json. If using STS tokens, you need to provide the session_token as well.
  2. Set access_key, secret_key, and session_token to null in dev-backend-config.json. Then set environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY. If using STS tokens, you need to also set AWS_SESSION_TOKEN.
Initial Setup
  1. Build all images:
docker-compose build
  2. Start db:
docker-compose up -d db
  3. Set up the database for Cloud Inquisitor and start the API server (the db server takes a few moments to start the first time you turn it on; this command will fail if the db is not ready):
docker-compose run base bash -c "source /env/bin/activate && cloud-inquisitor db upgrade"
docker-compose up -d api
  4. Retrieve your admin password:
docker-compose logs api | grep admin
  5. Start the frontend:
docker-compose up -d frontend
  6. Start nginx:
docker-compose up -d nginx

The frontend will now be available at https://localhost:8443.

  7. After starting nginx, log in with the admin credentials and add your AWS account through the UI:
Accounts -> Button on the bottom right corner -> "+" button -> Fill in form

After adding the account, configure the IAM Role you will use:

Config -> role_name
  8. Start the standalone scheduler:
docker-compose up -d scheduler
  9. Log out of the admin account and log back in to see your results
Making Changes to the Cloud-Inquisitor core

Do not make changes to the code running on the container; container storage is not persistent when you do not mount your local filesystem. Instead, make your changes on your local file system, then run:

docker-compose restart api
docker-compose restart scheduler

As part of their startup, the API and Scheduler reinstall cloud-inquisitor, which will propagate your changes.

Making Changes to the frontend

Do not make changes to the code running on the container; container storage is not persistent when you do not mount your local filesystem. Uncomment the volumes and the path for mounting the frontend. The frontend setup in your docker-compose.yml should look similar to:

frontend:
  build:
    context: ./../
    dockerfile: ./docker/frontend-Dockerfile
  ports:
    - "127.0.0.1:3000:3000"
  volumes:
    - "./relative/path/to/cinq-frontend:/frontend"
   # Change the above path to cinq-frontend to fit your directory structure

This will mount your local frontend onto the container. The container automatically looks for changes to the frontend code and will recompile the front end. This takes a few seconds since gulp is rebuilding the entire frontend for us.

Making Changes to plugins

Do not make changes to the code running on the container; container storage is not persistent when you do not mount your local filesystem. Mount your plugins at /plugins/plugin-name inside the container. The setup in your docker-compose.yml should look similar to:

api:
  image: cinq_base
  ports:
    - "127.0.0.1:5000:5000"
  environment:
    - AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
    - AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
    - AWS_SESSION_TOKEN=${AWS_SESSION_TOKEN}
  volumes:
    - "./files/dev-backend-config.json:/usr/local/etc/cloud-inquisitor/config.json"
    - "./files/logging.json:/usr/local/etc/cloud-inquisitor/logging.json"
    - "./../:/cloud-inquisitor"
    - "./../../cinq-auditor-iam:/plugins/cinq-auditor-iam"
    - "./../../cinq-auditor-example2:/plugins/cinq-auditor-example2"
    # Change the above path to the plugin to fit your directory structure

  command: >
    bash -c " source /env/bin/activate;

    cd /cloud-inquisitor/backend
    && python setup.py clean --all install > /dev/null;

    if [ -d /plugins ]; then
       cd /plugins;
       for plugin in `ls /plugins`; do
           (cd $$plugin && python3 setup.py clean --all install)
       done;
    fi

    && cloud-inquisitor runserver -h 0.0.0.0 -p 5000;"
  depends_on:
    - base
    - db

This will mount your local plugin onto the container. The container automatically looks for plugins when it starts and will install them into the virtualenv. This takes a few seconds.

The easiest way to propagate the changes is to just restart the container:

docker-compose restart api
docker-compose restart scheduler
Limitations
  • All communication between containers is HTTP
  • nginx uses a self-signed cert
  • Do NOT use this in production. We have not hardened the containers. Some processes may run as root.
Tips
  • All docker-compose commands should be run from inside /path/to/cloud-inquisitor/docker.
  • After following the Initial Setup, you can bring the whole system up with docker-compose up -d.
  • By default, the database persists in the volume located at /path/to/cloud-inquisitor/docker/database
  • To stop all services run docker-compose down
  • To stop an individual service run docker-compose kill <db|api|scheduler|frontend|nginx>
  • To view logs run docker-compose logs
    • You can view individual logs by running docker-compose logs <db|api|scheduler|frontend|nginx>
    • You can follow the logs by adding the -f flag
  • Don’t forget to save your admin password
  • Changing the docker-compose.yml command requires you to kill the container and bring it back up

Standalone Local Development Environment Setup Script

Note:

  • The instructions here are NOT for production. They are strictly for local development.
  • We strongly recommend that you set this up in a fresh VM, as it might change your system configuration and break things.
  • If you already have a running MySQL instance, you need to remove the root password or the setup will fail.
Introduction

This script is written for people who want to set up the environment in a simple and straightforward way.

Requirements
  • Ubuntu 16.04 or higher (We tested this script on Ubuntu 16.04.4 x64 and Ubuntu 18.04 x64)
  • Internet Access
  • Be able to sudo
How to use this script
  1. Clone Cloud Inquisitor (https://github.com/RiotGames/cloud-inquisitor.git) to the host you want to use as the dev instance
  2. Go to the directory and run sudo make setup_localdev. Do not run this as the root user.
  3. Wait until the script finishes
  4. Set up your AWS credentials (optional; see the section below)
  5. Run /opt/cinq/cinq-venv/bin/cloud-inquisitor runserver and save the admin credentials from the output. You will need them to log in to the user interface
  6. You now have a working local dev instance for Cinq development. The project is saved under /opt/cinq, with its virtualenv located at /opt/cinq/cinq-venv
Set up your AWS credentials

There are two ways to set up your AWS credentials:

Option 1: Open ~/.cinq/config.json and fill in the fields below. Note that if you are not using temporary credentials, the session_token row needs to be deleted:

{
    ...

    "aws_api": {
        "instance_role_arn": "FILL_INSTANCE_ROLE_ARN",
        "access_key": "YOUR_AWS_ACCESS_KEY",
        "secret_key": "YOUR_AWS_SECRET_ACCESS_KEY",
        "session_token": "YOUR_AWS_SESSION_TOKEN"
    },

    ...
}

Option 2: Same as Option 1, but don’t fill in access_key, secret_key, and session_token. Instead, put your credentials in one of the locations searched by the boto3 library (http://boto3.readthedocs.io/en/latest/guide/configuration.html#configuring-credentials).
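For Option 2, the most common location boto3 searches is the shared credentials file at ~/.aws/credentials. A sketch (the values are placeholders; include aws_session_token only when using temporary credentials):

```ini
[default]
aws_access_key_id = YOUR_AWS_ACCESS_KEY
aws_secret_access_key = YOUR_AWS_SECRET_ACCESS_KEY
aws_session_token = YOUR_AWS_SESSION_TOKEN
```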

Notes

Below are things you might need to pay attention to:

  • You probably need to set the working directory to /opt/cinq if you plan to use an IDE to do the development
  • If you’d like to develop other plugins, you need to clone them from GitHub and install them as well (go to where setup.py is located, then run /opt/cinq/cinq-venv/bin/pip3 install -e .)
  • /opt/cinq/cinq-venv/bin/cloud-inquisitor scheduler will run the scheduler for you

User Guide

This document is a user guide describing how to use the Cloud Inquisitor UI.

Dashboard

By default, the front-end dashboard shows:

  • EC2 Instances that are running or stopped and which instances have a public IP.
  • Percentage of required tags compliance per account.

Below is a sample screenshot showing what the dashboard looks like:

_images/cinq_dashboard.png

Browse

On the left-hand side of the UI, you are able to directly examine raw data:

  • EC2 Instances - shows all the EC2 Instance data that Cloud Inquisitor possesses, which should represent all EC2 instances in use in your AWS infrastructure
  • EBS Volumes - shows all the EBS Volume data that Cloud Inquisitor possesses, which should represent all EBS volumes in use in your AWS infrastructure
  • DNS - shows all the DNS data that Cloud Inquisitor possesses (shown below, first screenshot)
  • Search - gives you the ability to search for instances across the Cloud Inquisitor database. The search page has built-in help, as shown below (second screenshot)
_images/cinq_dns_collector.png _images/cinq_search.png

Administration

When logged in as a user with the Admin role, you will see an extra list of sections in the side-menu:

  • Accounts
  • Config
  • Users
  • Roles
  • Emails
  • Audit Log
  • Logs
Accounts

In this section you can review the current accounts that Cloud Inquisitor is auditing and modify accordingly. For example, to add a new account, click the floating menu button in the bottom right hand side of the window and select the “+” as shown below:

_images/cinq_account_add.png
Config

In this section you can modify the configuration of both the core platform, as well as all the plugins you have installed. Some plugins may require extensive configuration before you will be able to use them, while others will have usable defaults and not require much configuration.

Below is a list of the configuration options for the core system

Default
  • auth_system: Controls the currently enabled authentication system. Default: Local Authentication
  • ignored_aws_regions_regexp: Regular expression of AWS region names to NOT include in the list of regions to audit. Default: (^cn-|GLOBAL|-gov|eu-north-1)
  • jwt_key_file_path: Path to the SSL certificate used to sign JWT tokens. Default: ssl/private.key
  • role_name: Name of the AWS Role to assume in remote accounts. Default: cinq_role
  • scheduler: Name of the scheduler system to use. Default: StandaloneScheduler
Logging
  • enable_syslog_forwarding: Also send application logs to a syslog server. Default: Disabled
  • log_keep_days: Number of days to keep logs in the database. Default: 31
  • log_level: Minimum severity of logs to store. Default: INFO
  • remote_syslog_server_addr: Hostname or IP address of the syslog server. Default: 127.0.0.1
  • remote_syslog_server_port: Port to send syslog data to. Default: 514
API
  • host: Address for the API to listen on. Note this should be kept as localhost / 127.0.0.1 unless the API server is running on a separate machine from nginx. Default: 127.0.0.1
  • port: Port to run the API backend on. Default: 5000
  • workers: Number of HTTP workers for gunicorn to run. Default: 6

Project Overview

Backend

This project provides two of the three pieces needed for the Cloud Inquisitor system, namely the API backend service and the scheduler process responsible for fetching and auditing accounts. The code is built to be completely modular using pkg_resources entry points for loading modules as needed. This allows you to easily build third-party modules without updating the original codebase.

API Server

The API server provides a RESTful interface for the frontend web client.

Authentication

The backend service uses JWT token-based authentication, requiring the client to send an Authorization HTTP header with each request. Currently, the only supported method of federated authentication is the OneLogin-based SAML workflow.

There is also the option to disable SAML-based authentication, in which case no authentication is required and all users of the system have administrative privileges. This mode should only be used for local development. For testing SAML-based authentication, we have a OneLogin application configured that redirects to http://localhost-based URLs; this is the preferred method for local development to ensure the SAML code is properly exercised.

More information can be found at:

Auditors

Auditors are plugins which will alert and potentially take action based on data collected.

Cloudtrail

The CloudTrail auditor will ensure that CloudTrail has been enabled for all accounts configured in the system. The system will automatically create an S3 bucket and SNS topics for log delivery notifications. However, you must ensure that the proper access has been granted to the accounts attempting to log to a remote S3 bucket. SNS subscriptions will need to be confirmed through an external tool such as the CloudTrail app.

More information such as configuration options here.

Domain Hijacking

The domain hijacking auditor will attempt to identify misconfigured DNS entries that would potentially result in third parties being able to take over legitimate DNS names and serve malicious content from a real location.

This auditor will fetch information from AWS Route53, CloudFlare, and our internal F5 based DNS servers and validate the records found against our known owned S3 buckets, Elastic BeanStalks, and CloudFront CDN distributions.

More information such as configuration options here.

IAM

The IAM roles and policy auditor will audit, and if enabled, manage the default Riot IAM policies and roles.

More information such as configuration options here.

Required Tags

Cloud Inquisitor audits EC2 instances and S3 buckets for tagging compliance, and shuts down or terminates resources that are not brought into compliance within a pre-defined amount of time.

More information such as configuration options here.

Note: This is currently being extended to include all taggable AWS objects.

Default Schedule for Resources that can be Shut Down

  Age       Action
  0 days    Alert the AWS account owner via email
  21 days   Alert the AWS account owner, warning that shutdown of instance(s) will happen in one week
  27 days   Alert the AWS account owner, warning that shutdown of instance(s) will happen in one day
  28 days   Shut down instance(s) and notify the AWS account owner
  112 days  Terminate the instance(s) and notify the AWS account owner
Default Schedule for Resources that can only be Terminated (S3, ECS, RDS, …)

  Age      Action
  0 days   Alert the AWS account owner via email
  7 days   Alert the AWS account owner, warning that termination of resource(s) will happen in two weeks
  14 days  Alert the AWS account owner, warning that termination of resource(s) will happen in one week
  20 days  Send a final notice to the AWS account owner, one day prior to removal
  21 days  Delete* the resource and notify the AWS account owner

* For some AWS resources that may take a long time to delete (such as S3 buckets with terabytes of data) a lifecycle policy will be applied to delete the data in the bucket prior to actually deleting the bucket.
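The schedules above amount to a simple threshold lookup, sketched below; the action strings are illustrative, not the exact notifications cinq sends:

```python
# Hedged sketch of the shutdown schedule table as a lookup.
# Thresholds mirror the "can be Shut Down" schedule above; action
# labels are placeholders, not cinq's real notification text.
SHUTDOWN_SCHEDULE = [
    (112, "terminate"),
    (28, "shutdown"),
    (27, "warn: shutdown in one day"),
    (21, "warn: shutdown in one week"),
    (0, "alert owner"),
]

def action_for_age(age_days, schedule=SHUTDOWN_SCHEDULE):
    """Return the most severe action whose age threshold has been reached."""
    for threshold, action in schedule:
        if age_days >= threshold:
            return action
    return None
```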

S3 Bucket Auditor

S3 buckets have a few quirks compared to EC2 instances that must be handled differently:

  • They cannot be shut down, only deleted
  • They cannot be deleted if any objects or versions of objects exist in the bucket
  • API calls to delete objects or versions in the bucket block client-side, which makes deleting a large number of objects from a bucket (100GB+) unreliable

Because of this, we have decided to delete contents of a bucket by using lifecycle policies. The steps the auditor takes when deleting buckets are:

  1. Check to see if the bucket has any objects/versions. If it’s empty, delete the bucket.
  2. If the bucket is not empty, iterate through the lifecycle policies to see if our policy is applied.
  3. If the lifecycle policy does not exist, apply the lifecycle policy to delete data.
  4. If a bucket policy to prevent s3:PutObject and s3:GetObject does not exist on the bucket, apply that policy.
  5. If a lifecycle policy to delete version markers does not exist, apply the policy to delete version markers.

This covers a few different edge cases; most notably, it allows the auditor to run continuously against the same bucket without re-applying the same policies, even if the bucket contains terabytes of data. Applying bucket policies to deny s3:PutObject and s3:GetObject prevents objects from being added to the bucket after the lifecycle policy has been applied, which would otherwise lead to the bucket never being deleted.

The default expiration time of objects for the lifecycle policy is three days. If the bucket is being used as a static website or as part of any critical service, this gives the service owners immediate visibility into the action that will soon be taken (bucket deletion) without permanently deleting the content. Although at this point the bucket is non-compliant and should be deleted, being able to reverse a live service issue caused by the tool is more important than immediately and irrecoverably deleting data.
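A cleanup lifecycle configuration along these lines might look as follows; the rule ID and exact document structure are assumptions for illustration, with only the three-day expiration taken from the description above:

```python
# Illustrative S3 lifecycle configuration for emptying a bucket.
# Rule ID "cinq-cleanup" is hypothetical; only the 3-day expiry is
# taken from the text above.
def make_cleanup_lifecycle(rule_id="cinq-cleanup", days=3):
    return {
        "Rules": [{
            "ID": rule_id,
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # apply to every object in the bucket
            "Expiration": {"Days": days},  # expire current object versions
            "NoncurrentVersionExpiration": {"NoncurrentDays": days},
            "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": days},
        }]
    }
```

A document like this could be applied with boto3's put_bucket_lifecycle_configuration call.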

*If a bucket is tagged properly after the lifecycle policy has already been applied and the bucket has been marked for deletion, the auditor will not remove the policies on the bucket. The bucket policy and lifecycle policy must be removed manually.*

At this point in time, the policy itself is not checked to ensure it matches the one we apply. This allows a user to create a policy whose name matches ours, which would prevent their bucket from being deleted. For now we treat this as an edge case similar to enabling EC2 instance protection, but plan to fix it in the future.

Collectors

Collectors are plugins whose only job is to fetch information from the AWS API and update the local database state.

AWS

The base AWS collector iterates over every region of each configured AWS account, collecting resource information for the entire account.

A more detailed description is available here.

DNS

The DNS collector gathers and collates all related DNS information, which the relevant DNS auditors can then analyse for potential security issues.

A more detailed description is available here.

Frontend

This project provides the web frontend for the Cloud Inquisitor system, and is built using AngularJS and angularjs-material (Google Material Design) elements, with a few jQuery based libraries as well.

Building

The code is built using npm and gulp.

To get started building a working frontend, you need to first ensure you have NodeJS and npm installed and then run the following commands:

    cd $Cloud-Inquisitor-REPO/frontend
    npm install
    node_modules/.bin/gulp

This will result in production-ready (minified) HTML and JavaScript code, which can be found in the dist folder.

Additional Options

Databases

cinq is currently designed to run with MySQL Version 5.7.17. We recommend you stick to this version.

If you do not wish to use the local MySQL database that the cinq install provides, set the following in your variables file before you run the packer build. This disables the install and setup of the local DB and points cinq at the database you’d like to use:

"app_setup_local_db":     "False"
"app_db_uri":             "mysql://<user>:<pass>@<hostname>:3306/<yourdb>"

Once the AMI is created and you’ve logged in, you’ll need to initialize the database. To do so, execute the following commands:

# source /path/to/pyenv/bin/activate
# export INQUISITOR_SETTINGS=/path/to/cinq-backend/settings/production.py
# cd /path/to/cinq-backend
# cloud-inquisitor db upgrade
# python3 manage.py setup --headless

You may receive some warnings, but these commands should succeed. Then restart supervisor and you should be good to go:

# supervisorctl restart all

You can look in /path/to/cinq-backend/logs/ to see if you have any configuration errors.

KMS and UserData

You may not wish to keep database credentials in flat configuration files on the instance. Instead, you can KMS-encrypt these variables and pass them to the cinq instance via AWS user data. In your variables file, use the following:

"app_use_user_data":                       "True",
"app_kms_account_name":                    "aws-account-name",

When you launch the AMI that packer created, you can encrypt the APP_DB_URI setting:

$ aws kms encrypt --key-id arn:aws:kms:us-west-2:<account_id>:key/xxxxxxxx-74f8-4c0c-be86-a6173f2eeef9 --plaintext APP_DB_URI="mysql://<user>:<pass>@<hostname>:3306/<yourdb>"

The command returns a response with a CiphertextBlob field whose value you can paste into the UserData field when you launch the AMI.

To verify your cinq instance is using KMS, your production settings in /path/to/cinq-backend/settings/production.py should contain:

USE_USER_DATA = True
KMS_ACCOUNT_NAME = '<account_name>'
USER_DATA_URL = 'http://169.254.169.254/latest/user-data'
Authentication Systems

Cinq supports a built-in authentication system (the default), as well as federated authentication with the OneLogin IdP via SAML. Other IdPs may work, but this has not been tested.

Edit your /path/to/cinq-backend/settings/settings.json file and provide the required values:

# source /path/to/pyvenv/bin/activate
# cd /path/to/cinq-backend
# cloud-inquisitor auth -a OneLoginSAML
  cloud_inquisitor.plugins.commands.auth Disabled Local Authentication
  cloud_inquisitor.plugins.commands.auth Enabled OneLoginSAML

Verify that your configuration is correct and confirm which auth system is active:

# cloud-inquisitor auth -l

cloud_inquisitor.plugins.commands.auth --- List of available auth systems ---
cloud_inquisitor.plugins.commands.auth Local Authentication
cloud_inquisitor.plugins.commands.auth OneLoginSAML (active)
cloud_inquisitor.plugins.commands.auth --- End list of Auth Systems ---

To switch back to local auth, simply execute:

# cloud-inquisitor auth -a "Local Authentication"
Additional Customization

In the packer directory, the build.json contains other parameters that you can modify at your discretion.

Packer Settings
  • aws_access_key - Access Key ID to use. Default: AWS_ACCESS_KEY_ID environment variable
  • aws_secret_key - Secret Key ID to use. Default: AWS_SECRET_ACCESS_KEY environment variable
  • ec2_vpc_id - ID of the VPC to launch the build instance into or default VPC if left blank. Default: vpc-4a254c2f
  • ec2_subnet_id - ID of the subnet to launch the build instance into or default subnet if left blank. Default: subnet-e7307482
  • ec2_source_ami - AMI to use as base image. Default: ami-34d32354
  • ec2_region - EC2 Region to build AMI in. Default: us-west-2
  • ec2_ssh_username - Username to SSH as for AMI builds. Default: ubuntu
  • ec2_security_groups - Comma-separated list of EC2 Security Groups to apply to the instance on launch. Default: sg-0c0aa368,sg-de1db4ba
  • ec2_instance_profile - Name of an IAM Instance profile to launch the instance with. Default: CinqInstanceProfile
Installer Settings
  • git_branch - Specify the branch to build Default: master
  • tmp_base - Base folder for temporary files during installation, will be created if missing. Must be writable by the default ssh user. Default: /tmp/packer
  • install_base - Base root folder to install to. Default: /opt
  • pyenv_dir - Subdirectory of install_base for the Python virtualenv. Default: pyenv
  • frontend_dir - Subdirectory of install_base for frontend code. Default: cinq-frontend
  • backend_dir - Subdirectory of install_base for backend code. Default: cinq-backend
  • app_apt_upgrade - Run apt-get upgrade as part of the build process. Default: True
Common Settings
  • app_debug - Run Flask in debug mode. Default: False
Frontend Settings
  • app_frontend_api_path - Absolute path for API location. Default: /api/v1
  • app_frontend_login_url - Absolute path for SAML Login redirect URL. Default: /saml/login
Backend Settings
  • app_db_uri - IMPORTANT: Database connection URI. Example: mysql://cinq:changeme@localhost:3306/cinq
  • app_db_setup_local - Tells the builder to install and configure a local MySQL database. Default: null
  • app_db_user - MySQL username. Default: null
  • app_db_pw - MySQL password. Default: null
  • app_api_host - Hostname of the API backend. Default: 127.0.0.1
  • app_api_port - Port of the API backend. Default: 5000
  • app_api_workers - Number of worker threads for API backend. Default: 10
  • app_ssl_enabled - Enable SSL on frontend and backend. Default: True
  • app_ssl_cert_data - Base64 encoded SSL public key data, used if not using self-signed certificates. Default: None
  • app_ssl_key_data - Base64 encoded SSL private key data, used if not using self-signed certificates. Default: None
  • app_use_user_data - Tells cinq to read variables from encrypted user-data
  • app_kms_account_name - Provides an account name for kms.
  • app_user_data_url - URL where the user data is accessed. Default: http://169.254.169.254/latest/user-data
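As an example of producing the base64 values for app_ssl_cert_data and app_ssl_key_data, a PEM file can be encoded like this; the file path is a placeholder:

```python
# Base64-encode a PEM file for the app_ssl_cert_data / app_ssl_key_data
# variables. The path passed in is a placeholder for your own cert/key.
import base64

def b64_file(path):
    """Return the base64 text of a file, suitable for the variables JSON."""
    with open(path, "rb") as fh:
        return base64.b64encode(fh.read()).decode("ascii")
```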
FYI

The vast majority of these settings should be left at their default values unless you feel you must change them to get cinq running.

Contributing Guidelines

We would love contributions to Cloud Inquisitor; this document will help you get started quickly.

Submitting changes

  • Code should be accompanied by tests and documentation.
  • Code should follow the existing style. We try to follow PEP8.
  • Please write clear and useful commit messages.
  • We would prefer one branch per feature or fix; please keep branches small and on topic.
  • Send a pull request to the develop branch. See the GitHub pull request docs for further information.
