Welcome to KQueen documentation!¶
Configuration¶
Sample configuration files are located in the config/ directory. The default
configuration file is config/dev.py.
To define a different configuration file, set the KQUEEN_CONFIG_FILE
environment variable. To override the values defined in the configuration file,
set the environment variable matching the KQUEEN_<config_parameter_name>
pattern.
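For example, to load a custom configuration file and override a single value from it, set the corresponding environment variables before starting the server (the file path and hostname below are illustrative, not defaults shipped with KQueen):

```shell
# Load config/prod.py instead of the default config/dev.py
# (the file name here is an illustrative assumption).
export KQUEEN_CONFIG_FILE=config/prod.py
# Override the ETCD_HOST value defined in the configuration file.
export KQUEEN_ETCD_HOST=etcd.example.com
echo "$KQUEEN_CONFIG_FILE $KQUEEN_ETCD_HOST"
```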
Name | Default | Description |
---|---|---|
CONFIG_FILE | config/dev.py | Configuration file to load during startup |
DEBUG | False | Debug mode for flask and all loggers |
SECRET_KEY | None | This key is used for server-side encryption (cookies, secret database fields) and must be at least 16 characters in length. |
ETCD_HOST | localhost | Hostname address of the etcd server |
ETCD_PORT | 4001 | Port for etcd server |
ETCD_PREFIX | /kqueen | Prefix URL for objects in etcd |
JWT_DEFAULT_REALM | Login Required | The default realm |
JWT_AUTH_URL_RULE | /api/v1/auth | Authentication endpoint returning token. |
JWT_EXPIRATION_DELTA | timedelta(hours=1) | JWT token lifetime. |
JENKINS_ANCHOR_PARAMETER | STACK_NAME | This parameter is used to match Jenkins builds with clusters. |
JENKINS_API_URL | None | REST API for Jenkins |
JENKINS_PASSWORD | None | Optional. The default Jenkins password. It can be overridden by another value specified in the request. |
JENKINS_PROVISION_JOB_CTX | {} | Dictionary for predefined Jenkins job context |
JENKINS_PROVISION_JOB_NAME | deploy-aws-k8s_ha_calico_sm | Name of the Jenkins job used to deploy a cluster. |
JENKINS_USERNAME | None | Optional. The default Jenkins username. It can be overridden by another value specified in the request. |
CLUSTER_ERROR_STATE | Error | Caption for a cluster in error state. |
CLUSTER_OK_STATE | OK | Caption for a cluster in OK state. |
CLUSTER_PROVISIONING_STATE | Deploying | Caption for a cluster in provisioning state. |
CLUSTER_DEPROVISIONING_STATE | Destroying | Caption for a cluster in deprovisioning (deleting) state. |
CLUSTER_UNKNOWN_STATE | Unknown | Caption for a cluster with unknown state. |
CLUSTER_STATE_ON_LIST | True | Update the state of clusters on cluster list. This can be disabled for organizations with a large number of clusters in the deploy state. |
PROVISIONER_ERROR_STATE | Error | Caption for errored provisioner. |
PROVISIONER_OK_STATE | OK | Caption for working provisioner. |
PROVISIONER_UNKNOWN_STATE | Not Reachable | Caption for unknown provisioner. |
PROVISIONER_ENGINE_WHITELIST | None | Enable only engines in the list. |
PROMETHEUS_WHITELIST | 127.0.0.0/8 | Addresses allowed to access metrics endpoint without token |
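The PROMETHEUS_WHITELIST value is a CIDR range; the default 127.0.0.0/8 admits any address whose first octet is 127. A minimal shell sketch of that matching rule (the helper name is made up for illustration):

```shell
# Return success when the address falls inside 127.0.0.0/8,
# i.e. when the first octet is 127.
in_default_whitelist() {
  case "$1" in
    127.*) return 0 ;;
    *)     return 1 ;;
  esac
}

in_default_whitelist 127.0.0.1 && echo "allowed"
in_default_whitelist 10.0.0.1  || echo "denied"
```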
Default user access configuration¶
Default CRUD (Create, Read, Update, Delete) model for KQueen user roles.
Superadmin view¶
- CRUD all organizations.
- CRUD/Manage all members.
- CRUD/Manage all member roles.
- Full admin rights.
Admin view¶
- Invite/remove members in own organization (Email/LDAP).
- CRD all provisioners.
- CRUD all clusters.
- Collect Prometheus metrics.
- Full user rights.
User view¶
- Login.
- R organization members.
- R provisioners.
- CRUD own clusters.
Before you provision a Kubernetes cluster, you may need to deploy and configure the following external services:
- NGINX proxy server, to configure SSL support, domain naming, and to define certificates.
- Mail server, to enable member management, for example, to enable a user to invite other users by email. You can use the predefined KQueen mail service or run a new one.
- LDAP server, to enable member management through LDAP, for example, to enable a user to invite other users from a defined LDAP server. You can use the predefined KQueen LDAP service or run a new one.
- Prometheus server, to extend the monitoring in KQueen. You can either use the predefined Prometheus server, defined in the docker-compose.production.yml file, or use an external one. For an external one, you must include the rules from kqueen/prod/prometheus in the existing Prometheus service.
Set up the NGINX server¶
- Open the .env file for editing.
- Configure the variables in the Nginx section:

  # Domain name for the service. Must match the name in the generated SSL certificate.
  NGINX_VHOSTNAME=demo.kqueen.net
  # Directory path for certificates in the container. The final path is
  # $NGINX_SSL_CERTIFICATE_DIR/$NGINX_VHOSTNAME.
  NGINX_SSL_CERTIFICATE_DIR=/mnt/letsencrypt
- Open the docker-compose.production.yml file for editing.
- Verify the proxy service configuration. Pay attention to the following variables:
  - VHOSTNAME, the domain name for the KQueen service. This domain name must be the same as the domain name in the generated certificates. By default, NGINX_VHOSTNAME from the .env file is used.
  - SSL_CERTIFICATE_DIR, the mapped directory for forwarding certificates into the Docker container. By default, $NGINX_SSL_CERTIFICATE_DIR/$NGINX_VHOSTNAME from the .env file is used.
  - SSL_CERTIFICATE_PATH, the path to the combined certificate and key. The default is $SSL_CERTIFICATE_DIR/fullchain.cer.
  - SSL_CERTIFICATE_KEY_PATH, the path to the certificate key. The default is $SSL_CERTIFICATE_DIR/$VHOSTNAME.key.
  - SSL_TRUSTED_CERTIFICATE_PATH, the path to the trusted CA certificate. The default is $SSL_CERTIFICATE_DIR/ca.cer.
Note
Verify that the local certificates have the same name as the ones defined in the variables.
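With the example values from the .env file, the certificate paths inside the container resolve as follows (a sketch of the path composition only, not a command run during deployment):

```shell
# Example values from the .env file.
NGINX_VHOSTNAME=demo.kqueen.net
NGINX_SSL_CERTIFICATE_DIR=/mnt/letsencrypt

# SSL_CERTIFICATE_DIR is composed from the two values above.
SSL_CERTIFICATE_DIR="$NGINX_SSL_CERTIFICATE_DIR/$NGINX_VHOSTNAME"

echo "$SSL_CERTIFICATE_DIR/fullchain.cer"         # SSL_CERTIFICATE_PATH
echo "$SSL_CERTIFICATE_DIR/$NGINX_VHOSTNAME.key"  # SSL_CERTIFICATE_KEY_PATH
echo "$SSL_CERTIFICATE_DIR/ca.cer"                # SSL_TRUSTED_CERTIFICATE_PATH
```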
- Map the volumes with certificates. The destination path must be the same as SSL_CERTIFICATE_DIR. For example:

  volumes:
    - /your/local/cert/storage/kqueen/certs/:${NGINX_SSL_CERTIFICATE_DIR}/${NGINX_VHOSTNAME}:ro
- Build the proxy service image:
docker-compose -f docker-compose.production.yml build --no-cache
- Rerun the production services:
docker-compose -f docker-compose.yml -f docker-compose.production.yml up --force-recreate
Set up the mail server¶
- Open the docker-compose.production.yml file for editing.
- Define the mail service. For example:

  mail:
    image: modularitycontainers/postfix
    volumes:
      - /var/spool/postfix:/var/spool/postfix
      - /var/mail:/var/spool/mail
    environment:
      MYHOSTNAME: 'mail'
- Configure the following variables in the KQueen UI service section:

  KQUEENUI_MAIL_SERVER: mail
  KQUEENUI_MAIL_PORT: 10025
Once done, you should be able to invite members through email notifications. The email contains an activation link for the KQueen service that allows the invited user to set a password. Users with superadmin rights can also manage member roles.
Note
Volume mapping for mail containers is an additional feature that stores the mailing history and forwards additional postfix mail configuration. You must configure it properly on the local machine. Alternatively, run the mail server without volume mapping.
Set up the LDAP server¶
Note
If you are using an external LDAP server, skip steps 1 and 2.
1. Open the docker-compose.auth.yml file for editing.
2. Define the LDAP service. For example:

   services:
     ldap:
       image: osixia/openldap
       command:
         - --loglevel
         - debug
       ports:
         - 127.0.0.1:389:389
         - 127.0.0.1:636:636
       environment:
         LDAP_ADMIN_PASSWORD: 'heslo123'
     phpldapadmin:
       image: osixia/phpldapadmin:latest
       container_name: phpldapadmin
       environment:
         PHPLDAPADMIN_LDAP_HOSTS: 'ldap'
         PHPLDAPADMIN_HTTPS: 'false'
       ports:
         - 127.0.0.1:8081:80
       depends_on:
         - ldap

3. Open the docker-compose.production.yml file for editing.
4. Configure the following variables in the KQueen UI service section:

   KQUEEN_LDAP_URI: 'ldap://ldap'
   KQUEEN_LDAP_DN: 'cn=admin,dc=example,dc=org'
   KQUEEN_LDAP_PASSWORD: 'secret'
   KQUEEN_AUTH_MODULES: 'local,ldap'
   KQUEENUI_LDAP_AUTH_NOTIFY: 'False'
Note
- KQUEEN_LDAP_DN and KQUEEN_LDAP_PASSWORD are user credentials with read-only access for LDAP search.
- KQUEEN_AUTH_MODULES is the list of enabled authentication methods.
- Set KQUEENUI_LDAP_AUTH_NOTIFY to True to enable additional email notifications for LDAP users.
Once done, you should be able to invite members through LDAP. Define cn as the username for a new member.
Note
The dc for invited users is predefined in KQUEEN_LDAP_DN.
Users with superadmin rights can also manage member roles.
Set up metrics collecting¶
- Open the docker-compose.production.yml file for editing.
- Configure the Prometheus service IP address, port, and volumes. For example:

  prometheus:
    image: prom/prometheus
    restart: always
    ports:
      - 127.0.0.1:9090:9090
    volumes:
      - ./prod/prometheus/:/etc/prometheus/:Z
      - /mnt/storage/kqueen/prometheus/:/prometheus/
    links:
      - api
      - etcd
- Define the Prometheus scraper IP address in the KQueen API service section:
KQUEEN_PROMETHEUS_WHITELIST: '172.16.238.0/24'
The metrics can be obtained using the KQueen API or Prometheus API:
KQueen API:

TOKEN=$(curl -s -H "Content-Type: application/json" --data '{"username":"admin","password":"default"}' -X POST <<kqueen_api_host>>:5000/api/v1/auth | jq -r '.access_token')
echo $TOKEN
curl -H "Authorization: Bearer $TOKEN" <<kqueen_api_host>>:5000/metrics/
Prometheus API:
- Add the scraper IP address to the PROMETHEUS_WHITELIST configuration.
- Run the following command:
curl <<prometheus_host>>:<<prometheus_port>>/metrics
All application metrics are exposed at the /metrics API endpoint. Any external Prometheus instance can then scrape these metrics.
Provision a Kubernetes cluster¶
You can provision a Kubernetes cluster using various community engines, such as Google Kubernetes Engine or Azure Kubernetes Service.
Provision a Kubernetes cluster using Google Kubernetes Engine¶
- Log in to Google Kubernetes Engine (https://console.cloud.google.com).
- Select your Project.
- Navigate to the APIs & Services -> Credentials tab and click Create credentials.
- From Service Account key, select your service account.
- Select JSON as the key format.
- Download the JSON snippet.
- Log in to the KQueen web UI.
- From the Create Provisioner page, select Google Kubernetes Engine.
- Insert the downloaded JSON snippet that contains the service account key and submit the provisioner creation.
- Click Deploy Cluster.
- Select the defined GCE provisioner.
- Specify the cluster requirements.
- Click Submit.
- To track the cluster status, navigate to the KQueen main dashboard.
Provision a Kubernetes cluster using Openstack Kubespray Engine¶
- Log in to Openstack Horizon Dashboard.
- Select your Project.
- Navigate to the Project -> Compute -> Access & Security -> API Access tab and click Download OpenStack RC File.
- Log in to the KQueen web UI.
- From the Create Provisioner page, select Openstack Kubespray Engine.
- Specify the provisioner requirements with the OpenStack RC file downloaded earlier.
- Click Deploy Cluster.
- Select the defined Openstack provisioner.
- Specify the cluster requirements, paying attention to the following fields:
  - SSH key name is the name of the SSH key pair. Choose from the existing pairs or create a new one.
  - Image name must be one of the Kubespray-supported Linux distributions.
  - Flavor must be at least m1.small (2 GB RAM, dual-core CPU).
  - SSH username is the login username for the nodes. It depends on the selected image.
  - Comma separated list of nameservers must contain the DNS servers required to resolve OpenStack URLs, such as the authentication URL.
- Click Submit.
- To track the cluster status, navigate to the KQueen main dashboard.
Note
The Openstack configuration should correspond to the Kubespray deployment requirements.
Note
Currently, the cluster cannot be scaled down below the number of master nodes.
Note
Make sure the default security group allows the VMs to connect to each other and to access the Internet (if the current node configuration does not fully satisfy the Kubespray requirements).
Note
The kube_pods_subnet network (10.233.64.0/18) must be unused in your network infrastructure. IP addresses for individual pods are assigned from this range.
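The pod network 10.233.64.0/18 covers 10.233.64.0 through 10.233.127.255. A quick arithmetic check that a given address does not fall inside that range (the helper name is made up for illustration):

```shell
# Succeeds when the address is inside 10.233.64.0/18.
in_pods_subnet() {
  IFS=. read -r a b c d <<EOF
$1
EOF
  ip=$(( (a << 24) | (b << 16) | (c << 8) | d ))
  net=$(( (10 << 24) | (233 << 16) | (64 << 8) ))
  mask=$(( 0xFFFFC000 ))   # /18 netmask: 18 leading one bits
  [ $(( ip & mask )) -eq "$net" ]
}

in_pods_subnet 10.233.70.5 && echo "overlaps kube_pods_subnet"
in_pods_subnet 192.168.1.1 || echo "safe"
```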
Provision a Kubernetes cluster using Azure Kubernetes Service¶
1. Log in to Azure Kubernetes Service (https://portal.azure.com).
2. Create an Azure Active Directory Application as described in the official Microsoft documentation.
3. Copy the Application ID, Application Secret, Tenant ID (Directory ID), and Subscription ID to use in step 8.
4. Set the Owner role for your Application in the Subscription settings to enable the creation of Kubernetes clusters.
5. Navigate to the Resource groups tab and create a resource group. Copy the Resource group name to use in step 8.
6. From the Resource groups -> your_group -> Access Control (IAM) tab, verify that the Application has the Owner role in the resource group.
7. Log in to the KQueen web UI.
8. From the Create provisioner tab, select the AKS engine and set the following:
   - Set the Client ID as the Application ID from step 3.
   - Set the Resource group name as the Resource group name from step 5.
   - Set the Secret as the Application Secret from step 3.
   - Set the Subscription ID as the Subscription ID from step 3.
   - Set the Tenant ID as the Tenant (Directory) ID from step 3.
9. In the KQueen web UI, click Deploy Cluster.
10. Select the AKS provisioner.
11. Specify the cluster requirements.
12. Specify the public SSH key to connect to the AKS VMs.
Note
For SSH access to the created VMs, assign a public IP address to a VM as described in How to connect to Azure AKS Kubernetes node VM by SSH. Once done, run the ssh azureuser@<<public_ip>> -i .ssh/your_defined_id_rsa command.

13. Click Submit.
14. To track the cluster status, navigate to the KQueen main dashboard.
Note
The Admin Console in the Azure portal is supported only in Internet Explorer and Microsoft Edge and may fail to operate in other browsers due to Microsoft issues.
Note
AKS creates a separate resource group during the creation of a Kubernetes cluster and uses the defined resource group as a prefix. This may affect your billing. For example:
Your Resource Group : Kqueen
Additional cluster-generated Resource Group: MC_Kqueen_44a37a65-1dff-4ef8-97ca-87fa3b8aee62_eastus
For more information, see Issues and https://docs.microsoft.com/en-us/azure/aks/faq#why-are-two-resource-groups-created-with-aks.
Manually add an existing Kubernetes cluster to KQueen¶
- Log in to the KQueen web UI.
- Click Create Provisioner.
. - Enter the cluster name.
- In the Engine drop-down list, select Manual Engine and click Submit.
- Click Deploy Cluster.
. - Select the predefined manual provisioner and attach a valid Kubernetes configuration file.
- Click Submit.
The Kubernetes cluster will be attached in a read-only mode.
Backup and recover Etcd¶
Etcd is the only stateful component of KQueen. To recover etcd in case of a failure, follow the procedure described in Disaster recovery.
Note
The v2 etcd keys are used for the deployment.
Example of the etcd backup workflow:
# Backup etcd to directory /root/backup/ (etcd data stored in /var/lib/etcd/default)
etcdctl backup --data-dir /var/lib/etcd/default --backup-dir /root/backup/
Example of the etcd recovery workflow:
# Move data to the etcd directory
mv -v /root/backup/* /var/lib/etcd/default/
# Start a new etcd instance with the --force-new-cluster flag (among your
# other parameters), for example:
etcd --data-dir /var/lib/etcd/default --force-new-cluster
KQueen CLI API examples¶
This section lists the available KQueen API operations that you can perform through the command-line interface and provides examples.
To obtain a bearer token and authenticate to KQueen:
$ TOKEN=$(curl -s -H "Content-Type: application/json" --data '{ "username": "admin", "password": "default" }' -X POST 127.0.0.1:5000/api/v1/auth | jq -r '.access_token')
$ echo $TOKEN
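The access token returned by /api/v1/auth is a JWT; its payload segment is plain base64url-encoded JSON and can be inspected locally without another API call. A minimal sketch with a made-up token (not a real KQueen credential; real payloads typically carry the identity and expiry claims):

```shell
# A JWT is three base64url segments separated by dots. The token below is a
# fabricated example: header segment "aGVhZGVy" and a tiny payload.
SAMPLE_TOKEN='aGVhZGVy.eyJpZGVudGl0eSI6ImFkbWluIn0.c2ln'
PAYLOAD=$(printf '%s' "$SAMPLE_TOKEN" | cut -d '.' -f 2)
# Restore the base64 padding stripped by the JWT encoding (one '=' here).
printf '%s=' "$PAYLOAD" | base64 -d
# prints: {"identity":"admin"}
```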
To list the organizations:
$ curl -s -H "Authorization: Bearer $TOKEN" 127.0.0.1:5000/api/v1/organizations | jq
[
{
"created_at": "2018-05-03T14:08:35",
"id": "22d8df64-4ac9-4be0-89a7-c45ea0fc85da",
"name": "DemoOrg",
"namespace": "demoorg"
}
]
To check the clusters:
$ curl -s -H "Authorization: Bearer $TOKEN" 127.0.0.1:5000/api/v1/clusters | jq
[]
To list a subset of clusters using pagination:
$ curl -s -H "Authorization: Bearer $TOKEN" 127.0.0.1:5000/api/v1/clusters?limit=5\&offset=10 | jq
{
"items": [],
"total": 0
}
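The limit and offset parameters slice the collection the way LIMIT/OFFSET does in SQL: limit=5&offset=10 skips the first ten clusters and returns the next five. The slicing itself can be illustrated with plain shell tools:

```shell
# Twenty numbered stand-ins for clusters; offset=10 skips the first ten,
# limit=5 keeps the next five, so items 11-15 are returned.
seq 1 20 | tail -n +11 | head -n 5
```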
To get a sorted list of clusters:
$ curl -s -H "Authorization: Bearer $TOKEN" 127.0.0.1:5000/api/v1/clusters?sortby=name\&order=desc | jq
[]
To list the available provisioner engines:
$ curl -s -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" 127.0.0.1:5000/api/v1/provisioners/engines
...
To list users:
$ curl -s -H "Authorization: Bearer $TOKEN" 127.0.0.1:5000/api/v1/users | jq
[
{
"active": true,
"created_at": "2018-05-03T14:08:35",
"email": "admin@kqueen.net",
"id": "09587e34-812d-4efc-af17-fbfd7315674c",
"organization": {
"created_at": "2018-05-03T14:08:35",
"id": "22d8df64-4ac9-4be0-89a7-c45ea0fc85da",
"name": "DemoOrg",
"namespace": "demoorg"
},
"password": "$2b$12$DQvL0Wsqr10DJovkNXvqXeZeAImoqmPXQHZF2nsZ0ICcB6WNBlwtS",
"role": "superadmin",
"username": "admin"
}
]
To create a new organization:
The following command uses testorganization as an example.
$ curl -s -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" --data '{ "name": "testorganization", "namespace": "testorganization" }' -X POST 127.0.0.1:5000/api/v1/organizations | jq
{
"created_at": "2018-05-03T14:10:09",
"id": "bebf0186-e2df-40a7-9b89-a2b77a7275d9",
"name": "testorganization",
"namespace": "testorganization"
}
To add a new user and password to the new organization:
The following example shows how to add the testusername user name and testpassword password to the newly created testorganization organization.
$ ORG_ID=$(curl -s -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" 127.0.0.1:5000/api/v1/organizations | jq -r '.[] | select (.name == "testorganization").id')
$ echo $ORG_ID
$ curl -s -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" --data "{ \"username\": \"testusername\", \"organization\": \"Organization:$ORG_ID\", \"role\": \"superadmin\", \"active\": true, \"password\": \"testpassword\" }" -X POST 127.0.0.1:5000/api/v1/users | jq
{
"active": true,
"created_at": "2018-05-03T14:10:33",
"id": "c2782be5-8b87-4322-82b0-6b726bc4952d",
"organization": {
"created_at": "2018-05-03T14:10:09",
"id": "bebf0186-e2df-40a7-9b89-a2b77a7275d9",
"name": "testorganization",
"namespace": "testorganization"
},
"password": "$2b$12$gYhVf23WXplWSZH8FjaiB.9SzwsRHAelipx2bLF407E0zAOGnmfNC",
"role": "superadmin",
"username": "testusername"
}
To switch to a particular user:
The following example shows how to switch to the testusername user.
$ TOKEN=$(curl -s -H "Content-Type: application/json" --data '{ "username": "testusername", "password": "testpassword" }' -X POST 127.0.0.1:5000/api/v1/auth | jq -r '.access_token')
$ echo $TOKEN
To add a new Azure Managed Kubernetes Service provisioner:
The following example shows how to add a new Azure Managed Kubernetes Service
provisioner created by the testusername user.
$ USER_ID=$(curl -s -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" 127.0.0.1:5000/api/v1/users | jq -r '.[] | select (.username == "testusername").id')
$ echo $USER_ID
$ curl -s -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" --data "{ \"name\": \"testprovisioner\", \"engine\": \"kqueen.engines.AksEngine\", \"owner\": \"User:$USER_ID\", \"parameters\": { \"client_id\": \"testclient_id\", \"resource_group_name\": \"testresource_group_name\", \"secret\": \"testsecret\", \"subscription_id\": \"testsubscription_id\", \"tenant\": \"testtenant\" } }" -X POST 127.0.0.1:5000/api/v1/provisioners | jq
{
"created_at": "2018-05-03T14:11:08",
"engine": "kqueen.engines.AksEngine",
"id": "052397f1-b813-49ac-acc8-812c9e00b709",
"name": "testprovisioner",
"owner": {
"active": true,
"created_at": "2018-05-03T14:10:33",
"id": "c2782be5-8b87-4322-82b0-6b726bc4952d",
"organization": {
"created_at": "2018-05-03T14:10:09",
"id": "bebf0186-e2df-40a7-9b89-a2b77a7275d9",
"name": "testorganization",
"namespace": "testorganization"
},
"password": "$2b$12$gYhVf23WXplWSZH8FjaiB.9SzwsRHAelipx2bLF407E0zAOGnmfNC",
"role": "superadmin",
"username": "testusername"
},
"parameters": {
"client_id": "testclient_id",
"resource_group_name": "testresource_group_name",
"secret": "testsecret",
"subscription_id": "testsubscription_id",
"tenant": "testtenant"
},
"state": "OK",
"verbose_name": "Azure Managed Kubernetes Service"
}
To deploy a new Kubernetes cluster using Azure Managed Kubernetes Service provisioner:
The following example shows how to deploy a new Kubernetes cluster using the
Azure Managed Kubernetes Service provisioner testprovisioner created by the testusername user.
$ PROVISIONER_ID=$(curl -s -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" 127.0.0.1:5000/api/v1/provisioners | jq -r '.[] | select (.name == "testprovisioner").id')
$ echo $PROVISIONER_ID
$ curl -s -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" --data "{ \"name\": \"testcluster\", \"owner\": \"User:$USER_ID\", \"provisioner\": \"Provisioner:$PROVISIONER_ID\", \"metadata\": { \"location\": \"eastus\", \"ssh_key\": \"testssh_key\", \"vm_size\": \"Standard_D1_v2\" } }" -X POST 127.0.0.1:5000/api/v1/clusters | jq
...
To check the clusters:
$ curl -s -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" 127.0.0.1:5000/api/v1/clusters | jq
...
Google Cloud CLI operations with KQueen¶
In KQueen, you can perform a number of Google Cloud operations using the command-line interface (CLI). For example, you can create a Google Cloud account, create a new Kubernetes cluster (GKE), download the kubeconfig and use it to push an application into Kubernetes.
- Create a Google Cloud account and configure the gcloud tool to access your GCE:
https://cloud.google.com/sdk/gcloud/reference/init
- Log in to Google Cloud:
$ cd /tmp/
$ gcloud init
- Create a new project in GCE:
$ GCE_PROJECT_NAME="My Test Project"
$ GCE_PROJECT_ID="my-test-project-`date +%F`"
$ gcloud projects create "$GCE_PROJECT_ID" --name "$GCE_PROJECT_NAME"
Create in progress for [https://cloudresourcemanager.googleapis.com/v1/projects/my-test-project-2018-05-11].
Waiting for [operations/cp.7874714971502871878] to finish...done.
- Set the new project as the default:
$ gcloud config set project $GCE_PROJECT_ID
- Enable billing for the project:
$ GCE_ACCOUNT_ID=$(gcloud beta billing accounts list --filter="My Billing Account" --format="value(ACCOUNT_ID)")
$ gcloud billing projects link "$GCE_PROJECT_ID" --billing-account="$GCE_ACCOUNT_ID"
- Create a service account and assign the proper rights to it:
$ GCE_SERVICE_ACCOUNT_DISPLAY_NAME="My_Service_Account"
$ GCE_SERVICE_ACCOUNT_NAME="my-service-account"
$ gcloud iam service-accounts create "$GCE_SERVICE_ACCOUNT_NAME" --display-name "$GCE_SERVICE_ACCOUNT_DISPLAY_NAME"
- Generate keys for the new service account:
Note
Keep the generated key.json file in a safe location.
$ GCE_SERVICE_ACCOUNT_EMAIL=$(gcloud iam service-accounts list --filter="$GCE_SERVICE_ACCOUNT_NAME" --format="value(email)")
$ gcloud iam service-accounts keys create --iam-account $GCE_SERVICE_ACCOUNT_EMAIL key.json
- Assign a role to the service account and bind it to the project:
$ GCE_EMAIL="your_email_address@gmail.com"
$ gcloud iam service-accounts add-iam-policy-binding "$GCE_SERVICE_ACCOUNT_EMAIL" --member="user:$GCE_EMAIL" --role="roles/owner"
$ gcloud projects add-iam-policy-binding $GCE_PROJECT_ID --member="serviceAccount:$GCE_SERVICE_ACCOUNT_EMAIL" --role="roles/container.clusterAdmin"
$ gcloud projects add-iam-policy-binding $GCE_PROJECT_ID --member="serviceAccount:$GCE_SERVICE_ACCOUNT_EMAIL" --role="roles/iam.serviceAccountActor"
- Start KQueen and obtain bearer token:
$ git clone https://github.com/Mirantis/kqueen.git
$ cd kqueen
$ docker-compose -f docker-compose.yml -f docker-compose.demo.yml rm -f # Make sure you are starting from scratch
$ docker-compose -f docker-compose.yml -f docker-compose.demo.yml up
$ TOKEN=$(curl -s -H "Content-Type: application/json" --data '{ "username": "admin", "password": "default" }' -X POST 127.0.0.1:5000/api/v1/auth | jq -r '.access_token')
$ echo $TOKEN
- Create a new organization "testorganization" with a new user "testusername" and password "testpassword":
$ curl -s -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" --data '{ "name": "testorganization", "namespace": "testorganization" }' -X POST 127.0.0.1:5000/api/v1/organizations | jq
$ ORG_ID=$(curl -s -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" 127.0.0.1:5000/api/v1/organizations | jq -r '.[] | select (.name == "testorganization").id')
$ echo $ORG_ID
$ curl -s -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" --data "{ \"username\": \"testusername\", \"organization\": \"Organization:$ORG_ID\", \"role\": \"superadmin\", \"active\": true, \"password\": \"testpassword\" }" -X POST 127.0.0.1:5000/api/v1/users | jq
- Switch to the new user "testusername" and add a new Google Kubernetes Engine provisioner:
$ TOKEN=$(curl -s -H "Content-Type: application/json" --data '{ "username": "testusername", "password": "testpassword" }' -X POST 127.0.0.1:5000/api/v1/auth | jq -r '.access_token')
$ echo $TOKEN
$ USER_ID=$(curl -s -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" 127.0.0.1:5000/api/v1/users | jq -r '.[] | select (.username == "testusername").id')
$ echo $USER_ID
$ SERVICE_ACCOUNT_INFO=$(cat ../key.json)
$ echo $SERVICE_ACCOUNT_INFO
$ curl -s -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" --data "{ \"name\": \"testprovisioner\", \"engine\": \"kqueen.engines.GceEngine\", \"owner\": \"User:$USER_ID\", \"parameters\": { \"project\": \"$GCE_PROJECT_ID\", \"service_account_info\": $SERVICE_ACCOUNT_INFO } }" -X POST 127.0.0.1:5000/api/v1/provisioners | jq
- Deploy Kubernetes cluster using the GKE provisioner:
$ PROVISIONER_ID=$(curl -s -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" 127.0.0.1:5000/api/v1/provisioners | jq -r '.[] | select (.name == "testprovisioner").id')
$ echo $PROVISIONER_ID
$ curl -s -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" --data "{ \"name\": \"testcluster\", \"owner\": \"User:$USER_ID\", \"provisioner\": \"Provisioner:$PROVISIONER_ID\", \"metadata\": { \"machine_type\": \"n1-standard-1\", \"node_count\": 1, \"zone\": \"us-central1-a\" } }" -X POST 127.0.0.1:5000/api/v1/clusters | jq
- Check the status of the cluster by querying the KQueen API (run this command multiple times):
$ curl -s -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" 127.0.0.1:5000/api/v1/clusters
- Download the kubeconfig for the "testcluster" cluster from KQueen:
$ CLUSTER_ID=$(curl -s -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" 127.0.0.1:5000/api/v1/clusters | jq -r '.[] | select (.name == "testcluster").id')
$ echo $CLUSTER_ID
$ curl -s -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" 127.0.0.1:5000/api/v1/clusters/$CLUSTER_ID/kubeconfig > kubeconfig.conf
- Use the kubeconfig and check Kubernetes:
$ export KUBECONFIG=$PWD/kubeconfig.conf
$ kubectl get nodes
$ kubectl get componentstatuses
$ kubectl get namespaces
- Install Helm to deploy applications easily:
$ curl -s $(curl -s https://github.com/kubernetes/helm | awk -F \" "/linux-amd64/ { print \$2 }") | tar xvzf - -C /tmp/ linux-amd64/helm
$ sudo mv /tmp/linux-amd64/helm /usr/local/bin/
$ kubectl create serviceaccount tiller --namespace kube-system
$ kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
$ helm init --service-account tiller
$ sleep 30
$ helm repo update
- Install applications on the cluster using Helm. For example:
$ helm install stable/kubernetes-dashboard --name=my-kubernetes-dashboard --namespace monitoring --set ingress.enabled=true,rbac.clusterAdminRole=true
Azure CLI operations with KQueen¶
In KQueen, you can perform a number of Azure operations using the command-line interface (CLI). For example, you can create an Azure account, create a new Kubernetes cluster (AKS), download the Kubernetes configuration file and use it to push an application into Kubernetes.
- Log in to your Azure portal:
$ az login
To sign in, use a web browser to open the page https://microsoft.com/devicelogin and enter the code CBMZ4QPTE to authenticate.
[
{
"cloudName": "AzureCloud",
"id": "8xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx2",
"isDefault": true,
"name": "Pay-As-You-Go",
"state": "Enabled",
"tenantId": "7xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxd",
"user": {
"name": "kxxxxxxxxxxxxxxxxxxxxxxxm",
"type": "user"
}
}
]
- Create a new Resource Group:
$ RESOURCE_GROUP_NAME="kqueen-demo-rg"
$ az group create --name "$RESOURCE_GROUP_NAME" --location westeurope
{
"id": "/subscriptions/8xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx2/resourceGroups/kqueen-demo-rg",
"location": "westeurope",
"managedBy": null,
"name": "kqueen-demo-rg",
"properties": {
"provisioningState": "Succeeded"
},
"tags": null
}
- Create a service principal that will store the permissions to manage resources in the specified subscription:
$ EMAIL=$(az account list | jq -r '.[].user.name')
$ SUBSCRIPTION_ID=$(az account list | jq -r ".[] | select (.user.name == \"$EMAIL\").id")
$ SECRET="my_password"
$ SERVICE_PRINCIPAL_NAME="kqueen-demo-sp"
$ az ad sp create-for-rbac --role="Contributor" --scopes="/subscriptions/$SUBSCRIPTION_ID" --name "$SERVICE_PRINCIPAL_NAME" --password "$SECRET"
- Obtain the Azure parameters to create a provisioner in KQueen:
$ CLIENT_ID="$(az ad sp list --display-name "$SERVICE_PRINCIPAL_NAME" | jq -r '.[].appId')"
$ TENANT_ID=$(az ad sp list --display-name "$SERVICE_PRINCIPAL_NAME" | jq -r '.[].additionalProperties.appOwnerTenantId')
In this procedure, "testorganization" and "testusername / testpassword" are used as examples.
- Start KQueen and obtain a bearer token:
$ git clone https://github.com/Mirantis/kqueen.git
$ cd kqueen
$ docker-compose -f docker-compose.yml -f docker-compose.demo.yml rm -f # Make sure you are starting from scratch
$ docker-compose -f docker-compose.yml -f docker-compose.demo.yml up
$ TOKEN=$(curl -s -H "Content-Type: application/json" --data '{ "username": "admin", "password": "default" }' -X POST 127.0.0.1:5000/api/v1/auth | jq -r '.access_token')
$ echo $TOKEN
- Create a new organization and add a user:
$ curl -s -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" --data '{ "name": "testorganization", "namespace": "testorganization" }' -X POST 127.0.0.1:5000/api/v1/organizations | jq
$ ORG_ID=$(curl -s -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" 127.0.0.1:5000/api/v1/organizations | jq -r '.[] | select (.name == "testorganization").id')
$ echo $ORG_ID
$ curl -s -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" --data "{ \"username\": \"testusername\", \"organization\": \"Organization:$ORG_ID\", \"role\": \"superadmin\", \"active\": true, \"password\": \"testpassword\" }" -X POST 127.0.0.1:5000/api/v1/users | jq
- Switch to the newly created user and add a new Azure Managed Kubernetes Service provisioner:
$ TOKEN=$(curl -s -H "Content-Type: application/json" --data '{ "username": "testusername", "password": "testpassword" }' -X POST 127.0.0.1:5000/api/v1/auth | jq -r '.access_token')
$ echo $TOKEN
$ USER_ID=$(curl -s -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" 127.0.0.1:5000/api/v1/users | jq -r '.[] | select (.username == "testusername").id')
$ echo $USER_ID
$ curl -s -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" --data "{ \"name\": \"testprovisioner\", \"engine\": \"kqueen.engines.AksEngine\", \"owner\": \"User:$USER_ID\", \"parameters\": { \"client_id\": \"$CLIENT_ID\", \"resource_group_name\": \"$RESOURCE_GROUP_NAME\", \"secret\": \"$SECRET\", \"subscription_id\": \"$SUBSCRIPTION_ID\", \"tenant\": \"$TENANT_ID\" } }" -X POST 127.0.0.1:5000/api/v1/provisioners | jq
- Deploy a Kubernetes cluster using the AKS provisioner:
$ PROVISIONER_ID=$(curl -s -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" 127.0.0.1:5000/api/v1/provisioners | jq -r '.[] | select (.name == "testprovisioner").id')
$ echo $PROVISIONER_ID
$ SSH_KEY="$HOME/.ssh/id_rsa.pub"
$ curl -s -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" --data "{ \"name\": \"testcluster\", \"owner\": \"User:$USER_ID\", \"provisioner\": \"Provisioner:$PROVISIONER_ID\", \"metadata\": { \"location\": \"westeurope\", \"node_count\": 1, \"ssh_key\": \"`cat $SSH_KEY`\", \"vm_size\": \"Standard_D1_v2\" } }" -X POST 127.0.0.1:5000/api/v1/clusters | jq
- Check the status of the cluster by querying the KQueen API (the watch command polls it repeatedly):
$ CLUSTER_ID=$(curl -s -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" 127.0.0.1:5000/api/v1/clusters | jq -r '.[] | select (.name == "testcluster").id')
$ echo $CLUSTER_ID
$ watch "curl -s -H \"Authorization: Bearer $TOKEN\" -H 'Content-Type: application/json' 127.0.0.1:5000/api/v1/clusters/$CLUSTER_ID | jq '.state'"
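`watch` is fine interactively; in a script you would poll until the cluster leaves the `Deploying` state. A minimal sketch with a stubbed state lookup (replace `fetch_state` with a real GET on `/api/v1/clusters/<id>`):

```python
import itertools
import time


def wait_for_cluster(fetch_state, interval=10, max_polls=60):
    """Poll until the cluster reports a terminal state ('OK' or 'Error')."""
    for _ in range(max_polls):
        state = fetch_state()
        if state in ("OK", "Error"):
            return state
        time.sleep(interval)
    raise TimeoutError("cluster did not finish deploying")


# Stub standing in for the real API call: three polls while deploying, then OK.
states = itertools.chain(["Deploying"] * 3, itertools.repeat("OK"))
result = wait_for_cluster(lambda: next(states), interval=0)
```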
"Deploying"
...
"OK"
# Check the cluster details in the Web GUI.
- Download the kubeconfig file for "testcluster" from KQueen:
$ curl -s -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" 127.0.0.1:5000/api/v1/clusters/$CLUSTER_ID/kubeconfig --output kubeconfig.conf
$ head kubeconfig.conf
- Use the kubeconfig file and check the Kubernetes cluster:
$ export KUBECONFIG=$PWD/kubeconfig.conf
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
aks-agentpool-21742512-0 Ready agent 11m v1.7.7
$ kubectl describe nodes aks-agentpool-21742512-0
Name: aks-agentpool-21742512-0
Roles: agent
Labels: agentpool=agentpool
beta.kubernetes.io/arch=amd64
beta.kubernetes.io/instance-type=Standard_D1_v2
beta.kubernetes.io/os=linux
failure-domain.beta.kubernetes.io/region=westeurope
failure-domain.beta.kubernetes.io/zone=0
kubernetes.azure.com/cluster=MC_kqueen-demo-rg_4b0363bf-0c74-4cc3-9468-23a79e0a2ec2_westeuro
kubernetes.io/hostname=aks-agentpool-21742512-0
kubernetes.io/role=agent
storageprofile=managed
storagetier=Standard_LRS
Annotations: node.alpha.kubernetes.io/ttl=0
volumes.kubernetes.io/controller-managed-attach-detach=true
CreationTimestamp: Thu, 17 May 2018 09:12:32 +0200
Taints: <none>
Unschedulable: false
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
NetworkUnavailable False Thu, 17 May 2018 09:13:10 +0200 Thu, 17 May 2018 09:13:10 +0200 RouteCreated RouteController created a route
OutOfDisk False Thu, 17 May 2018 09:21:33 +0200 Thu, 17 May 2018 09:12:32 +0200 KubeletHasSufficientDisk kubelet has sufficient disk space available
MemoryPressure False Thu, 17 May 2018 09:21:33 +0200 Thu, 17 May 2018 09:12:32 +0200 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Thu, 17 May 2018 09:21:33 +0200 Thu, 17 May 2018 09:12:32 +0200 KubeletHasNoDiskPressure kubelet has no disk pressure
Ready True Thu, 17 May 2018 09:21:33 +0200 Thu, 17 May 2018 09:12:57 +0200 KubeletReady kubelet is posting ready status. AppArmor enabled
Addresses:
InternalIP: 10.240.0.4
Hostname: aks-agentpool-21742512-0
Capacity:
alpha.kubernetes.io/nvidia-gpu: 0
cpu: 1
memory: 3501580Ki
pods: 110
Allocatable:
alpha.kubernetes.io/nvidia-gpu: 0
cpu: 1
memory: 3399180Ki
pods: 110
System Info:
Machine ID: df3ffcd7ab1347709fce4c012f61baba
System UUID: 8B317B9A-D6E6-D846-B6FB-F8B396AA5AFF
Boot ID: 1d97c041-54fd-4d80-8515-2e3ef0f2f96c
Kernel Version: 4.13.0-1012-azure
OS Image: Ubuntu 16.04.4 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://1.13.1
Kubelet Version: v1.7.7
Kube-Proxy Version: v1.7.7
PodCIDR: 10.244.0.0/24
ExternalID: /subscriptions/8xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx2/resourceGroups/MC_kqueen-demo-rg_4b0363bf-0c74-4cc3-9468-23a79e0a2ec2_westeurope/providers/Microsoft.Compute/virtualMachines/aks-agentpool-21742512-0
ProviderID: azure:///subscriptions/8xxxxxxx-xxxx-xxxx-xxxxxxxxxxxxxxxx2/resourceGroups/MC_kqueen-demo-rg_4b0363bf-0c74-4cc3-9468-23a79e0a2ec2_westeurope/providers/Microsoft.Compute/virtualMachines/aks-agentpool-21742512-0
Non-terminated Pods: (7 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits
--------- ---- ------------ ---------- --------------- -------------
kube-system heapster-186967039-7w028 138m (13%) 138m (13%) 294Mi (8%) 294Mi (8%)
kube-system kube-dns-v20-2253765213-nq8vz 110m (11%) 0 (0%) 120Mi (3%) 220Mi (6%)
kube-system kube-dns-v20-2253765213-pn3dd 110m (11%) 0 (0%) 120Mi (3%) 220Mi (6%)
kube-system kube-proxy-lgv61 100m (10%) 0 (0%) 0 (0%) 0 (0%)
kube-system kube-svc-redirect-sw2sx 0 (0%) 0 (0%) 0 (0%) 0 (0%)
kube-system kubernetes-dashboard-2898242510-070c5 100m (10%) 100m (10%) 50Mi (1%) 50Mi (1%)
kube-system tunnelfront-440375991-xdftj 0 (0%) 0 (0%) 0 (0%) 0 (0%)
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
CPU Requests CPU Limits Memory Requests Memory Limits
------------ ---------- --------------- -------------
558m (55%) 238m (23%) 584Mi (17%) 784Mi (23%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 12m kubelet, aks-agentpool-21742512-0 Starting kubelet.
Normal NodeAllocatableEnforced 12m kubelet, aks-agentpool-21742512-0 Updated Node Allocatable limit across pods
Normal NodeHasSufficientDisk 9m (x5 over 12m) kubelet, aks-agentpool-21742512-0 Node aks-agentpool-21742512-0 status is now: NodeHasSufficientDisk
Normal NodeHasSufficientMemory 9m (x5 over 12m) kubelet, aks-agentpool-21742512-0 Node aks-agentpool-21742512-0 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 9m (x5 over 12m) kubelet, aks-agentpool-21742512-0 Node aks-agentpool-21742512-0 status is now: NodeHasNoDiskPressure
Normal Starting 8m kube-proxy, aks-agentpool-21742512-0 Starting kube-proxy.
Normal NodeReady 8m kubelet, aks-agentpool-21742512-0 Node aks-agentpool-21742512-0 status is now: NodeReady
$ kubectl get all --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system pod/heapster-186967039-7w028 2/2 Running 0 9m
kube-system pod/kube-dns-v20-2253765213-nq8vz 3/3 Running 0 11m
kube-system pod/kube-dns-v20-2253765213-pn3dd 3/3 Running 0 11m
kube-system pod/kube-proxy-lgv61 1/1 Running 0 11m
kube-system pod/kube-svc-redirect-sw2sx 1/1 Running 0 11m
kube-system pod/kubernetes-dashboard-2898242510-070c5 1/1 Running 0 11m
kube-system pod/tunnelfront-440375991-xdftj 1/1 Running 0 11m
NAMESPACE NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
kube-system deployment.extensions/heapster 1 1 1 1 11m
kube-system deployment.extensions/kube-dns-v20 2 2 2 2 11m
kube-system deployment.extensions/kubernetes-dashboard 1 1 1 1 11m
kube-system deployment.extensions/tunnelfront 1 1 1 1 11m
NAMESPACE NAME DESIRED CURRENT READY AGE
kube-system replicaset.extensions/heapster-186967039 1 1 1 9m
kube-system replicaset.extensions/heapster-482310450 0 0 0 11m
kube-system replicaset.extensions/kube-dns-v20-2253765213 2 2 2 11m
kube-system replicaset.extensions/kubernetes-dashboard-2898242510 1 1 1 11m
kube-system replicaset.extensions/tunnelfront-440375991 1 1 1 11m
NAMESPACE NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
kube-system deployment.apps/heapster 1 1 1 1 11m
kube-system deployment.apps/kube-dns-v20 2 2 2 2 11m
kube-system deployment.apps/kubernetes-dashboard 1 1 1 1 11m
kube-system deployment.apps/tunnelfront 1 1 1 1 11m
- Install Helm to deploy applications easily:
$ curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get | bash
$ kubectl create serviceaccount tiller --namespace kube-system
$ kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
$ helm init --upgrade --service-account tiller
$ helm repo update
- Install WordPress:
$ helm repo add azure https://kubernetescharts.blob.core.windows.net/azure
$ helm install azure/wordpress --name my-wordpress --set wordpressUsername=admin,wordpressPassword=password,mariadb.enabled=true,mariadb.persistence.enabled=false,persistence.enabled=false,resources.requests.cpu=100m
$ sleep 300
- Get access details for WordPress running in AKS and assign a DNS name to its public IP:
$ kubectl get svc --namespace default my-wordpress-wordpress
$ kubectl get pods -o wide
$ PUBLIC_IP=$(kubectl get svc --namespace default my-wordpress-wordpress -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
$ DNSNAME="kqueen-demo-wordpress"
$ RESOURCEGROUP=$(az network public-ip list --query "[?ipAddress!=null]|[?contains(ipAddress, '$PUBLIC_IP')].[resourceGroup]" --output tsv)
$ PIPNAME=$(az network public-ip list --query "[?ipAddress!=null]|[?contains(ipAddress, '$PUBLIC_IP')].[name]" --output tsv)
$ az network public-ip update --resource-group $RESOURCEGROUP --name $PIPNAME --dns-name $DNSNAME | jq
$ WORDPRESS_FQDN=$(az network public-ip list | jq -r ".[] | select (.ipAddress == \"$PUBLIC_IP\").dnsSettings.fqdn")
$ echo Username: admin
$ echo Password: $(kubectl get secret --namespace default my-wordpress-wordpress -o jsonpath="{.data.wordpress-password}" | base64 --decode)
$ echo http://$WORDPRESS_FQDN/admin
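The two jq/jsonpath lookups above (the FQDN for a given IP, and the base64-decoded admin password) can be reproduced in Python; a sketch over sample data shaped like the `az` output (values are illustrative):

```python
import base64


def fqdn_for_ip(public_ips, ip):
    """Mimic: jq '.[] | select (.ipAddress == IP).dnsSettings.fqdn'
    over the `az network public-ip list` output."""
    for entry in public_ips:
        if entry.get("ipAddress") == ip:
            return entry["dnsSettings"]["fqdn"]
    return None


def decode_secret(b64_value):
    """Mimic: kubectl get secret ... -o jsonpath=... | base64 --decode."""
    return base64.b64decode(b64_value).decode()


# Sample record shaped like one element of the az CLI output.
ips = [{"ipAddress": "1.2.3.4",
        "dnsSettings": {"fqdn": "kqueen-demo-wordpress.westeurope.cloudapp.azure.com"}}]
fqdn = fqdn_for_ip(ips, "1.2.3.4")
password = decode_secret(base64.b64encode(b"password").decode())
```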
- Clean up: delete the service principal and the Azure resource group:
$ az ad sp delete --id "$CLIENT_ID"
$ az group delete -y --no-wait --name "$RESOURCE_GROUP_NAME"