Overview

DevOps-at-Scale is a Kubernetes-based open source solution that provides:
  • Centralized management of the entire software development tools ecosystem
  • Centralized management of developer workspaces
  • A fully containerized tools environment (all tools are deployed as Kubernetes services running within pods)
  • Simplified creation of CI/CD pipelines for source code repositories
  • Quick and storage-efficient developer workspace creation using NetApp ONTAP technology
  • Easy workspace access for developers via the Theia IDE or NFS mounts
  • One-click installation via the Helm package manager

Prerequisites

Note

Please see https://kubernetes.io/docs/setup/ for Kubernetes installation instructions. Please check the Trident documentation for supported Kubernetes versions.
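
As a quick sanity check before installing, you can confirm the Kubernetes and Trident versions in play (standard commands; tridentctl is available only after Trident has been installed):

    kubectl version --short
    tridentctl -n trident version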

Note

Please ensure that your Kubernetes cluster, ONTAP cluster, and Trident can communicate with each other and reside in secure network(s).

Note

Please see the References section for how to use Ansible to automate Kubernetes cluster installation and setup.

Note

Please see the References section for how to use Ansible to automate Trident installation in a Kubernetes cluster.

Installation

Installing Using the Helm Package Manager

  1. Download the source code from GitHub:
git clone https://github.com/NetApp/devops-at-scale
  2. Go to the “devops-at-scale” directory:
cd ./devops-at-scale
  3. Enter storage details and installation options by modifying values.yaml:
cat values.yaml
global:
  # "LoadBalancer" or "NodePort"
  ServiceType: NodePort
  scm:
    # "gitlab" or "bitbucket"
    type: "gitlab"
  registry:
    # "artifactory" or "docker-registry"
    type: "artifactory"
  persistence:
    ontap:
      # If set to "true", ONTAP volumes for the various services (e.g. gitlab/artifactory/couchdb) will be created automatically
      automaticVolumeCreation: true
      # ontap data lif IP address
      dataIP: ""
      # ontap SVM name
      svm: ""
      # ontap aggregate
      aggregate: ""
  4. Install the Helm chart using the following command:
helm install --name devops-at-scale .
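
Instead of editing values.yaml in step 3, the same values can be overridden on the Helm command line (the IP address, SVM name, and aggregate below are placeholders):

    helm install --name devops-at-scale \
      --set global.persistence.ontap.dataIP=10.10.10.10 \
      --set global.persistence.ontap.svm=devops_svm \
      --set global.persistence.ontap.aggregate=aggr1 .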

Note

If Helm is not already installed, visit https://helm.sh/ for installation instructions.

  5. Wait for the pods to reach the “Running” state:
>kubectl get pods | grep devops-at-scale
NAME                                               READY     STATUS    RESTARTS   AGE
devops-at-scale-couchdb-58f48c5b8d-vw9mb           1/1       Running   0          3m
devops-at-scale-docker-registry-7969844c9f-phshp   1/1       Running   0          3m
devops-at-scale-gitlab-6c6dc79b77-j4dww            1/1       Running   0          3m
devops-at-scale-jenkins-74d87d6fd5-th29g           1/1       Running   0          3m
devops-at-scale-webservice-5bbcdbf88c-rjrp4        1/1       Running   0          3m

Note

It may take up to 10 minutes for all the pods to come up.
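
Rather than re-running kubectl get pods by hand, you can watch the pods until they are all Running (watch is a standard Linux utility):

    watch 'kubectl get pods | grep devops-at-scale'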

  6. After the pods are ready, retrieve the web service URL:
>kubectl get svc
NAME                              TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                                  AGE
devops-at-scale-couchdb           NodePort    10.108.249.65    <none>        5984:14339/TCP                           5m
devops-at-scale-docker-registry   NodePort    10.97.110.240    <none>        5000:24646/TCP                           5m
devops-at-scale-gitlab            NodePort    10.102.216.157   <none>        80:30593/TCP,22:8639/TCP,443:18600/TCP   5m
devops-at-scale-jenkins           NodePort    10.99.97.28      <none>        8080:12899/TCP                           5m
devops-at-scale-jenkins-agent     ClusterIP   10.100.249.190   <none>        50000/TCP                                5m
devops-at-scale-webservice        NodePort    10.101.38.243    <none>        5000:12054/TCP


export NODE_IP=$(kubectl get nodes -o jsonpath="{.items[0].status.addresses[0].address}")
export SERVICE_PORT=$(kubectl get -o jsonpath="{.spec.ports[0].nodePort}" services devops-at-scale-webservice)
export SERVICE_URL=$NODE_IP:$SERVICE_PORT
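
You can then verify that the web service responds (a simple reachability check, assuming curl is available):

    echo "Web service URL: http://$SERVICE_URL"
    curl -sI http://$SERVICE_URL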

Note

Take note of the web service port. The web service will be available at http://$SERVICE_URL (SERVICE_URL already combines the node IP and the web service port).

  7. Using a web browser, open the “devops-at-scale-webservice” URL (http://$SERVICE_URL) to visit the DevOps-At-Scale Frontend Management Console
Create CI Pipeline

Note

Initially, the GitLab service can be accessed using the credentials ‘root:root_devopsatscale’.

Note

Initially, all other services can be accessed using the credentials ‘admin:admin’.

Additional Configuration

Create Initial GitLab User (Optional)

An initial account must be created on GitLab before you can start using it. To create an account, visit the following URL and sign up.

http://<$NODE_IP>:<Gitlab_port>
GitLab

General Usage

Pipeline Creation

DevOps-at-Scale pipelines can be created via the pipeline creation page:
http://$SERVICE_URL/frontend/pipeline/create
Create CI Pipeline
Parameter       Value     Description
SCM-URL                   URL of the source code repository
SCM-Branch                SCM branch off which the pipeline should run
Export-policy   default   Export policy that should be used for the pipeline volume

Once the pipeline creation is successful, a Jenkins project with pre-populated build parameters is set up.

Jenkins project for Pipeline

Integrate GitLab with Jenkins for automatic build triggers

  1. From the webservice dashboard, copy the Jenkins URL for the newly created pipeline

    Pipelines dashboard
  2. Open GitLab from the webservice dashboard (http://$SERVICE_URL)

  3. Log in using root/root_devopsatscale

  4. In the GitLab project, go to Settings -> Integrations, paste the Jenkins project URL from step (1), and create the webhook

    Note

    When pasting the Jenkins URL, replace /job/<jenkins-project-name> with /project/<jenkins-project-name> (for example, http://<jenkins-host>/job/my-pipeline becomes http://<jenkins-host>/project/my-pipeline)

    Create Webhook Gitlab
  5. In the global GitLab settings, allow outbound requests from the local network

    Allow Outbound Requests Gitlab

  6. Enable the build trigger from the webhook in Jenkins. Navigate to the pipeline’s Jenkins URL from the webservice dashboard and go to Configure -> Build Triggers

    Enable build trigger Jenkins
  7. Webhook setup is complete. Test the webhook setup manually from GitLab (Project -> Settings -> Integrations -> Webhook -> Test -> Push Events)

    Test WebHook

This validates that the GitLab and Jenkins integration was successful.

Successful GitLab Jenkins Integration
  8. All further pushes to the GitLab project will automatically trigger a build in the Jenkins project corresponding to the pipeline

    Successful build trigger on git push

Workspace Creation

DevOps-at-Scale workspaces can be created via the workspace creation page:
http://$SERVICE_URL/frontend/workspace/create
TheiaIDE
Parameter          Value   Description
Pipeline                   Select the pipeline
Username                   Developer username
Workspace prefix           Enter a prefix which can be used to identify the workspace
Build                      Select the build from which the workspace should be created
TheiaIDE

Once a workspace is created, you will be provided with instructions on how to access your workspace via the Theia browser IDE or locally via NFS (an example NFS mount is sketched below):

Theia IDE
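
For local NFS access, the mount follows the usual pattern; the ONTAP data LIF and junction path below are placeholders for the values shown in the workspace access instructions:

    sudo mkdir -p /mnt/my-workspace
    sudo mount -t nfs <ontap-data-lif>:/<workspace-junction-path> /mnt/my-workspace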

Merge Workspace Creation

DevOps-at-Scale merge workspaces can be created via the merge workspace page:
http://$SERVICE_URL/frontend/workspace/merge

Users can merge their workspace with the latest build when they feel their workspace is out of date.

This allows users to pull the latest code and artifacts into their workspace, potentially providing incremental build time savings.

To merge workspaces, navigate to the Merge Workspace tab and fill in the following values:

Workspace Merge
Parameter               Value   Description
Username                        Developer username
Workspace Name Prefix           Enter a prefix which can be used to identify the workspace
Source Workspace name           Enter the name of the source workspace to merge from
Build                           Select the build off which the workspace should be created

Uninstalling

DevOps-at-Scale can be uninstalled using a single command:

helm del --purge devops-at-scale

Note

Once all the services’ PVCs are deleted, Trident deletes the associated PVs and ONTAP volumes
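
To confirm the cleanup, or to remove anything left behind, list any remaining PVCs (the grep pattern is assumed from the service names above):

    kubectl get pvc | grep devops-at-scale
    kubectl delete pvc <leftover-pvc-name>    # only if any remain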

Support

Support for DevOps-at-Scale is handled via Slack.

Please post your comments in the #devops-at-scale channel.

Note

Support is provided on a best-effort basis.

License

BSD 3-Clause License

  Copyright (c) 2018-2019, NetApp, Inc.
  All rights reserved.

  Redistribution and use in source and binary forms, with or without
  modification, are permitted provided that the following conditions are met:

  * Redistributions of source code must retain the above copyright notice, this
    list of conditions and the following disclaimer.

  * Redistributions in binary form must reproduce the above copyright notice,
    this list of conditions and the following disclaimer in the documentation
    and/or other materials provided with the distribution.

  * Neither the name of the copyright holder nor the names of its
    contributors may be used to endorse or promote products derived from
    this software without specific prior written permission.

  THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
  AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
  IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
  DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
  FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
  DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
  SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
  CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
  OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
  OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

References

Installation and setup of Kubernetes cluster using Ansible

Pre-requisites

  1. If you do not have an Ansible setup, please set one up by following the instructions here
  2. One or more VMs reachable from the host where the Ansible playbooks are run

Note

The Ansible playbooks referred to in the steps below are located in devops-at-scale/devops-at-scale/ansible-playbooks/k8s_setup.

Usage

  1. Download roles

    ansible-galaxy install --roles-path roles -c geerlingguy.docker
    ansible-galaxy install --roles-path roles -c geerlingguy.kubernetes
    
  2. Create inventory file

    $ cat inventory
    [all]
    scspa0633050001 kubernetes_role="master"
    scspa0633051001 kubernetes_role="node"
    

If you have more than one worker node, tag each appropriately, for example:
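
A hypothetical inventory with one master and two worker nodes:

    [all]
    master01 kubernetes_role="master"
    worker01 kubernetes_role="node"
    worker02 kubernetes_role="node"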

  3. Install Docker and Kubernetes

    ansible-playbook -i inventory -K --become-method=su --become k8s_setup_cluster.yml
    

This will install kubeadm, kubelet, and kubectl, and create a cluster with the worker nodes.
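
To confirm that the cluster formed correctly, check the node status from the master node:

    kubectl get nodes -o wide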

Installation and setup of Trident on Kubernetes using Ansible

Pre-requisites

  1. If you do not have an Ansible setup, please set one up by following the instructions from Ansible Setup
  2. A Kubernetes cluster, with an inventory file identifying the master and worker nodes
  3. An ONTAP cluster

Note

The Ansible playbooks referred to in the steps below are located in devops-at-scale/devops-at-scale/ansible-playbooks/trident_setup.

Qualify your Kubernetes cluster

ansible-playbook -i inventory kubectl_check.yml -K --become --become-method=su --extra-vars=@vsim_vars.yml
(requires root access on the K8S master node to run kubectl)

Preparation

  1. The trident_prereqs.yml playbook will install pip, setuptools, and the openshift Python package, which are required to run the k8s Ansible module.

This playbook will then create a “trident” namespace.

ansible-playbook -i inventory trident_prereqs.yml -K --become --become-method=su

Download installer and final checks

  2. The trident.yml playbook will install the Trident installer and set up a backend storage file to support the Trident etcd database:

    ansible-playbook -i inventory trident.yml -K --become --become-method=su --extra-vars=@vsim_vars.yml
    

(requires root access on the K8S master node to run yum, and possibly k8s)

Trident installation

3. The next step is to run the Trident installer on the Kubernetes master node:

  4. Check that Trident is running

    ansible-playbook -i inventory trident_check_pods.yml -K --become --become-method=su
    

As of today, you should see 2/2 Running (one pod running two out of two containers).

Trident configuration

5. The backend created in the preparation step is only used to support the Trident etcd persistent storage. New backend(s) need to be created to support production.

Add backend

Being lazy here, we can reuse the same backend:

trident/trident-installer/tridentctl -n trident create backend -f trident/trident-installer/setup/backend.json
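
If you prefer to create a fresh backend instead, a minimal ontap-nas backend.json looks roughly like this (the LIF addresses, SVM name, and credentials are placeholders to replace with your own):

    {
      "version": 1,
      "storageDriverName": "ontap-nas",
      "backendName": "devops-at-scale-backend",
      "managementLIF": "10.0.0.1",
      "dataLIF": "10.0.0.2",
      "svm": "devops_svm",
      "username": "admin",
      "password": "changeme"
    }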

6. Add a storage class in Kubernetes. Follow the instructions in the Trident documentation (a sketch is shown below)
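
As a sketch, a storage class for an ontap-nas backend might look like the following (names are illustrative; consult the Trident documentation for the parameters matching your backend and Trident version):

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: ontap-nas
    provisioner: netapp.io/trident
    parameters:
      backendType: "ontap-nas"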

7. Test the Trident installation by creating a first volume and mounting it into an nginx pod. Follow the instructions in the Trident example (a minimal sketch is shown below)
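
A minimal test sketch, assuming the ontap-nas storage class above (apply with kubectl create -f):

    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: trident-test
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
      storageClassName: ontap-nas
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: nginx-trident-test
    spec:
      containers:
        - name: nginx
          image: nginx
          volumeMounts:
            - name: test-vol
              mountPath: /usr/share/nginx/html
      volumes:
        - name: test-vol
          persistentVolumeClaim:
            claimName: trident-test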

Release Notes

Release 1.1: Known Issues

  • For a merge workspace, the new pod mounts two volumes: one containing the source workspace and the other containing a copy of the selected build. The changes have to be merged manually by the developer
  • The UID and GID for the workspace and service volumes default to 0, 0. Customizable UID and GID values will be provided in 1.2
  • Manual webhook setup for the GitLab and Jenkins integration is required for every pipeline
  • The solution has only been tested with ONTAP using NFS volumes
  • The CI pipeline build clones must be purged manually
  • The number of active clones (build and workspace) is limited by ONTAP. Please check the ONTAP release and make sure purge policies are in place
  • In case of failure during pipeline or workspace creation, the Kubernetes PVCs may have to be purged manually
  • For GitLab, the URL displayed for git cloning is incorrect. Please use http://<$NODE_IP>:<devops-at-scale-gitlab-port>/ when running git clone (see the example below).
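
For example, for a hypothetical project “myproject” under the root account:

    git clone http://<$NODE_IP>:<devops-at-scale-gitlab-port>/root/myproject.git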