Welcome to wdocs’s documentation!

Contents:

AWS

Available zones

Up-to-date information: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html

us-east-1 US East (N. Virginia)
us-west-2 US West (Oregon)
us-west-1 US West (N. California)
eu-west-1 EU (Ireland)
eu-central-1 EU (Frankfurt)
ap-southeast-1 Asia Pacific (Singapore)
ap-northeast-1 Asia Pacific (Tokyo)
ap-southeast-2 Asia Pacific (Sydney)
ap-northeast-2 Asia Pacific (Seoul)
ap-south-1 Asia Pacific (Mumbai)
sa-east-1 South America (São Paulo)

AWS CDK

Installation

Install Node.js and TypeScript:

# Using Ubuntu
curl -sL https://deb.nodesource.com/setup_lts.x | sudo -E bash -
sudo apt-get install -y nodejs

# Install TypeScript
sudo npm -g install typescript

Install/update CDK:

# If you regularly work with multiple versions of the AWS CDK, you may want to install a matching version of the AWS CDK Toolkit in individual CDK projects.
# To do this, omit -g from the npm install command.
# Then use npx cdk to invoke it; this will run the local version if one exists, falling back to a global version if not.

sudo npm install -g aws-cdk

## Update
# sudo npm update -g aws-cdk

cdk --version

Instructions

CLI base commands - https://docs.aws.amazon.com/cdk/latest/guide/cli.html

Create first app:

mkdir my-app-name
cd my-app-name

# the directory name is used as a prefix for the stack name
cdk init app --language typescript

  # app (default) - Creates an empty AWS CDK app.
  # sample-app    - Creates an AWS CDK app with a stack containing an Amazon SQS queue and an Amazon SNS topic.

Add env props to your bin/my-app-name.ts code so the stack can be deployed to different accounts/regions by changing environment variables:

const stack = new MyAppNameStack(app, 'MyAppNameStack', {
    env: {
        account: process.env.CDK_DEPLOY_ACCOUNT || process.env.CDK_DEFAULT_ACCOUNT,
        region: process.env.CDK_DEPLOY_REGION || process.env.CDK_DEFAULT_REGION
    }
});
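With the stack wired to environment variables like this, the target account and region can be switched per deploy. A minimal sketch (the account ID and region below are placeholders):

export CDK_DEPLOY_ACCOUNT=123456789012
export CDK_DEPLOY_REGION=eu-west-1
cdk deploy MyAppNameStack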

Set versionReporting to false in ./cdk.json or ~/.cdk.json. This opts out unless you opt in by specifying --version-reporting on an individual command.

{
  "app": "…",
  "versionReporting": false
}

Commands:

# Build JS from TS
npm run build

# List stacks
cdk ls/list
  # --long - Shows additional information (account, region, id, etc.)

# Get diff between code and infrastructure
cdk diff

# Compare cdk app with CF stack template
cdk diff --template ~/stacks/MyStack.old MyStack

# Synthesize CF template and print to stdout
cdk synth

# Specify multiple context values (any number)
cdk synth --context key1=value1 --context key2=value2 MyStack

# Different context values for each stack
cdk synth --context Stack1:key=value Stack2:key=value Stack1 Stack2

# Checks your CDK project for potential problems
cdk doctor

# Bootstrap toolkit CF stack
cdk bootstrap

# To use the modern bootstrap template
export CDK_NEW_BOOTSTRAP=1

Deploy CF stacks:

# Deploy changes to AWS
cdk deploy [stack_name]
    --profile profile_name                                          # Set AWS profile
    --parameters param=value --parameters param=value               # Pass parameters to template
    --parameters Stack1:param=value --parameters Stack2:param=value # Pass parameters to templates

# Save output to a specific file
cdk deploy --outputs-file outputs.json [stack_name]

# Destroy CF stack
cdk destroy

Install/update modules:

# Install modules and add them to package.json
npm install @aws-cdk/aws-s3 @aws-cdk/aws-lambda
    -D  # option to add module to devDependencies

# Update all modules to the latest permitted version according to the rules you specified in package.json
npm update

SAM

Run lambda locally in docker using SAM:

# SAM must be installed

cdk synth --no-staging > template.yaml
sam local invoke WidgetsWidgetHandler1BC9DB34 --no-event
# Where 'WidgetsWidgetHandler1BC9DB34' is a logical ID from CF template

# Invoke function with vars
# Create file tmp.json
{
    "WidgetsWidgetHandler1BC9DB34": {
      "BUCKET": "applambdastack-widgetswidgetstore0ed7fdb7-1oek5wz08wpnv"
    }
}

echo '{"path": "/", "httpMethod": "GET" }' | sam local invoke WidgetsWidgetHandler1BC9DB34 --event - --env-vars tmp.json

CLI

~/.aws/config:

[default]
output = json
region = eu-west-1      # important!

[profile <username>]
output = json
region = eu-west-1      # important!

~/.aws/credentials:

[default]
aws_access_key_id = <key_id>
aws_secret_access_key = <access_key>

[<username>]
aws_access_key_id = <key_id>
aws_secret_access_key = <access_key>

Run aws command as user:

aws --profile <username> <some_aws_command>

Control output:

aws iam get-user --query 'User.Arn' --output text
aws iam list-users --query 'Users[0]'
aws iam list-users --query 'Users[*].{name:UserName, arn:Arn}'
# output without keys
aws iam list-users --query 'Users[*].[UserName, Arn]'
# output where UserName==den
aws iam list-users --query 'Users[?UserName==`den`].[UserName, Arn]'
aws ec2 describe-volumes --query 'Volumes[*].{ID:VolumeId,InstanceId:Attachments[0].InstanceId,AZ:AvailabilityZone,Size:Size}'

S3

aws --profile <user> s3 ls s3://<bucket> --recursive --human-readable
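A couple of related s3 subcommands, sketched with placeholder paths:

# copy a local file to a bucket
aws --profile <user> s3 cp ./file.txt s3://<bucket>/path/file.txt

# mirror a local directory to a bucket (only changed files are uploaded)
aws --profile <user> s3 sync ./local-dir s3://<bucket>/remote-dir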

CloudFront

  1. Create distribution (cname - cdn.example.com)
  2. Create Route53 CNAME (cdn.example.com -> d34cbfrb6qn178.cloudfront.net)
  3. Add cert for https (*.example.com or cdn.example.com)
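Step 2 can also be scripted; a sketch using the CLI (the hosted zone ID is a placeholder, the distribution domain is taken from the example above):

aws route53 change-resource-record-sets \
    --hosted-zone-id <zone_id> \
    --change-batch '{
      "Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
          "Name": "cdn.example.com",
          "Type": "CNAME",
          "TTL": 300,
          "ResourceRecords": [{"Value": "d34cbfrb6qn178.cloudfront.net"}]
        }
      }]
    }'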

Create own AMI from instance

Tested on AWS Ubuntu Xenial. create-own-ami.sh:

#!/bin/bash

set -e

instance_id=$(ec2metadata --instance-id)
instance_name=$(aws ec2 describe-tags --filters Name=resource-id,Values=$instance_id Name=key,Values=Name --query Tags[].Value --output text)

if [ -z "$instance_name" ]; then
        ami_name="ec2-$instance_id-auto-created"
else
        ami_name="$instance_name-auto-created"
fi

ami_description="AMI auto created by script from instance $instance_id"

function delete_snapshots_of_ami() {
        array=($(aws ec2 describe-images --filters "Name=image-id,Values=$1" --query Images[0].BlockDeviceMappings[].Ebs.SnapshotId --output text))

        for i in "${array[@]}"
        do
                aws ec2 delete-snapshot --snapshot-id $i
                echo "Deleted snapshot $i"
        done

}


function delete_ami() {
        ami_id=$1

        snapshots=($(aws ec2 describe-images --filters "Name=image-id,Values=$ami_id" --query Images[0].BlockDeviceMappings[].Ebs.SnapshotId --output text))

        aws ec2 deregister-image --image-id $ami_id
        echo "Deregistered AMI $ami_id"

        sleep 5.0

        for snap_id in "${snapshots[@]}"
        do
                aws ec2 delete-snapshot --snapshot-id $snap_id
                echo "Deleted snapshot $snap_id"
        done
}


ami_last_id=$(aws ec2 describe-images --filters "Name=name,Values=$ami_name" --query Images[0].ImageId --output text)

if [ "$ami_last_id" = "None" ]; then
        echo "Not exist previous AMI. Nothing to delete."
else
        echo "Start deleting previous AMI $ami_name with id $ami_last_id"
        delete_ami $ami_last_id

        echo "Previous AMI and associated snapshots were deleted."
fi

echo "Start creating AMI $ami_name from $instance_id"
ami_new_id=$(aws ec2 create-image --instance-id $instance_id --name "$ami_name" --description "$ami_description" --output text)

echo "Created new AMI $ami_new_id"

Deploy without downtime

deploy.sh:

# Define some global variables
export AUTO_SCALING_GROUP_NAME="asg"
export SCALING_POLICY="LaunchOne"
export ELB_NAME="load-balancer-new-site"

date >> deploy.log

# Returns the number of instances currently in the AutoScaling group
function getNumInstancesInAutoScalingGroup() {
    local num=$(aws autoscaling describe-auto-scaling-groups --auto-scaling-group-name "$AUTO_SCALING_GROUP_NAME" --query "length(AutoScalingGroups[0].Instances)")
    local __resultvar=$1
    eval $__resultvar=$num
}

# Returns the number of healthy instances currently in the ELB
function getNumHealthyInstancesInELB() {
    local num=$(aws elb describe-instance-health --load-balancer-name "$ELB_NAME" --query "length(InstanceStates[?State=='InService'])")
    local __resultvar=$1
    eval $__resultvar=$num
}

# Get the current number of desired instances to reset later
export existingNumDesiredInstances=$(aws autoscaling describe-auto-scaling-groups --auto-scaling-group-name "$AUTO_SCALING_GROUP_NAME" --query "AutoScalingGroups[0].DesiredCapacity")

# Determine the number of instances we expect to have online
getNumInstancesInAutoScalingGroup numInstances
numInstancesExpected=$(expr $numInstances \* 2)
echo "Expecting to have $numInstancesExpected instance(s) online."

echo "Will launch $numInstances Instance(s)..."
for i in `seq 1 $numInstances`;
do
    echo "Launching instance..."
    aws autoscaling execute-policy --no-honor-cooldown --auto-scaling-group-name "$AUTO_SCALING_GROUP_NAME" --policy-name "$SCALING_POLICY"
    sleep 5s
done

# Wait for the number of instances to increase
getNumInstancesInAutoScalingGroup newNumInstances
until [[ "$newNumInstances" == "$numInstancesExpected" ]];
do
    echo "Only $newNumInstances instance(s) online in $AUTO_SCALING_GROUP_NAME, waiting for $numInstancesExpected..."
    sleep 10s
    getNumInstancesInAutoScalingGroup newNumInstances
done

# Wait for the ELB to determine the instances are healthy
echo "All instances online, waiting for the Load Balancer to put them In Service..."
getNumHealthyInstancesInELB numHealthyInstances
until [[ "$numHealthyInstances" == "$numInstancesExpected" ]];
do
    echo "Only $numHealthyInstances instance(s) In Service in $ELB_NAME, waiting for $numInstancesExpected..."
    sleep 10s
    getNumHealthyInstancesInELB numHealthyInstances
done

# Update the desired capacity back to its previous value
echo "Resetting Desired Instances to $existingNumDesiredInstances"
aws autoscaling update-auto-scaling-group --auto-scaling-group-name "$AUTO_SCALING_GROUP_NAME" --desired-capacity $existingNumDesiredInstances

# Success!
echo "Deployment complete!"

EC2

aws ec2 describe-security-groups --group-names 'Default' --query 'SecurityGroups[0].OwnerId' --output text
aws ec2 describe-regions                     # show available regions

aws ec2 stop-instances --instance-ids <ids>  # stop instances
aws ec2 start-instances --instance-ids <ids> # start instances

ECS

Step 1: Configure repository
  • Create repository

Step 2: Build, tag, and push Docker image

Step 3: Create a task definition
  • Task definition name, container name, image, memory limits (MB), port mappings

Step 4: Configure service
  • Service name, desired number of tasks, add ELB

Step 5: Configure cluster
  • Cluster name, EC2 instance type, number of instances, key pair, security group, container instance IAM role

ECS

Create cluster: clusterName1 ECS cluster clusterName1

Create task definition: clusterName1 taskDefName1 Task definition taskDefName1:1

Create instances for: clusterName1 ECS instances for clusterName1

Create service: servName1 Service created. Tasks will start momentarily. View: servName1

EC2

CloudFormation stack created CloudFormation stack arn:aws:cloudformation:eu-west-1:798327215437:stack/EC2ContainerService-clusterName1/a12e95d0-c1f2-11e6-adc6-503abe701cfd

Internet gateway created Internet gateway igw-2b3ef34f

VPC created VPC vpc-50024b34

Route table created Route table rtb-995c45fd

VPC attached gateway created VPC attached gateway EC2Co-Attac-6YYFGR8UODP0

Subnet 1 created Subnet 1 subnet-e92243b1

ELB security group created ELB security group sg-e1009987

Subnet 2 created Subnet 2 subnet-ff3e1a9b

Public routing created Public routing EC2Co-Publi-1JR7V606VFBC5

Subnet 1 association created Subnet 1 association rtbassoc-46f3be21

ECS security group created ECS security group sg-a60099c0

Subnet 2 association created Subnet 2 association rtbassoc-45f3be22

Auto Scaling group created Auto Scaling group EC2ContainerService-clusterName1-EcsInstanceAsg-RMPBUPVOQ9OC

Launch configuration created Launch configuration EC2ContainerService-clusterName1-EcsInstanceLc-953ZPR1US8UH

Elastic load balancer created Elastic load balancer EC2Contai-EcsElast-5NIU1WVQN742

ECR

aws ecr get-login

aws ecr describe-repositories
aws ecr list-images --repository-name my-web-app
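get-login only prints a docker login command; to actually authenticate, its output can be executed (a sketch for AWS CLI v1; the region is a placeholder):

# run the printed docker login command
$(aws ecr get-login --no-include-email --region eu-west-1)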

IAM

Attach an existing managed policy to a user:

aws iam attach-user-policy --policy-arn arn:aws:iam::aws:policy/<value> --user-name <value>

Create new managed policy:

aws iam create-policy --policy-name <value> --policy-document file://<value>
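A minimal sketch of a policy document for the command above (the policy name, file name, and statement are placeholders):

cat > my-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "*"
    }
  ]
}
EOF

aws iam create-policy --policy-name my-policy --policy-document file://my-policy.json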

Get user ID:

aws iam get-user --query 'User.Arn' --output text
aws iam get-user | awk '/arn:aws:/{print $2}'
aws iam list-users --query 'Users[?UserName==`den`].[Arn]' --output text
# create group
aws iam create-group --group-name <value>

# show groups
aws iam list-groups

# attach policy to group (example arn - arn:aws:iam::aws:policy/AdministratorAccess)
aws iam attach-group-policy --group-name <value> --policy-arn <policy_arn>

# show attached policies
aws iam list-attached-group-policies --group-name <value>

# delete user from group
aws iam remove-user-from-group --user-name <value> --group-name <value>

# delete group (first remove the users in the group, delete inline policies and detach any managed policies)
aws iam delete-group --group-name <value>

Control permissions across accounts

Service control policies (SCPs): Developers in all accounts cannot turn off CloudTrail, create IAM users, or set up AWS Directory Service:

"Statement": [
    {
        "Sid": "DenyUnapprovedAction",
        "Effect": "Deny",
        "Action": [
            "ds:*",
            "iam:CreateUser",
            "cloudtrail:StopLogging"
        ],
        "Resorce": [
            "*"
        ]
}
]

IAM permissions policy: Allow creating resources only in allowed regions:

"Effect": "Allow",
"Action": [
    "lambda:*"
],
"Resource": "*",
"Condition": {
    "StringEquals": [
        "us-west-1"
    ]
}

Permissions boundaries: Enable your developers to create IAM roles but ensure they cannot exceed their own permissions:

# region-restriction policy
"Effect" "Allow",
"Action": [
    "iam:CreatePolicy",
    "iam:CreatePolicyVersion",
    "iam:DeletePolicyVersion"
],
"Resource": "arn:aws:iam::<account-id>:policy/unicorns-*"

# allow role actions only when the region-restriction policy is attached as the permissions boundary

"Effect" "Allow",
"Action": [
    "iam:DetachRolePolicy",
    "iam:CreateRole",
    "iam:AttachRolePolicy"
],
"Resource": "arn:aws:iam::<account-id>:role/unicorns-*",
"Condition": {
    "StringEquals": {
        "iam:PermissionsBoundary": "arn:aws:iam::<account-id>:policy/region-restriction"
    }
}

S3

A link to a file looks like: https://s3-<aws_region>.amazonaws.com/<bucket_name>/<file_name>. Example: https://s3-eu-west-1.amazonaws.com/my-super-bucket/linux-155549_960_720.png
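Such links only work for public objects. For a private object, a temporary signed link can be generated instead (sketch; expiry is in seconds):

aws s3 presign s3://<bucket_name>/<file_name> --expires-in 3600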

Anaconda

Anaconda instructions

http://conda.pydata.org/docs/_downloads/conda-cheatsheet.pdf

Update:

conda update conda
conda update anaconda

Display a list of installed packages and their versions:

conda list

Create a new environment named snowflakes (stored under envs/) with the Biopython package:

conda create --name snowflakes biopython
conda create --name bunnies python=3 astroid babel

To activate this environment, use:

source activate snowflakes

To deactivate this environment, use:

source deactivate

See a list of environments:

conda info --envs

Make an exact copy of an environment:

conda create --name <dest_name> --clone <source_name>

Remove environment:

conda remove --name flowers --all

Remove pkg from env:

conda remove --name <env_name> <pkg_name>

Search for packages containing 'text':

conda search <text>
    --full-name    # search for packages whose full name is exactly 'python'


conda install <pkg_name>                                    # install pkg to current env
                          --name <env_name> <pkg_name>      # install pkg to 'env_name' environment
                          --channel https://conda.anaconda.org/pandas <pkg_name>    # install a package from Anaconda.org

pip install <pkg_name>      # install via pip
        uninstall <pkg_name>        # uninstall via pip

Ansible

Ansible

Ansible Vault

Create a new encrypted data file:

EDITOR=nano ansible-vault create foo.yml

Edit encrypted file:

EDITOR=nano ansible-vault edit foo.yml

Change your password on a vault-encrypted file or files:

ansible-vault rekey foo.yml bar.yml baz.yml

Encrypt/Decrypt files:

ansible-vault encrypt [--vault-password-file <path_to_file>] foo.yml bar.yml baz.yml
ansible-vault decrypt [--vault-password-file <path_to_file>] foo.yml bar.yml baz.yml
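Playbooks that reference encrypted files are run with the vault password supplied on the command line; a sketch (site.yml and the password file path are placeholders):

ansible-playbook site.yml --ask-vault-pass
ansible-playbook site.yml --vault-password-file ~/.vault_pass.txt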

Instruction

https://habrahabr.ru/post/195048/

Config /etc/ansible/ansible.cfg or ~/.ansible.cfg:

# disable SSH host key checking (useful when hosts are recreated and known_hosts entries change)
host_key_checking = False

Hosts /etc/ansible/hosts

ssh-keygen
ssh-copy-id <host>    # copy your key to each host

ssh-agent bash
ssh-add ~/.ssh/id_rsa
# run simple command
ansible <group_of_hosts> -a "/bin/echo Hello, World!"
ansible all -a "/bin/echo hello"

# using module 'service' on the 'webservers' group
ansible webservers -m service -a "name=nginx state=stopped"

# ping known hosts
ansible all -m ping
                    -u spider                           # ping with username
                    -u bruce -b                             # to root user
                    -u bruce -b --become-user batman        # to sudo user

# run playbook with inventory file
ansible-playbook -i <inventory> <playbook.yml>

# run playbook asking for the become (sudo) password, or as root
ansible-playbook -i inventory playbooks/lemp.yml --ask-become-pass(-K) -u spider
ansible-playbook -i inventory playbooks/lemp.yml -u root

Inventory file:

# become for all operations
192.168.0.5 ansible_become=true ansible_user=manager

Show all available facts:

# gather_facts = true

ansible -m setup localhost
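The output can be narrowed to specific facts with the setup module's filter parameter; a sketch:

# show only the distribution-related facts
ansible -m setup localhost -a 'filter=ansible_distribution*'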

Use facts in a playbook:

{{ ansible_distribution_release }}  # trusty, ...
{{ ansible_distribution }}          # Debian, ...

Inventory file example

Example inventory:

[WebServersG1]
webserver1-g1 ansible_ssh_port=4444 ansible_ssh_host=192.168.1.50 ansible_ssh_user=ubuntu
staging.test.com.ua

[WebServersG2]
webserver1-g2:4444          # alternative SSH port
webserver2-g2

[WebServersProxy]
webserver-proxy1
webserver-proxy2

[DataBase]
db1
db2

[DataBaseSlave]
dbs1
dbs2

[SomeServers]
someserver1
someserver2


[WEB:children]
WebServersG1
WebServersG2
WebServersProxy

Ansible playbook

Best practices project structure:

http://docs.ansible.com/ansible/playbooks_best_practices.html
---
- hosts: test
  tasks:

  - name: Install nginx package
    apt: name=nginx update_cache=yes
    sudo: yes

  - name: Starting nginx service
    service: name=nginx state=started
    sudo: yes
---
- hosts: test
  tasks:

  - name: Stopping nginx service
    service: name=nginx state=stopped
    #sudo: yes

web.yml:

---
- hosts: all
  user: ubuntu

  tasks:
    - name: Update apt cache
      apt: update_cache=yes
      sudo: yes

    - name: Install required packages
      apt: name={{ item }}
      sudo: yes
      with_items:
        - nginx
        - postgresql

    - name: Add User Pupkin
      user: name='pupkin'

    - name: Add User Pupkin with shell and group
      user: name='pupkin' shell='/bin/zsh' groups='sudo'

    - name: Add BestAdminsTeam
      user: name={{ item.user }} shell={{ item.shell }} groups='sudo'
      with_items:
        - { user: 'pupkin', shell: '/bin/zsh' }
        - { user: 'oldfag', shell: '/bin/sh' }

{{ansible_distribution_release}} # codename of release

Ansible-galaxy

# https://galaxy.ansible.com/intro

ansible-galaxy install <username.rolename>
    --roles-path <destination>    # install into a custom directory
    -r <roles.txt>                # install roles listed in a file

# roles.txt format: <username.rolename>[,version]
#   user1.role1,v1.0.0
#   user2.role2,v0.5
#   user2.role3

# install roles from different sources
# ansible-galaxy install -r install_roles.yml

install_roles.yml:

# from galaxy
- src: yatesr.timezone

# from github
- src: https://github.com/bennojoy/nginx

# from github, overriding the name and specifying a specific tag
- src: https://github.com/bennojoy/nginx
  version: master
  name: nginx_role

# from a webserver, where the role is packaged in a tar.gz
- src: https://some.webserver.example.com/files/master.tar.gz
  name: http-role

# from bitbucket, if bitbucket happens to be operational right now :)
- src: git+http://bitbucket.org/willthames/git-ansible-galaxy
  version: v1.4

# from bitbucket, alternative syntax and caveats
- src: http://bitbucket.org/willthames/hg-ansible-galaxy
  scm: hg

Debug mode

---
- name: Setup database (remote) | downloading db dump
  register: output                  # write debug to output var
  shell: /usr/local/bin/aws s3 cp {{ project_db_init_dump_folder }}{{ project_db_init_dump_name }} /tmp/{{ project_db_init_dump_name }}
  delegate_to: 127.0.0.1
  ignore_errors: True               # don't stop on error

- debug: var=output                 # view output debug

Docker

Dockerfile

Without bash:

RUN ["apt-get", "install", "-y", "nginx"]

Example Dockerfile:

FROM ubuntu:14.04
MAINTAINER John Smith <john@gmail.com>
RUN apt-get update && apt-get install -y nginx
RUN echo 'Hi, I am in your container' > /usr/share/nginx/html/index.html
EXPOSE 80

Example php5-fpm:

FROM ubuntu:14.04
MAINTAINER John Smith <john@gmail.com>
RUN apt-get update && apt-get install -y php5-fpm
RUN echo 'cgi.fix_pathinfo = 0' >> /etc/php5/fpm/php.ini
ENTRYPOINT ["php5-fpm"]
CMD ["-F"]
EXPOSE 9000

Clean up APT when done:

RUN apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
ENV myName="John Doe" myDog=Rex\ The\ Dog \
    myCat=fluffy


ARG var[=value]     # build-time variables; set with: docker build --build-arg var=value
                    # only variables declared in the Dockerfile can be set

Old php in docker

Dockerfile:

FROM debian:7.8

MAINTAINER Siryk Valentin <valentinsiryk@gmail.com>

RUN apt-get update \
    && apt-get install -y \
        libbz2-dev \
        libcurl4-gnutls-dev \
        libpng12-dev \
        libjpeg62-dev \
        libmcrypt-dev \
        libmhash-dev \
        libmysqlclient-dev \
        libxml2-dev \
        libxslt1-dev \
        make \
        apache2 \
        apache2-threaded-dev \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*

ENV PHP_VERSION 5.2.9

# Download and unpack PHP
#COPY ./php-${PHP_VERSION}.tar.gz /tmp/
ADD http://museum.php.net/php5/php-${PHP_VERSION}.tar.gz /tmp/
RUN tar -xzf /tmp/php-${PHP_VERSION}.tar.gz -C /tmp

WORKDIR /tmp/php-${PHP_VERSION}

RUN ln -s /usr/lib/x86_64-linux-gnu/libjpeg.a /usr/lib/libjpeg.a \
    && ln -s /lib/x86_64-linux-gnu/libpng12.so.0.49.0 /usr/lib/libpng.so \
    && ln -s /usr/lib/x86_64-linux-gnu/libmysqlclient.so /usr/lib/libmysqlclient.so \
    && ln -s /usr/lib/x86_64-linux-gnu/libmysqlclient_r.so /usr/lib/libmysqlclient_r.so


# Configure
RUN ./configure \
        --with-apxs2=/usr/bin/apxs2 \
        --disable-cgi \
        --with-mysql \
        --with-pdo-mysql
        #--with-mysqli \
        #--enable-cli \
        #--enable-discard-path \
        #--enable-mbstring \
        #--with-curl \
        #--with-gd \
        #--with-jpeg-dir \
        #--with-mcrypt

# Install
RUN make \
    && make install

RUN rm -rf /tmp/php* /var/tmp/*

RUN a2enmod rewrite

COPY ./default.conf /etc/apache2/sites-available/default

#COPY ./php.ini /usr/local/lib/php.ini

#RUN echo "<?php phpinfo(); ?>" > /var/www/index.php

EXPOSE 80

CMD [ "/usr/sbin/apache2ctl", "-D", "FOREGROUND" ]

default.conf:

<VirtualHost *:80>
        ServerAdmin webmaster@localhost

        DocumentRoot /var/www

        <Directory />
               Options FollowSymLinks
               AllowOverride All
        </Directory>

        <Directory /var/www/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
        </Directory>


        <FilesMatch \.php$>
                SetHandler application/x-httpd-php
        </FilesMatch>


        DirectoryIndex index.php


        ErrorLog ${APACHE_LOG_DIR}/error.log

        # Possible values include: debug, info, notice, warn, error, crit,
        # alert, emerg.
        LogLevel warn

        CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>

Docker

Show Docker logs on Ubuntu 16.04+:

journalctl -u docker.service

Security benchmark: https://github.com/docker/docker-bench-security

docker run --name docker-bench --rm -it --net host --pid host --userns host --cap-add audit_control -e DOCKER_CONTENT_TRUST=$DOCKER_CONTENT_TRUST -v /var/lib:/var/lib -v /var/run/docker.sock:/var/run/docker.sock -v /usr/lib/systemd:/usr/lib/systemd -v /etc:/etc --label docker_bench_security docker/docker-bench-security

Move volume data to another volume (rename volume):

# Will create new volume and copy data to it from old volume
# FROM  - old volume
# TO    - new volume
export FROM=<v_1> TO=<v_2> && docker volume create --name $TO && docker run --rm -it -v $FROM:/from -v $TO:/to alpine ash -c "cd /from; cp -arv . /to"

Clear container logs:

> $(docker inspect --format='{{.LogPath}}' <CONTAINER>)

To use node_modules locally but ignore it in the Docker container, use the following syntax in docker-compose.yml. Everything in ./angularApp is mapped to /opt/app, and a second volume is mounted at /opt/app/node_modules/, which masks the host directory with an empty one, even if ./angularApp/node_modules on the local machine is not empty:

volumes:
   - './angularApp:/opt/app'
   - /opt/app/node_modules/

File .dockerignore:

Dockerfile
docker-compose.yml

Detach from a container without stopping it:

Ctrl+p, Ctrl+q

If you see "The TERM environment variable is unset!":

export TERM=xterm

docker ps       # show running containers
          -a    # show all containers
          -l    # show last started container


docker search <image>        # search images

docker pull <image>:<tag>    # pull to local copy

docker run -t -i <image>:<tag> [command]
            <image>
            -ti <image>                          # run and get a TTY
            -ti <image> [command]                # run a command
            -d <image> [command]                 # run in background
            -P ...                               # open all required ports
            -p 80:5000 -p 85:5005  ...           # 80 -> 5000
            --name <some_name>                   # add name for container
            --env MYVAR2=foo                     # add env variable
            --env-file ./env.list                # add env file
            -v /etc/localtime:/etc/localtime:ro  # mount volume or file

            --log-driver=syslog
            --log-opt syslog-address=udp://<address>:514
                      tag="some_tag"
                      max-file=2
                      max-size=2k
                      syslog-facility=daemon


docker images                   # show local images
              -q                # show only IDs
              -f dangling=true  # show trash images

docker rmi $(docker images -f dangling=true -q)    # remove trash images


docker port <name>         # show open ports
                   [port]  # show port

docker logs <name>         # Shows us the standard output of a container.
            -f <name>      # tail -f

docker stop <name>         # stop a running container; returns the name of the stopped container

docker start <name>        # start a stopped container; returns the name of the started container
               -i <name>   # and attach to it

docker attach <name>   # attach to a running container

docker rm <name> <name> ...       # remove container if stopped
          -f <name> <name> ...    # remove container!

docker rmi training/sinatra       # remove images

docker cp <container>:<src_path_in_container> <dest_local_path>    # cp files and directories. Example: backup data


docker top <name>        # top for container

docker inspect <name>    # return json information
               -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <name>

docker commit -m "message" -a "John Smith" 0b2616b0e5a8 ouruser/sinatra:v2    # save current state of image

docker tag 5db5f8471261 ouruser/sinatra:devel    # add new tag

Images that use the v2 or later format have a content-addressable identifier called a digest. As long as the input used to generate the image is unchanged, the digest value is predictable. To list image digest values, use the --digests flag:

docker images --digests | head
docker pull ouruser/sinatra@sha256:cbbf2f9a99b47fc460d422812b6a5adff7dfee951d8fa2e4a98caa0382cfbdbf

docker push ouruser/sinatra

Docker monitoring

ctop

https://github.com/bcicen/ctop/blob/master/README.md

Installation:

wget https://github.com/bcicen/ctop/releases/download/v0.5/ctop-0.5-linux-amd64 -O ctop
sudo mv ctop /usr/local/bin/
sudo chmod +x /usr/local/bin/ctop

Run via docker:

docker run -ti --name ctop --rm -v /var/run/docker.sock:/var/run/docker.sock quay.io/vektorlab/ctop:latest

Docker-compose

Installation:

# https://docs.docker.com/compose/install/

# Docker
sudo apt update
sudo apt install -y apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo apt-key fingerprint 0EBFCD88
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
sudo apt-get install docker-ce
sudo usermod -aG docker $USER

# Docker-compose
sudo curl -L "https://github.com/docker/compose/releases/download/1.11.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose

Bash completion (https://docs.docker.com/compose/completion/):

curl -L https://raw.githubusercontent.com/docker/compose/$(docker-compose version --short)/contrib/completion/bash/docker-compose -o /etc/bash_completion.d/docker-compose

docker-compose up -d        # start services
                           stop
                           stop <service_name>
                           run <service_name> <command>
                           exec <service_name> <command>    # -ti auto
                           -p <project_name>        # name of project
                           --force-recreate         # recreate containers
                           down                                     # stop and delete containers and network_default
                                        --rmi all           # + del all service images

Logging:

logging:
  driver: "json-file"
  options:
    max-size: "100M"
    max-file: "5"

Load environment variables for a service from a file:

version: '2.1'
services:
  nginx1:
    env_file: .env

Example nginx docker-compose.yml:

version: '2'
services:
  nginx1:
    image: nginx
    ports:
     - "81:80"
    external_links:
     - redis_1
     - project_db_1:mysql
     - project_db_1:postgresql
    tty: true

Example app with postgres:

version: '2'
services:
  app:
    build: .
    container_name: hydrareports
    image: hydrareports
    ports:
     - "80:80"
    restart: always
    depends_on:
     - postgres
    volumes:
     - ./.env:/var/www/html/.env

  postgres:
    image: postgres
    container_name: hydrareports_postgres
    volumes:
     - ./.data/db:/var/lib/postgresql/data
    restart: always
    environment:
     - TZ=Europe/Kiev
     - POSTGRES_PASSWORD=${DATASOURCES_PASSWORD}
     - POSTGRES_USER=${DATASOURCES_USERNAME}
     - POSTGRES_DB=${DATASOURCES_DB}

Gedit and xed docker syntax highlighting

https://github.com/mrorgues/gedit-xed-docker-syntax-highlighting

wget -q https://raw.githubusercontent.com/mrorgues/gedit-xed-docker-syntax-highlighting/master/docker.lang
sudo mv -i docker.lang /usr/share/gtksourceview-3.0/language-specs/docker.lang
sudo chmod 644 /usr/share/gtksourceview-3.0/language-specs/docker.lang

SWARM

Requirements

The following ports must be available. On some systems, these ports are open by default.

  • TCP port 2377 for cluster management communications
  • TCP and UDP port 7946 for communication among nodes
  • UDP port 4789 for overlay network traffic

If you plan on creating an overlay network with encryption (--opt encrypted), you also need to ensure IP protocol 50 (ESP) traffic is allowed.

Init/join cluster:

docker swarm init --advertise-addr <ip>

# join to cluster
docker swarm join --token <token> <ip>:2377

Commands:

docker info
docker node ls  # show nodes in the cluster

# get join token (run on master)
docker swarm join-token worker -q

# inspect node
docker node inspect <node>

# change node activity
docker node update --availability drain <worker_hostname>
docker node update --availability active <NODE-ID>

# add/rm node labels
docker node update --label-add <key>=<value> <node>
docker node update --label-rm <key>=<value> <node>

# run service only on workers
docker service create --constraint node.role==worker <service>
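A short sketch of the basic service lifecycle (service and image names are placeholders):

# create a replicated service, then scale, inspect, and remove it
docker service create --name web --replicas 3 -p 80:80 nginx
docker service scale web=5
docker service ps web
docker service rm web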

Elastic

X-Pack

X-Pack bundles security, alerting, monitoring, reporting, and graph capabilities into one easy-to-install package.

Installation

On the host with Elasticsearch and Kibana, run the following commands:

# install X-Pack plugin for Elasticsearch
sudo /usr/share/elasticsearch/bin/elasticsearch-plugin install x-pack --batch

# restart Elasticsearch
sudo service elasticsearch restart

# install X-Pack plugin for Kibana
sudo /usr/share/kibana/bin/kibana-plugin install x-pack

# restart Kibana
sudo service kibana restart

Note

To log in to Kibana, you can use the built-in elastic user and the password changeme. In the Kibana web UI, select Management -> Users and change the default passwords for the elastic and kibana users.

After changing the password for the kibana user, you will need to update /etc/kibana/kibana.yml:

elasticsearch.username: "kibana"
elasticsearch.password: "<kibana_user_password>"

And restart Kibana:

sudo service kibana restart

Removing

On the host with Elasticsearch and Kibana, run the following commands:

# remove X-Pack plugin for Elasticsearch
sudo /usr/share/elasticsearch/bin/elasticsearch-plugin remove x-pack
sudo service elasticsearch restart

# remove X-Pack plugin for Kibana
sudo /usr/share/kibana/bin/kibana-plugin remove x-pack
sudo service kibana restart

Enabling and Disabling X-Pack Features

By default, all X-Pack features are enabled. You can explicitly enable or disable X-Pack features in elasticsearch.yml and kibana.yml:

# Set to false to disable X-Pack security. Configure in both elasticsearch.yml and kibana.yml.
xpack.security.enabled

# Set to false to disable X-Pack monitoring. Configure in both elasticsearch.yml and kibana.yml.
xpack.monitoring.enabled

# Set to false to disable X-Pack graph. Configure in both elasticsearch.yml and kibana.yml.
xpack.graph.enabled

# Set to false to disable Watcher. Configure in elasticsearch.yml only.
xpack.watcher.enabled

# Set to false to disable X-Pack reporting. Configure in kibana.yml only.
xpack.reporting.enabled

Elasticsearch

Optimisations

Remove old indices’ replicas using Curator “action: replicas”

Best practices:

  1. max heap size for Java: 30-32 GB
  2. one shard per index per node
  3. two replicas per index for failover

Get cluster stats:

curl "localhost:9200/_cluster/stats?human&pretty"
# /_cluster/stats/nodes/node1,node*,master:false

Get health:

curl localhost:9200/_cat/health?v
# Output
#epoch      timestamp cluster        status node.total node.data shards pri relo init unassign pending_tasks max_task_wait_time active_shards_percent
#1548094256 20:10:56  docker-cluster green           1         1    645 645    0    0        0             0                  -                100.0%

Get allocation:

curl localhost:9200/_cat/allocation?v

# Output:
#shards disk.indices disk.used disk.avail disk.total disk.percent host      ip        node
# 5         260b    47.3gb     43.4gb    100.7gb           46 127.0.0.1 127.0.0.1 CSUXak2

Get index settings:

curl "localhost:9200/<index>/_settings"

# multiple
/<index1>,<index2>/_settings
/_all/_settings
/log_2013_*/_settings

# filtering settings by name
/log_2013_-*/_settings/index.number_*

Create template with settings for all indices:

curl -XPUT "localhost:9200/_template/all" -H 'Content-Type: application/json' -d'
{
  "template": "*",
  "settings": {
    "number_of_replicas": 0,
    "number_of_shards": 1
  }
}
'

Basics

Templates:

# get
curl localhost:9200/_cat/templates?v
# check if the template exists
curl -I localhost:9200/_template/<template>
# delete
curl -XDELETE localhost:9200/_template/<template>

Delete all indices:

curl -XDELETE "localhost:9200/*"

Delete index:

curl -XDELETE localhost:9200/<index>

Get all indices and their sizes:

# more options here - https://www.elastic.co/guide/en/elasticsearch/reference/current/cat-indices.html
curl localhost:9200/_cat/indices?v
    ?v          - show headers
    ?h=index    - show only index names

Curator

# show elasticsearch indices
curator_cli show_indices

Installation
  • Ubuntu 16.04:

    sudo sh -c "echo 'deb http://packages.elastic.co/curator/4/debian stable main' > /etc/apt/sources.list.d/elasticsearch-curator.list"
    sudo apt update && sudo apt install elasticsearch-curator
    
Configuration

Configuration file <HOME>/.curator/curator.yml:

---
# Remember, leave a key empty if there is no value.  None will be a string,
# not a Python "NoneType"
client:
  hosts:
    - 127.0.0.1
  port: 9200
  url_prefix:
  use_ssl: False
  certificate:
  client_cert:
  client_key:
  ssl_no_validate: False
  http_auth:
  timeout: 30
  master_only: False

logging:
  loglevel: INFO
  logfile:
  logformat: default
  blacklist: ['elasticsearch', 'urllib3']

Delete old Elasticsearch indices

Create action file. Example - del.yml:

---
# Remember, leave a key empty if there is no value.  None will be a string,
# not a Python "NoneType"
#
# Also remember that all examples have 'disable_action' set to True.  If you
# want to use this action as a template, be sure to set this to False after
# copying it.
actions:
  1:
    action: delete_indices
    description: >-
      Delete indices older than 30 days (based on index name), for logstash-
      prefixed indices. Ignore the error if the filter does not result in an
      actionable list of indices (ignore_empty_list) and exit cleanly.
    options:
      ignore_empty_list: True
      timeout_override:
      continue_if_exception: False
      disable_action: False
    filters:
    - filtertype: pattern
      kind: prefix
      value: logstash-
      exclude:
    - filtertype: age
      source: name
      direction: older
      timestring: '%Y.%m.%d'
      unit: days
      unit_count: 30
      exclude:

Run action:

curator del.yml

# curator --dry-run del.yml

ELK stack 5.x [Docker]

sudo sysctl -w vm.max_map_count=262144
docker-compose up -d

docker-compose.yml:

version: '2'
services:

  filebeat:
    image: prima/filebeat:5.1.1
    container_name: filebeat
    volumes:
      - ./logs:/logs
      - ./filebeat/data:/data
      - ./filebeat/filebeat.yml:/filebeat.yml

  logstash:
    image: logstash:5.1.1
    container_name: logstash
    volumes:
      - ./logstash.conf:/logstash.conf
      - ./geo/GeoLite2-City.mmdb:/etc/logstash/geo_db
    command: -f /logstash.conf

  elasticsearch:
    image: elasticsearch:5.1.1
    container_name: elasticsearch
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    mem_limit: 1g
    volumes:
      - ./esdata:/usr/share/elasticsearch/data

  kibana:
    image: kibana:5.1.1
    container_name: kibana
    environment:
      - ELASTICSEARCH_URL=http://elasticsearch:9200
    ports:
      - "5601:5601"

filebeat/filebeat.yml:

filebeat.prospectors:

- input_type: log

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /logs/*.log


#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["logstash:5044"]

logstash.conf:

input {
  beats {
    port => "5044"
  }
}

filter {
  grok {
    match => { "message" => "%{WORD}\[%{NUMBER}\]: \[%{HTTPDATE:time}\] %{IPORHOST:ip} \"(?:%{WORD:verbs} %{NOTSPACE:request}(?: HTTP/%{NUMBER:http})?|%{DATA:rawrequest})\" %{NUMBER:response} (?:%{NUMBER:bytes}|-) %{QS:agent} \"%{NUMBER:duration}\"" }
  }

  date {
    match => [ "time", "dd/MMM/yyyy:HH:mm:ss Z" ]
  }

  if "_grokparsefailure" in [tags] {
    drop { }
  }

  geoip {
      source => "ip"
      database => "/etc/logstash/geo_db"
      fields => [
        "city_name",
        "continent_code",
        "country_code2",
        #"country_code3",
        "country_name",
        #"dma_code",
        #"ip",
        #"latitude",
        "location",
        #"longitude",
        #"postal_code",
        "region_name",
        "timezone"
      ]
  }
}

output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
  }
}

ELK stack 5.x [Ubuntu 14.04/16.04]

[diagram: ELK stack (_images/elk_stack.svg)]

1. NGINX

Installation and configuration

apt update
apt install nginx

Use openssl to create an admin user, called kibanaadmin (you should use another name), that can access the Kibana web interface:

sudo -v
echo "kibanaadmin:`openssl passwd -apr1`" | sudo tee -a /etc/nginx/htpasswd.users

Change the Nginx default server block /etc/nginx/sites-available/default:

server {
    listen 80 default_server;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl default_server;

    # Replace with your hostname
    server_name elk.company.com;

    ssl on;

    # Replace with your paths to certs
    ssl_certificate /path/to/keys/cert.crt;
    ssl_certificate_key /path/to/keys/key.key;

    location / {
        auth_basic "Restricted Access";
        auth_basic_user_file /etc/nginx/htpasswd.users;

        proxy_pass http://localhost:5601;
    }

    # Path for letsencrypt temporary files
    location /.well-known {
        root   /var/www/html;
    }
}
Generate certificates

Self-signed [not recommended]:

sudo mkdir /path/to/keys
cd /path/to/keys
sudo openssl req -x509 -nodes -newkey rsa:4096 -keyout key.key -out cert.crt -days 365

sudo nginx -t
sudo service nginx restart

Letsencrypt:

sudo add-apt-repository ppa:certbot/certbot
sudo apt-get update
sudo apt-get install certbot


# Replace with your webroot and hostname
letsencrypt certonly --webroot -w /var/www/html -d elk.company.com

# Letsencrypt will generate certs and show path to them (paste this path to web-server config)

Add cron tasks to automatically renew Letsencrypt certs:

sudo crontab -e

# Check or renew certs twice per day
0 12,18 * * * certbot renew --post-hook "systemctl reload nginx"

2. Elastic apt-repos

Add the Elastic apt repos. Run the following commands step by step:

wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
sudo apt install apt-transport-https
echo "deb https://artifacts.elastic.co/packages/5.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-5.x.list
sudo apt update

3. Elasticsearch

Elasticsearch (receives input messages from Logstash and stores them).

Installation

Note

Elasticsearch requires Java 8. Java 9 is not supported. Use the official Oracle distribution or an open-source distribution such as OpenJDK.

Install Java 8:

# Ubuntu 16.04
sudo apt install openjdk-8-jre

# Ubuntu 14.04
# use any method of Java 8 installation

Install Elasticsearch from repos:

sudo apt install elasticsearch

Elasticsearch will be installed in /usr/share/elasticsearch

Note

Add Elasticsearch to autorun - https://www.elastic.co/guide/en/elasticsearch/reference/5.1/deb.html#_sysv_literal_init_literal_vs_literal_systemd_literal

  • systemd [Ubuntu 16.04]:

    sudo /bin/systemctl daemon-reload
    sudo /bin/systemctl enable elasticsearch.service
    
  • init [Ubuntu 14.04]:

    sudo update-rc.d elasticsearch defaults 95 10
    
Configuration

Note

Elasticsearch will assign the entire heap specified in /etc/elasticsearch/jvm.options via the Xms (minimum heap size) and Xmx (maximum heap size) settings.

Main config-file /etc/elasticsearch/elasticsearch.yml:

# path to directory where to store the data (separate multiple locations by comma)
path.data: /path/to/data

# Use a descriptive name for the node:
node.name: node-1

Start Elasticsearch:

sudo service elasticsearch start

4. Kibana

Kibana (visualises data from Elasticsearch).

Installation

Note

Full up-to-date instructions here - https://www.elastic.co/guide/en/kibana/current/deb.html

Install Kibana from repos:

sudo apt install kibana

Note

Add Kibana to autorun - https://www.elastic.co/guide/en/kibana/current/deb.html#_sysv_literal_init_literal_vs_literal_systemd_literal

  • systemd [Ubuntu 16.04]:

    sudo /bin/systemctl daemon-reload
    sudo /bin/systemctl enable kibana.service
    
  • init [Ubuntu 14.04]:

    sudo update-rc.d kibana defaults 95 10
    
Configuration

Note

Kibana loads its configuration from the /etc/kibana/kibana.yml file by default.

Change main parameters in /etc/kibana/kibana.yml:

# Kibana is served by a back end server. This setting specifies the port to use.
server.port: 5601

# host to bind to (use "0.0.0.0" to allow remote connections)
server.host: "localhost"

# Elasticsearch URL
elasticsearch.url: "http://localhost:9200"

Start Kibana:

sudo service kibana start

5. Logstash

Logstash (receives input messages, filters them and sends to Elasticsearch).

Installation

Note

Logstash requires Java 8. Java 9 is not supported. Use the official Oracle distribution or an open-source distribution such as OpenJDK.

Install Logstash from repos:

sudo apt install logstash

Note

Add Logstash to autorun

  • systemd [Ubuntu 16.04]:

    sudo /bin/systemctl daemon-reload
    sudo /bin/systemctl enable logstash.service
    
  • init [Ubuntu 14.04]:

    sudo update-rc.d logstash defaults 95 10
    
Configuration

Note

Logstash will assign the entire heap specified in /etc/logstash/jvm.options via the Xms (minimum heap size) and Xmx (maximum heap size) settings.

Create config for certs generating /etc/logstash/ssl/ssl.conf:

[ req ]
distinguished_name = req_distinguished_name
x509_extensions    = v3_req
prompt             = no

[ req_distinguished_name ]
O = My Organisation

[ v3_req ]
basicConstraints = CA:TRUE
subjectAltName   = @alt_names

[ alt_names ]
DNS.1 = <HOST_DNS>
# IP.1 = 127.0.0.1 # example for IP

HOST_DNS - elk.my-company.com, etc. (the host where Logstash is installed and listens for input connections)

Note

In the [ alt_names ] section you can list host DNS names or IPs.

Generate SSL cert:

cd /etc/logstash/ssl/
sudo openssl req -x509 -newkey rsa:4096 -keyout logstash.key -out logstash.crt -days 10000 -nodes -batch -config ssl.conf
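To check that the SAN entries ended up in the certificate, it can be inspected (sketch):

openssl x509 -in logstash.crt -noout -text | grep -A1 'Subject Alternative Name'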

Add your config-file in conf.d-directory /etc/logstash/conf.d/my-conf.conf:

input {
  beats {
    port => "5044"
    ssl => true
    ssl_certificate => "/etc/logstash/ssl/logstash.crt"
    ssl_key => "/etc/logstash/ssl/logstash.key"
  }
}


filter {
  grok {

    # Pattern for next nginx access log format:
    #
    # log_format  main  '[$time_local] $remote_addr "$request" '
    #                   '$status $body_bytes_sent '
    #                   '"$http_user_agent" "$request_time"';

    match => { "message" => "%{WORD}\[%{NUMBER}\]: \[%{HTTPDATE:time}\] %{IPORHOST:ip} \"(?:%{WORD:verbs} %{NOTSPACE:request}(?: HTTP/%{NUMBER:http})?|%{DATA:rawrequest})\" %{NUMBER:response} (?:%{NUMBER:bytes:integer}|-) %{QS:agent} \"%{NUMBER:duration:float}\"" }
  }

  # Drop messages, who not match with grok pattern.
  if "_grokparsefailure" in [tags] {
    drop { }
  }

  mutate {
    add_field => { "request_clean" => "%{request}" }
  }

  mutate {
    gsub => [
      "request_clean", "\?.*", ""
    ]
  }

  # Set @timestamp same as 'time' field.
  date {
    match => [ "time", "dd/MMM/yyyy:HH:mm:ss Z" ]
  }

  # Add useragent information.
  useragent {
    source => "agent"
    target => "useragent"
  }

  # Remove not necessary fields.
  mutate {
    remove_field => [
      "[useragent][major]",
      "[useragent][minor]",
      "[useragent][os_major]",
      "[useragent][os_minor]",
      "[useragent][patch]"
    ]
  }

  # Add geoip information.
  geoip {
      source => "ip"
      fields => [
        "city_name",
        "continent_code",
        "country_code2",
        "country_name",
        "location",
        "region_name",
        "timezone"
      ]
  }
}


output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "logstash-%{[fields][env]}-%{+YYYY.MM.dd}"
  }

  # Debug mode (output on stdout).
  #stdout {
  #  codec => rubydebug
  #}
}

Note

Logstash will send messages with the following index template: logstash-<env_field_from_filebeat>-<timestamp>

Start Logstash:

sudo service logstash start

Note

After installation and configuration, Logstash will receive messages, filter them, and send them to Elasticsearch.

6. Filebeat

Filebeat (reads logs and delivers them to logstash).

Note

Install Filebeat on the host where the logs are located.

Installation

curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-5.1.1-amd64.deb
sudo dpkg -i filebeat-5.1.1-amd64.deb

Use the update-rc.d command to configure Filebeat to start automatically when the system boots up:

# Ubuntu 14.04
sudo update-rc.d filebeat defaults 95 10

Configuration

Create the directory /etc/filebeat/ssl and put your Logstash certificate there (generated earlier at /etc/logstash/ssl/logstash.crt on the ELK host).

Create file /etc/filebeat/filebeat.yml:

filebeat.prospectors:

- input_type: log
  paths:
    - <path_to_log(s)>
  fields:
    env: <environment>

# Different environments on same host.
#- input_type: log
#  paths:
#    - <path_to_log(s)>
#  fields:
#    env: <environment>

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["<logstash_ip>:5044"]
  ssl.certificate_authorities: ["/etc/filebeat/ssl/logstash.crt"]

  • path_to_log(s) - /var/log/nginx/access.log, etc.
  • environment - staging, live, some_env, etc. This field is needed to separate environments.
  • logstash_ip - 192.168.10.10, logstash.myhost.com, etc.

Start Filebeat:

# start in background
sudo service filebeat start

Note

After installation and configuration, Filebeat will read logs and send messages to Logstash. Once Filebeat has sent its first message, you can open the Kibana web UI (<elk_host_dns>:5601) and set up an index with the following template: logstash-env_field_from_filebeat-*

Filebeat

Install

https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-installation.html

deb:

curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-5.1.1-amd64.deb
sudo dpkg -i filebeat-5.1.1-amd64.deb

Config

/etc/filebeat/filebeat.yml

Run

Run in background:

sudo /etc/init.d/filebeat start

Run in foreground:

sudo filebeat.sh -e

Config template

https://github.com/elastic/beats/blob/master/filebeat/filebeat.yml

filebeat.yml:

###################### Filebeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The filebeat.full.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html

#=========================== Filebeat prospectors =============================

filebeat.prospectors:

# Each - is a prospector. Most options can be set at the prospector level, so
# you can use different prospectors for various configurations.
# Below are the prospector specific configurations.

- input_type: log

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/*.log
    #- c:\programdata\elasticsearch\logs\*

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  #exclude_lines: ["^DBG"]

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  #include_lines: ["^ERR", "^WARN"]

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #exclude_files: [".gz$"]

  # Optional additional fields. These field can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1

  ### Multiline options

  # Mutiline can be used for log messages spanning multiple lines. This is common
  # for Java Stack Traces or C-Line Continuation

  # The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
  #multiline.pattern: ^\[

  # Defines if the pattern set under pattern should be negated or not. Default is false.
  #multiline.negate: false

  # Match can be set to "after" or "before". It is used to define if lines should be append to a pattern
  # that was (not) matched before or after or as long as a pattern is not matched based on negate.
  # Note: After is the equivalent to previous and before is the equivalent to to next in Logstash
  #multiline.match: after


#================================ General =====================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging

#================================ Outputs =====================================

# Configure what outputs to use when sending the data collected by the beat.
# Multiple outputs may be used.

#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:9200"]

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"

#----------------------------- Logstash output --------------------------------
#output.logstash:
  # The Logstash hosts
  #hosts: ["localhost:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

#================================ Logging =====================================

# Sets log level. The default log level is info.
# Available log levels are: critical, error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]

Kibana

Filters

http://www.lucenetutorial.com/lucene-query-syntax.html

# To exclude a term containing specific text
-field: "text"

# To exclude several texts
-field: ("text1" OR "text2")

# If they are two separate fields
-field1: "text1" -field2: "text2"

Logstash

Patterns

https://github.com/logstash-plugins/logstash-patterns-core/tree/master/patterns

/etc/logstash/patterns/nginx

Route events to Elasticsearch only when grok is successful:

output {
  if "_grokparsefailure" not in [tags] {
    elasticsearch { ... }
  }
}

Metrics and conditions:

if [response] =~ /^2[0-9]{2,2}$/ {
  metrics {
    meter => [ "http_2xx" ]
    add_tag => "metric"
  }
}

Hashicorp Consul

Consul

Config options with comments - https://gowalker.org/github.com/hashicorp/consul/agent/config

consul agent -dev           # run agent

consul members              # show members
                -detailed   # show additional details

consul join <ip>            # join member to server (on server: client ip;  on client: server ip)

curl localhost:8500/v1/catalog/nodes            # check nodes
curl localhost:8500/v1/catalog/service/web
curl localhost:8500/v1/health/state/critical    # check health checks

Consul agent options:

consul agent
    -server                     # act node at server mode
    -bootstrap-expect=1         # hints to the Consul server the number of additional server nodes we are expecting to join
    -data-dir=/tmp/consul
    -node=<name>                # override unique name of node. By default, Consul uses the hostname
    -bind=<ip>                  # address that Consul listens on. It must be accessible by all other nodes in the cluster
    -config-dir=/etc/consul.d   # marking where service and check definitions can be found
    -join=<ip>                  # auto join in cluster (connect to server)
    -ui                         # enable web ui (port:8500)
    -client=0.0.0.0             # listen ip
    -http-port=<port>           # http port (default:8500)

Example

Consul Server:

./consul agent -ui -server -bootstrap-expect=1 -enable-script-checks=true -data-dir=/tmp/consul -client 0.0.0.0 -config-dir=consul.d -bind=<public_ip> -datacenter=dc1

NODE_1:

# Start server
consul agent -server -bootstrap-expect=1 -data-dir=/tmp/consul -node=agent-one -bind=172.20.20.10 -config-dir=/etc/consul.d

# Add node to server
consul join 172.20.20.11

# Query for the address of the node "agent-two"
dig @127.0.0.1 -p 8600 agent-two.node.consul

NODE_2:

# Start agent
consul agent -data-dir=/tmp/consul -node=agent-two -bind=172.20.20.11 -config-dir=/etc/consul.d

# Definition files in the Consul configuration directory (restart agent after add definition)
sudo nano /etc/consul.d/ping.json
    {
        "check": {
            "name": "ping",
            "script": "ping -c1 google.com >/dev/null",
            "interval": "30s"
        }
    }

sudo nano /etc/consul.d/web.json
    {
        "service": {
            "name": "web",
            "tags": ["rails"],
            "port": 80,
            "check": {
                "script": "curl localhost >/dev/null 2>&1",
                "interval": "10s"
            }
        }
    }

dig @127.0.0.1 -p 8600 web.service.consul
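
Instead of restarting the agent after changing definition files, the definitions can be reloaded in place:

consul reload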

Consul KEY/VALUE storage

Put value:

consul kv put <key> <value>
                -flags=42

Get value:

consul kv get <key>
                -flags=42
                -detailed   # detailed view
                -recurse    # recurse into all child keys

Examples:

consul kv get some/key
consul kv put some/key 1
consul kv put some/key @<file>      # put file content

echo "5" | consul kv put redis/config/password -    # put from stdin ('-' parameter)

consul kv delete some/key       # delete
consul kv delete -recurse <key> # delete recursive

Hashicorp Packer

Packer

packer validate example.json

packer build -var 'aws_access_key=YOUR ACCESS KEY' -var 'aws_secret_key=YOUR SECRET KEY' example.json
    -only=amazon-ebs # build only the named builder
Example json

example.json:

{
  "variables": {
    "aws_access_key": "",
    "aws_secret_key": "",
    "do_api_token": ""
  },
  "builders": [{
    "type": "amazon-ebs",
    "access_key": "{{user `aws_access_key`}}",
    "secret_key": "{{user `aws_secret_key`}}",
    "region": "us-east-1",
    "source_ami": "ami-fce3c696",
    "instance_type": "t2.micro",
    "ssh_username": "ubuntu",
    "ami_name": "packer-example {{timestamp}}"
  },{
    "type": "digitalocean",
    "api_token": "{{user `do_api_token`}}",
    "image": "ubuntu-14-04-x64",
    "region": "nyc3",
    "size": "512mb"
  }],
  "provisioners": [{
    "type": "shell",
    "inline": [
      "sleep 30",
      "sudo apt-get update",
      "sudo apt-get install -y redis-server"
    ]
  }],
  "post-processors": ["vagrant"]
}
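
To see which variables, builders, and provisioners a template defines without building it:

packer inspect example.json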

Hashicorp Terraform

Terraform

Examples - https://github.com/hashicorp/terraform/tree/master/examples

Tests - https://blog.gruntwork.io/open-sourcing-terratest-a-swiss-army-knife-for-testing-infrastructure-code-5d883336fcd5

Secrets management - https://blog.gruntwork.io/a-comprehensive-guide-to-managing-secrets-in-your-terraform-code-1d586955ace1

Terragrunt - move remote state after moving folders:

# Dump out the state
terragrunt state pull > state.bkp.json

# Move the module to the new location

# In the new folder, import the state
terragrunt state push state.bkp.json

*.tf    # configuration
terraform plan                          # show plan, check config
                -out=<file>             # save plan in file
                -destroy                # show what will be destroyed
                -var '<key>=<val>'      # add var
                -var-file="v.tfvars"    # add file with vars. The file 'terraform.tfvars' is loaded automatically

VARS:

# file v.tfvars content
access_key = "foo"
secret_key = "bar"
terraform apply     # run
terraform destroy   # destroy resources

terraform show      # inspect the state

terraform output    # show only output (after apply)

terraform get       # download the modules (run before apply). By default, the command does not check for updates.
             -u     # check for and download updates
terraform init github.com/hashicorp/terraform/examples/aws-two-tier

Create graph of dependencies:

sudo apt-get install graphviz
terraform graph | dot -Tpng > graph.png

Hashicorp Vagrant

Vagrant

Boxes-hub:

https://atlas.hashicorp.com/boxes/search
# Show outdated boxes
vagrant box outdated

# Update box
vagrant box update

vagrant init [box_name]     # create Vagrantfile with 'base' box

vagrant up                  # start VM; creates and provisions the Vagrant environment
vagrant reload              # restart Vagrant machine, load new Vagrantfile configuration
vagrant reload --provision  # restart and re-run provisioners
vagrant ssh                 # connect to machine
vagrant ssh-config          # output valid OpenSSH configuration to connect to the machine
vagrant global-status       # status of all machines

vagrant destroy             # terminate VM (does not remove the downloaded box)

Remove old box versions

https://github.com/swisnl/vagrant-remove-old-box-versions

vagrant plugin install vagrant-remove-old-box-versions

vagrant remove-old-versions [options]

-p, --provider PROVIDER          The specific provider type for the boxes to destroy.
-n, --dry-run                    Only print the boxes that would be removed.
    --name NAME                  The specific box name to destroy.
-f, --force                      Destroy without confirmation even when box is in use.
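
Example (flags from the list above): show what would be removed for the virtualbox provider:

vagrant remove-old-versions --dry-run --provider virtualbox
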
Create own box
http://sysadm.pp.ua/linux/sistemy-virtualizacii/vagrant-box-creation.html

Vagrantfile example

Create in directory file Vagrantfile:

# -*- mode: ruby -*-
# vi: set ft=ruby :

# All Vagrant configuration is done below. The "2" in Vagrant.configure
# configures the configuration version (we support older styles for
# backwards compatibility). Please don't change it unless you know what
# you're doing.
Vagrant.configure("2") do |config|
  # Box name
  config.vm.box = "hashicorp/precise64"

  # Provision file. Exec in shell.
  config.vm.provision :shell, path: "bootstrap.sh"

  # Port-forwarding
  config.vm.network :forwarded_port, guest: 80, host: 4567

  # Sync folder
  config.vm.synced_folder "site/", "/var/www/html/", owner: "root", group: "root"
  # disable sync
  config.vm.synced_folder ".", "/vagrant", disabled: true

  config.vm.provider "virtualbox" do |v|
    v.gui = true         # gui or headless
    v.name = "my_vm"     # name of vm
    v.cpus = 1
    v.memory = 512
  end
end

Hashicorp Vault

Vault

Instructions
vault init -key-shares=1 -key-threshold=1   # Initialize Vault with 1 unseal key

vault seal                                  # seal vault
vault unseal <key>                          # unseal vault

vault auth <root token>                     # authorize with a client token

vault write secret/<path> <key>=<value>
vault read -format=json secret/<path>
vault delete secret/<path>

# examples
vault write secret/hello value=world excited=yes
vault read -format=json secret/hello | jq -r .data.excited

vault mount generic                         # mount generic backend
            aws                             # mount aws backend
            -path=<path>

vault unmount generic/

vault mounts                                # show mounts

vault path-help aws                         # show help paths
vault path-help aws/creds/operator
Tokens
vault token-create
vault token-revoke <token_id>
vault auth <token_id>

Auth backend - https://www.vaultproject.io/intro/getting-started/authentication.html

AWS
vault mount aws

vault write aws/config/root access_key=<ACCESS_KEY> secret_key=<SECRET_KEY>

# file policy.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1426528957000",
      "Effect": "Allow",
      "Action": [
        "ec2:*"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}

vault write aws/roles/deploy policy=@policy.json

# generate credentials
vault read aws/creds/deploy    # create IAM user with policy (show credentials)

vault revoke <lease_id>        # purge access
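
A sketch tying these together: generate credentials, capture the lease_id from the JSON output (jq as above), and revoke it:

lease_id=$(vault read -format=json aws/creds/deploy | jq -r .lease_id)
vault revoke "$lease_id"
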
Docker

https://hub.docker.com/_/vault/

# host1
vault server -dev

# host2
export VAULT_ADDR='http://127.0.0.1:8200'

Jboss EAP

Add user

EAP_HOME/bin/add-user.sh

# User/group properties files:
# configuration/mgmt-users.properties
# configuration/mgmt-groups.properties

# Example: jboss-eap-7.0/bin/add-user.sh -dc dc/domain/configuration/ -p password1! -u admin

Options:

-a                              # Create a user in the application realm. If omitted, the default is to create a user in the management realm.
-dc <value>                     # Domain configuration directory that will contain the properties files. Default "EAP_HOME/domain/configuration/".
-sc <value>                     # Alternative standalone server configuration directory that will contain the properties files.
                                # Default "EAP_HOME/standalone/configuration/".
-up, --user-properties <>       # Alternative user properties file.
                                # Absolute path or a file name used in conjunction with the -sc or -dc argument that specifies the alternative configuration directory.
-g, --group <value>             # A comma-separated list of groups to assign to this user.
-gp, --group-properties <>      # This argument specifies the name of the alternative group properties file.
                                # Absolute path or a file name used in conjunction with the -sc or -dc argument that specifies the alternative configuration directory.
-p, --password <value>          # The password of the user.
-u, --user <value>              # The name of the user. Only alphanumeric characters and the following symbols are valid: ,./=@\.
-r, --realm <value>             # The name of the realm used to secure the management interfaces. If omitted, the default is ManagementRealm.
-s, --silent                    # Run the add-user script with no output to the console.
-e, --enable                    # Enable the user.
-d, --disable                   # Disable the user.
-cw, --confirm-warning          # Automatically confirm warning in interactive mode.
-h, --help                      # Display usage information for the add-user script.
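
Example (flags from the list above): create an application realm user and assign a group:

EAP_HOME/bin/add-user.sh -a -u appuser -p password1! -g guest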

JBOSS CLI

Examples:

EAP_HOME/bin/jboss-cli.sh

jboss-cli.sh --connect
jboss-cli.sh --connect  controller=controller-host.net[:PORT]
                        controller=remote://controller-host.net:1234
                        controller=http-remoting://controller-host.net:1234

reload --host=<hostname>    # reload host

/core-service=platform-mbean/type=runtime:read-attribute(name=system-properties)
/system-property=foo:add(value=bar)

:read-resource
:read-resource(recursive=true)

/host=<HOST_NAME>:read-resource

/host=<HOST_NAME>/server=<SERVER_NAME>:read-resource

cd host=<HOST_NAME>

cd /profile=default/subsystem=datasources/data-source=ExampleDS
:read-attribute(name=min-pool-size)
:write-attribute(name=min-pool-size,value=<VALUE>)

# Add new group
/server-group=<NEW_GROUP_NAME>:add(profile=ha, socket-binding-group=ha-sockets)

/server-group=<GROUP_NAME>/jvm=default:add(heap-size=<VALUE>,max-heap-size=<VALUE>)

/host=master/interface=management:write-attribute(name=inet-address, value="${jboss.bind.address.management:0.0.0.0}")

/server-group=<GROUP_NAME>:read-resource

/host=slave1/interface=public:write-attribute(name=inet-address,value=127.0.0.1)

/server-group=qa-group:remove()
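
CLI commands can also be run non-interactively; a sketch:

jboss-cli.sh --connect --command=":read-attribute(name=server-state)"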

JBOSS EAP

# https://access.redhat.com/documentation/en/red-hat-jboss-enterprise-application-platform/7.0/paged/installation-guide/
# RedHat Enterprise Linux server 7.2

sudo yum update
sudo yum install java-1.8.0-openjdk

java -jar jboss-eap-7.0.0-installer.jar -console

# auto install
# java -jar jboss-eap-7.0.0-installer.jar auto.xml

sudo yum install nano

nano EAP_HOME/bin/init.d/jboss-eap.conf
    # set JBOSS_HOME, JBOSS_USER
    # JBOSS_MODE=domain    # if you need to start in domain mode

sudo cp EAP_HOME/bin/init.d/jboss-eap.conf /etc/default
sudo cp EAP_HOME/bin/init.d/jboss-eap-rhel.sh /etc/init.d/
sudo chmod +x /etc/init.d/jboss-eap-rhel.sh

# add the new jboss-eap-rhel.sh service to the list of automatically started services
sudo chkconfig --add jboss-eap-rhel.sh
sudo service jboss-eap-rhel start
sudo service jboss-eap-rhel status

# make the service start automatically when the Red Hat Enterprise Linux server starts
sudo chkconfig jboss-eap-rhel.sh on

/home/spider/EAP-7.0.0/standalone/configuration/standalone.xml

<interfaces>
    <interface name="management">
        <inet-address value="${jboss.bind.address.management:0.0.0.0}"/>
    </interface>
    <interface name="public">
        <inet-address value="${jboss.bind.address:0.0.0.0}"/>
    </interface>
</interfaces>

sudo firewall-cmd --get-active-zones
sudo firewall-cmd --zone=public --add-port=8080/tcp --permanent
sudo firewall-cmd --zone=public --add-port=9990/tcp --permanent

sudo firewall-cmd --permanent --zone=public --add-rich-rule='rule family=ipv4 source address="10.10.0.1/8" port port=81 protocol=tcp accept'

sudo firewall-cmd --reload
sudo iptables -S

# start/stop firewall
sudo systemctl disable firewalld
sudo systemctl stop firewalld
sudo systemctl status firewalld

EAP_HOME/standalone/deployments     # dir for deployment
EAP_HOME/standalone/tmp             # *.war files temp
EAP_HOME/standalone/tmp/vfs/temp    # unpacked *.war files

LOG

EAP_HOME/domain/configuration/logging.properties

handler.BOOT_FILE.fileName=${org.jboss.boot.log.file:domain.log}
# handler.BOOT_FILE.fileName=/var/log/eap/domain.log
Standalone mode

./jboss-eap-7.0/bin/standalone.sh -Djboss.server.base.dir=standalone1/standalone --server-config=standalone.xml

Domains mode

# http://developers.redhat.com/blog/2016/07/28/jboss-eap-7-domain-deployments-part-1-setup-a-simple-eap-domain/

# host0 (master domain controller)

./jboss-eap-7.0/bin/add-user.sh # add user with name of slave host

host0/domain/configuration/host-master.xml

<host name="host0-master" xmlns="urn:jboss:domain:1.7">

    <domain-controller>
        <local/>
    </domain-controller>

    <inet-address value="${jboss.bind.address.management:127.0.0.1}"/>    <!-- own IP -->

    <socket interface="management" port="${jboss.management.native.port:9999}"/>
    <socket interface="management" port="${jboss.management.http.port:9990}"/>

host0/domain/configuration/domain.xml

<server-groups>
    <server-group name="ha-server-group" profile="ha">
        <jvm name="default">
            <heap size="1000m" max-size="1000m"/>
            <permgen max-size="256m"/>
        </jvm>
        <socket-binding-group ref="ha-sockets"/>
    </server-group>
</server-groups>

./jboss-eap-7.0/bin/domain.sh --host-config=host-master.xml -Djboss.domain.base.dir=host0/domain/ -bmanagement=127.0.0.1

default - logging, security, datasources, infinispan, webservices, ee, ejb3, transactions, etc.
ha      - 'default' + jgroups and modcluster subsystems for high availability
full    - 'default' + messaging and iiop subsystems
full-ha - 'full' + jgroups and modcluster subsystems for high availability

# host1 (slave)

host1/domain/configuration/host-slave.xml

<host xmlns="urn:jboss:domain:4.1" name="host1-slave">

    <domain-controller>
        <remote host="${jboss.domain.master.address:127.0.0.1}" port="${jboss.domain.master.port:9999}" security-realm="ManagementRealm"/>
    </domain-controller>

    <socket interface="management" port="${jboss.management.native.port:19999}"/>    <!-- only if started on the same host -->

    <servers>
        <server name="Server11" group="primary-server-group" auto-start="true">
            <socket-bindings port-offset="100"/>
        </server>
    </servers>

./jboss-eap-7.0/bin/domain.sh --host-config=host-slave.xml -Djboss.domain.base.dir=host1/domain -Djboss.host.name=host1-slave

# host2 (slave)

host2/domain/configuration/host-slave.xml

<host xmlns="urn:jboss:domain:4.1" name="host2-slave">

    <domain-controller>
        <remote host="${jboss.domain.master.address:127.0.0.1}" port="${jboss.domain.master.port:9999}" security-realm="ManagementRealm"/>
    </domain-controller>

    <socket interface="management" port="${jboss.management.native.port:29999}"/>    <!-- only if started on the same host -->

    <servers>
        <server name="Server21" group="primary-server-group" auto-start="true">
            <socket-bindings port-offset="300"/>
        </server>
    </servers>

./jboss-eap-7.0/bin/domain.sh --host-config=host-slave.xml -Djboss.domain.base.dir=host2/domain -Djboss.host.name=host2-slave

Multiple domain controllers

<domain-controller>
    <remote security-realm="ManagementRealm">
        <discovery-options>
            <static-discovery name="primary" protocol="${jboss.domain.master.protocol:remote}" host="172.16.81.100" port="${jboss.domain.master.port:9999}"/>
            <static-discovery name="backup" protocol="${jboss.domain.master.protocol:remote}" host="172.16.81.101" port="${jboss.domain.master.port:9999}"/>
        </discovery-options>
    </remote>
</domain-controller>

LOAD BALANCER on HTTPD

# host0 (master domain controller)

sudo yum install httpd

# sudo apt-get install apache2 (Ubuntu)

# download jboss mod_cluster
# http://downloads.jboss.org/mod_cluster//1.3.1.Final/linux-x86_64/mod_cluster-1.3.1.Final-linux2-x64-so.tar.gz

cp *.so /etc/httpd/modules/

# or /usr/lib/apache2/modules (Ubuntu)

# check rights:
# chown root:root /etc/httpd/modules/*.so

nano /etc/httpd/conf.modules.d/00-proxy.conf
# Add load modules mod_cluster (*.so unpacked)

nano /etc/httpd/conf/httpd.conf

Configure Undertow as a Load Balancer Using mod_cluster

# Built-in mod_cluster front-end load balancer to load balance other JBoss EAP instances.

# Requirements

  1. A JBoss EAP server that will act as the load balancer.

    profile = default
    socket binding group = standard-sockets

    /server-group=lb-server-group:add(profile=default, socket-binding-group=standard-sockets)
    /host=<SLAVE>/server-config=server-lb:add(auto-start=true, group=lb-server-group, socket-binding-port-offset=0, socket-binding-group=standard-sockets)

  2. Two JBoss EAP servers, which will act as the back-end servers.

    profile = ha
    socket binding group = ha-sockets

  3. The distributable application to be load balanced deployed to the back-end servers.

# Set the mod_cluster advertise security key, which allows the load balancer and servers to authenticate during discovery:

/profile=ha/subsystem=modcluster/mod-cluster-config=configuration:write-attribute(name=advertise-security-key, value=<PASSWORD>)

# Add a modcluster socket binding with the appropriate multicast address and port configured:

/socket-binding-group=standard-sockets/socket-binding=modcluster:add(multicast-port=23364, multicast-address=224.0.1.105)

# Add the mod_cluster filter to Undertow for the load balancer instance:

/profile=default/subsystem=undertow/configuration=filter/mod-cluster=modcluster:add(management-socket-binding=http, advertise-socket-binding=modcluster, security-key=<PASSWORD>)

# Bind the mod_cluster filter to the default host:

/profile=default/subsystem=undertow/server=default-server/host=default-host/filter-ref=modcluster:add

Standalone/domain run options

./EAP_HOME/bin/standalone.sh
./EAP_HOME/bin/domain.sh

--admin-only
    Standalone          Set the server's running type to ADMIN_ONLY. This will cause it to open administrative interfaces
                        and accept management requests, but not start other runtime services or accept end user requests.
    Domain              Set the host controller's running type to ADMIN_ONLY, causing it to open administrative interfaces
                        and accept management requests, but not start servers or, if this host controller is the master
                        for the domain, accept incoming connections from slave host controllers.

-b=<value>, -b <value>
    Standalone, Domain  Set system property jboss.bind.address, which is used in configuring the bind address for the
                        public interface. This defaults to 127.0.0.1 if no value is specified. See the
                        -b<interface>=<value> entry for setting the bind address for other interfaces.

-b<interface>=<value>
    Standalone, Domain  Set system property jboss.bind.address.<interface> to the given value. E.g. -bmanagement=IP_ADDRESS

--backup
    Domain              Keep a copy of the persistent domain configuration even if this host is not the Domain Controller.

-c=<config>, -c <config>
    Standalone          Name of the server configuration file to use. The default is standalone.xml.
    Domain              Name of the server configuration file to use. The default is domain.xml.

--cached-dc
    Domain              If the host is not the Domain Controller and cannot contact the Domain Controller at boot,
                        boot using a locally cached copy of the domain configuration.

--debug [<port>]
    Standalone          Activate debug mode with an optional argument to specify the port. Only works if the launch
                        script supports it.

-D<name>[=<value>]
    Standalone, Domain  Set a system property.

--domain-config=<config>
    Domain              Name of the server configuration file to use. The default is domain.xml.

-h, --help
    Standalone, Domain  Display the help message and exit.

--host-config=<config>
    Domain              Name of the host configuration file to use. The default is host.xml.

--interprocess-hc-address=<address>
    Domain              Address on which the host controller should listen for communication from the process controller.

--interprocess-hc-port=<port>
    Domain              Port on which the host controller should listen for communication from the process controller.

--master-address=<address>
    Domain              Set system property jboss.domain.master.address to the given value. In a default slave Host
                        Controller config, this is used to configure the address of the master Host Controller.

--master-port=<port>
    Domain              Set system property jboss.domain.master.port to the given value. In a default slave Host
                        Controller config, this is used to configure the port used for native management communication
                        by the master Host Controller.

--read-only-server-config=<config>
    Standalone          Name of the server configuration file to use. This differs from --server-config and -c in that
                        the original file is never overwritten.

--read-only-domain-config=<config>
    Domain              Name of the domain configuration file to use. This differs from --domain-config and -c in that
                        the initial file is never overwritten.

--read-only-host-config=<config>
    Domain              Name of the host configuration file to use. This differs from --host-config in that the initial
                        file is never overwritten.

-P=<url>, -P <url>, --properties=<url>
    Standalone, Domain  Load system properties from the given URL.

--pc-address=<address>
    Domain              Address on which the process controller listens for communication from processes it controls.

--pc-port=<port>
    Domain              Port on which the process controller listens for communication from processes it controls.

-S<name>[=<value>]
    Standalone          Set a security property.

-secmgr
    Standalone, Domain  Runs the server with a security manager installed.

--server-config=<config>
    Standalone          Name of the server configuration file to use. The default is standalone.xml.

-u=<value>, -u <value>
    Standalone, Domain  Set system property jboss.default.multicast.address, which is used in configuring the multicast
                        address in the socket-binding elements in the configuration files. This defaults to 230.0.0.4
                        if no value is specified.

-v, -V, --version
    Standalone, Domain  Display the application server version and exit.

Kubernetes

Kubernetes

MINIKUBE install on local machine

https://github.com/kubernetes/minikube/blob/v0.9.0/README.md http://kubernetes.io/docs/getting-started-guides/minikube/

curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.9.0/minikube-linux-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/

curl -Lo kubectl http://storage.googleapis.com/kubernetes-release/release/v1.3.6/bin/linux/amd64/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/
# where v1.3.6 is the latest kubernetes release version

minikube start      # start kubernetes
    stop            # stop kubernetes
    dashboard       # open web-interface
    ip              # show dashboard ip
    service NAME    # open ip in browser

curl $(minikube service hello-minikube --url)    # curl the service URL
KUBECTL

POD - a single container or a group of containers that are deployed together and launched as a single unit

$HOME/.kube/config    # config

kubectl get pods --all-namespaces
        pod
            -o wide             # show ip and nodes (gcloud compute ssh <NODE> to ssh into the node)
        deployments
        nodes                   # show nodes
        services <service>      # show ip and ports
        rc                      # show replication controllers
        events                  # show events

kubectl config view             # show config

kubectl set image deployment/test2 test2=eu.gcr.io/test/php7-apache:latest    # update source image

kubectl describe pod <NAME>
                 services <NAME>

kubectl cluster-info
Deployment
kubectl run <NAME> --image=<image_hub>                                  # create pod, create deployment object
                                    [--env="key=value"]
                                    [--port=port]
                                    [--replicas=replicas]
                                    [--dry-run=bool]
                                    [--overrides=inline-json]
                                    [--command] -- [COMMAND] [args...]


kubectl expose (-f FILENAME | TYPE NAME)
                                    [--port=port]
                                    [--protocol=TCP|UDP]
                                    [--target-port=number-or-name]
                                    [--name=name]
                                    [--external-ip=external-ip-of-service]
                                    [--type=type]
                                        --type="LoadBalancer"    # for expose external ip

kubectl scale deployment <deployment_name> --replicas=4
kubectl autoscale deployment <deployment_name> --min=1 --max=3
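
A sketch putting these together (name and image are illustrative):

kubectl run hello --image=nginx --port=80 --replicas=2
kubectl expose deployment hello --type="LoadBalancer" --port=80
kubectl scale deployment hello --replicas=4
kubectl get services hello    # show assigned ip and ports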

kubectl proxy --port=8001 &                 # connect to proxy (& - background)
# http://localhost:8001/ui


kubectl create -f ./<deployment>.yaml       # create deployment
kubectl create -f ./<service>.yaml          # create service

kubectl apply -f <file>.yaml                # compare the version of the configuration with the previous version and apply the changes,
                                            # without overwriting any automated changes to properties you haven’t specified.

kubectl delete pod,service <name> <name> ...
               deployment  <name> <name> ...

kubectl logs <POD-NAME>

kubectl exec <pod-name> date
             <pod-name> -c <container-name> date

kubectl exec -ti <pod-name> /bin/bash    # Get an interactive TTY and run /bin/bash from pod <pod-name>. By default, output is from the first container.
Abbreviation
  • componentstatuses - cs
  • daemonsets - ds
  • events - ev
  • endpoints - ep
  • horizontalpodautoscalers - hpa
  • ingress - ing
  • limitranges - limits
  • nodes - no
  • namespaces - ns
  • pods - po
  • persistentvolumes - pv
  • persistentvolumeclaims - pvc
  • resourcequotas - quota
  • replicasets - rs
  • replicationcontrollers - rc
  • serviceaccounts - sa
  • services - svc
Example
apiVersion: v1
kind: Service
metadata:
  name: ms1
  labels:
    app: ms1
spec:
  type: LoadBalancer
  ports:
  - port: 3000
    targetPort: 3000
  selector:
    app: ms1
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: ms1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: ms1
    spec:
      restartPolicy: Always
      containers:
      - name: ms1
        image: eu.gcr.io/vsiryk-test/ms1
        ports:
        - containerPort: 3000
        env:
        - name: MICROSERVICE_DB_HOST
          value: db_host
        - name: MYSQL_ROOT_PASSWORD
          value: password
        - name: MYSQL_DATABASE
          value: database
        - name: MYSQL_USER
          value: user
        - name: MYSQL_PASSWORD
          value: password
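
Assuming the manifest above is saved as ms1.yaml, both objects can be created and inspected with:

kubectl create -f ./ms1.yaml
kubectl get pods -l app=ms1
kubectl describe services ms1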

Mac OS

Create bootable Mac OS iso

Create ISO from DMG

Create bootable Sierra.iso:

#!/bin/bash

# mount InstallESD.dmg to /Volumes/install_app
hdiutil attach /Applications/Install\ macOS\ Sierra.app/Contents/SharedSupport/InstallESD.dmg -noverify -nobrowse -mountpoint /Volumes/install_app

# create Sierra.cdr.dmg
# hdiutil create -o /tmp/Sierra.cdr -size 6200m -layout SPUD -fs HFS+J
hdiutil create -o /tmp/Sierra.cdr -size 7316m -layout SPUD -fs HFS+J

# mount Sierra.cdr.dmg to /Volumes/install_build
hdiutil attach /tmp/Sierra.cdr.dmg -noverify -nobrowse -mountpoint /Volumes/install_build

# restore BaseSystem.dmg to /Volumes/install_build (will be renamed to "/Volumes/OS X Base System")
asr restore -source /Volumes/install_app/BaseSystem.dmg -target /Volumes/install_build -noprompt -noverify -erase

rm /Volumes/OS\ X\ Base\ System/System/Installation/Packages

cp -rp /Volumes/install_app/Packages /Volumes/OS\ X\ Base\ System/System/Installation/
cp -rp /Volumes/install_app/BaseSystem.chunklist /Volumes/OS\ X\ Base\ System/BaseSystem.chunklist
cp -rp /Volumes/install_app/BaseSystem.dmg /Volumes/OS\ X\ Base\ System/BaseSystem.dmg

hdiutil detach /Volumes/install_app
hdiutil detach /Volumes/OS\ X\ Base\ System/
hdiutil convert /tmp/Sierra.cdr.dmg -format UDTO -o /tmp/Sierra.iso

mv /tmp/Sierra.iso.cdr ./Sierra.iso
rm /tmp/Sierra.cdr.dmg

Mac OS instructions

Add administrator privileges and full access to a user:

dscl . -append /groups/admin GroupMembership <USERNAME>

# confirm
dscl . -read /groups/admin GroupMembership

Download and install a DMG from the command line:

curl -O https://site.com/<file.dmg>
sudo hdiutil attach <file.dmg>
sudo installer -package /Volumes/<volume>/<app_name.pkg> -target /
sudo hdiutil detach /Volumes/<volume>

Show partition information:

diskutil list

Check resize partition restrictions:

diskutil list
diskutil resizeVolume <disk> limits

Resize Mac OS partition:

sudo diskutil resizeVolume <disk> <size>GB

Get CPU Information:

sysctl -n machdep.cpu.brand_string

system_profiler | grep Processor

sysctl -a | grep machdep.cpu

# CPU instructions
sysctl -a | grep machdep.cpu.features
sysctl -a | grep machdep.cpu.leaf7_features

On boot, hold Command + Option + P and R keys to clear NVRAM.

Get username of user that currently uses GUI of WindowServer:

stat -f '%Su' /dev/console
# or
ls -l /dev/console | cut -d " " -f4

Converting plist/xml:

# convert from binary PLIST to XML
plutil -convert xml1 <binary.plist>

# convert from XML to binary PLIST
plutil -convert binary1 <xml.plist>

Get System Info on Mac OS:

hostinfo

Get screencapture via terminal:

screencapture -xr -t gif capture.gif

Turn on performance mode for macOS Server (OS X El Capitan 10.11 and later). Provides extended system limits (maxproc, etc.):

# check
nvram boot-args

# turn on (reboot after turn on)
sudo nvram boot-args="serverperfmode=1 $(nvram boot-args 2>/dev/null | cut -f 2-)"


# turn off (reboot after turn off)
sudo nvram boot-args="$(nvram boot-args 2>/dev/null | sed -e $'s/boot-args\t//;s/serverperfmode=1//')"

Delete a local account:

dscl localhost delete /Local/Default/Users/<user>
rm -rf /Users/<user>

Show listen ports:

sudo lsof -PiTCP -sTCP:LISTEN

Manipulating OS X login items from command line

Snoop new processes as they are executed (newproc.d) and files created by processes (creatbyproc.d):

sudo newproc.d
sudo creatbyproc.d

Show file system activity in real time:

sudo fs_usage | grep -v 0.0000

Force flush cached memory:

sudo sync && sudo purge

Enable/disable swapping in Mac OS X (untested solution):

# to disable swap (pager daemon)
sudo launchctl unload -w /System/Library/LaunchDaemons/com.apple.dynamic_pager.plist

# after stopping pager daemon, you may want to remove swapfiles
sudo rm /private/var/vm/swapfile*

# to enable swap, you need to boot in Single Mode (Hold [CMD + S] at booting time) and run this command:
sudo launchctl load /System/Library/LaunchDaemons/com.apple.dynamic_pager.plist

Remove RecoveryHD partition:

diskutil list
diskutil eraseVolume HFS+ Blank /dev/<RecoveryHD_ID>
diskutil mergePartitions HFS+ <nameOfMainDisk> <mainDisk_ID> <RecoveryHD_ID>

List of users (with information):

dscacheutil -q user

# only not system users
dscacheutil -q user | grep -A 3 -B 2 -e uid:\ 5'[0-9][0-9]'

Default background pictures paths:

# all pictures
/Library/Desktop\ Pictures/*.jpg
/Library/Desktop\ Pictures/Solid\ Colors/*.jpg

# default background
/System/Library/CoreServices/DefaultDesktop.jpg

Logout user:

sudo launchctl bootout user/$(id -u test)

# graceful method
sudo launchctl asuser $(id -u <USERNAME>) osascript -e 'tell application "System Events"' -e 'log out' -e 'keystroke return' -e 'end tell'

Set/get Computer/Host/LocalHost names. The settings are stored in /Library/Preferences/SystemConfiguration/preferences.plist:

# set
sudo scutil --set ComputerName "newname"
sudo scutil --set LocalHostName "newname"
sudo scutil --set HostName "newname"

# get
scutil --get ComputerName
scutil --get LocalHostName
scutil --get HostName

Allow only SSH key authentication in /etc/ssh/sshd_config:

PasswordAuthentication no
ChallengeResponseAuthentication no

Control autoupdate:

# https://derflounder.wordpress.com/2014/12/29/managing-automatic-app-store-and-os-x-update-installation-on-yosemite/
# to disable the automatic update check
softwareupdate --schedule off
# or
defaults write /Library/Preferences/com.apple.SoftwareUpdate AutomaticCheckEnabled -bool FALSE

Allow to install apps from anywhere:

sudo spctl --master-disable

Switch user:

uid=`id -u ${username}`
/System/Library/CoreServices/Menu\ Extras/User.menu/Contents/Resources/CGSession -switchToUserID ${uid}

Restart Screensharing:

sudo launchctl unload /System/Library/LaunchDaemons/com.apple.screensharing.plist
sudo launchctl load -w /System/Library/LaunchDaemons/com.apple.screensharing.plist

Enable/disable fast user switching:

# enable
sudo defaults write /Library/Preferences/.GlobalPreferences MultipleSessionEnabled -bool YES

# disable
sudo defaults write /Library/Preferences/.GlobalPreferences MultipleSessionEnabled -bool no

Open multiple instances of application:

open -na "Messages"
open -n /Applications/Messages.app/Contents/MacOS/Messages

Run application at another user:

sudo login -f <username> /Applications/Messages.app/Contents/MacOS/Messages

Remote connect:

https://support.apple.com/en-us/HT201710
LaunchDaemons

/System/Library/LaunchDaemons/ - loaded when the system starts

Bonjour:

/System/Library/LaunchDaemons/com.apple.mDNSResponder.plist
/System/Library/LaunchDaemons/com.apple.mDNSResponderHelper.plist
Launchctl

Agents and daemons paths:

~/Library/LaunchAgents         # Per-user agents provided by the user.
/Library/LaunchAgents          # Per-user agents provided by the administrator.
/Library/LaunchDaemons         # System-wide daemons provided by the administrator.
/System/Library/LaunchAgents   # Per-user agents provided by Mac OS X.
/System/Library/LaunchDaemons  # System-wide daemons provided by Mac OS X.

Install GUI application:

brew install Caskroom/cask/launchcontrol

Create user agent:

mkdir $HOME/Library/LaunchAgents
# drwxr-xr-x    3 spider  staff   102 Apr 28 16:46 LaunchAgents

Disabled jobs are stored in /var/db/com.apple.xpc

launchctl   list                                    # list all loaded services (PID LAST_EXIT_CODE NAME)
                    <service_name>                  # show info about service

launchctl   print-disabled user/<user_id>           # show disabled services for user

launchctl   enable|disable user/<user_id>/<service> # disable service for user
                           system/<service>         # disable daemons which start as root

launchctl   remove <service>                        # unload service from launchd
launchctl   unload <path_to_plist>                  # unload service from launchd
                   -w                               # also mark as disabled (persists after reboot)

launchctl   load <path_to_plist>                    # load service to launchd
                   -w                               # also mark as enabled (persists after reboot)

launchctl   start|stop                              # start|stop service

# run app as another user
sudo launchctl asuser $(id -u <user>) <command>
Brew

Installation:

/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"

Install some app:

brew install <app>
System limits

Note

To increase system limits turn on performance mode. Mac OS Sierra:

sudo nvram boot-args="serverperfmode=1 $(nvram boot-args 2>/dev/null | cut -f 2-)"
launchctl limit    # show current limits

# set limit (in effect until system restart)
sysctl -w kern.maxproc=2048
sysctl -w kern.maxfiles=5000

sysctl -a

# show system limits
sysctl kern.maxproc
sysctl kern.maxfiles

# show user limits
ulimit -a

# set user limit
ulimit -u 2048

launchctl limit maxfiles 1000000 1000000
launchctl limit maxproc 1000000 1000000

# set mac server mode
sudo serverinfo --setperfmode true

For 10.9 (Mavericks), 10.10 (Yosemite), 10.11 (El Capitan), and 10.12 (Sierra)

Create file (owner: root:wheel, mode: 0644) /Library/LaunchDaemons/limit.maxfiles.plist:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
  <plist version="1.0">
    <dict>
      <key>Label</key>
        <string>limit.maxfiles</string>
      <key>ProgramArguments</key>
        <array>
          <string>launchctl</string>
          <string>limit</string>
          <string>maxfiles</string>
          <string>524288</string>
          <string>524288</string>
        </array>
       <key>RunAtLoad</key>
        <true/>
      <key>ServiceIPC</key>
        <false/>
    </dict>
  </plist>

Create file (owner: root:wheel, mode: 0644) /Library/LaunchDaemons/limit.maxproc.plist:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple/DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
  <plist version="1.0">
    <dict>
      <key>Label</key>
        <string>limit.maxproc</string>
      <key>ProgramArguments</key>
        <array>
          <string>launchctl</string>
          <string>limit</string>
          <string>maxproc</string>
          <string>2048</string>
          <string>2048</string>
        </array>
      <key>RunAtLoad</key>
        <true />
      <key>ServiceIPC</key>
        <false />
    </dict>
  </plist>
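
To apply the new limits without a reboot, the daemons can be loaded manually (paths from above):

sudo launchctl load -w /Library/LaunchDaemons/limit.maxfiles.plist
sudo launchctl load -w /Library/LaunchDaemons/limit.maxproc.plist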

Mac OS programming

AppleScript Programming/System Events

https://en.wikibooks.org/wiki/AppleScript_Programming/System_Events

Loop (repeat) - http://alvinalexander.com/apple/applescript-for-loop-while-loop-examples

Get applescript script filename function:

on getScriptFileName()
    set myPath to path to me as text

    if myPath ends with ":" then
        set n to -2
    else
        set n to -1
    end if

    set AppleScript's text item delimiters to ":"

    return text item n of myPath
end getScriptFileName
# Will return '<name>.<extension>'

# Using System Events
on getScriptFileName()
    tell application "System Events"
        return name of (path to me)
    end tell
end getScriptFileName
# Will return '<name>.<extension>'

Check if an application is running:

if application "Messages" is running then
    log "is running"
end if

Activate application:

tell application "Terminal"
    activate
end tell

# using osascript
osascript -e 'tell application "Terminal" to activate'

Check if static text starts with:

if value of static text 1 of group 2 of group 1 of UI element 1 of scroll area 1 does not start with "Enter the verification code" then
    error "Expected label does not exist."
end if

Check if multiple elements exist:

if (static text "bla bla" exists) and (static text "yeah" exists) and (text field 1 exists) then
    log "true"
else
    log "false"
end if

Check that a button does not exist:

if not exists (button "Verify") then
    error "Verify button does not exist."
end if

Get all UI elements:

# my showElements(element, "test")
# showElements(element, "test")
on showElements(element, id)
    log "-" & id & "-"
    tell application "System Events"
        set elements to get every UI element of element
        repeat with el in elements
             log el
        end repeat
    end tell
    log "-" & id & "-"
end showElements

# my showAllElements(element)
on showAllElements(element)
    tell application "System Events"
        set elements to get every UI element of element
        repeat with el in elements
             log el
             my showAllElements(el)
        end repeat
    end tell
end showAllElements

Show all processes of application:

tell application "Finder"
   set l to get name of every process

   repeat with i in l
        log i
   end repeat
end tell

Pass command-line arguments to an AppleScript, e.g. osascript script.scpt hello world. script.scpt:

on run argv
  return item 1 of argv & item 2 of argv
end run

Debug UI elements (dirty):

#!/usr/bin/osascript

tell application "Messages"
        activate
end tell

tell application "System Events"
        get properties
        if UI elements enabled then
                tell process "Messages"

                        set t1 to get every UI element
                        log t1 & "\n"

                        tell window "Messages"
                                set t1 to get every UI element
                            log t1 & "\n"

                            tell sheet 1
                                set t1 to get every UI element
                                log "get every UI element"
                                log t1 & "\n"

                                tell scroll area 1
                                    set t1 to get every UI element
                                    log "get every UI element"
                                    log t1 & "\n"

                        tell UI element 1
                            set t1 to get every UI element
                                        log "get every UI element"
                                        log t1 & "\n"

                                        tell group 1
                                            set t1 to get every UI element
                                            log "get every UI element"
                                            log t1 & "\n"

                                            tell group 2
                                                set t1 to get every UI element
                                                log "get every UI element"
                                                log t1 & "\n"

                                                tell static text "Apple ID Locked"
                                                if not (exists) then
                                                                                log "FALSE"
                                                                        else
                                                                            log "TRUE"
                                                                        end if

                                                                        end tell

                                            end tell

                                        end tell

                        end tell

                        set t1 to get every button
                                    log t1 & ".\n"
                                end tell

                                set t1 to get every static text
                                    log t1 & "\n"
                            end tell

                                set t1 to get every button
                                log t1 & "\n"

                                set t1 to get properties of every button
                                log t1 & "\n"

                                set t1 to get every UI element of every button
                                log t1 & "\n"

                                set t1 to get every static text
                                log t1 & "\n"

                                set t1 to get properties of every static text
                                log t1 & "\n"

                                set t1 to get every UI element of every static text
                                log t1 & "\n"

                                set t1 to get every scroll bar
                                log t1 & "\n"

                                get properties of every scroll bar
                                get every UI element of every scroll bar

                                (*get every UI element ¬
                                        whose class is not button and class is not static text ¬
                                        and class is not scroll bar
                                get properties of every UI element ¬
                                        whose class is not button and class is not static text ¬
                                        and class is not scroll bar*)

                        end tell

                end tell
        else
                tell application "System Preferences"
                        activate
                        set current pane to pane "com.apple.preference.universalaccess"
                        display dialog "UI element scripting is not enabled. Check \"Enable access for assistive devices\""
                end tell
        end if
end tell
Switch user via CGSession and type the user's password:

/System/Library/CoreServices/Menu\ Extras/User.menu/Contents/Resources/CGSession -switchToUserID ${uid}

osascript << EOF
tell application "System Events"
  delay 1.0
  keystroke "${password}" & return
end tell
EOF

Open new tab in terminal and run command:

osascript << EOF
tell application "Terminal"
    activate
    tell application "System Events" to keystroke "t" using command down
    repeat while contents of selected tab of window 1 starts with linefeed
        delay 0.01
    end repeat
    do script "echo a" in window 1
end tell
EOF

Run command in a new or existing terminal and close it:

osascript << EOF
tell application "Terminal"
    activate
    do script "echo a" in window 1
    quit
end tell
EOF

Mac OS VM

Important repo https://github.com/kholia/OSX-KVM

Clean up unused VM disk space:
# on guest with Mac OS
cat /dev/zero > zfile; rm zfile

# shutdown VM

# on host
vmware-vdiskmanager -k </path/to/disk.vmdk>
Run Sierra on qemu-kvm

Note

Press Ctrl+F2 to show the panel with utility buttons

  1. Run next command:

    echo 1 > /sys/module/kvm/parameters/ignore_msrs
    
  2. Create image where macOS will be installed:

    qemu-img create -f qcow2 /path/to/Sierra.qcow2 15g
    
  3. Create template sierra.xml:

    <domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
      <name>Sierra</name>
      <title>Sierra</title>
      <description>echo 1 &gt; /sys/module/kvm/parameters/ignore_msrs</description>
      <memory unit='KiB'>2097152</memory>
      <currentMemory unit='KiB'>2097152</currentMemory>
      <vcpu placement='static'>2</vcpu>
      <os>
        <type arch='x86_64' machine='pc-q35-2.4'>hvm</type>
        <kernel>/mnt/data/qemu/bios/enoch_rev2848_boot</kernel>
      </os>
      <features>
        <acpi/>
        <kvm>
          <hidden state='on'/>
        </kvm>
      </features>
      <cpu mode='custom' match='exact'>
        <model fallback='allow'>Penryn</model>
      </cpu>
      <devices>
        <emulator>/usr/bin/qemu-system-x86_64</emulator>
        <disk type='file' device='disk'>
          <driver name='qemu' type='qcow2'/>
          <source file='/mnt/data/qemu/images/Sierra.qcow2'/>
          <target dev='sda' bus='sata'/>
          <!--<shareable/>-->
          <boot order='1'/>
          <address type='drive' controller='0' bus='0' target='0' unit='0'/>
        </disk>
        <interface type='user'>
          <source network='default'/>
          <model type='e1000-82545em'/>
          <address type='pci' domain='0x0000' bus='0x02' slot='0x02' function='0x0'/>
        </interface>
        <input type='mouse' bus='usb'/>
        <input type='keyboard' bus='usb'/>
        <graphics type='vnc' port='-1' autoport='no' listen='127.0.0.1' keymap='en-us'>
          <listen type='address' address='127.0.0.1'/>
        </graphics>
        <video>
          <model type='vmvga' vram='16384' heads='1'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
        </video>
        <memballoon model='none'/>
      </devices>
      <qemu:commandline>
        <qemu:arg value='-device'/>
        <qemu:arg value='isa-applesmc,osk=ourhardworkbythesewordsguardedpleasedontsteal(c)AppleComputerInc'/>
        <qemu:arg value='-smbios'/>
        <qemu:arg value='type=2'/>
        <qemu:arg value='-k'/>
        <qemu:arg value='en-us'/>
        <qemu:arg value='-cpu'/>
        <qemu:arg value='Penryn,vendor=GenuineIntel'/>
        <!--
        <qemu:arg value='-redir'/>
        <qemu:arg value='tcp:5901::5900'/>
        -->
        <!--
        <qemu:arg value='-device'/>
        <qemu:arg value='ide-drive,bus=ide.1,drive=MacDVD'/>
        <qemu:arg value='-drive'/>
        <qemu:arg value='id=MacDVD,if=none,snapshot=on,file=/mnt/data/iso/mac/Sierra_10_12_3.iso'/>
        -->
      </qemu:commandline>
    </domain>
    
  4. Define domain:

    virsh --connect qemu:///system define sierra.xml
    
  5. Autostart Sierra (without pressing Enter)

Add the following file to the Sierra HD as /Volumes/<Sierra_HD>/Extra/org.chameleon.boot.plist:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>Timeout</key>
<string>10</string>
<key>EthernetBuiltIn</key>
<string>Yes</string>
<key>PCIRootUID</key>
<string>1</string>
<key>KernelBooter_kexts</key>
<string>Yes</string>
<key>CsrActiveConfig</key>
<string>103</string>
<key>Graphics Mode</key>
<string>1024x768x32</string>
</dict>
</plist>
  6. Allow usb-tablet device
  7. Add vmware svga driver
  • download and install https://sourceforge.net/projects/vmsvga2 on your Mac

  • add the following to /Extra/org.chameleon.Boot.plist:

    <key>Kernel Flags</key>
    <string>vmw_options_fb=0x06</string>
    
  • add -vga vmware to QEMU parameters.

  8. Allow to install apps:

    sudo spctl --master-disable
    
Install Mac OS Sierra on VMWare
  1. Download and run the script that unlocks VMWare so it can run Mac OS guests: https://github.com/DrDonk/unlocker . See the readme in the repo.

Change VM display resolution:

/Library/Application\ Support/VMware\ Tools/vmware-resolutionSet 1920 1080
Disable Mac OS csrutil (SIP) on VMware VM
  1. Start vmware

  2. Select Mac OS guest and power to firmware

  3. In efi menu, enter setup -> config boot options -> add boot options -> select recovery partition -> select boot.efi

  4. At input file description hit <enter> and type in label e.g. “recovery” -> commit changes and exit

  5. Boot from “recovery” and be patient

  6. Follow prompt until you see OS X Utilities menu

  7. At the very top menu select Utilities -> Terminal

  8. In terminal enter:

    csrutil status
    csrutil disable
    csrutil status
    reboot
    
Run Sierra on VirtualBox

Open VirtualBox and click the “New” button. Name your Virtual Machine “macOS Sierra” and choose “Mac OS X” for the operating system and “Mac OS X (64-bit)” for the version (as of this writing, “macOS Sierra” is not offered, but that’s fine.)

Run next command in console:

cd "C:\Program Files\Oracle\VirtualBox\"
VBoxManage.exe modifyvm "macOS Sierra" --cpuidset 00000001 000306a9 04100800 7fbae3ff bfebfbff
VBoxManage setextradata "macOS Sierra" "VBoxInternal/Devices/efi/0/Config/DmiSystemProduct" "MacBookPro11,3"
VBoxManage setextradata "macOS Sierra" "VBoxInternal/Devices/efi/0/Config/DmiSystemVersion" "1.0"
VBoxManage setextradata "macOS Sierra" "VBoxInternal/Devices/efi/0/Config/DmiBoardProduct" "Mac-2BD1B31983FE1663"
VBoxManage setextradata "macOS Sierra" "VBoxInternal/Devices/smc/0/Config/DeviceKey" "ourhardworkbythesewordsguardedpleasedontsteal(c)AppleComputerInc"
VBoxManage setextradata "macOS Sierra" "VBoxInternal/Devices/smc/0/Config/GetKeyFromRealSMC" 1

Change Your Resolution By default, your virtual machine will have a resolution of 1024×768, which is not a lot of room to work with. If you try to change the resolution from within macOS, however, you will see no option to do so. Instead, you need to enter a few commands.

Shut down your Virtual Machine by shutting down macOS: click the Apple in the menu bar, then click “Shut Down.” Next, close VirtualBox entirely (seriously, this step will not work if VirtualBox is still open!) and head back to Windows’ Command Prompt as an admin. You need to run the following two commands:

cd "C:\Program Files\Oracle\VirtualBox\"
VBoxManage setextradata "macOS Sierra" "VBoxInternal2/EfiGopMode" N

In the second command, you need to replace the “N” with a number from one to five, depending on what resolution you want:

1 gives you a resolution of 800×600
2 gives you a resolution of 1024×768
3 gives you a resolution of 1280×1024
4 gives you a resolution of 1440×900
5 gives you a resolution of 1920×1200

OpenLDAP

OpenLDAP

/etc/ldap/slapd.d/cn=config/cn=schema # schema location

slapcat # show content of DB object

/usr/sbin/slapcat -v -l dump.ldif # backup SLAPD database

# Install

sudo apt-get install slapd ldap-utils

dpkg-reconfigure slapd

Omit OpenLDAP server configuration?         No
DNS domain name?                            mydomain.com
Organization name?                          any name
Administrator password?                     password
Database backend?                           HDB | BDB
Remove the database when slapd is purged?   No
Move old database?                          Yes
Allow LDAPv2 protocol?                      No

RESTART SLAPD (or container)

ldapsearch -D "cn=admin,dc=mycorp,dc=com" -w password -b "dc=mycorp,dc=com"

ldapsearch -H ldap://localhost:389 -D "cn=admin,dc=mycorp,dc=com" -w password -b "dc=mycorp,dc=com"

ldapdelete -D cn=admin,dc=mycorp,dc=com -w password -r "dc=mycorp,dc=com"

ldapmodify -D "cn=admin,dc=mydomain,dc=com" -w password
dn: ou=users,dc=mydomain,dc=com
changetype: add
objectClass: top
objectClass: organizationalUnit
ou: users
description: Domain Users

Everything starting with "dn: " is typed by hand. After entering the description line ("description: Domain Users"), press Enter twice. If everything was entered without errors, you should see a message like: adding new entry "ou=Users,dc=mydomain,dc=com"

# Press Ctrl+C to exit

ldapadd -D cn=admin,dc=mycorp,dc=com -w password -f addgroup.ldif
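
A sketch of what addgroup.ldif could look like (the file is referenced above but not shown; DN and attributes are assumptions):

dn: cn=developers,ou=groups,dc=mycorp,dc=com
changetype: add
objectClass: top
objectClass: groupOfNames
cn: developers
member: cn=John Doe,ou=users,dc=mycorp,dc=com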


# Create user

slappasswd # generate password hash

ldapmodify -D "cn=admin,dc=mydomain,dc=com" -w password
# or
nano adduser.ldif

dn: uid=jdoe,cn=John Doe,ou=users,dc=mydomain,dc=com
changetype: add
objectClass: top
objectClass: person
objectClass: organizationalPerson
objectClass: inetOrgPerson
cn: John Doe
ou: users
uid: jdoe
givenName: John
sn: Doe
userPassword: {SSHA}hsxkIVICZSSQsUOQf4xWZutr0t44HSFP

ldapmodify -D "cn=admin,dc=mycorp,dc=com" -w password -f adduser.ldif

# Modify ldif-file:
dn: cn=John Doe,ou=users,dc=mycorp,dc=com
changetype: modify
add: userPassword
userPassword: {SSHA}hsxkIlskflksfOQf4xWZutr0t44HSFP

# Commands #
ldapsearch -D "cn=admin,dc=mydomain,dc=com"
                      -w <pass>     # bind password (for simple authentication)
                      -W            # prompt for bind password



                      -a deref   one of never (default), always, search, or find
                      -A         retrieve attribute names only (no values)
                      -b basedn  base dn for search
                      -c         continuous operation mode (do not stop on errors)
                      -E [!]<ext>[=<extparam>] search extensions (! indicates criticality)
                                             [!]domainScope              (domain scope)
                                             !dontUseCopy                (Don't Use Copy)
                                             [!]mv=<filter>              (RFC 3876 matched values filter)
                                             [!]pr=<size>[/prompt|noprompt] (RFC 2696 paged results/prompt)
                                             [!]sss=[-]<attr[:OID]>[/[-]<attr[:OID]>...]
                                                                         (RFC 2891 server side sorting)
                                             [!]subentries[=true|false]  (RFC 3672 subentries)
                                             [!]sync=ro[/<cookie>]       (RFC 4533 LDAP Sync refreshOnly)
                                                     rp[/<cookie>][/<slimit>] (refreshAndPersist)
                                             [!]vlv=<before>/<after>(/<offset>/<count>|:<value>)
                                                                         (ldapv3-vlv-09 virtual list views)
                                             [!]deref=derefAttr:attr[,...][;derefAttr:attr[,...][;...]]
                                             [!]<oid>[=:<b64value>] (generic control; no response handling)
                      -f file    read operations from `file'
                      -F prefix  URL prefix for files (default: file:///tmp/)
                      -l limit   time limit (in seconds, or "none" or "max") for search
                      -L         print responses in LDIFv1 format
                      -LL        print responses in LDIF format without comments
                      -LLL       print responses in LDIF format without comments
                                             and version
                      -M         enable Manage DSA IT control (-MM to make critical)
                      -P version protocol version (default: 3)
                      -s scope   one of base, one, sub or children (search scope)
                      -S attr    sort the results by attribute `attr'
                      -t         write binary values to files in temporary directory
                      -tt        write all values to files in temporary directory
                      -T path    write files to directory specified by path (default: /tmp)
                      -u         include User Friendly entry names in the output
                      -z limit   size limit (in entries, or "none" or "max") for search
                    Common options:
                      -d level   set LDAP debugging level to `level'
                      -D binddn  bind DN
                      -e [!]<ext>[=<extparam>] general extensions (! indicates criticality)
                                             [!]assert=<filter>     (RFC 4528; a RFC 4515 Filter string)
                                             [!]authzid=<authzid>   (RFC 4370; "dn:<dn>" or "u:<user>")
                                             [!]chaining[=<resolveBehavior>[/<continuationBehavior>]]
                                                     one of "chainingPreferred", "chainingRequired",
                                                     "referralsPreferred", "referralsRequired"
                                             [!]manageDSAit         (RFC 3296)
                                             [!]noop
                                             ppolicy
                                             [!]postread[=<attrs>]  (RFC 4527; comma-separated attr list)
                                             [!]preread[=<attrs>]   (RFC 4527; comma-separated attr list)
                                             [!]relax
                                             [!]sessiontracking
                                             abandon, cancel, ignore (SIGINT sends abandon/cancel,
                                             or ignores response; if critical, doesn't wait for SIGINT.
                                             not really controls)
                      -h host    LDAP server
                      -H URI     LDAP Uniform Resource Identifier(s)
                      -I         use SASL Interactive mode
                      -n         show what would be done but don't actually do it
                      -N         do not use reverse DNS to canonicalize SASL host name
                      -O props   SASL security properties
                      -o <opt>[=<optparam>] general options
                                             nettimeout=<timeout> (in seconds, or "none" or "max")
                                             ldif-wrap=<width> (in columns, or "no" for no wrapping)
                      -p port    port on LDAP server
                      -Q         use SASL Quiet mode
                      -R realm   SASL realm
                      -U authcid SASL authentication identity
                      -v         run in verbose mode (diagnostics to standard output)
                      -V         print version info (-VV only)

                      -x         Simple authentication
                      -X authzid SASL authorization identity ("dn:<dn>" or "u:<user>")
                      -y file    Read password from file
                      -Y mech    SASL mechanism
                      -Z         Start TLS request (-ZZ to require successful response)
Backup data

backup.sh:

#!/bin/sh

LDAPBK=ldap_$( date +%H-%M_%d-%m-%Y ).ldif
BACKUPDIR=/var/backups

/usr/sbin/slapcat -v -l $BACKUPDIR/$LDAPBK

gzip -9 $BACKUPDIR/$LDAPBK
Restore data
  1. stop slapd daemon:

    /etc/init.d/slapd stop
    
  2. delete old database (make sure you are in right directory to use rm):

    cd /var/lib/ldap
    rm -rf *
    
  3. Restore database from LDIF file:

    /usr/sbin/slapadd -l backup.ldif
    
  4. run slapd daemon:

    /etc/init.d/slapd start
    
/etc/ldap/slapd.d/cn=config/cn=schema       # schema location

slapcat     # show content of DB object


/usr/sbin/slapcat -v  -l dump.ldif  # backup SLAPD database

# Install

sudo apt-get install slapd ldap-utils

dpkg-reconfigure slapd

        Omit OpenLDAP server configuration?         No
        DNS domain name?                            mydomain.com
        Organization name?                          any name
        Administrator password?                     password
        Database backend?                           HDB | BDB
        Remove the database when slapd is purged?   No
        Move old database?                          Yes
        Allow LDAPv2 protocol?                      No

RESTART SLAPD (or container)

ldapsearch -D "cn=admin,dc=mycorp,dc=com" -w password -b "dc=mycorp,dc=com"

ldapsearch -H ldap://localhost:389 -D "cn=admin,dc=mycorp,dc=com" -w password -b "dc=mycorp,dc=com"


# IMPORTANT: the command below deletes the entire directory tree

 ldapdelete -D cn=admin,dc=mycorp,dc=com -w password -r "dc=mycorp,dc=com"
##########



##############################################
# Create an organizational unit named "users" using ldapmodify

ldapmodify -D "cn=admin,dc=mydomain,dc=com" -w password
        dn: ou=users,dc=mydomain,dc=com
        changetype: add
        objectClass: top
        objectClass: organizationalUnit
        ou: users
        description: Domain Users

Everything starting from "dn:" is typed by hand. After entering the description line ("description: Domain Users"), press Enter twice. If everything was entered without errors, you should see the message: adding new entry "ou=Users,dc=mydomain,dc=com"

# Press Ctrl+C to exit

ldapadd -D cn=admin,dc=mycorp,dc=com -w password -f addgroup.ldif
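
For reference, addgroup.ldif could contain the same entry that was typed interactively above (adjusted to the dc=mycorp suffix used by this command):

        dn: ou=users,dc=mycorp,dc=com
        changetype: add
        objectClass: top
        objectClass: organizationalUnit
        ou: users
        description: Domain Users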

##################################################



# Create user

slappasswd  # generate a password hash

ldapmodify -D "cn=admin,dc=mydomain,dc=com" -w password
        or
nano adduser.ldif

        dn: uid=jdoe,cn=John Doe,ou=users,dc=mydomain,dc=com
        changetype: add
        objectClass: top
        objectClass: person
        objectClass: organizationalPerson
        objectClass: inetOrgPerson
        cn: John Doe
        ou: users
        uid: jdoe
        givenName: John
        sn: Doe
        userPassword: {SSHA}hsxkIVICZSSQsUOQf4xWZutr0t44HSFP

ldapmodify -D "cn=admin,dc=mycorp,dc=com" -w password -f adduser.ldif
######################################################################

# Modify an entry via an ldif-file:
        dn: cn=John Doe,ou=users,dc=mycorp,dc=com
        changetype: modify
        add: userPassword
        userPassword: {SSHA}hsxkIlskflksfOQf4xWZutr0t44HSFP
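
Apply it with ldapmodify as before (the file name modpass.ldif is just an example):

        ldapmodify -D "cn=admin,dc=mycorp,dc=com" -w password -f modpass.ldif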

######################################################################


############
# Commands #
############

    ldapsearch -D "cn=admin,dc=mydomain,dc=com"
                          -w <pass> # bind password (for simple authentication)
                          -W                # prompt for bind password



                          -a deref   one of never (default), always, search, or find
                          -A         retrieve attribute names only (no values)
                          -b basedn  base dn for search
                          -c         continuous operation mode (do not stop on errors)
                          -E [!]<ext>[=<extparam>] search extensions (! indicates criticality)
                                                 [!]domainScope              (domain scope)
                                                 !dontUseCopy                (Don't Use Copy)
                                                 [!]mv=<filter>              (RFC 3876 matched values filter)
                                                 [!]pr=<size>[/prompt|noprompt] (RFC 2696 paged results/prompt)
                                                 [!]sss=[-]<attr[:OID]>[/[-]<attr[:OID]>...]
                                                                             (RFC 2891 server side sorting)
                                                 [!]subentries[=true|false]  (RFC 3672 subentries)
                                                 [!]sync=ro[/<cookie>]       (RFC 4533 LDAP Sync refreshOnly)
                                                         rp[/<cookie>][/<slimit>] (refreshAndPersist)
                                                 [!]vlv=<before>/<after>(/<offset>/<count>|:<value>)
                                                                             (ldapv3-vlv-09 virtual list views)
                                                 [!]deref=derefAttr:attr[,...][;derefAttr:attr[,...][;...]]
                                                 [!]<oid>[=:<b64value>] (generic control; no response handling)
                          -f file    read operations from `file'
                          -F prefix  URL prefix for files (default: file:///tmp/)
                          -l limit   time limit (in seconds, or "none" or "max") for search
                          -L         print responses in LDIFv1 format
                          -LL        print responses in LDIF format without comments
                          -LLL       print responses in LDIF format without comments
                                                 and version
                          -M         enable Manage DSA IT control (-MM to make critical)
                          -P version protocol version (default: 3)
                          -s scope   one of base, one, sub or children (search scope)
                          -S attr    sort the results by attribute `attr'
                          -t         write binary values to files in temporary directory
                          -tt        write all values to files in temporary directory
                          -T path    write files to directory specified by path (default: /tmp)
                          -u         include User Friendly entry names in the output
                          -z limit   size limit (in entries, or "none" or "max") for search
                        Common options:
                          -d level   set LDAP debugging level to `level'
                          -D binddn  bind DN
                          -e [!]<ext>[=<extparam>] general extensions (! indicates criticality)
                                                 [!]assert=<filter>     (RFC 4528; a RFC 4515 Filter string)
                                                 [!]authzid=<authzid>   (RFC 4370; "dn:<dn>" or "u:<user>")
                                                 [!]chaining[=<resolveBehavior>[/<continuationBehavior>]]
                                                         one of "chainingPreferred", "chainingRequired",
                                                         "referralsPreferred", "referralsRequired"
                                                 [!]manageDSAit         (RFC 3296)
                                                 [!]noop
                                                 ppolicy
                                                 [!]postread[=<attrs>]  (RFC 4527; comma-separated attr list)
                                                 [!]preread[=<attrs>]   (RFC 4527; comma-separated attr list)
                                                 [!]relax
                                                 [!]sessiontracking
                                                 abandon, cancel, ignore (SIGINT sends abandon/cancel,
                                                 or ignores response; if critical, doesn't wait for SIGINT.
                                                 not really controls)
                          -h host    LDAP server
                          -H URI     LDAP Uniform Resource Identifier(s)
                          -I         use SASL Interactive mode
                          -n         show what would be done but don't actually do it
                          -N         do not use reverse DNS to canonicalize SASL host name
                          -O props   SASL security properties
                          -o <opt>[=<optparam>] general options
                                                 nettimeout=<timeout> (in seconds, or "none" or "max")
                                                 ldif-wrap=<width> (in columns, or "no" for no wrapping)
                          -p port    port on LDAP server
                          -Q         use SASL Quiet mode
                          -R realm   SASL realm
                          -U authcid SASL authentication identity
                          -v         run in verbose mode (diagnostics to standard output)
                          -V         print version info (-VV only)

                          -x         Simple authentication
                          -X authzid SASL authorization identity ("dn:<dn>" or "u:<user>")
                          -y file    Read password from file
                          -Y mech    SASL mechanism
                          -Z         Start TLS request (-ZZ to require successful response)

Backup data

backup.sh:

#!/bin/sh

LDAPBK=ldap_$( date +%H-%M_%d-%m-%Y ).ldif
BACKUPDIR=/var/backups

/usr/sbin/slapcat -v -l $BACKUPDIR/$LDAPBK

gzip -9 $BACKUPDIR/$LDAPBK
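
To take this backup on a schedule, a root crontab entry like the following could be used (script path and timing are assumptions):

0 3 * * * /path/to/backup.sh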

Restore data

  1. stop slapd daemon:

    /etc/init.d/slapd stop
    
  2. delete old database (make sure you are in right directory to use rm):

    cd /var/lib/ldap
    rm -rf *
    
  3. Restore database from LDIF file:

    /usr/sbin/slapadd -l backup.ldif
    
  4. run slapd daemon:

    /etc/init.d/slapd start
    

Python3

Modules

  1. pyowm - weather data for any location (wrapper around the OpenWeatherMap API)

Python3

General

PEP 8 – Style Guide for Python Code: https://www.python.org/dev/peps/pep-0008/

Examples

Send email via SMTP:

#!/usr/bin/env python3

import smtplib
from email.message import EmailMessage

s = smtplib.SMTP('localhost')
msg = EmailMessage()

msg.set_content('Test message content')

msg['Subject'] = 'The test message'
msg['From'] = 'admin@localhost'
msg['To'] = 'example@gmail.com'

s.send_message(msg)
s.quit()
Base syntax
str(number)         # convert to string
test = 10           # init new var
test = True         # case sensitive
test = False
test_one = None     # analog of 'null' (a function returns None if it has no return statement)

print(10 == 10)     # True
== != <= >= < >
'Test' > 'Tesa'     # True (strings compare character by character; case and alphabet position matter)

del test            # del var

print("text " + str(test) + " some text")
print('hello ' * 3)         # hello hello hello (the multiplier must be an int)
print('hello', 3 + 4)       # hello 7

str(test)       # to string
int(test)       # to int
float(test)     # to float
list(test)      # to list

type(object)    # return class

input("input your name: ")    # return string input by keyboard

len(<string | list>)    # return length
max()                   # return max
min()

+ - * /
**      # 2 ** 3 == 8
//      # 20 // 6 == 3
%       # 20 % 6 == 2

+= -= *= /= %= **= //=

print('hello \n world')
print("hello \n world")
print('''hello
world''')
print("""hello
world""")

if test == 1 and time >= 12:
    print(test)    # indentation is significant
elif test == 2 or (time <= 12 and date == "summer"):
    print('hah!')
else:
    print('hm..')

if not 1 < 2:        # invert
    print("yeah")    # yeah don't print

while i <= 5:
    print('ok')
    i += 1

    if i == 3:
        break

    if i == 2:
        continue    # go to next iteration

for test in range(15):
    print(test)
Lists
test = [1, 2, 3, [4, 5, 6]]

print(test[3][0])   # 4
print(test[0:2])    # [1, 2]
print(test[:2])     # [1, 2]
print(test[2:])     # [3, [4, 5, 6]]
print(test[::2])    # every second element
# general form: test[start:stop:step]

test = [1, 2, 3]
print(test * 2)     # [1, 2, 3, 1, 2, 3]

test = [1, 2, 3, 4, 5, 6]
print(test[-2])             # 5
print(test[::-1])           # [6, 5, 4, 3, 2, 1]

test = 'HELLO'
print(test[3])      # L

test = [1, 2, 3]
if 1 in test:       # find in list
    print('yeah')

if 1 not in test:
    print('yeah')

test.append(object)         # add to list
test.remove(object)         # find first input and remove
test.insert(pos, object)
test.count(object)
test.reverse()              # reverse list in place (returns None)

test = range(10)
print(list(test))           # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]

test = range(10, 15)
print(list(test))           # [10, 11, 12, 13, 14]

test = range(0, 10, 2)
print(list(test))           # [0, 2, 4, 6, 8]

for element in test:
    print(element)          # all elements
Tuples

Lighter weight than lists

names = ("Make", "James")   # elements can't be changed
names = "Make", "James"     # parentheses are optional; still a tuple
print(names[0])
Dictionary
test = {
    "key" : "value",        # key == only simple types
    "key2" : "value2"
}
print(test["key"])                # exception if not exist
print(test.get('key'))            # return None if not exist
print(test.get('key', 'text'))    # return 'text' if not exist

if 'key' in test:
    print('OK')

if 'key' not in test:
    print('hm..')
Functions
def sum(a, b):        # DEFINE ONLY BEFORE USE
    """On line documentation"""
    return a + b

print(sum(1, 5))
print(sum.__doc__)

my_sum = sum
print(my_sum(1, 5))


def complex(real=0.0, imag=0.0):
    """Form a complex number.

    Keyword arguments:
    real -- the real part (default 0.0)
    imag -- the imaginary part (default 0.0)
    """

def incr(a):
    return a + 1

def apply_func(func, value):
    return func(value)

print(apply_func(incr, 5))      # 6 - a function can be passed into a function

def fun(text):
    assert text != ''
    print(text)
fun('')    # AssertionError

def fun(text):
    assert text != '', "text must not be blank!"
    print(text)
fun('')    # AssertionError: text must not be blank!

# any number of args
def func(*args):
    return args # args - tuple

func(1, 2, 3, 'abc')    # (1, 2, 3, 'abc')
func()                  # ()
func(1)                 # (1,)

# any number of kv args
def func(**kwargs):
    return kwargs   # kwargs - dictionary

func(a=1, b=2, c=3)     # {'a': 1, 'c': 3, 'b': 2}
func()                  # {}
func(a='python')        # {'a': 'python'}
Modules

https://pypi.python.org/pypi
http://pypi-ranking.info/alltime

# import module
import random

a = random.randint(0, 10)

# import function from module
from random import randint
from math import sqrt, pi

a = randint(0, 10)

# import all functions from module
from random import *

# rename import function
from math import sqrt as my_sqrt
my_sqrt(25)
Exceptions

Catch all exceptions:

try:
    print(4 / 0)

except:
    print("oh no =(")
    raise    # re-raise the exception, exit the program

ZeroDivisionError:

try:
    print(4 / 0)

except ZeroDivisionError:
    print("Error")

Different exceptions catching:

try:
    print(4 / 0)

except ZeroDivisionError:
    print('zero division err')

except NameError:
    print('var not defined')

except SyntaxError:
    print('syntax err')    # syntax errors in this file itself cannot be caught at runtime

except:
    print('some err')

Syntax error:

try:
    eval('<some code>')

except SyntaxError:
    print('Syntax error')

Finally:

try:
    <code>

except:
    <code>

finally:
    <code>    # run always (if catch or no catch)

Raising an error manually:

try:
    a = 1
    if a == 1:
        raise ZeroDivisionError

except ZeroDivisionError:
    print("zero division err")
a = 1
if a == 1:
   raise ZeroDivisionError('some text')        # ZeroDivisionError: some text

Own exceptions:

class MyError(Exception):
    pass    # a custom exception just subclasses Exception

raise MyError('TEST')    # MyError: TEST
Files

r - read
w - write (create if not exist, overwrite if exist)
a - append

b - binary mode (rb wb ab)

file = open('file.txt', 'r')
print(file.read())
file.close()

# read 1 byte
file.read(1)

file = open('file.txt', 'w')
file.write('hello')
file.close()

# append
file = open('file.txt', 'a')
file.write('hello')
file.close()

# get list of lines
file.readlines()

# close file descriptor automatically
with open('file', 'r') as f:
    print(f.read())
COMMENTS
# this is a comment
print('hello')    # this is a comment

'''Multi
lines
comment'''

"""Multi
lines
comment"""
Placeholder names

foo, bar - conventional placeholder (metasyntactic) variable names

Classes

Every class is itself an object; the metaclass of all classes is 'type'.

class MyClass:
    pass        # pass does nothing

# new object of class
obj = MyClass()

RedHat

Advanced Policy Firewall (APF)

/etc/apf/conf.apf

apf  --help

RedHat

RHEL subscription:

sudo subscription-manager register
sudo subscription-manager attach
sudo yum update

YUM:

yum update              # update packages
    install <package>   # install package
        -y              # answer yes to all prompts
    search <package>    # search for a package
    info htop           # show package info


CentOS yum

yum install epel-release
yum install htop

CentOS yum via proxy:

# /etc/yum.conf
proxy=http://192.168.35.10:8080
proxy_username=proxyname
proxy_password=proxypass

RedHat firewall

https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Security_Guide/sec-Using_Firewalls.html

# start/stop firewall
sudo systemctl disable firewalld
sudo systemctl stop firewalld
sudo systemctl status firewalld

cat /etc/redhat-release # show version of redhat

ip addr                 # analog of ifconfig
yum install net-tools   # install ifconfig


# Change timezone
sudo mv /etc/localtime /etc/localtime.old
sudo ln -s /usr/share/zoneinfo/Europe/Kiev /etc/localtime

# log in as another user
su - <username>

JBOSS WildFly

WildFly

# Docker: https://github.com/JBoss-Dockerfiles/wildfly

# Deploy

Dockerfile:

FROM jboss/wildfly
ADD sample.war /opt/jboss/wildfly/standalone/deployments/

docker build --tag=wildfly-app .

docker run -it -d --name wildfly-app -P wildfly-app

# http://localhost:<PORT>/<sample>

# ADMIN console

Dockerfile:

FROM jboss/wildfly
RUN /opt/jboss/wildfly/bin/add-user.sh <admin_user_name> <password> --silent
CMD ["/opt/jboss/wildfly/bin/standalone.sh", "-b", "0.0.0.0", "-bmanagement", "0.0.0.0"]

docker build -t wildfly-admin .

docker run -ti --name wildfly-admin -d -p 9990:9990 -p 8080:8080 wildfly-admin

docker exec -ti wildfly-admin /opt/jboss/wildfly/bin/add-user.sh

# http://localhost:<PORT>

# CLI

./bin/jboss-cli.sh --connect

help --commands
ls deployment

7-Zip on Linux

Ubuntu

Install 7-Zip on Ubuntu:

$ sudo apt-get install p7zip-rar p7zip-full
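
Basic 7z usage (a minimal sketch; archive and file names are placeholders):

$ 7z a archive.7z file1 dir1/   # add files to an archive (create it if needed)
$ 7z l archive.7z               # list archive contents
$ 7z x archive.7z               # extract preserving directory structure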

OpenVPN

Configuration

Server config:

# Allow multiple user connections
duplicate-cn

SWAP

SWAP in file

  1. Create SWAP file at root with size 4GB:

    sudo dd if=/dev/zero of=/swapfile bs=1M count=4096
    sudo chmod 600 /swapfile
    sudo mkswap /swapfile
    
  2. Disable SWAP:

    sudo swapoff /swapfile
    # or disable all swap
    sudo swapoff -a
    
  3. Connect new SWAP file:

    sudo swapon /swapfile
    
  4. Add new SWAP to fstab (auto mount when system start):

    sudo echo "/swapfile swap swap defaults 0 0" | sudo tee -a /etc/fstab
    
  5. Optional. Use sysctl to tune the Linux virtual memory manager setting vm.swappiness. It tells the kernel how aggressively to use swap, as a percentage between 0 and 100: at 0 the kernel avoids swap and uses system memory first; at 100 it swaps eagerly to keep more memory free. A value of 30 is a reasonable middle ground between swapping and system memory:

    sudo sysctl -w vm.swappiness=30
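
Note: sysctl -w only changes the value until reboot. A minimal way to persist it (assuming the same value of 30):

    # append the setting to /etc/sysctl.conf, then reload
    echo "vm.swappiness=30" | sudo tee -a /etc/sysctl.conf
    sudo sysctl -p    # re-read settings from /etc/sysctl.conf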
    
Automated swapfile creation
#!/bin/bash

set -e

# SWAP file size in MB
SWAP_SIZE="1024"

# SWAP usage tuning
VM_SWAPPINESS="30"

SWAP_PATH="/swapfile"


# Create SWAP file at root
sudo dd if=/dev/zero of=$SWAP_PATH bs=1M count=$SWAP_SIZE
sudo chmod 600 $SWAP_PATH
sudo mkswap $SWAP_PATH

# Disable SWAP on this file if it is already active (ignore the error otherwise, since set -e is on)
sudo swapoff $SWAP_PATH || true

# Connect new SWAP file
sudo swapon $SWAP_PATH

# Add new SWAP to fstab (auto mount at system start)
echo "$SWAP_PATH swap swap defaults 0 0" | sudo tee -a /etc/fstab

# Tweak system swap usage value
sudo sysctl -w vm.swappiness=$VM_SWAPPINESS

ACL

Apache

# show version and build parameters: the config file read first, the current MPM, etc.
apachectl -V

# restart
systemctl restart apache2

# reload
service apache2 reload

# enable modules
a2enmod [rewrite] [ssl] ...

# show loaded modules
apache2ctl -M

# test configuration
apachectl configtest

# enable site
a2ensite <site>

# disable site
a2dissite <site>

Generate a self-signed SSL certificate:

mkdir /etc/apache2/ssl
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/apache2/ssl/site.key -out /etc/apache2/ssl/site.crt
chmod 600 /etc/apache2/ssl/*

Apache PHP CGI mode

default.conf:

<VirtualHost *:80>
        ServerAdmin webmaster@localhost

        DocumentRoot /var/www

        #<Directory />
        #   Options FollowSymLinks
        #   AllowOverride All
        #</Directory>

        <Directory /var/www/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all

                AddHandler cgi-handler .php
                Action cgi-handler /cgi-bin/php-cgi
        </Directory>

        ScriptAlias /cgi-bin/ /opt/php5.2/bin/

        <Directory "/opt/php5.2/bin/">
                AllowOverride All
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
        </Directory>

        ErrorLog ${APACHE_LOG_DIR}/error.log

        # Possible values include: debug, info, notice, warn, error, crit,
        # alert, emerg.
        LogLevel warn

        CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>

Apache PHP module mode

default.conf:

<VirtualHost *:80>
        ServerAdmin webmaster@localhost

        DocumentRoot /var/www

        <Directory />
               Options FollowSymLinks
               AllowOverride All
        </Directory>

        <Directory /var/www/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
        </Directory>


        <FilesMatch \.php$>
                SetHandler application/x-httpd-php
        </FilesMatch>


        DirectoryIndex index.php


        ErrorLog ${APACHE_LOG_DIR}/error.log

        # Possible values include: debug, info, notice, warn, error, crit,
        # alert, emerg.
        LogLevel warn

        CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>

Security Apache

  • ModSecurity
  • mod_evasive
  • Fail2ban

apt via proxy

To fetch packages from Internet repositories through a proxy, add the following to /etc/apt/apt.conf:

Acquire::http::proxy "http://login:password@proxy_ip:proxy_port/";
Acquire::https::proxy "http://login:password@proxy_ip:proxy_port/";
Acquire::ftp::proxy "http://login:password@proxy_ip:proxy_port/";
Acquire::socks::proxy "http://login:password@proxy_ip:proxy_port/";
Acquire::::Proxy "true";

If the proxy does not require authentication, drop the login:password@ part.

Example:

Acquire::http::proxy "http://192.168.35.10:8080/";
Acquire::::Proxy "true";

Arch Linux install

# boot from the installation image

fdisk -l    # list disks

cfdisk      # partition the disks

lsblk       # show disks and mount points

mkfs.ext4 /dev/partition
mount /dev/partition /mnt   # mount the root partition to /mnt

mkswap /dev/partition
swapon /dev/partition

swapon -s   # check swap status
free -h     # check swap status

/etc/pacman.d/mirrorlist    # edit the mirror list
## Score: 1.6, Ukraine
Server = http://archlinux.bln-ua.net/$repo/os/$arch
## Score: 27.1, Ukraine
Server = http://mirrors.nix.org.ua/linux/archlinux/$repo/os/$arch

# After changing /etc/pacman.d/mirrorlist (manually or with rankmirrors),
# resync the pacman package database:
pacman -Syyu

pacstrap /mnt base                      # install the base system
genfstab -pU /mnt >> /mnt/etc/fstab     # generate fstab (by UUID) for the new system

arch-chroot /mnt    # chroot into the new system

echo hostname > /etc/hostname

ln -sf /usr/share/zoneinfo/Europe/Kiev /etc/localtime

Autorun scripts

Create an empty file. The first line must be:

#!/bin/sh

This line specifies which shell to use; add your own commands below it. Save the script under a unique name (one that does not clash with existing commands), e.g. /usr/sbin/bbb_run.

To run the script at boot, reference it in /etc/rc.local before the exit 0 line. If that file does not exist, create it with the following content:

#!/bin/sh -e
# Insert the line that runs your script here.
/usr/sbin/mescripts
exit 0

nano /etc/init.d/vm_start.sh

#! /bin/sh
### BEGIN INIT INFO
# Provides:          skeleton
# Required-Start:    $remote_fs $syslog
# Required-Stop:     $remote_fs $syslog
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Example initscript
# Description:       This file should be used to construct scripts to be
#                    placed in /etc/init.d.
### END INIT INFO

sudo -u <user> VBoxManage startvm <vm_name> --type headless

chmod +x /etc/init.d/vm_start.sh
update-rc.d vm_start.sh defaults

awk

awk is a command for contextual search and text transformation. It works as a filter; you can think of it as an "awk" shell inside the "shell" shell.

cat /proc/*/stat | awk '{print $1 " " $2}'

awk -F '<separator>' '{print $2}'
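
For example, to print only the user names from /etc/passwd, split fields on ':':

awk -F ':' '{print $1}' /etc/passwd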

http://linuxgeeks.ru/awk.htm
https://ru.wikipedia.org/wiki/AWK
https://linuxconfig.org/learning-linux-commands-awk
https://www.digitalocean.com/community/tutorials/how-to-use-the-awk-language-to-manipulate-text-in-linux
http://citforum.ck.ua/operating_systems/articles/sed_awk.shtml

BASH

http://talk.jpnc.info/bash_lfnw_2017.pdf

Make newlines the only separator:

IFS=$'\n'

Do something on exit (even if error):

function cleanup() {
        # do something
}

trap cleanup EXIT
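
A minimal usage sketch: remove a temporary file even when the script aborts with an error:

#!/bin/bash
set -e

TMP_FILE="$(mktemp)"

cleanup() {
    rm -f "$TMP_FILE"
}
trap cleanup EXIT

echo "data" > "$TMP_FILE"
false    # the script aborts here, but cleanup still runs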

Functions boolean returns:

# 1st variant
isdirectory() {
  if [ -d "$1" ]; then
    return 0
  else
    return 1
  fi
}

# 2nd variant
isdirectory() {
  if [ -d "$1" ]; then
    true
  else
    false
  fi
}

# 3rd variant
isdirectory() {
  [ -d "$1" ]
}

# USING
if isdirectory $1; then
    echo "is directory"
else
    echo "nopes"
fi

Multiple arguments passing:

ARGS=$@
# ./script.sh "one two three"
# 1. one
# 2. two
# 3. three

ARGS="$@"
# ./script.sh "one two three"
# 1. one two three

Operators:

let "var += 5"
let "var = 5 - 3"

Asynchronous bash script:

mycommand &
child_pid=$!

while kill -0 $child_pid >/dev/null 2>&1; do
    echo "Child process is still running"
    sleep 1
done

echo "Child process has finished"

Parsing script arguments:

#!/bin/sh
# Keeping options in alphabetical order makes it easy to add more.

while :
do
    case "$1" in
      -f | --file)
          file="$2"     # You may want to check validity of $2
          shift 2
          ;;

      -h | --help)
          display_help  # Call your function
          exit 0
          ;;

      -v | --verbose)
          verbose="verbose"
          shift
          ;;

      --)           # End of all options
          shift
          break
          ;;

      -*)
          echo "Error: Unknown option: $1" >&2
          exit 1
          ;;

      *)            # No more options
          break
          ;;
    esac
done
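
Example invocations of the script above (display_help is assumed to be defined in the same script; file names are placeholders):

./script.sh --file config.txt --verbose
./script.sh -f config.txt -- other_args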

Async/sync mode. Set 'async=true' in the environment before running the script:

COMMAND="uptime"
SCRIPT_PATH="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
VMS_IPS="192.168.1.1 192.168.1.2"
CHILD_PIDS=""

VM_NUM=0

for VM_IP in $VMS_IPS; do

        (( VM_NUM++ ))

        if [ "$async" == "true" ]; then
                ssh $SSH_USER@$VM_IP "$COMMAND" >> $SCRIPT_PATH/temp.$VM_NUM 2>&1 &
                CHILD_PIDS="$CHILD_PIDS $!"
        else
                ssh $SSH_USER@$VM_IP "$COMMAND"
        fi
done

if [ "$async" == "true" ]; then
        for CHILD_PID in $CHILD_PIDS; do
                while kill -0 $CHILD_PID >/dev/null 2>&1; do
                        echo "Processing in asynchronous mode..."
                        sleep 5.0
                done
        done

        cat $(ls -v $SCRIPT_PATH/temp.*)
        rm $SCRIPT_PATH/temp.*
fi

Check is program exists:

if hash <program> 2>/dev/null; then
    echo "exists"
else
    echo "not exists"
fi

Add zeros before number:

VAR=15
echo "$(seq -f %03g $VAR $VAR)"
# 015

Use grep in an if statement (check whether a file contains a string):

if grep --quiet 'text' /path/to/file; then
  echo exists
else
  echo not found
fi

Get path to script which run:

SCRIPT_PATH="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"

Bash get last line from a variable:

echo "${VAR##*$'\n'}"

Is file exist:

if [ ! -f /tmp/foo.txt ]; then
    echo "File not found!"
fi

Functions on BASH:

#!/bin/sh

# this will fail because myFunction has not been declared yet
myFunction parameter_1 parameter_2

[function] myFunction() {
    echo "Parameters: $1 $2"
}

# this will work
myFunction parameter_1 parameter_2

Do while with iterator:

COUNT=0

while [ $COUNT -lt 10 ]
do
        (( COUNT++ ))
        echo $COUNT
done

Boolean in BASH:

VAR=true

if [ "$VAR" = true ] ; then
    echo 'true'
fi

Check if a variable is set:

if [ -z ${var+x} ]; then
    echo "var is unset"
else
    echo "var is set to '$var'"
fi

# this will return true if a variable is unset or set to the empty string ("")
if [ -z "$VAR" ];

For in bash:

for i in $( ls ) ; do
  rm -R $i
done

Multiline file creating:

# the terminating EOF must be at the very beginning of the line
cat > /path/to/file <<EOF
line 1
line 2
EOF

# with sudo
sudo bash -c "cat > /path/to/file" << EOF
line1
line2
EOF

Make sure only root can run our script:

if [[ $EUID -ne 0 ]]; then
   echo "This script must be run as root" 1>&2
   exit 1
fi

Stdout/stderr redirects:

# Both threads (stdout/stderr) will be redirected to a file
<command> >file 2>&1

# Stdout to file. Stderr to terminal
<command> 2>&1 >file

Logical operators for if-then-else-fi construction:

-z $VAR         # string is empty
-n $VAR         # string isn't empty

$VAR = $VAR     # strings are equal (= is POSIX; == also works in bash tests)
$VAR != $VAR    # strings aren't equal

-eq             # equal
-ne             # not equal

-lt(<)          # less
-le(<=)         # less or equal

-gt(>)          # more
-ge(>=)         # more or equal

!               # logical not
-a(&&)          # logical and
-o(||)          # logical or

-e <file>       # file exists
-d <file>       # file exists and is a directory
-f <file>       # is a regular file (not a directory or device file)
-b <file>       # file is a block device
-w <file>       # file exists and the write permission is granted
-x <file>       # file exists and the execute permission is granted
-r <file>       # file exists and the read permission is granted
-s <file>       # file exists and its size is greater than zero (i.e. it is not empty)

http://tldp.org/LDP/Bash-Beginners-Guide/html/sect_07_01.html

https://www.opennet.ru/docs/RUS/bash/bash-4.html

Positional parameters

$*      $1 $2 $3 ... ${N}
$@      $1 $2 $3 ... ${N}
"$*"    "$1c$2c$3c...c${N}"     # c is the first character of IFS
"$@"    "$1" "$2" "$3" ... "${N}"
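
A short demonstration of the difference between "$@" and "$*" (IFS left at its default):

#!/bin/bash
set -- one "two three" four     # set the positional parameters

for a in "$@"; do echo "arg: $a"; done
# arg: one
# arg: two three
# arg: four

echo "joined: $*"
# joined: one two three four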

set

set [abefhkmnptuvxldCHP] [-o option] [argument]:

-a | allexport      mark variables that are modified or created for export.

-b | notify         report terminated background jobs immediately, before printing the next primary prompt.

-e | errexit        exit immediately if a command exits with a non-zero status.

-f | noglob         disable filename generation (globbing).

-h                  locate and remember (hash) commands defined as functions before the function is executed.

-k                  place all keyword arguments in the environment for a command, not just those that precede the command name.

-m | monitor        enable job control (see chapter 5).

-n | noexec         read commands but do not execute them.

-o option_name      set the flag corresponding to option_name.

braceexpand         the shell performs brace expansion (see section 2.2).

emacs               use the emacs command-line editing interface (see chapter 7 "Command Line Editing").

ignoreeof           the shell does not exit on reading EOF.

interactive-comments        in an interactive shell, a word beginning with '#' causes that word and all remaining characters on the line to be ignored.

posix               change Bash behavior to match the POSIX 1003.2 standard where the default behavior differs from it. Intended to make the shell strictly conformant to this standard.

vi                  use the vi-style command-line editing interface.

-p | privileged     enable privileged mode. In this mode the $ENV file is not processed and shell functions are not inherited from the environment. It is enabled automatically on startup if the effective user (group) id is not equal to the real user (group) id. Turning this option off sets the effective user (group) id to the real user (group) id.

-t                  exit after reading and executing one command.

-u | nounset        treat unset variables as an error during substitution.

-v | verbose        print shell input lines as they are read.

-x | xtrace         print commands and their arguments as they are executed.

-l                  save and restore the binding of the name in a for command.

-d | nohash         disable hashing of commands found for execution. Normally commands are remembered in a hash table and, once found, are not searched for again.

-C | noclobber      do not allow output redirection to overwrite existing files.

-H | histexpand     enable '!'-style history substitution. This flag is on by default.

-P | physical       if set, do not follow symbolic links when executing commands such as cd that change the current directory; use the physical directory instead.

--                  if no arguments follow this flag, the positional parameters are unset. Otherwise the positional parameters are set to the arguments, even if some of them begin with a '-'.

-                   signals the end of options; the remaining arguments are assigned to the positional parameters. The -x and -v options are turned off. If there are no arguments, the positional parameters remain unchanged.

Using + instead of - turns these flags off. The flags can also be used on shell invocation. The current set of flags can be found in $-. The remaining N arguments are positional parameters and are assigned, in order, to $1, $2, ..., $N. If no arguments are given, all shell variables are printed.

Cloning disk

These instructions describe how to clone an entire physical disk onto another physical disk.

https://wiki.archlinux.org/index.php/disk_cloning

dd (cloning)

This will clone the entire drive, including the MBR (and therefore bootloader), all partitions, UUIDs, and data:

dd if=/dev/sdX of=/dev/sdY bs=32M conv=sync,noerror
  • noerror instructs dd to continue operation, ignoring all read errors. Default behavior for dd is to halt at any error.
  • sync fills input blocks with zeroes if there were any read errors, so data offsets stay in sync.
  • bs= sets the block size. Defaults to 512 bytes, which is the “classic” block size for hard drives since the early 1980s, but is not the most convenient. Use a bigger value, 64K or 128K.

Note

If you are positive that your disk does not contain any errors, you could proceed using a larger block size, which will increase the speed of your copying several fold. For example, changing bs from 512 to 64K changed copying speed from 35 MB/s to 120 MB/s on a simple Celeron 2.7 GHz system. But keep in mind that read errors on the source disk will end up as block errors on the destination disk, i.e. a single 512-byte read error will mess up the whole 64 KiB output block.

Note

If you would like to view dd progressing, use the status=progress option.

Maybe faster (not tested):

dd if=/dev/sda | dd of=/dev/sdb
Tested

Clone all disk:

sudo dd if=/dev/sda of=/dev/sdb bs=32M conv=sync,noerror status=progress

120057757696 bytes (120 GB, 112 GiB) copied, 2919.16 s, 41.1 MB/s
3577+1 records in
3578+0 records out
120057757696 bytes (120 GB, 112 GiB) copied, 2951.48 s, 40.7 MB/s

Clone just one partition (Windows 7 system disk):

# check end of partition size
sudo fdisk -l /dev/sda  # source disk
# /dev/sda1   End: 41945714
# bs=4096 covers 8 sectors of 512 bytes, so count = (end_sector + 1) / 8, rounded up: 5243215
sudo dd if=/dev/sda of=/dev/sdb bs=4096 count=5243215 conv=sync,noerror status=progress

cat (cloning)

Cloning with cat:

cat /dev/sda > /dev/sdb

If you want to run this command in sudo, you need to make the redirection happen as root:

sudo sh -c 'cat /dev/sdb >/dev/sdc'

pv (cloning)

If you want a progress report and your unix variant doesn't provide an easy way to get at file descriptor positions, you can install and use pv instead of cat:

pv /dev/sda > /dev/sdb

Backing up to a file

To save space, you can compress data produced by dd with gzip, e.g.:

dd if=/dev/hdb | gzip -c > /image.img.gz

You can restore your disk with:

gunzip -c /image.img.gz | dd of=/dev/hdb

CRON

Online checker - https://crontab.guru/

Edit CRON tasks:

sudo crontab -e

Cron log:

grep CRON /var/log/syslog

Schema:

* * * * *
| | | | |
| | | | +--- Days of week     (range: 0-7, 0 and 7 are Sunday)
| | | +----- Months           (range: 1-12)
| | +------- Days of month    (range: 1-31)
| +--------- Hours            (range: 0-23)
+----------- Minutes          (range: 0-59)

# <timing>  <user> <command>
11 * * * *  root   /usr/lib/command
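
For example, a system crontab entry (script path is an assumption) that runs a backup every day at 02:30 and logs all output:

# <timing>  <user> <command>
30 2 * * *  root   /path/to/backup.sh >> /var/log/backup.log 2>&1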

curl

Download a tarball from GitHub:

curl -L https://github.com/pinard/Pymacs/tarball/v0.24-beta2 | tar zx

Hide output:

curl -s 'http://example.com' > /dev/null

Basic auth:

curl -u user:password https://example.com

DASH

Installation

dash-install.sh:

DASH_VERSION="dash-0.5.9.1"
DASH_ARCHIVE="$DASH_VERSION.tar.gz"

curl -O http://gondor.apana.org.au/~herbert/dash/files/$DASH_ARCHIVE
tar -xf $DASH_ARCHIVE
rm $DASH_ARCHIVE
cd $DASH_VERSION/
./configure
make
make install
echo "$(which dash)" >> /etc/shells
cd ..
rm -R $DASH_VERSION
chmod +x dash-install.sh
./dash-install.sh

Drupal + Apache + PostgreSQL

Tested on Ubuntu Xenial (16.04)

Install prerequisites:

sudo apt update
sudo apt install postgresql apache2 php libapache2-mod-php7.0 php7.0-gd php7.0-xml php7.0-pgsql

a2enmod rewrite ssl
systemctl restart apache2

# check ssl, rewrite
apache2ctl -M | egrep 'ssl|rewrite'

Download and extract Drupal:

cd /var/www/
wget https://ftp.drupal.org/files/projects/drupal-8.2.2.tar.gz
tar -xvzf drupal-8.2.2.tar.gz
mv drupal-8.2.2 drupal
chown www-data:www-data -R drupal/

Apache configuration

SSL:

mkdir /etc/apache2/ssl

# generate a self-signed SSL certificate
sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/apache2/ssl/drupalssl.key -out /etc/apache2/ssl/drupalssl.crt
sudo chmod 600 /etc/apache2/ssl/*

Create virtual host config /etc/apache2/sites-available/drupal.conf:

<VirtualHost *:80>
        ServerName www.mydrupal.co
        DocumentRoot /var/www/drupal

        # Redirect http to https
        RedirectMatch 301 (.*) https://www.mydrupal.co$1
</VirtualHost>

<VirtualHost _default_:443>

        # Server Info
        ServerName www.mydrupal.co
        ServerAlias mydrupal.co
        ServerAdmin webmaster@localhost

        # Web root
        DocumentRoot /var/www/drupal

        # Log configuration
        ErrorLog ${APACHE_LOG_DIR}/drupal-error.log
        CustomLog ${APACHE_LOG_DIR}/drupal-access.log combined

        #   Enable/Disable SSL for this virtual host.
        SSLEngine on

        # Self signed SSL Certificate file
        SSLCertificateFile      /etc/apache2/ssl/drupalssl.crt
        SSLCertificateKeyFile /etc/apache2/ssl/drupalssl.key

        <Directory "/var/www/drupal">
                Options FollowSymLinks
                AllowOverride All
                Require all granted
        </Directory>

        <FilesMatch "\.(cgi|shtml|phtml|php)$">
                        SSLOptions +StdEnvVars
        </FilesMatch>
        <Directory /usr/lib/cgi-bin>
                        SSLOptions +StdEnvVars
        </Directory>

        BrowserMatch "MSIE [2-6]" \
                        nokeepalive ssl-unclean-shutdown \
                        downgrade-1.0 force-response-1.0
        # MSIE 7 and newer should be able to use keepalive
        BrowserMatch "MSIE [17-9]" ssl-unclean-shutdown

</VirtualHost>

Enable new virtual host:

# check config
apachectl configtest

# enable new host
a2ensite drupal

# disable default host config
a2dissite 000-default

systemctl restart apache2

DB Postgres for drupal

Create DB and User for Drupal:

su - postgres

psql
    CREATE USER drupal WITH password 'PASSWORD';
    CREATE DATABASE drupal OWNER drupal;
    GRANT ALL privileges ON DATABASE drupal TO drupal;

exit

psql -h localhost drupal drupal
    ALTER DATABASE "drupal" SET bytea_output = 'escape';

VMWARE ESXI

Storages path /vmfs/volumes

ffmpeg

Encoding video for the web - https://gist.github.com/Vestride/278e13915894821e1d6f

Reduce video size (change CRF value to change size/quality of result video):

# H.264 codec
ffmpeg -i input.mp4 -vcodec libx264 -crf [18..24] output.mp4

# H.265 codec
ffmpeg -i input.mp4 -vcodec libx265 -crf [24..30] output.mp4

gcloud

gcloud init

gcloud auth list    # list accounts whose credentials are stored on the local system
gcloud config list  # list the properties in your active SDK configuration

gcloud info         # view information about your Cloud SDK installation and the active SDK configuration
gcloud help         # view help, e.g. "gcloud help compute instances create"

# Google cloud registry: https://cloud.google.com/container-registry/docs/pushing

gcloud compute ssh <NODE>   # ssh to a node ('logout' to exit)

# BUILD AND PUSH/PULL IMAGE #

docker build -t <user>/<image_name> .                   # build
docker tag <user>/<image> <zone>/<project>/<image>      # copy to new tag

gcloud docker push gcr.io/your-project-id/example-image     # push to registry
gcloud docker pull gcr.io/your-project-id/example-image     # pull from registry

gcloud container clusters get-credentials <cluster_name> --zone <zone> --project <project>

gcloud container clusters delete <cluster_name>

gcloud container clusters list
gcloud container clusters describe guestbook

# Create cluster #

https://cloud.google.com/compute/docs/regions-zones/regions-zones

gcloud config set compute/zone us-central1-a
gcloud container clusters create <cluster_name>
gcloud container clusters get-credentials <cluster_name>

# Delete images
gsutil ls
# You should see: gs://artifacts.<$PROJECT_ID>.appspot.com/
# To remove all the images under this path, run:
gsutil rm -r gs://artifacts.$PROJECT_ID.appspot.com/

Git

https://git-scm.com/book/ru/v2

Dealing with line endings:

# https://help.github.com/articles/dealing-with-line-endings/

git add . -u
git commit -m "Saving files before refreshing line endings"
rm .git/index
git reset
git status
git add -u
git add .gitattributes
git commit -m "Normalize all the line endings"

# git push

Set up custom SSH key for specific git project:

# clone new project
GIT_SSH_COMMAND="ssh -i /path/to/ssh_key" git clone git@*****/project.git

# go to project dir
cd project

# set project-specific option
git config core.sshCommand "ssh -i /path/to/ssh_key -F /dev/null"

# you can see project config here .git/config
# nice to set right name and email for specific project
git config user.name "User Name"
git config user.email "email"

Edit global/local Git config file:

git config --global --edit
git config -e

# list params
git config [--global] -l

# set params
git config [--global] user.name "User Name"
git config [--global] user.email "email"
# get param
git config [--global] user.name

Credentials store

Store credentials for current git:

git config credential.helper store

# or add in .git/config
[credential]
    helper = store

# default credentials file location
~/.git-credentials

Development scheme

https://habrahabr.ru/post/106912/

Branches:

master  - permanent, stable. With tags
develop - permanent
feature - temporary
release - temporary
hotfix  - temporary

Feature branches. Branch off from develop; must be merged back into develop:

# Start new feature
git checkout -b myfeature develop

# Do changes, commit to myfeature

# End new feature
git checkout develop        # change branch
git merge --no-ff myfeature # merge branches
git branch -d myfeature     # delete feature branch
git push origin develop     # push changes to develop

Release branches. Branch off from develop; must be merged back into develop and master. Usually named release-*:

# Start release
git checkout -b release-1.2 develop
./bump-version.sh 1.2                           # some changes in project
git commit -a -m "Bumped version number to 1.2"

# End release
git checkout master             # change branch
git merge --no-ff release-1.2   # merge branches
git tag -a 1.2                  # set tag

# Keep actual develop branch
git checkout develop
git merge --no-ff release-1.2

# Delete release branch
git branch -d release-1.2

Hotfix branches. Branch off from master; must be merged back into develop and master. Usually named hotfix-*:

# Start hotfix
git checkout -b hotfix-1.2.1 master
./bump-version.sh 1.2.1             # some fixes in project
git commit -a -m "Bumped version number to 1.2.1"

git commit -m "Fixed severe production problem"

# End hotfix
git checkout master
git merge --no-ff hotfix-1.2.1
git tag -a 1.2.1

# Keep actual develop branch
git checkout develop
git merge --no-ff hotfix-1.2.1

# Delete hotfix branch
git branch -d hotfix-1.2.1

Git tips and tricks

See all of the changes from latest pull:

git diff master@{1} master

Push current changes to new branch (auto create it on remote and local):

# done some changes
git checkout -b <new-branch>
git add .
git commit -m "added feature"
git push origin <new-branch>

Avoid the "Are you sure you want to continue connecting (yes/no)?" prompt when connecting via SSH keys:

# bitbucket
ssh-keyscan -t rsa bitbucket.org >> ~/.ssh/known_hosts
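
The same approach works for other hosts, e.g. GitHub:

ssh-keyscan -t rsa github.com >> ~/.ssh/known_hosts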

Delete credentials from GIT repository [rus] - https://the-bosha.ru/2016/07/01/udaliaem-sluchaino-zakomichennye-privatnye-dannye-iz-git-repozitoriia/

Force “git pull” to overwrite local files:

# Downloads the latest from remote without trying to merge or rebase anything
git fetch --all

# Resets the master branch to what you just fetched. The --hard option changes all the files in your working tree to match the files in origin/master
git reset --hard origin/master

# OR If you are on some other branch
git reset --hard origin/<branch_name>

# Maintain current local commits by creating a branch from master before resetting
git checkout master
git branch new-branch-to-save-current-commits
git fetch --all
git reset --hard origin/master

Clone only single branch:

git clone -b <branch> --single-branch git://sub.domain.com/repo.git

Change url:

git remote set-url origin git@bitbucket.org:johnsmith/test.git

Add remote repository:

git remote add origin https://github.com/johnsmith/test.git
git push origin serverfix               # remote server/branch
git push origin serverfix:awesomebranch     # local branch serverfix will transfer to branch awesomebranch of remote project

git remote      # show configured remote servers
    -v          # also show URLs

git add <file_name>     # add file to git index (stage changes)
    .                   # stage all changes (all files)

git push origin --delete serverfix  # delete branch on server

git fetch [remote_name]             # fetch from the server without merging into local branches

git branch                          # show all local branches
    -v                              # also show last commits on branches
    --merged/--no-merged            # show branches merged/not merged into the current one
    -a                              # also show remote branches

git branch testing      # create new branch without switching to it
    -d hotfix           # delete merged branch
    -D testing          # delete branch even if not merged

git checkout testing    # switch to branch testing (HEAD moves to branch testing)
    -b newbranch        # create and switch to branch
    COMMIT_HASH         # switch to specific commit

git checkout path/to/file   # discard local changes, restore the file from the index

git merge hotfix        # merge hotfix into current branch

Git log:

# Shows commit history and branch indexes and branch history
git log --oneline --decorate --graph --all

# get commit history
git log
git log --pretty=oneline
git log --pretty=oneline --author="Valentin Siryk"
# formatting
git log --pretty=format:"%h - %s : %ad %an"

Git stash:

# Stash unstaged local changes (changes which were not added to index by 'git add')
git stash
git stash list  # show all stashed changes
git stash pop   # unstash last changes and drop these from stash
git stash apply # unstash last changes (without dropping from stash)
git stash show  # show last changes from stash
git stash drop  # drop last stashed changes from stash
git stash clear # drop all stashed changes

Git reset:

git reset [--mixed] COMMIT_HASH     # reset to specific commit. Changes will be removed from index.
git reset --soft    COMMIT_HASH     # reset to specific commit. Changes will be ready to commit - staged state.
git reset --hard    COMMIT_HASH     # reset to specific commit. You will lose all changes!

Delete commits from remote server:

# Make sure that nobody could not pull commits before deleting!
git reset COMMIT_HASH   # remove all commits after COMMIT_HASH locally
git push -f             # force push commit history to remote server

Git revert:

# Creates commit to revert commit
git revert HEAD         # create commit to revert last commit

Command aliases:

git config --global alias.co checkout
git config --global alias.br branch
git config --global alias.ci commit
git config --global alias.st status
git config --global alias.unstage 'reset HEAD --'
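
After defining the aliases above:

git co master           # same as: git checkout master
git unstage file.txt    # same as: git reset HEAD -- file.txt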

GitLab

GitLab CI Server

Obtain the Active User count:

sudo gitlab-rails runner 'p User.active.count'

Obtain the Maximum billable user count:

sudo gitlab-rails runner 'p ::HistoricalData.max_historical_user_count'

HAProxy

Add header if ssl:

http-request add-header X-Forwarded-Proto https if { ssl_fc }

Check config:

haproxy -c -f /etc/haproxy/haproxy.cfg

Log rotation config /etc/logrotate.d/haproxy:

/var/log/haproxy.log {
    daily
    rotate 52
    missingok
    notifempty
    compress
    delaycompress
    postrotate
        invoke-rc.d rsyslog rotate >/dev/null 2>&1 || true
    endscript
}

haproxy.cfg:

frontend fe

# HTTP log format, the most advanced format for HTTP proxying.
# It provides the same information as the TCP format plus HTTP-specific fields
# such as the request, the status code, and captures of headers and cookies.
# This format is recommended for HTTP proxies.
   option httplog

backend be

    stick-table type string len 32 size 1M peers haproxy-peers
    # type string len 32 - String 32 characters
    # size 1M - maximum number of entries that can fit in the table. Count approximately 50 bytes per entry, plus the size of a string if any.
    # The size supports suffixes "k", "m", "g" for 2^10, 2^20 and 2^30 factors.

    # Define a request pattern to associate a user to a server
    stick on req.cook(SERVERID)

    # Define a request pattern matching condition to stick a user to a server
    stick match <pattern> [table <table>] [{if | unless} <cond>]

Proxy to a backend depending on hostname (HTTPS only). If no hostname rule matches, traffic is proxied to the default backend:

frontend https-front-session
        bind *:443 ssl crt /etc/ssl/key.pem
        bind *:80
        redirect scheme https if !{ ssl_fc }

        default_backend back-session

        acl is_old hdr_end(host) -i old.example.com
        use_backend old_example if is_old

        acl is_new hdr_end(host) -i new.example.com
        use_backend new_example if is_new

SSL backend:

backend be
    balance roundrobin
    server s1 example.com:443 check ssl verify none
# ssl verify none - without ssl verification

HATop

HATop is an interactive ncurses client that displays real-time monitoring information and statistics for the HAProxy TCP/HTTP load balancer.

http://feurix.org/projects/hatop/

First of all, make sure you have the stats socket enabled in the haproxy config:

global
  stats socket /run/haproxy/admin.sock mode 0600 level admin

That’s all you need to use HATop:

sudo hatop -s /run/haproxy/admin.sock

HAProxy + LetsEncrypt

sudo apt install letsencrypt

haproxy.cfg:

frontend https-front-session
        bind *:443 ssl crt /etc/ssl/le/one.example.com.pem crt /etc/ssl/le/two.example.com.pem
        bind *:80

        redirect scheme https if !{ ssl_fc }

        default_backend back-session

        acl is_le path_beg /.well-known/acme-challenge/
        acl is_old hdr_end(host) -i one.example.com
        acl is_new hdr_end(host) -i two.example.com

        # Order is important
        use_backend le if is_le
        use_backend b1 if is_old
        use_backend b2 if is_new

backend le
        server letsencrypt 127.0.0.1:54321

Get cert:

sudo letsencrypt certonly --agree-tos --renew-by-default --standalone-supported-challenges http-01 --http-01-port 54321 -d <one.example.com>

renewal.sh:

#!/bin/sh

letsencrypt renew --agree-tos --standalone-supported-challenges http-01 --http-01-port 54321

DOMAIN=<one.example.com>
cat /etc/letsencrypt/live/$DOMAIN/fullchain.pem /etc/letsencrypt/live/$DOMAIN/privkey.pem > /etc/ssl/le/$DOMAIN.pem

DOMAIN=<two.example.com>
cat /etc/letsencrypt/live/$DOMAIN/fullchain.pem /etc/letsencrypt/live/$DOMAIN/privkey.pem > /etc/ssl/le/$DOMAIN.pem

service haproxy reload
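
To run the renewal script periodically, a crontab entry along these lines could be used (the script path and schedule are assumptions):

# m h dom mon dow command
30 3 1,15 * * /root/renewal.sh >> /var/log/le-renewal.log 2>&1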

Test HDD speed in Linux

Write speed (dd)

sync; dd if=/dev/zero of=tempfile bs=1024k count=1024; sync

# 1024+0 records in
# 1024+0 records out
# 1073741824 bytes (1.1 GB) copied, 3.28696 s, 327 MB/s

# if you want to delete created tempfile
rm tempfile

Read speed (dd)

tempfile has been cached in the buffer, so its read speed will appear higher than the real one. To get the real speed, the cache has to be dropped. First, run the following command to measure the READ speed from the buffer:

$ dd if=tempfile of=/dev/null bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 0.159273 s, 6.7 GB/s

Drop the cache and measure the real READ speed directly from the hard drive:

$ sudo /sbin/sysctl -w vm.drop_caches=3
vm.drop_caches = 3
$ dd if=tempfile of=/dev/null bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 2.27431 s, 472 MB/s

Test read/write speed of an external drive

To test the performance of an external HDD, a USB stick, any other removable media, or a remote filesystem, simply cd to its mount point and run the commands above. Alternatively, instead of tempfile you can write the full path under the mount point, for example:

$ sync; dd if=/dev/zero of=/media/user/MyUSB/tempfile bs=1M count=1024; sync

This creates tempfile on the tested drive. Don't forget to delete it when the tests are finished.

Test HDD performance with hdparm

$ sudo apt-get install hdparm   # Debian/Ubuntu
$ sudo yum install hdparm       # RHEL/CentOS

$ sudo hdparm -Tt /dev/sda

/dev/sda:
 Timing cached reads:   16924 MB in  2.00 seconds = 8469.95 MB/sec
 Timing buffered disk reads: 1386 MB in  3.00 seconds = 461.50 MB/sec

Hg mercurial

https://www.mercurial-scm.org/wiki/Tutorial
https://www.mercurial-scm.org/guide

# ssh access https://confluence.atlassian.com/bitbucket/set-up-ssh-for-mercurial-728138122.html

sudo apt-get install mercurial

nano ~/.hgrc

[ui]
username = John Smith <john@gmail.com>

hg status # show changes in working directory

hg branch                           # show current branch (prints e.g. "feature1")

hg branch feature1                  # create branch
hg update feature1                  # switch to branch
hg commit -m "start new branch"     # commit new branch
hg push --new-branch                # push, creating new remote branches
hg addremove                        # add all new files that are not ignored, and remove all locally missing files

hg commit --close-branch -m "finished feature1"     # close the branch

hg merge feature1                   # merge branches

hg commit == hg ci
hg update == hg up

history

History file path: ~/.bash_history.

Show history of commands:

history
        20          # show last 20 commands
        -c          # clear history (don't forget to also clear the .bash_history file)
        -d <num>    # delete history entry <num>

set +o history  # disable history saving
set -o history  # enable history saving

echo $HISTSIZE  # show history size

<space>command  # command prefixed with a space is not saved to history (requires HISTCONTROL=ignorespace)

Exec command from history:

!<num>      # show and exec specific command
!!          # show and exec last command

Add time to history:

echo "export HISTTIMEFORMAT="%d.%m.%Y-%H:%M:%S \"" >> ~/.bashrc
source .bashrc

.htaccess

https://habrahabr.ru/company/sprinthost/blog/129560/

$1 # refers to the part of the original path captured by the first pair of parentheses, $2 to the second pair, and so on.

redirect|R [=code] (forces a redirect) A Substitution prefix of the form http://thishost[:thisport]/ (building a new URL from some URI) triggers an external redirect. If no code is given, the response has HTTP status 302 (MOVED TEMPORARILY). To use other response codes in the 300-400 range, simply write them as a number or use one of the following symbolic names: temp (default), permanent, seeother.

last|L (last rule) Stop the rewriting process here and apply no further rewrite rules. This corresponds to the last statement in Perl or the break statement in C. Use this flag to prevent the current URL from being rewritten by the rules that follow. For example, use it to rewrite the root URL ('/') to a real one, e.g. '/e/www/'.

There is a special format: %{HTTP:header}, where header can be any HTTP MIME header name. It is looked up in the HTTP request.

The Flags argument of the RewriteCond directive is a comma-separated list of the following flags: 'nocase|NC' (case-insensitive).

RewriteBase # sets the base URL for rewrites in a per-directory context. It is used in per-directory .htaccess configuration files.
The local directory prefix is stripped at this stage of processing, and your rewrite rules act only on the remaining part. At the end it is automatically prepended back to the path.

Rewrite to subdir

RewriteEngine On
#RewriteBase /

#RewriteCond %{HTTP_HOST} ^test.loc
#RewriteRule bbc/(.*)$ http://bbc.test.loc/$1 [R=301,L]

#RewriteCond %{HTTP_HOST} ^bbc.test.loc$
#RewriteCond %{REQUEST_URI} !^/bbs
#RewriteRule ^(.*)$ /bbs/$1 [L,QSA]

RewriteCond %{HTTP_HOST} ^www.test2.tixclix.com$
RewriteCond %{REQUEST_URI} !^/test/public_fcn
RewriteRule ^(.*)$ /test/public_fcn/$1 [L,QSA]


RewriteCond %{HTTP_HOST} ^test2.tixclix.com$
RewriteCond %{REQUEST_URI} !^/blog
RewriteRule ^(.*)$ /blog/$1 [L,QSA]


#RewriteRule ^(.*)$ /test/$1 [L,QSA]

htpasswd

Generate BCrypt password and write to file:

htpasswd -Bc <file> <username>

Generate BCrypt password and write to stdout:

htpasswd -Bn <username>

Install BBB auto

BigBlueButton 0.81 on Debian 7 Wheezy

aptitude install sudo
echo "deb http://ubuntu.bigbluebutton.org/lucid_dev_081/ bigbluebutton-lucid main" | sudo tee /etc/apt/sources.list.d/bigbluebutton.list
#
# Add the BigBlueButton key
wget http://ubuntu.bigbluebutton.org/bigbluebutton.asc -O- | sudo apt-key add -
#
# Add sources
echo "deb http://security.debian.org/ wheezy/updates main contrib non-free" | sudo tee /etc/apt/sources.list
echo "deb-src http://security.debian.org/ wheezy/updates main contrib non-free" | sudo tee -a /etc/apt/sources.list
echo "deb http://ftp.debian.org/debian stable main contrib non-free" | sudo tee -a /etc/apt/sources.list
echo "deb-src http://ftp.debian.org/debian stable main contrib non-free" | sudo tee -a /etc/apt/sources.list
echo "deb http://ftp.debian.org/debian wheezy main contrib non-free" | sudo tee -a /etc/apt/sources.list
echo "deb-src http://ftp.debian.org/debian wheezy main contrib non-free" | sudo tee -a /etc/apt/sources.list
echo "deb http://ftp.debian.org/debian/ wheezy-updates main contrib non-free" | sudo tee -a /etc/apt/sources.list
echo "deb-src http://ftp.debian.org/debian/ wheezy-updates main contrib non-free" | sudo tee -a /etc/apt/sources.list
echo "deb http://ftp.debian.org/debian wheezy-proposed-updates main contrib non-free" | sudo tee -a /etc/apt/sources.list
echo "deb-src http://ftp.debian.org/debian wheezy-proposed-updates main contrib non-free" | sudo tee -a /etc/apt/sources.list
echo "deb http://ftp.debian.org/debian wheezy-backports main contrib non-free" | sudo tee -a /etc/apt/sources.list
echo "deb-src http://ftp.debian.org/debian wheezy-backports main contrib non-free" | sudo tee -a /etc/apt/sources.list
echo "deb http://www.deb-multimedia.org wheezy main non-free" | sudo tee -a /etc/apt/sources.list
echo "deb-src http://www.deb-multimedia.org wheezy main non-free" | sudo tee -a /etc/apt/sources.list
echo "deb http://www.deb-multimedia.org wheezy-backports main" | sudo tee -a /etc/apt/sources.list
echo "deb-src http://www.deb-multimedia.org wheezy-backports main" | sudo tee -a /etc/apt/sources.list
#
sudo aptitude update
sudo aptitude install deb-multimedia-keyring
#sudo aptitude install mc htop ssh
sudo aptitude update
#
sudo aptitude  -f install subversion dpkg-dev debhelper dh-make devscripts fakeroot
sudo aptitude  -f dist-upgrade
#
sudo aptitude  -f install build-essential git-core checkinstall yasm texi2html libopencore-amrnb-dev \
libopencore-amrwb-dev libsdl1.2-dev libtheora-dev libvorbis-dev libx11-dev libxfixes-dev libxvidcore-dev zlib1g-dev
#
sudo aptitude  -f install libfaad-dev faad faac libfaac0 libfaac-dev libmp3lame-dev x264 libx264-dev libxvidcore-dev \
build-essential checkinstall libffi5 subversion libmpfr4 libmpfr-dev
#sudo aptitude  -f install ffmpeg
sudo aptitude  -f install

# Copy files to fix dependencies
cd /var/cache/apt/archives
wget http://ftp.us.debian.org/debian/pool/main/g/gmp/libgmp3c2_4.3.2+dfsg-1_amd64.deb
wget http://free.nchc.org.tw/ubuntu//pool/main/m/mpfr/libmpfr1ldbl_2.4.2-3ubuntu1_amd64.deb
wget http://ftp.us.debian.org/debian/pool/main/o/openssl/libssl0.9.8_0.9.8o-4squeeze14_amd64.deb
#
cd /var/cache/apt/archives
sudo dpkg -i libgmp3c2_4.3.2+dfsg-1_amd64.deb libmpfr1ldbl_2.4.2-3ubuntu1_amd64.deb libssl0.9.8_0.9.8o-4squeeze14_amd64.deb
sudo aptitude -f install

#
# Copy sun-java6 files & dependencies
cd /var/cache/apt/archives
wget http://ftp.us.debian.org/debian/pool/non-free/s/sun-java6/ia32-sun-java6-bin_6.26-0squeeze1_amd64.deb
wget http://ftp.us.debian.org/debian/pool/non-free/s/sun-java6/sun-java6-bin_6.26-0squeeze1_amd64.deb
wget http://ftp.us.debian.org/debian/pool/non-free/s/sun-java6/sun-java6-jdk_6.26-0squeeze1_amd64.deb
wget http://ftp.us.debian.org/debian/pool/non-free/s/sun-java6/sun-java6-jre_6.26-0squeeze1_all.deb
#
#wget http://ftp.us.debian.org/debian/pool/main/i/ia32-libs/ia32-libs-i386_0.4_i386.deb
#wget http://ftp.us.debian.org/debian/pool/main/i/ia32-libs/ia32-libs_20140131_amd64.deb
#wget http://ftp.us.debian.org/debian/pool/main/e/eglibc/libc6-i386_2.11.3-4_amd64.deb
#wget http://ftp.us.debian.org/debian/pool/main/e/eglibc/libc6_2.11.3-4_amd64.deb
#wget http://ftp.us.debian.org/debian/pool/main/e/eglibc/libc6-i686_2.11.3-4_i386.deb
#

cd /var/cache/apt/archives
#sudo dpkg -i ia32-libs-i386_0.4_i386.deb libc6-i686_2.11.3-4_i386.deb
#sudo dpkg -i ia32-libs_20140131_amd64.deb libc6-i386_2.11.3-4_amd64.deb libc6_2.11.3-4_amd64.deb
sudo dpkg -i ia32-sun-java6-bin_6.26-0squeeze1_amd64.deb sun-java6-bin_6.26-0squeeze1_amd64.deb
sudo dpkg -i sun-java6-jdk_6.26-0squeeze1_amd64.deb sun-java6-jre_6.26-0squeeze1_all.deb

sudo aptitude -f install openjdk-6-jdk sun-java6-jdk sun-java6-jre && sudo aptitude -y -f install

# Install Flash plugin
sudo aptitude -f install flashplugin-nonfree

sudo aptitude update
sudo aptitude -f dist-upgrade

# Build-dep JAVA & PHP5 & Python & Ruby for ruby1.9.2_1.9.2-p290
################################################################
#sudo aptitude -f build-dep php5
#sudo aptitude -f build-dep python
################################################################
sudo aptitude -f build-dep ruby
sudo aptitude -f build-dep java-package
#
sudo service apache2 stop && sudo aptitude -y -f --purge remove apache2
sudo aptitude -f install nginx nginx-common nginx-full
sudo service nginx restart
#
# Install The Latest Libreoffice via backport
sudo aptitude -t wheezy-backports install libreoffice libreoffice-common libreoffice-gnome
sudo aptitude -f dist-upgrade && sudo aptitude -y -f install
#
#
#
sudo aptitude -f install build-essential libssl-dev libcurl4-openssl-dev libreadline-dev libxslt-dev libxml2-dev libreadline5 libyaml-0-2
#
wget https://bigbluebutton.googlecode.com/files/ruby1.9.2_1.9.2-p290-1_amd64.deb
sudo dpkg -i ruby1.9.2_1.9.2-p290-1_amd64.deb
sudo aptitude -f install

sudo update-alternatives --install /usr/bin/ruby ruby /usr/bin/ruby1.9.2 500 \
                         --slave /usr/bin/ri ri /usr/bin/ri1.9.2 \
                         --slave /usr/bin/irb irb /usr/bin/irb1.9.2 \
                         --slave /usr/bin/erb erb /usr/bin/erb1.9.2 \
                         --slave /usr/bin/rdoc rdoc /usr/bin/rdoc1.9.2

sudo update-alternatives --install /usr/bin/gem gem /usr/bin/gem1.9.2 500
#sudo aptitude clean

# Install All Needed Gems
sudo gem install rubygems-update -v 1.3.7
sudo gem install builder -v '2.1.2' -- --with-cflags=-w --nodoc
sudo gem install cucumber -v '0.9.2' -- --with-cflags=-w --nodoc
sudo gem install json -v '1.4.6' -- --with-cflags=-w --nodoc
sudo gem install diff-lcs -v '1.1.2' -- --with-cflags=-w --nodoc
sudo gem install term-ansicolor -v '1.0.5' -- --with-cflags=-w --nodoc
sudo gem install gherkin -v '2.2.9' -- --with-cflags=-w --nodoc
sudo gem install god -v '0.13.3' -- --with-cflags=-w --disable-install-doc --nodoc
sudo gem install addressable -v '2.3.5' -- --with-cflags=-w --disable-install-doc --nodoc
sudo gem install fastimage -v '1.6.1' -- --with-cflags=-w --disable-install-doc --nodoc
sudo gem install curb -v '0.7.15' -- --with-cflags=-w --nodoc
sudo gem install mime-types -v '1.16' -- --with-cflags=-w --nodoc
sudo gem install nokogiri -v '1.4.4' -- --with-cflags=-w --nodoc
sudo gem install open4 -v '1.3' -- --with-cflags=-w --nodoc
sudo gem install rack -v '1.2.2' -- --with-cflags=-w --nodoc
sudo gem install redis -v '2.1.1' -- --with-cflags=-w --nodoc
sudo gem install redis-namespace -v '0.10.0' -- --with-cflags=-w --nodoc
sudo gem install tilt -v '1.2.2' -- --with-cflags=-w --nodoc
sudo gem install sinatra -v '1.2.1' -- --with-cflags=-w --nodoc
sudo gem install vegas -v '0.1.8' -- --with-cflags=-w --nodoc
sudo gem install resque -v '1.15.0' -- --with-cflags=-w --nodoc
sudo gem install rspec-core -v '2.0.0' -- --with-cflags=-w
sudo gem install rspec-expectations -v '2.0.0' -- --with-cflags=-w --disable-install-doc --nodoc
sudo gem install rspec-mocks -v '2.0.0' -- --with-cflags=-w --disable-install-doc --nodoc
sudo gem install rspec -v '2.0.0' -- --with-cflags=-w --disable-install-doc --nodoc
sudo gem install rubyzip -v '0.9.4' -- --with-cflags=-w --disable-install-doc --nodoc
sudo gem install streamio-ffmpeg -v '0.7.8' -- --with-cflags=-w --disable-install-doc --nodoc
sudo gem install trollop -v '1.16.2' -- --with-cflags=-w --disable-install-doc --nodoc

# Install Optional Gems
sudo gem install rails -v '2.3.5' -- --with-cflags=-w --disable-install-doc --nodoc
sudo gem install bundler -v '1.5.3' -- --with-cflags=-w --disable-install-doc --nodoc
#sudo gem install mysql -v '2.8.1' -- --with-cflags=-w --disable-install-doc --nodoc
sudo gem install rake -v '0.8.7' -- --with-cflags=-w --disable-install-doc --nodoc
sudo gem install activesupport -v '2.3.5' -- --with-cflags=-w --disable-install-doc --nodoc
sudo gem install rack -v '1.0.1' -- --with-cflags=-w --disable-install-doc --nodoc
sudo gem install actionpack -v '2.3.5' -- --with-cflags=-w --disable-install-doc --nodoc
sudo gem install actionmailer -v '2.3.5' -- --with-cflags=-w --disable-install-doc --nodoc
sudo gem install activerecord -v '2.3.5' -- --with-cflags=-w --disable-install-doc --nodoc
sudo gem install activeresource -v '2.3.5' -- --with-cflags=-w --disable-install-doc --nodoc
sudo gem install declarative_authorization -v '0.5.1' -- --with-cflags=-w --disable-install-doc --nodoc
sudo gem install fattr -v '2.2.1' -- --with-cflags=-w --disable-install-doc --nodoc
sudo gem install i18n -v '0.4.2' -- --with-cflags=-w --disable-install-doc --nodoc
sudo gem install session -v '3.1.0' -- --with-cflags=-w --disable-install-doc --nodoc
sudo gem install rush -v '0.6.8' -- --with-cflags=-w --disable-install-doc --nodoc
sudo gem install test-unit -v 1.2.3
#
## Install BigBlueButton 0.81
## Step 1
sudo aptitude -f install libjpeg62-dev libbs2b0 liblzo2-2 libmpg123-0 libsox-fmt-alsa libsox-fmt-base libsox2 liblockfile1 \
libvorbisidec1 libx264-132 mencoder sox vorbis-tools jsvc libcommons-daemon-java
#
## Step 2
sudo aptitude -f install java-package authbind cabextract libcommons-dbcp-java libcommons-pool-java libecj-java \
libgeronimo-jta-1.1-spec-java libtomcat6-java netcat-openbsd openoffice.org ttf-mscorefonts-installer
#
## Step 3
sudo aptitude -f install bbb-record-core red5 bbb-freeswitch
#
## Step 4
sudo aptitude -f install libjpeg62-dev
#
sudo aptitude -f install bbb-config \
authbind bbb-apps bbb-apps-deskshare bbb-apps-sip bbb-apps-video bbb-client bbb-config bbb-openoffice-headless \
bbb-playback-presentation bbb-web cabextract libcommons-dbcp-java libcommons-pool-java libecj-java \
libgeronimo-jta-1.1-spec-java libtomcat6-java netcat-openbsd openoffice.org redis-server-2.2.4 swftools-0.9.1 tomcat6 \
tomcat6-common bigbluebutton bbb-demo
#
sudo aptitude -y -f install
sudo aptitude clean

iptables

Usage

iptables [COMMAND] [options]

Commands:

--append        -A chain                    # Append to chain
--check         -C chain                    # Check for the existence of a rule
--delete        -D chain                    # Delete matching rule from chain
--delete        -D chain rulenum            # Delete rule rulenum (1 = first) from chain
--insert        -I chain [rulenum]          # Insert in chain as rulenum (default 1=first)
--replace       -R chain rulenum            # Replace rule rulenum (1 = first) in chain
--list          -L [chain [rulenum]]        # List the rules in a chain or all chains
--list-rules    -S [chain [rulenum]]        # Print the rules in a chain or all chains
--flush         -F [chain]                  # Delete all rules in  chain or all chains
--zero          -Z [chain [rulenum]]        # Zero counters in chain or all chains
--new           -N chain                    # Create a new user-defined chain
--delete-chain  -X [chain]                      # Delete a user-defined chain
--policy        -P chain target             # Change policy on chain to target
--rename-chain  -E old-chain new-chain      # Change chain name, (moving any references)

Options:

    --ipv4          -4                      # Nothing (line is ignored by ip6tables-restore)
    --ipv6          -6                      # Error (line is ignored by iptables-restore)
[!] --protocol      -p proto                # protocol: by number or name, eg. `tcp'
[!] --source        -s address[/mask][...]  # source specification
[!] --destination   -d address[/mask][...]  # destination specification
[!] --in-interface  -i input name[+]        # network interface name ([+] for wildcard)
    --jump          -j target               # target for rule (may load target extension)
    --goto          -g chain                # jump to chain with no return
    --match         -m match                # extended match (may load extension)
    --numeric       -n                      # numeric output of addresses and ports
[!] --out-interface -o output name[+]       # network interface name ([+] for wildcard)
    --table         -t table                # table to manipulate (default: `filter')
    --verbose       -v                      # verbose mode
    --wait          -w [seconds]            # wait for the xtables lock
    --line-numbers                          # print line numbers when listing
    --exact         -x                      # expand numbers (display exact values)
[!] --fragment      -f                      # match second or further fragments only
    --modprobe=<command>                    # try to insert modules using this command
    --set-counters PKTS BYTES               # set the counter during insert/append
[!] --version       -V                          # print package version

Make rules persistent after reboot:

apt-get install iptables-persistent

Examples:

iptables -I INPUT -p tcp --dport 9990 -j ACCEPT     # add rule in start
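
A few more common one-liners in the same spirit (chain names and rule numbers are examples):

iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT    # allow established/related traffic
iptables -P INPUT DROP                                              # set default policy: drop incoming traffic
iptables -L INPUT -n --line-numbers                                 # list INPUT rules with line numbers
iptables -D INPUT 1                                                 # delete INPUT rule number 1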

Java install

sudo add-apt-repository ppa:webupd8team/java
sudo apt-get update && sudo apt-get install oracle-jdk7-installer

# After the installation, enable the JDK

update-alternatives --display java

# Check if Ubuntu uses Java JDK 7

java -version

# If all went right, the answer should be something like this:

# java version "1.7.0_25"
# Java(TM) SE Runtime Environment (build 1.7.0_25-b15)
# Java HotSpot(TM) Server VM (build 23.3-b01, mixed mode)

# Check which compiler is used

javac -version

# It should show something like this:

# javac 1.7.0_25

# Add JAVA_HOME environment variable

# Edit /etc/environment and add JAVA_HOME=/usr/lib/jvm/java-7-oracle to the end of the file

sudo nano /etc/environment

# Append to the end of the file

JAVA_HOME=/usr/lib/jvm/java-7-oracle

# Log in and out (or reboot) for the changes to take effect.

# If you want to remove oracle JDK

sudo apt-get remove oracle-jdk7-installer

Jenkins CI

<host>:8080/systemInfo - shows system properties, env vars, plugins

CLI

<host>:8080/cli - Jenkins CLI

java -jar jenkins-cli.jar -s http://localhost:8080/ safe-restart
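
Other CLI calls follow the same pattern; a few examples (the job name is a placeholder):

java -jar jenkins-cli.jar -s http://localhost:8080/ help            # list available commands
java -jar jenkins-cli.jar -s http://localhost:8080/ list-jobs       # list jobs
java -jar jenkins-cli.jar -s http://localhost:8080/ build <job>     # trigger a build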

Docker

https://hub.docker.com/_/jenkins/

docker-compose.yml:

version: '2'
services:
  jenkins:
    image: jenkins
    container_name: jenkins
    user: root
    ports:
     - "8080:8080"
     - "50000:50000"
    volumes:
     - ./jenkins_home:/var/jenkins_home
    environment:
     - TZ=Europe/Kiev

Switch UI language

  1. Install plugin Locale Plugin
  2. Manage Jenkins > Configure System > Locale section
  3. Here you can enter the Default Language: this should be a language code or locale code like “fr” (for French), or “de_AT” (German, in Austria), or “en_US”, etc.

Role-based Authorization Strategy

  1. Install plugin Role-based Authorization Strategy
  2. As admin user Manage Jenkins > Configure Global Security > Access Control section > Authorization section
  3. Select Role-Based Strategy
  4. Create and config roles Manage Jenkins > Manage and Assign Roles > Manage Roles
  5. Assign roles with groups/users Manage Jenkins > Manage and Assign Roles > Assign Roles

Kernels and modules

The file /etc/modules configures which loadable modules are automatically loaded at boot. Here is a hypothetical sample (module names depend on your hardware):
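
# /etc/modules: kernel modules to load at boot time.
# One module name per line; lines beginning with "#" are ignored.
loop
br_netfilter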

To view a list of loadable kernel modules on a system, as well as their status, run:

lsmod
# or
cat /proc/modules

To see a list of the module files:

modprobe --list
# or
find /lib/modules/$(uname -r) -type f -name "*.ko"

To load a module, and any modules that it depends on, use modprobe:

modprobe <module>

Remove a module from the Linux Kernel:

rmmod <module>

Finding more info about any module or driver:

modinfo <module|driver>

Install the latest kernel headers:

apt-get install linux-headers-`uname -r`

libvirt + QEMU + KVM + Debian

Check cpu virtualization access:

sudo virt-host-validate

egrep -c '(vmx|svm)' /proc/cpuinfo

sudo apt-get install cpu-checker
sudo kvm-ok

Check Debian source list sources.list:

deb http://ftp.de.debian.org/debian jessie main non-free
deb-src http://ftp.de.debian.org/debian jessie main non-free

Free up unused space in a qcow2 image file:

# zero fill on guest
dd if=/dev/zero of=/some/file
rm /some/file

# shut down the VM

# on host
qemu-img convert -O qcow2 original_image.qcow2 deduplicated_image.qcow2

qemu-system-x86_64

Show supported machines types:

qemu-system-x86_64 -machine help

QEMU config file - /etc/libvirt/qemu.conf. VNC, SPICE, etc. configuration.

Networking - http://wiki.qemu.org/Documentation/Networking

qemu-img

Images and snapshots - http://azertech.net/content/kvm-qemu-qcow2-qemu-img-and-snapshots

qemu-img tool:

# Show information (size, backing files, snapshots)
qemu-img info <image.qcow2>

# Create a simple QCOW2 image file
qemu-img create -f qcow2 <image.qcow2> <max_capacity_in_gigabytes>g

# Create a QCOW2 image linked with base image
qemu-img create -f qcow2 -b <base_image> <new_image>

# List snapshots
qemu-img snapshot -l <imagename>

# Create snapshot
qemu-img snapshot -c <snapshot-name> <imagename>

# Apply snapshot
qemu-img snapshot -a <snapshot-name> <imagename>

# Delete snapshot
qemu-img snapshot -d <snapshot-name> <imagename>

# Clone image
qemu-img convert -p -f qcow2 <source_image> -O qcow2 <dest_image>
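
Two more qemu-img operations in the same workflow (the size is an example):

# Grow an image by 10G (the filesystem inside the guest must be resized separately)
qemu-img resize <image.qcow2> +10G

# Merge changes from a layered image back into its backing file
qemu-img commit <image.qcow2>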

Nested virtualization

Check whether it is enabled:

cat /sys/module/kvm_intel/parameters/nested
# Y or N

Enable the nested module on the host:

sudo sh -c "echo 'options kvm-intel nested=Y' > /etc/modprobe.d/kvm-intel.conf"

# reboot or reload the kernel module
modprobe -r kvm_intel
modprobe kvm_intel

Configuration in virt-manager

Make sure your VM is shut down. Open virt-manager and go to the VM details page for that VM. Click on the Processor page. In the Configuration section there are two options: either type host-passthrough into the Model field, or enable the Copy host CPU configuration checkbox (which fills the host-model value into the Model field). Click Apply. The difference between those two values is complicated; some details are in bug 1055002. For nested virt you'll probably want to use host-passthrough until the issues with host-model are worked out. Be aware, though, that host-passthrough is not recommended for general usage, just for nested virt purposes.

Install KVM, libvirt

sudo apt-get update
sudo apt-get install qemu-kvm libvirt-bin qemu-utils
# adduser root libvirt
# adduser lee libvirt
/etc/libvirt/         # configs
/etc/libvirt/storage/ # storage configs
/var/lib/libvirt/     # images, snapshots, etc.

Libvirt

/var/lib/libvirt/dnsmasq/virbr0.status - MAC/IP/hostname of domains

Domain.xml:

#
# Port forwarding to machine
#

<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
...
<interface type='network'> -> <interface type='user'>
...
<qemu:commandline>
   <qemu:arg value='-redir'/>
   <qemu:arg value='tcp:2222::22'/>
</qemu:commandline>

#
# Custom loader
#

<os>
    <loader readonly='yes' type='pflash'>/path_to/OVMF_CODE-pure-efi.fd</loader>
    <nvram>/path_to/fedora22_VARS.fd</nvram>
</os>

#
# Network
#

<interface type="network">
   <source network="default"/>
</interface>

Virt-viewer

# connect via VNC
virt-viewer --connect qemu+ssh://<user>@<host>/system <domain>

Virsh CMD

Snapshots:

# get list of snapshots
virsh snapshot-list <domain>

# get snapshot info
virsh snapshot-info --domain <domain> --snapshotname <snapshot>

# create snapshot
virsh snapshot-create-as <domain> --name <snap_name> --description <snap_description>

# delete snapshot
virsh snapshot-delete --domain <domain> --snapshotname <snap_name>

# revert to snapshot
virsh snapshot-revert <domain> --snapshotname <snap_name>

Domains management:

# define domain from XML (without run)
virsh define <file.xml>

# undefine domain from XML
virsh undefine <domain>
virsh undefine --nvram <domain>                 # also remove nvram
virsh undefine --remove-all-storage <domain>    # also remove storages

# show domains list
virsh list          # running domains
virsh list --all    # all domains

# generate xml from domain
virsh dumpxml <domain> > <domain>.xml

# create domain and start VM
virsh create <config.xml>

# forcefully shutdown VM
virsh destroy <domain>

# shutdown VM
virsh shutdown <domain>

# suspend/resume VM
virsh suspend <domain>
virsh resume <domain>

# add domain to autostart on host boot
virsh autostart <domain>

# get domain info
virsh dominfo <domain>

# edit domain xml
virsh edit <domain>

# get VNC port of domain
virsh vncdisplay <domain>

Pool management:

# show pools (storages) list
virsh pool-list         # active pools
virsh pool-list --all   # all pools

# create pool
virsh pool-define-as <name_of_storage> dir --target /etc/libvirt/images/

# add pool to autostart
virsh pool-autostart <name_of_storage>

# start pool
virsh pool-start <name_of_storage>

# stop and undefine pool
virsh pool-destroy <pool>
virsh pool-undefine <pool>

Cloning:

virt-clone -o <from_domain> -n <to_domain> --auto-clone --original-xml <from_domain_xml>

Network:

# show all networks
virsh net-list

# get network info
virsh net-info <network>

# print lease info for a given network
virsh net-dhcp-leases <network>

# print networks of domain (MAC, type, etc.)
virsh domiflist <domain>

Bridge configuration

sudo apt-get update
sudo apt-get dist-upgrade

# install drivers
sudo apt-get install firmware-realtek firmware-linux

sudo apt-get install bridge-utils

nano /etc/sysctl.conf
        net.ipv4.ip_forward=1

Create bridge:

sudo brctl addbr br0

Network configuration /etc/network/interfaces:

auto lo
iface lo inet loopback

auto eth1
iface eth1 inet manual

auto br0
iface br0 inet static
        address <host_ip>
        netmask <mask>
        gateway <ip>
        dns-nameservers <ip1> <ip2>
        bridge_ports eth1
        bridge_stp off
        bridge_fd 0
        bridge_maxwait 0

Restart and check network configuration:

sudo /etc/init.d/networking restart
ifconfig

Show bridges:

brctl show

Virt-manager

Install latest version:

sudo wget -q -O - http://archive.getdeb.net/getdeb-archive.key | sudo apt-key add -
sudo sh -c 'echo "deb http://archive.getdeb.net/ubuntu wily-getdeb apps" >> /etc/apt/sources.list.d/getdeb.list'
sudo apt-get update && sudo apt-get dist-upgrade
apt-get install virt-manager

# config file virt-manager
~/.gconf/apps/virt-manager/

Install latest version from GIT (prefer mode) [Ubuntu 16.04 - OK]:

git clone https://github.com/virt-manager/virt-manager.git
cd virt-manager/
sudo apt-get install -y gobject-introspection gir1.2-libosinfo-1.0 intltool-debian intltool
sudo python setup.py install -v

Linux commands

Find latest authenticated IP addresses using authorized_keys:

while read l; do if [[ -n $l && ${l###} = $l ]]; then echo $l; grep `ssh-keygen -l -f /dev/stdin <<<$l | awk '{print $2}'` /var/log/auth.log; echo '---'; fi; done < /home/ubuntu/.ssh/authorized_keys

Append line in file if not exist:

LINE='test'
FILE=/path/to/file
grep -q -x -F "$LINE" $FILE || echo "$LINE" >> $FILE

uniq:

uniq            # if lines are repeated, show only one of them
    -c          # prefix each line with the number of repeats
    -d          # show only repeated lines
    -f <num>    # ignore the first num fields (a field is a run of non-whitespace characters)
    -s <num>    # ignore the first num chars
    -u          # show only unique lines
    -i          # case-insensitive matching

case (all syntax variants in the example are valid):

case "test1" in
   "apple" | "test" ) echo "Apple pie is quite tasty." ;;
   "kiwi") echo "New Zealand is famous for kiwi."
        ;;
   "test"*) echo "test* match"
        ;;
   *) echo "matches not found"
        ;;
esac

# Output:
# test* match

expr:

expr ( 2 + 3 )      # error
expr 2+3            # 2+3
expr 2 + 3          # 5
expr '2 + 3'        # 2 + 3
expr "2 + 3"        # 2 + 3
expr 6 % 3          # 0
expr 6 / 3          # 2
expr 6 / 3.5        # error
expr 6 \* 3         # 18
expr 3 \> 3         # 0
expr 3 \>= 3        # 1
expr tt = t         # 0
expr length test    # 4
expr index test e   # 2

Get first line:

head -n 1

Sleep for 1 second between each xargs command:

ps aux | awk '{print $1}' | xargs -I % sh -c '{ echo %; sleep 1; }'

Empty pagecache, dentries and inodes:

# run as root

# pagecache
free -h && echo && sync && echo 1 > /proc/sys/vm/drop_caches && free -h

# (not recommended) pagecache, dentries and inodes
free -h && echo && sync && echo 3 > /proc/sys/vm/drop_caches && free -h

Flush swap:

# run as root
swapoff -a && swapon -a

Command ‘ps’ options:

# show resource usage of process
ps -p <pid> -o %cpu,%mem,cmd,args   # append '=' to a column (e.g. %cpu=) to omit its header

Show count of processes of some user:

ps aux  | awk '{print $1}' | grep <user> | wc -l

# show processes count of each user
ps -eo user=|sort|uniq -c

Show a user's group IDs (GID):

id <username>

Show free/usage space on partitions:

df -h # readable
   -m
   -k

Show count of chars, lines, bytes, etc.:

wc

Show size:

du <dir/file>
    -h # readable
    -s # size of dir

Show size and sort with hidden files/directories and total size:

du -sch /.[!.]* /* |sort -h

Show last lines of file in realtime:

tail -f <file>

Show listening ports:

netstat -antup

Change group:

chgrp [-R] <GID/gname> <file/dir>

Compress image size:

sudo apt-get install jpegoptim

# recursive compress with 50% quality
find . -type f -name "*.jpg" -exec jpegoptim -m50 {} \;

ARP:

arp     # the ARP table contains IP and MAC addresses of devices located in the same network
    -n  # only IP (without domains)
    -a  # IP and domains

# or read it from the file
cat /proc/net/arp

APT

apt-cache policy <package>  # show available versions of a package

Disks:

fdisk -l    # show info about disks and partitions
blkid       # show small list of disks, UUID, TYPE
lsblk       # show disk type, size, name, mountpoint

# show UUID of all disks
ls -al /dev/disk/by-uuid/

# show disk info
sudo hdparm -I /dev/sda
                        | grep -i trim  # TRIM for SSD

# mount all from fstab (run it after fstab was changed)
sudo mount -a

# create a filesystem
sudo mkfs -t ext4 <device_name>
# create 100 files: file1, file2, ..., file100
touch file{1..100}

NMAP:

nmap <ip>       # scan IP
            -sT # TCP
            -sU # UDP
            -O  # OS detection
            -v  # show scanning progress
# show status of all services
service --status-all

Linux users and groups:

# add existing user to existing group
sudo usermod -a -G groupName userName

adduser <name>  # create new user
        -r      # create system user
        -g      # group

useradd
        -u          # UID
        -r          # create system user
        -g <group>  # name or ID of group
        -m          # create home dir
        -d <path>   # path to home dir
        -s <shell>  # register shell for user
        -c          # comments

userdel <name>  # delete user
        -r      # also delete home dir

# create group
groupadd <name>
        -r      # create system group
        -g      # group ID

Shutdown commands:

shutdown [options] now|<time> # power management
        -r  # reboot
        -h  # power off after shutdown
        -c  # cancelling scheduled shutdown

# poweroff now
shutdown -h now
halt -p
# send message to another user
write <user> [tty]

# show entered users
who

last       # login history
    reboot # reboot history
    <user> # user login history

lastlog    # show all users with their last login time

Show system information:

inxi -F             # show information about system
lshw | less         # show hardware information, including sector size
lscpu               # show CPU information

# show hardware information, also type (virtual or bare metal)
sudo dmidecode -t system

CPU benchmark:

apt install sysbench
sysbench --test=cpu --cpu-max-prime=20000 run

Decrease *.vdi size:

# 1. On guest - zero fill free space
sudo dd if=/dev/zero of=zero bs=512k    # bs=<block size>
sudo rm zero

# 2. On host
vboxmanage modifyhd /path/to/thedisk.vdi --compact
# run command as another user
sudo -u lee <command>

# for adding PATH env
sudo nano /etc/environment

# restart networking
sudo ifconfig wlan0 down && sudo ifconfig wlan0 up

# get and show page
wget -qO- 127.0.0.1

grep:

grep -iR -E '(error_report|ini_set)' ./*.php    # find in files
        -R                                      # recursive (include inherited dirs)
        -l                                      # show only file names
        -i                                      # case-insensitive
        --exclude="*\.svn*"                     # exclude
        --exclude-dir="dir" | {dir1,dir2}       # exclude dir
        -A <num>                                # show also <num> lines after
        -B <num>                                # show also <num> lines before

Word replacement:

grep -Rl -E "(SOURCE)" ./* | xargs sed -i "s/SOURCE/DEST/g"

sed -i "s@SOURCE@DEST@g" /path/to/file

Remove the line which contains string:

# print the output to standard out
sed '/pattern to match/d' <file>

# directly modify the file (and create a backup). Will create <file>.bak.
sed -i.bak '/pattern to match/d' <file>

Replace whole line started by:

sed "s/start_of_line.*/new_line_content/g"

Find

find ./ -name 'path*'
    -type (f=file, d=dir, l=link, p=pipe, s=socket)

find ./ -mtime 0 -type f -not -path "./" -exec cp --parents "{}" $TODAY_DIR/ \;

# chmod only files
find . -type f -exec chmod 644 {} \;

# faster chmod only files
find . -type f -print0 | xargs -0 chmod 644

# make iconv replace the input file with the converted output in one line
find . -type f -name '*.php' -exec sh -c 'iconv -f WINDOWS-1251 -t UTF-8 -o {}UTF_TMP {} && mv {}UTF_TMP {}' \;

# clear file
> filename

Delete old files

ls -t | sed -e '1,10d' | xargs -d '\n' rm -f
    ls -t               # list all files in the current directory in decreasing order of modification time
    sed -e '1,10d'      # delete the first 10 lines from the list, i.e. spare the 10 newest files
    xargs -d '\n' rm    # collect each input line (without the terminating newline) and pass it as an argument to rm
    -f                  # rm flag: ignore nonexistent files, never prompt
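
An alternative that deletes by age rather than by count (the path and the 30-day cutoff are assumptions):

# delete regular files older than 30 days
find /path/to/dir -type f -mtime +30 -delete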

Create backup tar(tar.gz) archive:

tar -cvpzf backup.tar.gz [dir1] [dir2]
    -c  # create
    -v  # verbose mode
    -p  # preserve file and directory permissions
    -z  # compress (tar.gz)
    -f <filename>
    --exclude=<file_or_dir>

Extract archive at current dir:

tar -xf <archive.tar.gz>

PASSWORD GENERATE:

apg
    -t  # print pronunciation for generated pronounceable password
    -q  # quiet mode (do not print warnings)
    -n  # count of generated passwords
    -m  # minimum pass length
    -x  # maximum pass length
    -a  # choose algorithm:
        1 - random password generation according to password modes
        0 - pronounceable password generation

# Example:
apg -q -m 8 -x 8 -a 1 -n 5

SECURITY

Find empty passwords:

awk -F: '($2 == "") {print}' /etc/shadow

Find users with uid == 0:

awk -F: '($3 == "0") {print}' /etc/passwd

Port listening check:

netstat -tulpn

Add service to autorun

  • Ubuntu 16.04:

    sudo /bin/systemctl daemon-reload
    sudo /bin/systemctl enable <SERVICE_NAME>.service
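    
    # A hedged follow-up: verify the unit is now enabled (the service name is a placeholder)
    systemctl is-enabled <SERVICE_NAME>.service   # should print "enabled"
    systemctl status <SERVICE_NAME>.service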
    

Linux services

D-Bus - an inter-process communication system that lets applications in the operating system talk to each other.

bash - a Unix command shell written for the GNU project.

init - (short for initialization) the init system on UNIX and Unix-like systems that starts all other processes. It runs as a daemon and normally has PID 1. According to the Filesystem Hierarchy Standard it is usually located at /sbin/init.

systemd - a system and service manager for Linux, compatible with SysV and LSB init scripts. It provides aggressive parallelization, uses sockets and D-Bus for on-demand service activation, starts daemons in the background, tracks processes using Linux cgroups, supports snapshots and restoring of system state, maintains mount and automount points, and implements transactional dependency-based service control logic. It can be used as a replacement for sysvinit.

systemd-journald - handles logging (the modern logging daemon).

systemd-udevd - the device manager, successor of devfs, hotplug and HAL. Its main task is managing device files in /dev and handling all user-space actions when external devices are added or removed, including firmware loading.

atd - a daemon that runs jobs at a specified time; can be disabled if you don't use it.

cron - the classic job scheduler daemon on UNIX-like operating systems, used for running jobs periodically at specified times.

acpid - the service for ACPI, the system configuration and power management interface. ACPI is an open industry standard that defines system management actions: plug-and-play device discovery and power management, including actions at system startup and shutdown and when entering low-power states.

Linux Mint 18 VNC Server

# Remove the default Vino server
sudo apt-get -y remove vino

# Install x11vnc
sudo apt-get -y install x11vnc

# Create the directory for the password file
sudo mkdir /etc/x11vnc

# Create the encrypted password file. You will be asked to enter and verify the password. Then press Y to save the password file.
sudo x11vnc --storepasswd /etc/x11vnc/vncpwd

# Create the systemd service file for the x11vnc service
sudo xed /lib/systemd/system/x11vnc.service

    [Unit]
    Description=Start x11vnc at startup.
    After=multi-user.target

    [Service]
    Type=simple
    ExecStart=/usr/bin/x11vnc -auth guess -forever -noxdamage -repeat -rfbauth /etc/x11vnc/vncpwd -rfbport 5900 -shared

    [Install]
    WantedBy=multi-user.target

# Reload the services
sudo systemctl daemon-reload

# Enable the x11vnc service at boot time
sudo systemctl enable x11vnc.service

# Start the service
sudo systemctl start x11vnc.service

Makefile

https://www.gnu.org/software/make/manual/make.html
http://opensourceforu.com/2012/06/gnu-make-in-detail-for-beginners/

Syntax

The general syntax of a Makefile rule is as follows:

target: dependency1 dependency2 ...
[TAB] action1
[TAB] action2
    ...

# Example of Makefile. Run: make default
MESSAGE ?= Hi all

start:
        @echo ${MESSAGE}

default: start

@ (at) suppresses the standard print-action-to-standard-output behaviour of make, for the action/command that is prefixed with @. For example, to echo a custom message to standard output, we want only the output of the echo command, and don’t want to print the echo command line itself. @echo Message will print “Message” without the echo command line being printed.

By convention, all variable names used in a Makefile are in upper-case. A common variable assignment in a Makefile is CC = gcc, which can then be used later on as ${CC} or $(CC). Makefiles use # as the comment-start marker, just like in shell scripts.

?= - conditional assignment statements assign the given value to the variable only if the variable does not yet have a value.
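
A minimal sketch combining these pieces (the file names and flags are assumptions; recipe lines start with a TAB):

CC ?= gcc
CFLAGS = -Wall

# build the "hello" binary from hello.c
hello: hello.c
	$(CC) $(CFLAGS) -o hello hello.c

# remove build artifacts
clean:
	rm -f hello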

md

mdadm --assemble --scan

mdadm --run /dev/md0
mdadm --run /dev/md1
mdadm --run /dev/md2
mdadm --run /dev/md....

MongoDB

Replication:

# run minimum 3 mongo nodes
# execute next command on the one node
rs.initiate({_id: "rs1", members: [{_id: 0, host: "mongo1:27017" }, {_id: 1, host: "mongo2:27017"}, {_id: 2, host: "mongo3:27017"}]});
# or
mongo --quiet --eval "rs.initiate({_id: 'rs1', members: [{_id: 0, host: 'mongo1:27017' }, {_id: 1, host: 'mongo2:27017'}, {_id: 2, host: 'mongo3:27017'}]});"

# check conf
rs.conf()

# check status
rs.status()

# allow queries from secondary (applied for current session)
rs.slaveOk()

Insert values:

db.<collection>.insertMany([
   { item: "journal", qty: 25, status: "A", size: { h: 14, w: 21, uom: "cm" }, tags: [ "blank", "red" ] },
   { item: "notebook", qty: 50, status: "A", size: { h: 8.5, w: 11, uom: "in" }, tags: [ "red", "blank" ] }
]);

Execute JS code:

mongo [db_name] --quiet --eval "<JS_code>"

Print a list of all databases on the server:

show dbs
# or
db.getMongo().getDBNames()
# or
printjson(db.adminCommand('listDatabases'))

Switch current database to <db>. The mongo shell variable db is set to the current database:

use <db>    # The command will create a new database if it doesn't exist

Backup/restore MongoDB data:

mongodump --host=mongodb:27017 --db <dbname> --archive | mongorestore --host=rs1/mongo1:27017,mongo2:27017,mongo3:27017 --archive

# docker

# dump
docker exec <container> sh -c 'mongodump --archive' > mongodb.dump

# restore
docker exec -i <container> sh -c 'mongorestore --archive' < mongodb.dump

Print a list of all collections for current database:

show collections
# or
db.getCollectionNames()
# or
show tables

Print documents of collection:

db.<collection>.find().pretty()
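
Filtering works the same way; a hypothetical query matching the documents inserted above:

db.<collection>.find({ status: "A", qty: { $gt: 30 } }).pretty()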

Get all documents of all collections of current DB:

var c = db.getCollectionNames(); for (var i = 0; i < c.length; i++) { print('[' + c[i] + ']'); db.getCollection(c[i]).find().forEach(printjson); }

Get DBs, collections and docs count:

db.getMongo().getDBNames().forEach((d) => {
  print('[' + d + ']');
  db = db.getSiblingDB(d);
  db.getCollectionNames().forEach((c) => {
    print('\t' + c + ': ' + db.getCollection(c).count());
  });
});

# single line
db.getMongo().getDBNames().forEach((d)=>{print('['+d+']');db=db.getSiblingDB(d);db.getCollectionNames().forEach((c)=>{print('\t'+c+': '+db.getCollection(c).count())})})

MySQL / SQLite

Get the size of all indices:

SELECT database_name, table_name, index_name,
round(stat_value*@@innodb_page_size/1024/1024, 2) size_in_mb
FROM mysql.innodb_index_stats
WHERE stat_name = 'size' AND index_name != 'PRIMARY'
ORDER BY 4 DESC;
USE <dbname>;

# select rows older than 6 months, show only 5 entries ('created' is a timestamp)
SELECT * FROM `logs` WHERE `created` < DATE_SUB(NOW(), INTERVAL 6 MONTH) LIMIT 5;

# delete rows older than 6 months ('created' is timestamp)
DELETE FROM `logs` WHERE `created` < DATE_SUB(NOW(), INTERVAL 6 MONTH);

# get DATE: now - 6 months
SELECT DATE_SUB(NOW(), INTERVAL 6 MONTH);

# check
SHOW VARIABLES LIKE '%innodb_file_per_table%';

# if ON you can reclaim free space
OPTIMIZE TABLE logs;

CREATE TABLE MyGuests (
id INT(6) UNSIGNED AUTO_INCREMENT PRIMARY KEY,
firstname VARCHAR(30) NOT NULL,
lastname VARCHAR(30) NOT NULL,
email VARCHAR(50),
reg_date TIMESTAMP
);

CREATE TABLE `users` (
`id` int(7) NOT NULL auto_increment,
`full_name` varchar(32) collate utf8_unicode_ci NOT NULL default '',
`email` varchar(32) collate utf8_unicode_ci NOT NULL default '',
`type` varchar(12) collate utf8_unicode_ci NOT NULL default '',
PRIMARY KEY  (`id`)
) ENGINE=MyISAM  DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;

Check tables DB engine:

SELECT TABLE_NAME, ENGINE FROM information_schema.TABLES where TABLE_SCHEMA = '<database_name>';

Backup/restore mysql dump:

# create dump
mysqldump -u USER -pPASSWORD DATABASE > dump.sql

# create dump of several tables (not all database)
mysqldump -u USER -pPASSWORD DATABASE TABLE1 TABLE2 TABLE3 > dump_tables.sql

# create dump and archive it on fly
mysqldump -u USER -pPASSWORD DATABASE | gzip > dump.sql.gz

# with timestamp
mysqldump -u USER -pPASSWORD DATABASE | gzip > `date +dump.sql.%Y%m%d.%H%M%S.gz`


# IMPORTANT: Assuming you are using InnoDB tables, you should add '--single-transaction' option
mysqldump --skip-lock-tables --single-transaction some_database > dump.sql

# restore from dump
mysql -u USER -p DATABASE < dump.sql

# restore from gz dump
zcat dump.sql.gz | mysql -h HOST -u USER -p DATABASE

New ROOT password (tested on - mysql Ver 14.14 Distrib 5.5.52, for linux2.6 (x86_64) using readline 5.1):

mysqladmin -p -u root password
mysqladmin -u root password NEWPASSWORD
# show slow queries
select * from mysql.slow_log\G

# show databases
mysqlshow -p

# connect to remote DB
mysql -u <user>
          -p                # connect with pass
          -h <ip>

# create DB
CREATE DATABASE users;

CONNECT mysqldb;    # connect to db
USE <dbname>;               # change db

SHOW DATABASES;     # show db

SHOW TABLES FROM <db>;

SHOW COLUMNS FROM <table>;

INSERT users VALUES ("1", "vasya", "email@mail.ru", "admin");

SELECT * FROM <table>;

SELECT host, user, password from mysql.user;        # mysql users

Create user and add privileges:

CREATE DATABASE db_name;

CREATE USER 'newuser'@'localhost' IDENTIFIED BY 'password';
GRANT ALL PRIVILEGES ON db_name.* TO 'newuser'@'localhost';                 # db.table
FLUSH PRIVILEGES;


# Allow only select requests. % - means from everywhere
GRANT SELECT ON *.* TO 'username'@'%' IDENTIFIED BY 'password';
FLUSH PRIVILEGES;

# Allow seeing everything but changing nothing (including system info)
GRANT SELECT, SHOW VIEW, PROCESS, REPLICATION CLIENT ON *.* TO 'username'@'%' IDENTIFIED BY 'password';
FLUSH PRIVILEGES;
# "Reading" the definition of views is the SHOW VIEW privilege.
# "Reading" the list of currently-executing queries by other users is the PROCESS privilege.
# "Reading" the current replication state is the REPLICATION CLIENT privilege.

SHOW GRANTS FOR 'username'@'%';

Delete mysql user:

DROP USER 'demo'@'localhost';

Update table cell:

UPDATE <table> SET <key>='<value>' WHERE <key>='<value>';

Clear mysql command history:

> ~/.mysql_history

Show size of databases:

SELECT table_schema "database_name", sum( data_length + index_length )/1024/1024 "Data Base Size in MB" FROM information_schema.TABLES GROUP BY table_schema;

Show size of all tables:

SELECT  table_schema, table_name, (data_length + index_length + data_free)/1024/1024 AS total_mb, (data_length)/1024/1024 AS data_mb, (index_length)/1024/1024 AS index_mb, (data_free)/1024/1024 AS free_mb, CURDATE() AS today FROM information_schema.tables ORDER BY 3 ASC;

Repair crashed table:

# ERROR 144 - Table 'table_name' is marked as crashed and last (automatic?) repair failed
repair table <table_name>;

Rename table:

RENAME TABLE tbl_name TO new_tbl_name
    [, tbl_name2 TO new_tbl_name2] ...

SQLite

Show all tables:

SELECT name FROM sqlite_master WHERE type='table'

Show column list of the table:

PRAGMA table_info(table_name);
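
From the sqlite3 shell, the same information is available via dot-commands:

.tables                 # list tables
.schema <table_name>    # show the CREATE statement for a table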

nano

For find+replace in nano, press the following combination:

[Ctrl]+[\]
Enter your search term      [return]
Enter your replacement term [return]
[A]                           # to replace all instances

Show the current line number - [Ctrl]+[C] (one-off, without dynamic updating), or:

nano -c <file>

Thus to display line numbers always when using nano:

echo 'set const' >> ~/.nanorc

NGINX

# check the config for syntax errors
sudo nginx -t

# reload nginx
sudo service nginx reload

NGINX config

server {
    # size of upload files
    client_max_body_size 32m;

    # 301 redirect
    return 301 https://example.com$request_uri;
    return 301 https://$host$request_uri;
}

NGINX SSL + auth_basic

Add user/password for base auth:

sudo -v
echo "user:`openssl passwd -apr1`" | sudo tee -a /etc/nginx/htpasswd.users

Change the Nginx default server block /etc/nginx/sites-available/default:

server {
        listen 80 default_server;
        return 301 https://$host$request_uri;
}

server {
        listen 443 ssl default_server;

        # Replace with your hostname
        server_name elk.company.com;

        ssl on;

        # Certs have to exist
        ssl_certificate /path/to/keys/cert.crt;
        ssl_certificate_key /path/to/keys/key.key;

        auth_basic "Restricted Access";
        auth_basic_user_file /etc/nginx/htpasswd.users;

        location / {
                proxy_pass http://localhost:5601;
                proxy_http_version 1.1;
                proxy_set_header Upgrade $http_upgrade;
                proxy_set_header Connection 'upgrade';
                proxy_set_header Host $host;
                proxy_cache_bypass $http_upgrade;
        }

}

NGINX log off

Disable image access logging in access_log:

location ~* \.(gif|jpg|png)$ {
        access_log off;
}

Change the image access log location:

location ~* \.(gif|jpg|png)$ {
        access_log /path/to/access.log;
}

Disable access logging for a directory and its subdirectories:

location ^~ /images/ {
        access_log off;
}

NGINX template site

server {
        listen   80 default_server;
        server_name  192.168.35.211;
        root   /var/www/;
        access_log  /var/log/nginx/work.access.log;

        location / {
                index  index.php index.html index.htm;
                expires 1m;
        }

        error_page  404  /404.html;
        location = /404.html {
            root   /var/www/nginx-default;
        }

        # Redirect server error pages to the static page /50x.html
        #
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
                root   /var/www/nginx-default;
        }

        location ~ \.php$ {
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
                # With php5-cgi alone:
                #fastcgi_pass 127.0.0.1:9000;
                #With php5-fpm:
                fastcgi_pass unix:/var/run/php5-fpm.sock;
                fastcgi_index index.php;
                include fastcgi_params;
                fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
        }

        # images access loging
        location ^~ /images/ {
                access_log /var/log/nginx/work.access.images.log;
        }
}

Official

server {
    # server name
    server_name mysite.com;

    # logs
    access_log /var/log/nginx/mysite.access.log;
    error_log  /var/log/nginx/mysite.error.log;

    # document root
    root /home/www-data/mysite;

    location ~ \.php$ {
        try_files $uri =404;
        include fastcgi_params;
        fastcgi_pass  unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;

        fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
    }

    # index order
    location /
    {
        index  index.php index.html index.htm;
    }

    # custom 404 page
    location /errors {
         alias /home/www-data/mysite/errors;
    }
    error_page 404 /errors/404.html;
}

NVME

http://www.nvmexpress.org/wp-content/uploads/NVM_Express_1_2b_Gold_20160603.pdf

To get information about the NVMe devices installed:

nvme list

To get SMART information:

nvme smart-log /dev/nvme0

To get additional SMART information (not all devices support it):

nvme smart-log-add /dev/nvme0

OVH.COM

Kernels

Kernel images by OVH ftp://ftp.ovh.net/made-in-ovh/bzImage/

PostgreSQL

# Xenial
sudo apt-get install postgresql

su - postgres

# clear psql history
> ~/.psql_history

psql

psql -U <user> -W <db>

psql -h localhost test_database test_user

\d  # tables and other relations

\dt # only tables

\q  # exit

Create a dump (or a compressed dump):

pg_dump -h 127.0.0.1 -p 5432 -U username -F p database > pg_dump.sql
pg_dump -h 127.0.0.1 -p 5432 -U username -F p database | gzip > pg_dump.sql.gz
pg_dump -h 127.0.0.1 -p 5432 -U username -F t database | gzip > pg_dump.sql.tar.gz

Load a dump into a database (the database must be empty):

psql -U username -W -d database < pg_dump.sql

CREATE DATABASE dbname OWNER rolename;

CREATE ROLE username LOGIN PASSWORD 'secret' SUPERUSER;                 # superuser
CREATE ROLE username LOGIN PASSWORD 'secret' NOSUPERUSER INHERIT;       # not a superuser, inherits privileges from its groups
CREATE ROLE username LOGIN PASSWORD 'secret' NOSUPERUSER NOCREATEROLE;  # not a superuser, cannot create roles

CREATE USER test_user WITH password 'qwerty';
GRANT ALL privileges ON DATABASE test_database TO test_user;

DROP USER drupal;   # delete user

DROP DATABASE [ IF EXISTS ] name

SELECT * FROM <tbl>;

# Create db and user #

CREATE USER user WITH password 'PASSWORD';
CREATE DATABASE dbname OWNER user;
GRANT ALL privileges ON DATABASE dbname TO user;

Private IP range

IP start        IP end              Count
192.168.0.0     192.168.255.255     65,536
172.16.0.0      172.31.255.255      1,048,576
10.0.0.0        10.255.255.255      16,777,216

Proxmox

# Installation guide for Debian
# https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Jessie

sudo nano /etc/hosts
127.0.0.1 localhost.localdomain localhost
<local_ip> prox4m1.proxmox.com prox4m1 pvelocalhost

getent hosts $(hostname)
getent hosts 192.168.10.10

echo "deb http://download.proxmox.com/debian jessie pve-no-subscription" > /etc/apt/sources.list.d/pve-install-repo.list
wget -O- "http://download.proxmox.com/debian/key.asc" | apt-key add -

apt-get update && apt-get dist-upgrade
apt-get install proxmox-ve ntp ssh postfix ksm-control-daemon open-iscsi systemd-sysv
update-grub
reboot

apt-get remove linux-image-amd64 linux-image-3.16.0-4-amd64 linux-base
update-grub

# Connect to the admin web interface (https://youripaddress:8006),
# create a bridge called vmbr0, and add your first network interface to it.

# VM configs
# /etc/pve/qemu-server/VMID.conf
# https://pve.proxmox.com/wiki/Manual:_vm.conf

# Remove the subscription message
# /usr/share/pve-manager/ext6/pvemanagerlib.js
# if (data.status !== 'Active') {/**/}

# cluster not ready - no quorum? (500)
pvecm expected 1

Python SimpleHTTPServer

sudo python -m SimpleHTTPServer 80

For Python 3.x, use:

sudo python3 -m http.server 80

Ports below 1024 require root privileges.

Running this command as root is not a good idea, though: it opens up all kinds of security vulnerabilities.
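
To avoid running as root altogether, bind to an unprivileged port instead (8000 here is arbitrary):

python3 -m http.server 8000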

regexp

# .htaccess RegExp

^       # start of string

$       # end of string

.       # any single character

|       # OR

?       # the preceding character/group may be present or absent.
        # E.g. "jpe?g" matches both "jpg" and "jpeg". A group example: "super-(puper-)?site".

*       # the preceding character/group may be absent or repeated any number of times in a row.
        # E.g. "jpe*g" matches "jpg", "jpeg" and "jpeeeeeeg".

+       # works like *, except the preceding character must be present at least once.
        # E.g. "jpe+g" matches "jpeg" and "jpeeeeg", but not "jpg".

[]      # a set of allowed characters.
        # E.g. "[abc]" is equivalent to "a|b|c", but the bracket form is usually faster.
        # Ranges are allowed inside the brackets: "[0-9]" is equivalent to "[0123456789]".
        # If the characters inside the brackets start with ^, it means any character except
        # the listed ones: "[^0-9]+" means a string of any characters except digits.

\       # put before special characters when they are needed literally.
        # E.g. "jpe\+g" matches only the string "jpe+g".

{3,9}   # repetition range: here from 3 to 9 of the preceding characters are allowed (3 <= x <= 9).
        # Mostly used in the rewrite module; see the rewrite regexp syntax and flag values.

Example: check the HTTP request line that the browser sent to the server against a pattern:

RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /index\.php\ HTTP/

Suppose we requested the index page, so %{THE_REQUEST} = "GET /index.php HTTP/1.1". The pattern above then reads as "start_of_received_data GET space /index.php space HTTP/". The first part may be "GET", "POST" or some other method name (3 to 9 uppercase letters), depending on what and how we request index.php on the server.
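
For context, a sketch of a common .htaccess use of this condition: redirecting direct requests for /index.php back to the site root (example.com is a placeholder):

RewriteEngine On
RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /index\.php\ HTTP/
RewriteRule ^index\.php$ http://example.com/ [R=301,L]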

Resume stopped process

Resume a process stopped with the Ctrl+Z combination.

The general job control commands in Linux are:

jobs                        # list the current jobs
fg                          # resume the job that's next in the queue
fg %[number]                # resume job [number]
bg                          # Push the next job in the queue into the background
bg %[number]                # Push the job [number] into the background
kill %[number]              # Kill the job numbered [number]
kill -[signal] %[number]    # Send the signal [signal] to job number [number]
disown %[number]            # disown the job (the terminal no longer owns it), so it keeps running after the terminal is closed
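
A typical flow as a sketch: suspend a long-running command, continue it in the background and detach it from the terminal:

sleep 1000        # press Ctrl+Z to suspend it
bg %1             # continue it in the background
disown %1         # keep it alive after the terminal closes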

rsync

# Send a big file to a server
rsync -vr --progress --inplace <src> <user>@<ip>:</path/to/dest>

# Get a big file/dir from a server
rsync -vr --progress --inplace <user>@<ip>:</path/to/dest> <local/destination>

    -r                        # recursive
    -v                        # verbose
    -z                        # compress
    --progress                # show progress during transfer
    -n                        # dry run (list files but do not sync)
    --exclude <relative/path> # exclude a file or dir
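
Combining the options above, a sketch of a safe sync: dry run first, then the real transfer (paths and host are placeholders):

rsync -vrzn --progress --exclude 'cache/' ./project/ user@host:/srv/project/
rsync -vrz --progress --exclude 'cache/' ./project/ user@host:/srv/project/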

rsyslog

Modern syslog configuration - https://selivan.github.io/2017/02/07/rsyslog-log-forward-save-filename-handle-multi-line-failover.html

Rules path:

/etc/rsyslog.d/*.conf       # <num>-<name>.conf; processed in order, use num < 50 to run before 50-default.conf

Config path:

/etc/rsyslog.conf

To receive logs over UDP, in /etc/rsyslog.conf:

# uncomment the following lines

# provides UDP syslog reception
module(load="imudp")
input(type="imudp" port="514")
sudo service rsyslog restart

Configuration

File create mode (rights):

$umask 0000
*.* /var/log/file-with-0644-default

$FileCreateMode 0600
*.* /var/log/file-with-0600

$FileCreateMode 0644
*.* /var/log/file-with-0644

Remote_syslog

Read txt files and send logs to syslog remote server - https://github.com/papertrail/remote_syslog2

Rotate received logs to files

/etc/rsyslog.d/*.conf:

    :syslogtag, contains, "<tag>" /var/log/<log_name>.log   # tag contains
    :syslogtag, isequal, "<tag>" /var/log/<log_name>.log    # tag equal
    :msg, contains, "<msg>" /var/log/<log_name>.log         # msg contains
    :msg, isequal, "<msg>" /var/log/<log_name>.log          # msg equal

    #########
    # example
    $template myFormat,"%timegenerated:::date-hour%:%timegenerated:::date-minute% %fromhost-ip% %msg%\n"
    $template logName,"/var/log/my_logs/%syslogtag%.log"

    :hostname, contains, "msgLoud" ?logName; myFormat
    & stop # don't also save these logs to /var/log/syslog
    #########

    # time values
    %timegenerated:::date-year%
    %timegenerated:::date-month%
    %timegenerated:::date-day%
    %timegenerated:::date-hour%
    %timegenerated:::date-minute%
    %timegenerated:::date-second%

# properties
:hostname            - host name/IP from the message
:fromhost            - name of the host the message was received from
:fromhost-ip         - address of the host the message was received from (127.0.0.1 for local messages)
:syslogtag           - name and PID of the process that produced the message (" rsyslogd[12125]:"), extracted from the message
:programname         - name of the process that produced the message (extracted from the message)
:pri                 - facility and priority, as a number
:pri-text            - decoded facility and priority (facility.priority, e.g. syslog.emerg)
:syslogfacility      - facility only, as a number
:syslogfacility-text - decoded facility only ("local0")
:syslogseverity      - severity only, as a number
:syslogseverity-text - decoded severity only ("debug")
:timegenerated       - time of reception (high resolution)
:timereported        - time extracted from the message
:inputname           - name of the input module
$hour, $minute       - current time
$myhostname          - host name of the processing host
sudo service rsyslog restart

RUBY

Basics:

load "second.rb"    # Load code from another file

# One line comment

=begin
Multiline comment
=end

print "Enter"           # print on the stdout
first_num = gets.to_i   # get from stdin

print "second"
second_num = gets.to_i

# + - / % *
puts first_num.to_s + " + " + second_num.to_s + " = " + (first_num + second_num).to_s

# show class of object
puts 1.class
puts 1.2.class
puts "str".class

A_CONSTANT = 23 # constant

num_1 = 1.000
num_2 = 0.999

puts (num_1 - num_2).to_s

Conditions, comparisons:

puts "enter your age:"
age = gets.to_i

# Comparison: == != < > <= >=
# Logical: && || ! and or not

if (age >= 5) && (age <= 7)
  puts "hm"
elsif (age >= 7) && (age <=13)
  puts "ahh"
else
  puts "hah"
end

age = 30
puts (age >= 50) ? "Old" : "Young"

print "Enter:"  # no new line after stdout
res = gets.chomp    # chomp - remove new lines from the string

case res
when "1", "2"
  puts "small"
  exit  # exit from the program
when "3", "4"
  puts "medium"
  exit
else
  puts "big"
end

age = 12
puts "You're Young" if age < 30

age = 3
unless age > 4  # if not
  puts "hm"
else
  puts "hah"
end
# hm

puts (true and false).to_s

Files:

write_handler = File.new("test.out", "w")
write_handler.puts("test")
write_handler.close
data_from_file = File.read("test.out")
puts "data: " + data_from_file

Loops:

a = 1
until a >= 10
  a += 1
  next unless (a % 2) == 0
  puts a
end

y = 1
while y <= 10
  y += 1
  next unless (y % 2) == 0
  puts y
end

x = 1
loop do
  x += 1
  next unless (x % 2) == 0  # next is like continue in other languages
  puts x
  break if x >= 10  # end of loop
end

Each, For:

(0..8).each do |i|  # (0..8) is a range
  puts "- #{i}"
end

g = ["a","b","c"]
g.each do |f|
  puts "Ehhuuu: #{f}"
end

numbers = [1,2,3,4,5]
for number in numbers
  # here are two equal methods:
  print number.to_s + ", "
  print "#{number}, "
end

Functions:

def add_nums(a, b)
  return a.to_i + b.to_i
end

puts add_nums(3,4)

Exceptions:

#
age = 12

def check(age)
  raise ArgumentError, "Enter positive num" unless age > 0
end

begin
  check(-1)
rescue
  puts "Bad!"
end

#
print "Enter num: "
a = gets.to_i
print "Enter num2: "
b = gets.to_i

begin
  res = a / b
rescue
  puts "Devide by zero!"
  exit
end

puts "#{a} / #{b} = #{res}"

Multiline string:

multiline = <<EOF
line1

    line2
#{4 + 5} \n\n
EOF

puts multiline

Strings:

test = "   Yeallow green blue"
puts test.include?("green") # stdout: true
puts test.size              # stdout: 21 (number of characters, including the leading spaces)
puts "count of 'e': #{test.count("e").to_s}"
puts "except the 'e': #{test.count("^e").to_s}"
puts test.start_with?("y")  # stdout: false ("y" != "Y", and the string starts with spaces)
puts "index of blue: #{test.index("blue").to_s}"    # index of first string

puts test.equal?test    # compare two OBJECTS (true)
puts "a".equal?"a"      # false (different OBJECTS)

puts "upcase: " + test.upcase
puts "downcase: " + test.downcase
puts "swapcase: " + test.swapcase   # yEALLOW GREEN BLUE

# spaces stripping
puts "lstrip: " + test.lstrip
puts "rstrip: " + test.rstrip
puts "strip: " + test.strip

puts "rjust: " + test.rjust(50, '.')
puts "ljust: " + test.ljust(50, '.')
puts "center: " + test.center(50, '.')  # .............   Yeallow green blue   .............

puts "chop: " + test.chop   # remove last char
puts "chomp: " + test.chomp('ue')    # Yeallow green bl (remove from end)

puts "delete: " + test.delete('e')  # delete chars
puts test.split(/ /)    # split to array by spaces

Arrays:

puts Array.new              # prints nothing (empty array)
puts Array.new(5)           # prints five empty lines (five nils)
puts Array.new(5, "empty")  # prints "empty" five times

arr =  [1, "two", 3, 5.5]

puts arr[2]                 # 3
puts arr[2,2].join(", ")    # 3, 5.5
puts arr.values_at(0,1,3).join(", ")    # 1, two, 5.5


# Whitespace Arrays
# The %w syntax is a Ruby shortcut for creating an array without requiring quotes and commas around the elements.

if %w(debian ubuntu).include?('ubuntu')
  # do ubuntu things with the Ruby array %w() shortcut
end

Hashes:

h = {"key" => 1.2,
     "key2" => "hi"}

puts h["key2"]

h.each do |key, value|
  puts key.to_s + " : " + value.to_s
end

Symbols:

# Simpler than a String: the same symbol always refers to the same object
:derek
puts :derek.to_s

Regular Expressions

Use Perl-style regular expressions:

'I believe'  =~ /I/                       # => 0 (matches at the first character)
'I believe'  =~ /lie/                     # => 4 (matches at the 5th character)
'I am human' =~ /bacon/                   # => nil (no match - bacon comes from pigs)
'I am human' !~ /bacon/                   # => true (correct, no bacon here)
/give me a ([0-9]+)/ =~ 'give me a 7'     # => 0 (matched)

Modules

human.rb:

module Human
  attr_accessor :name, :height, :weight

  def run
    puts self.name + " runs"
  end
end

smart.rb:

module Smart
  def act_smart
    return "E = mc2"
  end
end

base.rb:

require_relative "human"
require_relative "smart"

module Animal2
  def make_sound
    puts "Grrr"
  end
end

class Dog
  include Animal2
end

rover = Dog.new
rover.make_sound

Objects and Classes

Class 1:

class Animal
  def initialize
    puts "Creating a new animal"
  end

  # Getters and setters
  def set_name(new_name)
    @name = new_name
  end

  def get_name
    @name
  end

  def name
    @name
  end

  def name=(new_name)
    if new_name.is_a?(Numeric)
      puts "Name can't be a number"
    else
      @name = new_name
    end
  end
end

cat = Animal.new
cat.set_name("Bob")

puts cat.get_name
puts cat.name
cat.name = "Chaffie"
puts cat.name

Class 2:

class Dog
  # Create getter functions automatically
  attr_reader :name, :height, :weight
  # Create setter functions automatically
  attr_writer :name, :height, :weight
  # Create getter and setter functions automatically
  attr_accessor :name, :height, :weight

  def bark
    return "Generic Bark"
  end
end

rover = Dog.new
rover.name = "Mike"
puts rover.name
puts rover.bark

Class 3:

class GermanShepard < Dog   # child class
  def bark
    return "Lud Bark"
  end
end

max = GermanShepard.new
max.name = "Max"
printf "%s - %s", max.name, max.bark    # Max - Lud Bark

Useful scripts

Low disk space alert

#!/bin/bash
CURRENT=$(df /data | grep / | awk '{ print $5}' | sed 's/%//g')
THRESHOLD=90

echo "$CURRENT"

if [ "$CURRENT" -gt "$THRESHOLD" ] ; then
    echo "Low Disk Space Alert: ${CURRENT}% used"
  mail -s 'Disk Space Alert' admin@gmail.com << EOF
Backup server remaining free space is critically low. Used: $CURRENT%
EOF
fi
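
To run the check periodically, a crontab entry along these lines could be used (the script path is an assumption):

# check disk space every hour
0 * * * * /usr/local/bin/low-disk-alert.sh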

sed

sed copies files (standard input by default) to standard output, editing them according to its own(!) commands placed in a "script" (a command file or an editor command string [not a shell script!]).

sed [ -n ] [ -e script ] [ -f script-file ] [ files ]
    -n  # suppress automatic output
    -e  # a list of commands given on the command line
    -f  # path to a script file with commands

echo "The quick brown fox jumps over the lazy dog" | sed 's/dog/cat/'

cat file | sed '/SOME_TEXT/ c NEW_TEXT'     # replace each line containing the matched text with a new one

sed '/TEXT_IN_STRING/d' FILE_NAME   # delete lines containing TEXT_IN_STRING
        -i  # edit the input file in place


sed -i 's/OLD/NEW/g' FILE   # replace OLD with NEW in the file


sed 's@'"$PWD"'@yo@'        # alternative delimiter (@), handy when the pattern contains slashes (e.g. a path from an env variable)
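
A couple of other everyday invocations (file names are placeholders):

sed -n '2,4p' FILE_NAME     # print only lines 2-4
sed '1d' FILE_NAME          # print the file without its first line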

https://ru.wikipedia.org/wiki/Sed
http://linuxgeeks.ru/sed.htm
http://citforum.ck.ua/operating_systems/articles/sed_awk.shtml

Sendmail

Send mail from console:

(echo "Subject:Hi"; echo "Body contents";) | sendmail -v -F "test"  my@mail

Service name port numbers

Source: http://www.iana.org/assignments/service-names-port-numbers/service-names-port-numbers.xhtml

Service Name and Transport Protocol Port Number Registry

Last Updated
2015-12-14

Expert(s)

TCP/UDP: Joe Touch; Eliot Lear, Allison Mankin, Markku Kojo, Kumiko Ono, Martin Stiemerling, Lars Eggert, Alexey Melnikov, Wes Eddy, and Alexander Zimmermann
SCTP: Allison Mankin and Michael Tuexen
DCCP: Eddie Kohler and Yoshifumi Nishida

Reference
[RFC6335]

Note

Service names and port numbers are used to distinguish between different services that run over transport protocols such as TCP, UDP, DCCP, and SCTP.

Service names are assigned on a first-come, first-served process, as documented in [RFC6335].

Port numbers are assigned in various ways, based on three ranges: System Ports (0-1023), User Ports (1024-49151), and the Dynamic and/or Private Ports (49152-65535); the different uses of these ranges are described in [RFC6335]. System Ports are assigned by the IETF process for standards-track protocols, as per [RFC6335]. User Ports are assigned by IANA using the "IETF Review" process, the "IESG Approval" process, or the "Expert Review" process, as per [RFC6335]. Dynamic Ports are not assigned.

The registration procedures for service names and port numbers are described in [RFC6335].

Assigned ports, both System and User ports, SHOULD NOT be used without or prior to IANA registration.

PLEASE NOTE THE FOLLOWING:

ASSIGNMENT OF A PORT NUMBER DOES NOT IN ANY WAY IMPLY AN ENDORSEMENT OF AN APPLICATION OR PRODUCT, AND THE FACT THAT NETWORK TRAFFIC IS FLOWING TO OR FROM A REGISTERED PORT DOES NOT MEAN THAT IT IS "GOOD" TRAFFIC, NOR THAT IT NECESSARILY CORRESPONDS TO THE ASSIGNED SERVICE. FIREWALL AND SYSTEM ADMINISTRATORS SHOULD CHOOSE HOW TO CONFIGURE THEIR SYSTEMS BASED ON THEIR KNOWLEDGE OF THE TRAFFIC IN QUESTION, NOT WHETHER THERE IS A PORT NUMBER REGISTERED OR NOT.

SFTP

https://www.digitalocean.com/community/tutorials/sftp-ru

sftp <username>@<hostname>

Some standard shell commands also work with the prefix 'l' (local):

lpwd
lls
lcp

Get file from server:

get <remoteFile> [localFile]

Get directory from server:

get -r <someDirectory>

You can tell SFTP to preserve the relevant permissions and access times by adding the -P or -p flag:

get -Pr someDirectory
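
Uploading works the same way with put (-r for directories in newer OpenSSH versions):

put <localFile> [remoteFile]
put -r <someDirectory>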

SFTP logging

http://www.the-art-of-web.com/system/sftp-logging-chroot/
https://access.redhat.com/articles/1374633

Logs are written to /var/log/auth.log.

Enable SFTP auth logging in /etc/ssh/sshd_config:

Subsystem sftp internal-sftp -l INFO

Speedup Linux boot time

Check time of running services:

systemd-analyze blame

# or render a visual graph
systemd-analyze plot > graph.svg

Disable not needed services:

sudo systemctl stop <service_name>
sudo systemctl disable <service_name>
sudo systemctl daemon-reload
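
To see the total boot time and which units block it, the standard systemd-analyze subcommands help:

systemd-analyze
systemd-analyze critical-chain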

Sphinx

Sphinx rtd theme:

https://github.com/snide/sphinx_rtd_theme

Docs:

http://www.sphinx-doc.org/en/1.5.1/
http://www.sphinx-doc.org/en/1.4.9/markup/toctree.html

Webhooks:

http://docs.readthedocs.io/en/latest/webhooks.html

Build the docs:

pip install Sphinx
sphinx-build -aE . _build/

or

make html

Syntax

Insert link to download file:

:download:`A Detailed Example <conda-cheatsheet.pdf>`

include:

.. include:: ../README.txt

Literal code block (indented text after ::):

Some text::

    $ first command
    $ second command

Simple table:

=====  =====  =======
A      B      A and B
=====  =====  =======
False  False  False
True   False  False
False  True   False
True   True   True
=====  =====  =======

ASE

Requirements:

  • Python 2.6, 2.7, 3.4, 3.5
  • NumPy (base N-dimensional array package)

Optional:

  • For extra functionality: SciPy (library for scientific computing)
  • For ase.gui: PyGTK (GTK+ for Python) and Matplotlib (2D Plotting)
Installation using system package managers

Linux

Major GNU/Linux distributions (including Debian and Ubuntu derivatives, Arch, Fedora, Red Hat and CentOS) have a python-ase package available that you can install on your system. This will manage dependencies and make ASE available for all users.

Note

Depending on the distribution, this may not be the latest release of ASE.

Mac OSX (Homebrew)

Mac users may be familiar with Homebrew; while there is no specific ASE package, Homebrew can be used to install the PyGTK dependency of ase.gui

$ brew install pygtk

before installing ASE with pip as described in the next section. Homebrew’s python package provides an up-to-date version of Python 2.7.x and sets up pip for you:

$ brew install python
Installation using pip

The simplest way to install ASE is to use pip which will automatically get the source code from PyPI:

$ pip install --upgrade --user ase
This will install ASE in a local folder where Python can automatically find it (~/.local on Unix). Some command line tools will be installed in the following location:

Unix and Mac OS X  ~/.local/bin
Homebrew           ~/Library/Python/X.Y/bin
Windows            %APPDATA%/Python/Scripts

Make sure you have that path in your PATH environment variable.

Now you should be ready to use ASE, but before you start, please run the tests as described below.

Note

If your OS doesn’t have numpy, scipy and matplotlib packages installed, you can install them with:

$ pip install --upgrade --user numpy scipy matplotlib
Installation from source

As an alternative to pip, you can also get the source from a tar-file or from Git.

Tar-file:

You can get the source as a tar-file for the latest stable release (ase-3.12.0.tar.gz, link at the end of this section) or the latest development snapshot (snapshot.tar.gz).

Unpack and make a soft link:

$ tar -xf ase-3.12.0.tar.gz
$ ln -s ase-3.12.0 ase
Git clone:

Alternatively, you can get the source for the latest stable release from https://gitlab.com/ase/ase like this:

$ git clone -b 3.12.0 https://gitlab.com/ase/ase.git

or if you want the development version:

$ git clone https://gitlab.com/ase/ase.git

Add ~/ase to your PYTHONPATH environment variable and add ~/ase/tools to PATH (assuming ~/ase is where your ASE folder is). Alternatively, you can install the code with python setup.py install --user and add ~/.local/bin to the front of your PATH environment variable (if you don’t already have that).

Finally, please run the tests.

Note

We also have Git-tags for older stable versions of ASE. See the release notes for which tags are available. The dates of older releases can be found there as well.

ase-3.12.0.tar.gz: https://pypi.python.org/packages/ab/d4/4fb1a390d6ca8c4b190285eaecbb0349d3989befd5e670dc14751c715575/ase-3.12.0.tar.gz
Environment variables

PATH

Colon-separated paths where programs can be found.

PYTHONPATH

Colon-separated paths where Python modules can be found.

Set these permanently in your ~/.bashrc file:

$ export PYTHONPATH=<path-to-ase-package>:$PYTHONPATH
$ export PATH=<path-to-ase-command-line-tools>:$PATH

or your ~/.cshrc file:

$ setenv PYTHONPATH <path-to-ase-package>:${PYTHONPATH}
$ setenv PATH <path-to-ase-command-line-tools>:${PATH}

Note

If running on Mac OSX: be aware that terminal sessions will source ~/.bash_profile by default and not ~/.bashrc. Either put any export commands into ~/.bash_profile or source ~/.bashrc in all Bash sessions by adding

if [ -f ${HOME}/.bashrc ]; then
    source ${HOME}/.bashrc
fi

to your ~/.bash_profile.

Test your installation

Before running the tests, make sure you have set your PATH environment variable correctly as described in the relevant section above. Run the tests like this:

$ python -m ase.test  # takes 1 min.

and send us the output if there are failing tests.

SSH

DIY SSH Bastion Host - https://smallstep.com/blog/diy-ssh-bastion-host

AWS Bastion host quick start - https://docs.aws.amazon.com/quickstart/latest/linux-bastion/architecture.html

Security recommendations for OpenSSH - https://infosec.mozilla.org/guidelines/openssh

Generate SSH public key from SSH private key:

# to file
ssh-keygen -f ~/.ssh/id_rsa -y > ~/.ssh/id_rsa.pub

# to stdout
ssh-keygen -f ~/.ssh/id_rsa -y

Disable SSH host key checking

The authenticity of host '192.168.0.100 (192.168.0.100)' can't be established. RSA key fingerprint is 3f:1b:f4:bd:c5:aa:c1:1f:bf:4e:2e:cf:53:fa:d8:59. Are you sure you want to continue connecting (yes/no)?

Add following lines in ssh config:

Host 192.168.0.*
    StrictHostKeyChecking no
    UserKnownHostsFile=/dev/null
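
The same behaviour can also be requested for a single connection on the command line (host is a placeholder):

ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null user@192.168.0.100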

SSH options and features

ssh [options] <user>@<host>
    -L                  port forwarding (tunneling)
    -N                  do not execute a remote command (no shell)
    -f                  background run
    -i  <private_key>   path to private key
    -p  <port>

# remove existing entry
ssh-keygen -R "hostname"

# change connection timeout limit (seconds)
ssh -o ConnectTimeout=10  <hostName>

X11 application

Run a GUI application on the remote desktop. Connect, export the display inline and start the application in a way that won't close it after the SSH session dies:

ssh <user@server> "DISPLAY=:0 nohup firefox"

Run remote GUI application on local desktop:

ssh -XC <user@server> firefox
# or
ssh -YC <user@server> firefox

Multiple SSH private keys on one client

EXPERIMENTAL. Add into your ~/.profile to add all id_rsa* keys from your home .ssh dir. Applies after re-login:

find $HOME/.ssh/ -type f -name "id_rsa*" ! -name "*.*" -exec ssh-add "{}" \;
# Also you can manually run this command

.ssh/config:

Host myshortname realname.example.com
    HostName realname.example.com
    IdentityFile ~/.ssh/realname_rsa # private key for realname
    User remoteusername

Host myother realname2.example.org
    HostName realname2.example.org
    IdentityFile ~/.ssh/realname2_rsa
    User remoteusername

Or:

ssh-add <path_to_private_key>

SSH tunneling

ssh -NL <local_port>:<remote_address>:<remote_port> <remote_user>@<remote_host>
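
For example, a sketch that makes a remote web server reachable on local port 8080 (host names are placeholders):

ssh -NL 8080:localhost:80 user@bastion.example.com
# then open http://localhost:8080 locally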

SSH aliases

Type following in ~/.ssh/config:

Host <name>
  Hostname      <host>
  User          <username>
  IdentityFile  <path_to_private_ssh_key>

Usage:

ssh <name>

SSL

Show info:

openssl x509 -noout -issuer -enddate -in root.pem

# show cert expire time
openssl x509 -in /etc/letsencrypt/live/example.com/fullchain.pem -text -noout | grep -A3 Validity

Validate the certificate even when the protocol used to communicate with the server is not HTTP-based. For example:

curl -v --cacert ca.crt https://logs.mycompany.com:5044

Self-signed

Generate on Linux:

sudo openssl req -x509 -newkey rsa:4096 -keyout key.key -out cert.crt -days 365 -nodes -batch [-subj "/CN=frontend.local/O=frontend.local"]


# alternate
openssl req -x509 -newkey rsa:4096 -keyout logstash.key -out logstash.crt -days 3650 -nodes -batch -subj '/CN=test.com/subjectAltName=DNS.1=test.com,DNS.2=test2.com'

-batch suppresses the questions. -subj /CN=test sets the CN. You can also add -nodes if you don't want to protect your private key with a passphrase; otherwise it will prompt you for "at least a 4 character" password. Replace the days parameter (365) with any number to affect the expiration date. It will then prompt you for things like "Country Name", but you can just hit Enter and accept the defaults.

Self-signed certs are not validated by any third party unless you import them into the browser beforehand. If you need more security, you should use a certificate signed by a CA.

Self-signed with SAN

Create config ssl.conf:

[ req ]
distinguished_name = req_distinguished_name
x509_extensions    = v3_req
prompt             = no

[ req_distinguished_name ]
O = My Organisation

[ v3_req ]
basicConstraints = CA:TRUE
subjectAltName   = @alt_names

[ alt_names ]
DNS.1 = *
DNS.2 = *.*
DNS.3 = *.*.*
DNS.4 = *.*.*.*
DNS.5 = *.*.*.*.*
DNS.6 = *.*.*.*.*.*
DNS.7 = *.*.*.*.*.*.*
IP.1  = 10.10.11.222
IP.2  = 127.0.0.1

Note

Instead of the wildcards above, the [ alt_names ] section can also list just a single host DNS name or IP.

Run generate command:

sudo openssl req -x509 -newkey rsa:4096 -keyout key.key -out cert.crt -days 3650 -nodes -batch -config ssl.conf
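
To check that the SAN entries actually made it into the certificate:

openssl x509 -in cert.crt -noout -text | grep -A1 'Subject Alternative Name'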

Letsencrypt

Let’s Encrypt - https://letsencrypt.org/getting-started/

https://certbot.eff.org

Add to the web server config (if it does not exist):

# [Nginx]

location /.well-known {
    root   /var/www/html;
}

Generate certificates:

sudo apt install letsencrypt

# Replace with your webroot and hostname
letsencrypt certonly --webroot -w /var/www/html -d my.company.com

# Letsencrypt will generate certs and show path to them (paste this path to web-server config)

Add CRON tasks (auto renew):

sudo crontab -e

# Check or renew certs twice per day
30 12 * * * letsencrypt renew
0 18 * * * letsencrypt renew
Certbot

Generate certificates (certbot):

sudo add-apt-repository ppa:certbot/certbot
sudo apt-get update
sudo apt-get install certbot

 # Replace with your webroot and hostname
certbot certonly --webroot -w /var/www/html -d my.company.com

# Certbot will generate certs and show path to them (paste this path to web-server config)
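
Before relying on scheduled renewal, the process can be tested with certbot's dry-run mode:

sudo certbot renew --dry-run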

Add CRON tasks (auto renew, with reload nginx):

sudo crontab -e

# Check or renew certs twice per day
0 12,18 * * * certbot renew --post-hook "systemctl reload nginx"
Certbot - Docker

# Get certs (will be saved at /etc/letsencrypt/live)
docker run -p 80:80 -it --rm --name certbot -v "/etc/letsencrypt:/etc/letsencrypt" -v "/var/lib/letsencrypt:/var/lib/letsencrypt" certbot/certbot certonly --standalone --preferred-challenges http -d <dns> -d <www.dns> --email <email@example.com> --non-interactive --agree-tos

# --test-cert   - staging certs

# Delete certs (staging for example)
docker run -p 80:80 -it --rm --name certbot -v "/etc/letsencrypt:/etc/letsencrypt" -v "/var/lib/letsencrypt:/var/lib/letsencrypt" certbot/certbot delete --cert-name <dns>

supervisor

# requires python < 3
# pip install supervisor

sudo apt-get install supervisor


# config files
/etc/supervisor/supervisord.conf
/etc/supervisor/conf.d/*.conf               # includes configs


supervisord -c supervisord.conf             # run with another config file

echo_supervisord_conf       # echo sample of config


# configuration
[program:foo]
command=/bin/cat


sudo supervisorctl
    status              # show list of services
    reload              # reload config

# http administration
[inet_http_server]
port = 0.0.0.0:9001
username = user
password = 123

Sample supervisor config file

; Sample supervisor config file.
;
; For more information on the config file, please see:
; http://supervisord.org/configuration.html
;
; Note: shell expansion ("~" or "$HOME") is not supported.  Environment
; variables can be expanded using this syntax: "%(ENV_HOME)s".

[unix_http_server]
file=/tmp/supervisor.sock   ; (the path to the socket file)
;chmod=0700                 ; socket file mode (default 0700)
;chown=nobody:nogroup       ; socket file uid:gid owner
;username=user              ; (default is no username (open server))
;password=123               ; (default is no password (open server))

;[inet_http_server]         ; inet (TCP) server disabled by default
;port=127.0.0.1:9001        ; (ip_address:port specifier, *:port for all iface)
;username=user              ; (default is no username (open server))
;password=123               ; (default is no password (open server))

[supervisord]
logfile=/tmp/supervisord.log ; (main log file;default $CWD/supervisord.log)
logfile_maxbytes=50MB        ; (max main logfile bytes b4 rotation;default 50MB)
logfile_backups=10           ; (num of main logfile rotation backups;default 10)
loglevel=info                ; (log level;default info; others: debug,warn,trace)
pidfile=/tmp/supervisord.pid ; (supervisord pidfile;default supervisord.pid)
nodaemon=false               ; (start in foreground if true;default false)
minfds=1024                  ; (min. avail startup file descriptors;default 1024)
minprocs=200                 ; (min. avail process descriptors;default 200)
;umask=022                   ; (process file creation umask;default 022)
;user=chrism                 ; (default is current user, required if root)
;identifier=supervisor       ; (supervisord identifier, default is 'supervisor')
;directory=/tmp              ; (default is not to cd during start)
;nocleanup=true              ; (don't clean up tempfiles at start;default false)
;childlogdir=/tmp            ; ('AUTO' child log dir, default $TEMP)
;environment=KEY=value       ; (key value pairs to add to environment)
;strip_ansi=false            ; (strip ansi escape codes in logs; def. false)

; the below section must remain in the config file for RPC
; (supervisorctl/web interface) to work, additional interfaces may be
; added by defining them in separate rpcinterface: sections
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface

[supervisorctl]
serverurl=unix:///tmp/supervisor.sock ; use a unix:// URL  for a unix socket
;serverurl=http://127.0.0.1:9001 ; use an http:// url to specify an inet socket
;username=chris              ; should be same as http_username if set
;password=123                ; should be same as http_password if set
;prompt=mysupervisor         ; cmd line prompt (default "supervisor")
;history_file=~/.sc_history  ; use readline history if available

; The below sample program section shows all possible program subsection values,
; create one or more 'real' program: sections to be able to control them under
; supervisor.

;[program:theprogramname]
;command=/bin/cat              ; the program (relative uses PATH, can take args)
;process_name=%(program_name)s ; process_name expr (default %(program_name)s)
;numprocs=1                    ; number of processes copies to start (def 1)
;directory=/tmp                ; directory to cwd to before exec (def no cwd)
;umask=022                     ; umask for process (default None)
;priority=999                  ; the relative start priority (default 999)
;autostart=true                ; start at supervisord start (default: true)
;autorestart=unexpected        ; whether/when to restart (default: unexpected)
;startsecs=1                   ; number of secs prog must stay running (def. 1)
;startretries=3                ; max # of serial start failures (default 3)
;exitcodes=0,2                 ; 'expected' exit codes for process (default 0,2)
;stopsignal=QUIT               ; signal used to kill process (default TERM)
;stopwaitsecs=10               ; max num secs to wait b4 SIGKILL (default 10)
;stopasgroup=false             ; send stop signal to the UNIX process group (default false)
;killasgroup=false             ; SIGKILL the UNIX process group (def false)
;user=chrism                   ; setuid to this UNIX account to run the program
;redirect_stderr=true          ; redirect proc stderr to stdout (default false)
;stdout_logfile=/a/path        ; stdout log path, NONE for none; default AUTO
;stdout_logfile_maxbytes=1MB   ; max # logfile bytes b4 rotation (default 50MB)
;stdout_logfile_backups=10     ; # of stdout logfile backups (default 10)
;stdout_capture_maxbytes=1MB   ; number of bytes in 'capturemode' (default 0)
;stdout_events_enabled=false   ; emit events on stdout writes (default false)
;stderr_logfile=/a/path        ; stderr log path, NONE for none; default AUTO
;stderr_logfile_maxbytes=1MB   ; max # logfile bytes b4 rotation (default 50MB)
;stderr_logfile_backups=10     ; # of stderr logfile backups (default 10)
;stderr_capture_maxbytes=1MB   ; number of bytes in 'capturemode' (default 0)
;stderr_events_enabled=false   ; emit events on stderr writes (default false)
;environment=A=1,B=2           ; process environment additions (def no adds)
;serverurl=AUTO                ; override serverurl computation (childutils)

; The below sample eventlistener section shows all possible
; eventlistener subsection values, create one or more 'real'
; eventlistener: sections to be able to handle event notifications
; sent by supervisor.

;[eventlistener:theeventlistenername]
;command=/bin/eventlistener    ; the program (relative uses PATH, can take args)
;process_name=%(program_name)s ; process_name expr (default %(program_name)s)
;numprocs=1                    ; number of processes copies to start (def 1)
;events=EVENT                  ; event notif. types to subscribe to (reqd)
;buffer_size=10                ; event buffer queue size (default 10)
;directory=/tmp                ; directory to cwd to before exec (def no cwd)
;umask=022                     ; umask for process (default None)
;priority=-1                   ; the relative start priority (default -1)
;autostart=true                ; start at supervisord start (default: true)
;autorestart=unexpected        ; whether/when to restart (default: unexpected)
;startsecs=1                   ; number of secs prog must stay running (def. 1)
;startretries=3                ; max # of serial start failures (default 3)
;exitcodes=0,2                 ; 'expected' exit codes for process (default 0,2)
;stopsignal=QUIT               ; signal used to kill process (default TERM)
;stopwaitsecs=10               ; max num secs to wait b4 SIGKILL (default 10)
;stopasgroup=false             ; send stop signal to the UNIX process group (default false)
;killasgroup=false             ; SIGKILL the UNIX process group (def false)
;user=chrism                   ; setuid to this UNIX account to run the program
;redirect_stderr=true          ; redirect proc stderr to stdout (default false)
;stdout_logfile=/a/path        ; stdout log path, NONE for none; default AUTO
;stdout_logfile_maxbytes=1MB   ; max # logfile bytes b4 rotation (default 50MB)
;stdout_logfile_backups=10     ; # of stdout logfile backups (default 10)
;stdout_events_enabled=false   ; emit events on stdout writes (default false)
;stderr_logfile=/a/path        ; stderr log path, NONE for none; default AUTO
;stderr_logfile_maxbytes=1MB   ; max # logfile bytes b4 rotation (default 50MB)
;stderr_logfile_backups        ; # of stderr logfile backups (default 10)
;stderr_events_enabled=false   ; emit events on stderr writes (default false)
;environment=A=1,B=2           ; process environment additions
;serverurl=AUTO                ; override serverurl computation (childutils)

; The below sample group section shows all possible group values,
; create one or more 'real' group: sections to create "heterogeneous"
; process groups.

;[group:thegroupname]
;programs=progname1,progname2  ; each refers to 'x' in [program:x] definitions
;priority=999                  ; the relative start priority (default 999)

; The [include] section can just contain the "files" setting.  This
; setting can list multiple files (separated by whitespace or
; newlines).  It can also contain wildcards.  The filenames are
; interpreted as relative to this file.  Included files *cannot*
; include files themselves.

;[include]
;files = relative/directory/*.ini

tmux

tmux is a terminal multiplexer that you can attach to and detach from without losing your processes and history. Like screen, only better (primarily because it uses a client-server model).

A very good way to start tmux is tmux attach || tmux new: first you try to attach to an existing tmux server, if one exists; if there is none yet, you create a new one.

After that you get a full-fledged console. Ctrl+b d detaches. (You are detached the same way if the connection drops. How to attach back and continue working is described above.)

Ctrl+b d        — detach

Ctrl+b c        — create a window;
Ctrl+b 0...9    — switch to the window with that number;
Ctrl+b p        — switch to the previous window;
Ctrl+b n        — switch to the next window;
Ctrl+b l        — switch to the last active window (the one you switched to the current one from);
Ctrl+b &        — close the window (or just type exit in the terminal).

A window can hold many panes:

Ctrl+b %        — split the current pane in two, vertically;
Ctrl+b "        — split the current pane in two, horizontally (the quote next to Enter, not Shift+2);
Ctrl+b →←↑↓     — move between panes;
Ctrl+b x        — close the pane (or just type exit in the terminal).
Ctrl+b PgUp     — enter "copy mode", after which:
PgUp, PgDown    — scroll;
q               — exit "copy mode".

One user attaching to a session:

tmux new-session -s shared
tmux attach-session -t shared

Different users attaching to the same session:

tmux -S /tmp/shareds new -s shared
tmux -S /tmp/shareds attach -t shared

Session can be made read-only for the second user, but only on a voluntary basis. The decision to work read-only is made when the second user attaches to the session:

# tmux -S /tmp/shareds attach -t shared -r

Tomcat

Get static file from tomcat server

Find the webapps directory; it is usually located at /var/lib/tomcat7/webapps. In this dir create a directory, e.g. webapps/downloads, and put some files there. The files will be available at the URL <hostname>/downloads/<filename>.

Traefik

sample.toml:

https://raw.githubusercontent.com/containous/traefik/master/traefik.sample.toml

docker-compose config without traefik.toml:

#    command:
#     - --web
#     - --logLevel=DEBUG
#     - --defaultEntryPoints=http
#     - --defaultEntryPoints=http,https
#     - --entryPoints=Name:http Address::80
#     - --entryPoints=Name:http Address::80 Redirect.EntryPoint:https
#     - --entryPoints=Name:https Address::443 TLS
#     - --docker
#     - --docker.exposedbydefault=false
#     - --docker.domain=docker.localhost
#     - --acme=true
#     - --acme.caserver=https://acme-staging.api.letsencrypt.org/directory
#     - --acme.email=valentinsiryk@gmail.com
#     - --acme.entryPoint=https
#     - --acme.ondemand=true
#     - --acme.onhostrule=true
#     - --acme.storage=/certs/acme.json

ufw

Main commands:

# show apps available profiles
sudo ufw app list

# allow available profile
sudo ufw allow OpenSSH

sudo ufw enable
sudo ufw status
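
Beyond application profiles, specific ports and source subnets can be allowed directly (values below are examples):

# allow a port/protocol
sudo ufw allow 80/tcp

# allow SSH only from a subnet
sudo ufw allow from 192.168.0.0/24 to any port 22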

Log analysis:

Oct 20 11:02:27 bbb kernel: [ 7980.724380] [UFW BLOCK]
IN=eth1 OUT=

# MAC: destination MAC (6 bytes) : source MAC (6 bytes) : EtherType (08:00 = IPv4)
MAC=   00:1e:8c:7b:cf:ca:fc:75:16:56:e4:78:08:00

# client
SRC=192.168.35.67

# server
DST=192.168.35.10

LEN=60
TOS=0x00
PREC=0x00
TTL=64
ID=61984
DF

# type of protocol TCP/UDP
PROTO=TCP

SPT=37252

# PORT
DPT=80

WINDOW=29200
RES=0x00
SYN
URGP=0

vim

Main commands:

# open file
vi /file_folder/filename

# open file and go to line 25
vi +25 /file_folder/filename

# ON modify mode
i

# OFF modify mode
[Esc]

# undo
u

# save changes
:w [Enter]

# exit without saving
:q! [Enter]

# save changes and exit
:wq [Enter]

# delete line
dd

# delete symbol
x

# copy (yank) line
yy
# paste line after cursor
p
# paste line before cursor
P

# find 'text'
/text [Enter]

# find and replace
:%s/OLD/NEW/gc

# show line numbers
:set nu
# hide line numbers
:set nonu

VirtualBox decrease disk size

Host OS — Windows. Guest OS — Linux

Unmount the disk that needs resizing, or boot from a LiveCD.

On VM:

sudo apt-get install zerofree
sudo zerofree /dev/sda1

# /dev/sda1 - your partition

On host Windows:

cd C:\Program Files\Oracle\VirtualBox
VBoxManage modifyhd <path\to\vdi_file.vdi> --compact

Host OS and guest OS — Linux

On guest:

# fill all free space with zeros
# 'bs=' - block size
dd if=/dev/zero of=zero bs=512k

# remove zero file
rm zero

On host:

vboxmanage modifyhd </path/to/thedisk.vdi> --compact

visudo

Allow user to run command (without password):

sudo visudo
    <username>   ALL=(ALL)       NOPASSWD: <command_1>, <command_2>
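
For example, a minimal sketch allowing a hypothetical deploy user to restart nginx without a password:

deploy   ALL=(ALL)       NOPASSWD: /bin/systemctl restart nginx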

VMWare

Fix Segmentation fault on 5.8+ Kernel

VMWare v15.5.7 does not work with Linux kernel 5.8+. To resolve the issue, do the following:

git clone https://github.com/mkubecek/vmware-host-modules.git
cd vmware-host-modules/
git checkout workstation-15.5.7
git pull
make
sudo make install
sudo /etc/init.d/vmware restart

sudo nano /usr/bin/vmware

# Replace this:
if "$BINDIR"/vmware-modconfig --appname="VMware Workstation" --icon="vmware-workstation" &&
   vmware_module_exists $vmmon; then
   exec "$libdir"/bin/"vmware" "$@"
fi

# With this:
if vmware_module_exists $vmmon; then
   exec "$libdir"/bin/"vmware" "$@"
fi

Networking

Permanent IP address (/etc/vmware/vmnet8/dhcpd.conf):

host test {
    hardware ethernet 0:50:56:21:49:12;
    fixed-address 192.168.65.111;
}

Custom domain name (networking_config):

...
VNET_8_DHCP_PARAM_DOMAIN_NAME "example.net"
...

## Migrate config
# vmware-networks --migrate-network-settings networking_config

VMWare VM configuration

VM config file:

# CD-ROM preferences
sata0:1.present = "TRUE"
sata0:1.fileName = "/root/tmp/unlocker-master/tools/darwin.iso"
sata0:1.deviceType = "cdrom-image"

# disable VMWare tools time sync
tools.syncTime = "FALSE"

# disable gathering debugging information
vmx.buildType = "release"

# disable page sharing
sched.mem.pshare.enable = "FALSE"

# disable memory trimming
MemTrimRate = "0"

# disable isolation tools (drag and drop, etc.)
isolation.tools.copy.disable = "TRUE"
isolation.tools.dnd.disable = "TRUE"
isolation.tools.paste.disable = "TRUE"

# enable passing CPU performance counters to the guest
vpmc.enable = "TRUE"

# set the virtual machine execution mode to VT-x/EPT or AMD-RVI (run virtual machines inside a virtual machine - allows nested virtualization)
vhv.enable = "TRUE"

# Virtualization engine mode
#
# Intel VT-x or AMD-V
monitor.virtual_mmu = "software"
monitor.virtual_exec = "hardware"
#
# Automatic
monitor.virtual_mmu = "automatic"
monitor.virtual_exec = "automatic"
#
# Intel VT-x/EPT or AMD-RVI
monitor.virtual_mmu = "hardware"
monitor.virtual_exec = "hardware"

Linux installation

https://pubs.vmware.com/workstation-12/index.jsp#com.vmware.ws.using.doc/GUID-1F5B1F14-A586-4A56-83FA-2E7D8333D5CA.html

sh VMware-Workstation.bundle --console --custom --eulas-agreed --set-setting=vmware-workstation serialNumber <SN>
    --required  # silent installation

# defaults:
# /var/lib/vmware/Shared VMs

# if Linux has no graphical environment
apt install xfonts-base libxinerama-dev libxcursor-dev libxtst-dev

# uninstall command
vmware-installer -u vmware-workstation

VMWare command line

# increase virtual disk space (shut down VM, make sure that there are no snapshots)
vmware-vdiskmanager -x <VALUE>GB <path_to.vmdk>

# show running VM's
vmrun list

# start VM in background
# If "Error: The file is already in use vmrun" delete *.lck dir (may be happen if vm was moved)
vmrun start <path_to_vmx> [gui|nogui]

# stop VM
vmrun stop <path_to_vmx> [hard|soft]

# cloning VM
vmrun clone <path_to_vmx> <path_to_dest_vmx> full|linked [-snapshot=from_snapshot_name] [-cloneName=Name]

# open GUI net configuration
vmware-netcfg

# NAT config (port forwarding, etc.)
/etc/vmware/vmnet8/nat/nat.conf

# reload network (apply network config changes)
vmware-networks --stop && vmware-networks --start

# create snapshot
vmrun snapshot <path_to.vmx> <snapshot_name>

# show snapshots
vmrun listSnapshots <path_to.vmx> [showTree]

# set VM state to a snapshot
vmrun revertToSnapshot <path_to.vmx> <snapshot_name>

# remove a snapshot from a VM
vmrun deleteSnapshot <path_to.vmx> <snapshot_name> [andDeleteChildren]

VNC

xtightvncviewer

xtightvncviewer <IP> -quality 0

wget via proxy

Setup in /etc/wgetrc:

http_proxy = http://<IP>:<PORT>/
ftp_proxy = http://<IP>:<PORT>/
use_proxy = on

Zabbix

Zabbix Apache enable SSL

a2enmod ssl
a2ensite default-ssl
service apache2 reload

PSK Zabbix 3.0/3.2

https://www.zabbix.com/documentation/3.0/ru/manual/encryption/using_pre_shared_keys

  • Generate a secret PSK. Create a file on the zabbix-agent host, e.g. /etc/zabbix/zabbix_agentd.psk, and put the generated PSK on its first line:

    # min 16, max 256 hex digits (encryption strength)
    openssl rand -hex 64 > /etc/zabbix/zabbix_agentd.psk
    
  • Change the owner and permissions:

    sudo chown zabbix:zabbix /etc/zabbix/zabbix_agentd.psk
    sudo chmod 400 /etc/zabbix/zabbix_agentd.psk
    
  • Change the following parameters in the config /etc/zabbix/zabbix_agentd.conf:

    TLSConnect=psk
    TLSAccept=psk
    TLSPSKFile=/etc/zabbix/zabbix_agentd.psk
    TLSPSKIdentity=PSK 001
    
  • Restart agent:

    sudo service zabbix-agent restart
    
  • Configure the host on the Zabbix server to use PSK. You may need to wait a while for the Zabbix cache to update.

Zabbix server 3.2

Configuration

/usr/share/zabbix/conf/zabbix.conf.php

Zabbix agent 3.0/3.2

Installation

https://www.zabbix.com/documentation/3.0/manual/installation/install_from_packages

  • Ubuntu 16.04:

    # Zabbix Agent 3.0
    wget http://repo.zabbix.com/zabbix/3.0/ubuntu/pool/main/z/zabbix-release/zabbix-release_3.0-1+xenial_all.deb
    sudo dpkg -i zabbix-release_3.0-1+xenial_all.deb
    
    # Zabbix Agent 3.2
    wget http://repo.zabbix.com/zabbix/3.2/ubuntu/pool/main/z/zabbix-release/zabbix-release_3.2-1+xenial_all.deb
    sudo dpkg -i zabbix-release_3.2-1+xenial_all.deb
    
    sudo apt update
    
    sudo apt install zabbix-agent
    sudo service zabbix-agent start
    
    # add to autostart
    sudo /bin/systemctl daemon-reload
    sudo /bin/systemctl enable zabbix-agent.service
    
Configuration

/etc/zabbix/zabbix_agentd.conf:

Server=127.0.0.1,<ZABBIX_SERVER_IP>

# ServerActive=127.0.0.1,<ZABBIX_SERVER_IP>
# Hostname=<HOSTNAME>
sudo service zabbix-agent restart