Autodeploy your Docker images to AWS (git push = deploy)

So I have a lot of small projects and some large ones. To build quality into my code I need to run tests on it, and to run my code in a prod-like environment.
I always use Docker, so my dev environment is very close to my prod.
One key thing I do is that when I push code to my master branch, I do a release to a server. That way I can verify that everything is working and run tests against it.

 

So what do you need to autodeploy your code to an AWS server?

 

The code and Docker

I built a small app for a hackathon I did, and it is here: https://github.com/mattiashem/cars (look in the autobuild branch). It is this code that I want to run on my server.
Log in to your Docker Hub account, go to "Settings > Linked Accounts" and link your GitHub account with your Docker Hub account.
Now we can create an automated build for the code. When I push code to GitHub it will build my Docker image (I have a Dockerfile in the base of my git repo).

 

Test that the automated build starts when you push something new to git (you can also trigger a build manually from the Docker Hub page).
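For example, you can push an empty commit just to kick off a build (assuming origin points at the GitHub repo that Docker Hub is watching):

# push an empty commit to trigger the automated build
git commit --allow-empty -m "trigger autobuild"
git push origin master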

 

Deploying into AWS

Now I have an AWS server running, with port 8555 open on that server.
Download the Python code from https://github.com/schickling/docker-hook; that will be my hook. When the hook is triggered it will run a script for me.

So first I need the script that the hook will run. I use docker-compose, so I create a docker-compose file:

 

web1:
 image: mattiashem/cars
 links:
 - db
web2:
 image: mattiashem/cars
 links:
 - db
web3:
 image: mattiashem/cars
 links:
 - db
web4:
 image: mattiashem/cars
 links:
 - db
lb:
 image: mattiashem/cars-lb
 ports:
 - "80:80"
 - "443:443"
 links:
 - web1
 - web2
 - web3
 - web4
db:
 image: mongo

 

This docker-compose file is NOT the one I use for dev: this one ONLY uses images and does not build anything.

 

This is the deploy script: it stops my app, pulls down the latest images, and then starts my app again.

#!/bin/bash
# Stop the running containers, pull the latest images and bring the app back up
docker-compose stop
docker pull mattiashem/cars
docker pull mattiashem/cars-lb
# "up -d" recreates the containers so they actually use the freshly pulled images
docker-compose up -d
echo "Cars is now deployed" | mail -s "I have autodeployed cars" mattias.hemmingsson@gmail.com

So let's trigger it when a build is done.

 

Install and start the Python script that handles the autodeploy:

curl https://raw.githubusercontent.com/schickling/docker-hook/master/docker-hook > /usr/local/bin/docker-hook; chmod +x /usr/local/bin/docker-hook

docker-hook -t asdasdhh11wweddds  -c sh /opt/matte/run.sh

Test it

curl -X POST http://luckyluke.madeatfareoffice.com:8555/asdasdhh11wweddds

 

You should see your command being triggered.
If it works, stop docker-hook and start it in the background:

docker-hook -t asdasdhh11wweddds  -c sh /opt/matte/run.sh &
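A job started with a plain & can die when your SSH session ends, so to keep the hook running after you log out, something like nohup works (just a sketch; use your own token, script path and log location):

# keep docker-hook alive after logout, with output going to a log file
nohup docker-hook -t asdasdhh11wweddds -c sh /opt/matte/run.sh > /var/log/docker-hook.log 2>&1 &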

Then go to your Docker Hub repo and click Webhooks. Give the webhook a name and paste in your URL: http://luckyluke.madeatfareoffice.com:8555/asdasdhh11wweddds

Now you have a working push-to-git = deploy system.


WordPress multisite to WordPress single site (Easy Linux)

So I had to split up my WordPress multisite into single sites, and it was not that hard once I found out how.
First start by setting up the new WordPress, then we migrate the old WordPress site over into the new one.

1. Set up the new WordPress site
Install and set up the new WordPress site. You can run through the installation; we will clean out the installed content later.

2. In the old multisite, find the site ID. It will be a number like 1 or 2. When you are in Network Admin > Sites, check the links when you hover over a site and look for the digits 🙂

3. Migrate all the themes and data into the new site. Copy wp-content from the old install into the new one and give it the correct permissions.
Then go into the old wp-content/uploads/sites/YOUR SITE ID/ and copy the content in there into the new site's wp-content/uploads (a sketch of the commands follows below).

Now we have all the files from the old WordPress multisite in the new WordPress single site.
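Roughly like this (just a sketch: /var/www/old, /var/www/new and site ID 13 are placeholders, and your web server user may be www-data instead of apache):

# copy themes, plugins and uploads from the old install into the new one
cp -a /var/www/old/wp-content/. /var/www/new/wp-content/
# move the old subsite uploads into the normal uploads folder
cp -a /var/www/old/wp-content/uploads/sites/13/. /var/www/new/wp-content/uploads/
# give the web server ownership of the files
chown -R apache:apache /var/www/new/wp-content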

4. SQL: get the database

On the old server, run: mysqldump database > databas.sql

Now we have an SQL dump of the WordPress multisite.

Then run the following sed commands to get the dump ready.

#This will move the standard tables out of the way
sed -i 's/wp_commentmeta/wp_com2/g' databas.sql
sed -i 's/wp_comments/wp_comments2/g' databas.sql
sed -i 's/wp_links/wp_links2/g' databas.sql
sed -i 's/wp_options/wp_options2/g' databas.sql
sed -i 's/wp_postmeta/wp_postmeta2/g' databas.sql
sed -i 's/wp_posts/wp_posts22/g' databas.sql
sed -i 's/wp_term_relationships/wp_rs22/g' databas.sql
sed -i 's/wp_term_taxonomy/wp_ta22/g' databas.sql
sed -i 's/wp_terms/wp_terms22/g' databas.sql

#This will move the site tables into the base (enter your site ID here instead of 13)
sed -i 's/wp_13_/wp_/g' databas.sql

Then import the SQL dump into the new database of the new WordPress installation:

mysql database < databas.sql

And now you will have your old WordPress multisite as a single site.

Roll your own Docker Registry with nginx (in Docker)

When your number of private Docker images grows, it is time to set up your own private repo.
To have your own Docker repo you need: 1. the Docker registry, 2. nginx to handle users, 3. TLS so that all connections are encrypted.

So here is what you do to get your own Docker repo running.


1. Install docker-compose and set up the following docker-compose file:
storage: 
 image: busybox 
 volumes: 
 - /backup/docker/registry:/var/lib/docker/registry 
cache: 
 image: redis 
registry: 
 image: registry 
 ports: 
 - 127.0.0.1:5000:5000 
 links: 
 - cache 
 - storage 
 volumes_from: 
 - storage
 environment:
  STANDALONE: true
  SETTINGS_FLAVOR: local
  STORAGE_PATH: /var/lib/docker/registry
  SEARCH_BACKEND: sqlalchemy
  CACHE_REDIS_HOST: cache
  CACHE_REDIS_PORT: 6379
  CACHE_LRU_REDIS_HOST: cache
  CACHE_LRU_REDIS_PORT: 6379
webb:
 #image: mattiashem/nginx-registry
 build: registry-front/
 ports:
 - 443:443
 - 80:80
 links:
 - registry

 

Create the folder registry-front. In that folder we are going to add our users and our certs for the TLS.
Then create a Dockerfile in it and add the following:

 

#Base docker file for lifeandshell.com
FROM mattiashem/nginx-registry
MAINTAINER "Mattias Hemmingsson" <matte.hemmingsson@gmail.com>

EXPOSE 80
EXPOSE 443

ADD nginx.htpasswd /etc/nginx/nginx.htpasswd
ADD cert.pem /etc/nginx/ssl/nginx.crt
ADD privkey.pem /etc/nginx/ssl/nginx.key
ADD fullchain.pem /etc/nginx/ssl/fullchain.pem

CMD nginx -g "daemon off;"

The cert files I get from Let's Encrypt, which is free, but you can get them from any source. Change the ADD lines so that the source file names match your certs.
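For reference, this is roughly how you could fetch them with certbot (a sketch; registry.example.com is a placeholder for your own domain, and port 80 needs to be free while certbot runs):

# get a free certificate from Let's Encrypt
certbot certonly --standalone -d registry.example.com
# copy the files next to the Dockerfile so the ADD lines find them
cp /etc/letsencrypt/live/registry.example.com/cert.pem registry-front/
cp /etc/letsencrypt/live/registry.example.com/privkey.pem registry-front/
cp /etc/letsencrypt/live/registry.example.com/fullchain.pem registry-front/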

 

Now it is time to set up some users. Create the file add_user.sh and add the following content to it:

 

docker run --rm --entrypoint htpasswd registry:2 -bn user1 password > nginx.htpasswd
docker run --rm --entrypoint htpasswd registry:2 -bn user2 password >> nginx.htpasswd
docker run --rm --entrypoint htpasswd registry:2 -bn user3 password >> nginx.htpasswd
docker run --rm --entrypoint htpasswd registry:2 -bn user4 password >> nginx.htpasswd

 

Make the script executable and run it in the registry-front folder:

chmod +x add_user.sh
./add_user.sh

 

Now we are ready to start our registry. Go to the base folder with the docker-compose.yml and run:


docker-compose build
docker-compose up

 

And when everything is working, stop it (Ctrl-C) and start it again in the background:


docker-compose start


And now your Docker registry should be up and running.
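To check that it actually works end to end, you can log in and push a test image (a sketch; registry.example.com and the login are placeholders, use your own domain and one of the users from add_user.sh):

# log in with one of the htpasswd users, then push something small
docker login registry.example.com
docker pull busybox
docker tag busybox registry.example.com/busybox
docker push registry.example.com/busybox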


MaxScale SQL scaling with MariaDB Cluster on CentOS in Docker

Scaling SQL servers has now become easy with MariaDB MaxScale. Here I use it to connect to my MariaDB cluster and set up two new services: one is a load balancer and one is a read/write splitter.

1. First prep your MariaDB servers with a user for MaxScale:

CREATE user 'maxscale'@'%' identified by 'maxscaleW222';
GRANT SELECT ON mysql.user TO 'maxscale'@'%';
GRANT SELECT ON mysql.db TO 'maxscale'@'%';
GRANT SHOW DATABASES ON *.* TO 'maxscale'@'%';
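You can quickly check that the user and grants work from the host where MaxScale will run (a sketch; sql1 is a placeholder for one of your MariaDB nodes):

# should list the users without an access-denied error
mysql -h sql1 -u maxscale -pmaxscaleW222 -e "SELECT user, host FROM mysql.user;"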

 

2. Install MaxScale on your CentOS host

 

echo -e "[mariadb] \nname = MariaDB \nbaseurl = http://yum.mariadb.org/10.1/centos7-amd64 \ngpgkey=https://yum.mariadb.org/RPM-GPG-KEY-MariaDB \ngpgcheck=1 \n "  >  /etc/yum.repos.d/MariaDB.repo

yum install MariaDB-server MariaDB-devel -y

rpm -i http://downloads.mariadb.com/enterprise/64mr-1jgt/generate/10.1/mariadb-enterprise-repository.rpm

yum install maxscale -y

 

3. Set up your MaxScale config in /etc/maxscale.cnf:

 

[maxscale]
threads=4
 
[Galera Monitor]
type=monitor
module=galeramon
servers=sql1,sql2,sql3
user=maxscale
passwd=maxscaleW222
monitor_interval=10000
disable_master_failback=1
 
[qla]
type=filter
module=qlafilter
options=/tmp/QueryLog
 
[fetch]
type=filter
module=regexfilter
match=fetch
replace=select
 
[RW]
type=service
router=readwritesplit
servers=sql1,sql2,sql3
user=maxscale
passwd=maxscaleW222
max_slave_connections=100%
router_options=slave_selection_criteria=LEAST_CURRENT_OPERATIONS
 
[RR]
type=service
router=readconnroute
router_options=synced
servers=sql1,sql2,sql3
user=maxscale
passwd=maxscaleW222
 
[Debug Interface]
type=service
router=debugcli

[CLI]
type=service
router=cli
 
[RWlistener]
type=listener
service=RW
protocol=MySQLClient
port=3307
 
[RRlistener]
type=listener
service=RR
protocol=MySQLClient
port=3308
 
[Debug Listener]
type=listener
service=Debug Interface
protocol=telnetd
address=127.0.0.1
port=4442
 
[CLI Listener]
type=listener
service=CLI
protocol=maxscaled
address=127.0.0.1
port=6603
 
 
[sql1]
type=server
address=mariadb-cluster-master
port=3306
protocol=MySQLBackend
 
[sql2]
type=server
address=mariadb-cluster-slave_1
port=3306
protocol=MySQLBackend
 
[sql3]
type=server
address=mariadb-cluster-slave_2
port=3306
protocol=MySQLBackend

The server names here match the names from my docker-compose file and my post about the MariaDB cluster setup.

So you need to replace mariadb-cluster-slave_1 and so on with your own MariaDB nodes.

 

4. Dockerfile and Docker-compose

Dockerfile

FROM fareoffice/base

MAINTAINER Fareoffice

LABEL name="Mattias Hemmingsson MaxScale Server"
LABEL vendor="Lifeandshell"

RUN echo -e "[mariadb] \nname = MariaDB \nbaseurl = http://yum.mariadb.org/10.1/centos7-amd64 \ngpgkey=https://yum.mariadb.org/RPM-GPG-KEY-MariaDB \ngpgcheck=1 \n " > /etc/yum.repos.d/MariaDB.repo

RUN yum install MariaDB-server MariaDB-devel -y
RUN yum -y install make gcc gcc-c++ ncurses-devel bison glibc-devel openssl-devel libaio libaio-devel telnet

#Install maxscale
RUN rpm -i http://downloads.mariadb.com/enterprise/64mr-1jgt/generate/10.1/mariadb-enterprise-repository.rpm
ADD config/maxscale.cnf /etc/maxscale.cnf
RUN yum install maxscale -y

CMD maxscale -d

docker-compose.yml

 

maxscale:
 build: maxscale/
 links:
 - mariadb-cluster-master
 - mariadb-cluster-slave
mariadb-cluster-master:
 build: mariadb-cluster-master/
mariadb-cluster-slave:
 build: mariadb-cluster-slave/
 links:
 - mariadb-cluster-master
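With these files in place, the stack is built and started the usual way (a sketch; depending on how many slave entries you keep in maxscale.cnf you may also need to scale the slave service so every name resolves):

docker-compose build
docker-compose up -d
# optional: add more slave containers (the names become mariadb-cluster-slave_1, _2, ...)
docker-compose scale mariadb-cluster-slave=2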

To verify that your MaxScale is working, run the command:

 

maxadmin (mariadb is the default password)

MaxScale> show servers
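You can also connect straight through the two listeners to see the routing (a sketch; maxscale-host is wherever MaxScale runs and appuser is a placeholder for a normal database user that exists on the cluster):

# read/write split service on port 3307
mysql -h maxscale-host -P 3307 -u appuser -p -e "SELECT @@hostname;"
# round-robin read service on port 3308
mysql -h maxscale-host -P 3308 -u appuser -p -e "SELECT @@hostname;"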

MariaDB cluster with Dynamic Nodes on CentOS 7 in Docker

So running SQL in Docker is a big question right now. To do some testing I have set up two MariaDB cluster Docker containers. The first one is the MariaDB cluster master, which brings up a master MariaDB SQL node.

The second one is the MariaDB cluster slave. This container connects to the master and rsyncs the database over to the slave. Once the database has been rsynced over, it starts the SQL server and can process SQL data.

You can spin up multiple nodes of the MariaDB slave and they will connect and join the cluster.

 

NOTE: When running in Docker I don't mount any local disk, so ALL DATA will be lost if you redeploy the cluster. And the slave copies the master database over rsync to the slave, so if your database is big this is not a good idea.

 

 

 

1. Installing packages and config (run on all nodes, master and slave)

echo -e "[mariadb] \nname = MariaDB \nbaseurl = http://yum.mariadb.org/10.1/centos7-amd64 \ngpgkey=https://yum.mariadb.org/RPM-GPG-KEY-MariaDB \ngpgcheck=1 \n "  >  /etc/yum.repos.d/MariaDB.repo
yum install MariaDB-server MariaDB-client -y
yum install which rsync -y
vi /etc/my.cnf.d/server.cnf

And add the following content

 

#
# These groups are read by MariaDB server.
# Use it for options that only the server (but not clients) should see
#
# See the examples of server my.cnf files in /usr/share/mysql/
#
# this is read by the standalone daemon and embedded servers
[server]
# this is only for the mysqld standalone daemon
[mysqld]
#
# * Galera-related settings
#
[galera]
query_cache_size=0
binlog_format=ROW
default-storage-engine=innodb
innodb_autoinc_lock_mode=2
query_cache_type=0
bind-address=0.0.0.0
# Galera Provider Configuration
wsrep_on=ON
#wsrep_provider=
binlog_format=row
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
#wsrep_provider_options="gcache.size=32G"
# Galera Cluster Configuration
wsrep_cluster_name="docker_cluster"
wsrep_cluster_address="gcomm://cluster_master"
# Galera Synchronization Configuration
wsrep_sst_method=rsync

[embedded]
[mariadb]
[mariadb-10.1]

 

Set up the nodes so they know the master:

vi /etc/hosts
ip_to_master cluster_master

 

2. Start the master node

su mysql -s/bin/bash -c "/usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib64/mysql/plugin --user=mysql --wsrep-new-cluster"

Or

/etc/init.d/mysql bootstrap

 

3. Start the slave

su mysql -s/bin/bash -c "/usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib64/mysql/plugin --user=mysql"

Or

/etc/init.d/mysql start

Rerun this on as many MariaDB slaves as you want to run in your cluster.
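To confirm that a node really joined, check the Galera cluster size on any node (a sketch, assuming the passwordless root login of a fresh install):

# should report 1 + the number of slaves you have started
mysql -u root -e "SHOW STATUS LIKE 'wsrep_cluster_size';"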

 

 

4. Get it into docker

Here are the dockerfiles that build this setup for you

 

MariaDB-cluster-Master

 

FROM fareoffice/base
MAINTAINER Fareoffice
#
#
# This will start the first node and bootstrap the cluster
# You always need ONE cluster node started this way
#
# docker run -i -t --name cluster_master mariadb-cluster-master
#
# Give it the name cluster_master so that the slave cluster can add to that name

LABEL name="Fareoffice Mariadb Cluster Master tester Server"
LABEL vendor="System Operations"

RUN echo -e "[mariadb] \nname = MariaDB \nbaseurl = http://yum.mariadb.org/10.1/centos7-amd64 \ngpgkey=https://yum.mariadb.org/RPM-GPG-KEY-MariaDB \ngpgcheck=1 \n " > /etc/yum.repos.d/MariaDB.repo

RUN yum install MariaDB-server MariaDB-client -y
RUN yum install which -y

ADD config/server.cnf /etc/my.cnf.d/
RUN chmod 644 /etc/my.cnf.d/server.cnf

#Starting the mysql as mysql user and starting the cluster
CMD su mysql -s/bin/bash -c "/usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib64/mysql/plugin --user=mysql --wsrep-new-cluster"

 

MariaDB-cluster-Slave

 

FROM fareoffice/base

MAINTAINER Fareoffice

#
# This will bring up MariaDB cluster nodes.
# Every new node will join the cluster and get the db rsynced over (keep data small)
# The node will look for the host cluster_master to connect to, get cluster info from and sync
#
#
# docker run -it --link cluster_master:cluster_master mariadb-cluster-slave
#
#

LABEL name="Fareoffice Mariadb Cluster Slave Server"
LABEL vendor="System Operations"

RUN echo -e "[mariadb] \nname = MariaDB \nbaseurl = http://yum.mariadb.org/10.1/centos7-amd64 \ngpgkey=https://yum.mariadb.org/RPM-GPG-KEY-MariaDB \ngpgcheck=1 \n " > /etc/yum.repos.d/MariaDB.repo

RUN yum install MariaDB-server MariaDB-client -y
RUN yum install which -y

ADD config/server.cnf /etc/my.cnf.d/
RUN chmod 644 /etc/my.cnf.d/server.cnf

CMD su mysql -s/bin/bash -c "/usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib64/mysql/plugin --user=mysql "

When starting, check the startup commands in the comments in the Dockerfiles. For this to work you need to start the master with the name cluster_master and link the slave to the master container.
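Put together, a test run could look like this (a sketch based on the comments above; the image names assume you built them as mariadb-cluster-master and mariadb-cluster-slave, and I run the containers detached instead of with -i -t):

# start the master and bootstrap the cluster
docker run -d --name cluster_master mariadb-cluster-master
# start as many slaves as you like; each one links to the master and joins the cluster
docker run -d --link cluster_master:cluster_master mariadb-cluster-slave
docker run -d --link cluster_master:cluster_master mariadb-cluster-slave
# verify the cluster size from inside the master
docker exec cluster_master mysql -u root -e "SHOW STATUS LIKE 'wsrep_cluster_size';"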