

base=https://github.com/docker/machine/releases/download/v0.14.0 &&

curl -L $base/docker-machine-$(uname -s)-$(uname -m) > /usr/local/bin/docker-machine &&

chmod +x /usr/local/bin/docker-machine
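A quick sanity check that the binary is installed and executable (assuming /usr/local/bin is in your PATH):

docker-machine version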

Group of related applications

We already have several different applications, say NGINX, MySQL and our own application. We have isolated them in separate containers, so now they do not conflict with each other; for NGINX and MySQL we did not waste time and effort writing our own configuration, but simply downloaded them: docker run mysql, docker run nginx, and for our application docker build .; docker run myapp -p 80:80 bash. As you can see, it all looks very simple, but there are two issues: control and interconnection.
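Put together, the whole group can be brought up with a few commands. A sketch (the port choices are illustrative assumptions; MYSQL_ROOT_PASSWORD is required by the stock mysql image, and the application is published on a different host port so it does not clash with NGINX):

# pull and start the stock images, no custom configuration needed
docker run -d --name mysql -e MYSQL_ROOT_PASSWORD=secret mysql
docker run -d --name nginx -p 80:80 nginx
# build our own application image from the Dockerfile in the current directory
docker build -t myapp .
docker run -d --name myapp -p 8080:80 myapp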

To demonstrate control, we will take the container of our application and implement two operations: start and re-creation (rebuild). For a manual start, when we know that the container has already been created but is simply stopped, it is enough to execute docker start myapp, but for automatic mode this is not enough: we need a script that takes into account whether the container already exists and whether there is an image for it:

# start the container if it exists (even stopped), otherwise build and run it
if docker ps -a | grep -q myapp
then
    docker start myapp
else
    # build the image only if it is not present yet
    if ! docker images | grep -q myimage
    then
        docker build -t myimage .
    fi
    docker run -d --name myapp -p 80:80 myimage bash
fi

… And to re-create it, you first need to delete the container, if it exists:

# remove the old container, rebuild the image if needed, and run a fresh one
if docker ps -a | grep -q myapp
then
    docker rm -f myapp
fi
if ! docker images | grep -q myimage
then
    docker build -t myimage .
fi
docker run -d --name myapp -p 80:80 myimage bash

… Clearly, common parameters such as the image name and the container name should be moved into variables, and the script should check that the Dockerfile exists and is valid before deleting the container, and much more. To understand the real scale, without going into container interaction, cloning (scaling) of these groups and the like, I will just mention that a docker run command can easily exceed one or two dozen lines: for example, a dozen forwarded ports, mounted folders, memory and processor limits, links to other containers, and a few more specific parameters. Yes, this is not good, and in this form it is difficult to split an application into many containers, due to the lack of a map of container interactions. But the question arises: isn't that a lot of work just to give the user the ability to start or rebuild a container? Often the system administrator's answer boils down to granting access only to a select few. But even here there is a solution: Docker Compose, a tool for working with groups of containers.
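Before looking at Compose, here is what such a parametrized rebuild script might look like (a sketch; the variable names and the Dockerfile check are illustrative, not the book's own script):

#!/bin/sh
IMAGE=myimage
CONTAINER=myapp
PORTS="-p 80:80"

# refuse to rebuild if there is no Dockerfile in the current directory
if [ ! -f Dockerfile ]; then
    echo "Dockerfile not found" >&2
    exit 1
fi

docker rm -f "$CONTAINER" 2>/dev/null
docker build -t "$IMAGE" .
docker run -d --name "$CONTAINER" $PORTS "$IMAGE"

In Compose, all of this collapses into a declarative config: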

# docker-compose.yml
version: "3"
services:
  myapp:
    container_name: myapp
    image: myimage
    ports:
      - 80:80
    build: .

… To start it: docker-compose up -d, and to rebuild: docker-compose down; docker-compose up -d. Moreover, when the configuration changes and a complete rebuild is not needed, it will simply be updated.
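In command form (the --build flag is a standard Compose option that forces the image to be rebuilt before starting, shown here as a convenient shortcut):

# first start (builds the image if it does not exist yet)
docker-compose up -d
# full rebuild
docker-compose down
docker-compose up -d --build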

Now that we have simplified the management of a single container, let's work with a group. Here, only the config itself changes:

# docker-compose.yml
version: "3"
services:
  mysql:
    image: mysql
  nginx:
    image: nginx
    ports:
      - 80:80
  myapp:
    container_name: myapp
    build: .
    depends_on:
      - mysql
    image: myimage
    links:
      - mysql:db
      - nginx:nginx

… Here we see the whole picture at once: the containers are connected by a single network, in which the application can reach mysql and NGINX via the db and nginx hosts respectively, and the myapp container will be created only after the mysql database has been brought up, even if that takes some time.
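A caveat: depends_on by itself only orders container startup, it does not wait for MySQL to be ready to accept connections. A common workaround is a healthcheck combined with a depends_on condition, a sketch (supported in Compose file format 2.1 and in the newer Compose specification; the mysqladmin probe is an assumption about the image):

services:
  mysql:
    image: mysql
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      interval: 5s
      retries: 10
  myapp:
    build: .
    depends_on:
      mysql:
        condition: service_healthy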

Service Discovery

With the growth of a cluster, the probability of node failures increases and manually detecting what has happened becomes more complicated; Service Discovery systems are designed to automate the detection of newly appearing services and their disappearance. But for the cluster to be able to detect its state, given that the system is decentralized, the nodes must be able to exchange messages with each other and elect a leader; examples are Consul, etcd and ZooKeeper. We will consider Consul based on the following features: the whole program is one file, it is extremely easy to use and configure, it has a high-level interface (ZooKeeper does not have one; it is believed that over time third-party applications implementing it will appear), it is written in a language undemanding of machine resources (Consul is in Go, ZooKeeper in Java), and we will neglect its lack of support in other systems, such as, for example, ClickHouse (which supports ZooKeeper by default).

Let's check the distribution of information between the nodes using a distributed key-value store: if we add records on one node, they should spread to the other nodes, and there should be no hard-coded master node. Since Consul consists of a single executable file, download it from the official website at https://www.consul.io/downloads.html on each node:

wget https://releases.hashicorp.com/consul/1.3.0/consul_1.3.0_linux_amd64.zip -O consul.zip

unzip consul.zip

rm -f consul.zip

Now we need to start one node, for now as master, with consul -server -ui, and the others as slaves with consul -server -ui and consul -server -ui. After that, we will stop the Consul running in master mode and launch it as an equal; as a result, the Consul nodes will re-elect a temporary leader, and in the event of its failure they will re-elect again.
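For reference, a three-node server cluster is usually bootstrapped with the agent subcommand and explicit join addresses; a minimal sketch (the IP addresses are placeholders, and -bootstrap-expect tells the servers how many peers to wait for before electing a leader):

# on the first node
consul agent -server -ui -bootstrap-expect=3 -bind=10.0.0.1 -data-dir=/tmp/consul
# on the other nodes, pointing them at the first one
consul agent -server -ui -bootstrap-expect=3 -bind=10.0.0.2 -retry-join=10.0.0.1 -data-dir=/tmp/consul
consul agent -server -ui -bootstrap-expect=3 -bind=10.0.0.3 -retry-join=10.0.0.1 -data-dir=/tmp/consul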

Let's check the operation of our cluster:

consul members

Now let's check the distribution of information in our storage:

curl -X PUT -d 'value1' .....:8500/v1/kv/group1/key1

curl -s .....:8500/v1/kv/group1/key1

curl -s .....:8500/v1/kv/group1/key1

curl -s .....:8500/v1/kv/group1/key1
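Note that the KV API returns values Base64-encoded (we will see this below in the Value field); assuming jq and base64 are available, a value can be decoded in one line:

curl -s .....:8500/v1/kv/group1/key1 | jq -r '.[0].Value' | base64 -d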

Let's set up service monitoring; for more details, see the documentation at https://www.consul.io/docs/agent/options.html#telemetry and, for example, https://medium.com/southbridge/monitoring-consul-with-statsd-exporter-and-prometheus-bad8bee3961b
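Telemetry is configured in the agent configuration file; a minimal sketch that ships metrics to a local StatsD exporter (the address is an assumption matching the article above, and the file would be passed via -config-file):

{
  "telemetry": {
    "statsd_address": "127.0.0.1:9125"
  }
}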

In order not to configure all of this ourselves, we will use a container in development mode, with the IP address 172.17.0.2 already configured:

essh@kubernetes-master:~$ mkdir consul && cd $_
essh@kubernetes-master:~/consul$ docker run -d --name=dev-consul -e CONSUL_BIND_INTERFACE=eth0 consul
Unable to find image 'consul:latest' locally
latest: Pulling from library/consul
e7c96db7181b: Pull complete
3404d2df15cb: Pull complete
1b2797650ac6: Pull complete
42eaf145982e: Pull complete
cef844389e8c: Pull complete
bc7449359c58: Pull complete
Digest: sha256:94cdbd83f24ec406da2b5d300a112c14cf1091bed8d6abd49609e6fe3c23f181
Status: Downloaded newer image for consul:latest
c6079f82500a41f878d2c513cf37d45ecadd3fc40998cd35020c604eb5f934a1
essh@kubernetes-master:~/consul$ docker inspect dev-consul | jq '.[] | .NetworkSettings.Networks.bridge.IPAddress'
"172.17.0.4"
essh@kubernetes-master:~/consul$ docker run -d --name=consul_follower_1 -e CONSUL_BIND_INTERFACE=eth0 consul agent -dev -join=172.17.0.4
8ec88680bc632bef93eb9607612ed7f7f539de9f305c22a7d5a23b9ddf8c4b3e
essh@kubernetes-master:~/consul$ docker run -d --name=consul_follower_2 -e CONSUL_BIND_INTERFACE=eth0 consul agent -dev -join=172.17.0.4
babd31d7c5640845003a221d725ce0a1ff83f9827f839781372b1fcc629009cb
essh@kubernetes-master:~/consul$ docker exec -t dev-consul consul members

Node          Address          Status  Type    Build  Protocol  DC   Segment
53cd8748f031  172.17.0.5:8301  left    server  1.6.1  2         dc1
8ec88680bc63  172.17.0.5:8301  alive   server  1.6.1  2         dc1
babd31d7c564  172.17.0.6:8301  alive   server  1.6.1  2         dc1

essh@kubernetes-master:~/consul$ curl -X PUT -d 'value1' 172.17.0.4:8500/v1/kv/group1/key1
true
essh@kubernetes-master:~/consul$ curl $(docker inspect dev-consul | jq -r '.[] | .NetworkSettings.Networks.bridge.IPAddress'):8500/v1/kv/group1/key1
[
  {
    "LockIndex": 0,
    "Key": "group1/key1",
    "Flags": 0,
    "Value": "dmFsdWUx",
    "CreateIndex": 277,
    "ModifyIndex": 277
  }
]
essh@kubernetes-master:~/consul$ firefox $(docker inspect dev-consul | jq -r '.[] | .NetworkSettings.Networks.bridge.IPAddress'):8500/ui

Along with determining the location of containers, it is necessary to provide authorization; key-value stores are used for this:

dockerd -H fd:// --cluster-store=consul://192.168.1.6:8500 --cluster-advertise=eth0:2376

* --cluster-store – where data about the keys can be obtained from;

* --cluster-advertise – where it can be saved.

docker network create --driver overlay --subnet 192.168.10.0/24 demo-network

docker network ls
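Once the overlay network exists, containers on any of the participating hosts can be attached to it; a sketch (the container name and images are illustrative):

# run a container attached to the overlay network
docker run -d --name web --network demo-network nginx
# containers on other hosts in the same network can reach it by name
docker run --rm --network demo-network alpine ping -c 1 web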

Simple clustering

In this article, we will not consider how to create a cluster manually, but will use two tools: Docker Swarm and Google Kubernetes, the most popular and most common solutions. Docker Swarm is simpler: it is part of Docker and therefore has the largest audience (subjectively), while Kubernetes provides much more capability, more tool integrations (for example, distributed storage for volumes), support in popular clouds, and scales more easily for large projects (greater abstraction, a component-based approach).

Let's consider what a cluster is and what good it will bring us. A cluster is a distributed structure that abstracts independent servers into one logical entity and automates the work of:

* moving containers (re-creating them) on other servers in the event of a server crash;

* evenly distributing containers across servers for fault tolerance;

* creating a container on a server with suitable free resources;

* re-deploying a container in case of its failure;

* providing a unified management interface from one point;

* performing operations taking into account the parameters of servers (for example, the size and type of disk) and the characteristics of containers specified by the administrator (for example, placing containers associated with a single mount point on the same server);

* unifying different servers, for example, those on different OSes, cloud and non-cloud.

We will now move on from Docker Swarm to Kubernetes. Both of these systems are orchestration systems and both work with Docker containers (Kubernetes also supports rkt and containerd), but the interaction between containers is fundamentally different because of an additional Kubernetes abstraction layer, the POD. Both Docker Swarm and Kubernetes manage containers based on IP addresses and distribute them across nodes, inside which everything works through localhost, proxied by a bridge; but unlike Docker Swarm, which exposes physical containers to the user, Kubernetes exposes logical ones, the PODs. A logical Kubernetes container (a POD) consists of physical containers whose networking takes place through their ports, so their ports must not be duplicated.
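As an illustration of a POD as a group of containers sharing one network namespace, here is a minimal sketch of a manifest (the names and images are illustrative assumptions; the two containers must listen on different ports because they share localhost):

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
spec:
  containers:
    - name: app
      image: myimage
      ports:
        - containerPort: 8080
    - name: nginx
      image: nginx
      ports:
        - containerPort: 80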

Both orchestration systems use an overlay network between host nodes to emulate the presence of the managed units in a single local network space. This type of network is logical: it uses ordinary TCP/IP networks for transport and is designed to emulate the presence of cluster nodes in a single network for managing the cluster and exchanging information between its nodes, while at the TCP/IP level the nodes may not be directly connectable. The point is that when a developer develops a cluster, he can describe a network for only one node, and when the cluster is deployed, several instances of it are created, their number can change dynamically, and one network cannot contain three nodes with one IP address and subnet (for example, 10.0.0.1); it would also be wrong to require the developer to specify IP addresses, since it is not known which addresses are free and how many will be required. This network takes over the tracking of the real IP addresses of nodes, which can be allocated randomly from the free ones and change as nodes in the cluster are re-created, and provides the ability to access them via container IDs / PODs. With this approach, the user refers to specific entities rather than to the dynamics of changing IP addresses.

Interaction is carried out through a balancer, which is not logically separated out in Docker Swarm, but in Kubernetes it is created as a separate entity so that a specific implementation can be selected, like other services. Such a balancer must be present in every cluster, and within the Kubernetes ecosystem it is called a Service. It can be declared either separately, as a Service, or together with a cluster description, for example, as a Deployment. The service can be reached by its IP address (see its description) or by its name, which is registered as a first-level domain in the built-in DNS server. For example, if the service name specified in the metadata is my_service, then the cluster can be accessed through it like this: curl my_service;. This is a fairly standard solution for when the components of a system, together with their IP addresses, change over time (are re-created, new ones are added, old ones are deleted): traffic is sent through a proxy server whose IP or DNS address remains constant for the external network, while the internal addresses can change, leaving the care of keeping them in sync to the proxy server.
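A minimal sketch of such a Service (illustrative; note that real Kubernetes object names must be valid DNS labels, so the name would have to be my-service rather than my_service, and the selector and ports here are assumptions):

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: myapp
  ports:
    - port: 80
      targetPort: 8080

Inside the cluster, it can then be reached as curl my-service, exactly as described above.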

Both orchestration systems use the Ingress overlay network to provide access to themselves from the external network through a balancer, which matches the internal network with the external one based on the Linux kernel's IP address mapping tables (iptables), separating them and allowing information to be exchanged even if identical IP addresses exist in the internal and external networks. And here, to maintain the connection between these potentially conflicting networks at the IP level, the overlay Ingress network is used. Kubernetes also provides the ability to create a logical entity, an Ingress controller, which allows configuring a LoadBalancer or NodePort service based on the content of the traffic at a level above HTTP, for example, routing based on address paths (an application router) or terminating TLS/HTTPS traffic, as GCP and AWS do.
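A sketch of an Ingress resource that routes by path (the host, path and service name are illustrative assumptions; the exact schema depends on the Kubernetes version):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: my-service
                port:
                  number: 80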

Kubernetes is the result of evolution through Google's internal projects: first Borg, then Omega; based on the experience gained from those experiments, a fairly scalable architecture emerged. Let's highlight the main types of components:

* POD – regular POD;

* ReplicaSet, Deployment – scalable PODs;

* DaemonSet – it is created in each cluster node;

* services (sorted in order of importance): ClusterIP (the default, the basis for the rest), NodePort (redirects ports open in the cluster, for each POD, to ports in the range 30000-32767, for accessing specific PODs from outside; see the sketch after this list), LoadBalancer (a NodePort with the ability to create a public IP address for Internet access in public clouds such as AWS and GCP), HostPort (opens on the host machine the ports corresponding to the container, that is, if port 9200 is open in the container, it will also be open on the host machine for forwarding traffic) and HostNetwork (the containers in the POD will live in the host's network space).
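For example, a NodePort variant of the service sketched earlier might look like this (illustrative; nodePort must fall into the 30000-32767 range mentioned above, and the chosen value is an assumption):

apiVersion: v1
kind: Service
metadata:
  name: my-service-nodeport
spec:
  type: NodePort
  selector:
    app: myapp
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30080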

The master contains at least: kube-apiserver, kube-scheduler and kube-controller-manager. The composition of a slave (worker) node:

* kubelet – checks the health of a system component (the node), creates and manages containers. It is located on each node, accesses the kube-apiserver and registers the node on which it is located;

* cAdvisor – node monitoring.

Let's say we have hosting and have created three AVS servers. Now we need to install Docker and Docker Machine on each server; how to do this was described above. Docker Machine itself is a virtual machine for Docker containers; we will use only its built-in driver, VirtualBox, so as not to install additional packages. Now, of the operations that must be performed on each server, it remains to create the Docker machines; the rest of the operations for setting up and creating containers can be performed from the master node, and they will be automatically launched on free nodes and redistributed when their number changes. So, let's start Docker Machine on the first node:
