Terraform has been successfully initialized!


You may now begin working with Terraform. Try running "terraform plan" to see

any changes that are required for your infrastructure. All Terraform commands

should now work.


If you ever set or change modules or backend configuration for Terraform,

rerun this command to reinitialize your working directory. If you forget, other

commands will detect it and remind you to do so if necessary.

I'll add a virtual machine:

essh@kubernetes-master:~/node-cluster$ cat main.tf
provider "google" {
  credentials = "${file("kubernetes_key.json")}"
  project     = "node-cluster-243923"
  region      = "europe-north1"
}

resource "google_compute_instance" "cluster" {
  name         = "cluster"
  zone         = "europe-north1-a"
  machine_type = "f1-micro"

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-9"
    }
  }

  network_interface {
    network = "default"
    access_config {}
  }
}

essh@kubernetes-master:~/node-cluster$ sudo ./terraform apply


An execution plan has been generated and is shown below.

Resource actions are indicated with the following symbols:

+ create


Terraform will perform the following actions:


# google_compute_instance.cluster will be created

+ resource "google_compute_instance" "cluster" {

+ can_ip_forward = false

+ cpu_platform = (known after apply)

+ deletion_protection = false

+ guest_accelerator = (known after apply)

+ id = (known after apply)

+ instance_id = (known after apply)

+ label_fingerprint = (known after apply)

+ machine_type = "f1-micro"

+ metadata_fingerprint = (known after apply)

+ name = "cluster"

+ project = (known after apply)

+ self_link = (known after apply)

+ tags_fingerprint = (known after apply)

+ zone = "europe-north1-a"


+ boot_disk {

+ auto_delete = true

+ device_name = (known after apply)

+ disk_encryption_key_sha256 = (known after apply)

+ source = (known after apply)


+ initialize_params {

+ image = "debian-cloud/debian-9"

+ size = (known after apply)

+ type = (known after apply)

}

}


+ network_interface {

+ address = (known after apply)

+ name = (known after apply)

+ network = "default"

+ network_ip = (known after apply)

+ subnetwork = (known after apply)

+ subnetwork_project = (known after apply)


+ access_config {

+ assigned_nat_ip = (known after apply)

+ nat_ip = (known after apply)

+ network_tier = (known after apply)

}

}


+ scheduling {

+ automatic_restart = (known after apply)

+ on_host_maintenance = (known after apply)

+ preemptible = (known after apply)


+ node_affinities {

+ key = (known after apply)

+ operator = (known after apply)

+ values = (known after apply)

}

}

}


Plan: 1 to add, 0 to change, 0 to destroy.


Do you want to perform these actions?

Terraform will perform the actions described above.

Only 'yes' will be accepted to approve.


Enter a value: yes


google_compute_instance.cluster: Creating

google_compute_instance.cluster: Still creating [10s elapsed]

google_compute_instance.cluster: Creation complete after 11s [id=cluster]


Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

I'll add a public static IP address and an SSH key to the node:

essh@kubernetes-master:~/node-cluster$ ssh-keygen -f node-cluster

Generating public/private rsa key pair.

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in node-cluster.

Your public key has been saved in node-cluster.pub.

The key fingerprint is:

SHA256:vUhDe7FOzykE5BSLOIhE7Xt9o+AwgM4ZKOCW4nsLG58 essh@kubernetes-master

The key's randomart image is:

+---[RSA 2048]----+

|.o. +. |

|o. o . = . |

|* + o . = . |

|=* . . . + o |

|B + . . S * |

| = + o o X + . |

| o. = . + = + |

| .= . . |

| ..E. |

+----[SHA256]-----+

essh@kubernetes-master:~/node-cluster$ ls node-cluster.pub

node-cluster.pub

essh@kubernetes-master:~/node-cluster$ cat main.tf
provider "google" {
  credentials = "${file("kubernetes_key.json")}"
  project     = "node-cluster-243923"
  region      = "europe-north1"
}

resource "google_compute_address" "static-ip-address" {
  name = "static-ip-address"
}

resource "google_compute_instance" "cluster" {
  name         = "cluster"
  zone         = "europe-north1-a"
  machine_type = "f1-micro"

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-9"
    }
  }

  # Format: "<user>:<public key>"
  metadata = {
    ssh-keys = "essh:${file("./node-cluster.pub")}"
  }

  network_interface {
    network = "default"
    access_config {
      nat_ip = "${google_compute_address.static-ip-address.address}"
    }
  }
}

essh@kubernetes-master:~/node-cluster$ sudo ./terraform apply
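So that the address does not have to be looked up in the Cloud Console, the configuration could also declare an output; a small sketch (the output name static_ip is arbitrary):

output "static_ip" {
  # Printed after apply; can be queried later with: terraform output static_ip
  value = "${google_compute_address.static-ip-address.address}"
}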

Let's check the SSH connection to the server:

essh@kubernetes-master:~/node-cluster$ ssh -i ./node-cluster essh@35.228.82.222

The authenticity of host '35.228.82.222 (35.228.82.222)' can't be established.

ECDSA key fingerprint is SHA256:o7ykujZp46IF+eu7SaIwXOlRRApiTY1YtXQzsGwO18A.

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added '35.228.82.222' (ECDSA) to the list of known hosts.

Linux cluster 4.9.0-9-amd64 #1 SMP Debian 4.9.168-1+deb9u2 (2019-05-13) x86_64


The programs included with the Debian GNU/Linux system are free software;

the exact distribution terms for each program are described in the

individual files in /usr/share/doc/*/copyright.


Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent

permitted by applicable law.

essh@cluster:~$ ls

essh@cluster:~$ exit

logout

Connection to 35.228.82.222 closed.

Let's install the packages:

essh@kubernetes-master:~/node-cluster$ curl https://sdk.cloud.google.com | bash

essh@kubernetes-master:~/node-cluster$ exec -l $SHELL

essh@kubernetes-master:~/node-cluster$ gcloud init

Select the project:

You are logged in as: [esschtolts@gmail.com].


Pick cloud project to use:

[1] agile-aleph-203917

[2] node-cluster-243923

[3] essch

[4] Create a new project

Please enter numeric choice or text value (must exactly match list

item):

Please enter a value between 1 and 4, or a value present in the list: 2


Your current project has been set to: [node-cluster-243923].

Select the zone:

[50] europe-north1-a

Did not print [12] options.

Too many options [62]. Enter "list" at prompt to print choices fully.

Please enter numeric choice or text value (must exactly match list

item):

Please enter a value between 1 and 62, or a value present in the list: 50

essh@kubernetes-master:~/node-cluster$ PROJECT_ID="node-cluster-243923"
essh@kubernetes-master:~/node-cluster$ echo $PROJECT_ID
node-cluster-243923
essh@kubernetes-master:~/node-cluster$ export GOOGLE_APPLICATION_CREDENTIALS=$HOME/node-cluster/kubernetes_key.json
essh@kubernetes-master:~/node-cluster$ sudo docker-machine create --driver google --google-project $PROJECT_ID vm01
sudo does not inherit the exported variable, so it has to be passed on the same command line:
sudo GOOGLE_APPLICATION_CREDENTIALS=$HOME/node-cluster/kubernetes_key.json docker-machine create --driver google --google-project $PROJECT_ID vm01

// https://docs.docker.com/machine/drivers/gce/

// https://github.com/docker/machine/issues/4722

essh@kubernetes-master:~/node-cluster$ gcloud config list

[compute]

region = europe-north1

zone = europe-north1-a

[core]

account = esschtolts@gmail.com

disable_usage_reporting = False

project = node-cluster-243923


Your active configuration is: [default]


Let's add copying a file and running a script:

essh@kubernetes-master:~/node-cluster$ cat main.tf
provider "google" {
  credentials = "${file("kubernetes_key.json")}"
  project     = "node-cluster-243923"
  region      = "europe-north1"
}

resource "google_compute_address" "static-ip-address" {
  name = "static-ip-address"
}

resource "google_compute_instance" "cluster" {
  name         = "cluster"
  zone         = "europe-north1-a"
  machine_type = "f1-micro"

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-9"
    }
  }

  metadata = {
    ssh-keys = "essh:${file("./node-cluster.pub")}"
  }

  network_interface {
    network = "default"
    access_config {
      nat_ip = "${google_compute_address.static-ip-address.address}"
    }
  }
}

resource "null_resource" "cluster" {
  # Re-run the provisioners whenever the instance is recreated.
  triggers = {
    cluster_instance_ids = "${join(",", google_compute_instance.cluster.*.id)}"
  }

  connection {
    host        = "${google_compute_address.static-ip-address.address}"
    type        = "ssh"
    user        = "essh"
    timeout     = "2m"
    private_key = "${file("~/node-cluster/node-cluster")}"
    # agent = "false"
  }

  provisioner "file" {
    source      = "client.js"
    destination = "~/client.js"
  }

  provisioner "remote-exec" {
    inline = [
      "cd ~ && echo 1 > test.txt"
    ]
  }
}

essh@kubernetes-master:~/node-cluster$ sudo ./terraform apply

google_compute_address.static-ip-address: Creating

google_compute_address.static-ip-address: Creation complete after 5s [id=node-cluster-243923/europe-north1/static-ip-address]

google_compute_instance.cluster: Creating

google_compute_instance.cluster: Still creating [10s elapsed]

google_compute_instance.cluster: Creation complete after 12s [id=cluster]

null_resource.cluster: Creating

null_resource.cluster: Provisioning with 'file'

null_resource.cluster: Provisioning with 'remote-exec'

null_resource.cluster (remote-exec): Connecting to remote host via SSH

null_resource.cluster (remote-exec): Host: 35.228.82.222

null_resource.cluster (remote-exec): User: essh

null_resource.cluster (remote-exec): Password: false

null_resource.cluster (remote-exec): Private key: true

null_resource.cluster (remote-exec): Certificate: false

null_resource.cluster (remote-exec): SSH Agent: false

null_resource.cluster (remote-exec): Checking Host Key: false

null_resource.cluster (remote-exec): Connected!

null_resource.cluster: Creation complete after 7s [id=816586071607403364]


Apply complete! Resources: 3 added, 0 changed, 0 destroyed.


esschtolts@cluster:~$ ls /home/essh/

client.js test.txt


Destroy the created infrastructure:

essh@kubernetes-master:~/node-cluster$ sudo ./terraform destroy
[sudo] password for essh:

google_compute_address.static-ip-address: Refreshing state [id=node-cluster-243923/europe-north1/static-ip-address]

google_compute_instance.cluster: Refreshing state [id=cluster]

null_resource.cluster: Refreshing state [id=816586071607403364]


Enter a value: yes


null_resource.cluster: Destroying [id=816586071607403364]

null_resource.cluster: Destruction complete after 0s

google_compute_instance.cluster: Destroying [id=cluster]

google_compute_instance.cluster: Still destroying [id=cluster, 10s elapsed]

google_compute_instance.cluster: Still destroying [id=cluster, 20s elapsed]

google_compute_instance.cluster: Destruction complete after 27s

google_compute_address.static-ip-address: Destroying [id=node-cluster-243923/europe-north1/static-ip-address]

google_compute_address.static-ip-address: Destruction complete after 8s

To deploy the whole project, it can be added to a repository, and we will load it onto the virtual machine by copying an install script to that machine and then running it:
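A minimal sketch of this approach, reusing the null_resource pattern from above; install.sh here is a hypothetical script that clones the repository and starts the application:

resource "null_resource" "deploy" {
  triggers = {
    cluster_instance_ids = "${join(",", google_compute_instance.cluster.*.id)}"
  }

  connection {
    host        = "${google_compute_address.static-ip-address.address}"
    type        = "ssh"
    user        = "essh"
    private_key = "${file("~/node-cluster/node-cluster")}"
  }

  # Copy the hypothetical install script to the machine.
  provisioner "file" {
    source      = "install.sh"
    destination = "~/install.sh"
  }

  # Run it; the script itself would clone the repository and launch the app.
  provisioner "remote-exec" {
    inline = [
      "chmod +x ~/install.sh",
      "~/install.sh"
    ]
  }
}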

Moving on to Kubernetes

In the minimal variant, creating a three-node cluster looks roughly like this:

essh@kubernetes-master:~/node-cluster/Kubernetes$ cat main.tf
provider "google" {
  credentials = "${file("../kubernetes_key.json")}"
  project     = "node-cluster-243923"
  region      = "europe-north1"
}

resource "google_container_cluster" "node-ks" {
  name               = "node-ks"
  location           = "europe-north1-a"
  initial_node_count = 3
}

essh@kubernetes-master:~/node-cluster/Kubernetes$ sudo ../terraform init
essh@kubernetes-master:~/node-cluster/Kubernetes$ sudo ../terraform apply

The cluster was created in 2:15. After I added two more zones, europe-north1-b and europe-north1-c, alongside europe-north1-a and set the number of instances created per zone to one, the cluster was created in 3:13, because for higher availability the nodes were created in different data centers: europe-north1-a, europe-north1-b and europe-north1-c:

provider "google" {

credentials = "${file("../kubernetes_key.json")}"

project = "node-cluster-243923"

region = "europe-north1"

}


resource "google_container_cluster" "node-ks" {

name = "node-ks"

location = "europe-north1-a"

node_locations = ["europe-north1-b", "europe-north1-c"]

initial_node_count = 1

}

Now let's split our cluster in two: a control cluster with Kubernetes and a cluster for our PODs. All clusters will be distributed across three data centers. The cluster for our PODs can autoscale under load up to 2 nodes per zone (from three to six in total):

essh@kubernetes-master:~/node-cluster/Kubernetes$ cat main.tf
provider "google" {
  credentials = "${file("../kubernetes_key.json")}"
  project     = "node-cluster-243923"
  region      = "europe-north1"
}

resource "google_container_cluster" "node-ks" {
  name               = "node-ks"
  location           = "europe-north1-a"
  node_locations     = ["europe-north1-b", "europe-north1-c"]
  initial_node_count = 1
}

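For the per-zone autoscaling described above, a hedged sketch with a separate node pool might look like this (the pool name is illustrative; the min and max counts apply per zone):

resource "google_container_node_pool" "node-ks-pool" {
  name       = "node-ks-pool"
  location   = "europe-north1-a"
  cluster    = "${google_container_cluster.node-ks.name}"
  node_count = 1

  autoscaling {
    min_node_count = 1 # one node per zone at rest
    max_node_count = 2 # up to two nodes per zone under load
  }
}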