
k8s installation notes

 

k8s master install: prerequisites

RAM: 2 GB
CPU: 2 cores
Disable swap

Installing Ubuntu on Hyper-V

(https://linuxhint.com/install_ubuntu_1804_lts_hyperv/)

New Virtual Machine => Next => Name: k8s-master => Generation 1 => 2048 MB => Default Switch
=> k8s-master.vhdx => Install an operating system from a bootable CD/DVD-ROM => iso => Done => CPU * 2
name: master
pwd: gg

Installing Docker on Ubuntu

sudo apt update
sudo apt install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu focal stable"
sudo apt update
apt-cache policy docker-ce
sudo apt install docker-ce
sudo systemctl status docker

k8s prerequisites

The tricky parts here are the firewall and swap: most books and docs only cover the swapoff command and don't mention that you also need to edit /etc/fstab to disable swap permanently.

# update
sudo apt update
sudo apt upgrade

# disable the firewall
sudo ufw disable
sudo ufw status

# turn off swap for the current session
sudo swapoff -a
sudo vim /etc/fstab

# to disable swap permanently,
# comment out this line in /etc/fstab:
#/swapfile

Installing k8s with kubeadm

Install k8s

sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl

sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg

echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list

sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

Installing zsh completion

Reference: the official docs.
If anything weird happens, it should take effect after you log out and back in.

source <(kubectl completion zsh)
echo 'alias k=kubectl' >>~/.zshrc
echo 'complete -F __start_kubectl k' >>~/.zshrc
source ~/.zshrc

Also remember to open ~/.zshrc in vim, find the plugins list, and add kubectl.

vim ~/.zshrc
#plugins=(
# git
# kubectl
#)

Creating a cluster with kubeadm

References: an online tutorial video
and the official docs

Running it directly throws an error; just add sudo.

kubeadm init --apiserver-advertise-address 10.1.25.123 --pod-network-cidr 10.244.0.0/16


#[init] Using Kubernetes version: v1.21.1
#[preflight] Running pre-flight checks
#error execution phase preflight: [preflight] Some fatal errors occurred:
# [ERROR IsPrivilegedUser]: user is not running as root
#[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
#To see the stack trace of this error execute with --v=5 or higher

kubeadm init takes about 1-2 minutes; once it finishes it dumps the message below.
Pay attention to this warning; if you've never installed k8s before it's thoroughly confusing.

detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
https://chengdol.github.io/2019/03/09/k8s-systemd-cgroup-driver/

Fixing the systemd cgroup driver issue above

docker info | grep Cgroup

#Cgroup Driver: cgroupfs

# add this to /etc/docker/daemon.json, then restart docker
sudo vim /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}

sudo service docker restart

The actual run:

sudo kubeadm init --apiserver-advertise-address 10.1.25.123 --pod-network-cidr 10.244.0.0/16 --token-ttl 0

[init] Using Kubernetes version: v1.21.1
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [test01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.1.25.123]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [test01 localhost] and IPs [10.1.25.123 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [test01 localhost] and IPs [10.1.25.123 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 24.505356 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.21" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node test01 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node test01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: coba61.otmz27gm3ztjlzuw
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.1.25.123:6443 --token coba61.otmz27xxxxxx \
--discovery-token-ca-cert-hash sha256:29ce1ab9fc6dfa017xxxxxxx

The important part: run these commands on the master machine.

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

export KUBECONFIG=/etc/kubernetes/admin.conf

#echo $KUBECONFIG

Next, set up networking. Here I use the common flannel add-on, though newer docs don't seem to use flannel anymore!?

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Then run this on the node machine; remember to use sudo here as well or it will throw an error. Note that this is on the node, not the master.

kubeadm join 10.1.25.123:6443 --token 45g0g0.avd3ynlzt8pxz9ag \
--discovery-token-ca-cert-hash sha256:4996f8aab643b1ac908d3bdd1a6bda7b5086f457db61bd62f8cef23e073e9aa6

[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

After the node joins, run the following command on the master to verify the node is OK. If it throws the error below, log out of ssh and back in, wait a while (joining a node takes time), or check whether any of the steps above were missed.

kubectl get nodes
#The connection to the server localhost:8080 was refused - did you specify the right host or port

If the status is NotReady, the network (CNI) probably has not been configured yet.
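A quick way to narrow it down (a minimal sketch; substitute the node name that kubectl get nodes shows):

# check the node conditions and whether the CNI / kube-system pods are up
kubectl describe node <node-name> | grep -A 8 Conditions
kubectl get pods -n kube-system -o wide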

Token issue: note that the join token is only valid for 24 hours by default, so it can expire, and then joining new nodes fails.
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/
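If the token has expired, a minimal sketch of regenerating it on the master (kubeadm can print a ready-to-paste join command):

kubeadm token list
kubeadm token create --print-join-command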

kubelet troubleshooting
If it's completely unfixable, just remove everything and reinstall; see (https://gist.github.com/meysam-mahmoodi/fc014053d984dcc5d1c0d6709773e199)
-/etc/default/kubelet
Installation reference:

cd /etc/systemd/system/kubelet.service.d
sudo vim 10-kubeadm.conf

# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env

EnvironmentFile="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd --runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice

# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/default/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS $KUBELET_CGROUP_ARGS

If you run into a pile of errors, the fastest fix is to reset every node:

sudo kubeadm reset

[reset] Reading configuration from the cluster...
[reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
[reset] Removing info for node "test01" from the ConfigMap "kubeadm-config" in the "kube-system" Namespace
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/etcd /var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.

If you just can't get the cgroup issue sorted out, remove everything:

kubeadm reset
sudo apt-get purge kubeadm kubectl kubelet kubernetes-cni kube*
sudo apt-get autoremove
sudo rm -rf ~/.kube

For token issues, see this post:
http://blog.51yip.com/cloud/2404.html

https://blog.johnwu.cc/article/kubernetes-nodes-notready.html
Two very important files:
/etc/systemd/system/kubelet.service.d/10-kubeadm.conf
/var/lib/kubelet/kubeadm-flags.env

https://juejin.cn/post/6844903689572843534
/lib/systemd/system/kubelet.service

CGROUP settings

Reference:

# method 1: change the cgroup driver with vim
vim /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS $KUBELET_CGROUP_ARGS

# method 2: change the cgroup driver with vim
vim /var/lib/kubelet/kubeadm-flags.env
#KUBELET_KUBEADM_ARGS="--network-plugin=cni --pod-infra-container-image=k8s.gcr.io/pause:3.4.1"
KUBELET_KUBEADM_ARGS="--cgroup-driver=systemd --pod-infra-container-image=k8s.gcr.io/pause:3.4.1"


# reload
sudo systemctl daemon-reload
sudo systemctl restart kubelet

# check the result
sudo systemctl status kubelet -l
ps aux | grep kubelet

Full install walkthrough (cn)

Disable swap

Noting this separately because it differs slightly from what I wrote earlier.
Reference: https://graspingtech.com/disable-swap-ubuntu/

sudo swapoff -a
sudo vim /etc/fstab

# comment out this line
#/swap.img

Installing k8s (Aliyun mirror)

Reference: a Chinese blog post.

sudo vim /etc/apt/sources.list
# append at the end
deb https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial main

# this will throw a missing-key error similar to:
#Err:4 http://mirrors.ustc.edu.cn/kubernetes/apt kubernetes-xenial InRelease
#The following signatures couldn't be verified because the public key is not avail

# add the key; check which key ID the error reports and adjust accordingly
gpg --keyserver keyserver.ubuntu.com --recv-keys BA07F4FB
gpg --export --armor BA07F4FB | sudo apt-key add -

# again, use whatever key ID the error gives you
#https://www.cnblogs.com/yickel/p/12319317.html
#FEEA9169307EA071
#apt-key adv --recv-keys --keyserver keyserver.ubuntu.com FEEA9169307EA071
#gpg --keyserver keyserver.ubuntu.com --recv-keys FEEA9169307EA071
#gpg --export --armor FEEA9169307EA071 | sudo apt-key add -

# update
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

Configuring the docker cgroup driver

Mirror settings referenced from a Chinese blog.

# note: this configures China registry mirrors
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "registry-mirrors": [
    "https://kfwkfulq.mirror.aliyuncs.com",
    "https://2lqq34jg.mirror.aliyuncs.com",
    "https://pee6w651.mirror.aliyuncs.com",
    "https://registry.docker-cn.com",
    "http://hub-mirror.c.163.com"
  ],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF

If you're sick of typing sudo for every docker command, add yourself to the docker group:

sudo usermod -aG docker ${USER}
su - ${USER}
id -nG

Apply the config:

sudo systemctl enable docker
sudo systemctl daemon-reload
sudo systemctl restart docker

List the images this k8s version needs:

kubeadm config images list
#Kubernetes version: v1.22.2

k8s.gcr.io/kube-apiserver:v1.22.2
k8s.gcr.io/kube-controller-manager:v1.22.2
k8s.gcr.io/kube-scheduler:v1.22.2
k8s.gcr.io/kube-proxy:v1.22.2
k8s.gcr.io/pause:3.5
k8s.gcr.io/etcd:3.5.0-0
k8s.gcr.io/coredns/coredns:v1.8.4

Actually start k8s.
Note that access to k8s.gcr.io is blocked here, so you need to specify the Aliyun repository: --image-repository registry.aliyuncs.com/google_containers

# if your network isn't blocked you can skip this flag
#--image-repository registry.aliyuncs.com/google_containers
sudo kubeadm init --image-repository registry.aliyuncs.com/google_containers --apiserver-advertise-address 10.78.1.123 --pod-network-cidr 10.244.0.0/16 --token-ttl 0

Since pulling directly gets blocked by the network, try pulling from Aliyun with kubeadm.
Reference: https://www.cnblogs.com/hellxz/p/13204093.html

kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.22.2

The pulled images:

docker images

REPOSITORY TAG IMAGE ID CREATED SIZE
hello-world latest feb5d9fea6a5 4 weeks ago 13.3kB
registry.aliyuncs.com/google_containers/kube-apiserver v1.22.2 e64579b7d886 5 weeks ago 128MB
registry.aliyuncs.com/google_containers/kube-controller-manager v1.22.2 5425bcbd23c5 5 weeks ago 122MB
registry.aliyuncs.com/google_containers/kube-scheduler v1.22.2 b51ddc1014b0 5 weeks ago 52.7MB
registry.aliyuncs.com/google_containers/kube-proxy v1.22.2 873127efbc8a 5 weeks ago 104MB
registry.aliyuncs.com/google_containers/etcd 3.5.0-0 004811815584 4 months ago 295MB
registry.aliyuncs.com/google_containers/coredns v1.8.4 8d147537fb7d 4 months ago 47.6MB
registry.aliyuncs.com/google_containers/pause 3.5 ed210e3e4a5b 7 months ago 683kB

Only if you need to retag the images, use the shell script below (I adjusted the version variables; credit to the original Chinese author).

#!/bin/bash
# Script For Quick Pull K8S Docker Images
# by Hellxz Zhang <hellxz001@foxmail.com>

# please run kubeadm for get version msg. e.g `kubeadm config images list --kubernetes-version v1.18.3`
# then modified the Version's ENV, Saved and Run.

KUBE_VERSION=v1.22.2
PAUSE_VERSION=3.5
CORE_DNS_VERSION=v1.8.4
ETCD_VERSION=3.5.0-0

# pull aliyuncs mirror docker images
#docker pull registry.aliyuncs.com/google_containers/kube-proxy:$KUBE_VERSION
#docker pull registry.aliyuncs.com/google_containers/kube-controller-manager:$KUBE_VERSION
#docker pull registry.aliyuncs.com/google_containers/kube-apiserver:$KUBE_VERSION
#docker pull registry.aliyuncs.com/google_containers/kube-scheduler:$KUBE_VERSION
#docker pull registry.aliyuncs.com/google_containers/pause:$PAUSE_VERSION
#docker pull registry.aliyuncs.com/google_containers/coredns:$CORE_DNS_VERSION
#docker pull registry.aliyuncs.com/google_containers/etcd:$ETCD_VERSION

# retag to k8s.gcr.io prefix
docker tag registry.aliyuncs.com/google_containers/kube-proxy:$KUBE_VERSION k8s.gcr.io/kube-proxy:$KUBE_VERSION
docker tag registry.aliyuncs.com/google_containers/kube-controller-manager:$KUBE_VERSION k8s.gcr.io/kube-controller-manager:$KUBE_VERSION
docker tag registry.aliyuncs.com/google_containers/kube-apiserver:$KUBE_VERSION k8s.gcr.io/kube-apiserver:$KUBE_VERSION
docker tag registry.aliyuncs.com/google_containers/kube-scheduler:$KUBE_VERSION k8s.gcr.io/kube-scheduler:$KUBE_VERSION
docker tag registry.aliyuncs.com/google_containers/pause:$PAUSE_VERSION k8s.gcr.io/pause:$PAUSE_VERSION
docker tag registry.aliyuncs.com/google_containers/coredns:$CORE_DNS_VERSION k8s.gcr.io/coredns:$CORE_DNS_VERSION
docker tag registry.aliyuncs.com/google_containers/etcd:$ETCD_VERSION k8s.gcr.io/etcd:$ETCD_VERSION


# untag origin tag, the images won't be delete.
docker rmi registry.aliyuncs.com/google_containers/kube-proxy:$KUBE_VERSION
docker rmi registry.aliyuncs.com/google_containers/kube-controller-manager:$KUBE_VERSION
docker rmi registry.aliyuncs.com/google_containers/kube-apiserver:$KUBE_VERSION
docker rmi registry.aliyuncs.com/google_containers/kube-scheduler:$KUBE_VERSION
docker rmi registry.aliyuncs.com/google_containers/pause:$PAUSE_VERSION
docker rmi registry.aliyuncs.com/google_containers/coredns:$CORE_DNS_VERSION
docker rmi registry.aliyuncs.com/google_containers/etcd:$ETCD_VERSION

Next, run these commands so kubectl can actually see the current k8s pods:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Keep this command; it's what worker nodes use to join:

sudo kubeadm join 10.78.1.123:6443 --token dwmvvg.q8ri8ygnpttgfgfi \
--discovery-token-ca-cert-hash sha256:2358a5cb7019e1xx0f9e3969bxx244179356c19ee57531b3aaf5de9xxefdce64

Check the master node status:

kubectl get nodes
NAME STATUS ROLES AGE VERSION
master NotReady control-plane,master 58s v1.22.2

Check the pod status:

kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-1f6cbbb7b8-7vvp2 0/1 Pending 0 50s
kube-system coredns-1f6cbbb7b8-tx2dr 0/1 Pending 0 50s
kube-system etcd-master 1/1 Running 2 64s
kube-system kube-apiserver-master 1/1 Running 2 64s
kube-system kube-controller-manager-master 1/1 Running 2 64s
kube-system kube-proxy-nxbq1 1/1 Running 0 50s
kube-system kube-scheduler-master 1/1 Running 2 65s

Network installation
This time using calico directly.
https://docs.projectcalico.org/getting-started/kubernetes/self-managed-onprem/onpremises

Again following a Chinese blog post.

Note this sentence in the docs:
If you are using pod CIDR 192.168.0.0/16, skip to the next step. If you are using a different pod CIDR with kubeadm, no changes are required - Calico will automatically detect the CIDR based on the running configuration. For other platforms, make sure you uncomment the CALICO_IPV4POOL_CIDR variable in the manifest and set it to the same value as your chosen pod CIDR
It asks you to set the pod network CIDR; the one used earlier was 10.244.0.0/16.

curl https://docs.projectcalico.org/manifests/calico.yaml -O

vim calico.yaml
# change this to your pod CIDR
# The default IPv4 pool to create on startup if none exists. Pod IPs will be
# chosen from this range. Changing this value after installation will have
# no effect. This should fall within `--cluster-cidr`.
- name: CALICO_IPV4POOL_CIDR
  value: "10.244.0.0/16"

kubectl apply -f calico.yaml

My calico install broke and removal behaved strangely; in the end I rebooted once.
If things go badly and you need to reset, run the commands below.
https://cloud.tencent.com/developer/article/1728766

kubeadm reset
sudo rm -rf $HOME/.kube /etc/kubernetes
sudo rm -rf /var/lib/cni/ /etc/cni/ /var/lib/kubelet/*

# honestly not sure what this does; after reset + reboot everything was fine, so it's optional
sudo iptables -F && sudo iptables -t nat -F && sudo iptables -t mangle -F && sudo iptables -X

Full install walkthrough (tw)

Because I was experimenting while installing, the order above got a bit messy. The flow below is the fully working procedure I arrived at. First open PowerShell and create two folders (one per VM):

k8s-lab-master 192.168.137.123
k8s-lab-slave1 192.168.137.124

Setting a static IP on the Hyper-V VM

Set the VM's IP to 192.168.137.123. Losing the connection right after netplan apply is normal.

sudo bash -c "cat > /etc/netplan/01-netcfg.yaml << EOF
network:
  version: 2
  ethernets:
    eth0:
      addresses: [192.168.137.123/24]
      gateway4: 192.168.137.1
      dhcp4: no
      nameservers:
        addresses: [8.8.8.8]
EOF
"

# to check whether the format is valid
sudo netplan generate

# apply the network settings
sudo netplan apply

Disable swap and the firewall

# update
sudo apt update
sudo apt upgrade

# disable the firewall
sudo ufw disable
sudo ufw status

# turn off swap for the current session
sudo swapoff -a
sudo vim /etc/fstab

# to disable swap permanently,
# comment out this line in /etc/fstab:
#/swapfile

Configure the docker cgroup driver

Reference: the official docs.

cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF

Apply the config:

sudo systemctl enable docker
sudo systemctl daemon-reload
sudo systemctl restart docker

Install k8s

Reference: the official docs.

sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl

sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg

echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list

sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

Start k8s

apiserver-advertise-address: your host's IP
pod-network-cidr: the internal pod network CIDR you want
token-ttl 0: makes the token never expire

sudo kubeadm init --apiserver-advertise-address 192.168.137.123 --pod-network-cidr 10.244.0.0/16 --token-ttl 0

If you need to change the master's IP later, see this foreign blog post.
If you really did fat-finger something earlier, you can add --ignore-preflight-errors=all to try to skip the failing checks.
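A minimal sketch of what that looks like, reusing the same init flags as above and only adding the ignore flag:

sudo kubeadm init --apiserver-advertise-address 192.168.137.123 --pod-network-cidr 10.244.0.0/16 --token-ttl 0 --ignore-preflight-errors=all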

Next, run these commands so kubectl can correctly see the current k8s pods:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

This command is for worker nodes to join; keep it somewhere safe, it's important:

kubeadm join 192.168.137.123:6443 --token vo3p0d.zy3m6u3pc3dxk2kl \
--discovery-token-ca-cert-hash sha256:c6bf7cbd96ad5a2b444f5a467f828d18380098cb29079a20ceec710fad39184d

List the pods currently on the system. coredns is Pending because the CNI network hasn't been plugged in yet:

kubectl get pods -o wide -n kube-system

NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
coredns-78fcd69978-65wpf 0/1 Pending 0 17m <none> <none> <none> <none>
coredns-78fcd69978-hnn7k 0/1 Pending 0 17m <none> <none> <none> <none>
etcd-docker-lab 1/1 Running 0 17m 192.168.137.123 docker-lab <none> <none>
kube-apiserver-docker-lab 1/1 Running 0 17m 192.168.137.123 docker-lab <none> <none>
kube-controller-manager-docker-lab 1/1 Running 0 17m 192.168.137.123 docker-lab <none> <none>
kube-proxy-zzsvp 1/1 Running 0 17m 192.168.137.123 docker-lab <none> <none>
kube-scheduler-docker-lab 1/1 Running 0 17m 192.168.137.123 docker-lab <none> <none>

Also check whether the cluster is OK:

kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-lab-master NotReady control-plane,master 6m43s v1.22.1

Since flannel has effectively been declared dead, calico is used as the CNI component here. You can also see that the way k8s deploys things is simply kubectl apply with a YAML file.
https://docs.projectcalico.org/getting-started/kubernetes/self-managed-onprem/onpremises

curl https://docs.projectcalico.org/manifests/calico-typha.yaml -o calico.yaml
kubectl apply -f calico.yaml

If nothing goes wrong, it should be Ready now:

kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-lab-master Ready control-plane,master 28m v1.22.1

This time coredns is running, and two extra calico components appeared:

kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-kube-controllers-58497c65d5-82nd9 1/1 Running 0 68s
kube-system calico-node-mptq5 1/1 Running 0 68s
kube-system coredns-78fcd69978-nzfph 1/1 Running 0 29m
kube-system coredns-78fcd69978-wg46c 1/1 Running 0 29m
kube-system etcd-k8s-lab-master 1/1 Running 0 29m
kube-system kube-apiserver-k8s-lab-master 1/1 Running 0 29m
kube-system kube-controller-manager-k8s-lab-master 1/1 Running 0 29m
kube-system kube-proxy-gggcx 1/1 Running 0 29m
kube-system kube-scheduler-k8s-lab-master 1/1 Running 0 29m

Finally, take a look at the describe command:

kubectl describe node k8s-lab-master

The most interesting part is the taint message. The master is normally tainted, and a tainted node won't take on regular workloads; you'll see this once the slave/node is deployed later.

Taints: node-role.kubernetes.io/master:NoSchedule
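Not part of the original flow, but for a single-node lab a minimal sketch of letting the master schedule pods is to remove that taint (the trailing minus removes it):

kubectl taint nodes k8s-lab-master node-role.kubernetes.io/master:NoSchedule-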

KIND installation

Watch out for this landmine: it installs Docker Desktop for you automatically.

choco install kind

To create a cluster you can follow [this guide](https://mcvidanagama.medium.com/set-up-a-multi-node-kubernetes-cluster-locally-using-kind-eafd46dd63e5)
or the iThome Ironman series write-up.

nvim kind.yaml

# kind.yaml contents
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker

kind create cluster --config kind.yaml

Once it's built, start poking around. Note that kind doesn't install kubectl by default, so you need to install it yourself (a sketch of installing it follows the output below); I can no longer remember when mine got installed.

k get nodes
#NAME STATUS ROLES AGE VERSION
#kind-control-plane Ready control-plane,master 2m21s v1.21.1
#kind-worker Ready <none> 115s v1.21.1
#kind-worker2 Ready <none> 115s v1.21.1
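A minimal sketch of installing kubectl, assuming the same chocolatey setup used for kind above (the kubernetes-cli package provides kubectl):

choco install kubernetes-cli
kubectl version --client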

Installing the k8s plugin manager Krew

Basically just follow the official site; it's a no-brainer install.

(
set -x; cd "$(mktemp -d)" &&
OS="$(uname | tr '[:upper:]' '[:lower:]')" &&
ARCH="$(uname -m | sed -e 's/x86_64/amd64/' -e 's/\(arm\)\(64\)\?.*/\1\2/' -e 's/aarch64$/arm64/')" &&
curl -fsSLO "https://github.com/kubernetes-sigs/krew/releases/latest/download/krew.tar.gz" &&
tar zxvf krew.tar.gz &&
KREW=./krew-"${OS}_${ARCH}" &&
"$KREW" install krew
)

Add the following to your environment variables, then echo to check:

export PATH="${KREW_ROOT:-$HOME/.krew}/bin:$PATH"
# check it worked
echo $PATH

Best to add it directly to .zshrc or .bashrc:

vim ~/.zshrc
export PATH="${KREW_ROOT:-$HOME/.krew}/bin:$PATH"

# reload zshrc
source ~/.zshrc

If this runs, you're good:

kubectl krew

Note that plugins are generally invoked like this:

kubectl pluginname command
kubectl-pluginname command
k pluginname command

Next, install the plugins that seem more useful.

tree

kubectl krew install tree
kubectl tree --help

# usage
kubectl tree deployment kubia

sql

kubectl krew install sql

# usage

# find pods whose phase is Running
k sql get po where "phase = 'Running'"

# find pods that are not Running, i.e. something is wrong
k sql get po where "phase != 'Running'"

# search by name; like works directly
kubectl-sql get po where "name like '%w%'"

If you're still not satisfied you can write your own plugin. Taking the silly cowsay as an example, you only need to name the program with a kubectl- prefix and copy it into /usr/local/bin/.
See the official docs for details.

sudo apt-get install cowsay
cowsay "helloworld"

# find where cowsay is installed
which cowsay

# check the contents
cat /usr/games/cowsay

# copy it with a new name
sudo cp /usr/games/cowsay /usr/local/bin/kubectl-cowsay

# to remove it if you don't want it anymore
sudo rm /usr/local/bin/kubectl-cowsay

k8s cluster upgrade

The upgrade process mainly follows the official docs.

One day I ran the following on the test master node; at some point it had been updated to v1.21.2:

k get nodes
#NAME STATUS ROLES AGE VERSION
#master Ready control-plane,master 26d v1.21.2
#node02 Ready <none> 26d v1.21.1
#node03 Ready <none> 26d v1.21.1

Upgrade the master node:

sudo apt-mark unhold kubeadm && \
sudo apt-get update && sudo apt-get install -y kubeadm=1.21.2-00 && \
sudo apt-mark hold kubeadm

Check the upgrade plan:

sudo kubeadm upgrade plan

The printed output:

[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.21.1
[upgrade/versions] kubeadm version: v1.21.2
[upgrade/versions] Target version: v1.21.2
[upgrade/versions] Latest version in the v1.21 series: v1.21.2

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT CURRENT TARGET
kubelet 2 x v1.21.1 v1.21.2
1 x v1.21.2 v1.21.2

Upgrade to the latest version in the v1.21 series:

COMPONENT CURRENT TARGET
kube-apiserver v1.21.1 v1.21.2
kube-controller-manager v1.21.1 v1.21.2
kube-scheduler v1.21.1 v1.21.2
kube-proxy v1.21.1 v1.21.2
CoreDNS v1.8.0 v1.8.0
etcd 3.4.13-0 3.4.13-0

You can now apply the upgrade by executing the following command:

kubeadm upgrade apply v1.21.2

_____________________________________________________________________


The table below shows the current state of component configs as understood by this version of kubeadm.
Configs that have a "yes" mark in the "MANUAL UPGRADE REQUIRED" column require manual config upgrade or
resetting to kubeadm defaults before a successful upgrade can be performed. The version to manually
upgrade to is denoted in the "PREFERRED VERSION" column.

API GROUP CURRENT VERSION PREFERRED VERSION MANUAL UPGRADE REQUIRED
kubeproxy.config.k8s.io v1alpha1 v1alpha1 no
kubelet.config.k8s.io v1beta1 v1beta1 no

Then run the following on the master to complete the upgrade:

sudo kubeadm upgrade apply v1.21.2

Node upgrade (note: run this command on the master):

kubectl drain node02 --ignore-daemonsets

The following error came up along the way:

node/node02 cordoned
error: unable to drain node "node02", aborting command...

There are pending nodes to be drained:
node02
error: cannot delete Pods not managed by ReplicationController, ReplicaSet, Job, DaemonSet or StatefulSet (use --force to override): default/helloworld

Temporarily kill the pod default/helloworld:

k delete po helloworld

Or simply use --force:

kubectl drain node02 --ignore-daemonsets --force

Upgrade kubectl & kubelet

sudo apt-mark unhold kubelet kubectl
sudo apt-get update
sudo apt-get install -y kubelet=1.21.2-00 kubectl=1.21.2-00
sudo apt-mark hold kubelet kubectl

restart kubelet

sudo systemctl daemon-reload
sudo systemctl restart kubelet

Uncordon the node:

kubectl uncordon node02

At this point the upgrade is done:

k get nodes

#NAME STATUS ROLES AGE VERSION
#master Ready control-plane,master 26d v1.21.2
#node02 Ready <none> 26d v1.21.2
#node03 Ready <none> 26d v1.21.1

Common docker commands vs. common k8s commands

Delete commands

These are easy to mix up: docker uses rm, k8s uses delete.

docker delete:

docker container rm c4bbaab0f7fd

k8s delete:

k delete po xxoo

Checking container / pod status

Also easy to get wrong: docker uses inspect, k8s uses describe.

docker status:

docker inspect 3e3bee70d0b

k8s status:

k describe po asp-net-core-helloworld-k8s-66f969cb87-69jss

Exec into a container

docker

docker exec -it 54a /bin/bash
docker exec -it 54a ls

Note that when exec-ing into a container with kubectl, it's best to add a double dash -- before the command and its arguments.

k8s

k exec -it asp-net-core-helloworld-k8s-66f969cb87-69jss -- /bin/bash
k exec -it asp-net-core-helloworld-k8s-66f969cb87-69jss -- ls

Copying files (cp)

docker cp:

# copy a host file into the container
docker cp "C:\Program Files\Git\etc\vimrc" 404b2c:/root/vimrc

# copy a container file to the host
docker cp 404b2c:/root/.vimrc ${PWD}/vimrcnew

k8s cp:

# copy the file haha into the pod
k cp haha java-sfc-k8s-dns-7cfc8db5f6-djjlw:/var/log/

# copy qoo from inside the pod back to the host
k cp java-sfc-k8s-dns-7cfc8db5f6-djjlw:/qoo qoo

Checking logs

For once the two are consistent. Note that -f means follow and keeps streaming output; exit with Ctrl + c.
docker logs:

docker logs 54a
docker logs 54a -f

k8s logs:

k logs asp-net-core-helloworld-k8s-66f969cb87-69jss
k logs asp-net-core-helloworld-k8s-66f969cb87-69jss -f

Dashboard installation

Download the dashboard YAML, then change the Service from the default ClusterIP to NodePort so it's easy to reach.

mkdir dashboard
cd dashboard
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.2.0/aio/deploy/recommended.yaml
vim recommended.yaml

#kind: Service
#apiVersion: v1
#metadata:
#  labels:
#    k8s-app: kubernetes-dashboard
#  name: kubernetes-dashboard
#  namespace: kubernetes-dashboard
#spec:
#  ports:
#    - port: 443
#      targetPort: 8443
#  selector:
#    k8s-app: kubernetes-dashboard
#  type: NodePort   # add this line

k apply -f recommended.yaml

Open the dashboard in Chrome to test; the URL will look something like https://10.1.30.191:31061/#/login
At this point it asks for a token, and we don't have one yet. Mainly following this guide:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard

First look at the ServiceAccount; the token name can be fetched with the commands below.
One trick here is jq's -r flag (raw output), which strips the surrounding double quotes.
Note that a trailing % in the output is usually just the shell's end-of-line marker, not part of the value.

k get sa -n kubernetes-dashboard
k describe sa admin-user -n kubernetes-dashboard

# get the token name
k get sa -n kubernetes-dashboard admin-user -o jsonpath="{.secrets[0].name}"

# get the token name with jq
k get sa -n kubernetes-dashboard admin-user -o json | jq -r '.secrets[0].name'

# get the token name with yq
k get sa -n kubernetes-dashboard admin-user -o yaml | yq e '.secrets[0].name' -

Next look at the current secret. This mixes the plain kubectl approach with jq and yq. Also note that the secret values come out base64-encoded and need to be decoded.

k get secret -n kubernetes-dashboard

# combine the token name and the secret to see what's inside
TNAME=$(k get sa -n kubernetes-dashboard admin-user -o json | jq -r '.secrets[0].name')

# or fetch the name with yq
#TNAME=$(k get sa -n kubernetes-dashboard admin-user -o yaml | yq e '.secrets[0].name' -)
k get secret -n kubernetes-dashboard $TNAME -o yaml

# it looks like this
#apiVersion: v1
#data:
# ca.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM1ekNDQWMrZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeE1EWXdNekF4TkRreU9Gb1hEVE14TURZd01UQXhORGt5T0Zvd0ZURVRNQ
#kVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTCs2CnJxaysrcmI4RUJ0YUZWS0QwaW5CSGRFc2RDdTBNVm9jV0JCMmVDYXEwUDhrUlUyTFIwMVlmZW52K3BpVnlPVlIKc2ZvTGt6Nnl3L3IwVUR5bEJLK2hB
#RUVBcStHRUxJTVhJU2FqMzErS2FwV1R1ejNac0ZVUENxeENDZFMrMW9FMQpZdWNaRXhxRUdxS3JNa2wrQWF0dGtuOTFwYngvc2ZzeHdRdGo0ZVl0ZC8yVjhaNzJiYVh2Nncxeisvb3dUZ3lqClVZYUY1Wm4wMnVOeXZmTW9tWnhhMTVBNUZMSituRFlXYVRrVzNZS2NKY2UraDNQQWM
#wdjU0M0RmK21kbExiVTkKNk5pSmFVZEhTUjFLQUo0aVhQM1BDNGRxMUZFTnVnN1JUN3VnRmxRN3B1NnBTOElUN2NOUnp4TFEwcC93clpBYwp6cmZtbjRkYmxSWjBMQ1UzMzgwQ0F3RUFBYU5DTUVBd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0
#hRWURWUjBPQkJZRUZLQkV0UFdydVJYMDFkSHREZXF6L0V1cFhpbUdNQTBHQ1NxR1NJYjMKRFFFQkN3VUFBNElCQVFCOXNyQXgySmt5b3FERXhDSjg5Tk52R0JIL01TcmswbHZoRGhxaEJDbmd4QURGazZEWQpLbzQyUG5rMHdMY3BOVzF0bUc5T080T3FGa2VMUEdWK29wR3NKMjEwZ
#FV6dnJSbzdVUk5PQVlRQjVJQU04bklJClRpZk5TWHFsRzJkQzNPVGNqbHoxYVVwUzRqZGRUUEtESmoxU0ZFT0o0OVRBdko5VEpqRjhvOElDc3NQcDJnRWQKL1VRZEtmTlRodDRSaEd2TVdjcWxGT3l6MkFySTFhRUphWFFUSS9teXBIdXhiSERveDB1UVg1aG43K3dMYTZDTgpFZ0dQ
#YnZ4d2ZlWUltY25uL1dHQU5kQkJLenVYbExnQzJRanFCdDdEeWJ5enorMkE5V2liM2x5TEx6VnpiWGRNCmh5b0ZZSjcvTmlKOXQzMlZuRXh2NUg4ZEFvZ0RFbWV5RXhJKwotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
# namespace: a3ViZXJuZXRlcy1kYXNoYm9hcmQ=
# token: XXlKaGJHY2lPaUpTVXpJMU5pSXNJbXRwWkNJNklqUnRjMGh3U0U1S2JVUklRMVU0WHpsb2NHRlNWRE5XVEZsalpHYzNUVm95V1d4U2NWSkhjMHBYVDFraWZRLmV5SnBjM01pT2lKcmRXSmxjbTVsZEdWekwzTmxjblpwWTJWaFkyTnZkVzUwSWl3aWEzVmlaWEp1WlhSbG
#N5NXBieTl6WlhKMmFXT1xZV02qYjNWdWRDOXVZVzFsYzNCaFkyVWlPaUpyZFdKbGNtNWxkR1Z6TFdSaGMyaGliMkZ5WkNJc0ltdDFZbVZ5Ym1WMFpYTXVhVzh2YzJWeWRtbGpaV0ZqWTI5MWJuUXZjMlZqY21WMExtNWhiV1VpT2lKcmRXSmxjbTVsZEdWekxXUmhjMmhpYjJGeVpDM
#TBiMnRsYmkxMlpEVmpjaUlzSW10MVltVnlibVYwWlhNdWFXOHZjMlZ5ZG1salpXRmpZMjkxYm5RdmXyVnlkbWxqWlMxaFkyTnZkVzUwTG01aGJXVWlPaUpyZFdKbGNtNWxkR1Z6TFdSaGMyaGliMkZ5WkNJc0ltdDFZbVZ5Ym1WMFpYTXVhVzh2YzJWeWRtbGpaV0ZqWTI5MWJuUXZj
#MlZ5ZG1salpTMWhZMk52ZFc1MExuVnBaQ0k2SWpnd09USTNNek15TFRKa1ltSXROR0kxTmkwNE9UQmtMV1V6TTJVd016ZzNNR0kzWmlJc0luTjFZaUk2SW5ONWMzUmxiVHB6WlhKMmFXTmxZV05qYjNWdWREcHJkV0psY201bGRHVnpMV1JoYzJoaWIyRnlaRHByZFdKbGNtNWxkR1Z
#6TFdSaGMyaGliMkZ5WkNKOS5TcTYwVjBBcGM3VnUtNno1VVVQV2dvTVl4X0t1dWNoTHhNajc3SEFmdjdrTGxaNmlKckY5aGl1UVR6UkJhWXdxOUNrQkt3NHFod2tPUWZLX0ZWYjhNbkVVV2hsTURyRUltOWVYalVDU3MteEFiV1dmMzNCNGV2Zl8xNEFWMklOTTAzaGRSNS1jWk1aQX
#ZlMG80TDNvTU9OamhJMWNIdW9kN0hjd2ZJMEtxMUFGemxid1ZYQlI0NUlUYXV0U1JWVl9walhjZ2puR0o2aW5oYll121EwUXpDakpkRGdaX3FXdk9KQUVFd3V4WXdsNk1oazJBaVBGb0xRMUdRWDZMaUgxNEQ5RVU2T0U4OHJzd2hOZU0tTjBkZ0NaeVZwbTBkRWlBMldqczlUd0cxc
#kc2M2FrTEZVQkEzeXo4X0JWTEIwWFdWNnIyc1V0bEplN1dINzluQ0oyZ0E=
#kind: Secret
#metadata:
# annotations:
# kubernetes.io/service-account.name: kubernetes-dashboard
# kubernetes.io/service-account.uid: 80927332-2dbb-4b56-890d-e33e03870b7f
# creationTimestamp: "2021-07-13T02:07:07Z"
# name: kubernetes-dashboard-token-vd5cr
# namespace: kubernetes-dashboard
# resourceVersion: "4934766"
# uid: 6d0d9a76-2653-48c2-8ba2-cf9c62e312ca
#type: kubernetes.io/service-account-token


# finally, extract the actual token this way; it's base64, so decode it
k get secret -n kubernetes-dashboard $TNAME -o json | jq -r '.data.token' | base64 -d

Next, create a read-only user. Fetch its token the same way to test it (a short sketch follows the YAML), so I won't spell out every step again.
dashboard-read-only.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: read-only-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
  name: read-only-clusterrole
  namespace: default
rules:
- apiGroups:
  - ""
  resources: ["*"]
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - extensions
  resources: ["*"]
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - apps
  resources: ["*"]
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: read-only-binding
roleRef:
  kind: ClusterRole
  name: read-only-clusterrole
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: read-only-user
  namespace: kubernetes-dashboard
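For reference, a minimal sketch of fetching the read-only user's token, mirroring the admin-user commands above:

k apply -f dashboard-read-only.yaml
TNAME=$(k get sa -n kubernetes-dashboard read-only-user -o jsonpath="{.secrets[0].name}")
k get secret -n kubernetes-dashboard $TNAME -o jsonpath="{.data.token}" | base64 -d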

Customizing the prompt

Since I use zsh, the example uses zsh.

cd ~/.oh-my-zsh/themes
echo $ZSH_THEME
cp robbyrussell.zsh-theme myrobbyrussell.zsh-theme
vim myrobbyrussell.zsh-theme

# add this line to show the IP
#PROMPT=$(hostname -I | awk '{print $1}')
# and this line to show the current kubectl context
#PROMPT+=$(kubectl config current-context)
# ...rest of the theme

vim ~/.zshrc
# change ZSH_THEME
#ZSH_THEME="myrobbyrussell"

# reload the config
source ~/.zshrc

Landmines in the handy tool yq

I picked up yq and jq from watching experts use them, but yq has since been upgraded to v4 and the commands differ a lot, which bit me hard with endless permission denied errors.
The trickiest line on the official site is: yq installs with strict confinement in snap, this means it doesn't have direct access to root files. To read root files you can:
If the file is owned by root you have to run it like this for it to work
(i.e. read the file as root via sudo cat and pipe it in; the trailing - tells yq to read from stdin)

sudo cat /etc/myfile | yq e '.a.path' -

Otherwise, the normal usage is just:

yq e '.apiVersion' curl.yaml

The second landmine is that . is now the document root; in v3 you apparently didn't need the leading dot. See the official upgrade notes for details.
The k8s example of pulling the CA data becomes:

sudo cat controller-manager.conf | yq e '.users[0].user.client-certificate-data' -
sudo cat controller-manager.conf | yq e '.apiVersion' -

Finally, shell completion. It doesn't seem all that useful; I use zsh:

echo "autoload -U compinit; compinit" >> ~/.zshrc
yq shell-completion zsh > "${fpath[1]}/_yq"

Controlling the cluster from a k8s worker node

One day kubectl on a worker node spat out the error below and I thought the machine had blown up again:

k get nodes
#The connection to the server localhost:8080 was refused - did you specify the right host or port?

Check whether ~/.kube/config exists on the node; copy it from the master to the worker node via NFS or scp:

cat ~/.kube/config
#cat: /home/ladisai/.kube/config: No such file or directory

# run this part on the master
#cd /var/nfs/
cp ~/.kube/config .

# run this part on the worker node
cd /nfs
cp config ~/.kube/

# or copy it directly with scp
scp config ladisai@10.1.25.125:/home/ladisai/.kube/

Also, if you develop on Windows, you can install kubectl directly on Windows and manually edit the config to add the cluster; that works too.
If you want autocompletion, see this foreign post, though it doesn't seem to support aliases, ugh.
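Instead of hand-editing, a minimal sketch of letting kubectl merge kubeconfigs (hypothetical file names; KUBECONFIG can list several files and --flatten writes them out as one):

KUBECONFIG=~/.kube/config:~/.kube/new-cluster.conf kubectl config view --flatten > ~/.kube/merged
mv ~/.kube/merged ~/.kube/config

The merged result ends up looking something like the config below.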

apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLSxtCk1JSUM1ekNxDQWMrZ0F3SUJBZ0lCxRBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeE1EVXhOekE1TWpVME9Gb1hEVE14TURVeE5UQTVNalUwT0Zvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTTEzClVkSGVvUmtURyswN21XM2tjdlhUSXYydG1YSFpZeFV4NWV2MkFiU0NOZFhPMUdIelRtdDl4d0VJRjRzMVJrTjcKL2s2MElNM3dJeE1mNjNVZk9wMzJ5d1pKOTBzWHVXK0NyMHNMTjdGYWFtS20xTmkzMTNDZU4yV04wMHh4ZFVOegozWlZzZ3VsS2x2bEJYZENvMCt0K0RENUR2cEdTbkVUMXlKa3NLN0lrekR6bnBDcm01UlpqVTJiVXBQWUNFNVhWCmppWEUyVzFSOFVYc3gvd3pqKzlZa2Q1NS9jcXIzMEV3cTYxeVBhekJjeFFNUFU3Y0Q2dHpFdkV0MGdORFpGRmIKcjlSSHVHZ2MyaTZZNkNjN29FSDBFdWhHWGVtOThOZEM0ZndBZ2xSUnRpWG1ud0VEMGdySG0rbXVkcDlhMzU1UApkbDczUkdIK0V0YmJJa0tPSWNjQ0F3RUFBYU5DTUVBd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZNSkNpbUtTSGhJTjdWM2FCUnBuRWl3NmtXaXlNQTBHQ1NxR1NJYjMKRFFFQkN3VUFBNElCQVFCcEgzY21OdWFoWWNGTEVzM3BUY1J6bWswNnB1UHZ1aEY0VjE3dXpLNnRxT2JzaDMxawo1ZUVSRXVCQ25LVVVsa3NZYm1TM0VLMWJqWThRQWgwbEtHeVRmMFU3dEg5SUFzekdMM1BaYnRKVjFvOEhnUWhaCnJtOWFPVVliU0RkVHk5VU5jaW1HYXBJY1pwQXloQk54T0Rua1ZDNEVaNGYzZ2llRWVUQmFxUnoyUVJRUUhZVW4KR3VuZFZkR3N2aXN3VWdtVUZVa2xtUTV3azBVWUtnTTRLc01YU1ZNN0hWdWEveEcxQWI5N1NxeW11R05acGJkTQpoN3c0V1B0MTdxNWxNenJPOGN4M3FjMFBsd05rZnlsYURJaG40cHdhRWVieWtsMksvbC9QK0M3aG0vVThPZDQvCklZbGlrckZwQ2U4cHV3NFFKYWJxNFhvVVkrM0hiUERPUkRiMQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
server: https://kubernetes.docker.internal:6443
name: docker-desktop
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUxRS0tLS0tCk1JSUM1ekNDQWMrZ0xSUJBZ0lCQURBTkJnaxa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeE1EWXlPVEE0TkRBek5Wb1hEVE14TURZeU56QTROREF6TlZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTlJnCktqK3cvMkdvcnlmWW4xNmdkdzJpdG9VSldUMndITWgwU2lQOUM4NXRqYXBQbVZQYVFvWWNHTkFLUDdiVHlsSjMKUkNhK2h0eTU2NjFJTTNabU9YZlpnbEY2aTFka2ZRaC8xS1YxQ3ZqWUpCT0tRWXZOVzRETXdFU2htdDF3ZytQZQo5bFJQRk1HZGs3WlppcFhqaXZrRHMzV2ZRa2poWUFDMkpNMlBYSWdFOXJVb0lQU3FxK3hnYkY3RDcwYThRblBuClNXU29vTDEyZ01oV0lRdjVZZm82L1ZSeFZpK3o1ckJ1eWgvYnpsSnM3NHNDRDB5QkF4WWZlNWRYSjdvWktTVjUKVXdhNkw3SUVkNnJocG1uSWFYWWpCYnowUlpHd01Eb1ZXTit1ZzdORXVXODl6WFpTMUJCWFRNU0VmVkEwYTB1bgowNjNFTTduT0tzR1BkREVydkdjQ0F3RUFBYU5DTUVBd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZBTEtzQXh5NU8vQTVSRmkwV05WTkFxbGp6TUhNQTBHQ1NxR1NJYjMKRFFFQkN3VUFBNElCQVFDSEJrQmhkbzZ3TFdWZUlxR2FkbjYvaVhyWDIyL3JIVFFYSzErZDZIUzBNVlk0U0kyegprc1lwNEJpaWlpaGZSNlV5d3RreSsxWTZlaitzRGh5Q3hhYmdzb0dzeGhtY1VCWTJQWlZhWmVhdG04amdhbkE1CmR4b3hYdjdHQlZiOTc0SkdqMnByY2xoNm9VYldLRzJKQlpxYmlFS2NJaVJQTWtRbUJZUC8wdHk4Q0NpRldGdHEKcnZ6MU9YWmxQZEZNYW1FRjJUaVhMS1hQQitOU0NLYjFRZWhPK2tMQURLWUZ6cGgzRjhENGU0UVZVS041bkpjagpUOVNjNTNDdVk3SUFYdmtUb3RDdXh6VzZYL0NYcmkxOE8rRDdOZFJFYzlJOWRqZU1oYy8zYUhxMHhUYlJPVnBTCjlmWS9QZ3hVK01BRzdTSkI5Ynl5TlljeE4vSFN1L2RaL2swUAotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
server: https://127.0.0.1:64102
name: kind-kind
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0x0tCk1JSUM1ekNDQWMrZ0F3SUJBZ0lCQURxa3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeE1EWXdNekF4TkRreU9Gb1hEVE14TURZd01UQXhORGt5T0Zvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTCs2CnJxaysrcmI4RUJ0YUZWS0QwaW5CSGRFc2RDdTBNVm9jV0JCMmVDYXEwUDhrUlUyTFIwMVlmZW52K3BpVnlPVlIKc2ZvTGt6Nnl3L3IwVUR5bEJLK2hBRUVBcStHRUxJTVhJU2FqMzErS2FwV1R1ejNac0ZVUENxeENDZFMrMW9FMQpZdWNaRXhxRUdxS3JNa2wrQWF0dGtuOTFwYngvc2ZzeHdRdGo0ZVl0ZC8yVjhaNzJiYVh2Nncxeisvb3dUZ3lqClVZYUY1Wm4wMnVOeXZmTW9tWnhhMTVBNUZMSituRFlXYVRrVzNZS2NKY2UraDNQQWMwdjU0M0RmK21kbExiVTkKNk5pSmFVZEhTUjFLQUo0aVhQM1BDNGRxMUZFTnVnN1JUN3VnRmxRN3B1NnBTOElUN2NOUnp4TFEwcC93clpBYwp6cmZtbjRkYmxSWjBMQ1UzMzgwQ0F3RUFBYU5DTUVBd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZLQkV0UFdydVJYMDFkSHREZXF6L0V1cFhpbUdNQTBHQ1NxR1NJYjMKRFFFQkN3VUFBNElCQVFCOXNyQXgySmt5b3FERXhDSjg5Tk52R0JIL01TcmswbHZoRGhxaEJDbmd4QURGazZEWQpLbzQyUG5rMHdMY3BOVzF0bUc5T080T3FGa2VMUEdWK29wR3NKMjEwZFV6dnJSbzdVUk5PQVlRQjVJQU04bklJClRpZk5TWHFsRzJkQzNPVGNqbHoxYVVwUzRqZGRUUEtESmoxU0ZFT0o0OVRBdko5VEpqRjhvOElDc3NQcDJnRWQKL1VRZEtmTlRodDRSaEd2TVdjcWxGT3l6MkFySTFhRUphWFFUSS9teXBIdXhiSERveDB1UVg1aG43K3dMYTZDTgpFZ0dQYnZ4d2ZlWUltY25uL1dHQU5kQkJLenVYbExnQzJRanFCdDdEeWJ5enorMkE5V2liM2x5TEx6VnpiWGRNCmh5b0ZZSjcvTmlKOXQzMlZuRXh2NUg4ZEFvZ0RFbWV5RXhJKwotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
server: https://10.1.25.123:6443
name: kubernetes
- cluster:
certificate-authority: C:\Users\yourname\.minikube\ca.crt
extensions:
- extension:
last-update: Wed, 02 Jun 2021 14:42:07 CST
provider: minikube.sigs.k8s.io
version: v1.20.0
name: cluster_info
server: https://127.0.0.1:54032
name: minikube
contexts:
- context:
cluster: docker-desktop
user: docker-desktop
name: docker-desktop
- context:
cluster: kind-kind
user: kind-kind
name: kind-kind
- context:
cluster: kubernetes
user: kubernetes-admin
name: kubernetes-admin@kubernetes
- context:
cluster: minikube
extensions:
- extension:
last-update: Wed, 02 Jun 2021 14:42:07 CST
provider: minikube.sigs.k8s.io
version: v1.20.0
name: context_info
namespace: default
user: minikube
name: minikube
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: docker-desktop
user:
client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURxCk1JSURGVENDQWYyZ0F3SUJBZ0lJZlBadTBLYzMwN0V3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TVRBMU1UY3dPVEkxTkRoYUZ3MHlNakExTWpFd05UQTJORFZhTURZeApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sc3dHUVlEVlFRREV4SmtiMk5yWlhJdFptOXlMV1JsCmMydDBiM0F3Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLQW9JQkFRQzZKNGZRMXV5cnFkQk8KT25GMUV0UVJ0YWVrRnptMVdTSUd5b3FETlo5MXZnQnYwaUFNa253dHFtZkRxcWk1eVhXSGxwcVlMN1pXZU8rbgpNbXB0ZDRzKy9lbk1XT0kxbkhybyt6OWt0TTlSUy9RUFdJM0JYUkhwbktyWjlwajMxam5KNDdrRnNpZC9NbTZJCkJIK2NrWTlEbzl6RWVjS1E4cUpVcHlId1V5d2VqUUcrL2ZPUkZzZzhUNXREL0pCemF3bFgvQ0FzR08xeEpxa2wKVmlSTTVrWnhqS2dxWlRhUSt1WHJMYXVwUUZpalRVcGt2cVFhdlhxSGNLN2d5cGN3NnBBNDVBVE5taWNhWi9qTApzajVaU3JjbjU5WExIc0dKenVvZXVkYjNhMXhXMlMxa25hcE5HVEc5azY0U1g5Z1lpTnJ6TW1vTWVYa1gxNE1UCjRMbDFkU21QQWdNQkFBR2pTREJHTUE0R0ExVWREd0VCL3dRRUF3SUZvREFUQmdOVkhTVUVEREFLQmdnckJnRUYKQlFjREFqQWZCZ05WSFNNRUdEQVdnQlRDUW9waWtoNFNEZTFkMmdVYVp4SXNPcEZvc2pBTkJna3Foa2lHOXcwQgpBUXNGQUFPQ0FRRUFPWUVjM2dIQkJBWjNkVm8ra3RteGJwbUt2WE5NMFhCbEszbGkwem9tYUFITTBUN2tFTnpwCmE3d3lRMVlQRitQZ2U5UTFXL1VJTTE1T3puM3J5U0VGdGE3N0luSTk2SURHNTRNQmVseWFsU3J0N0ljWFZoTEEKWFpzRWVMenlkWCtzOHNIb0J0Z1pMc0tTYUg4bllNTXJYdEVMSDRteFR0cVErdElGbTBrYmhMMGpETUhvR3JrOQpHdjZHbTN2Y1lvMHpyWng1Yi81VkNWVGtxRjRxeTA0MWZBQkJuSXNjZW1wR0FvSzZWOENRaUQvWFQ2TkMrTG5SClYrSmljdkNnTnZiNlVKcmd0TG81VDlCbHZpeXdPcTlOcmF4bmk4Y1JkYk1OWjVJWHlFa0NoZW41eGJOTzRTSTUKZ1J3NXVqYkg1blpDODNzczBEVUt2QXZvTGplT1NHRis0QT09Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFxtLS0tLQpNSUlFb2dJQkFBS0NBUUVBdWllSDBOYnNxNm5RVGpweGRSTFVFYlducEJjNXRWa2lCc3FLZ3pXZmRiNEFiOUlnCkRKSjhMYXBudzZxb3VjbDFoNWFhbUMrMlZuanZwekpxYlhlTFB2M3B6RmppTlp4NjZQcy9aTFRQVVV2MEQxaU4Kd1YwUjZaeXEyZmFZOTlZNXllTzVCYkluZnpKdWlBUi9uSkdQUTZQY3hIbkNrUEtpVktjaDhGTXNIbzBCdnYzegprUmJJUEUrYlEveVFjMnNKVi93Z0xCanRjU2FwSlZZa1RPWkdjWXlvS21VMmtQcmw2eTJycVVCWW8wMUtaTDZrCkdyMTZoM0N1NE1xWE1PcVFPT1FFelpvbkdtZjR5N0krV1VxM0orZlZ5eDdCaWM3cUhyblc5MnRjVnRrdFpKMnEKVFJreHZaT3VFbC9ZR0lqYTh6SnFESGw1RjllREUrQzVkWFVwandJREFRQUJBb0lCQUFiQjVwazdKQTQ3Tk5lUwpJWW81YTc5VTA4Z09HOGNzZkNLNCtYdzMxeGtFRTZuN2U3UlpJTzdiYjdiWG5CWmFiTXpHTjhoc2V2YjZudUIzCjRRc21Pc1RIbk5RUktleitTQ3ZxNnVzeDhSQ25iQzJlYms3bG5QL1k4dzdFZDlzUFNMdSthM244ZEppV2NSSzQKN3hUMDU3bHgybEs3aE1lVU56WlJkdGJ0ZmYyQjJ2UDlhanRNZHk4YnN2bTA3L0ZpMnAyc0JpZ1E2VXpwMHFlcgpFSUVDQlJXQ1V5cXQzSXF1UW8wNFRMYXhrUmNGeFJWcDlXdEJvZEVoN3c2R215TkcvVUo1V010RUgvK3RldHVxCmNBdjl4UWJPOFFGeEhhVXkxVllwb1RLWGdPSERZTmtzeVlQemFUdmxBd2lCaHZ0dVVJKzlMaklmN3FGajZXNkgKVHg3TzFIRUNnWUVBN3BqaW5GZk1aNWVnNkVET0M1dnFDdGVxWWd6UEFPTTRRbjUxdmg2N2lZRWl0c3pRckNDYgpBMmNuZW4wSjhmMGlNcGFiR3F0THhSdzk2cWRSK3VHcUhhZjl2MVBUSUhsVld5bk1JM1J0dzM2V0N3R0tqa0NYClhOdCt1bU9pV29MbmhxQzdSOXYrVVo3YWlUb3VNSFRJY1N2aGFpUzJkb1hLSlFYaFhXcXUxUWNDZ1lFQXg3dHQKYkFYaTZmOU5GdS9TVmhyamRKazNMcEFwbWlVM0ZUOUhyejBSY0xJUWppWjdWakRNcHRsZm55WW9BVGYrR2ZBSAo1a2N4ZFdyTGNUV0dvc2RVTUkxa0NWMUZIVCtITkpkU1NFSXQzd3ZJMDYxNjV5eFRsSGtKNVZtMWVxU3RDZ2N0CnZsNFE5S1BxYjY4aWlZT0xlSVBmc2ZiUUtpSzZHUWZJdTNwZXJUa0NnWUJFL0pXQkNPM0VBaFozTU0yaWs2a2YKQzI1clBUTFpHZG1aZUVFSkFJL08yVFMxVUJFQnc4ZXVPelF4K1ZkWHpZNEd2SDhLUGY4QmRnSDlCL1h2S1RKcgpzcmZ1aXdrZmVaV1JiMHRqOFBVUHNsa2x3NE5SVUNHenFvOUF5ekFWSllaVjZjRmNyS0lpN1dCWWp5YnR3Y1oyCjJtNHBwNFhPVFM2K2Q2M0t1ZDdsSHdLQmdFY0FJS0N5NHZ3dHJqakdIZTVQOXFWZlJkZCtsZHRlK1ZyTE9POVoKZFJhcnBlanlVd3ZMb3lSNHgxNHEwVFBGdE1XQnB6MDc5NS8yeThVOXN0T3dxZ1BzYnpCSkFLV3FES1VzV2FxbwpJK2hUSnh2Z1luMUZLNXp1L2c2U3VrbVR1cE9EQThiVlo0K2ZxVm4wVndHdFNtb1g3dkF6ZmNKTXYvemY0SUtNCnVKVTVBb0dBYThsNS8vaFpCanY4V3RTZUxOT1JhYlppUlZZMFZmS3pRTXRyL2FJTjMwMUtnNUI1RUFiQm1rNXcKKzJ4L3RYTERmSzlrd0VIUy9GT2xQdk52TzlFL2FMVmJEM0xPSWk2aTJYQnp3V2JLRHpJM2xnUVlZbFgvQlhOVAphQmpDMDV4aFFFdEZ1UEg0RmNPamNTSURWWVBReStIeWx0bTFKRDYvdWVhcnFDaEQ5Uk09Ci0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
- name: kind-kind
user:
client-certificate-data: LS0tLS1CRUdJTiBDRVJUxQ0FURS0tLS0tCk1JSURJVENDQWdtZ0F3SUJBZ0lJT1pQQ2htczJTNUF3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TVRBMk1qa3dPRFF3TXpWYUZ3MHlNakEyTWprd09EUXdNemRhTURReApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnJkV0psY201bGRHVnpMV0ZrCmJXbHVNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQTYyZk5zRXprc01wUVE4bVgKcVRNRzhldzgwU1RjVkJnZjRHSUhpTlM5S0pwOHFOcTF6SlB3VW1lMjJjUUFtR2ZXTEhqcmh4RG5abE1jaWxyVwpCdkFFUFpDdUxzZi9xemk2SEQ4THdTbWd1NHBsTElBb2lwUjlqejQzMlV4YzNyNklha0pBYzVHeWdSRXc4TklsClJSbW1FTEI0TGEwbnltaFk0YUNrYURYWFQwOWNsMzNDK0dWTmc1MmZXdFFSTTJ3WjNrWXRzTVZKc3VZbFhmT3QKY3BhWExRSGZac2ZZY3I1MkYrNXN3SEJFd0lORmNaM2lTeUpFdGl1L2o0bGxiYllxTDFyd0NVR1hxZjN3VzdtSApzTWYzR2owV29vUzFGSWI0dUxIc3FLRFljY1VnbG5qQzliaWgyWTE4RFFrcW9seE9KaHVObkxmRzJpNnMycE5XCitxRWJ5d0lEQVFBQm8xWXdWREFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUgKQXdJd0RBWURWUjBUQVFIL0JBSXdBREFmQmdOVkhTTUVHREFXZ0JRQ3lyQU1jdVR2d09VUll0RmpWVFFLcFk4egpCekFOQmdrcWhraUc5dzBCQVFzRkFBT0NBUUVBdCs1MEdJOFA0bkx1cC9QK0hITGUwVVI1R3BtNGVoZERJNWl3Ci9KN1liZ0dCY3N0UHp2b2JvQ3k5U0hCNEtQemlWOVlOaGUzUURhN1dSdVJtWEw2RTRyZlVzd0U1K0FLTmJ1S2cKVUNVZlBMMm1FeDl5dVMrc055U09EemtEcWVjM2ZQNWZGdlZoOW0vb242bHEreDl2NGUzZEw4L2dGQWtiOEptdgpSd0FsSjBVNnZaMFZIMHd0U2N1S0txN0lNK0szMkFOVUdyWVZsQTF4ZkNzQXlYdWF1R0l4QUFpQ3dTMTMvQ1JqCjRRT25rQm5STEZpM0cvRHlucGh0SnNhWWhqTWdkYVdzUkZ2YW5BY0FReXlkUkRRYWNwVzhlTmpiRXJhaVRGRHMKVXJXYW1pMm53Wmhrd2w1Tnd2L2NUaVdQQ1RZbTJEakRjbTdNWm9CblhKWlU3emNQR0E9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRxQpNSUlFb3dJQkFBS0NBUUVBNjJmTnNFemtzTXBRUThtWHFUTUc4ZXc4MFNUY1ZCZ2Y0R0lIaU5TOUtKcDhxTnExCnpKUHdVbWUyMmNRQW1HZldMSGpyaHhEblpsTWNpbHJXQnZBRVBaQ3VMc2YvcXppNkhEOEx3U21ndTRwbExJQW8KaXBSOWp6NDMyVXhjM3I2SWFrSkFjNUd5Z1JFdzhOSWxSUm1tRUxCNExhMG55bWhZNGFDa2FEWFhUMDljbDMzQworR1ZOZzUyZld0UVJNMndaM2tZdHNNVkpzdVlsWGZPdGNwYVhMUUhmWnNmWWNyNTJGKzVzd0hCRXdJTkZjWjNpClN5SkV0aXUvajRsbGJiWXFMMXJ3Q1VHWHFmM3dXN21Ic01mM0dqMFdvb1MxRkliNHVMSHNxS0RZY2NVZ2xuakMKOWJpaDJZMThEUWtxb2x4T0podU5uTGZHMmk2czJwTlcrcUVieXdJREFRQUJBb0lCQVFEQlNGKzRhOG94NWt0MAovU2JMUkJ4bHNxUlV6TUVqUXhPWk5xUWRFeCtsSVFOTjJSWUFQVS9MT1dFRytFbk0yU1VmS3NHb0NwY1VpeFVaCi9HOVREdXRNYVdpNi9IZk43Q3ZUV1dpYlYwU2o5NFFPdjhPSjFWWXFzTmxHVDg3SkRRUVF5d2tFV3hLSHFzZlcKVTVWS1lUN2E0U29yeHNxdkJISkYvNUkrQmtjYzArZHBabXRkSEZOcjdQNEErNkZQYTVRZFNic1RMeElGTmwrVApTbUJzNUJnZU1hTlZBb1RFYk9mNk1LeEVzeGxLcTZETzVaNEpMZ2h4aGx1U096M3R5OC9qZHpBMm84akRlUUxICnlDY3lJazdkdjhOTytJU1hLYXlpNXlTY1VwaXJ2UnhrMCt5cjV3aDJQRVJWYjBGeTBZSUFlNWpVT0hVYTFNelAKNzFhTWJhYmhBb0dCQVBxdjBNNkFiQUw3by9MMFBMdzRIc1FtQ1dKQ3V1eUI5L0ZBTVZ4VVFycXd0MytwSFFkago0VUZiN2lvTVBaU1F4bUl2Y3VscERQeFdKTlRnUG1KUEJRb2ErM2hQYnhFY0txZlNjVVI1NVlSTTQvdEhkMU05CjBNcFVidHZkN0Q2bjBiaU10aUNrOHZ1UGRwSUcvckJyaURybXJldlk0MHhabG5RMHh2NHJBT1RqQW9HQkFQQmwKRXZZUncveGRVOGZMWUJZeWF3T2p0dVg5dzJWT1dVOUs1Zk9sb0xwVFQ0MW8zSEtlR3BYSUlTYlBHL3hPcURiVgoyMmRXcVB3NEx4OHFuTS9BMFl0VjdRWDAwS3hMT3k5OU5uNVVXR0d4cUJOMGE0UUplUUt2eXZONElQLzh5aytICkl5emFxZzN3ZVdHN2xDeDNrcnRuMi9ZTW9lR1V6Z29pa1RNYXQ0bjVBb0dBYTEwSkxLZkxtcXR6V0FaS1RNSXMKU3cyUFQwb05ER1hOYnNGelludWo2Smp1dmZvTHVMS0tNcGZRdEtseFprTnE4M29tMk5obysxbFpoT0pWVlgxSwpSejJ2SGFQSGlhaHFqRjJRclNjWHFVWFZEalZaWVlsRDlxT2Fwd2V3dWxUZGVSQ3FuK2lGT0VBRkpCMWl6dVAvCkFGcnplZUwxMWlrNFNxU2Y1Uk05MnNrQ2dZQjkrdW9oN0pPQjZNTGtQSStoY2xDa3VxSTZDMi9mNGx4cGNuM3AKM3MzSmQ2bUVHUVVXU0FiMG9jbkYxZG43c3BqekM4WU1kTnpnT08xdzd0cjVBVHFQUTd1UVdJa1hFZUgxZERBZgpxa0liQ0lobGthaGFyTUF2Q1VOWnJvWFV3WHlnaXRpRFJDREVaMWFsUWpGWDBGNGtPanlLeUhuNWh3c25RcEJICmNPUG91UUtCZ0ZweU1QcWZ2Nk9kaDJOVnpmQ0hFYXk1dFhuQkgrQStLRGRRSTMrUThCSGI0cE8rejRDa29aWDgKN3B1aEtQaDE5MkpBNm1TVTI0YTVGbHVWZmduTHFXd09wUmZmUzFNelpKNzRNOTlNV1htRFdaRFB6RWRqL2VpOApERmY1ekZxRXhySjZxR0xyeStkYUxwd2NSSC8vNWxGVlBhTFhDS0g0Nk9RZ2ZRbHY1bzhDCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
- name: kubernetes-admin
user:
client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQxBZ0lJT1V6V2RGMGdmQTB3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TVRBMk1ETXdNVFE1TWpoYUZ3MHlNakEyTURNd01UUTVNelZhTURReApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnJkV0psY201bGRHVnpMV0ZrCmJXbHVNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQXcyd3hoUS94a25qd3ZmbmsKUFhJbE1udVVEMjFxNjdSMEZBQ201WUdOaWZGYWUwZHNJSmZMaVdTOVlZdVlWcFhmNGNHYjZjdG5TNEJhakJoSApMcUhIUW1RR2JGZUNzRkpKaGI4RWQ3N1NiRGFPcVNiYzFLOHdrSlhsbGVhSVd1RUJsWW5SUHdsN3dxL3lFTUE0CldudUZsMnNMRTdGR1RKeEJDcEdEcmtsM0xHRVJGelljREJNdUE1K3dkbktUUDY5ak5BVndqYjlXRnJQNW55aTcKalZRcU1CYU1DMS9LWmVTZk9jMEVOQ1ZubEFOa3hzbEtueE1GZG83bDJlMWZTSjliOUE2NlJ5Ni9qUTFRdTRRTQpiVUh1K0ZHVlZzaVB6VktXc3VLR3hjZllHdEhxZzBNcHdrL29XQWZ6R090cDMwcmV0WnllWTR6RFRpZzFadDR3CmI4d2lkd0lEQVFBQm8xWXdWREFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUgKQXdJd0RBWURWUjBUQVFIL0JBSXdBREFmQmdOVkhTTUVHREFXZ0JTZ1JMVDFxN2tWOU5YUjdRM3FzL3hMcVY0cApoakFOQmdrcWhraUc5dzBCQVFzRkFBT0NBUUVBRmowdUhhMFV1dmdZeXljVW1NV3BYTDNld0I1MzE0MWc1QTY5CnhGbFR0ZTVmclFIb1A0c1I2ZlZ6OHRlMCttaDM4Um9HTDVVMC81MTVaejMxalJvcXd4ajFuSk1YNWk2WmY1U2oKa1VwbjJBSnpsaFpxSXRXWkFUZVFzcTQ2SUU4cHlqY2ZPQUxRNnF4elBRNzBBeVFjQ0tYeVJ4RnF0Y00yUzh0SgpacXJNOEN3Q1M0NFhrMDF5d0l5SFRqdm43SXp0YUVqRS8vZWNwNHMvL3ovN0J6TVJUOWIrci9HYmRVVzlFWDZLCnFFbUczQm1pcUVlYzY0eS9IVUVTMy9KV2JVK1ZjOXUxYy9KMm44c2RXd1N1eU9DR0ZoSFZLWDhianpHeTZNUWEKaXRlY2NhTjBlT0FuOHRjM0dlMTFFT0VsQTQ2dlFuNlJWNFVIb1ZPVFJGbFNBdms0blE9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBxBYSWxNbnVVRDIxcTY3UjBGQUNtNVlHTmlmRmFlMGRzCklKZkxpV1M5WVl1WVZwWGY0Y0diNmN0blM0QmFqQmhITHFISFFtUUdiRmVDc0ZKSmhiOEVkNzdTYkRhT3FTYmMKMUs4d2tKWGxsZWFJV3VFQmxZblJQd2w3d3EveUVNQTRXbnVGbDJzTEU3RkdUSnhCQ3BHRHJrbDNMR0VSRnpZYwpEQk11QTUrd2RuS1RQNjlqTkFWd2piOVdGclA1bnlpN2pWUXFNQmFNQzEvS1plU2ZPYzBFTkNWbmxBTmt4c2xLCm54TUZkbzdsMmUxZlNKOWI5QTY2Unk2L2pRMVF1NFFNYlVIdStGR1ZWc2lQelZLV3N1S0d4Y2ZZR3RIcWcwTXAKd2svb1dBZnpHT3RwMzByZXRaeWVZNHpEVGlnMVp0NHdiOHdpZHdJREFRQUJBb0lCQUN1SnlrcVQ3OFVyVHE5MApvaVlTYlRrZkVUQ1N0eFNHWXFvbUx3akk0VWpQVGRKVGFrS2tyd01RUDZVZzNiTEV0MWxyc2huWGFFOEk3S056CnNVQXhhTnhndnBHYXVaSWc4eUpxR1V1NFp0Y1hISmVSQWZnY2c5eGltUURabUoxdXJkU3NITU5Ia0p3aWFQTFUKY0htd05XWXp3Z2NFSXQ1a25aVUdNR2svRXQ3KzZLMHgwbm5yU1ZBVHg3OUtVMFpkN1pKNU1rd3ZuNXJkWGNDQgpLRFVRM2dqWHFUQWk5eHZjcUQyaHA3QXJIeVNJamtDL3V5c2FpcDNDVFhNcnlrTnIya0xkbHpYeDlNQ1hEY0tsCjZTZHRXbE5kQjk3bitjS1o0RmpIZWlEUkZybzRkOTBzRFpFS0wvMUNtMXliQWU0S21WSVJhNmp2MVljY1Vpb0QKSmxTdVJ4a0NnWUVBLzNTc1cyWVRBdXQrK2l4QlRPdVN0bzJmcTIwdnVVVFczVk8vMG8xaUtheExYMHdpdzYzRQo5TzE2TW5zZ1ZnQVBQY1c0ckN6K1ZReWNZbkRwM2FGTmI0Yy9aSVFGM1JUaE93V0tiOTVEZjhKRGwwSVZhZU5yCkx5ZDJ0czRIOVgvd1UydlYreFV3cGlBVU1TTTlMaVR5NUdsY1NQVWxkTUNNRUJqeElMUnlZMDBDZ1lFQXc5YkgKSDd3VU5GemtDSVRBWjVPYzl2dXFNQi9wNU5aRDErYUNPNkVPSDQycmkyM1VhM2VXc2YxMk1mL3h2ZVlWYzR4UQpDWk5QbjJKTjZOQWpqSnVvazlGbVN3N0xXZVVid1dIU2V6UnRROS9iSUFJbkRDRkhsM2Z0YjVEaEhrZzFuK1kvCmJQZ3o4bmh0b3ZDa1kweVQyTU5Tb0IyalBvTHEzVUZaYlZ4OGN0TUNnWUJRVHVLY2ZUTisySC83c0F2N1haZXEKOGt6Ky9IMWpWaVBpUXFEc1ZXeEZ3NWVTWndJSzJFY3g1TEpreWxaNUV0MjN3ci95eU5aUDhIMzlhSmZ0Qi9lcgpGeTZ6cjltVURpdGNmYnB1dnNZamxQUGd5bkttN2tyVThTZ2VBaGw0Y1hjaEVxYWJuNmJDb3hVVitZa1RSNlJnCmNFc0YyS09rMTU5d3RCYWgvSGgxaFFLQmdRQy9GL3VUVnNYc1ZsdllpQmpxdUpvb1VtZTlyOVplQ2ttSENaRTQKdUMzODBoTjY2UCttb2JtMUVscmI3U0FwS2JMeTNnNVhXWndQTFRCU3BZNmFyR1R4WUJuTjBiRFJsZ0xnVHlEQQpRZWNBblJYSGhQSXZIdVlwd2NjNDN3a2JzR0JMRjdQNkU3TTB2UmhXTHpScEJKY2JvM1FqY3VnUW5sU284eFJjCjV5czBLd0tCZ0hqYzRGc3hIMlZOeERFbnlxM1VWOTUwdXkzRWQrdmd3R2V4Smh6VTZUc1ZFdnRqY09WbmJ0M0oKMGEvVEVGVFZranAvVmExQ1pXVjM0MDlEK1lNc3Y1QTZmN2E0TEVJN2Y3SE0rNG1ZTGJDcVZkOU5rZU1XVSttdQpEbmptWUdFc0k2UHhYbFRTdlNsOE1qOVZoOHFxTkUvdDNSd0tkcGluMGNYNUlsajFZMEF4Ci0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
- name: minikube
user:
client-certificate: C:\Users\yourname\.minikube\profiles\minikube\client.crt
client-key: C:\Users\yourname\.minikube\profiles\minikube\client.key

In use, you can switch contexts like this:

k config get-contexts
#CURRENT NAME CLUSTER AUTHINFO NAMESPACE
# docker-desktop docker-desktop docker-desktop
# kind-kind kind-kind kind-kind
#* kubernetes-admin@kubernetes kubernetes kubernetes-admin
# minikube minikube minikube default

k config use-context kubernetes-admin@kubernetes

Finally, if after a while the default tooling feels tedious, you can mix in k9s for management.
Setting a skin:

cd ~/.k9s
wget https://raw.githubusercontent.com/derailed/k9s/master/skins/dracula.yml
mv dracula.yml skin.yml