kubernetes-v1.18.0 Addons quickstart — Jevic


2020/06/03 Kubernetes

The Kubernetes installation itself is omitted here; see the earlier documents and the matching version of the deployment scripts for details:

kubernetes 1.13.8 manual binary deployment

kubernetes manual binary installation scripts:

Branch and version information:

  • v1.18.0 (master)
  • v1.13.8
  • v1.14.0 (startup config files only)

Adding node roles

  • After the initial installation, nodes carry no role labels:
# kubectl get node
NAME   STATUS   ROLES    AGE    VERSION
k1     Ready    <none>   1h   v1.18.0
k2     Ready    <none>   1h   v1.18.0
k3     Ready    <none>   1h   v1.18.0
kubectl label nodes k1 node-role.kubernetes.io/master=
kubectl get node --show-labels
kubectl label nodes k2 node-role.kubernetes.io/node=
kubectl label nodes k3 node-role.kubernetes.io/node=
# In most cases the master should not accept regular workloads
kubectl taint nodes k1 node-role.kubernetes.io/master=true:NoSchedule

Allow the master to run pods (remove the taint; note the trailing "-"):
kubectl taint nodes k1 node-role.kubernetes.io/master-

Prevent the master from running pods:
kubectl taint nodes k1 node-role.kubernetes.io/master=:NoSchedule
# kubectl get node
NAME   STATUS   ROLES    AGE    VERSION
k1     Ready    master   1h   v1.18.0
k2     Ready    node     1h   v1.18.0
k3     Ready    node     1h   v1.18.0

calico

https://docs.projectcalico.org/getting-started/kubernetes/quickstart
https://docs.projectcalico.org/getting-started/kubernetes/self-managed-onprem/onpremises

coredns

curl -O https://raw.githubusercontent.com/coredns/deployment/master/kubernetes/coredns.yaml.sed
curl -O https://raw.githubusercontent.com/coredns/deployment/master/kubernetes/deploy.sh

./deploy.sh -i 10.254.0.2 -d cluster.local. > coredns.yaml
sed -i 's#coredns/coredns:1.6.7#registry.aliyuncs.com/google_containers/coredns:1.6.7#g' coredns.yaml
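The registry swap above can be sanity-checked locally before applying the manifest; a sketch against a scratch file (the file path here is a stand-in, not the real coredns.yaml):

```shell
# Hypothetical sanity check for the registry substitution, run on a scratch copy.
printf '        image: coredns/coredns:1.6.7\n' > /tmp/coredns-image-check.yaml
sed -i 's#coredns/coredns:1.6.7#registry.aliyuncs.com/google_containers/coredns:1.6.7#g' /tmp/coredns-image-check.yaml
# The image line should now point at the Aliyun mirror.
grep 'image:' /tmp/coredns-image-check.yaml
```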

dns hpa

https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/dns-horizontal-autoscaler
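The autoscaler (cluster-proportional-autoscaler) is tuned through a ConfigMap; a sketch of linear mode with its documented default parameters (ConfigMap name follows the addon's manifest, adjust the values for your cluster):

```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: dns-autoscaler
  namespace: kube-system
data:
  # replicas = max(ceil(cores/coresPerReplica), ceil(nodes/nodesPerReplica)), floored at min
  linear: |-
    {
      "coresPerReplica": 256,
      "nodesPerReplica": 16,
      "min": 1,
      "preventSinglePointFailure": true
    }
```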

metrics-server

Ways to obtain the deployment YAML files:

  • the official GitHub repository
  • the Kubernetes source tree, under cluster/addons/metrics-server
  • change the image to an Aliyun mirror, then run kubectl apply to deploy
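On clusters deployed from binaries the kubelet serving certificates are often not signed by the cluster CA, so the metrics-server container usually needs a couple of extra arguments. A sketch of the relevant fragment of the Deployment (the flags are standard metrics-server options; the Aliyun image tag is an example, use the one matching your manifest):

```yaml
      containers:
      - name: metrics-server
        # Example mirror image; pick the tag that matches your manifest.
        image: registry.aliyuncs.com/google_containers/metrics-server-amd64:v0.3.6
        args:
        # Skip verification of kubelet serving certs - test environments only.
        - --kubelet-insecure-tls
        # Prefer node IPs over names that may not resolve inside the cluster.
        - --kubelet-preferred-address-types=InternalIP,Hostname,InternalDNS,ExternalDNS,ExternalIP
```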

Troubleshooting

  • metrics-server 401 Unauthorized
### Temporary workaround: bind the anonymous user to cluster-admin - not recommended;
kubectl create clusterrolebinding the-boss --user system:anonymous --clusterrole cluster-admin
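The 401 usually means the apiserver's aggregation layer is not configured, and the anonymous binding above only masks that. The kube-apiserver flags involved look like this (the certificate paths are examples for this layout, not fixed values):

```
--requestheader-client-ca-file=/etc/kubernetes/ssl/front-proxy-ca.pem
--requestheader-allowed-names=front-proxy-client
--requestheader-extra-headers-prefix=X-Remote-Extra-
--requestheader-group-headers=X-Remote-Group
--requestheader-username-headers=X-Remote-User
--proxy-client-cert-file=/etc/kubernetes/ssl/front-proxy-client.pem
--proxy-client-key-file=/etc/kubernetes/ssl/front-proxy-client-key.pem
```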

nginx-ingress

https://kubernetes.github.io/ingress-nginx/deploy/#bare-metal
wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/baremetal/deploy.yaml

Switch the controller to host network mode:

template:
  spec:
    hostNetwork: true
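With hostNetwork: true the pod inherits the node's resolv.conf; if the controller also needs to resolve cluster services, set dnsPolicy as well (a standard Pod spec field):

```yaml
template:
  spec:
    hostNetwork: true
    # Keep cluster DNS working while on the host network.
    dnsPolicy: ClusterFirstWithHostNet
```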

dashboard v2.0


apiserver configuration

# grep basic /etc/kubernetes/apiserver
		--basic-auth-file=/etc/kubernetes/basic-auth.csv \

User/password file:

# cat /etc/kubernetes/basic-auth.csv
admin,admin,1
password123,test,2
  • Format: password,user,userID
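A quick sketch for generating the file and validating its layout (the entries are the examples above; the awk check is illustrative, not part of Kubernetes):

```shell
# Illustrative check: every row of a basic-auth file must be password,user,uid
# with a numeric uid (an optional fourth field may list group names).
cat > /tmp/basic-auth.csv <<'EOF'
admin,admin,1
password123,test,2
EOF
awk -F, 'NF < 3 || $3 !~ /^[0-9]+$/ { bad = 1 } END { exit bad }' /tmp/basic-auth.csv \
  && echo "format ok"
```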

dashboard yaml

.....
template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
        - name: kubernetes-dashboard
          image: reg.yl.com/jk8s/kubernetesui/dashboard:v2.0.1
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            - --authentication-mode=basic
....

Authorize the login user

kubectl create clusterrolebinding login-on-dashboard-admin --clusterrole=cluster-admin --user=admin
kubectl get clusterrolebinding login-on-dashboard-admin

ingress domain configuration

Add the TLS certificate:
# kubectl create secret tls dashboard-secret-jk8s --namespace=kubernetes-dashboard --cert jevic.com.pem --key jevic.com.key
nginx ingress configuration:
# cat ui-ing.yml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
  annotations:
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  tls:
  - hosts:
    - jk8s.jevic.com
    secretName: dashboard-secret-jk8s
  rules:
  - host: jk8s.jevic.com
    http:
      paths:
      - path: /
        backend:
          serviceName: kubernetes-dashboard
          servicePort: 443

lxcfs (not recommended for production)

https://yq.aliyun.com/articles/566208

https://github.com/lxc/lxcfs
yum install -y fuse fuse-libs fuse-devel libtool
./bootstrap.sh && ./configure && make && make install
lxcfs /var/lib/lxcfs &>/dev/null &


git clone https://github.com/denverdino/lxcfs-admission-webhook

Notes

    1. The lxcfs service should be compiled and installed directly on the host with make, rather than run as a container via a DaemonSet.
    2. If the lxcfs service restarts abnormally, exec-ing into existing Pods leaves free/top and similar system commands broken, and monitoring tools such as prometheus can no longer collect Pod metrics; the Pods' own workloads keep running, however.
    3. When lxcfs runs directly on the host and the host reboots abnormally, everything under /var/lib/lxcfs must be cleaned out first; otherwise the service will not start and containers cannot run normally either. But this causes another problem: once that data is deleted, the mount information of previously running Pods is lost entirely, which brings back the symptoms described in note 2.
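Notes 1 and 3 can be captured in a systemd unit, so the host-installed service clears stale state on boot. A hypothetical sketch (the unit contents are an assumption for this setup, not shipped by the lxcfs project):

```
[Unit]
Description=FUSE filesystem for LXC
After=local-fs.target

[Service]
Type=simple
# Unmount and clear stale state left by an unclean reboot (see note 3);
# the leading "-" lets the cleanup steps fail harmlessly on a clean boot.
ExecStartPre=-/usr/bin/fusermount -u /var/lib/lxcfs
ExecStartPre=-/bin/rm -rf /var/lib/lxcfs
ExecStartPre=/bin/mkdir -p /var/lib/lxcfs
ExecStart=/usr/local/bin/lxcfs /var/lib/lxcfs
Restart=on-failure

[Install]
WantedBy=multi-user.target
```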

kube-debug (optional)

https://github.com/aylei/kubectl-debug

Add the configuration:
https://github.com/aylei/kubectl-debug#configuration
