Sealos + tke-auth: Lightweight TKEStack Installation

Installing Kubernetes with Sealos

Reference: https://www.sealos.io/zh-Hans/docs/Intro

Install sealos

wget http://hub.fastgit.org/labring/sealos/releases/download/v4.0.0/sealos_4.0.0_linux_amd64.tar.gz \
&& tar zxvf sealos_4.0.0_linux_amd64.tar.gz sealos \
&& chmod +x sealos && mv sealos /usr/bin

Create the Clusterfile

  1. If a cluster is already running, reset it first: sealos reset
  2. Generate the Clusterfile:
    Single node:
sealos gen labring/kubernetes:v1.24.0 \
labring/calico:v3.22.1 \
--masters xx.xxx.195.138 \
--port 36000 \
--passwd xxx \
> Clusterfile

Multi-node:

sealos gen labring/kubernetes:v1.24.0 \
labring/calico:v3.22.1 \
--masters 192.168.0.2 \
--nodes 192.168.0.5 \
--passwd xxx \
> Clusterfile
  3. Append the calico Config section to the Clusterfile, and keep the values of networking.podSubnet and spec.data.spec.calicoNetwork.ipPools.cidr in sync:
apiVersion: apps.sealos.io/v1beta1
kind: Cluster
metadata:
  creationTimestamp: null
  name: default
spec:
  hosts:
  - ips:
    - xx.xxx.195.138:36000
    roles:
    - master
    - amd64
  image:
  - labring/kubernetes:v1.24.0
  - labring/calico:v3.22.1
  ssh:
    passwd: xxx
    pk: /root/.ssh/id_rsa
    port: 36000
    user: root
status: {}
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
networking:
  podSubnet: 10.160.0.0/12
---
apiVersion: apps.sealos.io/v1beta1
kind: Config
metadata:
  name: calico
spec:
  path: manifests/calico.yaml
  data: |
    apiVersion: operator.tigera.io/v1
    kind: Installation
    metadata:
      name: default
    spec:
      # Configures Calico networking.
      calicoNetwork:
        # Note: The ipPools section cannot be modified post-install.
        ipPools:
        - blockSize: 26
          # Note: Must be the same as podCIDR
          cidr: 10.160.0.0/12
          encapsulation: IPIP
          natOutgoing: Enabled
          nodeSelector: all()
        nodeAddressAutodetectionV4:
          interface: "eth.*|en.*"
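The podSubnet and the Calico ipPools cidr above must stay identical. A minimal sanity check, written against a stand-in file so the snippet is self-contained (point the awk lines at your real Clusterfile instead):

```shell
# Stand-in Clusterfile fragment for illustration only; use your real Clusterfile.
cat > /tmp/Clusterfile.demo <<'EOF'
networking:
  podSubnet: 10.160.0.0/12
      ipPools:
      - blockSize: 26
        cidr: 10.160.0.0/12
EOF

# Extract both values and compare them.
pod_subnet=$(awk '/podSubnet:/ {print $2}' /tmp/Clusterfile.demo)
calico_cidr=$(awk '/cidr:/ {print $2}' /tmp/Clusterfile.demo)
if [ "$pod_subnet" = "$calico_cidr" ]; then
  echo "CIDRs match: $pod_subnet"
else
  echo "CIDR mismatch: podSubnet=$pod_subnet calico=$calico_cidr"
fi
```

If the two values differ, pods receive addresses that Calico does not route.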

Start the cluster

sealos apply -f Clusterfile

Allow pods on the master node

By default, Kubernetes refuses to schedule pods on master nodes for safety reasons, so the taints have to be removed:

kubectl taint nodes --all node-role.kubernetes.io/master-
kubectl taint nodes --all node-role.kubernetes.io/control-plane-

Install tke-auth

Reference: https://tkestack.github.io/web/zh/blog/2022/02/23/tkestack-installer-chart/

Install Helm

https://helm.sh/zh/docs/intro/install/

Useful helm commands:

# list all releases
helm list --all
# uninstall a release
helm uninstall tke-gateway
# upgrade (or install if absent)
helm upgrade --install -f bin/auth-chart-values.yaml tke-auth tke-auth/

Create the namespace

The tke-auth, tke-platform, and tke-gateway charts must run in a dedicated namespace, so create it first:

kubectl create namespace tke

Install the charts

The TKEStack project ships a binary executable that generates the values files for the tke-auth, tke-platform, and tke-gateway charts.

Pull the TKEStack source:

git clone --branch v1.9.0 --depth=1 https://github.com/tkestack/tke.git

The charts/bin directory of the TKEStack project contains the executable bin and a customConfig.yaml file to fill in. Parameters whose comments say "Required" must be set; the rest are optional and are filled with defaults when left empty. customConfig.yaml looks like this:

# Required: etcd endpoint, e.g. https://172.19.0.2:2379
etcd:
  host: https://xx.xxx.195.138:2379
# Required: server IPs, as an array
serverIPs:
  - xx.xxx.195.138
# Domain names used for access, as an array
dnsNames:
  - tke.gateway
# Required: path to the cluster's front-proxy-ca.crt, default /etc/kubernetes/pki/front-proxy-ca.crt
frontProxyCaCrtAbsPath: /etc/kubernetes/pki/front-proxy-ca.crt
# Required: path to the cluster etcd ca.crt, default /etc/kubernetes/pki/etcd/ca.crt
etcdCrtAbsPath: /etc/kubernetes/pki/etcd/ca.crt
# Required: path to the cluster etcd ca.key, default /etc/kubernetes/pki/etcd/ca.key
etcdKeyAbsPath: /etc/kubernetes/pki/etcd/ca.key
tke-auth:
  api:
    # Required
    replicas: 1
    # Required
    image: tkestack/tke-auth-api-amd64:74592a3bceb5bebca602bea21aaebf78007a3bb2
    # Required: array of auth redirect hosts, including the cluster server IP (required), the tke-gateway domain (optional), the cluster HA VIP (optional), and the cluster's public domain (optional)
    redirectHosts:
      - xx.xxx.195.138
    enableAudit:
    # NodePort exposed by tke-auth-api on the node, default 31138
    nodePort:
    # Tenant ID of the TKE cluster, default "default"
    tenantID:
    # Secret for OIDC authentication, auto-generated by default
    oIDCClientSecret:
    # Authentication username, default admin
    adminUsername:
  controller:
    # Required
    replicas: 1
    # Required
    image: tkestack/tke-auth-controller-amd64:74592a3bceb5bebca602bea21aaebf78007a3bb2
    # Username of the TKE cluster, default admin
    adminUsername:
    # Password of the TKE cluster, auto-generated by default
    adminPassword:
tke-platform:
  # Required: the VIP, or a publicly reachable cluster IP
  publicIP: xx.xxx.195.138
  metricsServerImage: tkestack/metrics-server:v0.3.6
  addonResizerImage: tkestack/addon-resizer:1.8.11
  api:
    # Required
    replicas: 1
    # Required
    image: tkestack/tke-platform-api-amd64:bc48bed59bff2022d87db5e1484481715357ee7c
    enableAuth: true
    enableAudit:
    # OIDC client id, default "default"
    oIDCClientID:
    # OIDC issuer_url, default https://tke-auth-api/oidc
    oIDCIssuerURL:
    # Whether to use the OIDC CA; disabled (empty) by default
    useOIDCCA:
  controller:
    # Required
    replicas: 1
    # Required
    providerResImage: tkestack/provider-res-amd64:v1.21.4-1
    # Required
    image: tkestack/tke-platform-controller-amd64:bc48bed59bff2022d87db5e1484481715357ee7c
    # Default docker.io
    registryDomain:
    # Default tkestack
    registryNamespace:
    # Monitoring storage type, default influxdb
    monitorStorageType:
    # Monitoring storage address: the TKE cluster master IP plus port 8086
    monitorStorageAddresses:
tke-gateway:
  # Required
  image: tkestack/tke-gateway-amd64:bc48bed59bff2022d87db5e1484481715357ee7c
  # Default docker.io
  registryDomainSuffix:
  # Tenant ID of the TKE cluster, default "default"
  tenantID:
  # Secret for OIDC authentication, auto-generated by default
  oIDCClientSecret:
  # Whether to use a self-signed certificate, default true
  selfSigned: true
  # Third-party certificate, required when selfSigned is false
  serverCrt:
  # Third-party certificate key, required when selfSigned is false
  serverKey:
  enableAuth: true
  enableBusiness:
  enableMonitor:
  enableRegistry:
  enableLogagent:
  enableAudit:
  enableApplication:
  enableMesh:
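Before generating the chart values, it is worth confirming that the certificate paths filled into customConfig.yaml actually exist. check_files below is a hypothetical helper, not part of TKEStack; the paths are the kubeadm defaults from the config above:

```shell
# Hypothetical helper: verify that the cert/key files referenced in
# customConfig.yaml exist before running the values generator.
check_files() {
  missing=0
  for f in "$@"; do
    [ -f "$f" ] || { echo "missing: $f"; missing=1; }
  done
  if [ "$missing" -eq 0 ]; then
    echo "all files present"
  else
    echo "some files missing"
  fi
}

check_files /etc/kubernetes/pki/front-proxy-ca.crt \
            /etc/kubernetes/pki/etcd/ca.crt \
            /etc/kubernetes/pki/etcd/ca.key
```

On a kubeadm-managed master these three files exist by default; any "missing" line means the corresponding customConfig.yaml path needs correcting before the generator is run.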

Switch to the project's charts/ directory and install the charts:

# install tke-auth
helm install -f bin/auth-chart-values.yaml tke-auth tke-auth/
# install tke-platform
helm install -f bin/platform-chart-values.yaml tke-platform tke-platform/
# install tke-gateway
helm install -f bin/gateway-chart-values.yaml tke-gateway tke-gateway/

If the following commands list api-resources for all three components, the charts were installed successfully:

kubectl api-resources | grep tke
kubectl get svc -n tke

Add labels

View the node labels:

kubectl get nodes --show-labels

Inspect the pod and service labels:

kubectl get pods -n tke
kubectl describe -n tke pod tke-gateway-42pzw
kubectl get svc -n tke
kubectl describe svc tke-gateway -n tke

Add the labels:

kubectl label node vm-195-138-centos app.kubernetes.io/managed-by=Helm
kubectl label node vm-195-138-centos node-role.kubernetes.io/master=

vm-195-138-centos Ready control-plane,master 96m v1.24.0 app.kubernetes.io/managed-by=Helm,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=vm-195-138-centos,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.kubernetes.io/exclude-from-external-load-balancers=

Modify the cluster apiserver configuration

Create a file tke-authz-webhook.yaml under /etc/kubernetes/pki/ with the following content (change the IP address in the cluster.server field to the master's IP):

apiVersion: v1
kind: Config
clusters:
  - name: tke
    cluster:
      server: https://xx.xxx.195.138:31138/auth/authz
      insecure-skip-tls-verify: true
users:
  - name: admin-cert
    user:
      client-certificate: /etc/kubernetes/pki/webhook.crt
      client-key: /etc/kubernetes/pki/webhook.key
current-context: tke
contexts:
- context:
    cluster: tke
    user: admin-cert
  name: tke

Copy the webhook.crt and webhook.key generated by the binary (located in the data directory next to the binary) into /etc/kubernetes/pki/ as well.

Then edit /etc/kubernetes/manifests/kube-apiserver.yaml in the cluster and add the following two flags under spec.containers.command:

# if these flags already exist, change them to the following
- --authorization-mode=Node,RBAC,Webhook
- --authorization-webhook-config-file=/etc/kubernetes/pki/tke-authz-webhook.yaml

Access TKEStack

Open http://xx.xxx.195.138/tkestack. A login page appears; enter the adminUsername and adminPassword configured earlier. If they were not set, the default username is admin and the default password is YWRtaW4=.
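The default password YWRtaW4= is just base64-encoded text; decoding it gives the string to type into the login form:

```shell
# Decode the default TKEStack password (base64 of "admin").
# -d works with GNU coreutils; on macOS use base64 -D.
echo 'YWRtaW4=' | base64 -d   # prints: admin
```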

Import the master cluster

A TKEStack set up this way usually starts with no clusters, so one has to be added manually:

  1. Download the ~/.kube/config file from the master node.
  2. Import the cluster in the TKEStack UI by uploading the config file.
  3. Modify the API Server address.