好方
Published on 2023-09-01

KubeSphere Deployment

References

AutoK3s installation: Feature Introduction | Rancher Docs

KubeSphere installation: kubesphere/README_zh.md at master · kubesphere/kubesphere (github.com)

Creating a K3d cluster: Create K3d Cluster | Rancher Docs

K8s parameter tuning: Some optimizations before installing Kubernetes - SKlinux server maintenance

Environment

k8s v1.26.4, created with autok3s

Creating the container cluster and tuning parameters

Some kernel parameters need tuning first, or the KubeSphere installation will report errors; the fs.file-max setting and the hard/soft nofile limits below are the ones that matter most.

➜  autok3s git:(master) ✗ cat install.sh
#!/bin/bash
curl -sS https://rancher-mirror.rancher.cn/autok3s/install.sh  | INSTALL_AUTOK3S_MIRROR=cn sh

# uninstall
#/usr/local/bin/autok3s-uninstall.sh

cat sys-limit.conf >> /etc/security/limits.conf && cat kernel-sysctl.conf >> /etc/sysctl.conf

➜  autok3s git:(master) ✗ cat sys-limit.conf
*   hard    nofile  65536
*   soft    nofile  65536
*   hard    nproc   65536
*   soft    nproc   65536
➜  autok3s git:(master) ✗
➜  autok3s git:(master) ✗ cat kernel-sysctl.conf
net.ipv4.tcp_keepalive_intvl=30
net.ipv4.tcp_keepalive_probes=10
net.ipv6.conf.all.disable_ipv6=1
net.ipv6.conf.default.disable_ipv6=1
net.ipv6.conf.lo.disable_ipv6=1
net.ipv4.neigh.default.gc_stale_time=120
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
net.ipv4.conf.default.arp_announce=2
net.ipv4.conf.lo.arp_announce=2
net.ipv4.conf.all.arp_announce=2
net.ipv4.ip_local_port_range = 45001 65000
net.ipv4.ip_forward=1
net.ipv4.tcp_max_tw_buckets=6000
net.ipv4.tcp_syncookies=1
net.ipv4.tcp_synack_retries=2
net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-iptables=1
net.netfilter.nf_conntrack_max=2310720
net.ipv6.neigh.default.gc_thresh1=8192
net.ipv6.neigh.default.gc_thresh2=32768
net.ipv6.neigh.default.gc_thresh3=65536
net.core.netdev_max_backlog=16384
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_max_syn_backlog = 8096
net.core.somaxconn = 32768
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=524288
fs.file-max=52706963
fs.nr_open=52706963
kernel.pid_max = 4194303
net.bridge.bridge-nf-call-arptables=1
vm.swappiness=0
vm.overcommit_memory=1
vm.panic_on_oom=0
vm.max_map_count = 262144
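Appending these settings is not enough by itself; they still have to be loaded. Below is a minimal preflight sketch for the values KubeSphere's checks are most sensitive to. It checks a stand-in file so it can run anywhere; on the real node you would point it at /etc/sysctl.conf and reload as root with sysctl -p.

```shell
#!/bin/sh
# Preflight sketch for the settings KubeSphere's checks are most sensitive to.
# CONF points at a stand-in file so the script runs anywhere; on the node
# itself you would set CONF=/etc/sysctl.conf and reload with `sysctl -p`.
set -e
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
fs.file-max=52706963
fs.nr_open=52706963
fs.inotify.max_user_watches=524288
EOF

# Fail loudly if any of the critical keys is missing from the file.
for key in fs.file-max fs.nr_open fs.inotify.max_user_watches; do
    grep -q "^${key}=" "$CONF" || { echo "missing: ${key}" >&2; exit 1; }
done
echo "sysctl preflight ok"
rm -f "$CONF"
```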

Environment setup

The script is below. Note that for convenient web access later, I mapped host port 30080 to port 80 on the cluster's server node (k3d's --ports host:container@node syntax); 30443 maps to 443, and 33444 is reserved as a spare.

➜  autok3s git:(master) ✗ cat run.sh
#!/bin/bash
autok3s create -p k3d create --name dev --master 1 --worker 2 --image docker.io/rancher/k3s:v1.26.4-k3s1 --ports 30443:443@server:0 --ports 30080:80@server:0 --ports 33444:33444@server:0 --ports 36443:6443@server:0

Installation
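The install commands themselves did not survive in this post. As a hedged sketch: the KubeSphere README installs by applying two manifests from a ks-installer release; the v3.4.1 tag below is an assumption, so match it to the release you actually want. Since applying needs a live cluster, the sketch only prints the URLs and shows the kubectl lines as comments.

```shell
#!/bin/sh
# Sketch of the install step (the URLs follow the ks-installer release
# layout; the version tag is an assumption). On a machine with kubeconfig
# access you would run:
#   kubectl apply -f $BASE/kubesphere-installer.yaml
#   kubectl apply -f $BASE/cluster-configuration.yaml
set -e
KS_VERSION=v3.4.1
BASE="https://github.com/kubesphere/ks-installer/releases/download/${KS_VERSION}"
echo "installer manifest: ${BASE}/kubesphere-installer.yaml"
echo "cluster config:     ${BASE}/cluster-configuration.yaml"
```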

Verification

Confirm KubeSphere status

Check the ks-install pod; when it is running normally the output looks like the following
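The way the KubeSphere README suggests to watch the install is to tail the ks-install pod's logs. The command is printed below rather than executed, since it needs a live cluster; the app=ks-install label is the one the 3.x installer pod carries.

```shell
#!/bin/sh
# Emit the log-follow command from the KubeSphere README; run the printed
# command on a machine with cluster access.
cat <<'EOF'
kubectl logs -n kubesphere-system \
  $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
EOF
```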

The cluster pods after installation are as follows

KubeSphere's console svc is as follows
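For reference, the console service a default install creates is a NodePort service named ks-console in kubesphere-system. The shape below is illustrative, with the documented default port numbers; confirm against kubectl get svc -n kubesphere-system ks-console -o yaml on your own cluster.

```yaml
# Illustrative default shape of the KubeSphere console service.
apiVersion: v1
kind: Service
metadata:
  name: ks-console
  namespace: kubesphere-system
spec:
  type: NodePort
  ports:
  - port: 80          # cluster-internal port a proxy can target
    targetPort: 8000  # ks-console container port
    nodePort: 30880   # default external NodePort
```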

Proxying KubeSphere through Traefik

Check Traefik's entrypoints
➜  ~ kk get deploy -n kube-system traefik -o yaml

You can see Traefik's four entrypoints below; web and websecure are the entrypoints for HTTP and HTTPS traffic respectively
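For context, the Traefik deployment bundled with k3s typically declares its entrypoints with flags like the following; this list is illustrative, and the authoritative one is whatever the deploy output above shows.

```yaml
# Typical entrypoint flags in k3s's packaged Traefik; web/websecure carry
# HTTP/HTTPS traffic, while traefik and metrics are internal.
- --entrypoints.metrics.address=:9100/tcp
- --entrypoints.traefik.address=:9000/tcp
- --entrypoints.web.address=:8000/tcp
- --entrypoints.websecure.address=:8443/tcp
```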

A bit of Traefik background helps here: Traefik's overall traffic flows, in order, through entrypoints, routers, middlewares, and services. A diagram (source: Routing & Load Balancing Overview | Traefik Docs) is shown below.

Create the IngressRoute

The YAML is below. Note that entryPoints must name the HTTP or HTTPS entrypoint found above (i.e. web or websecure), and services points at the KubeSphere console service found above.

➜  kubesphere-3.4 git:(master) ✗ cat ingressroute.yaml
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  labels:
    createdby: fang
  name: ks
  namespace: kubesphere-system
spec:
  entryPoints:
  - web
  routes:
  - kind: Rule
    match: Host(`ks.dev.home`)
    services:
    - name: ks-console
      port: 80

After creation it looks like this

Verify the Traefik proxy

Write the host from the Host() rule above into your hosts file.
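For example, assuming the cluster node (or the tunnel endpoint) answers at 192.168.1.10, a placeholder IP, the hosts entry would be:

```
192.168.1.10    ks.dev.home
```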

Open the domain in a browser; the result is shown below. To explain: the k8s cluster runs on a server at home and is exposed through a cloud server via NAT traversal, which effectively forwards port 30080 on the cloud server to port 30080 on the home server.

