k8s
2026.02.10
k8s single-machine deployment notes:
Install kubernetes
sudo yum install kubernetes
Install kubectl
```shell
# https://kubernetes.io/zh-cn/docs/home/
curl -LO https://dl.k8s.io/release/v1.35.0/bin/linux/amd64/kubectl   # download
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl  # install
kubectl version --client                                             # test
```
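Before installing, it is worth validating the downloaded binary against its published checksum; the kubernetes.io install page documents a matching `kubectl.sha256` file served next to each release binary. A sketch:

```shell
# fetch the checksum published alongside the binary and verify the download
curl -LO "https://dl.k8s.io/release/v1.35.0/bin/linux/amd64/kubectl.sha256"
echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check
# sha256sum prints "kubectl: OK" when the file is intact
```

If the check fails, re-download before running `sudo install`.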
Install kind (similar to kubectl)
```shell
# https://kubernetes.io/zh-cn/docs/home/
# Download
# For AMD64 / x86_64
[ $(uname -m) = x86_64 ] && curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.31.0/kind-linux-amd64
# For ARM64
[ $(uname -m) = aarch64 ] && curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.31.0/kind-linux-arm64
chmod +x ./kind
sudo mv ./kind /usr/local/bin/kind
# Install
sudo install -o root -g root -m 0755 kind /usr/local/bin/kind
# Test
kind
```
minikube is even simpler and more convenient to operate than kind:
```shell
curl -LO https://github.com/kubernetes/minikube/releases/latest/download/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube && rm minikube-linux-amd64
```
- The attempt ended in failure; the exact cause is unclear.
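For reference, a typical minikube first run looks like the following; the docker driver is an assumption (it is the common default on a Linux host with Docker installed), and on this machine the attempt failed regardless:

```shell
# start a single-node cluster (driver choice is an assumption)
minikube start --driver=docker
# check cluster health, then talk to it with kubectl
minikube status
kubectl get pods -A
```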
In the end, kind was used to start the cluster:
```shell
docker pull kindest/node:v1.18.2
kind create cluster --image=kindest/node:v1.18.2
```
```text
 ✓ Ensuring node image (kindest/node:v1.18.2) 🖼
 ✗ Preparing nodes 📦
ERROR: failed to create cluster: command "docker run --name kind-control-plane --hostname kind-control-plane --label io.x-k8s.kind.role=control-plane --privileged --security-opt seccomp=unconfined --security-opt apparmor=unconfined --tmpfs /tmp --tmpfs /run --volume /var --volume /lib/modules:/lib/modules:ro -e KIND_EXPERIMENTAL_CONTAINERD_SNAPSHOTTER --detach --tty --label io.x-k8s.kind.cluster=kind --net kind --restart=on-failure:1 --init=false --cgroupns=private --publish=127.0.0.1:36039:6443/TCP -e KUBECONFIG=/etc/kubernetes/admin.conf kindest/node:v1.18.2" failed with error: exit status 125
Command Output: unknown flag: --cgroupns
See 'docker run --help'.
```
The image step passes, but the --cgroupns error indicates the kernel version is too old.
Fixing that requires a system reboot, so the test-environment machine is left alone for now.
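The `unknown flag: --cgroupns` failure can be checked for up front: Docker only gained `--cgroupns` in 20.10, and cgroup namespaces need kernel 4.6 or newer. A quick diagnostic, assuming the 20.10+ `docker info` template fields:

```shell
# kernel version (cgroup namespaces need >= 4.6)
uname -r
# docker version (--cgroupns exists from Docker 20.10 on)
docker --version
# cgroup driver and cgroup version as Docker sees them
docker info --format '{{.CgroupDriver}} cgroup-v{{.CgroupVersion}}'
```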
Original article: https://blog.csdn.net/a1369760658/article/details/136618992
```shell
# 1. Update the system so all installed packages are current
sudo yum update
# 2. Install the elrepo repository, which provides the latest stable kernels
sudo rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
sudo yum install -y https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm
# 3. Install the new kernel (e.g. the latest stable version, kernel-ml)
sudo yum --enablerepo=elrepo-kernel install kernel-ml -y
# 4. Regenerate the GRUB boot menu
sudo grub2-mkconfig -o /boot/grub2/grub.cfg
# 5. Change the default boot entry so the new kernel boots by default
sudo grub2-set-default 0
# 6. Reboot the system and confirm the new kernel installed and took effect
sudo reboot
```
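After the reboot, it is worth confirming which kernel actually booted and which entries GRUB generated; a sketch for the CentOS 7 grub2 layout assumed above:

```shell
# kernel now running (should be the new kernel-ml version)
uname -r
# boot entries GRUB generated; index 0 is what grub2-set-default 0 selected
awk -F\' '/^menuentry /{print $2}' /boot/grub2/grub.cfg
```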
Following this article solved the "kernel version too old" problem:
https://blog.csdn.net/zyplanke/article/details/146979876
After the kernel upgrade, the previously installed Docker no longer ran.
Next, try upgrading to Docker 20+ to see whether that fixes it.
Installing Docker 20.10+ on CentOS:
```shell
# Remove old versions
sudo yum remove docker docker-client docker-client-latest docker-common docker-latest docker-latest-logrotate docker-logrotate docker-engine
# Set up the stable repository (Aliyun mirror)
sudo yum install -y yum-utils
sudo yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Install a specific version (20.10.24 as the example)
sudo yum install -y docker-ce-20.10.24 docker-ce-cli-20.10.24 containerd.io
# Start and check
sudo systemctl start docker
sudo systemctl enable docker
docker --version
```
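To keep a later `yum update` from pulling Docker forward again and reintroducing the kernel/Docker mismatch, the installed version can be locked. The versionlock plugin is an assumption here (it ships in the standard CentOS repos):

```shell
# confirm the pinned version took
docker --version                               # expect 20.10.24
# lock the docker-ce packages at the installed version
sudo yum install -y yum-plugin-versionlock
sudo yum versionlock docker-ce docker-ce-cli
```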
After the upgrade, docker info reports:
```text
Cgroup Driver: cgroupfs
Cgroup Version: 1
```
Run kind create cluster again; the second step now passes:
```text
[root@i-698afbd7cee176e6ce8a84dd ~]# kind create cluster
Creating cluster "kind" ...
 ✓ Ensuring node image (kindest/node:v1.35.0) 🖼
 ✓ Preparing nodes 📦
 ✗ Writing configuration 📜
Deleted nodes: ["kind-control-plane"]
ERROR: failed to create cluster: failed to generate kubeadm config content: failed to get kubernetes version from node: failed to get file: command "docker exec --privileged kind-control-plane cat /kind/version" failed with error: exit status 126
Command Output: OCI runtime exec failed: exec failed: unable to start container process: read init-p: connection reset by peer: unknown
```
Tried switching to cgroup v2: no luck.
Tried the centos:centos7 image with the matching Docker version 20.10.7: no luck either.
Problem confirmed: the CentOS kernel version and the Docker version cannot be matched up. On this machine:
- either the kernel is new enough,
- or Docker runs normally,
but never both, so kind create cluster always ends up stuck on some error.
Final working approach:
Quick deployment with k3s
AI, I suspect you're messing with me!!!
kubernetes -> kubectl -> kind -> k3s -> k3d
Every time one approach fails, it suggests switching to another.
Trying a single-machine, containerized cluster with k3d.
Credit goes to DeepSeek V3:
the whole process was driven by feeding the error logs to the AI and applying what came back.
Step 1: create the cluster.
🎯 Recommended: option 1
To see as quickly as possible whether the cluster can run at all, use option 1: it keeps the load balancer, which makes it easier to reach deployed services later.
```shell
# Clean up
k3d cluster delete mycluster 2>/dev/null
docker system prune -af
# Create the cluster (option 1)
k3d cluster create mycluster \
  --servers 1 \
  --agents 0 \
  -p "8080:80@loadbalancer" \
  -p "8443:443@loadbalancer" \
  --k3s-arg "--kube-proxy-arg=conntrack-max-per-core=0@server:*" \
  --image rancher/k3s:v1.21.1-k3s1 \
  --wait \
  --timeout 15m
```
If this command still hits the earlier timeout error, the problem is unrelated to port mapping and other causes need to be investigated.
Finally it worked 0.0
```text
[root@i-698afbd7cee176e6ce8a84dd ~]# k3d cluster create mycluster \
    --servers 1 \
    --agents 0 \
    -p "8080:80@loadbalancer" \
    -p "8443:443@loadbalancer" \
    --k3s-arg "--kube-proxy-arg=conntrack-max-per-core=0@server:*" \
    --image rancher/k3s:v1.21.1-k3s1 \
    --wait \
    --timeout 15m
INFO[0000] portmapping '8080:80' targets the loadbalancer: defaulting to [servers:*:proxy agents:*:proxy]
INFO[0000] portmapping '8443:443' targets the loadbalancer: defaulting to [servers:*:proxy agents:*:proxy]
INFO[0000] Prep: Network
INFO[0000] Created network 'k3d-mycluster'
INFO[0000] Created image volume k3d-mycluster-images
INFO[0000] Starting new tools node...
INFO[0000] Pulling image 'ghcr.io/k3d-io/k3d-tools:5.8.3'
INFO[0001] Creating node 'k3d-mycluster-server-0'
INFO[0002] Starting node 'k3d-mycluster-tools'
INFO[0002] Pulling image 'rancher/k3s:v1.21.1-k3s1'
INFO[0017] Creating LoadBalancer 'k3d-mycluster-serverlb'
INFO[0018] Pulling image 'ghcr.io/k3d-io/k3d-proxy:5.8.3'
INFO[0024] Using the k3d-tools node to gather environment information
INFO[0024] HostIP: using network gateway 172.30.0.1 address
INFO[0024] Starting cluster 'mycluster'
INFO[0024] Starting servers...
INFO[0024] Starting node 'k3d-mycluster-server-0'
INFO[0034] All agents already running.
INFO[0034] Starting helpers...
INFO[0034] Starting node 'k3d-mycluster-serverlb'
INFO[0041] Injecting records for hostAliases (incl. host.k3d.internal) and for 2 network members into CoreDNS configmap...
INFO[0043] Cluster 'mycluster' created successfully!
INFO[0043] You can now use it like this:
kubectl cluster-info
```
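Once the cluster is up, a few sanity checks confirm kubectl can actually reach it. Note that k3d prefixes kubeconfig contexts with `k3d-`, so the context name below follows from the cluster name `mycluster`:

```shell
# cluster endpoints (k3d names the context k3d-<cluster>)
kubectl cluster-info --context k3d-mycluster
# the single server node should be Ready
kubectl get nodes -o wide
# core components (coredns, traefik in stock k3s) should come up
kubectl get pods -n kube-system
```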
Step 2: create the Service (this assumes an nginx Deployment already exists, e.g. one created with kubectl create deployment nginx --image=nginx):
```shell
kubectl create service clusterip nginx --tcp=80:80
```
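Before wiring up the Ingress, it helps to confirm the Service actually selects the nginx Pods; an empty endpoints list means the selector matched nothing:

```shell
# the ClusterIP service created above
kubectl get svc nginx
# endpoints must list at least one pod IP, or the Ingress will have nothing to route to
kubectl get endpoints nginx
```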
Step 3:
🌐 Final step, complete walkthrough
- Create the Ingress configuration file
First, create a file named nginx-ingress.yaml:
```shell
cat > nginx-ingress.yaml << 'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx
  annotations:
    ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx
            port:
              number: 80
EOF
```
What this file means:
- it creates an Ingress resource named nginx
- it disables SSL redirection (HTTP only)
- it forwards every request for the / path to port 80 of the Service named nginx
Apply the configuration:
```shell
kubectl apply -f nginx-ingress.yaml
```
You should see the output: ingress.networking.k8s.io/nginx created
Verify that the Ingress was created:
```shell
kubectl get ingress
```
The output looks like:
```text
NAME    CLASS   HOSTS   ADDRESS   PORTS   AGE
nginx
```
4. Test access
Now open a browser, or test from the command line:
```shell
# command-line test
curl localhost:8080
```
Or open http://localhost:8080 in a browser.
If you see the nginx welcome page, you're done! 🎉
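If the welcome page does not appear, it helps to check each hop in the chain separately; the names here are the ones used in this walkthrough:

```shell
# 1. the Ingress object and the events attached to it
kubectl describe ingress nginx
# 2. the Service and whether it has pod endpoints behind it
kubectl get svc,endpoints nginx
# 3. the k3d loadbalancer container must publish host port 8080 -> 80
docker ps --filter name=k3d-mycluster-serverlb
# 4. test from inside the cluster, bypassing the loadbalancer entirely
kubectl run tmp --rm -it --image=busybox --restart=Never -- wget -qO- http://nginx
```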
Step 4:
Deploy multiple pods
🚀 Option 1: scale the existing Deployment (simplest)
Scale directly from the command line:
```shell
# scale nginx from 1 replica to 3
kubectl scale deployment nginx --replicas=3
```
Check the result:
```shell
kubectl get pods
```
You will see something like:
```text
NAME                READY   STATUS    RESTARTS   AGE
nginx-xxxx-yyyy-1   1/1     Running   0          5m
nginx-xxxx-yyyy-2   1/1     Running   0          30s
nginx-xxxx-yyyy-3   1/1     Running   0          30s
```
🎯 Verify the load balancing
With 3 nginx replicas, Kubernetes balances traffic across them automatically:
```shell
# hit the service several times and watch the responses
for i in {1..6}; do
  curl -s localhost:8080 | grep -o "
done
```
For more detail, use this:
```shell
# create a file in the nginx Pod that identifies it
kubectl exec deployment/nginx -- sh -c 'echo "Pod: $(hostname)" > /usr/share/nginx/html/pod.html'
# then fetch that page
curl localhost:8080/pod.html
```
Run it several times and you will see different Pod names take turns appearing.
Step 5:
🔄 Rolling update (updates all replicas)
When you change the image version, Kubernetes replaces the Pods one by one:
```shell
# update the image version while keeping 3 replicas
kubectl set image deployment/nginx nginx=nginx:1.25
# watch the rolling update progress
kubectl rollout status deployment/nginx
```
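If the new image misbehaves, the rollout can be inspected and undone with the same rollout subcommands used above:

```shell
# revisions recorded for the deployment
kubectl rollout history deployment/nginx
# roll back to the previous revision
kubectl rollout undo deployment/nginx
# wait for the rollback to finish
kubectl rollout status deployment/nginx
```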