KubeVirt
KubeVirt plugs VM management into Kubernetes as a set of CRDs. Each virtual machine is managed through a pod that runs libvirtd alongside the VM, so the managing pod and the VM share one lifecycle and the VM is treated like any other workload, using the same resource management and scheduling as containers.

1. Background
KubeVirt manages virtual machines through the following main resources:
VirtualMachineInstance (VMI)
: Similar to a Kubernetes Pod, this is the lowest-level resource for managing a virtual machine. One VMI object represents a single running VM instance and carries all the configuration the VM needs. Users normally do not create VMI objects directly; they create the higher-level objects VM and VMIRS instead.
VirtualMachine (VM)
: Provides cluster-level management for a virtual machine, such as power on / power off / restart, and ensures the VM instance is started. It maps 1:1 to a VMI, similar to a StatefulSet with spec.replicas set to 1.
VirtualMachineInstanceReplicaSet (VMIRS)
: Similar to a ReplicaSet: it starts a specified number of VMIs, keeps that number running, and can be scaled with an HPA (a sketch follows below).
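For reference, a minimal VirtualMachineInstanceReplicaSet could look like the sketch below, following the pattern in the upstream user guide; the name, label, and container-disk image are purely illustrative:
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstanceReplicaSet
metadata:
  name: cirros-replicaset          # illustrative name
spec:
  replicas: 3
  selector:
    matchLabels:
      kubevirt.io/vmReplicaSet: cirros
  template:
    metadata:
      labels:
        kubevirt.io/vmReplicaSet: cirros
    spec:
      domain:
        devices:
          disks:
            - name: containerdisk
              disk:
                bus: virtio
        resources:
          requests:
            memory: 64M
      volumes:
        - name: containerdisk
          containerDisk:
            image: kubevirt/cirros-container-disk-demo:latest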
2. Architecture
First, an overview of the overall architecture:
(architecture diagram)
virt-api
- KubeVirt manages VMs through CRDs, and virt-api is the entry point for all virtualization operations. It handles CRD validation on create/update as well as VM subresource operations such as start and stop.
virt-controller
- virt-controller watches VMI CRs, generates the corresponding virt-launcher pods, and keeps the CR status up to date.
virt-handler
- virt-handler runs as a DaemonSet on every node. It watches for state changes of the VM instances on its node and reacts until the desired state is reached. It keeps the cluster-level VMI spec in sync with libvirt, reporting the libvirt domain state back to the cluster and applying the node-level changes (such as networking) that the VMI spec requires.
virt-launcher
- Every VMI runs inside its own virt-launcher pod. The kubelet is only responsible for the pod's running state and knows nothing about the VMI. Based on the CRD configuration, virt-handler notifies virt-launcher to start the VMI using the local libvirtd instance; virt-launcher then manages the VMI through its PID, and when the pod's lifecycle ends, virt-launcher also signals the VMI to terminate. Each virt-launcher pod carries its own libvirtd, so instead of one node-wide libvirtd managing many VMs, there is one libvirtd per VM.
libvirtd
- Every VMI pod contains a libvirtd instance, which virt-launcher uses to manage the lifecycle of the VM.
virtctl
- virtctl is a kubectl-like CLI shipped with KubeVirt. It bypasses the virt-launcher layer and manages VMs directly, e.g. start, stop, and restart.
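As a quick illustration of the virtctl workflow (the VM name testvm is a placeholder):
$ virtctl start testvm
$ virtctl stop testvm
$ virtctl restart testvm
$ virtctl console testvm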
3. VM creation flow
The architecture overview above already hints at parts of the flow; the full flow of creating a VM is:
- The client posts a VMI CR to the Kubernetes API (the VMI CRD object is created).
- virt-controller watches the VMI creation event, generates a pod spec from the VMI configuration, and creates the virt-launcher pod.
- Once virt-controller sees that the virt-launcher pod is running, it updates the VMI CR status.
- virt-handler watches the VMI status change and tells virt-launcher to create the virtual machine; from then on it is responsible for the VM's lifecycle.
As shown in the sequence diagram below:
Client K8s API VMI CRD Virt Controller VMI Handler
-------------------------- ----------- ------- ----------------------- ----------
listen <----------- WATCH /virtualmachines
listen <----------------------------------- WATCH /virtualmachines
| |
POST /virtualmachines ---> validate | |
create ---> VMI ---> observe --------------> observe
| | v v
validate <--------- POST /pods defineVMI
create | | |
| | | |
schedPod ---------> observe |
| | v |
validate <--------- PUT /virtualmachines |
update ---> VMI ---------------------------> observe
| | | launchVMI
| | | |
: : : :
| | | |
DELETE /virtualmachines -> validate | | |
delete ----> * ---------------------------> observe
| | shutdownVMI
| | |
: : :
4. Deployment
4.1 Node initialization
libvirt and qemu need to be installed on every node:
# Ubuntu
$ apt install -y qemu-kvm libvirt-bin bridge-utils virt-manager
# CentOS
$ yum install -y qemu-kvm libvirt virt-install bridge-utils
Check whether the node supports KVM hardware virtualization:
[root@VM-4-27-centos ~]# virt-host-validate qemu
QEMU: Checking for hardware virtualization : PASS
QEMU: Checking if device /dev/kvm exists : PASS
QEMU: Checking if device /dev/kvm is accessible : PASS
QEMU: Checking if device /dev/vhost-net exists : PASS
QEMU: Checking if device /dev/net/tun exists : PASS
QEMU: Checking for cgroup 'cpu' controller support : PASS
QEMU: Checking for cgroup 'cpuacct' controller support : PASS
QEMU: Checking for cgroup 'cpuset' controller support : PASS
QEMU: Checking for cgroup 'memory' controller support : PASS
QEMU: Checking for cgroup 'devices' controller support : PASS
QEMU: Checking for cgroup 'blkio' controller support : PASS
QEMU: Checking for device assignment IOMMU support : PASS
QEMU: Checking if IOMMU is enabled by kernel : WARN (IOMMU appears to be disabled in kernel. Add intel_iommu=on to kernel cmdline arguments)
QEMU: Checking for secure guest support : WARN (Unknown if this platform has Secure Guest support)
At this point the kvm kernel modules are already loaded:
[root@VM-4-27-centos ~]# lsmod | grep kvm
kvm_intel 315392 15
kvm 847872 1 kvm_intel
irqbypass 16384 17 kvm
4.2 Install KubeVirt
$ export VERSION=$(curl -s https://api.github.com/repos/kubevirt/kubevirt/releases | grep tag_name | grep -v -- '-rc' | head -1 | awk -F': ' '{print $2}' | sed 's/,//' | xargs)
$ kubectl apply -f https://github.com/kubevirt/kubevirt/releases/download/${VERSION}/kubevirt-operator.yaml
$ kubectl apply -f https://github.com/kubevirt/kubevirt/releases/download/${VERSION}/kubevirt-cr.yaml
Note: if the node does not support hardware virtualization, software emulation mode can be enabled by editing the KubeVirt CR (kubevirt-cr), as sketched below; see https://kubevirt.io/user-guide/operations/installation/#installing-kubevirt-on-kubernetes
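A minimal sketch of that change, assuming the default KubeVirt CR named kubevirt in the kubevirt namespace (consult the linked guide for the authoritative field):
apiVersion: kubevirt.io/v1
kind: KubeVirt
metadata:
  name: kubevirt
  namespace: kubevirt
spec:
  configuration:
    developerConfiguration:
      useEmulation: true   # fall back to QEMU software emulation when /dev/kvm is unavailable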
Deployment result:
$ kubectl -n kubevirt get pod
NAME READY STATUS RESTARTS AGE
virt-api-64999f7bf5-n9kcl 1/1 Running 0 6d
virt-api-64999f7bf5-st5qv 1/1 Running 0 6d8h
virt-controller-8696ccdf44-v5wnq 1/1 Running 0 6d
virt-controller-8696ccdf44-vjvsw 1/1 Running 0 6d8h
virt-handler-85rdn 1/1 Running 3 7d19h
virt-handler-bpgzp 1/1 Running 21 7d19h
virt-handler-d55c7 1/1 Running 1 7d19h
virt-operator-78fbcdfdf4-sf5dv 1/1 Running 0 6d8h
virt-operator-78fbcdfdf4-zf9qr 1/1 Running 0 6d
4.3 Deploy the Containerized Data Importer
The Containerized Data Importer (CDI) project provides the functionality to use PVCs as disks for KubeVirt VMs.
$ export VERSION=$(curl -s https://github.com/kubevirt/containerized-data-importer/releases/latest | grep -o "v[0-9]\.[0-9]*\.[0-9]*")
$ kubectl create -f https://github.com/kubevirt/containerized-data-importer/releases/download/$VERSION/cdi-operator.yaml
$ kubectl create -f https://github.com/kubevirt/containerized-data-importer/releases/download/$VERSION/cdi-cr.yaml
4.5 Deploy the hostpath provisioner
When using cloud block storage (CBS) PVCs, some PVCs failed to mount into the pod, so the hostpath-provisioner provided by KubeVirt is used as the PVC provisioner instead.
# The hostpath provisioner operator depends on cert-manager for certificates
$ kubectl create -f https://github.com/cert-manager/cert-manager/releases/download/v1.7.1/cert-manager.yaml
# Create the hostpath-provisioner namespace
$ kubectl create -f https://raw.githubusercontent.com/kubevirt/hostpath-provisioner-operator/main/deploy/namespace.yaml
# Deploy the operator and its webhook
$ kubectl create -f https://raw.githubusercontent.com/kubevirt/hostpath-provisioner-operator/main/deploy/operator.yaml -n hostpath-provisioner
$ kubectl create -f https://raw.githubusercontent.com/kubevirt/hostpath-provisioner-operator/main/deploy/webhook.yaml
Create the HostPathProvisioner CR; here /var/hpvolumes on the node is specified as the directory where the data is actually stored:
apiVersion: hostpathprovisioner.kubevirt.io/v1beta1
kind: HostPathProvisioner
metadata:
  name: hostpath-provisioner
spec:
  imagePullPolicy: Always
  storagePools:
    - name: "local"
      path: "/var/hpvolumes"
  workload:
    nodeSelector:
      kubernetes.io/os: linux
PVCs served by this provisioner store their data under /var/hpvolumes on the node:
[root@VM-4-27-centos hostpath-provisioner]# tree /var/hpvolumes/csi/
/var/hpvolumes/csi/
|-- pvc-11d671f7-efe3-4cb0-873b-ebd877af53fe
| `-- disk.img
|-- pvc-a484dae6-720e-4cc4-b1ab-8c59eec7a963
| `-- disk.img
`-- pvc-de897334-cb72-4272-bd76-725663d3f515
`-- disk.img
3 directories, 3 files
[root@VM-4-27-centos hostpath-provisioner]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
iso-win10 Bound pvc-de897334-cb72-4272-bd76-725663d3f515 439Gi RWO hostpath-csi 23h
iso-win10-2 Bound pvc-a484dae6-720e-4cc4-b1ab-8c59eec7a963 439Gi RWO hostpath-csi 23h
iso-win10-3 Bound pvc-11d671f7-efe3-4cb0-873b-ebd877af53fe 439Gi RWO hostpath-csi 22h
Next, create the StorageClass. Note that the storagePool parameter must match the pool name local defined in the CR above:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hostpath-csi
provisioner: kubevirt.io.hostpath-provisioner
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
parameters:
  storagePool: local
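For reference, a PVC served by this StorageClass could be declared as follows; the name and size are illustrative (the iso-win10 PVCs listed above were actually created by virtctl image-upload in section 4.9):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-hostpath-pvc   # illustrative name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: hostpath-csi
  resources:
    requests:
      storage: 10Gi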
4.6 Configure host disks
Enable KubeVirt's HostDisk feature gate (the hostDisk volume used by the Windows VM below requires it):
$ cat << END > enable-feature-gate.yaml
---
apiVersion: kubevirt.io/v1
kind: KubeVirt
metadata:
  name: kubevirt
  namespace: kubevirt
spec:
  configuration:
    developerConfiguration:
      featureGates:
        - HostDisk
        - DataVolumes
END
$ kubectl apply -f enable-feature-gate.yaml
4.7 Client setup
KubeVirt ships a dedicated CLI tool, virtctl, which can be downloaded directly:
$ export VERSION=$(curl -s https://api.github.com/repos/kubevirt/kubevirt/releases | grep tag_name | grep -v -- '-rc' | head -1 | awk -F': ' '{print $2}' | sed 's/,//' | xargs)
$ curl -L -o /usr/local/bin/virtctl https://github.com/kubevirt/kubevirt/releases/download/$VERSION/virtctl-$VERSION-linux-amd64
$ chmod +x /usr/local/bin/virtctl
It can also be installed as a kubectl plugin via krew:
$ kubectl krew install virt
4.8 Create a Linux virtual machine
Below is a VMI example for a Linux virtual machine; creating this VMI directly brings up a virtual machine. The CR specifies the key elements a virtual machine needs:
- Domain: the root element every VM requires; it declares all the resources the VM needs. KubeVirt converts this domain spec into a libvirt domain XML to create the machine.
- Storage: spec.volumes describes the actual backing storage, while spec.domain.devices.disks declares which of those volumes the VM uses and as what kind of disk.
- Network: spec.networks describes the actual backing network, while spec.domain.devices.interfaces declares what type of NIC the VM uses.
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachineInstance
metadata:
  name: testvmi-nocloud2
spec:
  terminationGracePeriodSeconds: 30
  domain:
    resources:
      requests:
        memory: 1024M
    devices:
      disks:
        - name: containerdisk
          disk:
            bus: virtio
        - name: emptydisk
          disk:
            bus: virtio
        - disk:
            bus: virtio
          name: cloudinitdisk
      interfaces:
        - bridge: {}
          name: default
  networks:
    - name: default
      pod: {}
  volumes:
    - name: containerdisk
      containerDisk:
        image: kubevirt/fedora-cloud-container-disk-demo:latest
    - name: emptydisk
      emptyDisk:
        capacity: "2Gi"
    - name: cloudinitdisk
      cloudInitNoCloud:
        userData: |-
          #cloud-config
          password: fedora
          chpasswd: { expire: False }
After creating the VirtualMachineInstance CR above, the pod virt-launcher-testvmi-nocloud2-jbbhs is started in the cluster. Check the pod and the VMI:
[root@VM-4-27-centos ~]# kubectl get po -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
virt-launcher-testvmi-nocloud2-jbbhs 2/2 Running 0 24h 172.16.0.24 10.3.4.27 <none> 1/1
[root@VM-4-27-centos ~]# kubectl get vmi
NAME AGE PHASE IP NODENAME READY
testvmi-nocloud2 24h Running 172.16.0.24 10.3.4.27 True
Log in to the virtual machine; both the username and the password are fedora:
[root@VM-4-27-centos ~]# ssh [email protected]
[email protected]'s password:
Last login: Wed Feb 23 06:30:38 2022 from 172.16.0.1
[fedora@testvmi-nocloud2 ~]$ uname -a
Linux testvmi-nocloud2 5.6.6-300.fc32.x86_64 #1 SMP Tue Apr 21 13:44:19 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
[fedora@testvmi-nocloud2 ~]$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 5e:04:61:17:c1:c9 brd ff:ff:ff:ff:ff:ff
altname enp1s0
inet 172.16.0.24/26 brd 172.16.0.63 scope global dynamic noprefixroute eth0
valid_lft 86226231sec preferred_lft 86226231sec
inet6 fe80::5c04:61ff:fe17:c1c9/64 scope link
valid_lft forever preferred_lft forever
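For reference, the serial console of the VMI can also be reached without SSH through virtctl:
$ virtctl console testvmi-nocloud2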
4.9 Create a Windows virtual machine
Upload the installation image
CDI provides a way to back a virtual machine disk with a PVC. It supports filling a PVC in the following ways:
- Import a VM image into the PVC from a URL; the URL can be an http link or an s3 endpoint.
- Clone an existing PVC.
- Import a VM disk from a container registry, used together with ContainerDisk.
- Upload a local image into the PVC from a client.
Here the fourth option is used: the local virtctl tool together with CDI uploads the ISO into a PVC:
$ virtctl image-upload \
--image-path='Win10_20H2_Chinese(Simplified)_x64.iso' \
--storage-class hostpath-csi \
--pvc-name=iso-win10 \
--pvc-size=10Gi \
--uploadproxy-url=https://<cdi-uploadproxy_svc_ip> \
--insecure \
--wait-secs=240
PersistentVolumeClaim default/iso-win10 created
Waiting for PVC iso-win10 upload pod to be ready...
Pod now ready
Uploading data to https://10.111.29.156
5.63 GiB / 5.63 GiB [======================================================================================================================================================] 100.00% 27s
Uploading data completed successfully, waiting for processing to complete, you can hit ctrl-c without interrupting the progress
Processing completed successfully
Uploading Win10_20H2_Chinese(Simplified)_x64.iso completed successfully
Parameters:
- --image-path: local path of the OS installation image.
- --pvc-name: the PVC that stores the image; it does not need to exist beforehand, it is created automatically during the upload.
- --pvc-size: size of the PVC; set it according to the image size, usually about one GiB larger.
- --uploadproxy-url: the Service IP of cdi-uploadproxy; it can be looked up with kubectl -n cdi get svc -l cdi.kubevirt.io=cdi-uploadproxy
Create the virtual machine
After creating the following VirtualMachine CR, a pod and a virtual machine appear in the cluster:
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  name: win10
spec:
  running: false
  template:
    metadata:
      labels:
        kubevirt.io/domain: win10
    spec:
      domain:
        cpu:
          cores: 4
        devices:
          disks:
            - bootOrder: 1
              disk:
                bus: virtio
              name: harddrive
            - bootOrder: 2
              cdrom:
                bus: sata
              name: cdromiso
            - cdrom:
                bus: sata
              name: virtiocontainerdisk
          interfaces:
            - masquerade: {}
              model: e1000
              name: default
        machine:
          type: q35
        resources:
          requests:
            memory: 16G
      networks:
        - name: default
          pod: {}
      volumes:
        - name: cdromiso
          persistentVolumeClaim:
            claimName: iso-win10-3
        - name: harddrive
          hostDisk:
            capacity: 50Gi
            path: /data/disk.img
            type: DiskOrCreate
        - name: virtiocontainerdisk
          containerDisk:
            image: kubevirt/virtio-container-disk
Volumes explained:
- harddrive (hostDisk): the disk attached directly to the VM; Windows is installed onto it.
- cdromiso (cdrom): provides the OS installation image, i.e. the PVC the ISO was uploaded to (iso-win10-3 here).
- virtiocontainerdisk: Windows cannot recognize virtio (raw) disks by default, so the virtio drivers are provided on this disk and must be installed.
Start the virtual machine instance:
$ virtctl start win10
# If virtctl is installed as a kubectl plugin, the equivalent command is:
$ kubectl virt start win10
Check the running instance; the Windows virtual machine is now up and running:
[root@VM-4-27-centos ~]# kubectl get vmi
NAME AGE PHASE IP NODENAME READY
win10 23h Running 172.16.0.32 10.3.4.27 True
[root@VM-4-27-centos ~]# kubectl get vm
NAME AGE STATUS READY
win10 23h Running True
Configure VNC access
Deploy virtVNC to access the running Windows machine. This mainly exposes a NodePort service:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: virtvnc
  namespace: kubevirt
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: virtvnc
subjects:
  - kind: ServiceAccount
    name: virtvnc
    namespace: kubevirt
roleRef:
  kind: ClusterRole
  name: virtvnc
  apiGroup: rbac.authorization.k8s.io
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: virtvnc
rules:
  - apiGroups:
      - subresources.kubevirt.io
    resources:
      - virtualmachineinstances/console
      - virtualmachineinstances/vnc
    verbs:
      - get
  - apiGroups:
      - kubevirt.io
    resources:
      - virtualmachines
      - virtualmachineinstances
      - virtualmachineinstancepresets
      - virtualmachineinstancereplicasets
      - virtualmachineinstancemigrations
    verbs:
      - get
      - list
      - watch
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: virtvnc
  name: virtvnc
  namespace: kubevirt
spec:
  ports:
    - port: 80
      protocol: TCP
      targetPort: 8001
  selector:
    app: virtvnc
  #type: LoadBalancer
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: virtvnc
  namespace: kubevirt
spec:
  replicas: 1
  selector:
    matchLabels:
      app: virtvnc
  template:
    metadata:
      labels:
        app: virtvnc
    spec:
      serviceAccountName: virtvnc
      containers:
        - name: virtvnc
          image: quay.io/samblade/virtvnc:v0.1
          livenessProbe:
            httpGet:
              port: 8001
              path: /
              scheme: HTTP
            failureThreshold: 30
            initialDelaySeconds: 30
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 5
$ kubectl apply -f virtvnc.yaml
By opening this NodePort service in a browser, you can reach the VNC console of the virtual machines.
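To find the address to open, look up the NodePort assigned to the virtvnc Service, for example:
$ kubectl -n kubevirt get svc virtvnc -o jsonpath='{.spec.ports[0].nodePort}'
Then browse to http://<node-ip>:<that-port>.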
Configure remote desktop access
VNC gives remote access to the Windows GUI, but the experience is not great. Once the OS installation has finished, Windows' own remote desktop protocol (RDP) can be used instead. First test connectivity to RDP port 3389 with telnet:
[root@VM-4-27-centos ~]# telnet 172.16.0.32 3389
Trying 172.16.0.32...
Connected to 172.16.0.32.
Escape character is '^]'.
If your local machine can reach the Pod IP or the Service IP directly, you can already connect to Windows with an RDP client. Otherwise, expose RDP through a NodePort Service so that it can be reached via Node IP:NodePort:
[root@VM-4-27-centos ~]# virtctl expose vm win10 --name win10-rdp --port 3389 --target-port 3389 --type NodePort
Service win10-rdp successfully exposed for vm win10
[root@VM-4-27-centos ~]#
[root@VM-4-27-centos ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-user LoadBalancer 172.16.253.39 10.3.4.28 443:31525/TCP 27h
kubernetes ClusterIP 172.16.252.1 <none> 443/TCP 42h
win10-rdp NodePort 172.16.255.78 <none> 3389:32200/TCP 8s
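From a Windows client that can reach the node, the connection can then be made with the standard RDP client, for example (using the NodePort 32200 shown above; <node-ip> is a placeholder):
mstsc /v:<node-ip>:32200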
Install a web server (Nginx) inside the Windows virtual machine; it can then be reached from the node over the pod network:
[root@VM-4-27-centos ~]# kubectl get vmi win10
NAME AGE PHASE IP NODENAME READY
win10 5d5h Running 172.16.0.32 10.3.4.27 True
[root@VM-4-27-centos ~]# curl 172.16.0.32:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
To expose this service running in the Windows VM to the outside world, create the following Service:
apiVersion: v1
kind: Service
metadata:
  name: win10-nginx
  namespace: default
spec:
  externalTrafficPolicy: Cluster
  ports:
    - port: 80
      protocol: TCP
      targetPort: 80
  selector:
    kubevirt.io/domain: win10
  sessionAffinity: None
  type: NodePort
A NodePort Service is created:
[root@VM-4-27-centos ~]# kubectl get svc
win10-nginx NodePort 172.16.255.171 <none> 80:31251/TCP 3s
5. Storage
The VM image (disk) is an essential part of a virtual machine. KubeVirt offers several disk and volume types, so a VM can be backed and booted in flexible ways. The most commonly used ones are:
- persistentVolumeClaim: use a PVC as the VM disk, in either Filesystem or Block mode. Suitable for persistent data, i.e. the data survives after the VM is rebuilt.
  - In Filesystem mode, a RAW-format file /disk.img on the PVC is used as the hard disk.
  - In Block mode, the block volume is passed to the VM directly as a raw block device.
- ephemeral: a copy-on-write layer is created on top of a read-only backing image; all local writes go into that layer and are discarded when the VM stops, leaving the backing image unchanged.
- containerDisk: based on a scratch docker image that contains the VM image used for booting. The image can be pushed to a registry and is pulled from the registry when the VMI starts; a containerDisk used as a VMI disk is not persistent.
- hostDisk: use a disk image on the node (hostpath-style); an empty disk image can also be created on the fly.
- dataVolume: automates importing a VM image into a PVC as part of the VM start-up flow. Without a DataVolume, the user must first prepare a PVC containing the disk image and then attach it to the VM or VMI. A DataVolume can pull from sources such as an http URL, object storage, or another PVC (a sketch follows below).
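For illustration, a minimal DataVolume that imports an image over HTTP into a PVC might look like the following; the name, image URL, and size are placeholders, not taken from the setup above:
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: fedora-dv                     # illustrative name
spec:
  source:
    http:
      url: "https://example.com/images/fedora.qcow2"   # placeholder image URL
  pvc:
    storageClassName: hostpath-csi
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 10Gi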
6. Traffic model: bridge mode
In bridge mode the VM network is the pod network: the NIC inside the virt-launcher pod no longer carries the pod IP and instead acts as the uplink connecting the VM's virtual NIC to the outside network. virt-launcher runs a simple single-address DHCP server, so the VM only has to run a DHCP client and virt-launcher hands the pod's IP over to the VM.
Egress: traffic to external addresses
Check the routes inside the VM:
[fedora@testvmi-nocloud2 ~]$ ip route
default via 172.16.0.1 dev eth0 proto dhcp metric 100
172.16.0.0/26 dev eth0 proto kernel scope link src 172.16.0.24 metric 100
- The VM and all the pods on the node belong to the same IP subnet 172.16.0.0/26, and they are all attached to the node's virtual bridge cbr0. As the second route shows, traffic destined for 172.16.0.0/26 leaves the VM's eth0 and is switched across the bridges; on the node, cbr0 forwards the frame out of the port that connects to the destination pod's veth pair, so the packet arrives at the target pod.
- Traffic to any other IP follows the default route and is sent to the gateway 172.16.0.1, which is the address of cbr0 on the node.
The routes inside the VM are taken over from the launcher pod. If you now look at the routing information inside the launcher pod, it is empty:
[root@VM-4-27-centos ~]# ip r
[root@VM-4-27-centos ~]# ip n
Looking at the VM's IP configuration, its IP is 172.16.0.24 and its MAC is 5e:04:61:17:c1:c9, both taken over from the launcher pod:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 5e:04:61:17:c1:c9 brd ff:ff:ff:ff:ff:ff
altname enp1s0
inet 172.16.0.24/26 brd 172.16.0.63 scope global dynamic noprefixroute eth0
valid_lft 85794782sec preferred_lft 85794782sec
inet6 fe80::5c04:61ff:fe17:c1c9/64 scope link
valid_lft forever preferred_lft forever
The interfaces now present in the launcher pod are:
- eth0-nic: the pod's original eth0 interface
- tap0: the tap device backing the VM's eth0
- k6t-eth0: the bridge inside the launcher pod
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
3: eth0-nic@if28: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master k6t-eth0 state UP group default
link/ether 5e:04:61:bf:d4:04 brd ff:ff:ff:ff:ff:ff link-netnsid 0
4: eth0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default
link/ether 0a:80:97:de:c2:56 brd ff:ff:ff:ff:ff:ff
inet 172.16.0.24/26 brd 172.16.0.63 scope global eth0
valid_lft forever preferred_lft forever
5: k6t-eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 1a:2b:ab:44:18:07 brd ff:ff:ff:ff:ff:ff
inet 169.254.75.10/32 scope global k6t-eth0
valid_lft forever preferred_lft forever
6: tap0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master k6t-eth0 state UP group default qlen 1000
link/ether 36:9c:11:71:fa:d6 brd ff:ff:ff:ff:ff:ff
Both eth0-nic and tap0 are attached to the bridge k6t-eth0:
# ip link show master k6t-eth0
3: eth0-nic@if28: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master k6t-eth0 state UP mode DEFAULT group default
link/ether 5e:04:61:bf:d4:04 brd ff:ff:ff:ff:ff:ff link-netnsid 0
6: tap0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master k6t-eth0 state UP mode DEFAULT group default qlen 1000
link/ether 36:9c:11:71:fa:d6 brd ff:ff:ff:ff:ff:ff
cbr0 on the host node:
5: cbr0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 92:3f:13:e2:55:c9 brd ff:ff:ff:ff:ff:ff
inet 172.16.0.1/26 brd 172.16.0.63 scope global cbr0
valid_lft forever preferred_lft forever
inet6 fe80::903f:13ff:fee2:55c9/64 scope link
valid_lft forever preferred_lft forever
Access an external address (e.g. 8.8.8.8) from the VM and capture packets inside the launcher pod, first on the tap device:
- Source IP: the VM's eth0 IP
- Source MAC: the VM's eth0 MAC
- Destination IP: 8.8.8.8
- Destination MAC: the MAC of cbr0 on the node, i.e. the VM's gateway
[root@VM-4-27-centos ~]# tcpdump -itap0 -nnvve host 8.8.8.8
dropped privs to tcpdump
tcpdump: listening on tap0, link-type EN10MB (Ethernet), capture size 262144 bytes
20:19:58.369799 5e:04:61:17:c1:c9 > 92:3f:13:e2:55:c9, ethertype IPv4 (0x0800), length 98: (tos 0x0, ttl 64, id 54189, offset 0, flags [DF], proto ICMP (1), length 84)
172.16.0.24 > 8.8.8.8: ICMP echo request, id 10, seq 1, length 64
20:19:58.371143 92:3f:13:e2:55:c9 > 5e:04:61:17:c1:c9, ethertype IPv4 (0x0800), length 98: (tos 0x64, ttl 117, id 0, offset 0, flags [none], proto ICMP (1), length 84)
8.8.8.8 > 172.16.0.24: ICMP echo reply, id 10, seq 1, length 64
The tap device is attached to the in-pod bridge, so capturing on k6t-eth0 shows the same packets with source and destination addresses unchanged:
[root@VM-4-27-centos ~]# tcpdump -ik6t-eth0 -nnvve host 8.8.8.8
dropped privs to tcpdump
tcpdump: listening on k6t-eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
20:28:28.945397 5e:04:61:17:c1:c9 > 92:3f:13:e2:55:c9, ethertype IPv4 (0x0800), length 98: (tos 0x0, ttl 64, id 21796, offset 0, flags [DF], proto ICMP (1), length 84)
172.16.0.24 > 8.8.8.8: ICMP echo request, id 11, seq 1, length 64
20:28:28.946743 92:3f:13:e2:55:c9 > 5e:04:61:17:c1:c9, ethertype IPv4 (0x0800), length 98: (tos 0x64, ttl 117, id 0, offset 0, flags [none], proto ICMP (1), length 84)
8.8.8.8 > 172.16.0.24: ICMP echo reply, id 11, seq 1, length 64
The packet is then forwarded out through the pod's original interface eth0-nic, again with source and destination addresses unchanged:
[root@VM-4-27-centos ~]# tcpdump -ieth0-nic -nnvve host 8.8.8.8
dropped privs to tcpdump
tcpdump: listening on eth0-nic, link-type EN10MB (Ethernet), capture size 262144 bytes
20:30:02.087639 5e:04:61:17:c1:c9 > 92:3f:13:e2:55:c9, ethertype IPv4 (0x0800), length 98: (tos 0x0, ttl 64, id 2902, offset 0, flags [DF], proto ICMP (1), length 84)
172.16.0.24 > 8.8.8.8: ICMP echo request, id 11, seq 94, length 64
20:30:02.088959 92:3f:13:e2:55:c9 > 5e:04:61:17:c1:c9, ethertype IPv4 (0x0800), length 98: (tos 0x64, ttl 117, id 0, offset 0, flags [none], proto ICMP (1), length 84)
8.8.8.8 > 172.16.0.24: ICMP echo reply, id 11, seq 94, length 64
How does the bridge k6t-eth0 know that it should hand the frame to eth0-nic? It simply floods: the frame is sent out of every port attached to the bridge, so the traffic towards 8.8.8.8 reaches eth0-nic.
How does the VM know the MAC address of 172.16.0.1? Check the ARP table inside the VM:
[fedora@testvmi-nocloud2 ~]$ ip n
172.16.0.1 dev eth0 lladdr 92:3f:13:e2:55:c9 REACHABLE
This is exactly cbr0's MAC. If you delete this neighbor entry, you can watch the ARP exchange that repopulates it (see the commands below).
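A quick way to observe this, sketched with the addresses from above:
# inside the VM: drop the cached entry and trigger traffic again
$ sudo ip neigh del 172.16.0.1 dev eth0
$ ping -c 1 8.8.8.8
# inside the launcher pod: watch the ARP exchange on the bridge
$ tcpdump -i k6t-eth0 -nn arp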
Once the traffic reaches eth0-nic, it is back on the pod's original interface and leaves the node along the normal pod-network path.
Ingress: reaching the VM from outside the node
In bridge mode, traffic on the node that is destined for the pod subnet goes through the bridge cbr0 by default:
[root@VM-4-27-centos ~]# ip r
172.16.0.0/26 dev cbr0 proto kernel scope link src 172.16.0.1
The pods' veth interfaces are attached to cbr0, so traffic to the VM's IP is switched at layer 2 by the bridge onto the pod's veth pair. Once inside the pod, it passes through the in-pod bridge to the tap device and finally reaches the virtual machine.
7. VPC CNI mode with bridging
With a VPC CNI, bridging the VM works much like the bridge mode above, but IP traffic from the VM does not get through; only ARP packets can be captured.
➜ ~ k exec network-tool-549c7756bd-6tfkf -- route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 169.254.1.1 0.0.0.0 UG 0 0 0 eth0
169.254.1.1 0.0.0.0 255.255.255.255 UH 0 0 0 eth0
➜ ~ k exec network-tool-549c7756bd-6tfkf -- arp
Address HWtype HWaddress Flags Mask Iface
169.254.1.1 ether e2:62:fb:d2:cb:28 CM eth0
Inside the VM, however, there is only the default route and no resolved ARP entry for the gateway:
[fedora@testvmi-nocloud2 ~]$ ip r
default via 169.254.1.1 dev eth0 proto dhcp metric 100
169.254.1.1 dev eth0 proto dhcp scope link metric 100
[fedora@testvmi-nocloud2 ~]$ ip n
10.3.1.6 dev eth0 lladdr d2:e9:79:c9:e6:2d STALE
169.254.1.1 dev eth0 INCOMPLETE
The VM's ARP requests for 169.254.1.1 can be captured on the tap device, on the in-pod bridge, on the pod's original NIC, and on the pod's veth on the node, but nothing ever answers them, so the VM cannot resolve its gateway:
# inside the launcher pod netns
[root@VM-1-6-centos ~]# tcpdump -itap0 -nnvve arp
dropped privs to tcpdump
tcpdump: listening on tap0, link-type EN10MB (Ethernet), capture size 262144 bytes
15:46:20.627859 d2:e9:79:c9:e6:2d > 92:7b:a5:ca:24:5a, ethertype ARP (0x0806), length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 10.3.4.17 tell 10.3.1.6, length 28
15:46:20.628185 92:7b:a5:ca:24:5a > d2:e9:79:c9:e6:2d, ethertype ARP (0x0806), length 42: Ethernet (len 6), IPv4 (len 4), Reply 10.3.4.17 is-at 92:7b:a5:ca:24:5a, length 28
[root@VM-1-6-centos ~]# tcpdump -ik6t-eth0 -nnvve arp
dropped privs to tcpdump
tcpdump: listening on k6t-eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
15:47:12.653020 92:7b:a5:ca:24:5a > ff:ff:ff:ff:ff:ff, ethertype ARP (0x0806), length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 169.254.1.1 tell 10.3.4.17, length 28
15:47:13.676948 92:7b:a5:ca:24:5a > ff:ff:ff:ff:ff:ff, ethertype ARP (0x0806), length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 169.254.1.1 tell 10.3.4.17, length 28
[root@VM-1-6-centos ~]# tcpdump -ieth0-nic -nnvve arp
dropped privs to tcpdump
tcpdump: listening on eth0-nic, link-type EN10MB (Ethernet), capture size 262144 bytes
15:47:23.918394 92:7b:a5:ca:24:5a > ff:ff:ff:ff:ff:ff, ethertype ARP (0x0806), length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 169.254.1.1 tell 10.3.4.17, length 28
15:47:24.940922 92:7b:a5:ca:24:5a > ff:ff:ff:ff:ff:ff, ethertype ARP (0x0806), length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 169.254.1.1 tell 10.3.4.17, length 28
# in the host (node) netns, capturing on the pod's veth pair
[root@VM-1-6-centos ~]# tcpdump -ienib7c8df57e35 -nnvve arp
dropped privs to tcpdump
tcpdump: listening on enib7c8df57e35, link-type EN10MB (Ethernet), capture size 262144 bytes
15:48:03.853968 92:7b:a5:ca:24:5a > ff:ff:ff:ff:ff:ff, ethertype ARP (0x0806), length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 169.254.1.1 tell 10.3.4.17, length 28
15:48:04.876960 92:7b:a5:ca:24:5a > ff:ff:ff:ff:ff:ff, ethertype ARP (0x0806), length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 169.254.1.1 tell 10.3.4.17, length 28
If the missing neighbor entry is added inside the VM by hand, connectivity is restored; the MAC used here is the MAC of the pod's veth pair on the host side:
[fedora@testvmi-nocloud2 ~]$ sudo ip n replace 169.254.1.1 lladdr d2:e9:79:c9:e6:2d dev eth0
[fedora@testvmi-nocloud2 ~]$ ping baidu.com
PING baidu.com (220.181.38.251) 56(84) bytes of data.
64 bytes from 220.181.38.251 (220.181.38.251): icmp_seq=1 ttl=48 time=46.2 ms
64 bytes from 220.181.38.251 (220.181.38.251): icmp_seq=2 ttl=48 time=46.5 ms
8. Code walkthrough
Network setup is split into two phases: phase 1 runs in virt-handler and phase 2 in virt-launcher. The concrete work is delegated to a binding mechanism; each BindMechanism implements the following interface:
type BindMechanism interface {
    discoverPodNetworkInterface() error
    preparePodNetworkInterfaces() error
    loadCachedInterface(pid, name string) (bool, error)
    setCachedInterface(pid, name string) error
    loadCachedVIF(pid, name string) (bool, error)
    setCachedVIF(pid, name string) error

    // The following entry points require domain initialized for the
    // binding and can be used in phase2 only.
    decorateConfig() error
    startDHCP(vmi *v1.VirtualMachineInstance) error
}
Once the mechanism is determined, the following methods are used in phase 1:
- discoverPodNetworkInterface: gathers information about the pod NIC, including IP address, routes, and gateway.
- preparePodNetworkInterfaces: configures the network based on the information discovered above.
- setCachedInterface: caches the interface information in memory.
- setCachedVIF: persists the VIF object on the filesystem, at
/proc/<virt-launcher-pid>/root/var/run/kubevirt-private/vif-cache-<iface_name>.json
func (l *podNIC) PlugPhase1() error {
    // There is nothing to plug for SR-IOV devices
    if l.vmiSpecIface.SRIOV != nil {
        return nil
    }

    state, err := l.state()
    if err != nil {
        return err
    }

    switch state {
    case cache.PodIfaceNetworkPreparationStarted:
        return errors.CreateCriticalNetworkError(fmt.Errorf("pod interface %s network preparation cannot be resumed", l.podInterfaceName))
    case cache.PodIfaceNetworkPreparationFinished:
        return nil
    }

    if err := l.setPodInterfaceCache(); err != nil {
        return err
    }

    if l.infraConfigurator == nil {
        return nil
    }

    if err := l.infraConfigurator.DiscoverPodNetworkInterface(l.podInterfaceName); err != nil {
        return err
    }

    dhcpConfig := l.infraConfigurator.GenerateNonRecoverableDHCPConfig()
    if dhcpConfig != nil {
        log.Log.V(4).Infof("The generated dhcpConfig: %s", dhcpConfig.String())
        err = cache.WriteDHCPInterfaceCache(l.cacheCreator, getPIDString(l.launcherPID), l.podInterfaceName, dhcpConfig)
        if err != nil {
            return fmt.Errorf("failed to save DHCP configuration: %w", err)
        }
    }

    domainIface := l.infraConfigurator.GenerateNonRecoverableDomainIfaceSpec()
    if domainIface != nil {
        log.Log.V(4).Infof("The generated libvirt domain interface: %+v", *domainIface)
        if err := l.storeCachedDomainIface(*domainIface); err != nil {
            return fmt.Errorf("failed to save libvirt domain interface: %w", err)
        }
    }

    if err := l.setState(cache.PodIfaceNetworkPreparationStarted); err != nil {
        return fmt.Errorf("failed setting state to PodIfaceNetworkPreparationStarted: %w", err)
    }

    // preparePodNetworkInterface must be called *after* the Generate
    // methods since it mutates the pod interface from which those
    // generator methods get their info from.
    if err := l.infraConfigurator.PreparePodNetworkInterface(); err != nil {
        log.Log.Reason(err).Error("failed to prepare pod networking")
        return errors.CreateCriticalNetworkError(err)
    }

    if err := l.setState(cache.PodIfaceNetworkPreparationFinished); err != nil {
        log.Log.Reason(err).Error("failed setting state to PodIfaceNetworkPreparationFinished")
        return errors.CreateCriticalNetworkError(err)
    }

    return nil
}
Phase 2 runs inside virt-launcher with far fewer privileges than phase 1; for networking it only has CAP_NET_ADMIN. Phase 2 also selects the correct BindMechanism, then retrieves the phase 1 configuration by loading the cached VIF object. Based on the VIF information it decorates the domain XML configuration of the VM it is about to launch:
- loadCachedInterface
- loadCachedVIF
- decorateConfig
func (l *podNIC) PlugPhase2(domain *api.Domain) error {
    precond.MustNotBeNil(domain)

    // There is nothing to plug for SR-IOV devices
    if l.vmiSpecIface.SRIOV != nil {
        return nil
    }

    if err := l.domainGenerator.Generate(); err != nil {
        log.Log.Reason(err).Critical("failed to create libvirt configuration")
    }

    if l.dhcpConfigurator != nil {
        dhcpConfig, err := l.dhcpConfigurator.Generate()
        if err != nil {
            log.Log.Reason(err).Errorf("failed to get a dhcp configuration for: %s", l.podInterfaceName)
            return err
        }
        log.Log.V(4).Infof("The imported dhcpConfig: %s", dhcpConfig.String())
        if err := l.dhcpConfigurator.EnsureDHCPServerStarted(l.podInterfaceName, *dhcpConfig, l.vmiSpecIface.DHCPOptions); err != nil {
            log.Log.Reason(err).Criticalf("failed to ensure dhcp service running for: %s", l.podInterfaceName)
            panic(err)
        }
    }

    return nil
}
8.1 The bridge binding mechanism
DiscoverPodNetworkInterface
func (b *BridgePodNetworkConfigurator) DiscoverPodNetworkInterface(podIfaceName string) error {
    link, err := b.handler.LinkByName(podIfaceName)
    if err != nil {
        log.Log.Reason(err).Errorf("failed to get a link for interface: %s", podIfaceName)
        return err
    }
    b.podNicLink = link

    addrList, err := b.handler.AddrList(b.podNicLink, netlink.FAMILY_V4)
    if err != nil {
        log.Log.Reason(err).Errorf("failed to get an ip address for %s", podIfaceName)
        return err
    }
    if len(addrList) == 0 {
        b.ipamEnabled = false
    } else {
        b.podIfaceIP = addrList[0]
        b.ipamEnabled = true
        if err := b.learnInterfaceRoutes(); err != nil {
            return err
        }
    }

    b.tapDeviceName = virtnetlink.GenerateTapDeviceName(podIfaceName)

    b.vmMac, err = virtnetlink.RetrieveMacAddressFromVMISpecIface(b.vmiSpecIface)
    if err != nil {
        return err
    }
    if b.vmMac == nil {
        b.vmMac = &b.podNicLink.Attrs().HardwareAddr
    }

    return nil
}
PreparePodNetworkInterface
func (b *BridgePodNetworkConfigurator) PreparePodNetworkInterface() error {
    // NOTE: error handling for several of the calls below is elided in this excerpt.
    // Set interface link to down to change its MAC address
    b.handler.LinkSetDown(b.podNicLink)
    if b.ipamEnabled {
        // Remove IP from POD interface
        if err := b.handler.AddrDel(b.podNicLink, &b.podIfaceIP); err != nil {
            return err
        }
        b.switchPodInterfaceWithDummy()
        // Set arp_ignore=1 to avoid the dummy interface being seen by
        // Duplicate Address Detection (DAD). Without this, some VMs will
        // lose their ip address after a few minutes.
        b.handler.ConfigureIpv4ArpIgnore()
    }
    b.handler.SetRandomMac(b.podNicLink.Attrs().Name)
    if err := b.createBridge(); err != nil {
        return err
    }
    tapOwner := netdriver.LibvirtUserAndGroupId
    if util.IsNonRootVMI(b.vmi) {
        tapOwner = strconv.Itoa(util.NonRootUID)
    }
    createAndBindTapToBridge(b.handler, b.tapDeviceName, b.bridgeInterfaceName, b.launcherPID, b.podNicLink.Attrs().MTU, tapOwner, b.vmi)
    b.handler.LinkSetUp(b.podNicLink)
    b.handler.LinkSetLearningOff(b.podNicLink)
    return nil
}
References
https://kubevirt.io/user-guide/
https://github.com/kubevirt/containerized-data-importer#deploy-it
https://github.com/kubevirt/hostpath-provisioner
http://kubevirt.io/api-reference
https://kubevirt.io/user-guide/virtual_machines/disks_and_volumes/
https://kubevirt.io/user-guide/virtual_machines/interfaces_and_networks/
https://github.com/kubevirt/kubevirt/blob/main/docs/devel/networking.md
https://icloudnative.io/posts/use-kubevirt-to-manage-windows-on-kubernetes/