Preface
When a Kubernetes cluster exposes services to the outside world, Ingress is the standard mechanism. An Ingress resource defines HTTP/HTTPS routing rules, and an Ingress Controller implements them: it watches Ingress resources for changes, translates the routing rules into concrete reverse-proxy configuration, and applies updates dynamically.
The mainstream Ingress Controllers include the Nginx Ingress Controller, Traefik, and Envoy Proxy. Each has its own strengths and typical use cases, and picking the right one has a real impact on cluster stability, performance, and maintainability.
This article is aimed at junior and mid-level operations engineers. It compares the three controllers in depth across architecture, configuration style, feature set, performance, and selection criteria, so that readers understand how each one works and can make a sound choice for their own scenario.
Chapter 1: Ingress Fundamentals
1.1 The Kubernetes Network Model
Kubernetes uses a flat network model: every Pod gets its own IP address, and Pods communicate with each other directly, without NAT. A Service abstracts a set of Pods selected by labels, providing load balancing and service discovery.
ClusterIP, the default Service type, is reachable only from inside the cluster. To reach a service from outside there are several options: NodePort opens a port on every node; LoadBalancer provisions an external load balancer through the cloud provider; Ingress adds layer-7 routing based on HTTP/HTTPS host names and paths.
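For comparison, here is a minimal sketch of the two lower-level exposure options (service and label names are illustrative):

```yaml
# NodePort: opens the same port (30000-32767 by default) on every node
apiVersion: v1
kind: Service
metadata:
  name: demo-nodeport
spec:
  type: NodePort
  selector:
    app: demo
  ports:
  - port: 80
    targetPort: 8080
    nodePort: 30080
---
# LoadBalancer: asks the cloud provider to provision an external load balancer
apiVersion: v1
kind: Service
metadata:
  name: demo-lb
spec:
  type: LoadBalancer
  selector:
    app: demo
  ports:
  - port: 80
    targetPort: 8080
```

Ingress builds on top of such Services: it routes by host and path instead of consuming one port or one load balancer per service.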
1.2 The Ingress Resource
Ingress is a standard Kubernetes resource for configuring external HTTP/HTTPS routing.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
  - host: demo.example.com
    http:
      paths:
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: api-service
            port:
              number: 80
      - path: /web
        pathType: Prefix
        backend:
          service:
            name: web-service
            port:
              number: 8080
  tls:
  - hosts:
    - demo.example.com
    secretName: demo-tls-secret
Ingress behavior is extended through annotations, and different Ingress Controllers support different annotation sets. The ingressClassName field selects which Ingress Controller handles the resource; it is available since Kubernetes 1.18.
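The class itself is declared with an IngressClass resource; a sketch for ingress-nginx:

```yaml
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx
spec:
  # Must match the --controller-class value the controller is started with
  controller: k8s.io/ingress-nginx
```

An Ingress whose spec.ingressClassName is `nginx` is then reconciled only by the controller that registered that class.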
1.3 Ingress Controller Responsibilities
The core responsibilities of an Ingress Controller are: watching the Kubernetes API for changes to Ingress, Service, Endpoints, and related resources; translating routing rules into reverse-proxy configuration; hot-reloading configuration without dropping requests; exposing health checks and metrics; terminating TLS; and implementing load-balancing algorithms.
Understanding how an Ingress Controller is implemented helps both with choosing a solution and with troubleshooting. At its core, every Ingress Controller is a workload running inside the cluster that continuously watches the Kubernetes API and updates its own configuration.
Chapter 2: Nginx Ingress Controller
2.1 Architecture and Principles
The Nginx Ingress Controller is built on stock Nginx and can be deployed in two modes. Deployment mode suits large clusters and can auto-scale through an HPA; DaemonSet mode runs one instance per node and suits latency-sensitive scenarios.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-ingress
  template:
    metadata:
      labels:
        app: nginx-ingress
    spec:
      containers:
      - name: controller
        image: registry.k8s.io/ingress-nginx/controller:v1.9.4
        args:
        - /nginx-ingress-controller
        - --publish-service=$(POD_NAMESPACE)/ingress-nginx-controller
        - --election-id=ingress-controller-leader
        - --controller-class=k8s.io/ingress-nginx
        - --ingress-class=nginx
        - --configmap=$(POD_NAMESPACE)/nginx-configuration
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443
        livenessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          periodSeconds: 5
        resources:
          requests:
            cpu: 100m
            memory: 90Mi
          limits:
            cpu: 1
            memory: 1Gi
The workflow of the Nginx Ingress Controller: a user creates or updates an Ingress resource; the controller observes the change and renders a new Nginx configuration file; Nginx reloads the configuration; traffic enters Nginx through a Service of type NodePort or LoadBalancer.
2.2 Configuration Methods
The Nginx Ingress Controller supports two configuration levels: per-Ingress annotations and cluster-wide defaults via a ConfigMap.
Annotation example:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cafe-ingress
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/limit-rps: "100"
    nginx.ingress.kubernetes.io/limit-connections: "50"
    nginx.ingress.kubernetes.io/proxy-body-size: "50m"
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "30"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "60"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "60"
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  ingressClassName: nginx
  rules:
  - host: cafe.example.com
    http:
      paths:
      - path: /tea
        pathType: Exact
        backend:
          service:
            name: tea-svc
            port:
              number: 80
      - path: /coffee
        pathType: Prefix
        backend:
          service:
            name: coffee-svc
            port:
              number: 80
Commonly used annotations:
| Annotation | Purpose | Example value |
|---|---|---|
| nginx.ingress.kubernetes.io/ssl-redirect | Redirect HTTP to HTTPS | true/false |
| nginx.ingress.kubernetes.io/limit-rps | Requests per second per client IP | 100 |
| nginx.ingress.kubernetes.io/limit-connections | Concurrent connections per client IP | 50 |
| nginx.ingress.kubernetes.io/proxy-body-size | Maximum request body size | 50m |
| nginx.ingress.kubernetes.io/proxy-read-timeout | Backend read timeout (seconds) | 60 |
| nginx.ingress.kubernetes.io/rewrite-target | URL rewrite target | / |
| nginx.ingress.kubernetes.io/use-regex | Enable regex path matching | true/false |
| nginx.ingress.kubernetes.io/canary | Mark this Ingress as a canary | true |
| nginx.ingress.kubernetes.io/canary-weight | Canary traffic weight (percent) | 50 |
2.3 Global Configuration via ConfigMap
A ConfigMap sets cluster-wide Nginx defaults:
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
data:
  proxy-body-size: "50m"
  proxy-connect-timeout: "30"
  proxy-read-timeout: "60"
  proxy-send-timeout: "60"
  use-forwarded-headers: "true"
  compute-full-forwarded-for: "true"
  use-proxy-protocol: "false"
  enable-underscores-in-headers: "true"
  large-client-header-buffers: "4 16k"
  client-header-buffer-size: "4k"
  keep-alive: "75"
  keep-alive-requests: "1000"
  upstream-keepalive-connections: "50"
  upstream-keepalive-timeout: "60"
  upstream-keepalive-requests: "10000"
  enable-brotli: "true"
  use-gzip: "true"
  gzip-level: "6"
  gzip-types: "application/json application/javascript application/xml text/css text/html text/javascript"
2.4 TLS Configuration
The Nginx Ingress Controller supports several ways to configure TLS.
Basic TLS configuration:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tls-ingress
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - example.com
    - www.example.com
    secretName: example-tls
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend
            port:
              number: 80
Creating a self-signed certificate:
# Generate a private key
openssl genrsa -out tls.key 2048
# Generate a self-signed certificate
openssl req -new -x509 -key tls.key -out tls.crt -days 365 -subj "/CN=example.com/O=MyOrg"
# Create the TLS Secret
kubectl create secret tls example-tls --cert=tls.crt --key=tls.key
# Or apply it declaratively
kubectl create secret tls example-tls --cert=tls.crt --key=tls.key --dry-run=client -o yaml | kubectl apply -f -
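In production, certificates are usually issued and renewed automatically rather than created by hand. A minimal sketch, assuming cert-manager is installed and a ClusterIssuer named letsencrypt-prod exists (both are assumptions, not part of this article's setup):

```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: example-tls
  namespace: default
spec:
  secretName: example-tls    # Secret the Ingress references under spec.tls
  dnsNames:
  - example.com
  - www.example.com
  issuerRef:
    name: letsencrypt-prod   # hypothetical issuer name
    kind: ClusterIssuer
```

cert-manager then keeps the referenced Secret populated and rotates the certificate before expiry.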
2.5 Canary Releases
The Nginx Ingress Controller supports canary releases based on weights or headers:
# Primary Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: main-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: main-service
            port:
              number: 80
---
# Canary Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: canary-ingress
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "30"
spec:
  ingressClassName: nginx
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: canary-service
            port:
              number: 80
Header-based canary:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: header-canary-ingress
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-by-header: "X-Canary"
    nginx.ingress.kubernetes.io/canary-by-header-value: "always"
spec:
  ingressClassName: nginx
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: canary-service
            port:
              number: 80
2.6 Rate Limiting
The Nginx Ingress Controller offers rate limiting along several dimensions:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rate-limit-ingress
  annotations:
    nginx.ingress.kubernetes.io/limit-rps: "100"
    nginx.ingress.kubernetes.io/limit-rpm: "1000"
    nginx.ingress.kubernetes.io/limit-connections: "50"
    nginx.ingress.kubernetes.io/limit-burst-multiplier: "5"
spec:
  ingressClassName: nginx
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: api-service
            port:
              number: 80
Global defaults and custom limit zones are configured through the ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
data:
  # Bandwidth limits applied to all locations ("0" disables the limit)
  limit-rate: "0"
  limit-rate-after: "0"
  # Custom limit_req/limit_conn zones can be injected via http-snippet
  http-snippet: |
    limit_req_zone $binary_remote_addr zone=api:10m rate=10r/s;
    limit_conn_zone $binary_remote_addr zone=addr:10m;
Chapter 3: Traefik
3.1 Architecture and Principles
Traefik is a modern cloud-native reverse proxy and load balancer built around dynamic configuration. Unlike classic Nginx, configuration changes do not require a process reload: Traefik watches the Kubernetes API (or backends such as Consul) and updates its routing rules in real time.
Traefik's architecture is composed of several building blocks: Providers pull configuration from different sources; Routers match incoming requests to services; Middlewares modify requests before they reach a service (authentication, rate limiting, retries, and so on); Services forward requests to the actual backends.
Deploying Traefik as an Ingress Controller:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: traefik
  namespace: ingress
spec:
  replicas: 2
  selector:
    matchLabels:
      app: traefik
  template:
    metadata:
      labels:
        app: traefik
    spec:
      serviceAccountName: traefik-ingress-controller
      containers:
      - name: traefik
        image: traefik:v3.0.4
        args:
        - --api.insecure
        - --ping
        - --accesslog
        - --entrypoints.http.address=:80
        - --entrypoints.https.address=:443
        - --providers.kubernetesingress
        - --providers.kubernetescrd
        - --log.level=INFO
        - --metrics.prometheus
        ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443
        - name: admin
          containerPort: 8080
        livenessProbe:
          httpGet:
            path: /ping
            port: 8080
          initialDelaySeconds: 10
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /ping
            port: 8080
          periodSeconds: 5
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 500m
            memory: 512Mi
3.2 The IngressRoute Resource
Traefik ships its own CRD, IngressRoute, which is more expressive than the native Kubernetes Ingress and exposes more configuration options:
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: demo-ingressroute
  namespace: default
spec:
  entryPoints:
  - web
  - websecure
  routes:
  - match: Host(`demo.example.com`) && PathPrefix(`/api`)
    kind: Rule
    services:
    - name: api-service
      port: 80
    middlewares:
    - name: strip-api-prefix
    - name: rate-limit
  - match: Host(`demo.example.com`) && PathPrefix(`/`)
    kind: Rule
    services:
    - name: frontend-service
      port: 80
  tls:
    secretName: demo-tls-cert
3.3 Middleware
Traefik's Middleware is a powerful request-processing building block that adds authentication, retries, redirects, rate limiting, and more at the routing layer.
Basic-auth middleware:
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: basic-auth
  namespace: default
spec:
  basicAuth:
    secret: basic-auth-secret
---
apiVersion: v1
kind: Secret
metadata:
  name: basic-auth-secret
  namespace: default
type: Opaque
stringData:
  users: |
    admin:$apr1$H6uskkkW$IgXLP6ewTrSuBkTrqE8wj/
IP allow-list middleware (the v2 ipWhiteList option was renamed ipAllowList in Traefik v3):
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: ip-allowlist
  namespace: default
spec:
  ipAllowList:
    sourceRange:
    - "10.0.0.0/8"
    - "192.168.1.0/24"
    ipStrategy:
      depth: 0
Rate-limit middleware:
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: rate-limit
  namespace: default
spec:
  rateLimit:
    average: 100
    burst: 50
    period: 1s
Retry middleware:
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: retry
  namespace: default
spec:
  retry:
    attempts: 3
    initialInterval: 100ms
Redirect middleware:
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: https-redirect
  namespace: default
spec:
  redirectScheme:
    scheme: https
    permanent: true
StripPrefix middleware:
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: strip-prefix
  namespace: default
spec:
  stripPrefix:
    prefixes:
    - /api/v1
    - /static
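Middlewares defined as CRDs can also be attached to a plain Kubernetes Ingress by annotation; the reference format is `<namespace>-<name>@kubernetescrd`. A sketch (service name illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-ingress
  annotations:
    # Chain middlewares in order: strip the prefix, then rate-limit
    traefik.ingress.kubernetes.io/router.middlewares: default-strip-prefix@kubernetescrd,default-rate-limit@kubernetescrd
spec:
  ingressClassName: traefik
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: api-service
            port:
              number: 80
```

This lets teams keep the portable Ingress resource while still using Traefik-specific middleware.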
3.4 Dynamic Service Discovery
Traefik supports many providers: besides Kubernetes Ingress and CRDs, it can pull configuration from Consul, Etcd, ZooKeeper, and other external sources.
Consul provider example:
# Register a service in Consul
curl -X PUT http://consul:8500/v1/agent/service/register \
  -d '{
    "ID": "web-1",
    "Name": "web",
    "Address": "10.0.1.100",
    "Port": 8080,
    "Check": {
      "HTTP": "http://10.0.1.100:8080/health",
      "Interval": "10s"
    }
  }'
On the Traefik side, the Consul Catalog provider belongs to the static configuration; one option is to mount it from a ConfigMap as traefik.yml and pass it via --configfile:
apiVersion: v1
kind: ConfigMap
metadata:
  name: traefik-config
  namespace: ingress
data:
  traefik.yml: |
    providers:
      consulCatalog:
        endpoint:
          address: consul:8500
        prefix: traefik
        requireConsistent: true
        refreshInterval: 10s
        exposedByDefault: false
3.5 TCP Services
Traefik is not limited to HTTP/HTTPS; it also proxies TCP and UDP services:
apiVersion: traefik.io/v1alpha1
kind: IngressRouteTCP
metadata:
  name: mysql-ingressroute
  namespace: default
spec:
  entryPoints:
  - mysql
  routes:
  - match: HostSNI(`mysql.example.com`)
    services:
    - name: mysql-service
      port: 3306
  tls:
    passthrough: true
---
apiVersion: v1
kind: Service
metadata:
  name: mysql-service
  namespace: default
spec:
  type: ClusterIP
  ports:
  - name: mysql
    port: 3306
    targetPort: 3306
The TCP entrypoints themselves are part of Traefik's static configuration, e.g. a mounted traefik.toml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: traefik-config
  namespace: ingress
data:
  traefik.toml: |
    [entryPoints]
      [entryPoints.mysql]
        address = ":3306"
      [entryPoints.redis]
        address = ":6379"
    [serversTransport]
      insecureSkipVerify = true
Chapter 4: Envoy Proxy
4.1 Architecture and Principles
Envoy is an open-source edge and service proxy originally developed at Lyft and designed for cloud-native architectures. Its architecture is built around the xDS protocol: routing rules are updated dynamically from a configuration source, without restarting the process.
Envoy's core concepts include: a Listener accepts inbound traffic; a Route defines routing rules; a Cluster describes a backend service; an Endpoint is a concrete Pod address; Filters form the request-processing chain; Health Checks probe backend availability.
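To make the Listener/Route/Cluster model concrete, here is a minimal standalone Envoy bootstrap sketch (the backend name app-service is illustrative; in a Contour deployment the equivalent configuration is delivered dynamically over xDS rather than written by hand):

```yaml
static_resources:
  listeners:
  - name: listener_http                      # Listener: accepts inbound traffic
    address:
      socket_address: { address: 0.0.0.0, port_value: 8080 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          route_config:                      # Route: matches requests to clusters
            virtual_hosts:
            - name: backend
              domains: ["*"]
              routes:
              - match: { prefix: "/" }
                route: { cluster: app }
          http_filters:                      # Filters: the request-processing chain
          - name: envoy.filters.http.router
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
  clusters:
  - name: app                                # Cluster: a backend service
    type: STRICT_DNS
    load_assignment:
      cluster_name: app
      endpoints:                             # Endpoints: concrete addresses
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: app-service, port_value: 80 }
```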
In Kubernetes, Envoy is usually deployed through projects such as Contour or Istio; Contour is the most common Envoy-based Ingress Controller.
4.2 Deploying Contour
Contour is an Envoy-based Kubernetes Ingress Controller, originally developed by Heptio (now part of VMware).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: contour
  namespace: projectcontour
spec:
  replicas: 2
  selector:
    matchLabels:
      app: contour
  template:
    metadata:
      labels:
        app: contour
    spec:
      containers:
      - name: contour
        image: ghcr.io/projectcontour/contour:v1.28.2
        command:
        - contour
        - serve
        - --xds-address=0.0.0.0
        - --xds-port=8001
        - --envoy-http-port=8080
        - --envoy-https-port=8443
        - --config-path=/config/contour.yaml
        ports:
        - name: xds
          containerPort: 8001
          protocol: TCP
        - name: http
          containerPort: 8080
          protocol: TCP
        - name: https
          containerPort: 8443
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8001
          initialDelaySeconds: 5
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /healthz
            port: 8001
          periodSeconds: 5
      - name: envoy
        image: docker.io/envoyproxy/envoy:v1.29.2
        command:
        - envoy
        - -c
        - /config/envoy.json
        - --service-cluster
        - projectcontour
        - --service-node
        - $(NODE_NAME)
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        ports:
        - name: http
          containerPort: 8080
          protocol: TCP
        - name: https
          containerPort: 8443
          protocol: TCP
Contour also supports the Gateway API, the emerging standard for Kubernetes networking:
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: prod-gateway
  namespace: ingress
spec:
  gatewayClassName: contour
  listeners:
  - name: http
    port: 80
    protocol: HTTP
    allowedRoutes:
      namespaces:
        from: Same
  - name: https
    port: 443
    protocol: HTTPS
    allowedRoutes:
      namespaces:
        from: Same
    tls:
      mode: Terminate
      certificateRefs:
      - name: demo-cert
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: demo-route
  namespace: default
spec:
  parentRefs:
  - name: prod-gateway
    namespace: ingress
  hostnames:
  - "demo.example.com"
  rules:
  - backendRefs:
    - name: demo-service
      port: 80
4.3 The HTTPProxy Resource
Contour extends the Kubernetes Ingress model with a more powerful CRD, HTTPProxy:
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: demo-proxy
  namespace: default
spec:
  virtualhost:
    fqdn: demo.example.com
    corsPolicy:
      allowCredentials: true
      allowHeaders:
      - X-Custom-Header
      allowMethods:
      - GET
      - POST
      - PUT
      - DELETE
      allowOrigin:
      - "https://allowed.example.com"
      maxAge: "24h"
  routes:
  - conditions:
    - prefix: /api
    services:
    - name: api-service
      port: 80
    healthCheckPolicy:
      path: /health
      intervalSeconds: 10
      timeoutSeconds: 5
      unhealthyThresholdCount: 3
      healthyThresholdCount: 2
    loadBalancerPolicy:
      strategy: WeightedLeastRequest
    retryPolicy:
      retryOn:
      - gateway-error
      - connect-failure
      - reset
      count: 3
      perTryTimeout: 10s
  - conditions:
    - prefix: /
    services:
    - name: frontend-service
      port: 80
    rateLimitPolicy:
      local:
        requests: 100
        unit: minute
4.4 Rate Limiting
Envoy supports several rate-limiting strategies: global rate limiting uses a token-bucket quota shared across all Envoy instances via an external rate limit service; local rate limiting is enforced independently by each Envoy instance; descriptor-based limits can combine multiple request attributes.
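With Contour, the global variant requires an external rate limit service (RLS), which is registered as an ExtensionService. A sketch, assuming an RLS such as envoyproxy/ratelimit is already deployed behind a Service named ratelimit (an assumption, not part of this article's setup):

```yaml
apiVersion: projectcontour.io/v1alpha1
kind: ExtensionService
metadata:
  name: ratelimit
  namespace: projectcontour
spec:
  protocol: h2            # the RLS speaks gRPC
  services:
  - name: ratelimit       # hypothetical Service fronting the rate limit service
    port: 8081
```

Contour's own configuration then references this ExtensionService as the global rate limit service; HTTPProxy resources only declare which descriptors to send, as shown next.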
Global rate-limiting example. The HTTPProxy declares the descriptors sent to the rate limit service; the actual limits (e.g. 100 requests per minute per client IP) live in the rate limit service's own configuration:
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: ratelimit-proxy
  namespace: default
spec:
  virtualhost:
    fqdn: api.example.com
    rateLimitPolicy:
      global:
        descriptors:
        - entries:
          - remoteAddress: {}
          - genericKey:
              value: api
  routes:
  - services:
    - name: api-service
      port: 80
Retry and timeout behavior can also be tuned with Contour annotations on a standard Ingress resource:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-ingress
  annotations:
    projectcontour.io/num-retries: "3"
    projectcontour.io/retry-on: "gateway-error,connect-failure,reset"
    projectcontour.io/per-try-timeout: "10s"
spec:
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: api-service
            port:
              number: 80
4.5 Health Checks and Load Balancing
Envoy supports both active and passive (outlier-detection) health checking.
Active health check configuration:
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: healthcheck-proxy
  namespace: default
spec:
  virtualhost:
    fqdn: api.example.com
  routes:
  - services:
    - name: api-service
      port: 80
    healthCheckPolicy:
      path: /healthz
      intervalSeconds: 10
      timeoutSeconds: 5
      unhealthyThresholdCount: 3
      healthyThresholdCount: 2
Load-balancing policy:
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: lb-proxy
  namespace: default
spec:
  virtualhost:
    fqdn: app.example.com
  routes:
  - loadBalancerPolicy:
      strategy: Random
    services:
    - name: app-service
      port: 80
Supported load-balancing strategies:
| Strategy | Description |
|---|---|
| RoundRobin | Round robin (default) |
| WeightedLeastRequest | Weighted least requests |
| Random | Random selection |
| Cookie | Cookie-based session affinity |
| RequestHash | Hash-based affinity, e.g. on a request header |
Chapter 5: Head-to-Head Comparison
5.1 Performance
Performance is a key selection criterion. The comparison below is based on publicly available benchmarks; absolute numbers vary heavily with workload and configuration.
Plain Nginx delivers the best raw numbers, with the lowest request latency and the highest throughput, thanks to years of optimization and its event-driven architecture.
Traefik v3 improved significantly over v2 but still trails Nginx slightly under high concurrency. Its dynamic configuration adds a small overhead that is negligible for low and medium traffic.
Envoy is implemented in C++ and performs very well. Its strength is extensibility: multiple filters can be added to the request path without a significant performance penalty.
Indicative test numbers (single instance, 8 CPU cores, 16 GB RAM):
| Solution | QPS | P99 latency | Memory |
|---|---|---|---|
| Nginx Ingress | 50,000+ | 5ms | 150MB |
| Traefik v3 | 35,000+ | 8ms | 200MB |
| Envoy (Contour) | 40,000+ | 6ms | 300MB |
5.2 Feature Comparison
| Feature | Nginx Ingress | Traefik | Envoy (Contour) |
|---|---|---|---|
| L7 routing | Yes | Yes | Yes |
| L4 proxying | Yes | Yes | Yes |
| WebSocket | Yes | Yes | Yes |
| gRPC | Yes | Yes | Yes |
| TCP/UDP | Yes | Yes | Yes |
| TLS termination | Yes | Yes | Yes |
| Automatic HTTPS | Yes (cert-manager) | Yes (built-in ACME) | Yes (cert-manager) |
| Rate limiting | Annotations/global | Middleware | Local/global |
| Authentication | Annotations | Middleware | CRD-based |
| Retries/timeouts | Annotations | Middleware | CRD-based |
| Canary releases | Annotations | TraefikService (weighted) | HTTPProxy weights |
| Circuit breaking | Not built in | CircuitBreaker middleware | Yes |
| Dynamic config | Reload | Hot update | Hot update |
| Metrics | Prometheus | Prometheus/Datadog | Prometheus |
| Tracing | Zipkin/Jaeger | OpenTelemetry | OpenTelemetry/Zipkin |
5.3 Configuration Complexity
Nginx Ingress configuration is fairly intuitive, with a gentle learning curve for engineers already familiar with Nginx. The wealth of online resources and production experience makes troubleshooting comparatively easy.
Traefik's configuration is the most concise: the declarative IngressRoute and Middleware design is easy to reason about, and automatic service discovery removes much of the manual work.
Envoy is the most capable of the three and also the most complex to configure. Contour simplifies it considerably, but advanced features still require understanding Envoy's architecture. For strong teams that need fine-grained control, Envoy is the right tool.
5.4 Ecosystem and Community
Nginx Ingress has the most mature enterprise ecosystem, with extensive production case studies and documentation. F5/NGINX also offers a commercially supported NGINX Ingress Controller built on NGINX Plus.
Traefik is maintained by Traefik Labs, with an active community and high-quality documentation; Traefik Enterprise adds further commercial features.
Envoy is a graduated CNCF project and the data plane of the Istio service mesh. It dominates the service-mesh space, so choosing Envoy buys into a large part of the cloud-native ecosystem.
Chapter 6: Selection Advice and Best Practices
6.1 Choosing by Scenario
Small and medium clusters (fewer than 1,000 Pods)
Recommended: Nginx Ingress Controller or Traefik
Why: simple configuration, good documentation, mature ecosystem. Nginx comfortably handles low-to-medium traffic, and these workloads rarely need elaborate canary releases or traffic management.
Large, high-traffic clusters
Recommended: Nginx Ingress Controller or Envoy
Why: performance and stability dominate. Nginx wins for simple routing; Envoy wins when fine-grained traffic management is required.
Fine-grained traffic management
Recommended: Envoy (Contour)
Why: Envoy's xDS protocol and rich filter chain cover complex requirements such as circuit breaking, retries, rate limiting, and fault injection.
Service mesh integration
Recommended: Envoy
Why: Istio and several other meshes use Envoy as their data plane (Linkerd is a notable exception with its own Rust-based proxy). An Envoy-based Ingress Controller keeps the stack consistent with an existing mesh.
6.2 Deployment Architecture
One common production pattern is a hostNetwork DaemonSet, so that every node can act as an entry point:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ingress
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app: nginx-ingress
  template:
    metadata:
      labels:
        app: nginx-ingress
    spec:
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: controller
        image: registry.k8s.io/ingress-nginx/controller:v1.9.4
        args:
        - /nginx-ingress-controller
        - --configmap=$(POD_NAMESPACE)/nginx-configuration
        - --report-node-internal-ip-address
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        ports:
        - name: http
          hostPort: 80
          containerPort: 80
        - name: https
          hostPort: 443
          containerPort: 443
For a Deployment-based install, pair it with an HPA for high availability:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-ingress
  namespace: ingress-nginx
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-ingress-controller
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 80
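A PodDisruptionBudget complements the HPA by keeping at least one controller replica alive during node drains and rolling upgrades (label selector assumed to match the Deployment above):

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: nginx-ingress-pdb
  namespace: ingress-nginx
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app: nginx-ingress
```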
6.3 Security Configuration
TLS hardening example:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: secure-ingress
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/hsts-max-age: "31536000"
    nginx.ingress.kubernetes.io/hsts-include-subdomains: "true"
    nginx.ingress.kubernetes.io/proxy-buffer-size: "128k"
    nginx.ingress.kubernetes.io/server-snippet: |
      add_header X-Frame-Options "SAMEORIGIN" always;
      add_header X-Content-Type-Options "nosniff" always;
      add_header X-XSS-Protection "1; mode=block" always;
      add_header Referrer-Policy "no-referrer-when-downgrade" always;
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - example.com
    secretName: example-tls
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend
            port:
              number: 80
6.4 Monitoring
Exposing Prometheus metrics is a must in production:
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller-metrics
  namespace: ingress-nginx
  annotations:
    prometheus.io/port: "10254"
    prometheus.io/scrape: "true"
spec:
  ports:
  - name: metrics
    port: 10254
    targetPort: metrics
  selector:
    app: nginx-ingress
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: nginx-ingress
  namespace: monitoring
spec:
  selector:
    matchLabels:
      app: nginx-ingress
  endpoints:
  - port: metrics
    interval: 15s
  namespaceSelector:
    matchNames:
    - ingress-nginx
Recommended community Grafana dashboards:
Nginx Ingress Controller: ID 9614
Traefik: ID 10162
Envoy: ID 13261
Chapter 7: Troubleshooting Guide
7.1 Checking Ingress Controller Status
# Tail the controller logs
kubectl logs -n ingress-nginx -l app=nginx-ingress -f
# Show recent events
kubectl get events -n ingress-nginx --sort-by='.lastTimestamp'
# Inspect the rendered Nginx configuration
kubectl exec -n ingress-nginx deploy/nginx-ingress-controller -- cat /etc/nginx/nginx.conf
7.2 Common Problems and Fixes
Problem 1: the Ingress is unreachable and returns 404
Diagnosis steps:
# 1. Does the Ingress resource exist?
kubectl get ingress -n default
# 2. Check the Ingress rules and address
kubectl describe ingress demo-ingress -n default
# 3. Are the Endpoints populated?
kubectl get endpoints -n default
# 4. Check the Service definition
kubectl get svc -n default
# 5. Are the backend Pods healthy?
kubectl get pods -n default -l app=demo
# 6. Check DNS resolution from inside the cluster
kubectl exec -it test-pod -- nslookup demo-service
Problem 2: certificate errors or TLS not working
# Does the Secret exist?
kubectl get secret demo-tls -n default
# Inspect the certificate (note the escaped dot in the jsonpath key)
kubectl get secret demo-tls -n default -o jsonpath='{.data.tls\.crt}' | base64 -d | openssl x509 -text -noout
# Check the expiry date
kubectl get secret demo-tls -n default -o jsonpath='{.data.tls\.crt}' | base64 -d | openssl x509 -enddate -noout
# Check cert-manager status (if used)
kubectl get certificate -n default
kubectl describe certificate demo-cert -n default
Problem 3: rate limiting not taking effect
# Verify the rate-limit annotations are set correctly
kubectl describe ingress rate-limit-ingress -n default
# Search the controller logs for limit-related messages
kubectl logs -n ingress-nginx -l app=nginx-ingress | grep -i limit
# Verify the rate-limit settings in the ConfigMap
kubectl get configmap nginx-configuration -n ingress-nginx -o yaml
7.3 Diagnosing Performance Problems
# Check resource usage
kubectl top pods -n ingress-nginx
# Scrape the controller's metrics endpoint (connection counts, etc.)
kubectl exec -it nginx-ingress-controller-xxx -n ingress-nginx -- wget -qO- http://localhost:10254/metrics
# Inspect the upstream configuration (Nginx Ingress)
kubectl exec -it nginx-ingress-controller-xxx -n ingress-nginx -- grep -A 50 "upstream" /etc/nginx/nginx.conf
7.4 Log Analysis
Configuring the Nginx Ingress access-log format:
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
data:
  log-format-upstream: |
    '$remote_addr $remote_user [$time_local] "$request" '
    '$status $body_bytes_sent "$http_referer" '
    '"$http_user_agent" "$http_x_forwarded_for" '
    '$request_time $upstream_response_time $upstream_connect_time '
    'upstream=$proxy_upstream_name'
Extracting slow requests from the logs:
# Extract requests slower than 1s (assumes $request_time is the last numeric field in your log format)
kubectl logs -n ingress-nginx -l app=nginx-ingress | \
  awk '{if ($NF > 1) print}' | sort -k9 -nr | head -20
Conclusion
Nginx Ingress Controller, Traefik, and Envoy are the three mainstream Ingress Controller options today, each with its strengths: Nginx offers the best raw performance and the most mature ecosystem; Traefik the most concise configuration and the strongest dynamic capabilities; Envoy the richest feature set and the tightest service-mesh integration.
When choosing, weigh your team's technology stack, performance requirements, feature requirements, and operational capacity. For most scenarios the Nginx Ingress Controller is the safe choice; teams that value concise configuration and dynamic behavior should consider Traefik; and where fine-grained traffic control or a service mesh is on the roadmap, Envoy is the natural fit.
Whichever you choose, build solid monitoring and alerting, define operational runbooks for the Ingress Controller, and rehearse failures regularly so you can respond and recover quickly in production.
References
Kubernetes Ingress documentation: https://kubernetes.io/docs/concepts/services-networking/ingress/
Nginx Ingress Controller documentation: https://kubernetes.github.io/ingress-nginx/
Traefik documentation: https://doc.traefik.io/traefik/
Contour documentation: https://projectcontour.io/docs/
Envoy documentation: https://www.envoyproxy.io/docs/envoy/latest/
Original title: Kubernetes Ingress Controller 對(duì)比解析:Nginx、Traefik、Envoy 特性與性能
Source: WeChat official account 馬哥Linux運(yùn)維 (magedu-Linux). Please credit the source when republishing.