Network Isolation Strategy - NetworkPolicy
In a UK8S cluster, all Pods are interconnected by default: any Pod can receive requests from, and send requests to, any other Pod in the cluster.
However, in practical business scenarios, network isolation is essential for security. This article describes how to implement network isolation in UK8S.
Pre-installation Check
⚠️ Before installing the Calico network isolation plugin, make sure that the CNI version is greater than or equal to 19.12.1; otherwise the installation will delete the original network configuration on the Node and make the Pod network unavailable. For the CNI version check and upgrade, please refer to: CNI Network Plugin Upgrade.
The Kubernetes version must be >= 1.16.4 and <= 1.24.12, and the cluster needs external network access to pull images outside of Uhub.
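To confirm the cluster version, a quick check with kubectl:

# Print the client and cluster (server) versions
kubectl version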
Confirm whether the ipamd component is used in the cluster:
kubectl -n kube-system get ds cni-vpc-ipamd
If it is not in use, you can skip the following check.
If ipamd is in use, confirm that its Calico network policy support is enabled.
Use the following command to check whether the parameter --calicoPolicyFlag is set to true:
kubectl -n kube-system get ds cni-vpc-ipamd -o=jsonpath='{.spec.template.spec.containers[0].args}{"\t"}{"\n"}'
["--availablePodIPLowWatermark=3","--availablePodIPHighWatermark=50","--calicoPolicyFlag=true","--cooldownPeriodSeconds=30"]
If the parameter is not true, use the following command to enable it:
kubectl -n kube-system patch ds cni-vpc-ipamd -p '{"spec":{"template":{"spec":{"containers":[{"name":"cni-vpc-ipamd","args":["--availablePodIPLowWatermark=3","--availablePodIPHighWatermark=50","--calicoPolicyFlag=true","--cooldownPeriodSeconds=30"]}]}}}}'
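The patch triggers a rollout of the ipamd DaemonSet (assuming the default RollingUpdate strategy); you can watch it complete and then re-run the check above:

kubectl -n kube-system rollout status ds/cni-vpc-ipamd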
1. Plugin Installation
To implement network isolation in UK8S, you need to deploy the Felix and Typha components of Calico. Both components are containerized and can be installed directly into UK8S with kubectl.
UK8S provides two versions of the Calico components for network isolation, compatible with the following UK8S versions. Please choose the one that matches your cluster.
| UK8S version | Calico version |
| --- | --- |
| <=1.24.12 | 3.10.0 |
| 1.26.7 | 3.25.2 |
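The installation itself is a single kubectl apply of the Calico manifest matching your UK8S version; a minimal sketch (calico-policy.yaml is a placeholder for the manifest file provided for your version):

# Apply the Felix/Typha manifest for your UK8S version
# (calico-policy.yaml is a placeholder file name)
kubectl apply -f calico-policy.yaml

# Confirm the Calico components start in kube-system
kubectl -n kube-system get pods | grep calico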
2. NetworkPolicy Rule Analysis
After installing the Calico network isolation components, we can create NetworkPolicy objects in UK8S to control access to Pods, as shown below.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - ipBlock:
            cidr: 172.17.0.0/16
            except:
              - 172.17.1.0/24
        - namespaceSelector:
            matchLabels:
              project: myproject
        - podSelector:
            matchLabels:
              role: frontend
      ports:
        - protocol: TCP
          port: 6379
  egress:
    - to:
        - ipBlock:
            cidr: 10.0.0.0/24
      ports:
        - protocol: TCP
          port: 5978
Below is a brief description of the function of each parameter:
- spec.podSelector: Determines the scope of the NetworkPolicy, i.e., which Pods it applies to. The example above takes effect on Pods with the label role=db in the default namespace. Note that NetworkPolicy is a namespace-scoped resource.
- spec.ingress.from: Inbound access control, i.e., which sources are allowed. It supports three selector types: ipBlock, namespaceSelector, and podSelector. The example above allows requests from the address range 172.17.0.0/16 except 172.17.1.0/24, or from any Pod in namespaces with the label project=myproject, or from Pods with the label role=frontend in the default namespace. The multiple entries under from are in a logical OR relationship: access is allowed if any of the three conditions is met. The namespaceSelector field is used to match request sources across namespaces.
- spec.ingress.ports: Declares the ports open for access; if omitted, all ports are open by default. The example above allows access to port 6379 only. from and ports are in a logical AND relationship: the sources allowed by the from rules may access only port 6379 (TCP).
- spec.egress: Declares the allowed destinations, analogous to from. The example above allows requests only to the address range 10.0.0.0/24, and only to port 5978 (TCP) of addresses in that range.
From the above we can see that NetworkPolicy is a whitelist mechanism: once a Pod is selected by a NetworkPolicy, all traffic of the declared policy types is denied unless explicitly allowed.
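As an illustration of the whitelist behavior, the following minimal policy (standard Kubernetes NetworkPolicy semantics, not part of the example above) denies all inbound traffic to every Pod in a namespace:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {} # an empty selector matches every Pod in the namespace
  policyTypes:
    - Ingress # Ingress is declared but no rules are given, so nothing is allowed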
3. Examples
3.1 Limiting a group of Pods to only access resources within the VPC (no external network access)
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: pod-egress-policy
spec:
  podSelector:
    matchLabels:
      pod: internal-only
  egress:
    - to:
        - ipBlock:
            cidr: 10.9.0.0/16 # the VPC segment; replace with your own VPC CIDR
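A quick way to verify the policy, assuming a Pod named test-pod carrying the label pod=internal-only (the Pod name and target addresses below are placeholders):

# An address inside the VPC segment should be reachable
kubectl exec test-pod -- curl -sS -m 5 http://10.9.0.100

# An external address should time out, since it is not whitelisted;
# an IP literal is used so DNS does not affect the test
kubectl exec test-pod -- curl -sS -m 5 http://198.51.100.1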
3.2 Limiting the source IP of the Service exposed to the public network
First, create an application that is exposed to the public network via an external ULB4:
apiVersion: v1
kind: Service
metadata:
  name: ucloud-nginx
  labels:
    app: ucloud-nginx
spec:
  type: LoadBalancer
  # Local preserves the client source IP, which the ipBlock rule below relies on
  externalTrafficPolicy: Local
  ports:
    - protocol: TCP
      port: 80
  selector:
    app: ucloud-nginx
---
apiVersion: v1
kind: Pod
metadata:
  name: test-nginx
  labels:
    app: ucloud-nginx
spec:
  containers:
    - name: nginx
      image: uhub.ucloud-global.com/ucloud/nginx:1.9.2
      ports:
        - containerPort: 80
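After applying the manifest, the ULB IP appears in the Service's EXTERNAL-IP column:

# The EXTERNAL-IP column shows the ULB IP once it is assigned
kubectl get svc ucloud-nginx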
After the application is created, it can be accessed directly via the external ULB IP. Now we restrict it so that it is only reachable from the office environment, whose egress IP is assumed to be 106.10.10.10.
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: access-nginx
spec:
  podSelector:
    matchLabels:
      app: ucloud-nginx
  ingress:
    - from:
        - ipBlock:
            cidr: 106.10.10.10/32 # When validating, change this to your client's egress IP.
        - ipBlock:
            cidr: 10.23.248.0/21 # Regional public service segment; without it the ULB health check fails and the isolation policy will not take effect, see below.
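To verify, request the ULB IP from the office network and from any other network (106.10.10.10 is the assumed office egress IP from above; <ULB-IP> is a placeholder):

# From the office egress IP 106.10.10.10: the request succeeds
curl -m 5 http://<ULB-IP>

# From any other source IP: the request times out,
# because only the office IP and the public service segment are whitelisted
curl -m 5 http://<ULB-IP>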
4. Allow VPC Public Service Segment
The public service segment is mainly used for internal network DNS, ULB health checks, etc. When configuring a NetworkPolicy, it is recommended to always allow your region's public service segment.
For the public service segments in each region, please refer to the VPC documentation: VPC Segment Usage Restrictions
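In practice, allowing the segment means adding one more ipBlock entry to each ingress rule, as in section 3.2; a sketch (10.23.248.0/21 is the example segment used above; substitute your region's actual segment):

ingress:
  - from:
      - ipBlock:
          cidr: 10.23.248.0/21 # your region's public service segment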