Kubernetes
Kubernetes ↗ is a container orchestration tool that helps deploy applications onto physical or virtual machines, scale the deployment to meet traffic demands, and push updates without downtime. The Kubernetes cluster, or environment, where the application instances are running is connected internally through a private network. You can install the cloudflared daemon inside the Kubernetes cluster to connect applications in the cluster to Cloudflare.
This guide will cover how to expose a Kubernetes service to the public Internet using a remotely-managed Cloudflare Tunnel. For the purposes of this example, we will deploy a basic web application alongside cloudflared in Google Kubernetes Engine (GKE). The same principles apply to any other Kubernetes environment (such as minikube, kubeadm, or a cloud-based Kubernetes service) where cloudflared can connect to Cloudflare's network.

As shown in the diagram, we recommend setting up cloudflared as an adjacent deployment ↗ to the application deployments. Having a separate Kubernetes deployment for cloudflared allows you to scale cloudflared independently of the application. In the cloudflared deployment, you can spin up multiple replicas running the same Cloudflare Tunnel; there is no need to build a dedicated tunnel for each pod. Each cloudflared replica (pod) can reach all Kubernetes services in the cluster.
Once the cluster is connected to Cloudflare, you can configure Cloudflare Tunnel routes to control how cloudflared will proxy traffic to services within the cluster. For example, you may wish to publish certain Kubernetes applications to the Internet and restrict other applications to internal WARP client users.
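For the WARP-only pattern, one option is to add a private network route that maps your cluster's internal CIDR to the tunnel. The sketch below is not part of this deployment; it assumes you have authenticated cloudflared on a local machine with cloudflared tunnel login, and the tunnel name and CIDR are placeholders for your own values. Public hostnames, by contrast, are configured on the tunnel in the dashboard.
# Route the cluster's internal CIDR through the tunnel so that only WARP client users can reach it.
# Replace 10.0.0.0/8 and my-tunnel with your cluster CIDR and tunnel name.
cloudflared tunnel route ip add 10.0.0.0/8 my-tunnel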
- Install the gcloud CLI ↗ and kubectl CLI ↗.
- In the GCP console, create a new Kubernetes cluster.
- To connect to the cluster, select the three dots next to the cluster and then select Connect from the drop-down menu.
- Copy the command that appears and paste it into your local terminal.
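The copied command is generally similar to the sketch below; the cluster name, zone, and project ID are placeholders for your own values. It writes cluster credentials into your local kubeconfig so that subsequent kubectl commands target the new cluster.
# Fetch credentials for the cluster and update the local kubeconfig (placeholder values).
gcloud container clusters get-credentials my-cluster --zone us-central1-c --project my-project
# Confirm that kubectl can reach the cluster.
kubectl get nodes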
A pod is the basic deployable object that Kubernetes creates. It represents an instance of a running process in the cluster. The following .yml file (httpbin-app.yml) creates a deployment that runs the httpbin application, with two replicas to prevent downtime. The application will be accessible inside the cluster at web-service:80.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpbin-deployment
spec:
  selector:
    matchLabels:
      app: httpbin
  replicas: 2
  template:
    metadata:
      labels:
        app: httpbin
    spec:
      containers:
        - name: httpbin
          image: kennethreitz/httpbin:latest
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  selector:
    app: httpbin
  ports:
    - protocol: TCP
      port: 80
Run the following command to start the application inside the cluster.
kubectl create -f httpbin-app.yml
You can check the status of the pods through the console or with the kubectl get pods command.
kubectl get pods
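Once both replicas are ready, the output looks roughly like the following; the pod name suffixes are generated by Kubernetes and will differ in your cluster.
NAME                                  READY   STATUS    RESTARTS   AGE
httpbin-deployment-7d4f9bbf6c-2xkqp   1/1     Running   0          45s
httpbin-deployment-7d4f9bbf6c-9wzrt   1/1     Running   0          45s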
Applications must be packaged into a container image, such as a Docker image, before you can run them in Kubernetes. Kubernetes uses the image to spin up multiple instances of the application.
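The httpbin image used in this example is already published to Docker Hub. If you were packaging your own application instead, a minimal sketch would be to build the image from a Dockerfile and push it to a registry your cluster can pull from; the registry and image name below are placeholders.
# Build the image from the Dockerfile in the current directory (placeholder registry and tag).
docker build -t registry.example.com/my-app:v1 .
# Push the image so the Kubernetes cluster can pull it.
docker push registry.example.com/my-app:v1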
The tunnel can be created through the dashboard using this guide. Instead of running the command to install a connector, select Docker as the environment and copy only the token rather than the whole command. Configure the tunnel to route k8.example.com to the service http://web-service:80. Then create the cloudflared-deployment.yml file with the following content.
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: cloudflared
  name: cloudflared-deployment
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      pod: cloudflared
  template:
    metadata:
      creationTimestamp: null
      labels:
        pod: cloudflared
    spec:
      securityContext:
        sysctls:
          - name: net.ipv4.ping_group_range
            value: "65532 65532"
      containers:
        - command:
            - cloudflared
            - tunnel
            - --no-autoupdate
            # In a k8s environment, the metrics server needs to listen outside the pod it runs on.
            # The address 0.0.0.0:2000 allows any pod in the namespace.
            - --metrics
            - 0.0.0.0:2000
            - run
          args:
            - --token
            - <token value>
          image: cloudflare/cloudflared:latest
          name: cloudflared
          livenessProbe:
            httpGet:
              # Cloudflared has a /ready endpoint which returns 200 if and only if
              # it has an active connection to the edge.
              path: /ready
              port: 2000
            failureThreshold: 1
            initialDelaySeconds: 10
            periodSeconds: 10
Deploy this file with the following command.
kubectl create -f cloudflared-deployment.yml
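The manifest above passes the token to cloudflared as a plain argument. As an alternative sketch, not part of the original example, you could store the token in a Kubernetes Secret and expose it through the TUNNEL_TOKEN environment variable, which cloudflared reads when no --token argument is given; the secret name and key below are placeholders.
# Store the tunnel token in a Secret instead of hard-coding it in the manifest.
kubectl create secret generic tunnel-token --from-literal=token=<token value>
In the cloudflared container spec, you would then remove the args section and add an env block such as:
          env:
            # cloudflared reads the tunnel token from the TUNNEL_TOKEN environment variable.
            - name: TUNNEL_TOKEN
              valueFrom:
                secretKeyRef:
                  name: tunnel-token
                  key: token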