Learn how to expose applications to the internet on io.net Kubernetes clusters using ingress controllers, with guidance on DNS automation and SSL certificate management.
This guide provides a comprehensive overview of how to expose applications to the internet on io.net Kubernetes clusters using an ingress controller. It details two supported deployment approaches, outlines their respective advantages and trade-offs, and explains how DNS can be automated with ExternalDNS. The guide also covers SSL certificate management using cert-manager to support secure application delivery.
Option 1: Ingress Controller as a Deployment with Service IPs
The ingress controller is deployed as a scalable Kubernetes Deployment and exposed via a LoadBalancer Service with manually assigned external IP addresses. Incoming traffic is automatically load-balanced across ingress pods. This approach is fully compatible with ExternalDNS.
```
[ Client ]
    |
    v
Connects to node public IPs (set as externalIPs on the LoadBalancer Service)
    |
    v
Traffic is balanced across ingress controller pods
    |
    v
Ingress controller routes to applications
```
The Pod Security Admission plugin is enabled by default in all io.net Kubernetes clusters to enforce Pod Security Standards and enhance baseline cluster security. You may override Pod Security Admission settings at the namespace level when necessary. This can be useful for workloads such as ingress controllers or monitoring solutions that require less restrictive security policies. However, the recommended approach is to adapt your applications to run securely by configuring an appropriate podSecurityContext.
1. Install the Ingress Controller

The ingress controller is configured to deploy only on worker nodes with the worker-node=true label.
Optional: add tolerations if your worker nodes have taints:

```
--set 'controller.tolerations[0].operator=Exists'
```

Optional: add pod anti-affinity to ensure pod replicas are scheduled on different nodes (only if you have sufficient worker nodes):

```
--set 'controller.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[0].labelSelector.matchLabels.app\.kubernetes\.io/name=ingress-nginx'
--set 'controller.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[0].topologyKey=kubernetes.io/hostname'
```
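The options above are Helm values for the ingress-nginx chart. As a sketch, a minimal install as a Deployment pinned to the labeled worker nodes might look like the following (release name, namespace, and replica count are illustrative choices, not io.net requirements):

```shell
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  --set controller.kind=Deployment \
  --set controller.replicaCount=2 \
  --set 'controller.nodeSelector.worker-node=true' \
  --set controller.service.type=LoadBalancer
```

Append the optional tolerations and anti-affinity flags shown above to the same `helm install` command as needed.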
2. Assign Public IPs to Ingress Nodes
Edit the Service configuration so that public IPs are assigned only from nodes where ingress pods are running.
```shell
kubectl patch svc ingress-nginx-controller -n ingress-nginx --type=merge -p "{ \"spec\": { \"externalIPs\": [ $(
  kubectl get pods -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx \
    -o jsonpath='{.items[*].spec.nodeName}' \
  | tr ' ' '\n' | sort -u \
  | while read node; do
      kubectl get node "$node" -o jsonpath='{.status.addresses[?(@.type=="ExternalIP")].address}'
      echo
    done \
  | tr '\n' ',' | sed 's/,$//' | sed 's/\([^,]*\)/"\1"/g'
) ] }}"
```
This command identifies the nodes currently running ingress controller pods and assigns their external IP addresses to the LoadBalancer Service. This allows ExternalDNS to correctly detect the service endpoints and manage the corresponding DNS records.
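To confirm the patch took effect, one way is to compare the Service's external IPs against the nodes where the ingress pods are actually scheduled (namespace and label as in the install above):

```shell
# External IPs now assigned to the Service
kubectl get svc ingress-nginx-controller -n ingress-nginx -o jsonpath='{.spec.externalIPs}'

# Nodes currently running ingress controller pods
kubectl get pods -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx -o wide
```

The two lists should refer to the same set of nodes.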
3. Update DNS Records
Point your domain to the Service’s EXTERNAL-IP values shown by kubectl get svc -n ingress-nginx, or follow the ExternalDNS Guide.
4. Deploy Applications with Ingress
Deploy your applications and configure ingress resources to handle routing to the appropriate services.
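As an illustration, a minimal Ingress resource for a hypothetical `my-app` Service listening on port 80 (the host name, namespace, and Service name are placeholders):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  namespace: my-app
spec:
  # Matches the IngressClass created by the ingress-nginx chart
  ingressClassName: nginx
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app
            port:
              number: 80
```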
Option 2: Ingress Controller as a DaemonSet with Host Networking

Run the ingress controller on every cluster node, where it listens directly on ports 80 and 443 of each node's public IP. Your domain can be configured to point to multiple node IPs for traffic ingress.
Workload Type: DaemonSet (one ingress controller pod per node).
Traffic Exposure: The ingress controller listens directly on ports 80/443 of each node’s public IP.
DNS Management: DNS records are configured to point directly to the public IPs of all nodes running the ingress controller (for example, using DNS round-robin).
| Pros | Cons |
| --- | --- |
| Simple and straightforward to set up. | Relies on DNS round robin when no external load balancer is used. |
| Every node can accept traffic directly. | DNS records may continue to point to unavailable nodes until they are refreshed. |
| High availability through multiple node public IPs. | Ports 80/443 are reserved on every node. |
| Can combine with an external load balancer for more advanced traffic distribution. | |
1. Install the Ingress Controller as a DaemonSet

Using hostNetwork=true reserves ports 80/443 on every node. Use nodeSelector and tolerations if you only want ingress on specific nodes:

```
--set 'controller.tolerations[0].operator=Exists'
--set 'controller.nodeSelector.node-role\.kubernetes\.io/hostname='
```
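A sketch of the corresponding ingress-nginx Helm install as a DaemonSet with host networking (release name and namespace are illustrative; the Service is disabled because traffic arrives on node ports 80/443 directly):

```shell
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  --set controller.kind=DaemonSet \
  --set controller.hostNetwork=true \
  --set controller.dnsPolicy=ClusterFirstWithHostNet \
  --set controller.service.enabled=false
```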
2. Configure DNS for Node IPs
Point your domain to the public IPs of all nodes running the ingress controller, or follow the ExternalDNS Guide.
3. Deploy Applications with Ingress
Deploy your applications and define Ingress resources to route incoming traffic to the appropriate services.
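To spot-check routing through one specific node before relying on DNS round robin, you can send a request to that node's public IP with the ingress host header (the IP and host below are placeholders):

```shell
# Replace 203.0.113.10 with a real node public IP and
# app.example.com with a host defined in your Ingress resource
curl -H 'Host: app.example.com' http://203.0.113.10/
```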
When exposing applications to the internet through a Kubernetes ingress controller, SSL/TLS certificates are required to securely terminate HTTPS traffic. cert-manager automates the issuance and renewal of certificates from Let's Encrypt, integrating directly with Kubernetes Ingress resources to provide end-to-end HTTPS without manual certificate management.

In the setup below, cert-manager is installed in the cluster and configured to use a DNS-01 challenge, which is well suited for internet-facing applications, wildcard domains, and environments where ingress traffic reaches services through public IPs.

The following examples show how to install cert-manager and configure it with Cloudflare as the DNS provider. Read more: https://cert-manager.io/docs/configuration/acme/dns01/cloudflare/
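Following the cert-manager Cloudflare DNS-01 documentation linked above, a minimal ClusterIssuer sketch looks like this (the email address, issuer name, and secret/token names are placeholders you should replace):

```yaml
# API token Secret consumed by the DNS-01 solver
apiVersion: v1
kind: Secret
metadata:
  name: cloudflare-api-token
  namespace: cert-manager
type: Opaque
stringData:
  api-token: "<your-cloudflare-api-token>"
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-dns01
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com
    privateKeySecretRef:
      name: letsencrypt-dns01-account-key
    solvers:
    - dns01:
        cloudflare:
          apiTokenSecretRef:
            name: cloudflare-api-token
            key: api-token
```

Referencing `letsencrypt-dns01` from an Ingress (via the `cert-manager.io/cluster-issuer` annotation) then lets cert-manager issue and renew certificates automatically.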
```shell
# Get all ingress-related resources
kubectl get all,ingress,certificates -n ingress-nginx
kubectl get all,ingress,certificates -n your-app-namespace

# Test connectivity
kubectl run test-pod --image=busybox --rm -it -- sh
# Inside pod: wget -O- http://your-service.namespace.svc.cluster.local
```