Prerequisites
- kubectl access to your io.net Kubernetes cluster
- Helm 3.x installed
- Domain ownership and DNS management access
- Basic understanding of Kubernetes concepts (Pods, Services, Ingress)
- For ExternalDNS: API credentials for your DNS provider
Choosing the Right Method
| Factor | Deployment + Service IPs | DaemonSet |
|---|---|---|
| Setup Complexity | Medium | Simple |
| Scalability | High (flexible scaling) | Limited (one pod per node) |
| High Availability | Requires manual IP management | Built-in with multiple nodes |
| Resource Usage | Configurable | Fixed per node |
| Best For | Large-scale applications | Simple deployments, edge cases |
Option 1: Ingress Controller as Deployment with Service IPs
The ingress controller runs as a scalable Deployment exposed via a Service with assigned public IPs. Traffic is automatically balanced across ingress pods.
| Pros | Cons |
|---|---|
| Automatic load balancing across ingress controllers. | Node failures affect assigned public IPs. |
| No host port conflicts. | Public IPs must be managed manually. |
| Ingress controllers can scale flexibly. | No automatic failover without extra tools. |
| Cleaner setup for most applications. | |
Setup Steps
1. Install NGINX Ingress Controller as Deployment:
⚠️ The Pod Security Admission plugin is automatically enabled in every io.net Kubernetes cluster to enforce Pod Security Standards and improve cluster security by default. You can override the Pod Security Admission configuration at the namespace level. This is useful for workloads such as ingress controllers or monitoring solutions that require more relaxed security settings. However, the recommended approach is to adjust your applications to run securely using a proper podSecurityContext.
⚠️ Use `nodeSelector` and `tolerations` if you only want ingress on specific nodes:

```
--set 'controller.tolerations[0].operator=Exists'
--set 'controller.nodeSelector.node-role\.kubernetes\.io/hostname='
```

Use affinity to ensure that pod replicas are scheduled on different nodes:

```
--set 'controller.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[0].labelSelector.matchLabels.app\.kubernetes\.io/name=ingress-nginx'
--set 'controller.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[0].topologyKey=kubernetes.io/hostname'
```
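A complete install command for this step might look like the following sketch; the release name, namespace, and replica count are illustrative, and the `--set` flags above can be appended as needed:

```shell
# Add the ingress-nginx chart repository and refresh the index
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update

# Deployment is the chart's default controller kind; it is set
# explicitly here for clarity, with two replicas for availability
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  --set controller.kind=Deployment \
  --set controller.replicaCount=2
```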
2. Edit the Service to assign public IPs:
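One way to do this is to patch the controller Service's `spec.externalIPs` with the public IPs of the nodes that should receive traffic; the IP addresses below are placeholders for your own node IPs:

```shell
# Assign public IPs to the ingress controller Service
kubectl patch svc ingress-nginx-controller -n ingress-nginx \
  --type merge \
  -p '{"spec":{"externalIPs":["203.0.113.10","203.0.113.11"]}}'

# Confirm the IPs now appear under EXTERNAL-IP
kubectl get svc -n ingress-nginx
```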
3. Update your DNS to point your domain to these EXTERNAL-IPs (`kubectl get svc -n ingress-nginx`), or follow the ExternalDNS guide below.
4. Deploy applications with Ingress resources for routing.
Option 2: Ingress Controller as DaemonSet (Direct on Every Node)
The ingress controller runs on every cluster node and listens directly on ports 80/443 of each node’s public IP. Your domain can point to multiple node IPs.
| Pros | Cons |
|---|---|
| Simple to set up. | Relies on DNS round robin if no external load balancer. |
| Every node can accept traffic directly. | If a node goes down, DNS may still point to it until refresh. |
| High availability with multiple node IPs. | Ports 80/443 occupied on each node. |
| Can combine with an external load balancer for smarter distribution. | Scaling fixed to one ingress pod per node. |
Setup Steps
1. Install NGINX Ingress Controller as DaemonSet:
⚠️ The Pod Security Admission plugin is automatically enabled in every io.net Kubernetes cluster to enforce Pod Security Standards and improve cluster security by default. You can override the Pod Security Admission configuration at the namespace level. This is useful for workloads such as ingress controllers or monitoring solutions that require more relaxed security settings. However, the recommended approach is to adjust your applications to run securely using a proper podSecurityContext.
⚠️ Using `hostNetwork=true` reserves ports 80/443 on every node. Use `nodeSelector` and `tolerations` if you only want ingress on specific nodes:

```
--set 'controller.tolerations[0].operator=Exists'
--set 'controller.nodeSelector.node-role\.kubernetes\.io/hostname='
```
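A complete install command for the DaemonSet variant might look like the following sketch; the release name and namespace are illustrative:

```shell
# Add the ingress-nginx chart repository and refresh the index
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update

# Run one controller pod per node, bound directly to each
# node's ports 80/443 via the host network
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  --set controller.kind=DaemonSet \
  --set controller.hostNetwork=true \
  --set controller.dnsPolicy=ClusterFirstWithHostNet
```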
2. Point your domain to the public IPs of all nodes, or follow the ExternalDNS guide below.
3. Deploy applications with Ingress resources for routing.
ExternalDNS Deployment
ExternalDNS automatically manages DNS records for all applications routed through your ingress controller. It works with both Deployment and DaemonSet ingress setups.
1. Deploy ExternalDNS
Popular DNS Providers: cloudflare, route53, google, digitalocean, vultr, etc.
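One way to deploy it is via the ExternalDNS Helm chart. The sketch below assumes Cloudflare as the provider, with the API-token Secret name and domain filter as placeholders; adapt the provider and credentials to your own DNS service:

```shell
# Add the ExternalDNS chart repository and refresh the index
helm repo add external-dns https://kubernetes-sigs.github.io/external-dns/
helm repo update

# Install ExternalDNS configured for Cloudflare; the token is read
# from a pre-created Secret named cloudflare-api-token (placeholder)
helm install external-dns external-dns/external-dns \
  --namespace external-dns --create-namespace \
  --set provider.name=cloudflare \
  --set 'env[0].name=CF_API_TOKEN' \
  --set 'env[0].valueFrom.secretKeyRef.name=cloudflare-api-token' \
  --set 'env[0].valueFrom.secretKeyRef.key=token' \
  --set 'domainFilters[0]=example.com'
```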
2. Annotate the Ingress Controller Service
- Wildcard domains simplify DNS management for multiple applications.
- No need to annotate each Ingress individually.
- DNS records are automatically updated when IPs change.
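The annotation might look like the following; the wildcard domain is a placeholder for your own:

```shell
# Tell ExternalDNS which hostname(s) to manage for the controller Service
kubectl annotate svc ingress-nginx-controller -n ingress-nginx \
  'external-dns.alpha.kubernetes.io/hostname=*.example.com'
```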
By default, ExternalDNS is deployed with the `upsert-only` policy, which allows it to create or update DNS records but never delete them. If you want ExternalDNS to also delete records, change the policy to `sync`:

```
--set policy=sync
```
SSL/TLS Certificate Management
cert-manager with Let’s Encrypt (Recommended)
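cert-manager can be installed with Helm and then configured with a ClusterIssuer. The sketch below assumes the HTTP-01 solver and the `nginx` ingress class; the email address and the `letsencrypt-prod` issuer name are placeholders:

```shell
# Install cert-manager, including its CRDs
helm repo add jetstack https://charts.jetstack.io
helm repo update

helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager --create-namespace \
  --set crds.enabled=true

# Create a ClusterIssuer for the Let's Encrypt production endpoint
kubectl apply -f - <<'EOF'
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
    - http01:
        ingress:
          ingressClassName: nginx
EOF
```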
Quick Start: Hello World with HTTPS
1. Deploy the application
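A minimal "Hello World" Deployment might look like the following sketch; the `hashicorp/http-echo` image and all names are illustrative, and the `securityContext` is included so the pod passes the restricted Pod Security Standards enforced by default:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
  labels:
    app: hello-world
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello-world
        image: hashicorp/http-echo:1.0
        args: ["-text=Hello World", "-listen=:8080"]
        ports:
        - containerPort: 8080
        securityContext:
          allowPrivilegeEscalation: false
          runAsNonRoot: true
          runAsUser: 1000
          capabilities:
            drop: ["ALL"]
          seccompProfile:
            type: RuntimeDefault
```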
2. Expose the application with a Service
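A matching Service, selecting the pods above by label and mapping port 80 to the container port:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-world
spec:
  selector:
    app: hello-world
  ports:
  - port: 80
    targetPort: 8080
```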
3. Create an Ingress resource with HTTPS
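An Ingress sketch with TLS; `app.example.com` is a placeholder for your domain, and a ClusterIssuer named `letsencrypt-prod` is assumed to exist from the cert-manager setup:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-world
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - app.example.com
    secretName: hello-world-tls
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: hello-world
            port:
              number: 80
```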
4. Test the application
- ExternalDNS automatically manages DNS if the ingress controller Service is annotated, but it may take some time for DNS records to synchronize.
- Ensure your domain points to the ingress public IP(s).
- Open https://app.example.com/ in a browser to see “Hello World”.
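From the command line, the same check might look like:

```shell
# Confirm the certificate was issued, then request the app over HTTPS
kubectl get certificate
curl https://app.example.com/
```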