This guide provides a comprehensive overview of how to expose applications to the internet on io.net Kubernetes Clusters using an ingress controller. It details two supported deployment approaches, outlines their respective advantages and trade-offs, and explains how DNS can be automated with ExternalDNS. The guide also covers SSL certificate management using cert-manager to support secure application delivery.

Prerequisites

  • kubectl access to your io.net Kubernetes Cluster.
  • Helm 3.x installed.
  • Domain ownership and DNS management access.
  • Basic understanding of Kubernetes concepts (Pods, Services, Ingress).
  • For ExternalDNS: API credentials for your DNS provider.

Selecting Your Method

This table outlines the key factors to consider when choosing between Deployment + Service IPs and DaemonSet.
| Key Factor        | Deployment + Service IPs        | DaemonSet                        |
| ----------------- | ------------------------------- | -------------------------------- |
| Setup Complexity  | Medium                          | Simple                           |
| Scalability       | High, supports flexible scaling | Limited, one pod per node        |
| High Availability | Requires manual IP management   | Built-in with multiple nodes     |
| Resource Usage    | Configurable                    | Fixed per node                   |
| Best For          | Large-scale applications        | Simple deployments, edge cases   |

Option 1: Ingress Controller as a Deployment with Service IPs

The ingress controller runs as a scalable Kubernetes Deployment and is exposed via a LoadBalancer Service with manually assigned external IP addresses. Incoming traffic is automatically load-balanced across the ingress pods. This approach is fully compatible with ExternalDNS.

Flow overview:

[ Client ]
   |
   v
Connects to node public IPs (set as externalIPs on LoadBalancer service)
   |
   v
Traffic → balanced across ingress controller pods
   |
   v
Ingress controller routes to applications

How it works:

  • Service Type: LoadBalancer (required for ExternalDNS compatibility).
  • External IPs: Manually assigned and mapped to the nodes where ingress pods are scheduled.
  • DNS Management: ExternalDNS monitors the LoadBalancer Service and automatically creates DNS records that point to the configured external IPs.
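The resulting Service configuration looks roughly like the following sketch. The IP addresses are illustrative placeholders for your nodes' actual external IPs, and the selector is abbreviated relative to what the Helm chart generates:

```yaml
# Sketch of the LoadBalancer Service with manually assigned external IPs.
# 203.0.113.10/11 are placeholder node public IPs, not real values.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: LoadBalancer          # required for ExternalDNS compatibility
  externalIPs:                # public IPs of nodes running ingress pods
    - 203.0.113.10
    - 203.0.113.11
  ports:
    - name: http
      port: 80
      targetPort: http
    - name: https
      port: 443
      targetPort: https
  selector:
    app.kubernetes.io/name: ingress-nginx
```

ExternalDNS reads the `externalIPs` from this Service and publishes matching DNS records for any hostnames defined in your Ingress resources.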
| Pros                                                 | Cons                                            |
| ---------------------------------------------------- | ----------------------------------------------- |
| Automatic load balancing across ingress controllers. | Node failures impact the assigned public IPs.   |
| No host port conflicts.                              | Public IPs must be managed manually.            |
| Ingress controllers can scale flexibly.              | No automatic failover without additional tools. |
| Cleaner setup for most applications.                 |                                                 |

Step-by-Step Setup:

Step 1: Install NGINX Ingress Controller as a Deployment

The Pod Security Admission plugin is enabled by default in all io.net Kubernetes clusters to enforce Pod Security Standards and enhance baseline cluster security. You may override Pod Security Admission settings at the namespace level when necessary; this can be useful for workloads such as ingress controllers or monitoring solutions that require less restrictive security policies. However, the recommended approach is to adapt your applications to run securely by configuring an appropriate podSecurityContext.
kubectl create namespace ingress-nginx
kubectl label namespace ingress-nginx pod-security.kubernetes.io/enforce=privileged

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  --set controller.kind=Deployment \
  --set controller.replicaCount=3 \
  --set controller.resources.requests.cpu=100m \
  --set controller.resources.requests.memory=128Mi \
  --set controller.service.enabled=true \
  --set controller.service.type=LoadBalancer \
  --set controller.publishService.enabled=false \
  --set-string controller.nodeSelector.worker-node=true
The ingress controller is configured to deploy only on worker nodes with the worker-node=true label.
Optional: add tolerations if your worker nodes have taints:

  --set 'controller.tolerations[0].operator=Exists'

Optional: add pod anti-affinity to ensure pod replicas are scheduled on different nodes (only if you have sufficient worker nodes):

  --set 'controller.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[0].labelSelector.matchLabels.app\.kubernetes\.io/name=ingress-nginx'
  --set 'controller.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[0].topologyKey=kubernetes.io/hostname'
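If you prefer a values file over long `--set` flags, the node selector and optional scheduling settings above translate to roughly the following sketch, passed to Helm with `-f values.yaml`:

```yaml
# Sketch of a values.yaml equivalent for the optional scheduling flags above.
controller:
  nodeSelector:
    worker-node: "true"
  tolerations:
    - operator: Exists          # tolerate any taint on worker nodes
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app.kubernetes.io/name: ingress-nginx
          topologyKey: kubernetes.io/hostname
```

A values file avoids the shell escaping needed for dotted label keys such as `app.kubernetes.io/name` in `--set` expressions.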
Step 2: Assign Public IPs to Ingress Nodes

Edit the Service configuration so that Public IPs are assigned only from nodes where ingress pods are running.
kubectl patch svc ingress-nginx-controller -n ingress-nginx --type=merge -p "{
  \"spec\": {
    \"externalIPs\": [
      $(kubectl get pods -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx -o jsonpath='{.items[*].spec.nodeName}' \
        | tr ' ' '\n' | sort -u | while read node; do \
          kubectl get node "$node" -o jsonpath='{.status.addresses[?(@.type=="ExternalIP")].address}'; \
          echo; \
        done | tr '\n' ',' | sed 's/,$//' | sed 's/\([^,]*\)/"\1"/g')
    ]
  }
}"
This command identifies the nodes currently running ingress controller pods and assigns their external IP addresses to the LoadBalancer Service. This allows ExternalDNS to correctly detect the service endpoints and manage the corresponding DNS records.
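The quoting step at the end of that pipeline can be seen in isolation with sample data in place of live kubectl output (the IPs below are hypothetical placeholders):

```shell
# Demo of the IP-list formatting used in the patch above, with sample
# data instead of live kubectl output (placeholder IPs).
ips='203.0.113.10
203.0.113.11'
# Join lines with commas, drop the trailing comma, then wrap each entry
# in double quotes to form a JSON array body.
json_list=$(printf '%s\n' "$ips" | tr '\n' ',' | sed 's/,$//' | sed 's/\([^,]*\)/"\1"/g')
echo "$json_list"   # → "203.0.113.10","203.0.113.11"
```

The resulting string slots directly into the `externalIPs` JSON array of the patch payload.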
Step 3: Update DNS Records

Point your domain to the Service’s EXTERNAL-IP values shown by kubectl get svc -n ingress-nginx, or follow the ExternalDNS Guide.
Step 4: Deploy Applications with Ingress

Deploy your applications and configure ingress resources to handle routing to the appropriate services.
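A minimal Ingress for a hypothetical `hello-world` Service might look like this sketch (the hostname, namespace, and service name are illustrative):

```yaml
# Hypothetical example: routes app.example.com to a hello-world Service.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-ingress
  namespace: hello
spec:
  ingressClassName: nginx       # matches the NGINX controller installed above
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: hello-world
                port:
                  number: 80
```

With ExternalDNS in place, the `host` field is also what drives automatic DNS record creation.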

SSL/TLS Certificate Management

When exposing applications to the internet through a Kubernetes ingress controller, SSL/TLS certificates are required to securely terminate HTTPS traffic. cert-manager automates the issuance and renewal of certificates from Let’s Encrypt, integrating directly with Kubernetes Ingress resources to provide end-to-end HTTPS without manual certificate management.

In the setup below, cert-manager is installed in the cluster and configured to use a DNS-01 challenge, which is well suited for internet-facing applications, wildcard domains, and environments where ingress traffic reaches services through public IPs. The following examples show how to install cert-manager and configure it with Cloudflare as the DNS provider. Read more: https://cert-manager.io/docs/configuration/acme/dns01/cloudflare/
kubectl create namespace cert-manager
kubectl label namespace cert-manager pod-security.kubernetes.io/enforce=privileged

# Install cert-manager
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm upgrade --install cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --set installCRDs=true

### Example for Cloudflare - https://cert-manager.io/docs/configuration/acme/dns01/cloudflare/

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: cloudflare-api-key-secret
  namespace: cert-manager
type: Opaque
stringData:
  api-key: ${CLOUDFLARE_API_KEY}
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    email: ${cloudflare_email}
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-dns01-private-key
    solvers:
    - dns01:
        cloudflare:
          email: ${cloudflare_email}
          apiKeySecretRef:
            name: cloudflare-api-key-secret
            key: api-key
EOF
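Once the ClusterIssuer is ready, annotating an Ingress with `cert-manager.io/cluster-issuer` is enough for cert-manager to issue a certificate and store it in the referenced Secret. The hostname and resource names below are illustrative:

```yaml
# Hypothetical Ingress with automatic TLS via the letsencrypt-prod ClusterIssuer.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-ingress
  namespace: hello
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - app.example.com
      secretName: hello-tls     # cert-manager creates and renews this Secret
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: hello-world
                port:
                  number: 80
```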


For an end-to-end HTTPS application example, refer to Quick Start: Hello World with HTTPS.

Troubleshooting

Health Checks

kubectl get pods -n ingress-nginx
kubectl logs -f deployment/ingress-nginx-controller -n ingress-nginx
kubectl get ingress --all-namespaces
kubectl describe ingress hello-ingress -n hello


Common Issues and Solutions

# Check DNS propagation
nslookup app.example.com
dig app.example.com

# Check ExternalDNS logs
kubectl logs -f deployment/external-dns -n external-dns
# Check certificate status
kubectl get certificates --all-namespaces
kubectl describe certificate hello-tls -n hello

# Check cert-manager logs
kubectl logs -f deployment/cert-manager -n cert-manager
# Check ingress configuration
kubectl describe ingress hello-ingress -n hello

# Test backend service directly
kubectl port-forward svc/hello-world 8080:80 -n hello
# Then test: curl localhost:8080

# Check ingress controller logs
kubectl logs -f deployment/ingress-nginx-controller -n ingress-nginx
# Check resource usage
kubectl top pods -n ingress-nginx
kubectl describe pod <ingress-controller-pod> -n ingress-nginx

# Adjust resource limits (use the same release name as the install above;
# --reuse-values keeps the settings applied during installation)
helm upgrade ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  --reuse-values \
  --set controller.resources.limits.cpu=2000m \
  --set controller.resources.limits.memory=1Gi
# Get all ingress-related resources
kubectl get all,ingress,certificates -n ingress-nginx
kubectl get all,ingress,certificates -n your-app-namespace

# Test connectivity
kubectl run test-pod --image=busybox --rm -it -- sh
# Inside pod: wget -O- http://your-service.namespace.svc.cluster.local