I’ve been in the process of migrating a lot of things back to kubernetes, and I’m debating whether I should have separate private and public clusters.

Some stuff I’ll keep out of kubernetes and leave in separate vms, like nextcloud/immich/etc. Basically anything I think would be more likely to have sensitive data in it.

I also have a few public-facing things like public websites, a matrix server, etc.

Right now I’m solving this by having two separate ingress controllers in one cluster - one for private stuff only available over a vpn, and one only available over public ips.
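
For anyone curious, the setup is roughly this (a sketch of the Helm values, assuming ingress-nginx installed as two separate releases; the class names and the VPN-side IP are just placeholders):

```yaml
# values-public.yaml - release exposed on the public IP
controller:
  ingressClassResource:
    name: nginx-public
    controllerValue: k8s.io/ingress-nginx-public
  service:
    type: LoadBalancer

# values-private.yaml - release only reachable over the vpn
controller:
  ingressClassResource:
    name: nginx-private
    controllerValue: k8s.io/ingress-nginx-private
  service:
    type: LoadBalancer
    loadBalancerIP: 10.8.0.10  # placeholder VPN-side address
```

Each Ingress then picks a side with spec.ingressClassName: nginx-public or nginx-private.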

The main concern I’d have is reducing the blast radius if something gets compromised. But I also don’t know if I really want to maintain multiple personal clusters. I am using Omni+Talos for kubernetes, so it’s not too difficult to maintain two clusters. It would be less efficient as far as resources go, though, since some of the nodes are baremetal servers and others are only vms. I wouldn’t be able to share a large baremetal server anymore unless I split it into vms.

What are y’all’s opinions on whether to keep everything in one cluster or not?

  • Tiuku · 3 months ago

    Right now I’m solving this by having two separate ingress controllers in one cluster - one for private stuff only available over a vpn, and one only available over public ips.

    How’s this working out? What kinda alternatives are there with a single cluster?

    • johntash@eviltoast.org OP · 3 months ago

      It’s mostly working fine for me.

      An alternative I tried before was whitelisting which IPs are allowed to access specific ingresses, while the ingress itself listened on both the public and private networks. I like having a separate ingress controller better because I know the private ingress isn’t accessible from a public ip at all. It keeps the logs separated as well.
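
      For reference, with ingress-nginx (assuming that’s the controller) the whitelist approach is basically one annotation per Ingress, something like this (the VPN subnet and hostname are made up):

      ```yaml
      apiVersion: networking.k8s.io/v1
      kind: Ingress
      metadata:
        name: private-app
        annotations:
          # only the VPN subnet gets through; everything else gets a 403
          nginx.ingress.kubernetes.io/whitelist-source-range: "10.8.0.0/24"
      spec:
        ingressClassName: nginx
        rules:
          - host: app.internal.example.com
            http:
              paths:
                - path: /
                  pathType: Prefix
                  backend:
                    service:
                      name: private-app
                      port:
                        number: 80
      ```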

      Another alternative would be an external load balancer or reverse proxy that can access your cluster. It’d act as the “public” ingress, but would need to be configured to allow specific hostnames/services through.
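
      If you go that route, assuming something like Traefik with the file provider as the external proxy (the hostnames, entrypoint name, and node IP are placeholders), the idea is that only the hostnames you explicitly list ever get routed to the cluster’s ingress:

      ```yaml
      # Traefik dynamic configuration (file provider)
      http:
        routers:
          blog:
            rule: "Host(`blog.example.com`)"
            entryPoints: ["websecure"]
            tls: {}
            service: k8s-ingress
          matrix:
            rule: "Host(`matrix.example.com`)"
            entryPoints: ["websecure"]
            tls: {}
            service: k8s-ingress
        services:
          k8s-ingress:
            loadBalancer:
              servers:
                # NodePort (or LB address) of the in-cluster public ingress
                - url: "http://192.168.10.20:30080"
      ```

      Anything not listed as a router just never reaches the cluster from the public side.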