
Sep 18, 2025

Kubernetes on Proxmox: A Practical Guide Backed by Real Experiences

Tony Joy

Kubernetes has become the de facto standard for container orchestration, powering applications at scale across enterprises, startups, and homelabs alike. Proxmox VE is a trusted open-source virtualization platform that combines KVM virtualization, LXC containers, and Ceph storage into a single management layer.

Naturally, administrators often ask: What’s the best way to run Kubernetes on Proxmox? Should it be inside virtual machines, inside LXC containers, or directly on bare metal alongside Proxmox? The answers aren’t always straightforward.

Whether setting up a homelab or production environment, the Proxmox community and HorizonIQ are here to share years of virtualization experience. In this post, we’ll explore how Kubernetes fits into a Proxmox environment, the challenges and trade-offs of different deployment approaches, and the creative solutions community members have built.

Can You Run Kubernetes Directly on Bare Metal Proxmox?

A recurring question is whether Kubernetes can be installed directly onto Proxmox nodes (side by side with the hypervisor and perhaps even with Ceph storage). Bare metal Kubernetes has real appeal: no virtualization overhead, fewer layers to manage, and direct access to hardware. With tools like KubeVirt, it’s even possible to run and manage virtual machines inside Kubernetes, making bare-metal clusters versatile.
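To illustrate the KubeVirt point, here is a minimal `VirtualMachine` manifest sketch. The VM name, memory size, and the CirrOS demo image are illustrative placeholders, not part of the original discussion:

```yaml
# Minimal KubeVirt VirtualMachine sketch (names and sizes are placeholders)
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: demo-vm
spec:
  running: true
  template:
    spec:
      domain:
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
        resources:
          requests:
            memory: 1Gi
      volumes:
        - name: rootdisk
          containerDisk:
            image: quay.io/kubevirt/cirros-container-disk-demo
```

Applied with `kubectl apply -f`, this schedules a small VM as a pod-like workload, which is what makes bare-metal Kubernetes clusters able to host both containers and VMs.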

That said, bare metal comes with added responsibility. Community members point out that etcd quorum—the consistency mechanism for Kubernetes’ control plane—is especially sensitive to disk performance and redundancy. Without proper planning, a single host failure can cripple the cluster. 
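Two concrete checks help here. First, remember that etcd needs a majority of members: an n-member cluster tolerates ⌊(n−1)/2⌋ failures, so plan for at least three control-plane nodes. Second, the etcd project's hardware guidance suggests benchmarking `fdatasync` latency on the disk that will hold etcd data (99th-percentile latency should stay under roughly 10 ms). A sketch using `fio`, with a throwaway test directory as a placeholder:

```shell
# Benchmark fdatasync latency on the disk intended for etcd data
# (parameters follow the etcd hardware guide; directory is a placeholder)
mkdir -p /var/lib/etcd-test
fio --rw=write --ioengine=sync --fdatasync=1 \
    --directory=/var/lib/etcd-test --size=22m --bs=2300 --name=etcd-disk-check
```

If the reported sync percentiles are well above 10 ms, dedicate a faster SSD to etcd before committing to bare metal.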

Another option is a hybrid approach. Keep control plane nodes virtualized on Proxmox for easier backups, while running worker nodes on bare metal for raw performance. Others suggest distributions like Talos or homelab-focused stacks like Cozystack to simplify bare-metal deployments.

Pro tip: The recommendation depends on your priorities. If you value flexibility and fast recovery, running Kubernetes inside Proxmox VMs is still the most manageable route. If maximum performance is your goal—and you’re comfortable handling quorum intricacies and dedicating SSDs for etcd—bare metal can be a powerful choice. A hybrid setup can also provide the best of both worlds.

Why Is Running Kubernetes in Proxmox VMs the Best Option?

The consensus is that running Kubernetes in virtual machines is the most stable and practical option.

VMs let you:

  • Assign CPU and RAM resources cleanly.
  • Take snapshots for rollbacks.
  • Separate Kubernetes nodes across different hosts for high availability.
  • Scale clusters up or down without disturbing your base infrastructure.
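As a sketch of how lightweight this workflow is, a Kubernetes node VM can be created from the Proxmox CLI with `qm`. The VM ID, name, bridge, and storage pool below are examples to adjust for your environment:

```shell
# Sketch: create and start a Kubernetes worker VM on a Proxmox host
# (VM ID, name, bridge, and storage pool are placeholders)
qm create 201 \
  --name k8s-worker-1 \
  --memory 8192 \
  --cores 4 \
  --net0 virtio,bridge=vmbr0 \
  --scsihw virtio-scsi-pci \
  --scsi0 local-lvm:32
qm start 201
```

From there, `qm snapshot 201 pre-upgrade` gives you the rollback point that bare-metal nodes lack.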

Virtualization provides guardrails. You can build a resilient Kubernetes setup without interfering with Proxmox’s own services. Proxmox HA and Ceph can provide the foundation, while Kubernetes focuses purely on orchestrating containers inside its VM sandbox. For admins who want Kubernetes to behave predictably in production, this is the best-practice path.

Pro tip: At HorizonIQ, we’ve seen the same principle hold true in practice. When migrating from VMware to Proxmox, we chose a VM-based model to run critical workloads at scale. A similar approach applies directly to Kubernetes. By hosting clusters inside VMs, you preserve isolation, simplify management, and unlock the scalability necessary for production environments.

Is Kubernetes in LXC Containers Practical?

Because Proxmox supports both VMs and containers, some admins have tried running Kubernetes inside LXC containers (often with lightweight distributions like k3s). At first, this seems efficient since LXCs are faster to spin up and use fewer resources.

But many users warn that the reality is messy. Deploying Kubernetes inside LXCs requires manually loading kernel modules on the host, loosening AppArmor restrictions, and adding custom mounts. Even then, limitations quickly appear.
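To give a sense of how much isolation must be sacrificed, here is the kind of fragment community guides add to a container's `/etc/pve/lxc/<CTID>.conf` to get k3s running inside it. This is a hedged sketch of a common community recipe (the cgroup key assumes a cgroup v2 host), and it deliberately weakens container isolation, so it belongs in a lab only:

```
# Community-style additions to /etc/pve/lxc/<CTID>.conf for running k3s
# inside an LXC container -- weakens isolation, lab use only
lxc.apparmor.profile: unconfined
lxc.cgroup2.devices.allow: a
lxc.cap.drop:
lxc.mount.auto: proc:rw sys:rw
```

Even with these overrides in place, anything that needs deeper kernel access (swap handling, some CNI plugins, block-level storage) can still fail in ways a VM would not.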

The biggest drawback, contributors note, is that many Kubernetes persistent storage solutions, such as Longhorn, Rook, and OpenEBS, require low-level kernel access that LXCs can't provide. Without these features, storage options are constrained.

Pro tip: Running Kubernetes in LXC is possible for experiments or small labs, but it’s fragile and not recommended for production.

Which OS Should You Use for Kubernetes Nodes on Proxmox?

Another discussion in the Proxmox forum centers around which operating system works best as the base for Kubernetes nodes in Proxmox. Users have experimented with Debian, Talos, Flatcar, Bottlerocket, and others.

Each has pros and cons. Debian is familiar and widely supported, but more general-purpose. Talos and Bottlerocket are Kubernetes-first operating systems with security hardening, but require comfort with specialized tooling.
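For readers curious what the Talos workflow looks like, the basic bootstrap is driven entirely by `talosctl`. The cluster name and IP addresses below are placeholders:

```shell
# Sketch: generate Talos machine configs for a new cluster
# (cluster name and endpoint/node IPs are placeholders)
talosctl gen config my-k8s-cluster https://10.0.0.10:6443

# Apply the generated control-plane config to a freshly booted Talos node
talosctl apply-config --insecure --nodes 10.0.0.11 --file controlplane.yaml
```

There is no SSH and no package manager on the node; everything is declarative, which is exactly the "specialized tooling" trade-off mentioned above.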

In that same thread, one user eventually concluded:

“I gave up LXC & VM approach and installed k3s alongside Proxmox. In fact I’m considering removing Proxmox and leaving k3s alone.”

This reflects a preference for simplicity in some environments. Still, for most users, Debian or Ubuntu remains the safe and widely supported choice.
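For context, the k3s route that user settled on is a one-line install via the official script (run as root on a Debian or Ubuntu host):

```shell
# Official k3s install script: single-node cluster on the host itself
curl -sfL https://get.k3s.io | sh -

# Verify the node registered
k3s kubectl get nodes
```

That simplicity is the draw, but it also means the host now carries both hypervisor and orchestrator duties, which is the trade-off discussed in the bare-metal section above.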

How Do You Handle Persistent Storage for Kubernetes on Proxmox?

Persistent volumes are a key part of any Kubernetes deployment. Without them, containerized workloads lose state when pods restart. So how does persistent storage fit into a Proxmox + Kubernetes setup?

The recommended approach is to use CSI drivers. As one community member explained in this thread, since Proxmox supports Ceph natively, many admins deploy Ceph on their Proxmox cluster and then use the ceph-csi driver inside Kubernetes. This provides a clean integration point without overlapping storage responsibilities.

Other drivers like csi-lvm can also be used, but forum members warn about the importance of careful isolation. If Proxmox and Kubernetes both try to manipulate the same storage resources without boundaries, corruption risks rise.

Pro tip: Let Proxmox handle Ceph, expose it to Kubernetes via CSI, and keep the storage layers clearly separated. This makes Kubernetes clusters first-class citizens in a Proxmox environment without overlapping storage responsibilities.
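As a sketch of that integration point, a ceph-csi RBD `StorageClass` plus a claim that consumes it might look like the following. The cluster ID, pool, and secret names are placeholders for values produced by your own Ceph and ceph-csi setup:

```yaml
# Sketch: StorageClass backed by ceph-csi RBD
# (clusterID, pool, and secret names are placeholders)
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-rbd
provisioner: rbd.csi.ceph.com
parameters:
  clusterID: <ceph-cluster-id>
  pool: kubernetes
  csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret
  csi.storage.k8s.io/provisioner-secret-namespace: ceph-csi
  csi.storage.k8s.io/node-stage-secret-name: csi-rbd-secret
  csi.storage.k8s.io/node-stage-secret-namespace: ceph-csi
reclaimPolicy: Delete
allowVolumeExpansion: true
---
# A PersistentVolumeClaim that consumes the class above
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: ceph-rbd
  resources:
    requests:
      storage: 10Gi
```

Workloads reference the claim by name; Proxmox-managed Ceph provisions and serves the underlying RBD image, keeping each layer in its own lane.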

How Should You Network Kubernetes Across Proxmox Nodes?

Networking is another common pain point. A Kubernetes cluster needs reliable communication between nodes and pods across Proxmox hosts.

One solution is to add a second physical NIC to each Proxmox host, bridge it to a dedicated network, and assign this to Kubernetes VMs. This ensures clean separation from management and storage traffic.

Alternatively, administrators can use VLANs or multiple bridges to keep Kubernetes traffic isolated. In all cases, redundancy is critical: a single DHCP server outage, for example, could take an entire cluster down if the network isn't designed with failover in mind.
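As a sketch, the dedicated-NIC approach is a short fragment in a Proxmox host's `/etc/network/interfaces`. The bridge and interface names below are examples:

```
# Sketch /etc/network/interfaces fragment: a second, VLAN-aware bridge on
# a dedicated NIC for Kubernetes traffic (interface names are examples)
auto vmbr1
iface vmbr1 inet manual
    bridge-ports enp3s0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094
```

Kubernetes VMs then attach their virtual NICs to `vmbr1` (optionally with a VLAN tag), so cluster traffic never competes with Proxmox management or Ceph replication traffic.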

As with storage, isolation and redundancy are the keys to a healthy Kubernetes-on-Proxmox network.

Can Kubernetes Manage Proxmox Directly?

One of the more creative projects to emerge from the Proxmox community is the Proxmox Operator. The operator allows administrators to manage Proxmox VMs directly from Kubernetes, using custom resource definitions (CRDs).

For example, you can define a Kubernetes resource that automatically provisions a VM on Proxmox. Combined with projects like Sidero, Proxmox becomes a flexible backend for Kubernetes infrastructure management.

Pro tip: This isn’t the mainstream use case for most admins, but it demonstrates how deeply the two platforms can be integrated if needed. 

What’s the Best Way to Run Kubernetes on Proxmox?

So, what’s the most effective way to run Kubernetes on Proxmox? 

  • Use VMs for Kubernetes nodes. It adds minimal overhead while maximizing flexibility and maintainability.
  • Avoid running Kubernetes directly on Proxmox or inside LXCs unless you’re experimenting in a lab. These setups are fragile and hard to maintain.
  • Choose stable base OS images like Debian or Ubuntu for your Kubernetes VMs, unless you have a valid reason to use a specialized distribution.
  • Integrate persistent storage with CSI drivers such as ceph-csi. Let Proxmox manage the storage backend, and let Kubernetes consume it cleanly.
  • Design your networking carefully with dedicated bridges, VLANs, or redundant NICs to prevent single points of failure.

How Can HorizonIQ Help with Your Proxmox Environment?

Proxmox and Kubernetes both shine in their own domains. Proxmox provides a rock-solid foundation for virtualization, containers, and Ceph storage, while Kubernetes orchestrates containers at scale. When combined thoughtfully, they complement each other. Proxmox ensures infrastructure resilience, and Kubernetes delivers application agility.

At HorizonIQ, we’ve proven this in practice. Our VMware to Proxmox migration case study highlights how standardizing on Proxmox VMs and Ceph-backed storage creates a durable platform for both enterprise workloads and Kubernetes clusters. Whether you need a managed private cloud (MPC) environment for predictable scale or bare metal servers for maximum performance, HorizonIQ ensures Kubernetes runs on top of battle-tested infrastructure.

By combining open-source flexibility, storage durability, and orchestration predictability, HorizonIQ makes Kubernetes on Proxmox not just possible, but a forward-thinking strategy. Clients gain the agility of Kubernetes without sacrificing the stability of enterprise-grade infrastructure—all backed by HorizonIQ expertise to guide, manage, and support every step of the journey.
