
Sep 23, 2025

S3-Compatible Storage: What It Is and How It Gives You More Control

Tony Joy

If you’ve ever built against Amazon S3, you already know the appeal of object storage: virtually limitless scale, simple key-value access, and a mature API that has become the de facto standard for storing unstructured data.

Object storage powers data lakes, content delivery, backups, and analytics. It quietly supports countless modern applications behind the scenes.

But relying on one provider can create challenges. It can cause vendor lock-in, lead to unpredictable costs, and make it hard to meet data-residency requirements.

S3-compatible storage solves these problems. It gives you the same developer experience. You keep the same authentication model, bucket and object structure, and familiar tools. You also gain the freedom to run on different infrastructure with better cost, control, and compliance options.

What is S3-compatible storage?

S3-compatible storage is object storage that implements the Amazon S3 API. It enables your applications, SDKs, and tools to interact with buckets and objects in the same way they do with Amazon S3.

In practice, that means you can:

  • Point your existing tools such as the AWS CLI, rclone, s3cmd, or your SDKs at a new endpoint (see the sketch after this list)
  • Swap in new access keys and credentials
  • Keep your pipelines, automation scripts, and backup jobs intact
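
If your provider hands you an endpoint and a key pair, the swap can be this small. Here is a minimal sketch in Python with boto3; the endpoint URL, bucket name, and credentials are placeholders for whatever your provider issues:

```python
import boto3

# Point the standard AWS SDK at an S3-compatible endpoint.
# Only the endpoint URL and keys change; every S3 call stays the same.
s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.example-provider.com",  # placeholder endpoint
    aws_access_key_id="YOUR_ACCESS_KEY",             # placeholder credentials
    aws_secret_access_key="YOUR_SECRET_KEY",
)

# The familiar bucket/object operations work unchanged.
s3.put_object(Bucket="my-bucket", Key="reports/q3.csv", Body=b"region,revenue\n")
resp = s3.list_objects_v2(Bucket="my-bucket", Prefix="reports/")
print(resp["KeyCount"])  # -> 1
```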

If your applications speak the S3 API, they can talk to any S3-compatible target with little or no code change. This makes it easier to:

  • Use hybrid strategies by keeping some data in the public cloud and placing the rest on private or hosted infrastructure
  • Gain smarter cost control by avoiding hyperscale features you don’t need while keeping the same access model
  • Create clean exit paths so you can switch providers without rewriting pipelines or retraining your teams


What is object storage?

Object storage is built for scale, durability, and cost efficiency. It stores blobs as objects with metadata inside buckets and delivers them over HTTP(S).

This is the exact model the S3 API was created for. It is ideal for data lakes, analytics, media, AI artifacts, backups, and archives.
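
As a small illustration of that model, each blob is written to a bucket under a flat key, with optional user-defined metadata stored alongside it. A sketch with boto3; the endpoint, bucket, and file names are hypothetical:

```python
import boto3

s3 = boto3.client("s3", endpoint_url="https://s3.example-provider.com")  # placeholder

# Store a blob as an object, with metadata, inside a bucket.
with open("logo.png", "rb") as f:
    s3.put_object(
        Bucket="assets",
        Key="images/logo.png",  # a flat key, not a real directory path
        Body=f,
        ContentType="image/png",
        Metadata={"team": "design", "version": "3"},  # travels with the object
    )

# Fetch it back over HTTP(S), metadata included.
obj = s3.get_object(Bucket="assets", Key="images/logo.png")
print(obj["Metadata"])  # {'team': 'design', 'version': '3'}
```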


What are the benefits of S3-compatible storage?

S3-compatible storage gives you the best of both worlds. You get the familiar S3 experience and gain new freedom to choose where your data lives. Some of the key benefits include:

Portability without rewrites

You can swap endpoints and credentials while leaving your existing pipelines, backup jobs, and ingest tasks untouched.

Ecosystem continuity

Your standard tools and SDKs work as they are.

Cost and performance control

You can place data on infrastructure with the price and performance profile that fits your needs.

Operational simplicity at scale

HTTP(S) access and flat namespaces make it easy to manage billions of objects programmatically.

Future-proof design

Because the API is the contract, you are not tied to a single proprietary platform.


What are the best use cases for S3-compatible storage?

S3-compatible storage is ideal for large, unstructured data accessed by many clients in parallel. Common use cases include:

Data lakes and analytics

Land raw and curated datasets by prefix and let analytics engines read objects directly over HTTP(S).
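
For instance, a batch job or query engine can scan a single partition of a hypothetical lake by listing a prefix; the paginator below is standard boto3 and copes with prefixes holding millions of objects:

```python
import boto3

s3 = boto3.client("s3", endpoint_url="https://s3.example-provider.com")  # placeholder

# Read one "partition" of the lake by prefix (bucket and layout are assumptions).
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket="lake", Prefix="curated/events/dt=2025-09-23/"):
    for obj in page.get("Contents", []):
        print(obj["Key"], obj["Size"])
```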

AI and machine learning pipelines

Store training sets, checkpoints, and model artifacts. Access them in parallel using multipart uploads and key sharding.
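
A sketch of both techniques with boto3: TransferConfig forces multipart uploads with parallel parts, while the shard prefix is a naming convention (not an API feature) that spreads artifacts across the keyspace. Bucket and file names are assumptions:

```python
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3", endpoint_url="https://s3.example-provider.com")  # placeholder

# Multipart: split anything over 64 MB into parts uploaded by 8 threads.
cfg = TransferConfig(multipart_threshold=64 * 1024 * 1024, max_concurrency=8)

# Key sharding: a deterministic shard prefix fans parallel readers and
# writers out across the keyspace instead of hammering a single prefix.
run_id = "run-42"  # hypothetical training run
shard = sum(run_id.encode()) % 16
s3.upload_file(
    "model.ckpt", "ml-artifacts", f"shard-{shard:02d}/{run_id}/model.ckpt", Config=cfg
)
```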

Content and media

Serve static assets, downloads, images, and video. Pair buckets with a CDN as needed.

Backups and long-term retention

Use versioning and lifecycle policies to meet retention requirements.
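
For example, versioning plus one lifecycle rule can enforce a retention window (a sketch; the bucket name and the 90-day window are assumptions):

```python
import boto3

s3 = boto3.client("s3", endpoint_url="https://s3.example-provider.com")  # placeholder

# Keep every version of each backup object...
s3.put_bucket_versioning(
    Bucket="backups", VersioningConfiguration={"Status": "Enabled"}
)

# ...but expire superseded versions 90 days after they go non-current.
s3.put_bucket_lifecycle_configuration(
    Bucket="backups",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "expire-old-versions",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},
            "NoncurrentVersionExpiration": {"NoncurrentDays": 90},
        }]
    },
)
```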

Observability and build artifacts

Persist logs, build outputs, and release bundles without managing filesystems.


What’s the difference between multi-tenant and dedicated storage for S3 compatibility?

When you choose S3-compatible storage, you also need to choose how it is delivered. The two main options are multi-tenant and dedicated. Both can support the S3 API, but they work very differently.

Multi-tenant storage: Shares infrastructure between multiple customers. Your data stays isolated, but capacity and hardware are pooled. This makes it cost-efficient and quick to deploy. It scales easily and works well for workloads like data lakes, AI datasets, media libraries, and backups.

Dedicated storage: Gives you your own isolated infrastructure. You do not share resources with other customers. This costs more but gives you exclusive performance, security, and enterprise features like advanced data reduction, array-level encryption, and custom protection policies. It is best for compliance-heavy workloads or strict performance requirements.

Here’s a quick comparison:

| Feature | Multi-Tenant | Dedicated |
| --- | --- | --- |
| Resource model | Shared across multiple customers | Fully isolated per customer |
| Cost | Lower | Higher |
| Performance consistency | Good | Very high |
| S3 compatibility | Yes | Yes (when enabled on the array) |
| Best for | Scale, cost efficiency, flexible workloads | Compliance, predictable performance, enterprise data services |

A simple way to decide:
If you want fast deployment and lower cost, multi-tenant storage is usually the right fit. If you need strict control, exclusive performance, or advanced enterprise features, dedicated storage is the safer choice.

What is HorizonIQ’s dedicated storage (for S3-compatible needs)?

Some workloads need enterprise-grade data services. HorizonIQ offers dedicated storage arrays from vendors like Pure and Nimble.

When these arrays support S3, HorizonIQ can present S3-compatible object storage on top. Your applications still speak S3 while gaining features such as:

  • Advanced data reduction
  • Array-native encryption
  • Snapshot and replication policies
  • Integrated protection and compliance controls

Choose dedicated arrays if you need array-native encryption, aggressive data reduction, or tightly integrated protection policies. This path gives you S3 compatibility with full enterprise features.

What is HorizonIQ’s multi-tenant storage (for S3-compatible buckets)?

If cost, performance, and scale are your top priorities, HorizonIQ offers a multi-tenant platform built on Ceph.

Ceph is an open-source storage system that supports massive scalability and resiliency. HorizonIQ uses it to deliver S3-compatible buckets for object storage with:

  • Familiar S3 operations, including multipart uploads, versioning, lifecycle rules, and presigned URLs (see the sketch after this list)
  • Built-in resiliency using 4x data replication across distributed nodes
  • High availability that tolerates at least two full-node failures without disruption
  • A cost-efficient model that balances performance and scale for modern workloads
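
As one example of those operations, a presigned URL grants time-limited HTTP(S) access to a private object without sharing credentials. The sketch below works against any S3-compatible endpoint; the bucket and key are hypothetical:

```python
import boto3

s3 = boto3.client("s3", endpoint_url="https://s3.example-provider.com")  # placeholder

# Generate a link to a private object that expires after one hour.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "media", "Key": "video/intro.mp4"},
    ExpiresIn=3600,
)
print(url)  # hand this to a client; no credentials needed to fetch
```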

Ceph does not include some of the advanced security and data-reduction features of high-end arrays. If you need those, HorizonIQ will guide you to the dedicated option. If not, the multi-tenant platform offers the best balance of cost, performance, and scale for S3-style workloads.


How do HorizonIQ’s S3-compatible options compare?

| Dimension | Multi-Tenant (Ceph-based) | Dedicated Arrays |
| --- | --- | --- |
| Primary role | Cost-efficient S3-compatible buckets | S3-compatible object storage with enterprise features |
| API and tooling | S3 operations for buckets/objects | S3 operations with array-native services |
| Security and data services | Core capabilities | Advanced reduction, encryption, replication, policy suites |
| Best fit | Data lakes, AI, media, backups, archives | Compliance-heavy or feature-rich workloads |
| Operations model | Multi-tenant | Dedicated arrays managed for you |

How do you choose what’s right for your business?

Use these three simple questions:

Do your apps already speak S3, and is cost-performance your priority?
Choose HorizonIQ’s multi-tenant (Ceph-based) S3-compatible object storage. It delivers scalable buckets, resilient design with 4x replication, and the familiar S3 experience your teams expect.

Do you need enterprise data services as a must-have?
Choose HorizonIQ dedicated arrays with S3 support. You get S3 compatibility plus advanced features like data reduction, encryption, snapshots, and protection controls.

Do you want one platform for both block and object?
Use HorizonIQ’s Ceph-based platform. It lets you run block volumes for databases and VMs alongside S3-compatible buckets for analytics, AI, media, logs, backups, and archives.

| If you need… | Choose… | Why it fits |
| --- | --- | --- |
| Cost-efficient S3 buckets with strong resiliency | Multi-tenant (Ceph-based) | S3-style model at lower cost with 4x replication |
| S3 compatibility plus enterprise data services | Dedicated arrays | Adds encryption, reduction, replication, and policy controls |
| One platform for object and block | Ceph-based platform | Operates both buckets and volumes together |

S3-compatible storage lets you keep the familiar S3 interface while choosing infrastructure that fits your needs.

HorizonIQ offers two clear S3-compatible paths. Use our multi-tenant Ceph-based platform for cost-efficient scale. Use dedicated arrays when you need deep enterprise data services.

Either way, your applications keep working the way they do today—just pointed at a smarter target.

Sep 18, 2025

Kubernetes on Proxmox: A Practical Guide Backed by Real Experiences

Tony Joy

Kubernetes has become the de facto standard for container orchestration, powering applications at scale across enterprises, startups, and homelabs alike. Proxmox VE is a trusted open-source virtualization platform that combines KVM virtualization, LXC containers, and Ceph storage into a single management layer.

Naturally, administrators often ask: What’s the best way to run Kubernetes on Proxmox? Should it be inside virtual machines, inside LXC containers, or directly on bare metal alongside Proxmox? The answers aren’t always straightforward.

Whether you’re setting up a homelab or a production environment, the Proxmox community and HorizonIQ have years of virtualization experience to share. In this post, we’ll explore how Kubernetes fits into a Proxmox environment, the challenges and trade-offs of different deployment approaches, and the creative solutions community members have built.

Can You Run Kubernetes Directly on Bare Metal Proxmox?

A recurring question is whether Kubernetes can be installed directly onto Proxmox nodes (side by side with the hypervisor and perhaps even with Ceph storage). Bare metal Kubernetes has real appeal: no virtualization overhead, fewer layers to manage, and direct access to hardware. With tools like KubeVirt, it’s even possible to run and manage virtual machines inside Kubernetes, making bare-metal clusters versatile.

That said, bare metal comes with added responsibility. Community members point out that etcd quorum—the consistency mechanism for Kubernetes’ control plane—is especially sensitive to disk performance and redundancy. Without proper planning, a single host failure can cripple the cluster. 

Another option is a hybrid approach. Keep control plane nodes virtualized on Proxmox for easier backups, while running worker nodes on bare metal for raw performance. Others suggest distributions like Talos or homelab-focused stacks like Cozystack to simplify bare-metal deployments.

Pro tip: The recommendation depends on your priorities. If you value flexibility and fast recovery, running Kubernetes inside Proxmox VMs is still the most manageable route. If maximum performance is your goal—and you’re comfortable handling quorum intricacies and dedicating SSDs for etcd—bare metal can be a powerful choice. A hybrid setup can also provide the best of both worlds.

Why Is Running Kubernetes in Proxmox VMs the Best Option?

The consensus is that running Kubernetes in virtual machines is the most stable and practical option.

VMs let you:

  • Assign CPU and RAM resources cleanly.
  • Take snapshots for rollbacks.
  • Separate Kubernetes nodes across different hosts for high availability.
  • Scale clusters up or down without disturbing your base infrastructure.

Virtualization provides guardrails. You can build a resilient Kubernetes setup without interfering with Proxmox’s own services. Proxmox HA and Ceph can provide the foundation, while Kubernetes focuses purely on orchestrating containers inside its VM sandbox. For admins who want Kubernetes to behave predictably in production, this is the best-practice path.

Pro tip: At HorizonIQ, we’ve seen the same principle hold true in practice. When migrating from VMware to Proxmox, we chose a VM-based model to run critical workloads at scale. A similar approach applies directly to Kubernetes. By hosting clusters inside VMs, you preserve isolation, simplify management, and unlock the scalability necessary for production environments.


Is Kubernetes in LXC Containers Practical?

Because Proxmox supports both VMs and containers, some admins have tried running Kubernetes inside LXC containers (often with lightweight distributions like k3s). At first, this seems efficient since LXCs are faster to spin up and use fewer resources.

But many users warn that the reality is messy. Deploying Kubernetes inside LXCs requires manually loading kernel modules on the host, loosening AppArmor restrictions, and adding custom mounts. Even then, limitations quickly appear.

The biggest drawback, according to one contributor, is that many Kubernetes persistent storage solutions, such as Longhorn, Rook, and OpenEBS, require low-level kernel access that LXCs can’t provide. Without these features, storage options are constrained.

Pro tip: Running Kubernetes in LXC is possible for experiments or small labs, but it’s fragile and not recommended for production.

Which OS Should You Use for Kubernetes Nodes on Proxmox?

Another discussion in the Proxmox forum centers on which operating system works best as the base for Kubernetes nodes in Proxmox. Users have experimented with Debian, Talos, Flatcar, Bottlerocket, and others.

Each has pros and cons. Debian is familiar and widely supported, but more general-purpose. Talos and Bottlerocket are Kubernetes-first operating systems with security hardening, but require comfort with specialized tooling.

In that same thread, one user eventually concluded:

“I gave up LXC & VM approach and installed k3s alongside Proxmox. In fact I’m considering removing Proxmox and leaving k3s alone.”

This reflects a preference for simplicity in some environments. Still, for most users, Debian or Ubuntu remains the safe and widely supported choice.

How Do You Handle Persistent Storage for Kubernetes on Proxmox?

Persistent volumes are a key part of any Kubernetes deployment. Without them, containerized workloads lose state when pods restart. So how does persistent storage fit into a Proxmox + Kubernetes setup?

The recommended approach is to use CSI drivers. Since Proxmox supports Ceph natively, many admins deploy Ceph on their Proxmox cluster and then use the ceph-csi driver inside Kubernetes, as one community member explained in the forum. This provides a clean integration point without overlapping storage responsibilities.

Other drivers like csi-lvm can also be used, but forum members warn about the importance of careful isolation. If Proxmox and Kubernetes both try to manipulate the same storage resources without boundaries, corruption risks rise.

Pro tip: Let Proxmox handle Ceph, expose it to Kubernetes via CSI, and keep the storage layers clearly separated. This makes Kubernetes clusters first-class citizens in a Proxmox environment; a rough sketch of the consumption side follows.
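
Here is that consumption side sketched with the official kubernetes Python client, assuming a ceph-csi StorageClass has already been installed under the hypothetical name ceph-rbd:

```python
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() when running in a pod
core = client.CoreV1Api()

# Claim a 10 GiB volume from the Ceph-backed StorageClass; ceph-csi
# provisions the image on the cluster that Proxmox manages.
pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "postgres-data"},
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "storageClassName": "ceph-rbd",  # assumption: your StorageClass name
        "resources": {"requests": {"storage": "10Gi"}},
    },
}
core.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)
```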

How Should You Network Kubernetes Across Proxmox Nodes?

Networking is another common pain point. A Kubernetes cluster needs reliable communication between nodes and pods across Proxmox hosts.

One solution is to add a second physical NIC to each Proxmox host, bridge it to a dedicated network, and assign this to Kubernetes VMs. This ensures clean separation from management and storage traffic.

Alternatively, administrators can use VLANs or multiple bridges to keep Kubernetes traffic isolated. In all cases, redundancy is critical. A single DHCP server outage could bring an entire cluster down if not designed properly.

As with storage, isolation and redundancy are the keys to a healthy Kubernetes-on-Proxmox network.

Can Kubernetes Manage Proxmox Directly?

One of the more creative projects to emerge from the Proxmox community is the Proxmox Operator. The operator allows administrators to manage Proxmox VMs directly from Kubernetes, using custom resource definitions (CRDs).

For example, you can define a Kubernetes resource that automatically provisions a VM on Proxmox. Combined with projects like Sidero, Proxmox becomes a flexible backend for Kubernetes infrastructure management.

Pro tip: This isn’t the mainstream use case for most admins, but it demonstrates how deeply the two platforms can be integrated if needed. 

What’s the Best Way to Run Kubernetes on Proxmox?

So, what’s the most effective way to run Kubernetes on Proxmox? 

  • Use VMs for Kubernetes nodes. It adds minimal overhead while maximizing flexibility and maintainability.
  • Avoid running Kubernetes directly on Proxmox or inside LXCs unless you’re experimenting in a lab. These setups are fragile and hard to maintain.
  • Choose stable base OS images like Debian or Ubuntu for your Kubernetes VMs, unless you have a valid reason to use a specialized distribution.
  • Integrate persistent storage with CSI drivers such as ceph-csi. Let Proxmox manage the storage backend, and let Kubernetes consume it cleanly.
  • Design your networking carefully with dedicated bridges, VLANs, or redundant NICs to prevent single points of failure.

How Can HorizonIQ Help with Your Proxmox Environment?

Proxmox and Kubernetes both shine in their own domains. Proxmox provides a rock-solid foundation for virtualization, containers, and Ceph storage, while Kubernetes orchestrates containers at scale. When combined thoughtfully, they complement each other. Proxmox ensures infrastructure resilience, and Kubernetes delivers application agility.

At HorizonIQ, we’ve proven this in practice. Our VMware to Proxmox migration case study highlights how standardizing on Proxmox VMs and Ceph-backed storage creates a durable platform for both enterprise workloads and Kubernetes clusters. Whether you need a managed private cloud (MPC) environment for predictable scale or bare metal servers for maximum performance, HorizonIQ ensures Kubernetes runs on top of battle-tested infrastructure.

By combining open-source flexibility, storage durability, and orchestration predictability, HorizonIQ makes Kubernetes on Proxmox not just possible, but a forward-thinking strategy. Clients gain the agility of Kubernetes without sacrificing the stability of enterprise-grade infrastructure—all backed by HorizonIQ expertise to guide, manage, and support every step of the journey.
