Ceph Storage Cluster in Proxmox VE

The Ceph Storage Cluster is built right into Proxmox, giving you a free, software-defined storage solution. In short, Ceph is a distributed storage system that functions as both an object store and a file system, built for performance, reliability, and scalability. When you combine Proxmox VE with Ceph in a hyperconverged cluster, you create an infrastructure that’s easier to scale, highly reliable, and optimized for performance.

This guide walks you through the Ceph storage cluster configuration in Proxmox VE to build a robust virtualization environment with efficient storage management. If you’re new to clustering, start with our step-by-step guide on setting up a Proxmox Cluster.

Why Combine Proxmox VE with Ceph?

Proxmox Virtual Environment (VE) is an open-source platform for enterprise virtualization, combining KVM hypervisor and LXC containers. Integrating Ceph, a distributed storage system, provides:

  • Easy setup and management through CLI and GUI
  • Thin provisioning
  • Snapshot support
  • Self-healing
  • Easy storage expansion by adding new nodes or disks
  • Block, file system, and object storage in one platform
  • Pools with different performance and redundancy characteristics
  • Data replication across nodes for redundancy
  • Runs on commodity hardware
  • No hardware RAID controllers required
  • Open source

For small to medium-sized deployments, it is possible to install a Ceph server for using RADOS Block Devices (RBD) or CephFS directly on your Proxmox VE cluster nodes.
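
For reference, once an RBD pool is defined (as in Step 5 below), Proxmox records it in /etc/pve/storage.cfg. A typical entry looks roughly like the following sketch; the storage and pool names are illustrative:

    rbd: vm_storage
        content images,rootdir
        krbd 0
        pool vm_storage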

How to Configure a Ceph Storage Cluster in Proxmox VE?

Prerequisites

Before starting, ensure you have:

  1. Hardware:
    • CPU: Adequate cores; OSD services benefit from higher frequencies and multiple cores.
    • Memory: Sufficient RAM; consider both VM/container needs and Ceph’s requirements.
    • Storage: Dedicated disks for Ceph OSDs; uniform disk sizes are recommended.
    • Network: Reliable networking; separate networks for public and cluster traffic enhance performance.
  2. Software:
    • Proxmox VE installed on all nodes.

Step 1: Set Up the Proxmox Cluster

  1. Create Cluster on First Node:
    • Navigate to Datacenter > Cluster in the Proxmox web interface.
    • Click Create Cluster, provide a Cluster Name, and specify the Cluster Network.
    • Click Create.
  2. Join Additional Nodes:
    • On each additional node, navigate to Datacenter > Cluster.
    • Click Join Cluster, enter the Cluster Join Information from the first node, and click Join.
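
The same steps can also be performed from the command line with pvecm; a minimal sketch follows (the cluster name and IP address are placeholders):

    # On the first node: create the cluster
    pvecm create my-cluster

    # On each additional node: join using the first node's IP address
    pvecm add 192.168.1.10

    # Verify membership and quorum from any node
    pvecm status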

Step 2: Install Ceph on All Nodes

  1. Access the First Node:
    • Select the first node in the Proxmox web interface and navigate to Ceph.
    • Click Install Ceph, select the desired Ceph version, and start the installation.
  2. Configure Networks:
    • Define the Public Network and Cluster Network under Configuration.
    • Confirm the configuration when prompted and ensure the correct IP subnets are assigned.
  3. Repeat on Remaining Nodes:
    • Perform the Ceph installation on every remaining cluster node.
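
Alternatively, installation and network setup can be scripted with pveceph on each node; a minimal sketch, assuming the subnets are placeholders and a recent Proxmox VE release that supports the --repository flag:

    # Install the Ceph packages on this node
    pveceph install --repository no-subscription

    # Initialize Ceph once (first node only), defining the public and cluster networks
    pveceph init --network 10.10.10.0/24 --cluster-network 10.10.20.0/24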

Step 3: Set Up Ceph Monitors and Managers

  1. Add Monitors
    • Navigate to Ceph > Monitor and click Create Monitor.
    • Add a monitor to each node for high availability.
  2. Add Managers (MGR)
    • Ensure at least one Manager daemon is created (Proxmox usually deploys one automatically when the first monitor is added).
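
The equivalent CLI commands, run per node, are sketched below:

    # Create a monitor on this node (repeat on each node; three monitors give quorum)
    pveceph mon create

    # Create a manager daemon if one was not deployed automatically
    pveceph mgr create

    # Verify that monitors and managers are up
    ceph -s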

Step 4: Configure Ceph OSDs

  1. Prepare Disks
    • Navigate to Disks to verify disk availability.
  2. Create OSDs
    • On each node, go to Ceph > OSD, click Create OSD, select the disk, and click Create.
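
From the shell, the same can be done per node; a minimal sketch in which the device names are placeholders you should verify first:

    # Identify the unused disk intended for Ceph
    lsblk

    # Create an OSD on the dedicated disk
    pveceph osd create /dev/sdb

    # Optionally place the OSD's DB/WAL on a faster device (assumes an available NVMe drive)
    pveceph osd create /dev/sdc --db_dev /dev/nvme0n1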

Step 5: Create Ceph Pools

  1. Navigate to Pools
    • Go to Ceph > Pools and click Create Pool.
  2. Configure the Pool
    • Name the pool (e.g., vm_storage).
    • Set the Size (number of replicas, typically 3).
    • Click Create.
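
On the CLI, a single pveceph command creates the pool and, with --add_storages, registers it as Proxmox storage in one step; the pool name and values below mirror the example above:

    # Create a replicated pool with 3 replicas (min_size 2) and add it as PVE storage
    pveceph pool create vm_storage --size 3 --min_size 2 --add_storages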

Testing Live Migrations in Your Proxmox Ceph Environment

Once your Proxmox Ceph cluster is configured with three monitor nodes and integrated storage, it provides a shared, distributed storage system available across all hosts. This setup makes live VM migration seamless: only the VM's memory state is transferred, not the entire disk image, keeping downtime to a minimum.

The cluster also delivers high availability and fault tolerance, ensuring workloads continue to run even if a node goes down. By pooling storage resources, it maximizes efficiency while simplifying management through the Proxmox interface.

The result is a resilient, scalable virtualization environment—perfect for testing, small-scale production, or even nested lab deployments.
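
To try this yourself, a live migration can also be triggered from the CLI; a minimal sketch in which the VM ID and target node name are placeholders:

    # Live-migrate running VM 100 to node pve2 with no downtime
    qm migrate 100 pve2 --online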

Common Proxmox Ceph Cluster Troubleshooting Issues & Solutions

Issue | Possible Cause | Solution
Slow read/write speeds | Network bottlenecks or high CPU usage | Use a dedicated Ceph network and enable jumbo frames
OSD crashes | Disk failure or corruption | Mark the OSD out with ceph osd out <ID>, replace the faulty disk, and let the cluster rebalance
Monitors going offline | Network issues or inadequate resources | Restart the monitor service: systemctl restart ceph-mon@<hostname>
Cluster in HEALTH_WARN state | Imbalanced data distribution or high disk usage | Enable the balancer module (ceph balancer on) or run ceph osd reweight-by-utilization
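
For diagnosing these issues, a few standard Ceph commands are useful; run them from any cluster node:

    # Overall health and status summary
    ceph -s
    ceph health detail

    # Per-OSD utilization, useful for spotting imbalance
    ceph osd df tree

    # Enable the built-in balancer to even out data placement over time
    ceph balancer on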

FAQs

Q1. What is Ceph in Proxmox?

Ceph is a built-in, open-source storage solution in Proxmox that provides distributed, software-defined storage. It combines object, block, and file storage, making it highly scalable and fault-tolerant.

Q2. Do I need special hardware for a Proxmox Ceph cluster?

While Ceph can run on standard hardware, it performs best with multiple nodes, fast networking (10GbE+ recommended), and SSDs or NVMe drives for OSDs or their DB/WAL (metadata) devices.

Q3. How many nodes do I need to build a Ceph cluster in Proxmox?

A minimum of three monitor nodes is recommended to maintain quorum and ensure cluster stability. For storage, you can start small but ideally have three or more OSD (Object Storage Daemon) nodes for redundancy.

Q4. Can I migrate VMs between nodes without downtime?

Yes. With Ceph-backed storage, Proxmox supports live migration. Only the VM’s memory state is transferred, not the full disk image, so downtime is minimal.

Q5. What happens if one node fails in the cluster?

The cluster automatically handles failures. VMs can continue running on other nodes, and Ceph ensures data remains available through replication.

Q6. Is a Proxmox Ceph cluster suitable for production?

Yes, it’s production-ready when configured properly. However, for small-scale testing or labs, you can also deploy it in nested environments with fewer resources.

Q7. How is management handled?

Proxmox provides an easy-to-use web interface where you can monitor Ceph status, manage storage pools, and handle VM migrations without complex manual configurations.

Q8. Is Ceph free to use with Proxmox?

Yes. Ceph is fully open-source and comes integrated with Proxmox at no extra cost.
