
ZFS Backups in Proxmox
Recently I added a ZFS storage pool to my Proxmox node and wanted to start backing up my VMs there. However, when I went to the backup tab, I did not see an option to back up to my ZFS pool. Luckily, thanks to Aaron Weiss and his article “How To Create ZFS Backups in Proxmox”, I was able to find a solution to my problem. Of course, this is specific to my use case, so feel free to check out his guide for more details.
In case you don’t already have Proxmox set up, check out my post on installing Proxmox VE.
Here’s the step-by-step process I used to enable ZFS Backups in Proxmox:
Create the ZFS Pool
I started with two empty 3 TB hard drives, so my process began with connecting the drives to my server. Next, we need to create the ZFS pool. Although this can be done via the command line, I chose the GUI method. Nevertheless, I will outline both methods below.
GUI Method:
- Connect the Drives: Physically connect your hard drives to your server.
- Create the ZFS Pool:
  - Log in to the Proxmox web interface.
  - Navigate to Datacenter -> Node -> Disks -> ZFS.
  - Click on Create: ZFS.
  - Name your pool, select the disks, and choose the RAID level (e.g., RAID1, RAIDZ, etc.).
  - Click Create.
- Here is an example of what this will look like:

Command Line Method:
- Connect the Drives: Physically connect your hard drives to your server.
- Identify the Disks:
  - Run the following command to list all the disks:
lsblk
The output should look similar to the following:
lsblk
NAME                        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda                           8:0    0 931.5G  0 disk
├─sda1                        8:1    0  1007K  0 part
├─sda2                        8:2    0     1G  0 part /boot/efi
└─sda3                        8:3    0 930.5G  0 part
  ├─pve-swap                252:0    0     8G  0 lvm  [SWAP]
  ├─pve-root                252:1    0    96G  0 lvm  /
  ├─pve-data_tmeta          252:2    0   8.1G  0 lvm
  │ └─pve-data-tpool        252:4    0 794.3G  0 lvm
  │   ├─pve-data            252:5    0 794.3G  1 lvm
  │   ├─pve-vm--100--disk--1 252:6   0    50G  0 lvm
  │   ├─pve-vm--102--disk--1 252:7   0   250G  0 lvm
  │   ├─pve-vm--108--disk--0 252:8   0    75G  0 lvm
  │   ├─pve-vm--106--disk--0 252:9   0     8G  0 lvm
  │   └─pve-vm--101--disk--0 252:10  0    32G  0 lvm
  └─pve-data_tdata          252:3    0 794.3G  0 lvm
    └─pve-data-tpool        252:4    0 794.3G  0 lvm
      ├─pve-data            252:5    0 794.3G  1 lvm
      ├─pve-vm--100--disk--1 252:6   0    50G  0 lvm
      ├─pve-vm--102--disk--1 252:7   0   250G  0 lvm
      ├─pve-vm--108--disk--0 252:8   0    75G  0 lvm
      ├─pve-vm--106--disk--0 252:9   0     8G  0 lvm
      └─pve-vm--101--disk--0 252:10  0    32G  0 lvm
sdb                           8:16   0   2.7T  0 disk
├─sdb1                        8:17   0   2.7T  0 part
└─sdb9                        8:25   0     8M  0 part
sdc                           8:32   0   2.7T  0 disk
├─sdc1                        8:33   0   2.7T  0 part
└─sdc9                        8:41   0     8M  0 part
Here, sdb and sdc are the disks I’m going to use to create the ZFS pool.
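One optional extra: device letters like sdb and sdc can change between reboots, so if you want the pool to reference something more stable, you can look up the persistent IDs for those disks. This is just a sketch; the grep pattern only filters my two disks out of the listing:
# Show persistent device IDs and which sdX device each one points to
ls -l /dev/disk/by-id/ | grep -E 'sdb|sdc'
The resulting /dev/disk/by-id/... paths can be used in place of /dev/sdb and /dev/sdc in the zpool create command below.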
- Create the ZFS Pool:
  - Run the following command, replacing Toshiba with your desired pool name:
zpool create Toshiba mirror /dev/sdb /dev/sdc
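The mirror keyword here is the command-line equivalent of the RAID1-style option in the GUI. If you had more disks and wanted a different layout, only the vdev type changes; a purely illustrative example with three hypothetical disks would be:
# RAIDZ (single parity) across three disks instead of a two-disk mirror
zpool create Toshiba raidz /dev/sdb /dev/sdc /dev/sdd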
- Verify the Pool:
  - We can verify that everything was successful by running the following command:
zpool list
The output should look similar to the following:
zpool list
NAME       SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
Toshiba   2.72T   166G  2.56T        -         -     0%     5%  1.00x    ONLINE  -
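zpool list only shows capacity and overall health, so to double-check that both disks made it into the mirror and have no errors, you can also run:
# Show the pool layout, member disks, and read/write/checksum error counters
zpool status Toshiba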
Setting Up and Mounting a ZFS Dataset
With the pool set up, it’s time to create a new dataset and mount it to a directory. As Aaron suggests in his post, it’s a good idea to have a separate dataset for each VM.
Here are the steps:
- Create the necessary directories:
  - After creating the main directory, create a subdirectory for each VM you wish to back up:
# Create the main backup directory
mkdir /mnt/toshiba_backups
# Create a subdirectory for each VM you wish to back up
mkdir /mnt/toshiba_backups/docker
mkdir /mnt/toshiba_backups/dc01
- Create Datasets:
  - Next, we need to create the datasets under the pool using zfs:
# Create the main ZFS dataset for backups
zfs create Toshiba/backups
# Create a ZFS dataset for each VM you wish to backup
zfs create -o mountpoint=/mnt/toshiba_backups/docker Toshiba/backups/docker
zfs create -o mountpoint=/mnt/toshiba_backups/dc01 Toshiba/backups/dc01
These commands set up a dataset named backups under Toshiba, and two child datasets for docker and dc01, each mounted at its respective directory.
- Check Dataset Creation:
  - Use zfs list to verify the datasets were created properly. It should look something like the following:
zfs list
NAME                     USED  AVAIL  REFER  MOUNTPOINT
Toshiba                  166G  2.47T   144K  /Toshiba
Toshiba/backups          104G  2.47T   104G  /Toshiba/backups
Toshiba/backups/docker    96K  2.47T    96K  /mnt/toshiba_backups/docker
Toshiba/backups/dc01      96K  2.47T    96K  /mnt/toshiba_backups/dc01
We can observe that the datasets “Toshiba/backups/docker” and “Toshiba/backups/dc01” have been successfully created and mounted to their designated directories.
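If a dataset ever ends up with the wrong mountpoint (easy to do if you forget the -o flag), you shouldn’t need to recreate it; changing the property afterwards should be enough. A quick sketch using the docker dataset from above:
# Point the dataset at the intended directory, then confirm it took effect
zfs set mountpoint=/mnt/toshiba_backups/docker Toshiba/backups/docker
zfs get mountpoint,mounted Toshiba/backups/docker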
Set Up Directories in Proxmox
With the datasets set up, we can now link them as directories in Proxmox. Proxmox’s built-in ZFS storage type only stores disk images and container volumes, not backups, which is why the backup option never showed up for the pool itself. Adding the mounted datasets as directory storage establishes that connection and lets you use them as backup targets for your virtual machines.
To proceed:
- Go to Datacenter > Storage > Add > Directory.
- Fill out the form with the following details:
  - ID: Choose an appropriate name, such as toshiba_backup_docker or toshiba_backup_dc01.
  - Directory: Input the full dataset path (e.g., /Toshiba/backups/docker). This should be the ZFS dataset path, complete with a leading forward slash.
  - Content: Select ‘Backup’.
  - Backup Retention: Configure according to your needs.
- Here is an example of what this will look like:
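If you’d rather script this step than click through the form, the same directory storage can, to the best of my knowledge, be added with pvesm. This sketch assumes the same ID and dataset path used above:
# Add a directory-type storage that accepts backups
pvesm add dir toshiba_backup_docker --path /Toshiba/backups/docker --content backup
# Confirm the new storage shows up and is active
pvesm status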

Backing Up… Finally!
Now that everything is set up, you can create a backup. Navigate to Datacenter > Backup > Add. In the Storage dropdown, you should see the new directory we created earlier. Customize the remaining options to fit your specific needs.
To ensure everything is working correctly, I clicked on “Run now” to initiate a manual backup. You can track the progress in the Tasks section at the bottom of the Proxmox GUI. Once the backup is complete, you can verify the result by checking the ZFS path via the command line:
ls -lha /Toshiba/backups/docker/dump
The output should look similar to the following:
ls -lha /Toshiba/backups/docker/dump
total 26G
drwxr-xr-x 2 root root 8 May 16 10:59 .
drwxr-xr-x 3 root root 3 May 11 14:20 ..
-rw-r--r-- 1 root root 793 May 11 14:34 vzdump-lxc-108-2024_05_11-14_20_57.log
-rw-r--r-- 1 root root 12G May 11 14:34 vzdump-lxc-108-2024_05_11-14_20_57.tar.zst
-rw-r--r-- 1 root root 6 May 11 14:34 vzdump-lxc-108-2024_05_11-14_20_57.tar.zst.notes
-rw-r--r-- 1 root root 793 May 16 10:59 vzdump-lxc-108-2024_05_16-10_44_02.log
-rw-r--r-- 1 root root 14G May 16 10:58 vzdump-lxc-108-2024_05_16-10_44_02.tar.zst
-rw-r--r-- 1 root root 6 May 16 10:58 vzdump-lxc-108-2024_05_16-10_44_02.tar.zst.notes
Here we can see 2 separate backups, totaling 26 GB. The .log file holds relevant log information for that backup, the .notes file contains notes associated with the backup, and the .tar.zst file is the actual backup archive.
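If you ever want to kick off a one-off backup from the shell instead of the GUI, vzdump (the tool behind the file names above) can target the new storage directly. A sketch using container 108 from my output:
# Back up guest 108 to the new directory storage with zstd compression
vzdump 108 --storage toshiba_backup_docker --mode snapshot --compress zstd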
Worth Mentioning
It’s also worth mentioning that after setting up this ZFS pool, I started seeing high memory usage on my node. After some research, it seems that “By default ZFS will use up to 50% of your hosts RAM for the ARC (read caching)”, as stated by Dunuin on Proxmox’s forum in 2022.
This memory usage should be freed up when other processes require it, but in practice, this might not always happen seamlessly. If you have other memory-intensive applications running on the same node, they might experience performance issues due to the high memory consumption by ZFS.
To mitigate this, you can adjust the amount of RAM ZFS uses for the ARC. This can be done by setting the zfs_arc_max parameter to a lower value. For example, you can limit the ARC size to 4 GiB (4 × 1024³ = 4294967296 bytes) by adding the following line to /etc/modprobe.d/zfs.conf:
options zfs zfs_arc_max=4294967296
After making this change, you’ll need to update the initramfs and reboot your node for the changes to take effect:
update-initramfs -u -k all
reboot
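After the reboot, you can confirm the new ceiling is actually in place; the module parameter and the live ARC statistics should both reflect the 4 GiB value:
# Current zfs_arc_max setting (0 means the default 50%-of-RAM cap)
cat /sys/module/zfs/parameters/zfs_arc_max
# Live ARC stats; c_max is the ceiling in use, size is the current ARC size
grep -E '^(c_max|size)' /proc/spl/kstat/zfs/arcstats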
By tuning the ARC size, you can balance the memory usage between ZFS and other processes, ensuring that your node operates more efficiently and without unexpected performance degradation.
So, while ZFS provides robust and efficient storage management, it’s crucial to monitor and manage its memory usage to prevent potential issues, especially on nodes with limited RAM or high memory demands from other applications.