Proxmox Series #7: Upgrading Proxmox version

Proxmox 8.0 was released on June 22, 2023, and it’s full of exciting new features. Here are the highlights, straight from the official Proxmox forum announcement:

  • Debian 12, but using a newer Linux kernel 6.2
  • QEMU 8.0.2, LXC 5.0.2, ZFS 2.1.12
  • Ceph Quincy 17.2 is the default and comes with continued support.
  • There is now an enterprise repository for Ceph which can be accessed via any Proxmox VE subscription, providing the best stability for production systems.
  • Additional text-based user interface (TUI) for the installer ISO.
  • Integrate host network bridge and VNet access when configuring virtual guests into the ACL system of Proxmox VE.
  • Add access realm sync jobs to conveniently synchronize users and groups from an LDAP/AD server automatically at regular intervals.
  • New default CPU type for VMs: x86-64-v2-AES
  • Resource mappings: between PCI(e) or USB devices, and nodes in a Proxmox VE cluster.
  • Countless GUI and API improvements.

While you could simply back up your existing VMs and reinstall your server with the new version of Proxmox, I’m going to show you how to perform an in-place upgrade from version 7.4 to version 8.0.

1.) First, back up all your existing virtual machines and containers, just in case something goes wrong during the upgrade. Then either shut them down or, if you have a cluster and need to maintain high availability, migrate them away from the node you’ll be upgrading. Check out my tutorials on backing up virtual machines and building node clusters to find out more about those operations. You’ll also need at least 5 GB of free disk space on the root mount point, along with SSH or console access to the node (a direct SSH or console session is safer than the web GUI’s shell, which can drop mid-upgrade).

root@upgrade:~# qm list
      VMID NAME                 STATUS     MEM(MB)    BOOTDISK(GB) PID       
       100 backuptest           stopped    2048              32.00 0         
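Before shutting guests down, it’s worth confirming the root filesystem actually has the required headroom. A quick sketch (the 5 GB figure comes from the step above; `df` here is GNU coreutils):

```shell
# Show available space (in GB) on the root mount point;
# the upgrade needs at least 5 GB free.
df -BG --output=avail / | tail -n 1

# On the node itself you would then stop each guest by its VMID
# (taken from 'qm list' / 'pct list'), e.g.:
#   qm shutdown 100     # virtual machines
#   pct shutdown 101    # containers
```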

2.) Make sure the system is using the latest Proxmox VE 7.4 packages using APT.

apt update
apt dist-upgrade
pveversion

The last command should report at least version 7.4-15.

Note: If you’re not subscribed to paid enterprise support, you’ll need to disable the enterprise repository and enable the no-subscription repo instead. Do this by clicking on your node, then “Repositories” under “Updates”: disable the enterprise repo, then add the “No-Subscription” repository.
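If you prefer the CLI over the GUI, the same repo switch can be done by editing the APT source files directly. The sketch below runs against a scratch directory so it’s safe to dry-run; on a real node you’d point SRCDIR at /etc/apt/sources.list.d instead (the file names and repo URLs shown are the standard Proxmox ones):

```shell
# On a real node: SRCDIR=/etc/apt/sources.list.d
SRCDIR=$(mktemp -d)
echo "deb https://enterprise.proxmox.com/debian/pve bullseye pve-enterprise" \
    > "$SRCDIR/pve-enterprise.list"

# Disable the enterprise repo by commenting out its 'deb' line
sed -i 's/^deb/#deb/' "$SRCDIR/pve-enterprise.list"

# Enable the no-subscription repo
echo "deb http://download.proxmox.com/debian/pve bullseye pve-no-subscription" \
    > "$SRCDIR/pve-no-subscription.list"

cat "$SRCDIR"/*.list
```

After switching repos, run apt update so APT picks up the new sources.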

3.) The latest Proxmox VE 7.4 packages ship with a checker tool called pve7to8. Run it in a terminal session on the node you intend to upgrade:

pve7to8 --full

The script checks for and reports any potential issues with the upcoming upgrade process, but it won’t repair anything itself. If it finds issues, fix them and re-run the script to verify they were indeed resolved. Note that if you haven’t migrated virtual machines off the node you’re upgrading, the script may warn that those VMs are present; this warning can safely be ignored if you don’t require HA.

= CHECKING VERSION INFORMATION FOR PVE PACKAGES =

Checking for package updates..
PASS: all packages up-to-date

Checking proxmox-ve package version..
PASS: proxmox-ve package has version >= 7.4-1

Checking running kernel version..
PASS: running kernel '5.15.102-1-pve' is considered suitable for upgrade.

= CHECKING CLUSTER HEALTH/SETTINGS =

SKIP: standalone node.

= CHECKING HYPER-CONVERGED CEPH STATUS =

SKIP: no hyper-converged ceph setup detected!

= CHECKING CONFIGURED STORAGES =

PASS: storage 'local' enabled and active.
PASS: storage 'local-lvm' enabled and active.
INFO: Checking storage content type configuration..
PASS: no storage content problems found
PASS: no storage re-uses a directory for multiple content types.

= MISCELLANEOUS CHECKS =

INFO: Checking common daemon services..
PASS: systemd unit 'pveproxy.service' is in state 'active'
PASS: systemd unit 'pvedaemon.service' is in state 'active'
PASS: systemd unit 'pvescheduler.service' is in state 'active'
PASS: systemd unit 'pvestatd.service' is in state 'active'
INFO: Checking for supported & active NTP service..
PASS: Detected active time synchronisation unit 'chrony.service'
INFO: Checking for running guests..
PASS: no running guest detected.
INFO: Checking if the local node's hostname 'upgrade' is resolvable..
INFO: Checking if resolved IP is configured on local node..
PASS: Resolved node IP '192.168.122.230' configured and active on single interface.
INFO: Check node certificate's RSA key size
PASS: Certificate 'pve-root-ca.pem' passed Debian Busters (and newer) security level for TLS connections (4096 >= 2048)
PASS: Certificate 'pve-ssl.pem' passed Debian Busters (and newer) security level for TLS connections (2048 >= 2048)
INFO: Checking backup retention settings..
PASS: no backup retention problems found.
INFO: checking CIFS credential location..
PASS: no CIFS credentials at outdated location found.
INFO: Checking permission system changes..
INFO: Checking custom role IDs for clashes with new 'PVE' namespace..
PASS: no custom roles defined, so no clash with 'PVE' role ID namespace enforced in Proxmox VE 8
INFO: Checking if LXCFS is running with FUSE3 library, if already upgraded..
SKIP: not yet upgraded, no need to check the FUSE library version LXCFS uses
INFO: Checking node and guest description/note length..
PASS: All node config descriptions fit in the new limit of 64 KiB
PASS: All guest config descriptions fit in the new limit of 8 KiB
INFO: Checking container configs for deprecated lxc.cgroup entries
PASS: No legacy 'lxc.cgroup' keys found.
INFO: Checking if the suite for the Debian security repository is correct..
PASS: found no suite mismatch
INFO: Checking for existence of NVIDIA vGPU Manager..
PASS: No NVIDIA vGPU Service found.
INFO: Checking bootloader configuration...
SKIP: not yet upgraded, no need to check the presence of systemd-boot
SKIP: No containers on node detected.

= SUMMARY =

TOTAL:    29
PASSED:   24
SKIPPED:  5
WARNINGS: 0
FAILURES: 0
root@upgrade:/# 

4.) Update all Debian and Proxmox VE repo entries to bookworm, both in the main sources.list and in any repo files under /etc/apt/sources.list.d/ (such as pve-enterprise.list):

sed -i 's/bullseye/bookworm/g' /etc/apt/sources.list
sed -i 's/bullseye/bookworm/g' /etc/apt/sources.list.d/pve-enterprise.list
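After updating the repo entries, it’s worth verifying that no bullseye entries remain. The sketch below demonstrates the rewrite-and-check on a scratch copy so it can be dry-run anywhere; on the node itself you’d grep /etc/apt/sources.list and everything under /etc/apt/sources.list.d/ directly:

```shell
# On a real node: f=/etc/apt/sources.list
f=$(mktemp)
echo "deb http://ftp.debian.org/debian bullseye main contrib" > "$f"

sed -i 's/bullseye/bookworm/g' "$f"

# Any remaining 'bullseye' line means a repo file was missed
if grep -q bullseye "$f"; then
    echo "some entries still point at bullseye"
else
    echo "all entries now point at bookworm"
fi
```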

5.) Refresh the package index from the new repos:

apt update

6.) Update the system to Debian Bookworm and Proxmox VE 8.0:

apt dist-upgrade

How long this command takes depends on your server’s hardware; if your Proxmox installation sits on fast SSDs, it will finish quicker than on mechanical drives. The process will ask you to approve various changes to configuration files. Below are the recommended choices from the official Proxmox upgrade documentation:

/etc/issue -> Proxmox VE will auto-generate this file on boot, and it has only cosmetic effects on the login console.

Using the default "No" (keep your currently-installed version) is safe here.

/etc/lvm/lvm.conf -> Changes relevant for Proxmox VE will be updated, and a newer config version might be useful.

If you did not make extra changes yourself and are unsure it's suggested to choose "Yes" (install the package maintainer's version) here.

/etc/ssh/sshd_config -> If you have not changed this file manually, the only differences should be a replacement of ChallengeResponseAuthentication no with KbdInteractiveAuthentication no and some irrelevant changes in comments (lines starting with #).

If this is the case, both options are safe, though we would recommend installing the package maintainer's version in order to move away from the deprecated ChallengeResponseAuthentication option. If there are other changes, we suggest to inspect them closely and decide accordingly.

/etc/default/grub -> Here you may want to take special care, as this is normally only asked for if you changed it manually, e.g., for adding some kernel command line option.

It's recommended to check the difference for any relevant changes; note that changes in comments (lines starting with #) are not relevant.
If unsure, we suggest selecting "No" (keep your currently-installed version).
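For each of the files above, dpkg shows its standard conffile prompt, so you’ll recognize it when it appears. It looks roughly like this (shown here for /etc/issue; the wording is dpkg’s, not Proxmox’s):

```
Configuration file '/etc/issue'
 ==> Modified (by you or by a script) since installation.
 ==> Package distributor has shipped an updated version.
   What would you like to do about it ?  Your options are:
    Y or I  : install the package maintainer's version
    N or O  : keep your currently-installed version
      D     : show the differences between the versions
      Z     : start a shell to examine the situation
 The default action is to keep your current version.
```

Answering "Y" installs the maintainer’s version; "N" (or just Enter) keeps yours. "D" is handy for reviewing the diff before deciding.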

7.) Once the upgrade completes successfully, run the pve7to8 checker script once more:

pve7to8 --full

8.) If there are no issues, go ahead and reboot. If all went well, you’ll boot into your new Proxmox VE 8 installation. You can now restart your VMs. You’re done!

✍🏻
Doron is a long-time system mangler who got his first taste of Linux compiling and configuring ircd servers from source in the mid-90s. He then delved into web hosting operations through reseller accounts and dedicated servers. Offline he plays bass and is an avid music lover. He co-owns an internet radio station called Genesis Radio, which plays all kinds of music 24x7 and features events and live shows. If you need hosting services, you can check out his current business, Genesis Hosting.