# Proxmox

# Remove Proxmox Subscription Notice

##### Whenever a Proxmox node is updated, the subscription notice has to be removed again, because the file is overwritten during the upgrade. I do this purely as a time saver.

[![Proxmox_subscription.png](https://kb.koryalbert.net/uploads/images/gallery/2023-06/scaled-1680-/proxmox-subscription.png)](http://bookstack.korys.lan/uploads/images/gallery/2023-06/proxmox-subscription.png)




#### Option 1

---

1\. Change to the working directory

```bash
cd /usr/share/javascript/proxmox-widget-toolkit
```

2\. Make a backup

```bash
cp proxmoxlib.js proxmoxlib.js.bak
```

3\. Edit the file

```bash
nano proxmoxlib.js
```

4\. Locate the relevant code (use Ctrl+W in nano and search for “No valid subscription”)

5\. Find the following line to edit

```javascript
.data.status.toLowerCase() !== 'active') {
```

and change the not-equal (`!==`) operator so the line looks like the following

```javascript
.data.status.toLowerCase() == 'active') {
```

[![Proxmox_sub_js.png](https://kb.koryalbert.net/uploads/images/gallery/2023-06/scaled-1680-/proxmox-sub-js.png)](http://bookstack.korys.lan/uploads/images/gallery/2023-06/proxmox-sub-js.png)

6\. Restart the Proxmox web service (also clear your browser cache; depending on the browser, you may need to open a new tab or restart it)

```bash
systemctl restart pveproxy.service
```
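The manual edit above can also be scripted. A minimal sketch using `sed` (an assumption that the pattern still matches your proxmoxlib.js version; check first, and keep the `.bak` file the `-i` flag creates):

```shell
# Back up and patch in one step: sed writes proxmoxlib.js.bak before editing in place
sed -i.bak "s/\.data\.status\.toLowerCase() !== 'active'/.data.status.toLowerCase() == 'active'/" \
    /usr/share/javascript/proxmox-widget-toolkit/proxmoxlib.js
systemctl restart pveproxy.service
```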

#### Option 2

---

1\. Change to the working directory

```bash
cd /usr/share/javascript/proxmox-widget-toolkit
```

2\. Make a backup
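As in Option 1, this is just a copy of the file (run from the working directory above):

```shell
# Keep an untouched copy so the edit can be reverted later
cp proxmoxlib.js proxmoxlib.js.bak
```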

3\. Edit the file

```bash
nano proxmoxlib.js
```

4\. Locate the following code (use Ctrl+W in nano and search for “No valid subscription”)

```javascript
Ext.Msg.show({
title: gettext('No valid subscription'),
```

5\. Replace “Ext.Msg.show” with “void”

```javascript
void({ //Ext.Msg.show({
title: gettext('No valid subscription'),
```
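To confirm the substitution took, a quick check (this assumes you kept the commented-out call exactly as shown above):

```shell
# Should print the patched line with its line number
grep -n "void({ //Ext.Msg.show({" /usr/share/javascript/proxmox-widget-toolkit/proxmoxlib.js
```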

6\. Restart the Proxmox web service (also clear your browser cache; depending on the browser, you may need to open a new tab or restart it)

```bash
systemctl restart pveproxy.service
```



#### <span class="mw-headline" id="bkmrk-additional-notes-1">Additional Notes</span>

---

<span class="mw-headline">If you ever need to revert, either restore the backup file over the edited version or reinstall the widget toolkit</span>

Revert from backup:

```bash
mv proxmoxlib.js.bak proxmoxlib.js
```

Reinstall the widget toolkit:

```bash
apt-get install --reinstall proxmox-widget-toolkit
```

# Remove Node from Cluster

##### Introduction

---

Follow these steps when a node needs to be removed entirely from the cluster and the remaining node should function as a standalone (single) node.

##### Steps

---

1. Remove or migrate all VMs and containers from the nodes that will be decommissioned
2. Create a ZFS snapshot of rpool and rpool/data on the single node
3. Delete the nodes that will be decommissioned
4. Remove the cluster config
5. (Optional) Disable cluster/HA services

---

Snapshot

```bash
zfs snapshot rpool@<date>
zfs snapshot rpool/data@<date>
```
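For example, stamping the snapshot names with today's date (any unique name works):

```shell
# Use the current date (YYYYMMDD) as the snapshot name
snap=$(date +%Y%m%d)
zfs snapshot rpool@"$snap"
zfs snapshot rpool/data@"$snap"
```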

Turn off the node(s) being decommissioned

Remove the decommissioned node(s) from the remaining node

```bash
pvecm delnode <nodename>
```

Run the following to delete the cluster configuration and leave a standalone node

<p class="callout info">It may be best to run this line by line to see the output.</p>

```bash
systemctl stop pve-cluster corosync
pmxcfs -l
rm /etc/corosync/*
rm /etc/pve/corosync.conf
killall pmxcfs
systemctl start pve-cluster 
```

(Optional) Disable cluster/HA services

```bash
systemctl disable --now pve-ha-crm pve-ha-lrm corosync.service
```

# Install FreeNAS Initiator

##### When the cluster is used in a converged setup with TrueNAS, these patches need to be installed for the FreeNAS initiator to appear in the storage dialog

[GitHub Repo](https://github.com/TheGrandWazoo/freenas-proxmox)

#### Option 1

---

##### Connect to each node and install the following keys

```bash
curl https://ksatechnologies.jfrog.io/artifactory/ksa-repo-gpg/ksatechnologies-release.gpg -o /etc/apt/trusted.gpg.d/ksatechnologies-release.gpg
curl https://ksatechnologies.jfrog.io/artifactory/ksa-repo-gpg/ksatechnologies-repo.list -o /etc/apt/sources.list.d/ksatechnologies-repo.list
```

##### Then issue the following to install the package

<p class="callout info">The restart on line 3 may no longer be necessary now that my GitHub issue has been addressed. [Link](https://github.com/TheGrandWazoo/freenas-proxmox/issues/109#issuecomment-1367527917)</p>

```bash
apt update
apt install freenas-proxmox -y
systemctl restart pvescheduler.service
```

#### Option 2

---

This is the manual way to do it. However, I no longer do it this way now that there is a package to install. The benefit of the package is that the TrueNAS configuration is not overwritten when Proxmox is updated.

Let's create the SSH keys on the Proxmox boxes (the IP must match your iSCSI portal IP). If the nodes are clustered, you only need to create the keys on one node, as they replicate to the other nodes.

```bash
portal_ip=192.168.2.252
mkdir /etc/pve/priv/zfs
ssh-keygen -f /etc/pve/priv/zfs/${portal_ip}_id_rsa
ssh-copy-id -i /etc/pve/priv/zfs/${portal_ip}_id_rsa.pub root@${portal_ip}
```

##### <span id="bkmrk--3"></span><span class="mw-headline" id="bkmrk-enable-%22log-in-as-ro-1">Enable "Log in as root with password" under Services -&gt; SSH on the FreeNAS box.</span>

<span class="mw-headline" id="bkmrk-make-an-ssh-connecti-1">Make an SSH connection from every node to the iSCSI Portal IP</span>

```bash
ssh -i /etc/pve/priv/zfs/${portal_ip}_id_rsa root@${portal_ip}
```

##### <span class="mw-headline" id="bkmrk-install-the-rest-cli-1">Install the REST client on every node</span>

```bash
apt-get install librest-client-perl git
```

##### <span class="mw-headline">Download the patches on every Proxmox node</span>

```bash
git clone https://github.com/TheGrandWazoo/freenas-proxmox
```


##### <span class="mw-headline" id="bkmrk-install-the-patches--1">Install the patches on every Proxmox node</span>

<p class="callout info"><span class="mw-headline">These can be run all at once, but it is harder to see the output</span></p>

```bash
cd freenas-proxmox
patch -b /usr/share/pve-manager/js/pvemanagerlib.js < pve-manager/js/pvemanagerlib.js.patch
patch -b /usr/share/perl5/PVE/Storage/ZFSPlugin.pm < perl5/PVE/Storage/ZFSPlugin.pm.patch
patch -b /usr/share/pve-docs/api-viewer/apidoc.js < pve-docs/api-viewer/apidoc.js.patch
cp perl5/PVE/Storage/LunCmd/FreeNAS.pm /usr/share/perl5/PVE/Storage/LunCmd/FreeNAS.pm
```

##### <span class="mw-headline" id="bkmrk-restart-the-pve-serv-1">Restart the PVE services</span>

```bash
systemctl restart pvedaemon
systemctl restart pveproxy
systemctl restart pvestatd
```

If you are using a cluster, restart the following services as well.

```bash
systemctl restart pve-ha-lrm
systemctl restart pve-ha-crm
systemctl restart pvescheduler.service
```

Reload the PVE web GUI. FreeNAS-API should now be available as an iSCSI provider.

# Reset SSL Certificate


Navigate to the following directory

```bash
cd /etc/pve/local
```

Rename the .key and .pem files for backup, then regenerate the certificates and restart the proxy
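For example, assuming the default certificate filenames in /etc/pve/local (verify with `ls` first):

```shell
# Move the existing node certificate and key aside so fresh ones can be generated
mv pve-ssl.pem pve-ssl.pem.bak
mv pve-ssl.key pve-ssl.key.bak
```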

```bash
pvecm updatecerts --force
systemctl restart pveproxy
```

<p class="callout info">The backup .pem and .key files can be deleted once the web interface loads without error</p>

# Resize VM Disk

## <span class="mw-headline" id="bkmrk-resizing-the-guest-d-1">Resizing the guest disk</span>

#### <span class="mw-headline" id="bkmrk-general-consideratio-1">General considerations</span>

When you resize the disk of a VM, to avoid confusion and disasters, think of the process like adding or removing a disk platter.

If you **enlarge** the disk, the partition table and file system inside the VM know nothing about the new size once the platter is added, so you have to act inside the VM to fix that.

If you **reduce** (shrink) the disk, removing the last platter will likely **destroy** your file system and the data on it! In this case it is paramount to act inside the VM **in advance**, shrinking the file system and the partition first. SystemRescueCD comes in very handy here: just add its ISO as a CD-ROM for your VM and set the boot priority to CD-ROM.

Shrinking disks is not supported by the PVE API and has to be done manually.


#### <span class="mw-headline" id="bkmrk-qm-command-1">qm command</span>

You can resize disks online or offline from the command line:

```bash
qm resize <vmid> <disk> <size> 
```

example: to add 5G to your virtio0 disk on vmid 100:

```bash
qm resize 100 virtio0 +5G
```
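To verify the change, the disk line in the VM configuration should show the new size (`qm config` prints a VM's settings; the VM ID and disk name here follow the example above):

```shell
# The size=... attribute on the disk line should reflect the enlarged value
qm config 100 | grep virtio0
```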

For virtio disks:

Linux should see the new size online without a reboot with kernel >= 3.6.

Windows should see the new size online without a reboot with the latest virtio drivers.

For virtio-scsi disks:

Linux should see the new size online without a reboot with kernel >= 3.7.

Windows should see the new size online without a reboot with the latest virtio drivers.

## <span id="bkmrk-"></span><span class="mw-headline" id="bkmrk-enlarge-the-partitio-1">Enlarge the partition(s) in the virtual disk</span>

Depending on the installed guest, there are several different ways to resize the partitions

#### <span class="mw-headline" id="bkmrk-offline-for-all-gues-1">Offline for all guests</span>

Use **gparted** or a similar tool (recommended).
In gparted, and possibly most other tools, **LVM and Windows dynamic discs are not supported**.

Boot the virtual machine with gparted or a similar tool, enlarge the partition and optionally the file system. With some Linux guests you often need to enlarge the extended partition, move the swap partition, shrink the extended partition and enlarge the root partition (or simply delete the swap partition and recreate it afterwards, remembering to activate the swap again as the last step).
Gparted has warnings about specific operations that are not well supported with Windows guests; that is outside the scope of this document, but read the warnings in gparted.

#### <span class="mw-headline" id="bkmrk-online-for-windows-g-1">Online for Windows Guests</span>

- Guest is Windows 7, Windows Vista or Windows Server 2008: log on as administrator and extend the disk and filesystem using Disk Management. For more info see [www.petri.co.il/extend-disk-partition-vista-windows-server-2008.htm](http://www.petri.co.il/extend-disk-partition-vista-windows-server-2008.htm)
- Guest is Windows 10: log on as administrator and extend the disk and filesystem using Disk Management. If you do not see the ability to extend the disk (i.e. nothing seems to have happened after running the resize command), open a Windows command prompt and run `shutdown -s -t 0` (a "normal" shutdown, as opposed to the "fast" shutdown that is the default from Windows 8 onwards). After the next boot you will be able to expand the disk.

#### <span class="mw-headline" id="bkmrk-online-for-linux-gue-1">Online for Linux Guests</span>

Here we will enlarge an LVM PV partition, but the procedure is the same for any kind of partition. Note that the partition you want to enlarge must be at the end of the disk. If you want to enlarge a partition that sits anywhere else on the disk, use the offline method.

- Check that the kernel has detected the change of the hard drive size

(here we use VirtIO so the hard drive is named vda)

```
dmesg | grep vda

[ 3982.979046] vda: detected capacity change from 34359738368 to 171798691840
```

Print the current partition table

```
fdisk -l /dev/vda | grep ^/dev

GPT PMBR size mismatch (67108863 != 335544319) will be corrected by w(rite).
/dev/vda1      34     2047     2014 1007K BIOS boot
/dev/vda2    2048   262143   260096  127M EFI System
/dev/vda3  262144 67108830 66846687 31.9G Linux LVM
```

Resize partition 3 (the LVM PV) to occupy the whole remaining space on the drive

```bash
parted /dev/vda
(parted) print

Warning: Not all of the space available to /dev/vda appears to be used, you can
fix the GPT to use all of the space (an extra 268435456 blocks) or continue
with the current setting? 

Fix/Ignore? F 
```

```bash
(parted) resizepart 3 100%
(parted) quit
```

Check the new partition table

```
fdisk -l /dev/vda | grep ^/dev

/dev/vda1      34      2047      2014  1007K BIOS boot
/dev/vda2    2048    262143    260096   127M EFI System
/dev/vda3  262144 335544286 335282143 159.9G Linux LVM
```

## <span class="mw-headline" id="bkmrk-enlarge-the-filesyst-1">Enlarge the filesystem(s) in the partitions on the virtual disk</span>

#### <span class="mw-headline" id="bkmrk-online-for-linux-gue-3">Online for Linux guests with LVM</span>

Enlarge the physical volume to occupy the whole available space in the partition:

```bash
pvresize /dev/vda3
```

Enlarge the logical volume and the filesystem (the file system can stay mounted; this works with ext4 and xfs)

```bash
lvresize --size +20G --resizefs /dev/xxxx/root #Increase the logical volume and its filesystem by 20GB
```

```bash
lvresize --extents +100%FREE --resizefs /dev/xxxx/root #Use all the remaining space on the volume group
```

#### <span class="mw-headline" id="bkmrk-online-for-linux-gue-5">Online for Linux guests without LVM</span>

Enlarge the filesystem (in this case root is on vda1; resize2fs works for ext2/3/4)

```bash
resize2fs /dev/vda1
```

# Choose boot Kernel

Proxmox comes with a built-in tool called `proxmox-boot-tool`. It can list the kernels available on the system and pin the one we choose so the node always boots from it.

#### List and choose boot Kernel

`proxmox-boot-tool kernel list`

<span style="text-decoration: underline;">Example output:</span>

<span style="text-decoration: underline;">[![Screenshot from 2023-08-12 08-53-11.png](https://kb.koryalbert.net/uploads/images/gallery/2023-08/scaled-1680-/screenshot-from-2023-08-12-08-53-11.png)](https://kb.koryalbert.net/uploads/images/gallery/2023-08/screenshot-from-2023-08-12-08-53-11.png)</span>

Now that we know which kernel we want we must run the following command and reboot.

`proxmox-boot-tool kernel pin 5.15.108-1-pve && reboot`

This pins the kernel we want to use. We may also need to unpin a kernel we no longer want; the following command removes a kernel from the pinned list:

`proxmox-boot-tool kernel unpin 5.15.108-1-pve`