sabnzbd in Docker in ubuntu vm on proxmox... destined to be slow?

das1996
Newbie
Posts: 4
Joined: August 23rd, 2023, 11:21 am

Re: sabnzbd in Docker in ubuntu vm on proxmox... destined to be slow?

Post by das1996 »

Did this get resolved? What was the solution?
Tombstone
Newbie
Posts: 1
Joined: August 29th, 2023, 8:32 am

Re: sabnzbd in Docker in ubuntu vm on proxmox... destined to be slow?

Post by Tombstone »

:o I was just about to add this to Proxmox :o
das1996
Newbie
Posts: 4
Joined: August 23rd, 2023, 11:21 am

Re: sabnzbd in Docker in ubuntu vm on proxmox... destined to be slow?

Post by das1996 »

I should update: I did get my speed issues resolved.

For reference, the config is as follows.

SABnzbd installed in an LXC container on Proxmox (5800X, 64 GB)
TrueNAS installed as a VM with SATA controller passthrough (for direct disk access)
Two disks defined: 1) an SSD for temp/scratch use, and 2) spinning rust for media storage (a Seagate 4 TB 2.5" SMR 5400 RPM POS)
The above two disks shared via NFSv4 to Proxmox
Direct Unpack enabled

Disks mapped into the container using bind mounts; a rough sketch of how that looks is below.
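In case it helps anyone, this is roughly what the mounts look like. The hostname, pool names, paths, and container ID here are made up, so adjust for your own setup:

# On the Proxmox host, /etc/fstab - NFSv4 mounts exported by the TrueNAS VM
truenas:/mnt/tank/scratch  /mnt/pve/scratch  nfs4  rw,async  0  0
truenas:/mnt/tank/media    /mnt/pve/media    nfs4  rw,async  0  0

# Bind mounts into the SABnzbd container, /etc/pve/lxc/101.conf
mp0: /mnt/pve/scratch,mp=/mnt/scratch
mp1: /mnt/pve/media,mp=/mnt/media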

There are several places where sync can be defined; more info here: https://www.avidandrew.com/understandin ... ching.html

1) the ZFS dataset
2) the client (NFS mount options)

For #1, the initial setting was "standard", which means the client decides whether writes are synchronous.
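If you want to check what your dataset is doing, something like this on the TrueNAS side works (the dataset name is just an example):

# Show the current sync policy for the dataset
zfs get sync tank/media

# Valid values: standard (honour whatever the client asks for), always, disabled
zfs set sync=standard tank/media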

I tested two variations for the client mount: sync and async.

With sync, speeds were abysmal: 15-30 MB/s on a disk capable of 130 MB/s.
With async, speeds improved, but jumped between 60-100 MB/s.
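The two client variations above are just NFS mount options, roughly like this (the mount point is an example from the sketch earlier):

# sync: every write must hit stable storage before returning - the 15-30 MB/s case
mount -o remount,sync /mnt/pve/media

# async: the client is allowed to buffer writes - the 60-100 MB/s case
mount -o remount,async /mnt/pve/media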

Setting the ZFS dataset sync to "disabled" and leaving the client at async resulted in the best speeds, a consistent 120-130 MB/s (per iostat and zpool iostat 1). While this is not the safest configuration (in case of power loss), it doesn't really matter for this data type (media).
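For completeness, the final combination sketched out (again, the dataset name is an example):

# On TrueNAS: never wait on sync writes for this dataset
zfs set sync=disabled tank/media

# Leave the NFS client mounted async (as in the fstab lines above),
# then watch throughput while a download/unpack runs:
zpool iostat 1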