Help me figure out my 10gbit bottleneck with SABnzbd [Docker on Unraid]

Get help with all aspects of SABnzbd
Forum rules
Help us help you:
  • Are you using the latest stable version of SABnzbd? Downloads page.
  • Tell us what system you run SABnzbd on.
  • Adhere to the forum rules.
  • Do you experience problems during downloading?
    Check your connection in Status and Interface settings window.
    Use Test Server in Config > Servers.
    We will probably ask you to do a test using only basic settings.
  • Do you experience problems during repair or unpacking?
    Enable +Debug logging in the Status and Interface settings window and share the relevant parts of the log here using [ code ] sections.
bokai
Newbie
Posts: 5
Joined: June 16th, 2025, 2:36 am

Help me figure out my 10gbit bottleneck with SABnzbd [Docker on Unraid]

Post by bokai »

I'm looking to push my download speeds as high as possible, just for fun. I'm on a 10Gbps ISP connection with hardware to match:
Network:
UniFi UDM SE
UniFi USW Pro Max 16

Server:
Intel 14600k
2TB NVME (for downloads and cache)
3x20TB HDD (currently not in use for these tests).
Mellanox ConnectX-4 NIC 25Gbit (connected with 10Gbit SFP+)
Using SABnzbd in Unraid Docker with network set to "host" to maximize performance.
"Permit exclusive shares" in Unraid is enabled, and the downloads should bypass the FUSE file system, which can otherwise limit performance. (Found instructions in SAB's GitHub discussions, discussions/2867#discussioncomment-9583601 )
In-app benchmark clearly shows my server is capable:
Image

I get 10Gbit speeds locally, and Iperf3 from my server to an external server in my country also shows I am able to get it externally:

Code: Select all

[ID]    Interval         Transfer     Bitrate          Retr
[SUM]   0.00-30.00 sec   28.3 GBytes  8.11 Gbits/sec   42509   sender
[SUM]   0.00-30.00 sec   28.3 GBytes  8.09 Gbits/sec           receiver
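For reference, multi-stream [SUM] lines like the above typically come from a parallel iperf3 run of this shape (the server address is a placeholder):

```shell
# 30-second TCP test with 8 parallel streams to an external iperf3 server
iperf3 -c iperf.example.net -P 8 -t 30
```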
But when using SABnzbd it seems I can't get it past 350MB/s, and I can't really figure out where the bottleneck is.

I have access to several providers in the EU for testing purposes, but no matter how I "mix and match", it seems I max out at around 350MB/s.
I've tried starting with fewer connections on one server and slowly going up; adding more connections helps until I hit my max, but after that it doesn't matter whether I have a total of 150 connections or 250, the max speed stays the same. One example:
Image
Another example:
Image

Some settings I've messed with in SAB:

Code: Select all

Max line speed: 1000MB/s
Usage %: 100%
Article cache limit: 4G (doesn't seem to go above 1G).

Direct Unpack is off.
Pause Downloading During Post-Processing is on.
Unwanted Extensions and Action when encrypted RAR is downloaded are both off.

CPU usage doesn't seem to go above 20%, and the NVME definitely should be able to handle it, right?

Any tips on what I can try next to figure out why my DL speed maxes out at 350MB/s, when tests show I should be able to get up towards 700-800MB/s?
sander
Release Testers
Posts: 9261
Joined: January 22nd, 2008, 2:22 pm

Re: Help me figure out my 10gbit bottleneck with SABnzbd

Post by sander »

> Any tips on what I can try next to figure out why my DL speed maxes at 350MB/s,

Yes: run SABnzbd straight on an OS (without Docker, without Unraid), on capable hardware, with a 10G (or 5G) ethernet connection. Because Unraid and Docker (with Python?) slow things down.
I achieved 630 MB/s with SABnzbd straight on Ubuntu on an i9, with a 10GB download (fitting into RAM of that machine).

Technical background (and workarounds):

https://stackoverflow.com/questions/608 ... s%20slower.

https://github.com/docker-library/python/issues/575

https://github.com/docker-library/python/issues/825

https://stackoverflow.com/questions/761 ... -container
bokai
Newbie
Posts: 5
Joined: June 16th, 2025, 2:36 am

Re: Help me figure out my 10gbit bottleneck with SABnzbd

Post by bokai »

Ouch, I was afraid something like this would come along. :(

Do you happen to know what the upper limit is when using docker? Or Unraid?
sander
Release Testers
Posts: 9261
Joined: January 22nd, 2008, 2:22 pm

Re: Help me figure out my 10gbit bottleneck with SABnzbd [Docker on Unraid]

Post by sander »

The "docker run --security-opt seccomp:unconfined" is worth a try?

> Do you happen to know what the upper limit is when using docker? Or Unraid?

The highest I've heard of so far ... 350 MB/s ... ;-)

Also worth a try: nzbget on docker on unraid. That might be faster, ... if "As it turns out, running a program with seccomp enables the infamous STIBP mitigation, making some workloads, such as scripting languages two times slower." is true: nzbget is written in C, not in a scripting language.
bokai
Newbie
Posts: 5
Joined: June 16th, 2025, 2:36 am

Re: Help me figure out my 10gbit bottleneck with SABnzbd [Docker on Unraid]

Post by bokai »

sander wrote: June 16th, 2025, 4:33 am The highest I've heard of so far ... 350 MB/s ... ;-)
Make that 432MB/s. ;)
Image

Granted, this was only with a testfile, and I don't know why, but the difference between a testfile and a real world download from the same server is huge.

For example in the screenshot above, I used only 40 connections in total over 2 providers.
However, if I initiate a real download with the same 40 connections, I get a speed of about 150MB/s.

To get it up to 350MB/s I have to triple the number of connections for the same 2 servers. Any idea why this is?

Edit: I also tried NZBget, and I seem to get very similar speeds, which kind of makes me believe I might be hitting my practical max speed?
sander
Release Testers
Posts: 9261
Joined: January 22nd, 2008, 2:22 pm

Re: Help me figure out my 10gbit bottleneck with SABnzbd [Docker on Unraid]

Post by sander »

> Make that 432MB/s. ;)

Noted!

> also I tried NZBget as well, and seem to get very similar speeds. Which kind of makes me believe I might be hitting my practical max speed?

Oh? I'm surprised.

Back to my original advice: run SABnzbd straight on an OS (without docker, without Unraid), on capable hardware, with a 10G (or 5G) ethernet connection. That will give a nice benchmark. As said: I achieved 630 MB/s with SABnzbd on a 8 Gbps link. So you should be able to get that too.
bokai
Newbie
Posts: 5
Joined: June 16th, 2025, 2:36 am

Re: Help me figure out my 10gbit bottleneck with SABnzbd [Docker on Unraid]

Post by bokai »

sander wrote: June 16th, 2025, 6:45 am > Make that 432MB/s. ;)

Noted!

> also I tried NZBget as well, and seem to get very similar speeds. Which kind of makes me believe I might be hitting my practical max speed?

Oh? I'm surprised.

Back to my original advice: run SABnzbd straight on an OS (without docker, without Unraid), on capable hardware, with a 10G (or 5G) ethernet connection. That will give a nice benchmark. As said: I achieved 630 MB/s with SABnzbd on a 8 Gbps link. So you should be able to get that too.
I will test that, but it takes some time to setup. I'll be back. :)
Thanks for your help!
zoggy
Release Testers
Posts: 77
Joined: February 8th, 2011, 3:08 pm

Re: Help me figure out my 10gbit bottleneck with SABnzbd [Docker on Unraid]

Post by zoggy »

I have a 5Gbps connection from my ISP.
Running SAB in a Docker container on Unraid, I get 550-600MB/s with 80 connections using one provider (~30ms latency to them).
Upping to 100 connections and adding another 100 connections from a second provider doesn't yield me any benefit, just higher CPU and more thread latency, which can have a knock-on effect. So one provider ends up working well for me. I will say that for whatever reason the binhex container (uses Arch) works better for me than linuxserver or hotio (which both use Alpine).

With Unraid and how it does user shares, you do not want to go through FUSE (shfs) for your incomplete folder. So either do a direct mount of your cache pool, or just make an exclusive share and set that as your incomplete folder.

A decent NVMe drive should be fine for multi-gig (even up to like 10Gbps); just of course when you are sustaining that, you might want to have a pool to handle the IO overhead of extracting while downloading, or just don't use Direct Unpack / pause during post-processing.

Image


To share some info I tell people in Discord when they bring up performance concerns (it's going to be a bit of a word vomit, but you'll get the idea):

Lower RTT (latency) from you to your provider is better, and any congestion will hurt it (so ISP peering matters). Usenet is TCP traffic, so TCP window scaling and segment sizes all come into play; read:
https://networklessons.com/cisco/ccnp-r ... ze-scaling

The tl;dr: lower latency means a higher chance of doing more (the window grows quicker), but any congestion stops it from growing, may shrink it, and adds the overhead of retries. So 1 connection may do 10MB/s for one person but only 1MB/s for another.

You might find 20 connections works well during off-peak hours, but during peak congestion you need double that amount to get nearly the same speeds. So test at different times and against different servers (ports / address family) to see if it influences anything.

More connections is not always better: with the Python socket limit we can only do 512 (which no one really should be doing), and that many connections can be a lot of overhead. Look at your router to make sure it's not struggling to manage them.

Evaluate other Docker maintainers' images if you're using Docker; they are not all the same (different OS, setup, Python version, etc.).

SAB downloads into RAM (the article cache), which gets offloaded to your SAB incomplete folder; that incomplete folder should be on a fast local drive.
Drive type matters: SATA is half duplex while SAS is full duplex, but M.2 or U.2, for example, have more bus bandwidth thanks to PCIe, and then which PCIe generation and lanes you use can matter too.
Having a fast DRAM-less SSD may seem fine, but it's like a glorified USB drive: it works for a single task, but any time you do more than one thing (like download and extract, or copy files and download) it stumbles and needs the CPU to manage it, so iowait skyrockets. You can mask some of that by disabling Direct Unpack or pausing downloads during post-processing, but overall it's not a great fit for SAB.

Drive selection:
m.2 > sas > sata
dram > dram-less
slc > mlc > tlc > qlc

--

Drive speed, local network speed (from your ISP to your computer), and the path from your usenet provider through the global internet to your ISP are all different things. Many variables are involved, and beyond all that is how you use these things and where the disk IO is actually happening.

SSDs, while faster than spinning rust, come in various levels of performance. You will find cheap DRAM-less ones that seem okay on a single task, like transferring one large file, but tank in performance when asked to do more than one thing. Even ones with DRAM are only good until that DRAM is exhausted, and then you might not be able to sustain things. And consumer SSDs are usually connected through SATA, which is half duplex; you'd need SAS to be full duplex. That's why using the M.2 interface can yield much faster speeds: you're not limited by SATA III or sharing IO with other devices on the chipset.

Having a 10G LAN won't mean anything if you have a 1G WAN, and vice versa. And even if you have a 10G WAN and LAN, if you have a crappy usenet provider or ISP, you're still going to struggle to get decent speeds: usenet is TCP traffic, so congestion and retries kill performance because the TCP window can't grow and the congestion algorithm holds you back. That's why, when you want decent speeds, you'd care more about the RTT from you to your usenet provider. The lower the latency, the better and more consistent things will be. But even so, you need enough connections as well, as many providers limit how much one connection can do and use that to gate things.
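To get a rough RTT number yourself without digging through logs, you can time a plain TCP connect (a sketch only; the hostname in the usage comment is a placeholder, and a TCP handshake time approximates but is not identical to true RTT):

```python
import socket
import time

def connect_time_ms(host: str, port: int, timeout: float = 5.0) -> float:
    """Time a TCP three-way handshake, which takes roughly one round trip."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # connection is closed immediately; we only want the handshake time
    return (time.perf_counter() - start) * 1000.0

# e.g. print(connect_time_ms("news.example.com", 563))
```

Comparing this number against the connect times SABnzbd's happyeyeballs lines report is a quick sanity check on your route to the provider.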
sander
Release Testers
Posts: 9261
Joined: January 22nd, 2008, 2:22 pm

Re: Help me figure out my 10gbit bottleneck with SABnzbd [Docker on Unraid]

Post by sander »

> Lower RTT (latency) from you to your provider is better

Good point, Zoggy.

And there is a formula for that:

TCP throughput = TCP window size / RTT

So if the RTT is 10 times higher, the TCP throughput is 10 times lower. A low RTT is king.

Google hits:

The default TCP window size in Linux is often 64KB (65535 bytes), but this can be dynamically adjusted using TCP window scaling. The scaling factor allows for window sizes much larger than 64KB, potentially reaching up to 1 GB.
Linux: /proc/sys/net/ipv4/tcp_rmem ... the middle value
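If you want to experiment with a larger default receive window, the middle value of tcp_rmem can be raised at runtime (a sketch; the 262144 value is illustrative, this needs root, is not persistent across reboots, and Linux autotuning may grow the window on its own anyway):

```shell
# Show current min/default/max receive buffer sizes (bytes)
cat /proc/sys/net/ipv4/tcp_rmem

# Temporarily raise the default to 256 KiB
sysctl -w net.ipv4.tcp_rmem="4096 262144 6291456"
```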

While often used interchangeably, ping and Round-Trip Time (RTT) are not exactly the same, though ping is a common way to measure RTT. RTT is the total time it takes for a data packet to travel from a source to a destination and back, while ping is a specific command-line tool that uses ICMP packets to measure this time.

So I calculated the max TCP throughput for different window sizes and RTTs:

Window Size (kB)   RTT (ms)   TCP Throughput (MB/s)
64                 10         6.4
131                9          14.6
131                100        1.3

The middle line, 131 kB and 9 ms, matches what my Linux tcp_rmem and SABnzbd's happyeyeballs report, respectively. And with Eweka, with just one connection on the 1GB test download, I indeed get a SABnzbd download speed of 16.0 MB/s: "Downloaded in 1 min 6 seconds at an average of 16.0 MB/s" ... close to the calculated 14.6 MB/s. Cool.
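That calculation can be sketched in a couple of lines (a rough single-connection ceiling only; window scaling and congestion change the real number):

```python
def tcp_throughput_mb_s(window_bytes: int, rtt_ms: float) -> float:
    """Upper bound on single-connection TCP throughput: window size / RTT."""
    return window_bytes / (rtt_ms / 1000.0) / 1e6  # bytes/s -> MB/s

# Linux default receive window (middle tcp_rmem value) at different RTTs:
print(round(tcp_throughput_mb_s(131072, 9), 1))    # -> 14.6
print(round(tcp_throughput_mb_s(131072, 100), 1))  # -> 1.3
```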

Code: Select all

$ cat  /proc/sys/net/ipv4/tcp_rmem
4096    131072  6291456
and

Code: Select all

$ cat .sabnzbd/logs/sabnzbd.log | grep -i happyeyeballs | grep -i eweka | grep -e ms -e Quickest
2025-06-17 10:40:15,726::DEBUG::[happyeyeballs:102] Happy Eyeballs connected to 2001:4de0:1::219 (news6.eweka.nl, port=563) in 12ms
2025-06-17 10:40:15,727::INFO::[happyeyeballs:205] Quickest IP address for news.eweka.nl (port=563, IPv4 or IPv6): 2001:4de0:1::219 (news6.eweka.nl)
2025-06-17 10:40:15,730::DEBUG::[happyeyeballs:102] Happy Eyeballs connected to 2001:4de0:1::219 (news6.eweka.nl, port=563) in 16ms
2025-06-17 10:40:15,731::INFO::[happyeyeballs:205] Quickest IP address for news.eweka.nl (port=563, IPv4 or IPv6): 2001:4de0:1::219 (news6.eweka.nl)
2025-06-17 10:40:15,739::DEBUG::[happyeyeballs:102] Happy Eyeballs connected to 185.90.196.70 (news.eweka.nl, port=563) in 14ms
2025-06-17 10:40:15,740::DEBUG::[happyeyeballs:102] Happy Eyeballs connected to 185.90.196.70 (news.eweka.nl, port=563) in 14ms
2025-06-17 10:40:25,153::DEBUG::[happyeyeballs:102] Happy Eyeballs connected to 2001:4de0:1::233 (news6.eweka.nl, port=563) in 9ms
2025-06-17 10:40:25,153::INFO::[happyeyeballs:205] Quickest IP address for news.eweka.nl (port=563, IPv4 or IPv6): 2001:4de0:1::233 (news6.eweka.nl)

Code: Select all

sander@brixit:~$ date --date='2025-06-17 10:40:25' +"%s" | cut -c-8
17501496
sander@brixit:~$ sqlite3 .sabnzbd/admin/history1.db .dump | grep 17501496 | tr ',' '\n' | grep MB/s | sed -e 's/\\r\\n/\n/g' | grep MB/s
Download:::Downloaded in 1 min 6 seconds at an average of 16.0 MB/s<br/>Age: 1058d

EDIT:

When I disable IPv6 (to be exact: disable ipv6_staging, so no enrichment of eweka with IPv6), SAB connects only via IPv4, and the connect time aka RTT is 10% higher (because of double NAT: on my router and CGNAT at my ISP):

Code: Select all

2025-06-17 13:48:46,728::DEBUG::[happyeyeballs:102] Happy Eyeballs connected to 81.171.92.233 (news.eweka.nl, port=563) in 11ms
2025-06-17 13:50:44,998::DEBUG::[happyeyeballs:102] Happy Eyeballs connected to 81.171.92.219 (news.eweka.nl, port=563) in 10ms
2025-06-17 13:50:44,998::INFO::[happyeyeballs:205] Quickest IP address for news.eweka.nl (port=563, IPv4 or IPv6): 81.171.92.219 (news.eweka.nl)
... and the resulting speed of 1 connection is indeed lower (by even more than 10%): 12.1 MB/s. Cool.
sander
Release Testers
Posts: 9261
Joined: January 22nd, 2008, 2:22 pm

Re: Help me figure out my 10gbit bottleneck with SABnzbd [Docker on Unraid]

Post by sander »

So, @bokai, can you do these things:

0) set SABnzbd's logging to +Debug (via the wrench symbol in SABnzbd's upper right corner)
1) have only one server active (or: only one server at highest priority), with only 1 connection, and then do the 1GB test download: what is the resulting speed?
2) check sabnzbd.log for "Happy Eyeballs connected to", and post those lines here: I want to know your connect time in ms (milliseconds)
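For step 2, something of this shape pulls out the relevant lines (a sketch; the log path depends on your install, e.g. inside the Docker config volume):

```shell
grep "Happy Eyeballs connected to" ~/.sabnzbd/logs/sabnzbd.log
```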
bokai
Newbie
Posts: 5
Joined: June 16th, 2025, 2:36 am

Re: Help me figure out my 10gbit bottleneck with SABnzbd [Docker on Unraid]

Post by bokai »

zoggy wrote: June 16th, 2025, 3:19 pm I have a 5Gbps connection from my isp.
Running sab in a docker container on unraid I get 550-600MB/s with 80 connections using one provider (~30ms latency to them).
So it CAN be done! :)
I currently reach a sustained 430MB/s with my setup on a 10GB testfile with 70 connections to one provider. However, a real-world download on the same server with the same number of connections nets me about ~250MB/s. Any idea why?

Also, I moved to the binhex container but didn't see any difference yet.
My incomplete downloads go to a /data/ share with exclusive share enabled (a single 2TB NVMe), so FUSE should be bypassed.
zoggy wrote: June 16th, 2025, 3:19 pm Lower RTT (latency) from you to your provider is better, and any congestion will hurt it (so isp peering matters). Usenet is TCP traffic.. so tcp window scaling/segment sizes all come into play, read:
https://networklessons.com/cisco/ccnp-r ... ze-scaling

[...]

why ideally when your wanting to do decent speeds youd care more about whats the rtt from you to your usenet service provider
lower the latency the better/consistent things will be. but even so you need to have enough connections then as well. as many providers limit how much one connection to do. and use that to gate things
sander wrote: June 17th, 2025, 3:53 am [...]
While often used interchangeably, ping and Round-Trip Time (RTT) are not exactly the same, though ping is a common way to measure RTT. RTT is the total time it takes for a data packet to travel from a source to a destination and back, while ping is a specific command-line tool that uses ICMP packets to measure this time.
Interesting!
I've started testing with different window sizes, but have not yet seen any difference.

Code: Select all

$cat  /proc/sys/net/ipv4/tcp_rmem
4096    131072  6291456
@zoggy, what are your window settings?
sander wrote: June 17th, 2025, 6:32 am So, @bokai, can you do these things:

0) set SABnzbd's logging to +Debug (via Wrench symbol in SABnzb's upper right corner)
1) have one server active (or: only one server at highest), only 1 connection, and then do the 1GB testdownload: what is the resulting speed
2) check sabnzbd.log for "Happy Eyeballs connected to", and post those lines here: I want to know your connect time in ms (millisecond)
Absolutely, here are my results:
1.

Code: Select all

Download:::Downloaded in 1 min 50 seconds at an average of 9.6 MB/s<br/>Age: 1058d
2.

Code: Select all

2025-06-17 14:08:12,431::DEBUG::[happyeyeballs:102] Happy Eyeballs connected to xxx.xxx.xxx.xx1 (eu.xxxxxx.com, port=563) in 22ms
2025-06-17 14:08:12,442::DEBUG::[happyeyeballs:102] Happy Eyeballs connected to xxx.xxx.xxx.xx2 (eu.xxxxxx.com, port=563) in 22ms
2025-06-17 14:08:17,214::DEBUG::[happyeyeballs:102] Happy Eyeballs connected to xxx.xxx.xxx.xx3 (eu.xxxxxx.com, port=563) in 22ms
2025-06-17 14:08:17,214::INFO::[happyeyeballs:205] Quickest IP address for eu.xxxxxx.com (port=563, IPv4 or IPv6): xxx.xxx.xxx.xx1 (eu.xxxxxx.com)
(Don't know if it is OK to show IPs for my providers)
sander
Release Testers
Posts: 9261
Joined: January 22nd, 2008, 2:22 pm

Re: Help me figure out my 10gbit bottleneck with SABnzbd [Docker on Unraid]

Post by sander »

Based on the RTT of 22ms and a window size of 131072 bytes, the formula says the TCP bandwidth should be 5.8 MB/s.

Your 9.6 MB/s is ... higher. Weird. Sanity check: that was with just 1 connection and 1 server active? And not 2 servers with 1 connection each, or 1 server with 2 connections?

Assuming 1 newsserver with 1 connection achieves 9.6 MB/s:

> I currently reach sustained 430MB/s with my setup with a 10GB testfile with 70 connections to one provider.

If you divide 430 by 9.6, you would say you should achieve the same 430 MB/s with ... about 45 connections. Can you check?

> However, the same amount of connections with a real world download on the same server, same number of connections net me about ~250MB/s. Any idea why?

Long shot: Might be a more obscure download, which the newsserver has to fetch from a secondary system.

EDIT:

> (Don't know if it is OK to show IPs for my providers)

Yes, please share. That's public info. Plus the name of the newsserver. (Only your own IP is more privacy sensitive)
zoggy
Release Testers
Posts: 77
Joined: February 8th, 2011, 3:08 pm

Re: Help me figure out my 10gbit bottleneck with SABnzbd [Docker on Unraid]

Post by zoggy »

Yeah, I've seen people do 8Gbps speeds no problem with SAB using a decent NVMe. Once you get past 8Gbps, something like a RAM drive might start to make sense for the bottlenecks (though those fancy PCIe 5.0 NVMe drives might solve that).. but anyways..

Just because you change/set a larger TCP window size doesn't mean it will get used: TCP has to negotiate it on both sides, and the window grows over time; any congestion and it stops growing. You would need to run Wireshark or a packet capture to see what reality looks like for you. Arch and Alpine both have the same kernel TCP window scaling sizes and such (most of Linux is the same); the big callout is that Windows is different, but you can increase its settings to mimic Linux. And RTT matters because you might have a good route to the provider while the return route from them to you sucks.

Your incomplete folder should not be "/data", as that would be your normal user share where your completed downloads land (adjacent to their final resting place for the *arrs), and that can't be an exclusive share (you would break mover and fill up your cache drive).

With Unraid, to do an exclusive share you need to make sure Settings > Global Share Settings > Exclusive Shares is enabled.
Then add a share, let's say you call it 'scratch', with its primary storage set to your cache pool and secondary storage set to none. Make sure it says "exclusive access: yes" (internally this adds a symlink of /mnt/user/<share_name> -> /mnt/<cache>/<share_name>).
Then in your SAB container, add that volume `/mnt/user/scratch` as some path `/scratch`, and in SAB under Config > General > Folders set the incomplete folder to /scratch.

Then go to the SAB homepage, hit the wrench, and refresh folder speeds.
Your incomplete folder speed should be significantly higher than your complete folder speed (as the latter goes through the user share with FUSE, which comes with CPU and IO overhead).
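The volume mapping described above would look roughly like this as a plain docker run (a sketch only; the binhex image name and the share/path names follow the example above and are assumptions, and other flags your setup needs are omitted):

```shell
# Map the exclusive share into the container, bypassing FUSE for incomplete files
docker run -d \
  --net=host \
  -v /mnt/user/scratch:/scratch \
  binhex/arch-sabnzbd
# then in SABnzbd: Config > General > Folders > Temporary Download Folder = /scratch
```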