I have a 5 Gbps connection from my ISP.
Running SAB in a Docker container on Unraid, I get 550-600 MB/s with 80 connections using one provider (~30 ms latency to them).
Upping to 100 connections, or adding another 100 connections from a second provider, doesn't yield any benefit for me beyond higher CPU usage and more thread latency, which can have a knock-on effect, so one provider ends up working well for me. I will say that, for whatever reason, the binhex container (Arch-based) works better for me than linuxserver or hotio (both Alpine-based).

With Unraid and how it does user shares, you do not want your incomplete folder going through FUSE (shfs). Either do a direct mount of your cache pool or set up an exclusive share and point your incomplete folder at that; a sketch follows below. A decent NVMe drive should be fine for multi-gig (even up to ~10 Gbps); just be aware that when sustaining those speeds you may want a pool to handle the I/O overhead of extracting while downloading, or simply don't use Direct Unpack / pause during post-processing.
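As a sketch of what that mapping can look like (share names, container paths, and the compose layout here are just assumptions for illustration):

```yaml
# Hypothetical compose fragment for the binhex SABnzbd container on Unraid.
# /mnt/user/... goes through FUSE (shfs); /mnt/cache/... hits the pool directly.
services:
  sabnzbd:
    image: binhex/arch-sabnzbd
    volumes:
      - /mnt/cache/downloads/incomplete:/incomplete  # direct cache mount, bypasses shfs
      - /mnt/user/downloads/complete:/complete       # completed files can take the share path
```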
To share some info I tell people on Discord when they bring up performance concerns (this is going to be a bit of word vomit, but you will get the idea):
Lower RTT (latency) from you to your provider is better, and any congestion will hurt it (so ISP peering matters). Usenet is TCP traffic, so TCP window scaling and segment sizes all come into play. Read:
https://networklessons.com/cisco/ccnp-r ... ze-scaling
The tl;dr is that lower latency gives each connection a better chance of doing more (the window grows quicker), but any congestion stops that growth or shrinks the window, and then you're dealing with the overhead of retries. So one connection may do 10 MB/s for one person but only 1 MB/s for another.
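A quick back-of-envelope (the window size is an assumed example, not a measured value): each connection tops out around window ÷ RTT, which is why the same setup behaves so differently at different latencies:

```python
# Per-connection ceiling from the bandwidth-delay product: throughput <= window / RTT.
def per_conn_ceiling_mb_s(window_bytes: int, rtt_ms: float) -> float:
    return window_bytes / (rtt_ms / 1000.0) / 1_000_000

WINDOW = 256 * 1024  # 256 KB window, illustrative only
for rtt_ms in (10, 30, 100):
    print(f"{rtt_ms:>3} ms RTT -> {per_conn_ceiling_mb_s(WINDOW, rtt_ms):5.1f} MB/s per connection")
```

At 30 ms that 256 KB window allows ~8.7 MB/s per connection; at 100 ms the exact same window only allows ~2.6 MB/s, and congestion keeps the real window smaller than the maximum anyway.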
You might find 20 connections works well during off-peak hours, but during peak congestion you need double that to get nearly the same speeds. So test at different times and against different servers (ports / address family) to see whether it makes a difference.
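If you want to eyeball your RTT to a provider at different times of day, here's a rough probe (the hostname is a placeholder; 563 is the usual NNTPS port, and a TCP handshake takes about one RTT):

```python
import socket
import time

def tcp_connect_rtt_ms(host: str, port: int = 563, samples: int = 5) -> float:
    """Median TCP handshake time in ms -- a rough stand-in for RTT to the provider."""
    times = []
    for _ in range(samples):
        start = time.perf_counter()
        # A plain TCP connect; the three-way handshake takes roughly one round trip.
        with socket.create_connection((host, port), timeout=5):
            times.append((time.perf_counter() - start) * 1000)
    times.sort()
    return times[len(times) // 2]

# Placeholder hostname -- swap in your provider and try both IPv4 and IPv6.
print(f"{tcp_connect_rtt_ms('news.example-provider.com'):.1f} ms")
```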
More connections is not always better, and with the Python socket limit we can only do 512 anyway (which no one really should be doing). That many connections is a lot of overhead, so look at your router to make sure it's not struggling to manage them all.
Evaluate other Docker maintainers if you're using Docker; the containers are not all the same (different OS, setup, Python version, etc.).
SAB downloads into RAM (the article cache), which gets offloaded to your SAB incomplete folder, so that incomplete folder should be on a fast local drive.
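The relevant knobs live in sabnzbd.ini (the keys are real SABnzbd settings, but the values and container paths here are examples matching the mapping sketched earlier):

```ini
[misc]
cache_limit = 1G            # article cache held in RAM before it is flushed to disk
download_dir = /incomplete  # container path mapped to the fast cache pool
complete_dir = /complete
```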
Drive type matters: SATA is half duplex while SAS is full duplex, and M.2 or U.2, for example, have more bus bandwidth because they sit on PCIe; then which PCIe generation and lanes you use can matter too.
Having a fast DRAM-less SSD may seem fine, but it's like using a glorified USB drive: it works for a single task, but any time you do more than one thing (like download and extract, or copy files and download) it stumbles and needs the CPU to manage it, so iowait skyrockets. You can mask some of that by disabling Direct Unpack or pausing downloading during post-processing (settings below), but overall it's not a great fit for SAB.
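Those two masking switches also live in sabnzbd.ini (same caveat: these keys should match Config → Switches in the UI, but double-check your version):

```ini
[misc]
direct_unpack = 0             # don't extract archives while still downloading
pause_on_post_processing = 1  # pause the queue while post-processing runs
```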
Drive selection:
M.2 > SAS > SATA
DRAM > DRAM-less
SLC > MLC > TLC > QLC
--
Drive speed, your local network speed (from your ISP to your computer), and the path from your Usenet service provider across the global internet to your ISP and then to you are all different things. Many variables are involved, and beyond all that is how you use it all and where that disk I/O is actually happening.
SSDs, while faster than spinning rust, come in various levels of performance. You will find cheap DRAM-less ones that seem okay on a single task like transferring one large file but tank in performance when asked to do more than one thing, and even ones with DRAM are good only until that cache is exhausted, after which you might not be able to sustain speeds. And consumer SSDs are usually connected through SATA, which is half duplex; you'd need SAS to be full duplex.
That's why using the M.2 interface can yield much faster speeds: you're not limited by SATA III or sharing I/O with other devices on the chipset.
Having a 10G LAN won't mean anything if you have a 1G WAN, and vice versa. And even if you have a 10G WAN and LAN, if you have a crappy Usenet provider or ISP you're still going to struggle to get decent speeds: Usenet is TCP traffic, so congestion and retries kill performance because the TCP window can't scale up and the congestion algorithm holds you back.
That's why, when you want decent speeds, you should ideally care more about the RTT from you to your Usenet service provider.
The lower the latency, the better and more consistent things will be. Even so, you still need enough connections, since many providers limit how much one connection can do and use that to gate things; the sketch below puts the two limits together.
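A back-of-envelope (all numbers illustrative; the per-connection cap is an assumption): your aggregate is the smaller of connections × per-connection ceiling and your own line rate, which lines up with ~80 connections saturating a 5 Gbps line and extra connections adding nothing:

```python
# Aggregate throughput is capped by whichever is lower: the sum of the
# per-connection ceilings (provider cap or window/RTT limit) or the line rate.
def aggregate_mb_s(n_conn: int, per_conn_mb_s: float, line_rate_mb_s: float) -> float:
    return min(n_conn * per_conn_mb_s, line_rate_mb_s)

LINE_RATE = 5000 / 8  # 5 Gbps is roughly 625 MB/s before protocol overhead
for n in (20, 40, 80, 160):
    # 8 MB/s per connection is an assumed, illustrative provider cap
    print(f"{n:>3} connections -> {aggregate_mb_s(n, 8.0, LINE_RATE):5.1f} MB/s")
```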