Hoping someone can help me out.
I'm using sabnzbd with python26 on a Synology DX1511+. The cpuinfo tells me that I have 2 cores with hyperthreading. Everything runs smoothly, including various other packages like Sickbeard, CouchPotato and Headphones.
I recently got my link upgraded to 100Mbit fiber and I am not able to saturate it using the Synology. From OSX or Windows I get the full 11MB/s using 40 SSL connections, but the Synology only gives me about 6MB/s, whereas the Download Redirector does reach the full 11MB/s with the same settings.
I did a bit of digging and the difference seems to be CPU usage. If sab is only downloading (no par/rar), the CPU shows about 20% to 25% load. If par/rar starts, performance stays at 6MB/s, as those tasks also appear to be single-core, each maxing out at 25%, together adding up to a little over 50%.
Also: if you start a fresh download, speed quickly climbs to 7 or 8MB/s for a few seconds, until sab needs to start putting the articles together using yenc. In fact, without the yenc module, performance drops to less than 4MB/s as the sum of the sab processes easily reaches 25% load.
So where does that leave me? It sure seems sab/python is not able to make use of the 2 cores with hyperthreading, as all processes seem to max out at 25% CPU load. I wrote a quick python script which I launched from 4 different shells, and 4 x 25% gave me a fully loaded machine, as it should :-)
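For reference, the test script was nothing fancy — roughly a busy loop like the sketch below (simplified from what I actually ran). One copy pegs a single core at ~25% of total capacity on this box; four copies from four shells load the whole machine.

```python
import time

def burn(seconds):
    """Spin the CPU for roughly `seconds` of wall-clock time.

    Pure-Python arithmetic, so a single interpreter process
    pins exactly one core (25% of a 4-thread machine).
    """
    end = time.time() + seconds
    count = 0
    while time.time() < end:
        count += 1  # busywork to keep the core loaded
    return count

if __name__ == "__main__":
    burn(60)  # run this from 4 shells at once to load all 4 CPU threads
```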
The effect of this single-core usage seems to be worse when you stop using SSL. Download performance increases, but with the increased performance you also see huge fluctuations in download speed, ranging from 4MB/s to 8MB/s, as yenc runs under the same parent and still shares that 25% load.
It's not just sab that shows this. par2 and rar are both incapable of using the additional core and, even worse, seem to have serious issues with the idea of hyperthreading. I wrote a small script that reads /proc/stat to give me the usage in percent of each CPU core (procinfo shows 4 CPUs), and it seems processes are thrown from one core to another, never making optimal use of the processing capacity that is available.
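In case anyone wants to reproduce the measurement, the per-core script boiled down to something like this sketch (a simplified version of mine): sample the cpuN lines of /proc/stat twice and report busy jiffies as a percentage of total jiffies per core.

```python
import time

def read_cpu_times(stat_text):
    """Parse the 'cpuN ...' lines of /proc/stat into {core: (busy, total)} jiffies."""
    times = {}
    for line in stat_text.splitlines():
        fields = line.split()
        # skip the aggregate 'cpu' line, keep cpu0, cpu1, ...
        if fields and fields[0].startswith("cpu") and fields[0] != "cpu":
            vals = [int(v) for v in fields[1:]]
            idle = vals[3]  # 4th value is idle jiffies
            total = sum(vals)
            times[fields[0]] = (total - idle, total)
    return times

def per_core_usage(interval=1.0):
    """Print each core's busy percentage over `interval` seconds."""
    with open("/proc/stat") as f:
        before = read_cpu_times(f.read())
    time.sleep(interval)
    with open("/proc/stat") as f:
        after = read_cpu_times(f.read())
    for cpu in sorted(before):
        busy = after[cpu][0] - before[cpu][0]
        total = after[cpu][1] - before[cpu][1]
        pct = 100.0 * busy / total if total else 0.0
        print("%s: %5.1f%%" % (cpu, pct))

if __name__ == "__main__":
    while True:
        per_core_usage()
```

Watching the output while par2 runs, the ~100% load visibly hops between cores instead of sticking to one.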
I was kinda hoping we have an expert here who might be able to explain this behaviour, or better... tell me if it can be fixed :-)