2.0.0 Issues

Report & discuss bugs found in SABnzbd
Forum rules
Help us help you:
  • Are you using the latest stable version of SABnzbd? Downloads page.
  • Tell us what system you run SABnzbd on.
  • Adhere to the forum rules.
  • Do you experience problems during downloading?
    Check your connection in Status and Interface settings window.
    Use Test Server in Config > Servers.
    We will probably ask you to do a test using only basic settings.
  • Do you experience problems during repair or unpacking?
    Enable +Debug logging in the Status and Interface settings window and share the relevant parts of the log here using [ code ] sections.
NunyaBzness
Newbie
Posts: 16
Joined: April 24th, 2017, 10:23 am

2.0.0 Issues

Post by NunyaBzness »

Following up on my post prior to the ransomware...

I had an issue at the time with failed SQL writes and loss of connection to the web interface. The base system is an older i7 machine running Server 2016. My SABnzbd was, at the time, running on a Windows Server 2016 VM alongside CouchPotato, Sonarr, and Plex. Per your suggestion, I addressed the CPU utilization by moving Plex to its own VM. All three remaining VMs share a single virtual disk housed on a fast SSD. This has worked very nicely to solve the originally observed issue. However, I believe there is another underlying issue that should be addressed:

1. The PAR2 process is incredibly "loud" in terms of CPU and disk "thrashing". Compared to similar tools (e.g., QuickPar), it requires significantly more resources to repair downloads that contain lost articles. This system is running the 64-bit variant. I can pull together NZBs for you to use for testing, if you're interested.

2. For folks like me who use a "small" SSD to download files before depositing them on "large" long-term storage, the current method of handling the post-download stage (e.g., "History" in the UI) is far from optimal. When multiple files require repair, the entire decoding queue slows to a crawl and a tremendous amount of disk space is consumed on (in my case) fast, limited-capacity media. I'd suggest splitting the pipeline into two separate queues: one for repair and one for decoding error-free content. This would significantly decrease the storage required for SABnzbd "temporary" files and improve system throughput.
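The two-queue idea above can be sketched roughly as follows. This is a hypothetical illustration, not SABnzbd's actual post-processing code; the `Job` type and queue names are invented for the example. The point is simply that jobs which verify clean never wait behind jobs that need repair:

```python
# Hypothetical sketch of the suggested split pipeline: jobs whose par2
# verification found damage go to a dedicated repair queue, while clean
# jobs go straight to the unpack/decode queue and are never blocked.
from dataclasses import dataclass
from queue import Queue

@dataclass
class Job:
    name: str
    damaged: bool  # True if verification found missing/broken blocks

repair_q: "Queue[Job]" = Queue()   # slow path: par2 repair first
unpack_q: "Queue[Job]" =Ueue() if False else Queue()  # fast path: decode immediately

def route(job: Job) -> None:
    """Dispatch a finished download to the appropriate queue."""
    (repair_q if job.damaged else unpack_q).put(job)

for j in (Job("show.s01e01", damaged=False), Job("movie.2017", damaged=True)):
    route(j)
```

Separate worker threads could then drain each queue independently, so a long repair on the slow path never stalls error-free jobs on the fast path.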

Thanks for your consideration.
safihre
Administrator
Posts: 5338
Joined: April 30th, 2015, 7:35 am
Contact:

Re: 2.0.0 Issues

Post by safihre »

You can control how intensive the par2 repair is by specifying Extra par2 parameters:
https://sabnzbd.org/wiki/configuration/ ... multi-core
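For reference, a minimal sketch of what that setting does, assuming a multicore par2 build: SABnzbd appends whatever you enter in Extra par2 parameters to the par2 command line it runs, so a value like -t+ (mentioned later in this thread) would yield something roughly like:

```
# Hypothetical command line; the exact binary name and accepted -t flags
# depend on which par2 variant your SABnzbd installation uses.
par2 repair -t+ "job.par2"
```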

You can enable Pause download during post processing to make sure post-processing doesn't fill up the small drive.

I actually tested how my system (i5, fast SSD) would handle a repair and an unpack (of an undamaged download) at the same time, and it wasn't very good. Both processes rely heavily on CPU and disk activity, causing the system to become unusable.
NZBGet does support this feature, in case you really want it.
If you like our support, check our special newsserver deal or donate at: https://sabnzbd.org/donate
NunyaBzness
Newbie
Posts: 16
Joined: April 24th, 2017, 10:23 am

Re: 2.0.0 Issues

Post by NunyaBzness »

Thanks for the response!

I've been running par2 with -t0 since I started using SABnzbd several years ago. When I observed the issues I described above, I also tried -t+. Neither setting seems to have improved the time required to verify/repair.

The real issue is verification time. Several files that ultimately failed took tens of minutes to complete verification, perhaps as a result of switching to -t+. I would have expected a failure to surface fairly quickly. The actual repair completes in a very reasonable timeframe. I thought -t+ would be the better setting, since "thrashing" reads/writes on an SSD is orders of magnitude better than on rotating media, as long as you stay within the throughput/IOPS limits of the device and interface.

Pausing per file isn't a great trade-off, because it lowers overall throughput, but it may be necessary. BTW, I have also enabled the "Download all par2 files" switch in post-processing.