
Auto-fail jobs that need repairing

Posted: February 27th, 2019, 6:32 pm
by SLR
A feature that accepts only jobs capable of DirectUnpack (auto-failing any job that needs repairing) would be absolutely fantastic, as many downloads don't need repairing.

Avoiding repairs solves the issue of running on slow devices, and it also benefits the fastest devices: unpacking whilst downloading means your files are ready the moment the download finishes. If your software or indexer can supply the client with an alternative nzb, you can almost always find one, saving a lot of time.

Having run SABnzbd on a slow device, I feel this would be hugely beneficial, and I'm sure many users would greatly appreciate such an option.

Thanks for your consideration.

Re: Auto-fail jobs that need repairing

Posted: February 28th, 2019, 4:10 pm
by safihre
I don't think a regular user would be too interested in this, but I understand why a power user would be.

Re: Auto-fail jobs that need repairing

Posted: February 28th, 2019, 10:23 pm
by SLR
Another benefit is that you can access files shortly after they start downloading. I would use DirectUnpack to start playing files, but the problem is that I can't tell in advance whether it's going to work, so I have to delete the download if I want to start playing a file right away.

I'm definitely not as aware of users' interests as you are, but this minor change could be surprisingly useful.

Also, is this possible through an extension script? I might give it a go if it is.

Re: Auto-fail jobs that need repairing

Posted: March 1st, 2019, 4:54 am
by jcfp
SLR wrote:
February 28th, 2019, 10:23 pm
Another benefit is that you can access files shortly after they start downloading. I would use DirectUnpack to start playing files, but the problem is that I can't tell in advance whether it's going to work, so I have to delete the download if I want to start playing a file right away.
You can't know for certain whether a job will need repairing until after all the data has been downloaded, so in a play-while-direct-unpacking scenario this feature won't be of much help. The beneficiaries would be users who pair a very fast, uncapped connection with slow storage.

Re: Auto-fail jobs that need repairing

Posted: March 1st, 2019, 11:25 am
by SLR
jcfp wrote:
March 1st, 2019, 4:54 am
You can't know for certain whether a job will need repairing until after all the data has been downloaded, so in a play-while-direct-unpacking scenario this feature won't be of much help.
Correct me if I'm wrong, but for anyone to benefit from this, the client would need to detect whether the job needs repair within some duration after it starts; otherwise you'd reach the end of the download before the decision to auto-fail is made. This could be a duration parameter next to the option.

Even when jobs do need repairs towards the end, I've found that to be rare. For example, I've never seen DirectUnpack fail to finish unpacking video/audio files once it started shortly after the download began, and there's no practical way to take advantage of that without SABnzbd having this option.

As for fast internet with slow storage, that's definitely a colossal benefit, but my connection only achieves 6 MB/s, and because my compact device is slow, repairing (especially of large files) effectively halves that to 3 MB/s. Slow devices always benefit from shorter job times, especially when a download has many alternative nzbs.

Re: Auto-fail jobs that need repairing

Posted: March 4th, 2019, 9:32 am
by SLR
After revising the idea, I think the parameters should be based around these principles:

To save job time, the decision depends on how much repair needs to be done.

To play during DirectUnpack, it depends on no repairs being required within some duration after starting. Repairs may occasionally crop up later, so perhaps the playable files could be left in the completed directory while the repair runs, with repaired chunks appended as they finish? The file starts playing, and if your internet and device are fast enough, playback can continue.

What are the chances of these being implemented? Though SABnzbd has many useful features, I feel there could be more options for quality-controlling your downloads. Thanks again for your time.

Re: Auto-fail jobs that need repairing

Posted: March 6th, 2019, 2:35 pm
by safihre
That is sort of what happens now: DirectUnpack is started, but paused when a missing article is detected and resumed after repair has finished. The pausing happens on any missing article, without keeping track of where the missing article is.
In any case, that would be a lot of bookkeeping for an already complicated system (the direct unpack logic has far more trickiness to it than you'd ever expect).

Re: Auto-fail jobs that need repairing

Posted: March 7th, 2019, 2:11 pm
by SLR
Does DirectUnpack stay paused until the repair scan at the end? If it were possible (probably tricky) to track and repair during the download, direct unpacking could resume and the file could continue playing.

A duration would need to be set after which the job fails if repairs are required (a longer duration for slower internet/devices); otherwise repairs would halt the playing file. Repairing while downloading could also save total job time!

Sorry if this is too much, I do think people would be interested in it though :)


Here's what the options might look like:

[attached screenshot of the proposed options]

Re: Auto-fail jobs that need repairing

Posted: March 8th, 2019, 2:53 am
by jcfp
SLR wrote:
March 7th, 2019, 2:11 pm
Does DirectUnpack stay paused until the repair scan at the end? If it were possible (probably tricky) to track and repair during the download, direct unpacking could resume and the file could continue playing.
Par2 needs the data from a (mostly) complete download in order to do a repair; i.e. the sum of undamaged blocks + par2-provided redundancy must be at least 100% of the total size of the underlying data. This means one needs to download pretty much the entire nzb before starting a repair would have any chance of succeeding, given that typical redundancy of usenet posts is in the 5-10% range.
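The arithmetic above can be sketched in a few lines of Python. This is a toy illustration only: the function name and block counts are made up, and real par2 works with block counts from the par2 files rather than a redundancy percentage, but the feasibility condition is the same — recovery blocks must cover the damaged/missing blocks.

```python
# Hypothetical sketch of the par2 repair condition: a repair can only
# succeed when the number of recovery blocks is at least the number of
# damaged or still-missing source blocks.

def repair_possible(total_blocks: int, damaged_blocks: int, redundancy: float) -> bool:
    """Return True if the recovery data can cover the damage."""
    recovery_blocks = int(total_blocks * redundancy)
    return recovery_blocks >= damaged_blocks

# A 2000-block post with 10% redundancy carries 200 recovery blocks,
# so at most 10% of the source blocks may be damaged or missing.
print(repair_possible(2000, 150, 0.10))   # True: enough redundancy
print(repair_possible(2000, 250, 0.10))   # False: too much damage
```

With 5-10% redundancy, any early auto-fail decision would have to trigger on damage exceeding that small margin, which is exactly why the job must be nearly complete before repair can even be attempted.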

Re: Auto-fail jobs that need repairing

Posted: March 8th, 2019, 11:12 am
by SLR
It seems recovery blocks are created in various numbers and sizes. If a recovery block is 1/20 of a file, couldn't data amounting to 1/20 of the file size be matched against that recovery block, making par2 redundancy sequential, block by block? If the recovery blocks are too large for that, there wouldn't be much point to this, so perhaps auto-fail in that case?

Unless there are other ways sequential verification can be achieved...

I hope my questions aren't too silly!
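As a toy illustration of why parity-style recovery can't work sequentially: even with a single XOR parity block (far simpler than par2's Reed-Solomon coding, but subject to the same constraint), any one missing source block can be rebuilt — yet only after every other block has arrived. The block contents below are made up.

```python
# One XOR parity block over four source blocks. Recovery of a missing
# block requires ALL the remaining blocks, not just nearby ones, which
# is why repair can't proceed sequentially during the download.
from functools import reduce

def xor_blocks(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

source = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]
parity = reduce(xor_blocks, source)        # computed at posting time

# Block 2 is lost in transit; rebuild it from the parity + all others.
received = [blk for i, blk in enumerate(source) if i != 2]
rebuilt = reduce(xor_blocks, received + [parity])
print(rebuilt == source[2])   # True -- but blocks 0, 1 and 3 were needed first
```

Par2's recovery blocks are the same in spirit: each one is computed over all source blocks, so there is no per-region mapping that would let a client verify or repair just the first 1/20 of a file.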