par2 (multithreaded) fails on very large filesets

Support for the Debian/Ubuntu package, created by JCFP.
m255
Newbie
Posts: 3
Joined: February 16th, 2014, 7:00 pm

par2 (multithreaded) fails on very large filesets

Post by m255 »

Is anyone else having issues with multithreaded par2 failing on very large filesets on Ubuntu?

Code: Select all

Linux m3 3.11.0-15-generic #25-Ubuntu SMP Thu Jan 30 17:22:01 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

Code: Select all

./par2 r -t+ /mnt/sabnzbd/test/u.par2

Code: Select all

Repair is required.
23 file(s) exist but are damaged.
71 file(s) are ok.
You have 3238 out of 3265 data blocks available.
You have 31 recovery blocks available.
Repair is possible.
You have an excess of 4 recovery blocks.
27 recovery blocks will be used to repair.

Computing Reed Solomon matrix.
Constructing: done.
Solving: done.

Wrote 2000837160 bytes to disk
Wrote 1975559608 bytes to disk
Repair of data file(s) has failed.
I tried using the latest version from chuchusoft.com as well as recompiling against the latest libtbb; same results. The Windows x64 version works fine on the same filesets, and par2 classic works fine too.
sander
Release Testers
Posts: 9264
Joined: January 22nd, 2008, 2:22 pm

Re: par2 (multithreaded) fails on very large filesets

Post by sander »

Are you working with a mounted drive ("/mnt/sabnzbd")? What filesystem is on that drive (check with "mount" command)? How much space (check with "df -h")? Post both outputs here.

Does it work with classic par2 on the mounted drive?
Does it work with multithreading par2 on a local, non-mounted drive?
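The checks above can be gathered in one pass. A minimal sketch; /mnt/sabnzbd is the path from the original post, so substitute your own mount point:

```shell
# Collect the filesystem type and free space for the mount in question.
# /mnt/sabnzbd is taken from the original post -- substitute your own path.
MOUNTPOINT=/mnt/sabnzbd
mount | grep "$MOUNTPOINT" || echo "$MOUNTPOINT is not in the mount table"
df -h "$MOUNTPOINT" 2>/dev/null || echo "no filesystem mounted at $MOUNTPOINT"
```

The `mount` line shows the filesystem type (cifs, nfs, ext4, ...) and mount options; `df -h` shows total and free space on that mount.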
m255
Newbie
Posts: 3
Joined: February 16th, 2014, 7:00 pm

Re: par2 (multithreaded) fails on very large filesets

Post by m255 »

sander wrote:Are you working with a mounted drive ("/mnt/sabnzbd")? What filesystem is on that drive (check with "mount" command)? How much space (check with "df -h")? Post both outputs here.
CIFS. There's a terabyte free.
sander wrote: Does it work with classic par2 on the mounted drive?
Yes.
sander wrote: Does it work with multithreading par2 on a local, non-mounted drive?
No.

I'll dig into the source when I'm in the mood and figure out what's wrong. I might also test it on older Linux kernels. It seems odd that no one else is having this issue.
sander
Release Testers
Posts: 9264
Joined: January 22nd, 2008, 2:22 pm

Re: par2 (multithreaded) fails on very large filesets

Post by sander »

CIFS? Brrrr: it's not only about free space, but also about the maximum file size (and LAN performance).

Can you create a 6666 MB file on the CIFS drive using this command:

Code: Select all

time dd if=/dev/zero of=/mnt/blabla/cifs/output.dat  bs=1M  count=6666
Post back the result. If it works, it should look something like this:

Code: Select all

$ time dd if=/dev/zero of=output.dat  bs=1M  count=555
555+0 records in
555+0 records out
581959680 bytes (582 MB) copied, 20.4877 s, 28.4 MB/s

real    0m21.712s
user    0m0.008s
sys     0m1.064s
$
I really wonder what happens with the 6666 MB file on your CIFS drive.
m255
Newbie
Posts: 3
Joined: February 16th, 2014, 7:00 pm

Re: par2 (multithreaded) fails on very large filesets

Post by m255 »

Sure, I'll post the output, just for brags =)

Code: Select all

$ time dd if=/dev/zero of=/mnt/sabnzbd/output.dat  bs=1M  count=6666
6666+0 records in
6666+0 records out
6989807616 bytes (7.0 GB) copied, 22.5625 s, 310 MB/s

real    0m23.316s
user    0m0.010s
sys     0m6.394s
The back end is Solaris/ZFS, over a 10 GbE link, obviously.
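As a sanity check, the dd numbers above are self-consistent: throughput is simply bytes copied divided by elapsed time, with dd reporting decimal megabytes (10^6 bytes). A quick sketch using the values from the output above:

```shell
# Recompute dd's reported throughput from its own numbers:
# 6989807616 bytes copied in 22.5625 s, in decimal MB/s (10^6 bytes).
awk 'BEGIN { printf "%.0f MB/s\n", 6989807616 / 22.5625 / 1000000 }'
```

This reproduces the 310 MB/s figure dd printed, which rules out a misread of the output.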