
par2 (multithreaded) fails on very large filesets

Posted: February 16th, 2014, 7:07 pm
by m255
Is anyone else having issues with par2 multithreaded failing on very large filesets on Ubuntu?

Code:

Linux m3 3.11.0-15-generic #25-Ubuntu SMP Thu Jan 30 17:22:01 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

Code:

./par2 r -t+ /mnt/sabnzbd/test/u.par2

Code:

Repair is required.
23 file(s) exist but are damaged.
71 file(s) are ok.
You have 3238 out of 3265 data blocks available.
You have 31 recovery blocks available.
Repair is possible.
You have an excess of 4 recovery blocks.
27 recovery blocks will be used to repair.

Computing Reed Solomon matrix.
Constructing: done.
Solving: done.

Wrote 2000837160 bytes to disk
Wrote 1975559608 bytes to disk
Repair of data file(s) has failed.
I tried the latest version from chuchusoft.com as well as recompiling against the latest libtbb, with the same results. The Windows x64 version works fine on the same filesets, and classic par2 works fine too.

Re: par2 (multithreaded) fails on very large filesets

Posted: February 17th, 2014, 10:16 am
by sander
Are you working with a mounted drive ("/mnt/sabnzbd")? What filesystem is on that drive (check with the "mount" command)? How much free space (check with "df -h")? Post both outputs here.

Does it work with classic par2 on the mounted drive?
Does it work with multithreading par2 on a local, non-mounted drive?
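A minimal sketch of those checks, assuming the download directory is the /mnt/sabnzbd path mentioned in this thread (substitute your own mount point):

```shell
# Show the filesystem type and free space for the mount in question.
# /mnt/sabnzbd is the path from this thread; adjust to your setup.
mount | grep sabnzbd   # filesystem type (cifs, nfs, ext4, ...)
df -h /mnt/sabnzbd     # free space on that mount
```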

Re: par2 (multithreaded) fails on very large filesets

Posted: February 25th, 2014, 2:16 am
by m255
sander wrote: Are you working with a mounted drive ("/mnt/sabnzbd")? What filesystem is on that drive (check with "mount" command)? How much space (check with "df -h")? Post both outputs here.
CIFS. There's a terabyte free.
sander wrote: Does it work with classic par2 on the mounted drive?
Yes.
sander wrote: Does it work with multithreading par2 on a local, non-mounted drive?
No.

I'll dig into the source when I'm in the mood and figure out what's wrong. I might also test it on older Linux kernels. It seems odd that no one else is having this issue.

Re: par2 (multithreaded) fails on very large filesets

Posted: February 25th, 2014, 7:09 am
by sander
CIFS? Brrrr: it's not only free space that matters, but also the maximum file size (and LAN performance).

Can you create a 6666 MB file on the CIFS drive using this command:

Code:

time dd if=/dev/zero of=/mnt/blabla/cifs/output.dat  bs=1M  count=6666
Post back the result. If it works, the output should look something like this:

Code:

$ time dd if=/dev/zero of=output.dat  bs=1M  count=555
555+0 records in
555+0 records out
581959680 bytes (582 MB) copied, 20.4877 s, 28.4 MB/s

real    0m21.712s
user    0m0.008s
sys     0m1.064s
$
I really wonder what happens with the 6666 MB file on your CIFS drive.

Re: par2 (multithreaded) fails on very large filesets

Posted: February 25th, 2014, 7:48 pm
by m255
Sure, I'll post the output, just for brags =)

Code:

$ time dd if=/dev/zero of=/mnt/sabnzbd/output.dat  bs=1M  count=6666
6666+0 records in
6666+0 records out
6989807616 bytes (7.0 GB) copied, 22.5625 s, 310 MB/s

real    0m23.316s
user    0m0.010s
sys     0m6.394s
The back end is Solaris/ZFS, over a 10GbE link, obviously.
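Worth noting: in the original failure, par2 reported writing 1975559608 bytes after first writing 2000837160, so one hedged follow-up check is whether a multi-gigabyte file written to the mount actually lands on disk with the size the writer reported. A sketch, with the path from this thread and a hypothetical filename:

```shell
# Write a >4 GiB test file to the CIFS mount and confirm the on-disk
# size matches what dd reported; a mismatch would hint at large-file
# truncation somewhere in the write path. Path is from this thread.
dd if=/dev/zero of=/mnt/sabnzbd/bigtest.dat bs=1M count=5000
stat -c %s /mnt/sabnzbd/bigtest.dat   # expect 5242880000
rm /mnt/sabnzbd/bigtest.dat
```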