Preallocate a File

NvrBst
Newbie
Posts: 3
Joined: September 2nd, 2013, 12:30 am

Preallocate a File

Post by NvrBst »

Is there an option somewhere to pre-allocate the entire file when it starts downloading instead of constantly growing the file?

My last usenet client had this option, I'm pretty sure, and it makes things much easier if, for example, I'm missing a few blocks (with no pars): I can simply join a torrent and have it fill in the missing sections (the way it is now, joining a torrent only matches up to the first missing part).

Even with par2 it's sometimes nice: if I'm, say, 20 blocks short (with pars), I can grab the most corrupt file, join a torrent, get it to 100%, and then finish the repair in SABnzbd.


Or maybe there is something already in place that helps with these situations? Thanks.
shypike
Administrator
Posts: 19774
Joined: January 18th, 2008, 12:49 pm

Re: Preallocate a File

Post by shypike »

The size of a file cannot be accurately determined from the NZB file.
The article size listed in the NZB is not the real size of the data block.
So pre-allocation isn't very useful.
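For what it's worth, the sizes an NZB does carry can at least be summed into an estimate. Below is a minimal, hypothetical Python sketch (not SABnzbd code; the helper name is made up) that totals the per-segment `bytes` attributes of an NZB file. The result over-estimates the decoded file size, because those values are article sizes including yEnc encoding overhead and headers, which is exactly why the real size can't be determined accurately:

```python
# Sketch: estimate file sizes from an NZB (hypothetical helper, not SABnzbd code).
# The <segment bytes="..."> values are *article* sizes, which include yEnc
# encoding overhead and headers, so the sum only approximates the real file size.
import xml.etree.ElementTree as ET

NZB_NS = "{http://www.newzbin.com/DTD/2003/nzb}"

def estimate_sizes(nzb_path):
    sizes = {}
    root = ET.parse(nzb_path).getroot()
    for f in root.iter(NZB_NS + "file"):
        subject = f.get("subject", "")
        total = sum(int(seg.get("bytes", 0))
                    for seg in f.iter(NZB_NS + "segment"))
        sizes[subject] = total  # over-estimate: yEnc adds a few percent
    return sizes
```

So at best a client can pre-allocate an approximate size and adjust later, which is presumably what any client doing this has to accept.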
NvrBst
Newbie
Posts: 3
Joined: September 2nd, 2013, 12:30 am

Re: Preallocate a File

Post by NvrBst »

Just to make sure I'm not crazy, I loaded up my old usenet client; it definitely pre-allocates (and does it very well, since I've used it for ~8 years now and have always been able to join torrents against incomplete files at a very high %).

So if you can figure out how NewsLeecher does it, it'd be a very good feature for SABnzbd (at least, it's the only feature so far that makes me think "okay, need to try other usenet clients or consider going back to my old version of NewsLeecher"). Keep in mind I didn't have any headers downloaded in NewsLeecher, and only used an .nzb in both cases.

If you need more information: I downloaded the same rar-name.rar with SABnzbd and NewsLeecher, and this is what happens:
* With SABnzbd, "rar-name.rar" shows up as "0 KB", grows to "39,766 KB", and stops.
* With NewsLeecher, it shows up in the temporary folder as "1039103092.rar" at "244,141 KB". After a bit it moves to the download folder as "RAR-NAME.RAR" at "244,141 KB". (NewsLeecher uppercases any file that doesn't complete 100%.)


Also keep in mind, I'm not saying switch SABnzbd's default behaviour; having an option somewhere in the configuration would be a great feature (some people may not care about manually fixing files externally with things like torrents and would rather save some HDD space).

Another reason for pre-allocation is fragmentation. The current scheme will create heavily fragmented files, while pre-allocating the entire file (or 99-100% of it) ahead of time will usually get a single sequential extent; people not using an SSD would see an I/O performance increase with pre-allocation. In other words, it could really help people on magnetic HDDs who run into I/O bottlenecks in SABnzbd with their 50+ MBit connection speeds.
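The pre-allocation step itself is cheap on modern systems. A minimal sketch of what such an option could do (illustrative only, assuming an approximate target size is known; this is not how SABnzbd works today):

```python
# Sketch: pre-allocate a download file up front (illustrative, not SABnzbd code).
import os

def preallocate(path, size):
    with open(path, "wb") as fh:
        if hasattr(os, "posix_fallocate"):   # Linux/POSIX: reserve real blocks
            os.posix_fallocate(fh.fileno(), 0, size)
        else:                                # portable fallback: sparse file
            fh.truncate(size)
```

Note the distinction: `posix_fallocate` asks the filesystem to reserve the space immediately, which is what lets the allocator pick a contiguous extent and avoid fragmentation; a plain `truncate` only creates a sparse file of the right length, so the fragmentation benefit would depend on using the former where available.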

You should really think about it for a future release.
shypike
Administrator
Posts: 19774
Joined: January 18th, 2008, 12:49 pm

Re: Preallocate a File

Post by shypike »

NL can only guess, but maybe it's good at it.
The fragmentation argument is not valid: files are written in one go, not as articles come in.
I have my doubts about whether there's a significant number of people actually repairing Usenet downloads using torrents.
NvrBst
Newbie
Posts: 3
Joined: September 2nd, 2013, 12:30 am

Re: Preallocate a File

Post by NvrBst »

The I/O performance increase could still be valid. Downloading thousands of articles to the HDD and then merging them at the end needs more I/O than allocating a single file and filling in the data as it comes in.

AKA roughly 2x the disk access in the first case. However, it looks like the feature would probably be easier to implement by editing the final "copy + merge" step, which would mean no I/O performance increase.
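The "fill in the data as it comes in" idea could be sketched like this (a hypothetical scheme, not SABnzbd's implementation; function names are made up). Each article is written exactly once, directly at its offset in a file reserved at full size, instead of being saved to a temp file and copied again during a merge step:

```python
# Sketch: assemble a file by writing articles at their offsets into a
# pre-sized file (hypothetical scheme, not SABnzbd code).

def assemble(path, total_size, articles):
    """articles: iterable of (offset, data) pairs, in *any* order."""
    with open(path, "wb") as fh:
        fh.truncate(total_size)          # reserve the full length up front
    with open(path, "r+b") as fh:        # "r+b" preserves existing contents
        for offset, data in articles:    # each byte hits the disk once
            fh.seek(offset)
            fh.write(data)
```

Under this scheme out-of-order articles are no problem, since the seek position comes from the article's place in the file rather than from append order.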

EDIT: I also have my doubts about the "written in one go". If I sit in the "incomplete" folder hitting F5, I can see a file grow like so:
10000
15123
18000
20444
30000
32055
...
195313

But looking at it closer, the file does seem to grow a lot faster than my download speed; still, it's definitely not a single "0 > end size" copy. AKA I think the program is doing thousands of small I/O appends very quickly once it thinks the file is finished (versus appending in RAM and writing it out in one go).

AKA I'm pretty sure the current scheme still causes as much fragmentation as a "grow a single file as articles come in" scheme would, and for sure more than a single pre-allocated file ever would.